
CALSOFT EBOOK

A deep-dive on
Kubernetes for Edge
Index

1. Introduction ............................................... 3
2. Why Kubernetes for Edge .................................... 4
3. Approaches to Deploying Kubernetes for Edge ................ 5
4. Platforms Supporting Kubernetes for Edge ................... 7
5. Case Studies ............................................... 13
6. Canonical Guide: Deploying Kubernetes at the Edge .......... 14
7. Kubernetes IoT Edge Working Group .......................... 16


Introduction

Adoption of Kubernetes in data centers and the cloud has been remarkable since its release in 2014. From orchestrating lightweight
application containers, Kubernetes has emerged as an enabler for handling and scheduling diverse IT workloads, from virtualised
network functions to AI/ML jobs backed by GPU hardware.

In just a few years, the core capabilities of Kubernetes have improved dramatically as new technologies introduced ever more diverse
IT workloads. Following this surge in the data center, Kubernetes is now being considered for edge infrastructure, which has limited
resource capacity and relies on a persistent connection to a central cloud for processing the data generated by IoT devices.

Kubernetes has become the de facto standard for enterprises scaling up their IT infrastructure to achieve cloud-native
capabilities.

Edge-based infrastructure involves various challenges around resource and workload management. There may be thousands of edge
and far-edge nodes to manage within a short period. Strong centralized control from the cloud, consistent security policies, and
extremely low latency are baseline expectations for edge infrastructure deployed by enterprises and telecom service providers alike.
Let us understand why and how Kubernetes can help overcome these challenges.

Figure: Kubernetes Architecture from Wikipedia

Source: https://containerjournal.com/2019/08/01/powering-edge-with-kubernetes-a-primer/

www.calsoftinc.com 3
Why Kubernetes for Edge

Edge nodes add an additional layer to the IT infrastructure that enterprises and service providers run across on-premise and
cloud data centers. It is therefore imperative for admins to manage workloads at the edge in the same dynamic, automated way
they do on-premise or in the cloud.

Additionally, the overall architecture contains different types of computing hardware and software applications. Kubernetes
comes to the rescue because it is infrastructure-agnostic: it can manage a diverse set of workloads across different
compute resources seamlessly.

For such edge-based environments, Kubernetes can orchestrate and schedule workloads all the way from the cloud to the edge data
center.

Also, Kubernetes can help manage and deploy edge devices along with their cloud configurations.

In a typical edge and IoT architecture, the analytics and control plane services reside in the cloud. Since operations and data flow
from cloud to edge to devices and back, a common operational paradigm is needed for automated processing and execution of
instructions. Kubernetes provides this common paradigm across all network deployments, so that policies and rule-sets can be
applied to the overall infrastructure. Policies can also be narrowed down to specific channels or edge nodes based on particular
configuration requirements. Kubernetes provides horizontal scaling for infrastructure and applications, enables high availability,
offers a common platform for rapid innovation from cloud to edge and, most importantly, readies edge nodes for low-latency
application access from IoT devices.
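As a rough sketch of how policies can be scoped to specific edge nodes, label-selector matching of the kind Kubernetes uses can be illustrated in a few lines. The node names and labels below are invented; this is not Kubernetes code:

```python
# Minimal sketch of label-selector matching, loosely modeled on how
# Kubernetes narrows policies to specific nodes. Names are illustrative.

def matches(selector: dict, labels: dict) -> bool:
    """A node matches when every key/value in the selector is present."""
    return all(labels.get(k) == v for k, v in selector.items())

def select_nodes(nodes: list, selector: dict) -> list:
    """Return the names of nodes a policy with this selector applies to."""
    return [n["name"] for n in nodes if matches(selector, n["labels"])]

nodes = [
    {"name": "edge-1", "labels": {"tier": "edge", "region": "eu"}},
    {"name": "edge-2", "labels": {"tier": "edge", "region": "us"}},
    {"name": "core-1", "labels": {"tier": "core", "region": "eu"}},
]

# A low-latency policy scoped to edge nodes in the EU region only.
print(select_nodes(nodes, {"tier": "edge", "region": "eu"}))
```

A policy with an empty selector would apply everywhere, which mirrors how a rule-set can cover the overall infrastructure or be narrowed per node.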

Another critical requirement for edge environments is high availability of the services deployed there, a feature enterprises will
look for when deciding to use Kubernetes for edge orchestration. Kubernetes ships with monitoring and tracking through its APIs
and maintains interconnection among all cluster nodes. Moreover, containers are fast to switch between and highly
resilient.

Figure: Azure IoT Edge approach for implementing Kubernetes for edge orchestration

Image source: https://thenewstack.io/tutorial-kubernetes-for-orchestrating-iot-edge-deployments/

Source: https://containerjournal.com/2019/08/01/powering-edge-with-kubernetes-a-primer/

Approaches to Deploying Kubernetes for Edge

The basic Kubernetes architecture looks like this:

Figure – Kubernetes Architecture

A Kubernetes cluster consists of a Master and Nodes. The Master is responsible for exposing the API to developers and for
scheduling deployments across the whole cluster, including its nodes. Nodes run the container runtime environment, which can be
Docker or rkt, along with the Kubelet, which communicates with the Master, and pods, which are collections of one or more
containers. A node can be a virtual machine in the cloud.
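The master/node/pod relationship described above can be sketched as a toy model. These are illustrative classes with a deliberately trivial scheduling policy, not the real API server or scheduler:

```python
# Toy model of the cluster described above: a master scheduling pods onto
# nodes, each node running a kubelet-managed set of pods. Illustrative only.

class Node:
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity   # how many pods this node can run
        self.pods = []             # pods the (toy) kubelet is running

class Master:
    """Exposes a 'schedule' call, as the real API server + scheduler do."""
    def __init__(self, nodes):
        self.nodes = nodes

    def schedule(self, pod: str) -> str:
        # Pick the node with the most free capacity (trivial policy).
        node = max(self.nodes, key=lambda n: n.capacity - len(n.pods))
        if node.capacity - len(node.pods) <= 0:
            raise RuntimeError("no capacity left in the cluster")
        node.pods.append(pod)
        return node.name

master = Master([Node("node-a", 2), Node("node-b", 1)])
print(master.schedule("web"))   # lands on node-a (most free slots)
print(master.schedule("cache"))
```

The real scheduler weighs many more factors (resource requests, affinity, taints), but the shape is the same: the master decides, the node's kubelet runs the pod.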

For edge-based scenarios, the possible approaches are as follows.

1. Whole clusters at the edge. In this approach, the whole Kubernetes cluster is deployed within edge nodes. This option is ideal
when the edge node has limited resources (perhaps a single server machine) and you do not want to spend more of them on
the control plane. K3s is the reference architecture suited to this type of solution.

K3s is wrapped in a simple package that reduces the dependencies and steps needed to run a production Kubernetes cluster.
This makes the cluster lightweight enough to run on low-capacity edge nodes.

2. The second approach is taken from the KubeEdge architecture, which is based on Huawei's IoT edge platform, Intelligent Edge
Fabric (IEF). Here, the control plane resides in the cloud (either a public cloud or a private data center) and manages the edge
nodes that run the containers and resources. This architecture supports the different hardware present at the edge and
optimizes edge resource utilization, which can significantly reduce setup and operating costs for an edge cloud deployment.

3. The third option is hierarchical cloud + edge, in which Virtual Kubelet is used as the reference architecture. A Virtual Kubelet
resides on the cloud side and holds an abstraction of the nodes and pods deployed at the edge, giving the cloud supervisory
control over the edge nodes and their containers. Using Virtual Kubelets enables flexible resource consumption in an edge-
based architecture.
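The third approach can be sketched as follows: the control plane talks to what looks like an ordinary node, while pod operations are forwarded to remote edge resources. All class and method names are invented for illustration and do not reflect the actual Virtual Kubelet provider interface:

```python
# Sketch of the Virtual Kubelet pattern: the control plane talks to what
# looks like an ordinary node, while pod operations are forwarded to a
# remote edge provider. All names here are illustrative.

class EdgeProvider:
    """Stands in for a fleet of edge devices that actually run containers."""
    def __init__(self):
        self.running = {}

    def create(self, pod: str):
        self.running[pod] = "Running"

    def status(self, pod: str) -> str:
        return self.running.get(pod, "Unknown")

class VirtualNode:
    """Registers as a node; delegates pod lifecycle to the edge provider."""
    def __init__(self, name: str, provider: EdgeProvider):
        self.name = name
        self.provider = provider

    def run_pod(self, pod: str):
        self.provider.create(pod)          # forwarded, not run locally

    def pod_status(self, pod: str) -> str:
        return self.provider.status(pod)   # supervisory view from the cloud

node = VirtualNode("virtual-edge-0", EdgeProvider())
node.run_pod("sensor-agent")
print(node.pod_status("sensor-agent"))
```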

Using Kubernetes for edge and IoT use cases also brings challenges related to the infrastructure, the control plane, and the data plane.
Source: https://containerjournal.com/2019/08/01/powering-edge-with-kubernetes-a-primer/

Platforms Supporting Kubernetes for Edge

KubeEdge
KubeEdge is an open source system extending native containerized application orchestration and device management to hosts at the
Edge. It is built upon Kubernetes and provides core infrastructure support for networking, application deployment and metadata
synchronization between cloud and edge. It also supports MQTT and allows developers to author custom logic and enable resource
constrained device communication at the Edge. KubeEdge consists of a cloud part and an edge part.
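KubeEdge's MQTT support means constrained devices publish and subscribe on hierarchical topics. As a quick illustration of how MQTT topic filters behave (a self-contained sketch, not KubeEdge's implementation), the `+` and `#` wildcards match like this:

```python
# Illustrative MQTT topic-filter matching, similar in spirit to the topic
# subscriptions constrained devices use over MQTT. Not KubeEdge code.

def topic_matches(filt: str, topic: str) -> bool:
    """Match a topic against a filter with '+' (one level) and '#' (rest)."""
    f_parts, t_parts = filt.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                       # '#' swallows the remaining levels
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:   # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("devices/+/telemetry", "devices/door-7/telemetry"))  # True
```

A custom-logic module at the edge would subscribe with a filter like `devices/#` and react locally to whatever the devices publish.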

Advantages
 Edge Computing
With business logic running at the Edge, much larger volumes of data can be secured & processed locally where the data is
produced. Edge nodes can run autonomously which effectively reduces the network bandwidth requirements and
consumptions between Edge and Cloud. With data processed at the Edge, the responsiveness is increased dramatically and
data privacy is protected.
 Simplified development
Developers can write regular http or mqtt based applications, containerize them, and run them anywhere - either at the
Edge or in the Cloud - whichever is more appropriate.
 Kubernetes-native support
With KubeEdge, users can orchestrate apps, manage devices and monitor app and device status on Edge nodes just like a
traditional Kubernetes cluster in the Cloud. Locations of edge nodes are transparent to customers.
 Abundant applications
It is easy to get and deploy existing complicated machine learning, image recognition, event processing and other high
level applications to the Edge.

Source: https://github.com/kubeedge/kubeedge

K3s
K3s is a lightweight Kubernetes distribution designed for developers and operators looking for a way to run Kubernetes in resource-
constrained environments. Rancher Labs launched the project to address the increasing demand for small, easy to manage
Kubernetes clusters running on x86, Arm®v7-A and 64-bit Armv8-A processors in edge computing environments. K3s provides a
distribution of Kubernetes that requires less than 512 MB of RAM, and is ideally suited for edge use cases. There is significant
demand for K3s among organizations in the retail, finance, telco, utility and manufacturing sectors.


How K3s reduces the size of Kubernetes

To reduce the memory required to run Kubernetes, the engineering team at Rancher Labs developing K3s focused on four primary
changes:

 Removing old and non-essential code: K3s does not include any alpha functionality that is disabled by default, or old features
that have been deprecated (such as old API groups) but are still shipped in a standard deployment. Rancher also removed
all non-default admission controllers, in-tree cloud providers, and storage drivers, opting instead to allow users to add in any
drivers they need.

 Consolidating the packaging of running processes: To conserve RAM, Rancher combined the processes that typically run on
a Kubernetes management server into a single process. Rancher has also combined the Kubelet, kubeproxy and flannel agent
processes that run on a worker node into a single process.

 Using containerd instead of Docker as the runtime container engine: By substituting containerd for Docker, Rancher was
able to cut the runtime footprint significantly, removing functionality like libnetwork, swarm, Docker storage drivers and
other plugins.

 Introducing SQLite as an optional datastore in addition to etcd: Rancher added SQLite as an optional datastore in K3s to
provide a lightweight alternative to etcd, with both a lower memory footprint and dramatically simplified
operations.

Source: https://rancher.com/press/2019-02-26-press-release-rancher-labs-introduces-lightweight-distribution-kubernetes-simplify/

Kubespray
Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS and Kubernetes
cluster configuration management tasks. It has added support for the bare-metal cloud Packet, which allows Kubernetes clusters to be
deployed across next-generation edge locations, including cell-tower-based micro datacenters.

Source: Installing Kubernetes with Kubespray & Bringing Kubernetes to the bare-metal edge

Azure IoT Edge


Azure IoT Edge moves cloud analytics and custom business logic to devices so that your organization can focus on business insights
instead of data management. Scale out your IoT solution by packaging your business logic into standard containers, then you can
deploy those containers to any of your devices and monitor it all from the cloud.

Analytics drives business value in IoT solutions, but not all analytics needs to be in the cloud. If you want to respond to emergencies
as quickly as possible, you can run anomaly detection workloads at the edge. If you want to reduce bandwidth costs and avoid
transferring terabytes of raw data, you can clean and aggregate the data locally then only send the insights to the cloud for analysis.
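The "clean and aggregate locally, send only the insights" pattern can be sketched in a few lines of Python. The field names and alarm threshold below are invented for illustration; in Azure IoT Edge this logic would live inside a containerized module:

```python
# Sketch of local clean-and-aggregate at the edge: raw telemetry is reduced
# to one small insight record before anything is sent to the cloud.
# Field names and the alarm threshold are invented for illustration.

def summarize(readings: list, alarm_above: float) -> dict:
    """Reduce raw readings to a compact insight for upload."""
    anomalies = [r for r in readings if r > alarm_above]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": len(anomalies),   # detected at the edge, immediately
    }

raw = [21.0, 21.5, 22.0, 80.0, 21.2]   # e.g. local temperature samples
print(summarize(raw, alarm_above=50.0))
```

Instead of shipping every sample upstream, only the summary record crosses the network, which is exactly the bandwidth saving the paragraph above describes.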

Azure IoT Edge is made up of three components:

 IoT Edge modules are containers that run Azure services, third-party services, or your own code. Modules are deployed to IoT
Edge devices and execute locally on those devices.

 The IoT Edge runtime runs on each IoT Edge device and manages the modules deployed to each device.

 A cloud-based interface enables you to remotely monitor and manage IoT Edge devices.

Source: What is Azure IoT Edge?

Mirantis Cloud Platform Edge (MCP Edge)


MCP Edge delivers low footprint, low latency and high throughput edge infrastructure that is based on open standards and can be
centrally managed. The software product integrates Kubernetes, OpenStack and Mirantis’ flexible infrastructure manager,
DriveTrain, empowering operators to deploy a combination of container, VM and bare metal points of presence connected by a
unified management plane.

Source: https://www.mirantis.com/company/press-center/company-news/mirantis-launches-virtual-appliance-for-building-kubernetes-based-edge-clouds/

Akraino Edge Stack
Akraino Edge Stack is an open source software stack that improves the state of edge cloud infrastructure for carrier, provider, and
IoT networks.

Akraino Edge Stack offers new levels of flexibility to scale edge cloud services quickly, to maximize the applications or subscribers
supported on each server, and to help ensure the reliability of systems that must be up at all times.

Akraino Edge Stack also provides processing power closer to endpoint customer devices to meet application latency requirements of
less than ~20 milliseconds.

Akraino has a Kubernetes-Native Infrastructure (KNI) Blueprint Family, which focuses on leveraging best practices and tools from
the Kubernetes community to declaratively manage edge computing stacks at scale, with a consistent, uniform user experience
from the infrastructure up to the services, and from developer environments to production environments on bare metal or public
cloud.

All blueprints in this family share the following characteristics:

 They implement the Kubernetes community’s Machine API, allowing users to declaratively configure and consistently deploy
and lifecycle manage Kubernetes clusters no matter whether on-prem or on public cloud, on VMs or on bare metal, at the
edge or at the core.

 Leverage the community's Operator Framework for automated and secure lifecycle management of applications in the edge
computing stack. Operators allow applications to be lifecycle-managed as Kubernetes resources, in an event-driven manner, and
fully RBAC-controlled. They may provide more than deployment and upgrade actions for an application, e.g. auto-
rebalancing/scaling, analytics, and usage metering, and may be created from Helm Charts, using Ansible or Go.

 Optimize for Kubernetes-native container workloads, but allow mixing in VM-based workloads via KubeVirt as needed.
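The Operator Framework characteristic above boils down to an event-driven reconcile loop. A minimal sketch of that control-loop idea, with illustrative structures rather than the real Operator SDK API:

```python
# Sketch of the operator pattern referenced above: react to events on a
# resource by reconciling observed status toward the declared spec.
# Everything here is illustrative, not the Operator Framework API.

def reconcile(spec: dict, status: dict) -> list:
    """Return the actions needed to move status toward spec."""
    actions = []
    want, have = spec["replicas"], status["replicas"]
    if have < want:
        actions += [("start", spec["app"])] * (want - have)
    elif have > want:
        actions += [("stop", spec["app"])] * (have - want)
    return actions

# An 'event' (e.g. a node failure) changed observed state; reconcile reacts.
print(reconcile({"app": "analytics", "replicas": 3}, {"replicas": 1}))
```

A real operator watches its custom resources and calls its reconcile function on every relevant event, which is what makes lifecycle management automatic.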

Source: Kubernetes-Native Infrastructure (KNI) Blueprint Family

EdgeX Foundry
EdgeX Foundry is a vendor-neutral open source project hosted by The Linux Foundation building a common open framework for IoT
edge computing. At the heart of the project is an interoperability framework hosted within a full hardware- and OS-agnostic
reference software platform to enable an ecosystem of plug-and-play components that unifies the marketplace and accelerates the
deployment of IoT solutions. Rohit P Sardesai, System Architect at Huawei Technologies, shared a blog post on the EdgeX Foundry
website about how to run the EdgeX Foundry platform on Kubernetes and leverage its benefits. You can access the steps from the
source link below.

Link: https://www.edgexfoundry.org/blog/2018/01/30/huawei-runs-edgex-foundry-kubernetes/

StarlingX
StarlingX is an open source project that offers the services you need for your distributed edge cloud, either to pick individually or to
deploy as one package. The project builds on existing services in the open source ecosystem, taking components of cutting-edge
projects such as Ceph, OpenStack and Kubernetes and complementing them with new services like configuration and fault
management, with a focus on key requirements such as high availability (HA), quality of service (QoS), performance and low latency.

Figure – StarlingX Container Platform

At the Open Infrastructure Summit in Denver in 2019, StarlingX architects showcased the integration of Kubernetes into the
StarlingX-based edge platform. The presentation included an overview of the solution: integration with StarlingX platform services
for deployment and lifecycle management of the Kubernetes cluster, integration with OpenStack Keystone for
authentication/authorization, the StarlingX one- and two-node solutions, the two-node master implementation, and the integrated
Ceph cluster, followed by a demo of platform capabilities. You can learn more from the video and presentation at
https://www.openstack.org/summit/denver-2019/summit-schedule/events/23215/starlingx-hardened-managed-kubernetes-platform-for-the-edge

Source: Introducing StarlingX

Other tools
Eclipse ioFog
Eclipse ioFog is a universal Edge Compute Platform which offers a standardized way to develop and remotely deploy secure
microservices to edge computing devices. ioFog can be installed on any hardware running Linux and provides a universal runtime for
microservices to dynamically run on the edge. Companies in different vertical markets such as retail, automotive, oil and gas, telco,
and healthcare are using ioFog to turn any compute device into an edge software platform.

Source: Eclipse ioFog: Evolving Toward Native Kubernetes Orchestration at the Edge

Edgeworx has launched an Eclipse ioFog software release that makes any Kubernetes distribution edge-aware, allowing customers
to create a true cloud-to-edge continuum and deploy applications and microservices from the cloud to any edge device.

Source: Edgeworx Launches True Cloud-to-Edge Continuum for Any Kubernetes Distribution

Eclipse hawkBit
Eclipse hawkBit™ is a domain independent back-end framework for rolling out software updates to constrained edge devices as well
as more powerful controllers and gateways connected to IP based networking infrastructure.

Source: https://www.eclipse.org/hawkbit/

Key features:

• Scalable to millions of devices, and terabytes of software, on a global scale


• Supports complex rollout strategies (grouping, cascading, error detection)
• Supports standard and proprietary protocols
• Can run on Kubernetes, deployed via Helm chart

Source: www.vand.io/chart/kiwigrid/hawkbit-update-server/
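The rollout features listed above (grouping, cascading, error detection) follow a simple control flow, sketched here with invented names; hawkBit's real rollout management is considerably richer:

```python
# Sketch of a cascading rollout with error detection, in the spirit of the
# hawkBit features listed above. The update callable and the failure
# threshold are illustrative stand-ins.

def cascade(groups: list, update, max_failures: int) -> str:
    """Update device groups one after another; abort if a group fails."""
    for group in groups:
        failures = sum(0 if update(device) else 1 for device in group)
        if failures > max_failures:
            return "aborted"          # stop the cascade before wider damage
    return "complete"

ok = lambda device: True              # stand-in for a real update call
print(cascade([["d1", "d2"], ["d3"]], ok, max_failures=0))
```

Cascading through small groups first limits the blast radius of a bad software update, which is the point of the grouping feature.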

Figure: Eclipse hawkBit

Eclipse Ditto
Eclipse Ditto is an IoT technology implementing a software pattern called "digital twins". A digital twin is a virtual, cloud-based
representation of its real-world counterpart (real-world "Things", e.g. devices like sensors, smart heating, connected cars, smart
grids, EV charging stations).

The technology mirrors potentially millions or even billions of physical "Things" as digital twins residing in the digital world. This
simplifies developing IoT solutions, as software developers do not need to know how or where exactly the physical "Things"
are connected.

With Ditto, a thing can be used like any other web service via its digital twin.
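The digital-twin pattern itself can be sketched as follows. The class and method names are invented for illustration and are not Ditto's actual API:

```python
# Sketch of the digital-twin idea: a cloud-side object mirrors the last
# reported state of a physical device, so applications read the twin
# instead of talking to the device. Names are illustrative, not Ditto's API.

class DigitalTwin:
    def __init__(self, thing_id: str):
        self.thing_id = thing_id
        self.reported = {}      # last state the device reported
        self.desired = {}       # state applications want the device in

    def report(self, **state):
        """Called when the physical device pushes an update."""
        self.reported.update(state)

    def set_desired(self, **state):
        """Applications change the twin; the device syncs later."""
        self.desired.update(state)

    def pending(self) -> dict:
        """Desired values the device has not yet confirmed."""
        return {k: v for k, v in self.desired.items()
                if self.reported.get(k) != v}

twin = DigitalTwin("heating-unit-42")
twin.report(temperature=19.5, power="on")
twin.set_desired(temperature=21.0)
print(twin.pending())   # {'temperature': 21.0}
```

The split between reported and desired state is what lets applications work against the twin even while the physical device is offline.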

Ditto can be deployed on Kubernetes; see https://github.com/eclipse/ditto/tree/master/deployment/kubernetes

Source: https://www.eclipse.org/ditto/intro-overview.html

Eclipse Hono
Eclipse Hono provides uniform (remote) service interfaces for connecting large numbers of IoT devices to a (cloud) back end. It
specifically supports scalable and secure data ingestion (telemetry data) as well as command & control type message exchange
patterns and provides interfaces for provisioning & managing device identity and access control rules. It can be deployed to
Kubernetes.

A deployment guide is available at https://www.eclipse.org/hono/docs/latest/deployment/helm-based-deployment/

Source: https://github.com/eclipse/hono

Case Studies

Target Stores – Using Kubernetes as Deployment Interface that Realizes Distributed Edge Computing
The challenge teams faced in developing software for the stores was figuring out how to build a continuous delivery pipeline to
1,800 deployment targets that was both safe and expedient. Those 1,800 stores also need to be effectively self-sufficient and cannot
rely on an uninterrupted connection to Target data centers or cloud presence. This is where Kubernetes came into the picture.

Target developed "Unimatrix", a centralized apex system that serves as the interface for deploying software to the stores. By
interacting with Unimatrix, teams could effect changes across the entire fleet of Kubernetes clusters with a single command.
Unimatrix was responsible for capturing the intended state of an application and distributing that state down to the store clusters.
Through Unimatrix, each in-store Kubernetes cluster is able to act and operate as its own standalone mini-cloud, while appearing
to developers to be part of a broader collective.
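The intended-state distribution described above can be sketched as follows. The data structures are invented for illustration, since Unimatrix itself is Target-internal:

```python
# Sketch of the Unimatrix idea: one intended state, distributed to every
# store cluster, each converging on its own. Structures are illustrative.

def distribute(intended: dict, stores: dict) -> dict:
    """Push the intended app state to every store's local cluster."""
    for cluster in stores.values():
        cluster["pending"] = dict(intended)   # each store converges alone
    return stores

def converge(cluster: dict) -> dict:
    """A store cluster applies its pending state locally (no cloud needed)."""
    cluster["running"] = cluster.pop("pending")
    return cluster

stores = {"store-0001": {"running": {}}, "store-0002": {"running": {}}}
distribute({"pos-app": "v2"}, stores)
converge(stores["store-0001"])
print(stores["store-0001"]["running"])   # {'pos-app': 'v2'}
```

Because convergence happens inside each store, a store that loses its uplink keeps running the last state it received, which is the self-sufficiency property described above.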

Source: https://tech.target.com/infrastructure/2018/06/20/enter-unimatrix.html

Chick-fil-A - Edge Computing with Kubernetes

The restaurant chain uses Kubernetes to collect data from IoT devices at its various outlets, process it in a central cloud, and send
analysis back to each restaurant to improve both the customer and kitchen experience and to handle spikes in demand. To learn
more about the entire process, please refer to the detailed blog post below.

Source: https://medium.com/@cfatechblog/edge-computing-at-chick-fil-a-7d67242675e2

Bosch IoT Suite – Development of IoT Solution and deployment of it on Edge locations
Kubernetes has been used to develop the Bosch IoT Suite. The suite is built on a microservices architecture and follows a multi-cloud
model to host its different microservices; Kubernetes provides the common deployment model. It allowed the development teams
to remain independent while deploying their services into an integrated platform. Because Kubernetes is available on all major cloud
platforms, it became the common orchestration tool that allows the Bosch IoT Suite services to be consistently deployed and managed.

IoT Suite customers require the Bosch IoT Suite to be deployed in different global regions. Data privacy and network latency often
require the IoT platform to be deployed close to the devices – either on-premise or in-country. Kubernetes makes it possible to
deploy different services in different locations.

Source: https://blog.bosch-si.com/bosch-iot-suite/adopting-kubernetes-to-build-iot-solutions/

Canonical Guide: Deploying Kubernetes at the Edge

How can you create an edge cloud? Edge clouds should have at least two layers – both layers will maximise operational effectiveness
and developer productivity – and each layer is constructed differently.

The first layer is the Infrastructure-as-a-Service (IaaS) layer. In addition to providing compute and storage resources, the IaaS layer
should satisfy the network performance requirements of ultra-low latency and high bandwidth.

The second layer is the Kubernetes layer, which provides a common platform to run your applications and services. Whereas using
Kubernetes for this layer is optional, it has proven to be an effective platform for those organisations leveraging edge computing
today. You can deploy Kubernetes to field devices, edge clouds, core datacenters, and the public cloud. This multi-cloud deployment
capability offers you complete flexibility to deploy your workloads anywhere you choose. Kubernetes offers your developers the
ability to simplify their devops practices and minimise time spent integrating with heterogeneous operating environments.

Okay, but how can I deploy these layers? At Canonical, we accomplish this through the use of well defined, purpose-built technology
primitives. Let’s start with the IaaS layer, which the Kubernetes layer relies upon.

Physical infrastructure lifecycle management


The first step is to think about the physical infrastructure, and what technology can be used to manage the infrastructure effectively,
converting the raw hardware into an IaaS layer. Metal-as-a-Service (MAAS) has proven to be effective in this area. MAAS provides
the operational primitives that can be used for hardware discovery, giving you the flexibility to allocate compute resources and
repurpose them dynamically. These primitives expose bare metal servers to a higher level of orchestration through open APIs, much
like you would experience with OpenStack and public clouds.

With the latest MAAS release you can automatically create edge clouds based on KVM pods, which effectively enable operators to
create virtual machines with pre-defined sets of resources (RAM, CPU, storage and over-subscription ratios). You can do this through
the CLI, Web UI or the MAAS API. You can use your own automation framework or use Juju, Canonical’s advanced orchestration
solution.
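The over-subscription ratios mentioned above imply simple capacity arithmetic. A sketch of that arithmetic follows; the function and the numbers are illustrative, not MAAS's implementation:

```python
# Sketch of the over-subscription arithmetic behind MAAS KVM pods: with a
# ratio above 1.0 you can promise more virtual CPUs than physically exist.
# The figures below are illustrative.

def allocatable_vcpus(physical_cores: int, ratio: float, used: int) -> int:
    """vCPUs still offerable on a host given its over-subscription ratio."""
    return int(physical_cores * ratio) - used

# A 16-core edge host with a 2.0 CPU over-subscription ratio and 12 vCPUs
# already handed out can still offer 20 more.
print(allocatable_vcpus(16, 2.0, 12))   # 20
```

The same calculation applies to RAM and storage; the ratio is the operator's bet on how little of their allocation the virtual machines will actually use at once.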

MAAS can also be deployed in a very optimised fashion to run on top of the rack switches – just as we demonstrated during the
OpenStack Summit in Berlin.

Image 1: OpenStack Summit Demo: MAAS running on a ToR switch (Juniper QFX5100AA)

Edge application orchestration


Once discovery and provisioning of physical infrastructure for the edge cloud is complete, the second step is to choose an
orchestration tool that will make it easy to install Kubernetes, or any software, on your edge infrastructure. Juju allows you to do just
that – you can easily install Charmed Kubernetes, a fully compliant and upstream Kubernetes. And with Kubernetes you can install
containerised workloads, offering them the highest possible performance. In the telecommunications sector, workloads like
Container Network Functions (CNFs) are well suited to this architecture.

There are additional benefits to Charmed Kubernetes. With the ability to run in a virtualised environment or directly on bare metal,
fully automated Charmed Kubernetes deployments are designed with built-in high availability, allowing for in place, zero downtime
upgrades. This is a proven, truly resilient edge infrastructure architecture and solution. An additional benefit of Charmed Kubernetes
is its ability to automatically detect and configure GPGPU resources for accelerated AI model inferencing and containerised
transcoding workloads.

Source: https://ubuntu.com/blog/deploying-kubernetes-at-the-edge-part-i-building-blocks

Kubernetes IoT Edge Working Group

The Eclipse Foundation, an organization working to standardize open source technologies for IoT, has teamed up with the Cloud
Native Computing Foundation, which leads the development of Kubernetes. Together they created a new Kubernetes IoT Edge
Working Group to solve many of the challenges that still exist when trying to implement Kubernetes-based software containers in
IoT edge deployments.

Some of the great-unsolved problems at the IoT edge — connectivity, manageability, scalability, reliability, security challenges, how
to bring compute resources closer to edge devices for faster data processing and actions — are being solved as one-offs by the
enterprises that are jumping into IoT. This new working group sees Kubernetes as having great potential as a foundational
technology for extending hybrid environments to the IoT edge, and believes that broader industry collaboration on requirements
definition around Kubernetes will accelerate broader adoption of IoT. Once again, it’s the rising tide theory of open source.

As Red Hat’s Dejan Bosanac (lead for the Kubernetes IoT Edge Working Group) says:

“IoT and edge applications have many distributed components that don’t usually sit together within the same datacenter
infrastructure. There are messaging challenges, security has to be re-invented for every application and service, and there are
integration and data locality issues with sidecar services. These are issues that shouldn’t have to be re-invented every time — they
should be open source infrastructure with broad industry support. Red Hat and this working group see Kubernetes and other cloud-
native projects in its orbit as having broad potential sitting between gateways, edge nodes and cloud platforms. Much like the LAMP
stack was instrumental to the web-applications era, this group is focused on accelerating a Kubernetes stack for running cloud
infrastructure and distributed components at the IoT edge.”

Some of the initial targets for the group in how it evolves Kubernetes for IoT edge applications:

 Supporting Industrial IoT (IIoT) use cases scaling to millions of constrained devices a) connecting directly to Kubernetes-based
cloud infrastructure (IP enabled devices), or b) connecting via IoT gateways (for non-IP enabled devices)

 Via Edge nodes, bringing computing closer to data sources to support processing and acting on data sooner. Anticipated
benefits include reduced latency, lower bandwidth, and improved reliability. Some example use cases:

o Deploying data streaming applications to the Edge nodes in order to reduce traffic and save bandwidth between
devices and the central cloud.

o Deploying a serverless framework for using local functions that can be triggered as a response to certain events
(without communication with the cloud)

 Providing a common control plane across hybrid cloud and edge environments to simplify management and operations
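The second example use case above, local functions triggered by events without contacting the cloud, is essentially an event-dispatch loop. A minimal sketch with invented event names and handlers:

```python
# Sketch of edge-local serverless dispatch: functions registered against
# event types fire locally, with no round trip to the cloud.
# Event names and handlers are made up for illustration.

handlers = {}

def on(event_type: str):
    """Register a local function for an event type."""
    def wrap(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return wrap

def fire(event_type: str, payload: dict) -> list:
    """Dispatch an event to local handlers; return their results."""
    return [fn(payload) for fn in handlers.get(event_type, [])]

@on("door-open")
def alert(payload):
    return f"alert: door {payload['id']} opened"

print(fire("door-open", {"id": "dock-3"}))
```

A serverless framework at the edge adds packaging, scaling, and isolation around this core idea, but the trigger-to-function path stays entirely local.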

The initial focus of the working group will be to flesh out IoT edge computing use cases and see how Kubernetes can be used (and to
what extent). Among some of the requirements identified so far:

 For IIoT applications, the Kubernetes ingress layer must scale to millions of connections

 That same ingestion layer must provide first-class support for IIoT messaging protocols such as MQTT (it is primarily HTTP/TLS-
centric today)

 Kubernetes must support multi-tenancy for environments where devices and gateways are shared

Group contact
 Slack

 Mailing list

 Open Community Issues/PRs

Source: K8s at the Edge – Some Context on the New Kubernetes IoT Working Group

About Calsoft
Calsoft provides end-to-end product development solutions along with quality assurance, ecosystem
integration and new age professional services such as DevOps assessment, datacenter advisory, support and
sustaining capabilities to assist global ISVs in enhancing their product road maps and achieving strategic
business goals. Our deep domain knowledge across Datacenter technologies and verticals such as IoT, AI and
machine learning helps customers create exceptional products within well-defined time and budget.

Global Offices
U.S.A. HEADQUARTERS INDIA HEADQUARTERS
1762 Technology S. No 320/1/C, Bavdhan (B)
DR STE 229, Tal – Mulshi, Dist – Pune
San Jose, 411 021
California – 95110-1385 Phone: +91 (20) 6654 4444
Phone: (408) 834 7086 Fax: +91 (20) 6654 4000
Email: Email:
marcom@calsoftinc.com marcom@calsoftinc.com

BENGALURU (INDIA) PUNE (INDIA)


Shailendra Techno Park Block Congo, Embassy,
Plot No.116, EPIP Zone, Techzone, Wing B, 7th Floor,
1st Stage, Whitefield, Rajiv Gandhi InfoTech, Phase II,
Bangalore-560066, India Hinjewadi, Pune – 411057, India
Phone: +91 (80) 6757 7880 Phone: +91 (20) 6654 4444
Email: Email:
marcom@calsoftinc.com marcom@calsoftinc.com
