
A TECHNICAL SEMINAR REPORT ON

EDGE COMPUTING

Submitted in partial fulfillment of requirement for the award of the degree of

BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING

Submitted by

SABAVATH YADAGIRI
20BD5A0535

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


KESHAV MEMORIAL INSTITUTE OF TECHNOLOGY
(Approved by AICTE, Affiliated to JNTUH)
Narayanaguda, Hyderabad, Telangana-29, 2022-23

CERTIFICATE

This is to certify that the seminar work entitled “EDGE COMPUTING” is a bona fide work
carried out in the seventh semester by “SABAVATH YADAGIRI (20BD5A0535)” in partial
fulfilment of the requirements for the award of Bachelor of Technology in “COMPUTER SCIENCE &
ENGINEERING” from JNTU Hyderabad during the academic year 2022-23, and that no part
of this work has been submitted earlier for the award of any degree.

TS INCHARGE INTERNAL EXAMINER

HEAD OF THE DEPARTMENT

CONTENTS
CHAPTER TITLE PAGE NO.

Abstract 1

List of tables and figures 2

1 Introduction 3

2 What is edge computing? 4

2.1. A deeper exploration of edge computing considerations 5

2.2. Edge vs cloud 6

3 Three-tier edge computing model 7

4 Why edge computing? 7

4.1. Exploring characteristics 8

4.2. Where is the edge in edge computing? 9

5 Key drivers and barriers in the edge computing market 10

5.1. Edge computing is a strategic priority for many Tier-1 operators, and deployments are underway in several regions 10

5.2. 5G roll-outs and enterprise and IoT service opportunities are driving edge investments, but strategic objectives and barriers vary by region 11

6 Use cases of edge computing 13

6.1. Examining certain scenarios 14

7 Challenges 18

8 Conclusion 20

References 21

ABSTRACT
Cloud computing has revolutionized how people store and use their data. However, there are
some areas where the cloud is limited: latency, bandwidth, security and a lack of offline access
can be problematic. To address these problems, users need the robust, secure and intelligent on-premise
infrastructure of edge computing. When data is physically located closer to the users who
connect to it, information can be shared quickly, securely, and with minimal latency. In financial
services, gaming, health care and retail, low latency is vital for a great digital
experience. Combining cloud with edge infrastructure improves reliability and response
times.

LIST OF TABLES AND FIGURES
FIGURE NO. FIGURE NAME PAGE NO.

2.1 A small-scale edge computing model 6

3 Three-tier edge computing model 7

4.2 The edge in edge computing 9

5.1 Regional distribution of survey participants and their edge computing deployment status 11

5.2 Main motivation for deploying edge clouds, by region, 2020 11

5.3 Top three motivations for deploying edge clouds, by region, 2020 13

6.1 Mobile edge computing model 15

6.2 NaaS 16
1. INTRODUCTION

In recent years, with the proliferation of the Internet of Things (IoT) and the wide penetration of
wireless networks, the number of edge devices and the volume of data generated at the edge have been growing
rapidly. According to an International Data Corporation (IDC) prediction [20], global data will reach 180
zettabytes (ZB), and 70% of the data generated by IoT will be processed at the edge of the network,
by 2025. IDC also forecasts that more than 150 billion devices will be connected worldwide by 2025.
At this scale, the centralized processing mode based on cloud computing is not efficient enough to
handle the data generated at the edge. The centralized processing model uploads all data to the cloud
data center through the network and leverages its supercomputing power to solve computing and
storage problems, which enables cloud services to create economic benefits. However, in the
context of IoT, traditional cloud computing has several shortcomings.

1) Latency: Novel applications in the IoT scenario have demanding real-time requirements. In the
traditional cloud computing model, applications send data to the data center and wait for a response,
which increases system latency. For example, high-speed autonomous driving vehicles require
response times of a few milliseconds. Serious consequences can occur if system latency exceeds
these bounds due to network problems.

2) Bandwidth: Transmitting the large amounts of data generated by edge devices to the cloud in
real time would put great pressure on network bandwidth. For example, a Boeing 787 generates
more than 5 GB of data per second, but the bandwidth between an aircraft and satellites is insufficient to support
real-time transmission.

3) Availability: As more and more Internet services are deployed on the cloud, the availability of
these services has become an integral part of daily life. For example, smartphone users who are used
to voice-based services, e.g., Siri, will feel frustrated if the service is unavailable even for a short period of
time. Therefore, it is a big challenge for cloud service providers to keep the 24 × 7 promise.

4) Energy: Data centers consume a lot of energy. According to Sverdlik’s research [2], the energy
consumption of all data centers in the United States was projected to increase by 4% by 2020, reaching 73 billion
kilowatt-hours. With the increasing amount of computation and transmission, energy consumption will
become a bottleneck restricting the development of cloud computing centers.

5) Security and Privacy: Data from the interconnected devices of thousands of households is closely related
to users’ lives. For example, indoor cameras transmitting video from the home to the cloud
increase the risk of leaking users’ private information. With the enforcement of the EU General Data
Protection Regulation (GDPR) [18], data security and privacy issues have become even more important for
cloud computing companies.

These challenges have pushed forward the horizon of edge computing, which calls for processing data at
the edge of the network. Edge computing has developed rapidly since 2014, with the potential to reduce latency and
bandwidth charges, address the limited computing capability of cloud data centers, increase
availability, and protect data privacy and security.
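To make the bandwidth shortfall concrete, here is a back-of-the-envelope sketch in Python. The generated data rate comes from the aircraft example above; the 50 Mbit/s uplink figure is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope check of the bandwidth mismatch described above.
# Illustrative figures: ~5 GB/s generated (the aircraft example) versus
# an assumed 50 Mbit/s satellite uplink.

def required_reduction(generated_gbit_s: float, uplink_gbit_s: float) -> float:
    """Factor by which edge preprocessing must shrink the data stream
    before the remainder fits the available uplink."""
    return generated_gbit_s / uplink_gbit_s

generated = 5 * 8       # 5 GB/s expressed in Gbit/s
uplink = 0.05           # 50 Mbit/s expressed in Gbit/s

factor = required_reduction(generated, uplink)
print(f"edge processing must reduce the stream {factor:.0f}x")
```

Under these assumed numbers, the stream must be condensed by a factor of several hundred at the edge before real-time transmission becomes feasible.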

2. WHAT IS EDGE COMPUTING?


It is worth highlighting that many overlapping and sometimes conflicting definitions of edge
computing exist—edge computing means many things to many people. But for our purposes, the most
mature view is that edge computing offers application developers and service providers cloud
computing capabilities, as well as an IT service environment, at the edge of a network.
The aim is to deliver compute, storage, and bandwidth much closer to data inputs and/or end users. An
edge computing environment is characterized by potentially high latency among all the sites and low
and unreliable bandwidth—alongside distinctive service delivery and application functionality
possibilities that cannot be met with a pool of centralized cloud resources in distant data centers. By
moving some or all of the processing functions closer to the end user or data collection point, cloud
edge computing can mitigate the effects of widely distributed sites by minimizing the effect of latency
on the applications.
Edge computing first emerged by virtualizing network services over WAN networks, taking a step
away from the data centre. The initial use cases were driven by a desire to leverage a platform that
delivered the flexibility and simple tools that cloud computing users have become accustomed to.
As new edge computing capabilities emerge, we see a changing paradigm for computing—one that is
no longer necessarily bound by the need to build centralized data centres. Instead, for certain
applications, cloud edge computing is taking the lessons of virtualization and cloud computing and
creating the capability to have potentially thousands of massively distributed nodes that can be applied
to diverse use cases, such as industrial IoT or even far-flung monitoring networks for tracking real time
water resource usage over thousands, or millions, of locations.
Many proprietary and open-source edge computing capabilities already exist without relying on
distributed cloud—some vendors refer to this as “device edge.” Components of this approach include
elements such as IoT gateways or NFV appliances. But increasingly, applications need the versatility
of cloud at the edge, although the tools and architectures needed to build distributed edge
infrastructures are still in their infancy. Our view is that the market will continue to demand better
capabilities for cloud edge computing.

Edge computing capabilities and requirements include, but are not limited to:

• A consistent operating paradigm across diverse infrastructures.
• The ability to perform in a massively distributed (think thousands of global locations) environment.
• The need to deliver network services to customers located at globally distributed remote locations.
• Application integration, orchestration and service delivery requirements.
• Hardware limitations and cost constraints.
• Limited or intermittent network connections.
• Methods to address applications with strict low-latency requirements (AR/VR, voice, and so forth).
• Geofencing and requirements for keeping sensitive private data local.

2.1. A DEEPER EXPLORATION OF EDGE COMPUTING CONSIDERATIONS
The "edge" in edge computing refers to the outskirts of an administrative domain, as close as possible
to discrete data sources or end users. This concept applies to telecom networks, to large enterprises
with distributed points of presence such as retail, or to other applications, in particular in the context
of IoT. One of the characteristics of edge computing is that the application is strongly associated with
the edge location. For telecoms, “the edge” would refer to a point close to the end user but controlled
by the provider, potentially having some elements of workloads running on end user devices. For large
enterprises, “the edge” is the point where the application, service or workload is used (e.g. a retail store
or a factory). For the purposes of this definition, the edge is not an end device with extremely limited
capacity for supporting even a minimal cloud architecture, such as an IoT or sensor device. This is an
important consideration, because many discussions of edge computing do not make that distinction.

Figure 2.1. A Small-scale edge computing model

2.2. EDGE VERSUS CLOUD


Edge computing and cloud computing are not substitutes for one another; rather, they are
complementary. The ubiquity of smart devices and the rapid development of modern virtualization and
cloud technologies have brought edge computing to the foreground, defining a new era in cloud
computing. Edge computing needs the powerful computing and massive storage support of a cloud
computing center, and the cloud computing center needs the edge computing model to process
massive amounts of data, including privacy-sensitive data.

Edge computing has several obvious advantages. First, a large amount of transient data is processed
at the edge of the network rather than being uploaded to the cloud in its entirety, which greatly reduces the pressure
on network bandwidth and data center power consumption. Second, processing data near its
producer does not require a round trip to the cloud computing center over the network, which
greatly reduces system latency and improves service responsiveness. Finally, edge
computing keeps users’ private data on edge devices instead of uploading it, which reduces the risk of
network data leakage and protects security and privacy.
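The first and third advantages can be sketched together in a few lines of Python: raw readings are aggregated locally, and only the compact summary leaves the edge node. This is a minimal illustration; the field names and sample figures are invented.

```python
from statistics import mean

def summarize_at_edge(readings):
    """Aggregate raw readings locally; only this compact summary is
    uploaded, so raw (potentially private) data never leaves the edge."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

# e.g. room-temperature samples from a smart-home sensor (invented values)
raw = [21.4, 21.9, 22.1, 21.6, 21.7, 22.0]
print(summarize_at_edge(raw))   # a few bytes uploaded instead of the full series
```

The same pattern scales to video or audio: the edge node extracts features or events and transmits those, rather than the raw stream.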

3. THREE-TIER EDGE COMPUTING MODEL

Figure 3. Three-tier edge computing model

By analyzing several representative application scenarios of edge computing, in Fig. 3, we abstract a


typical three-tier edge computing model: IoT, edge, and cloud. The first tier is IoT, including drones,
sensors in the connected health area, devices and appliances in the smart home, and equipment in the
industrial Internet. Multiple communication protocols are used to connect IoT and the second tier,
edge. For example, drones can connect to a cellular tower by 4G/LTE, and sensors in the smart home
can communicate with the home gateway through Wi-Fi. The edge tier, including connected and autonomous
vehicles, cellular towers, gateways, and edge servers, relies on the huge computing and storage
capabilities of the cloud to complete complex tasks. The protocols between IoT and the edge are usually
characterized by low power consumption and short range, while those between the
edge and the cloud offer high throughput and speed. Ethernet, optical fiber, and emerging
5G are the preferred communication technologies between the edge and the cloud.
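The tiers and links above can be captured in a small data structure. This is a sketch: the class names are ours, while the node and protocol pairings follow Figure 3.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    protocol: str      # e.g. "Wi-Fi", "4G/LTE", "optical fiber"
    to_tier: str       # the tier this link connects up to

@dataclass
class Node:
    name: str
    tier: str                  # "iot", "edge" or "cloud"
    uplink: Optional[Link]     # link toward the next tier up (None for the cloud)

# The three tiers of the model, with protocol choices taken from the text.
cloud   = Node("cloud data center", "cloud", None)
tower   = Node("cellular tower",    "edge",  Link("optical fiber", "cloud"))
gateway = Node("home gateway",      "edge",  Link("Ethernet",      "cloud"))
drone   = Node("drone",             "iot",   Link("4G/LTE",        "edge"))
sensor  = Node("home sensor",       "iot",   Link("Wi-Fi",         "edge"))
```

Walking a node's `uplink` chain reproduces the IoT-to-edge-to-cloud path the figure describes.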

4. WHY EDGE COMPUTING?


The explosive growth and increasing computing power of IoT devices have resulted in unprecedented
volumes of data. And data volumes will continue to grow as 5G networks increase the number of
connected mobile devices.
In the past, the promise of cloud and AI was to automate and speed innovation by driving actionable
insight from data. But the unprecedented scale and complexity of data that’s created by connected
devices has outpaced network and infrastructure capabilities.
Sending all that device-generated data to a centralized data centre or to the cloud causes bandwidth
and latency issues. Edge computing offers a more efficient alternative: data is processed and analysed
closer to the point where it is created. Because data does not traverse a network to a cloud or data
centre to be processed, latency is significantly reduced. Edge computing — and mobile edge
computing on 5G networks — enables faster and more comprehensive data analysis, creating the
opportunity for deeper insights, faster response times and improved customer experiences.

4.1. EXPLORING CHARACTERISTICS

The defining need that drives cloud edge computing is the need for service delivery to be closer to
users or end-point data sources. Edge computing environments will work in conjunction with core
capacity, but aim to deliver an improved end user experience without putting unreasonable demands
on connectivity to the core. Improvements result from:

• Reducing latency: The latency to the end user could be lower than it would be if
the compute was farther away—making, for instance, responsive remote desktops
possible, or successful AR, or better gaming.
• Mitigating bandwidth limits: The ability to move workloads closer to the end
users or data collection points reduces the effect of limited bandwidth at a site. This
is especially useful if the service on the edge node reduces the need to transmit large
amounts of data to the core for processing, as is often the case with IoT and NFV
workloads. Data reduction and local processing translate into both more
responsive applications and a lower cost of transporting terabytes of data over
long distances.

But there are trade-offs. To deliver edge computing, it is necessary to vastly increase the number of
deployments, which poses a significant challenge to widespread adoption. If managing a
single cloud takes a team of ten, how can an organization cope with hundreds or even thousands of
small clouds? Some requirements include:

• Standardization and infrastructure consistency are needed. Each location has to be similar; a known quantity.
• Manageability needs to be automated; deployment, replacement and any recoverable failures should be simple and straightforward.
• Simple, cost-effective plans need to be laid for when hardware fails.
• Locally fault-tolerant designs might be important, particularly in environments that are remote or unreachable—zero-touch infrastructure is desirable. This is a question that balances the cost of buying and running redundant hardware against the cost of outages and emergency repairs.

Considerations include:

• Do these locations need to be self-sufficient? If a location has a failure, no one is going to be on-site to fix it, and local spares are unlikely.
• Does it need to tolerate failures? And if it does, how long is it going to be before someone will be available to repair it—two hours, a week, a month?
• Maintainability needs to be straightforward—untrained technicians perform manual repairs and replacements, while a skilled remote administrator re-installs or maintains software.
• Physical designs may need a complete rethink. Most edge computing environments won’t be ideal—limited power, dirt, humidity and vibration have to be considered.

4.2. WHERE IS THE EDGE IN EDGE COMPUTING?

If we agree that reduced latency and reduced cost are key characteristics of edge computing, then
sending data over the WAN — even if it is to a private data centre in your headquarters — is NOT edge
computing. To put it another way, edge computing means that data is processed before it crosses any
WAN, and therefore is NOT processed in a traditional data center, whether that be a private or public
cloud data center.

The following picture illustrates the typical devices in an Industrial IoT solution and who claims to
have “Edge Compute” in this topology:

Figure 4.2. The edge in edge computing

As you can see, the edge is relative. The service provider’s edge is not the customer’s edge. But the
most important difference between the edge compute locations depicted is the network connectivity.
End devices, IoT appliances, and routers are connected via the LAN — maybe Wi-Fi or Gigabit
Ethernet cable. That is usually a very reliable and cheap link. The link between the routers/gateways
and the cell tower is the most critical: it is the last mile from the service provider, it introduces the most
latency, and it is the most expensive for the end customer. It is the 5G or 4G uplink. Once traffic reaches the
cell tower, the provider has fiber and throughput is no longer a concern, but costs
start to increase.

As you can also infer from the graphic, end devices should be excluded from edge compute because it
can be near impossible to draw the line between things, smart things and edge compute things.

5. KEY DRIVERS AND BARRIERS IN THE EDGE COMPUTING MARKET

5.1. EDGE COMPUTING IS A STRATEGIC PRIORITY FOR MANY TIER-1 OPERATORS, AND DEPLOYMENTS ARE UNDERWAY IN SEVERAL REGIONS

Edge computing is a top strategic priority for many Tier-1 operators across North America (NA),
Western Europe (WE) and Asia–Pacific (APAC) (87%). These operators are either at the planning or
deployment phase of their edge cloud strategies. 30% of the operators surveyed are ‘first movers’ (that
is, they are already in the process of deploying an edge cloud), and operators from Asia–Pacific lead
this group. 57% of operators are currently outlining their plans for building edge computing
infrastructure in the near term; we refer to them as ‘followers’ in this paper. A small portion of
operators (13%) are still exploring the value and requirements of edge computing, and do not have
immediate plans to deploy edge clouds.

Figure 5.1. Regional distribution of survey participants and their edge computing deployment status

5.2. 5G ROLL-OUTS AND ENTERPRISE AND IOT SERVICE OPPORTUNITIES ARE DRIVING EDGE INVESTMENTS, BUT STRATEGIC OBJECTIVES AND BARRIERS VARY BY REGION

There are three main drivers of operators’ edge computing strategies: their ambitions in the enterprise
and IoT services markets, requirements for new edge locations to support 5G roll-outs and network
cloudification and, to a lesser extent, new opportunities in the consumer market. However, there is a
stark contrast in the primary motivations for deploying edge computing in different geographies (see
Figure 5.2).

Figure 5.2. Main motivation for deploying edge clouds, by region, 2020

• Most operators in North America (70%) are driven by their plans for 5G and vRAN roll-outs
and backhaul traffic management. These operators are mainly focused on supporting their 5G and
network cloudification efforts by creating private edge computing nodes that will host network
functions such as vRAN (CUs or DUs) or 5G packet core (for example, vUPF), which are increasingly
transforming into disaggregated and distributed cloud-native architectures.

• The majority of Western European operators (70%) have a more service-driven motivation for
deploying edge computing, based on enabling new enterprise, IoT and consumer edge services. For
example, this could be in the form of a public edge cloud service that offers various ‘as a service’
business models (SaaS, PaaS and IaaS) for specific enterprise IT workloads and IoT applications at
the edge. The roadmaps for 5G roll-outs in this region have so far been less ambitious than those in
North America and Asia–Pacific, suggesting that operators in Western Europe may be placing a greater
emphasis on the new service revenue benefits of edge in order to justify their early investments.

• Operators in Asia–Pacific have a more balanced deployment strategy that is evenly split
between the two types of drivers listed above. However, operators’ responses become much more
homogeneous across all regions when it comes to the broader motivations for deploying edge clouds,
as shown in Figure 5.3. This indicates that operators’ different starting points and approaches will
coalesce and converge over time. For example, technology-driven operators are likely to co-locate new
edge applications alongside their 5G network functions in their edge clouds in order to monetize their
5G investments with new or enhanced services, such as ultra-low-latency use cases. Conversely,
service-driven operators may deploy some of the 5G network functions across their generic edge
computing footprint to reduce the cost of their 5G deployments by utilizing existing edge
infrastructure.

Figure 5.3. Top three motivations for deploying edge clouds, by region, 2020

6. USE CASES OF EDGE COMPUTING


There are probably dozens of ways to characterize use cases and this paper is too short to provide an
exhaustive list. But here are some examples to help clarify thinking and highlight opportunities for
collaboration. Four major categories of workload requirements that benefit from a distributed
architecture are analytics, compliance, security, and NFV.

• DATA COLLECTION AND ANALYTICS: IoT, where data is often collected from a large
network of microsites, is an example of an application that benefits from the edge computing
model. Sending masses of data over often limited network connections to an analytics engine
located in a centralized data center is counterproductive; it may not be responsive enough, could
contribute to excessive latency, and wastes precious bandwidth. Since edge devices can also
produce terabytes of data, taking the analytics closer to the source of the data on the edge can
be more cost-effective by analyzing data near the source and only sending small batches of
condensed information back to the centralized systems. There is a tradeoff here—balancing the
cost of transporting data to the core against losing some information.
• SECURITY: Unfortunately, as edge devices proliferate––including mobile handsets and IoT
sensors––new attack vectors are emerging that take advantage of the proliferation of endpoints.
Edge computing offers the ability to move security elements closer to the originating source
of attack, enables higher performance security applications, and increases the number of
layers that help defend the core against breaches and risk.
• COMPLIANCE REQUIREMENTS: Compliance covers a broad range of requirements,
ranging from geofencing and data sovereignty to copyright enforcement. Restricting access to
data based on geography and political boundaries, limiting data streams depending on copyright
limitations, and storing data in places with specific regulations are all achievable and
enforceable with edge computing infrastructure.

• NETWORK FUNCTION VIRTUALIZATION (NFV): NFV is at its heart the quintessential
edge computing application because it provides infrastructure functionality. Telecom operators
are looking to transform their service delivery models by running virtual network functions as
part of, or layered on top of, an edge computing infrastructure. To maximize efficiency and
minimize cost and complexity, running NFV on edge computing infrastructure makes sense.
• REAL-TIME: Real-time applications, such as AR/VR, connected cars, telemedicine, the tactile
internet, Industry 4.0 and smart cities, are unable to tolerate more than a few milliseconds of
latency and can be extremely sensitive to jitter, or latency variation. As an example, connected
cars will require low latency and high bandwidth, and depend on computation and content
caching near the user, making edge capacity a necessity. In many scenarios, particularly where
closed-loop automation is used to maintain high availability, response times in tens of
milliseconds are needed, and cannot be met without edge computing infrastructure.

• NETWORK EFFICIENCY: Many applications are not sensitive to latency and do not require
large amounts of nearby compute or storage capacity, so they could theoretically run in a
centralized cloud, but the bandwidth requirements and/or compute requirements may still make
edge computing a more efficient approach. Some of these workloads are common today,
including video surveillance and IoT gateways, while others, including facial recognition and
vehicle number plate recognition, are emerging capabilities. With many of these, the edge
computing infrastructure not only reduces bandwidth requirements, but can also provide a
platform for functions that enable the value of the application—for example, video surveillance
motion detection and threat recognition. In many of these applications, 90% of the data is
routine and irrelevant, so sending it to a centralized cloud is prohibitively expensive and
wasteful of often scarce network bandwidth. It makes more sense to sort the data at the edge
for anomalies and changes, and only report on the actionable data.

6.1. EXAMINING CERTAIN SCENARIOS


The basic characteristic of the edge computing paradigm is that the infrastructure is located closer to
the end user, that the scale of site distribution is high and that the edge nodes are connected by WAN
network connections. Examining a few scenarios in additional depth helps us evaluate current
capabilities that map to the use case, as well as highlighting weaknesses and opportunities for
improvement.

• Retail/finance/remote location “cloud in a box”: Edge computing infrastructure that supports a
suite of applications customized to the specific company or industry vertical. Enterprises often
couple such edge infrastructure into a distributed whole to reduce the hardware footprint,
standardize deployments at many sites, gain flexibility to replace applications located at the
edge (and to run the same application uniformly in all nodes irrespective of hardware), boost
resiliency, and address concerns about intermittent WAN connections. Caching content or
providing compute, storage, and networking for self-contained applications are obvious uses
for edge computing in settings with limited connectivity.

• Mobile connectivity: Mobile/wireless networks are likely to be a common environmental


element for cloud edge computing, as mobile networks will remain characterized by limited
and unpredictable bandwidth, at least until 5G becomes widely available. Applications such as
augmented reality for remote repair and telemedicine, IoT devices for capturing utility (water,
gas, electric, facilities management) data, inventory, supply chain and transportation solutions,
smart cities, smart roads and remote security applications will all rely on the mobile network
to greater or lesser degrees. They will all benefit from edge computing’s ability to move
workloads closer to the end user.

Figure 6.1. Mobile edge computing model

• Network-as-a-Service (NaaS): Arising from the need to deliver an identical network service
application experience in radically different environments, the NaaS use case requires both a
small footprint for its distributed platform at the edges and strong centralized management tools
that can cross unreliable or limited WAN network connections in support of the services out
on the edge. The main characteristics of this scenario are a small hardware footprint, moving
(changing network connections) and constantly changing workloads, and hybrid locations of data
and applications. This is one of the cases that needs infrastructure to support micro nodes—
small doses of compute in non-traditional packages (not always a 19-inch rack in a cooled data center).
NaaS will require support for thousands or tens of thousands of nodes at the edge and must
support mesh and/or hierarchical architectures as well as on-demand sites that might spin up as
they are needed and shut down when they are done. APIs and GUIs will have to change to reflect
that large numbers of compute nodes will have different locations instead of being present in
the same data center.

Figure 6.2. NaaS

• Universal Customer Premises Equipment (uCPE): This scenario, already being deployed today,
demands support for appliance-sized hardware footprints and is characterized by limited
network connections and generally stable workloads requiring high availability. It also
requires a method of supporting hybrid locations of data and applications across hundreds or
thousands of nodes, and scaling existing uCPE deployments will be an emerging requirement.
This is particularly applicable to NFV applications where different sites might need a different
set of service-chained applications, or sites with a different set of required applications that still
need to work in concert. Mesh or hierarchical architectures would need to be supported, with
localized capacity and the need to store and forward data processing due to intermittent network
connections. Self-healing and self-administration, combined with the ability to remotely
administer the node, are musts.

• Satellite-enabled communication (SATCOM): This scenario is characterized by numerous
capable terminal devices, often distributed to the most remote and harsh environments. At the same
time, it makes sense to utilize these distributed platforms for hosting services, especially
considering the extremely high latency, limited bandwidth and cost of over-the-satellite
communications. Specific examples of such use cases might include vessels (from fishing boats
to tanker ships), aircraft, oil rigs, mining operations or military-grade infrastructure.

7. CHALLENGES
Though there are plenty of examples of edge deployments already in progress around the world,
widespread adoption will require new ways of thinking to solve emerging and already existing
challenges and limitations.

We have established that an edge computing platform has to be, by design, much more fault tolerant
and robust than a traditional data-center-centric cloud, both in terms of the hardware and the
platform services that support the application lifecycle. We cannot assume that such edge use cases
will have the maintenance and support facilities that standard data center infrastructure does. Zero-touch
provisioning, automation, and autonomous orchestration in all infrastructure and platform stacks
are crucial requirements in these scenarios. But there are other challenges that need to be taken into
consideration.

For one, edge resource management systems should deliver a set of high-level mechanisms whose
assembly results in a system capable of operating and using a geo-distributed IaaS infrastructure
relying on WAN interconnects. In other words, the challenge is to revise (and extend where needed)
IaaS core services in order to deal with the aforementioned edge specifics—network
disconnections and limited bandwidth, limited compute and storage capacity, unmanned
deployments, and so forth.

Some foreseeable needs include:

• A virtual-machine/container/bare-metal manager in charge of the machine/container lifecycle (configuration, scheduling, deployment, suspend/resume, and shutdown).
• An image manager in charge of template files (a.k.a. virtual-machine/container images).
• A network manager in charge of providing connectivity to the infrastructure: virtual networks and external access for users.
• A storage manager, providing storage services to edge applications.
• Administrative tools, providing user interfaces to operate and use the dispersed infrastructure.
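As a sketch of the first item, here is a toy in-memory lifecycle manager. All class and method names are hypothetical; a production system would additionally need persistent state, real scheduling, and tolerance for WAN partitions.

```python
import itertools

class InMemoryInstanceManager:
    """Toy in-memory stand-in for a machine/container lifecycle manager
    (configure, deploy, suspend/resume, shut down). Names are hypothetical."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}   # instance id -> metadata

    def deploy(self, image_id: str, site: str) -> str:
        """Start an instance of `image_id` at a given edge site."""
        iid = f"{site}-vm{next(self._ids)}"
        self.instances[iid] = {"image": image_id, "site": site, "state": "running"}
        return iid

    def suspend(self, iid: str) -> None:
        self.instances[iid]["state"] = "suspended"

    def resume(self, iid: str) -> None:
        self.instances[iid]["state"] = "running"

    def shutdown(self, iid: str) -> None:
        del self.instances[iid]

mgr = InMemoryInstanceManager()
vm = mgr.deploy("sensor-analytics:v1", "retail-store-12")
mgr.suspend(vm)
```

The other managers (image, network, storage) would expose analogous small interfaces; the hard part, as the text notes, is making them work across disconnections and unmanned sites.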

These needs are relatively obvious and could likely be met by leveraging and adapting existing
projects. But other needs for edge computing are more challenging. These include, but are not
limited to:

• Addressing storage latency over WAN connections.
• Reinforced security at the edge—monitoring the physical and application integrity of each site, with the ability to autonomously enable corrective actions when necessary.
• Monitoring resource utilization across all nodes simultaneously.
• Orchestration tools that manage and coordinate many edge sites and workloads, potentially leading toward a peering control plane or “self-organizing edge.”

• Orchestration of a federation of edge platforms (or cloud-of-clouds) has to be explored and
introduced to the IaaS core services.
o Automated edge commission/decommission operations, including initial software
deployment and upgrades of the resource management system’s components.
o Automated data and workload relocations—load balancing across geographically
distributed hardware.

8. CONCLUSION
From wearables to vehicles to robots, IoT devices are gaining momentum. As we move towards a more
connected ecosystem, data generation will continue to skyrocket, especially as 5G technology takes
off and enables faster connections. While a centralized cloud or data center has traditionally been the
go-to for data management, processing, and storage, each has its limitations. Edge computing can
provide an alternative solution, but since the technology is still in its infancy, it’s difficult to predict its
success moving forward.

Challenges around device capabilities — including the ability to develop software and hardware that
can handle computational offloading from the cloud — are likely to arise. Being able to teach machines
to toggle between a computation that can be performed at the edge and one that requires the cloud is
also a challenge. Even so, as adoption picks up, there will be more opportunities for companies to test
and deploy this technology across sectors. And while some use cases may prove the value of edge
computing more clearly than others, the potential impact on our connected ecosystem as a whole could
be game-changing.

REFERENCES

[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8789742

[2] “Here’s How Much Energy All US Data Centers Consume,” Data Center Knowledge, San Francisco, CA, USA, 2016.

[3] P. Voigt and A. von dem Bussche, “The EU General Data Protection Regulation (GDPR),” in A Practical Guide, 1st ed. Cham, Switzerland: Springer, 2017.

[4] https://www.cisco.com/c/dam/en/us/solutions/service-provider/edge-computing/pdf/white-papersp-analysys-mason-research.pdf

[5] https://www.cbinsights.com/research/what-is-edge-computing/

[6] https://www.ibm.com/in-en/cloud/what-is-edge-computing

[7] https://www.openstack.org/use-cases/edge-computing/cloud-edge-computing-beyond-the-datacenter/

[8] https://blogs.cisco.com/internet-of-things/an-introduction-to-edge-computing-and-use-caseexamples

[9] https://www.theverge.com/circuitbreaker/2018/5/7/17327584/edge-computing-cloud-googlemicrosoft-apple-amazon
