
EXTENDING SCALABILITY OF M2M PLATFORMS WITH

FOG COMPUTING

ABSTRACT
As more and more M2M devices are connected to the Internet, the M2M platforms normally
deployed in the Cloud are increasingly overloaded with a large amount of data traffic. Though
more resources in the cloud may be allocated to alleviate such overloading issues, this research
proposes the alternative of utilizing Fog computing to extend the scalability of M2M platforms in
the cloud. The Fog is used not only to offload the over congested cloud but also to provide low
latency required by critical applications. Our first step is to migrate oneM2M, a global M2M
platform, to a Fog computing architecture in which the middle nodes of oneM2M are organized
into a highly scalable hierarchical container-based Fog nodes. A mechanism to dynamically
scale in/out the serving instances of the middle nodes in order to make the whole M2M platform
more scalable. The paper illustrates our system design and demonstrates our system scalability
capacity using an industrial IoT (IIoT) use case. Finally, we compare the performance of our
dynamic scaling mechanism with those based on a static and fixed pool of serving instances.
OBJECTIVE
To extend the scalability of M2M platforms in the cloud using Fog computing. The scalability of
the platform in the cloud has been investigated in our previous research effort, where the
resources in the cloud can be scaled up or down according to the cloud traffic load.

SYSTEM REQUIREMENTS
Hardware requirements:

Processor : Any processor above 500 MHz

RAM : 128 MB

Hard disk : 10 GB

Compact disk : 650 MB

Input device : Standard keyboard and mouse

Output device : VGA or high-resolution monitor

Software Requirements

Front End/GUI Tool : NS2

Operating System : Fedora

Script : TCL
EXISTING SYSTEM
 The bottle filler in this stage is equipped with an RFID reader and a liquid level sensor, and
in the process the bottle is filled with the liquid.
 The Fog will receive data from both the RFID reader and the liquid level sensor and
check whether the liquid level in the bottle is in a proper range.
 If not, the Fog will store this product’s RFID in a defective RFID list.
 The bottle labeler in this stage automatically labels each bottle. It is equipped with an
RFID reader and a contrast sensor. Both the product’s RFID and the contrast sensor data
are sent to the Fog.
 The Fog will check if the label is in the right position. If not, the product’s RFID will also
be stored in the defective product RFID list.
 The cartoning machine in this stage automatically collects and packages a fixed number
of bottles into a carton.
 This stage is equipped with an RFID reader to let the Fog read a product’s RFID before
packaging.
 The Fog will check whether the product is in the defective product RFID list. If yes, the
Fog will activate the robot arm to push the product off the conveyor.
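The Fog-side checks above can be sketched as follows; the function names, the acceptable fill range, and the data structures are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of the Fog-side quality checks in the three stages.
# All names (check_fill, check_label, DEFECTIVE_RFIDS) and the fill range
# are hypothetical; the document does not specify the real Fog logic.

DEFECTIVE_RFIDS = set()            # defective product RFID list kept by the Fog
LIQUID_MIN, LIQUID_MAX = 480, 520  # assumed acceptable fill range (ml)

def check_fill(rfid, liquid_level):
    """Bottle filler stage: flag bottles outside the proper fill range."""
    if not (LIQUID_MIN <= liquid_level <= LIQUID_MAX):
        DEFECTIVE_RFIDS.add(rfid)

def check_label(rfid, label_ok):
    """Bottle labeler stage: flag bottles whose label is misplaced."""
    if not label_ok:
        DEFECTIVE_RFIDS.add(rfid)

def check_before_packaging(rfid):
    """Cartoning stage: True means the robot arm should reject the bottle."""
    return rfid in DEFECTIVE_RFIDS
```
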

DISADVANTAGES

 By removing all defective products, only the flawless products are packaged for final
shipping.
 The liquid level range does not predict the exact value.
 Traffic congestion is high.
 The complexity of the scaling mechanism will also increase.

LITERATURE SURVEY

1. Title: Comparison of coarsening schemes for multilevel graph partitioning
Author: Cédric Chevalier
Year: 2017
Description
Graph partitioning is a well-known optimization problem of great interest in theoretical and
applied studies. Since the 1990s, many multilevel schemes have been introduced as a practical
tool to solve this problem. A multilevel algorithm may be viewed as a process of graph topology
learning at different scales in order to generate a better approximation for any approximation
method incorporated at the uncoarsening stage in the framework. In this work we compare two
multilevel frameworks based on the geometric and the algebraic multigrid schemes for the
partitioning problem.
Demerits:
The quality of coarse-level variables and equations is still a work in progress, and currently it is
not clear whether this relaxation plays an important role for the partitioning problem.

2. Title: Graph Neural Networks: A Review of Methods and Applications
Author: Jie Zhou
Year: 2019
Description
Many learning tasks require dealing with graph data, which contains rich relational information
among elements. Modeling physics systems, learning molecular fingerprints, predicting protein
interfaces, and classifying diseases all require a model that learns from graph inputs. In other
domains, such as learning from non-structural data like texts and images, reasoning on extracted
structures, like the dependency tree of a sentence or the scene graph of an image, is an important
research topic that also needs graph reasoning models. Graph neural networks (GNNs) are
connectionist models that capture the dependence of graphs via message passing between the
nodes of graphs.
Demerits:
It is hard to define localized convolutional filters and pooling operators.

3. Title: Fast Influence-based Coarsening for Large Networks
Author: Manish Purohit
Year: 2014
Description
Proposes a novel graph coarsening problem: find a succinct representation of any graph while
preserving key characteristics of diffusion processes on that graph. The authors then provide
coarseNet, a fast and effective near-linear-time (in nodes and edges) algorithm for this problem.
Using extensive experiments on multiple real datasets, they demonstrate the quality and
scalability of coarseNet, reducing the graph by 90% in some cases without much loss of
information.
Demerits:
The approach relies on the assumption that in any real network most edges and vertices are not
important.

4. Title: Community Discovery: Simple and Scalable Approaches
Author: Yiye Ruan
Year: 2015
Description
The increasing size and complexity of online social networks have brought distinct challenges to
the task of community discovery. A community discovery algorithm needs to be efficient, not
taking a prohibitive amount of time to finish. The algorithm should also be scalable, capable of
handling large networks containing billions of edges or even more.
Demerits:
The order of node selection may affect computation time but not the convergence of modularity.

5. Title: Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are
Author: Cisco (San Jose, CA)
Year: 2015
Description:
Analyzes the most time-sensitive data at the network edge, close to where it is generated instead
of sending vast amounts of IoT data to the cloud. Acts on IoT data in milliseconds, based on
policy. Sends selected data to the cloud for historical analysis and longer-term storage.
Demerits:
It is not practical to transport vast amounts of data from thousands or hundreds of thousands of
edge devices to the cloud.
PROPOSED SYSTEM
 Our proposed approach gives the system good elasticity under different traffic loads and
doesn’t create too much overhead with an additional scaling server.
 The scaling server automates the scale-in and scale-out procedure.
 Compared to the static approach, our proposed approach allows the system capacity to
dynamically grow and shrink as needed.
 We leverage virtualization enabled by container technology to create highly scalable
Fog-based M2M systems.
 Compared to virtual machines, containers provide a lightweight virtual environment and
thus are more suitable for achieving fast response in the Fog network.
 We introduce two approaches to extend the scalability of M2M systems in the Fog.
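The scale-in/scale-out behaviour described above can be sketched as a simple control loop; the thresholds and the container-control callbacks (spawn_instance, stop_instance) are hypothetical placeholders for the real container operations:

```python
# Hedged sketch of the decision loop run by the scaling server.
# Threshold values and the spawn/stop callbacks are assumptions for
# illustration; they stand in for real container create/remove calls.

SCALE_OUT_THRESHOLD = 0.8  # average load above which one instance is added
SCALE_IN_THRESHOLD = 0.3   # average load below which one instance is removed
MIN_INSTANCES = 1          # never shrink the pool below this size

def autoscale(instances, avg_load, spawn_instance, stop_instance):
    """Grow or shrink the pool of Middle Node containers as needed."""
    if avg_load > SCALE_OUT_THRESHOLD:
        instances.append(spawn_instance())      # scale out
    elif avg_load < SCALE_IN_THRESHOLD and len(instances) > MIN_INSTANCES:
        stop_instance(instances.pop())          # scale in
    return instances
```
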

ADVANTAGES
 It highlights eight essential pillars supporting the OpenFog architecture: security,
scalability, openness, autonomy, RAS (Reliability, Availability, and Serviceability),
agility, hierarchy, and programmability.
 It improves scalability.
 It avoids traffic congestion.
ALGORITHM EXPLANATION
Proposed Algorithm

A client-side load balancing system that leverages the strengths of cloud components and
overcomes the above-mentioned disadvantages.

Signature Driven Load Management (SigLM) using Cloud

The above algorithm works by capturing the system's signature, such as available RAM, current
CPU bandwidth, and other resources. Once captured, each value is compared with a default
threshold value, and accordingly the load (incoming requests) is shifted to a target node machine
using the Dynamic Time Warping (DTW) technique. Dynamic Time Warping takes the source
node given by the SigLM algorithm and performs calculations to predict the target node to
which the load should be shifted.

This algorithm yields better results than conventional algorithms, with the following advantages:

 Capture of the resource signature.
 Scheduling by comparing the signature of each server.
 30%-80% better performance than existing approaches.
 Scalable and efficient, with negligible overhead.
 Dynamic Time Warping (DTW) for selection of the target node at runtime.
 Client-side operation, so load balancing is performed before requests hit the server.
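As a rough illustration of the SigLM idea, the sketch below computes a classic Dynamic Time Warping distance between two one-dimensional resource signatures and picks the candidate node whose signature is closest to a reference profile; the signatures, node names, and selection rule are assumptions for illustration only:

```python
# Sketch: DTW distance between resource signatures, used to choose a
# target node for load shifting. The reference profile and candidate
# signatures are hypothetical examples.

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW between two 1-D signatures."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def pick_target(reference, candidates):
    """Pick the (name, signature) candidate whose load signature is
    closest in DTW distance to the reference profile (e.g. a lightly
    loaded node's profile)."""
    return min(candidates, key=lambda c: dtw_distance(reference, c[1]))[0]
```
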
SYSTEM ARCHITECTURE

Figure: System architecture. A Service Discovery Server in the Cloud registers the Fog
components. Each Fog node runs a Fog Manager, the M2M Middle Node, and a Load Balancer;
Fog Worker Nodes host the scaling process across first-level and second-level containers, which
serve the users.

MODULE LIST
1. Cloud Setup
2. Fog Manager
3. Resource Monitoring
4. Load Balancing
5. Performance Analysis
MODULE DESCRIPTION
Cloud Setup
The Cloud and the things must provide scalable capacity, i.e., be able to flexibly and
dynamically scale in/out the serving instances across Fog nodes to match the incoming traffic.

Fog Manager
Fog can be seen as an extension of the cloud. In the Fog network, the topology could be
hierarchical, which means the data from the end-users or the devices can be processed at several
levels; each level carrying out specific data filtering and analytics and deciding whether to pass
to the next level for further Fog/Cloud processing.
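A minimal sketch of this hierarchical processing, assuming each Fog level is modelled as a pair of hypothetical analyze/forward functions:

```python
# Sketch of hierarchical Fog processing: each level filters/analyzes the
# data and decides whether to pass it up for further Fog/Cloud processing.
# The per-level analyze and should_forward predicates are assumptions.

def process_through_levels(reading, levels):
    """Apply each level's (analyze, should_forward) pair in turn.
    Returns (level index where processing stopped, result at that level);
    reaching len(levels) means the data was escalated to the Cloud."""
    result = reading
    for depth, (analyze, should_forward) in enumerate(levels):
        result = analyze(result)
        if not should_forward(result):
            return depth, result        # handled locally at this Fog level
    return len(levels), result          # escalated all the way to the Cloud
```
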

Resource Monitoring
This module performs resource discovery and subscribes newly discovered resources of interest
to the first-level MN-CSE. In addition, it keeps monitoring the sensor data in the first-level
MN-CSE's resource tree. By performing data analytics, this AE can detect abnormal events and
notify the second-level MN-AE.
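A minimal sketch of the monitoring AE's anomaly check, assuming a simple expected band for sensor values and a hypothetical notify() callback toward the second-level MN-AE:

```python
# Sketch of the monitoring AE: watch sensor values in the first-level
# MN-CSE resource tree and notify the second level when a reading falls
# outside an expected band. Bounds and notify() are assumptions.

def monitor(readings, low, high, notify):
    """Return the abnormal readings, calling notify() for each one."""
    abnormal = [r for r in readings if not (low <= r <= high)]
    for r in abnormal:
        notify(r)  # e.g. send a oneM2M notification to the second-level MN-AE
    return abnormal
```
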

Load Balancing
The load balancer distributes the traffic to all the serving instances. It is deployed between the
IoT/M2M devices and the Fog cluster to distribute all incoming requests from the IoT/M2M
devices to the first-level MN-AE/MN-CSE according to a chosen load-balancing policy.
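As one example of a load-balancing policy the balancer could use, the sketch below implements simple round-robin dispatch over the first-level serving instances; the instance names are placeholders:

```python
# Sketch of a round-robin dispatch policy for the load balancer in front
# of the first-level MN-AE/MN-CSE instances. Instance names are
# placeholders; other policies (least-loaded, hash-based) could be used.
import itertools

def round_robin(instances):
    """Return a dispatch function that yields the next instance per request."""
    cycle = itertools.cycle(instances)
    return lambda request: next(cycle)
```

For example, a balancer built with `round_robin(["mn1", "mn2"])` alternates incoming requests between the two instances.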
Performance Analysis
In order to evaluate the performance of our proposed architecture, we compare it to a static
approach with a fixed pool of serving instances. The scaling server automates the scale-in and
scale-out procedure, and it can be enhanced to scale in/out the oneM2M instances across
different levels of the Fog hierarchy. We compare the performance of our dynamic scaling
mechanism with that of the static, fixed pool of serving instances.

PROBLEM DEFINITION
Fog computing intends to move the cloud capacity (compute, network, and storage) to the
continuum from the cloud to the things. Due to its locality and proximity, Fog computing, when
compared to cloud computing, can provide lower latency and quicker response. Therefore, Fog
can be seen as an extension of the cloud. In the Fog network, the topology could be hierarchical,
which means the data from the end-users or the devices can be processed at several levels; each
level carrying out specific data filtering and analytics and deciding whether to pass to the next
level for further Fog/Cloud processing.
CONCLUSION
• The scalability research related to the Fog is still in its infancy. We proposed a highly
scalable hierarchical container-based Fog architecture with an auto-scaling mechanism.
We containerize the oneM2M Middle Nodes using NS2. Our approach leverages Fog
networking to extend the scalability of the oneM2M platform. The scaling server
automates the scale-in and scale-out procedure. Compared to the static approach, our
proposed approach allows the system capacity to dynamically grow and shrink as
needed.

FUTURE WORK
• In the future, we plan to add more layers to our Fog hierarchy. As the hierarchy increases,
the complexity of the scaling mechanism will also increase. In this paper, our scaling
server is only able to scale in/out the oneM2M Middle Node instances in the first level of
the hierarchy. In the next stage, we aim to enhance our scaling server to be able to scale
in/out the oneM2M instances across different levels of the Fog hierarchy. For the Agility
pillar of the OpenFog architecture, it is also necessary to allow Fog nodes in the same
level of the hierarchy to communicate with each other in order to reduce the hop counts
from the originator to the target Fog node. We plan to explore such eastbound/westbound
interfaces in the future.

