(Leadership sequence shown: Epoch 1: A B C A B C ...; Epoch 2: A C A C A B A C A C A B ...)
Figure 2: The leadership distribution method of EdgeCons. The edge computing system in the example consists of three edge nodes (A, B and C) and a backend cloud. The first two epochs are shown in the example.
executed from Phase 2; Phase 1 is considered already executed by the leader for the initial round.

• Unlike Mencius, which distributes the leadership of the Paxos instances evenly among all the nodes, EdgeCons distributes the leadership dynamically according to the running history of the system. More specifically, EdgeCons counts the number of effective Paxos instances under the leadership of each edge node in the previous N_epoch epochs (N_epoch is a predefined positive value), denoted by N_ef,i for edge node i (i = 1, ..., n), and distributes the leadership for the next epoch accordingly, proportional to the N_ef,i values. Note that a Paxos instance is considered effective if and only if it has made the system agree on a proposed value.

• To distribute the leadership for the next epoch, EdgeCons arranges the upcoming Paxos instances so that the instances assigned to each edge node are scattered as evenly across the epoch as possible, i.e., they are separated by the same or similar numbers of Paxos instances assigned to the others. Because this algorithm is deterministic, each edge node can learn the distribution of leadership for the upcoming epoch independently, without interacting with the other edge nodes.

• EdgeCons also involves a backend cloud in the algorithm. The cloud resides at the core of the network, and is therefore less efficient than the edge nodes in exchanging event messages. Nevertheless, it is far more reliable than the edge nodes. We assume that the cloud never fails or becomes network-partitioned, so it is always reachable by the edge nodes.

• In addition to the Paxos instances assigned to the edge nodes, there are also quasi-Paxos instances assigned to the cloud in each epoch, which work as follows. When an edge node has failed N_fail times to make the system agree on a value it intends to propose (N_fail is a predefined positive value), it transmits the value to the cloud. The cloud collects all such values, orders them by their arrival time, and announces them to the edge nodes in the next quasi-Paxos instance.

• When a quasi-Paxos instance is over, the cloud initiates the next quasi-Paxos instance by sending the values it has collected between the two quasi-Paxos instances to all the edge nodes. If no value has been collected, the cloud sends a SKIP message to the edge nodes instead. The edge nodes must accept the values (or the SKIP message) and reply with an OKAY message to the cloud. Having collected OKAY messages from a majority of the edge nodes, the cloud considers the current quasi-Paxos instance over, and starts the next one.

• The edge nodes cannot skip the quasi-Paxos instances. In other words, they cannot execute the Paxos instances posterior to a quasi-Paxos instance they have encountered until they have received either the ordered values or a SKIP message from the cloud.

The consensus instances assigned to the cloud are called quasi-Paxos instances because they are similar at first glance to, but fundamentally different from, the Paxos instances in Multi-Paxos. Being highly reliable, the cloud could actually guarantee consensus without collecting OKAY messages from a majority of the edge nodes; we design EdgeCons so that the cloud collects the OKAY messages only for performance considerations. Moreover, the cloud is not included in any quorum, regardless of whether the consensus instance is a Paxos one or a quasi-Paxos one. The main purpose of involving the backend cloud, as can be seen, is to help the edge nodes propose values when their Paxos instances have been frequently skipped by the others, for whatever reason. This guarantees the progress of the consensus process, and sets a logical "upper bound" on the latency perceived by the clients. Note that this statement does not violate the FLP result [9], because EdgeCons makes a different assumption by involving the cloud. The cloud is merely an arbitrator, not a replica like the edge nodes, because it does not maintain a local state for the event log. Finally, in EdgeCons, the leader edge node proposes at most one value in a Paxos instance, but the cloud can propose potentially many values in a quasi-Paxos instance. Since the values are small in size, proposing many values makes little difference in transmission time compared with proposing one value.

... skip a Paxos instance after waiting for 80 ms, and those in E-Paxos will try to urge the execution of a consensus instance after waiting for 80 ms. With these settings, we compare the user-perceived latency under different consensus algorithms. Figure 3 depicts the results.
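The proportional, evenly-scattered leadership distribution described above can be sketched as follows. The paper does not give the concrete scheduling algorithm, so this is only one plausible realization: a smooth weighted round-robin, which is deterministic (so every edge node can compute the same schedule independently), assigns slots proportionally to the N_ef,i weights, and spreads each node's slots evenly across the epoch. The function name and its arguments are illustrative, not from the paper.

```python
def epoch_schedule(n_ef, epoch_len):
    """Deterministically assign Paxos-instance leadership for the next
    epoch, proportional to each node's effective-instance count n_ef[i],
    scattering each node's slots as evenly as possible.

    Smooth weighted round-robin: each slot, every node gains credit equal
    to its weight; the node with the most credit leads the slot and pays
    back the total weight."""
    total = sum(n_ef.values())
    credit = {node: 0 for node in n_ef}
    schedule = []
    for _ in range(epoch_len):
        for node, weight in n_ef.items():
            credit[node] += weight
        # iterate nodes in sorted order so ties break deterministically
        leader = max(sorted(credit), key=lambda node: credit[node])
        credit[leader] -= total
        schedule.append(leader)
    return schedule
```

For example, with N_ef = {A: 2, B: 1, C: 1} and an epoch of 4 Paxos instances, the schedule is A, B, C, A: node A gets twice as many slots, and its two slots are separated by the others' slots rather than adjacent.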
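The cloud's role between quasi-Paxos instances can likewise be sketched. This is a minimal, message-passing-free sketch under the stated assumptions (values forwarded after N_fail failures, ordered by arrival time, SKIP when none were collected, instance ending on a majority of OKAYs); the class and method names are hypothetical, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class QuasiPaxosCloud:
    """Sketch of the cloud's bookkeeping for quasi-Paxos instances.
    Networking is omitted: callers feed in forwarded values and OKAY
    counts. The cloud itself belongs to no quorum."""
    n_edge_nodes: int
    pending: list = field(default_factory=list)  # (arrival_time, value)

    def forward(self, arrival_time, value):
        # an edge node forwards a value after N_fail failed attempts
        self.pending.append((arrival_time, value))

    def next_instance_payload(self):
        # announce the collected values ordered by arrival time;
        # if nothing was collected in-between, announce SKIP instead
        if not self.pending:
            return "SKIP"
        payload = [value for _, value in sorted(self.pending)]
        self.pending.clear()
        return payload

    def instance_over(self, okay_count):
        # the instance ends once a majority of edge nodes replied OKAY
        return okay_count > self.n_edge_nodes // 2
```

With three edge nodes, two OKAY replies end the instance; note that waiting for the majority is, per the text above, a performance consideration rather than a safety requirement.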
[3] Biely, M., Milosevic, Z., Santos, N., and Schiper, A. S-Paxos: Offloading the leader for high throughput state machine replication. In Proceedings of the 2012 IEEE 31st Symposium on Reliable Distributed Systems (2012), SRDS '12, pp. 111–120.
[4] Bonomi, F., Milito, R., Zhu, J., and Addepalli, S. Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (2012), MCC '12, pp. 13–16.
[5] Bort, J. Google apologizes for cloud outage. http://www.businessinsider.com/google-apologizes-for-cloud-outage-2016-4, 2016.
[6] Burger, V., Pajo, J. F., Sanchez, O. R., Seufert, M., Schwartz, C., Wamser, F., Davoli, F., and Tran-Gia, P. Load dynamics of a multiplayer online battle arena and simulative assessment of edge server placements. In Proceedings of the 7th International Conference on Multimedia Systems (2016), MMSys '16, pp. 17:1–17:9.
[7] Chen, Z., Hu, W., Wang, J., Zhao, S., Amos, B., Wu, G., Ha, K., Elgazzar, K., Pillai, P., Klatzky, R., Siewiorek, D., and Satyanarayanan, M. An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing (2017), SEC '17, pp. 14:1–14:14.
[8] Cui, H., Gu, R., Liu, C., Chen, T., and Yang, J. Paxos made transparent. In Proceedings of the 25th Symposium on Operating Systems Principles (2015), SOSP '15, pp. 105–120.
[9] Fischer, M. J., Lynch, N. A., and Paterson, M. S. Impossibility of distributed consensus with one faulty process. J. ACM 32, 2 (1985), 374–382.
[10] Ha, K., Abe, Y., Eiszler, T., Chen, Z., Hu, W., Amos, B., Upadhyaya, R., Pillai, P., and Satyanarayanan, M. You can teach elephants to dance: Agile VM handoff for edge computing. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing (2017), SEC '17, pp. 12:1–12:14.
[11] Hao, Z., and Li, Q. EdgeStore: Integrating edge computing into cloud-based storage systems: Poster abstract. In Proceedings of 2016 IEEE/ACM Symposium on Edge Computing (2016), SEC '16, pp. 115–116.
[12] Hao, Z., Novak, E., Yi, S., and Li, Q. Challenges and software architecture for fog computing. IEEE Internet Computing 21, 2 (2017), 44–53.
[13] Hao, Z., Tang, Y., Zhang, Y., Novak, E., Carter, N., and Li, Q. SMOC: A secure mobile cloud computing platform. In Proceedings of 2015 IEEE International Conference on Computer Communications (2015), INFOCOM '15, pp. 2668–2676.
[14] Jang, M., Lee, H., Schwan, K., and Bhardwaj, K. SOUL: An edge-cloud system for mobile applications in a sensor-rich world. In Proceedings of 2016 IEEE/ACM Symposium on Edge Computing (2016), SEC '16, pp. 155–167.
[19] Lamport, L. Paxos made simple. ACM SIGACT News 32, 4 (2001), 18–25.
[20] Lamport, L. Generalized consensus and Paxos. Tech. Rep. MSR-TR-2005-33, Microsoft Research, 2005.
[21] Lamport, L. Fast Paxos. Distributed Computing 19, 2 (2006), 79–103.
[22] Lee, K., Flinn, J., and Noble, B. D. Gremlin: Scheduling interactions in vehicular computing. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing (2017), SEC '17, pp. 4:1–4:13.
[23] Ma, L., Yi, S., and Li, Q. Efficient service handoff across edge servers via Docker container migration. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing (2017), SEC '17, pp. 11:1–11:13.
[24] Mao, Y., Junqueira, F. P., and Marzullo, K. Mencius: Building efficient replicated state machines for WANs. In Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation (2008), OSDI '08, pp. 369–384.
[25] Moraru, I., Andersen, D. G., and Kaminsky, M. There is more consensus in egalitarian parliaments. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles (2013), SOSP '13, pp. 358–372.
[26] Mortazavi, S. H., Salehe, M., Gomes, C. S., Phillips, C., and de Lara, E. CloudPath: A multi-tier cloud computing framework. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing (2017), SEC '17, pp. 20:1–20:13.
[27] Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, 2008.
[28] Nastic, S., Truong, H.-L., and Dustdar, S. A middleware infrastructure for utility-based provisioning of IoT cloud systems. In Proceedings of 2016 IEEE/ACM Symposium on Edge Computing (2016), SEC '16, pp. 28–40.
[29] Oki, B. M., and Liskov, B. H. Viewstamped replication: A new primary copy method to support highly-available distributed systems. In Proceedings of the Seventh Annual ACM Symposium on Principles of Distributed Computing (1988), PODC '88, pp. 8–17.
[30] Ongaro, D., and Ousterhout, J. In search of an understandable consensus algorithm. In Proceedings of the 2014 USENIX Annual Technical Conference (2014), USENIX ATC '14, pp. 305–320.
[31] Patel, M., Naughton, B., Chan, C., Sprecher, N., Abeta, S., Neal, A., et al. Mobile-edge computing introductory technical white paper. White Paper, Mobile-Edge Computing (MEC) Industry Initiative (2014).
[32] Satyanarayanan, M. The emergence of edge computing. Computer 50, 1 (2017), 30–39.
[33] Satyanarayanan, M., Bahl, P., Caceres, R., and Davies, N. The case for VM-based cloudlets in mobile computing. IEEE Pervasive Computing 8, 4 (2009), 14–23.
[34] Satyanarayanan, M., Schuster, R., Ebling, M., Fettweis, G., Flinck, H., Joshi, K., and Sabnani, K. An open ecosystem for mobile-cloud convergence. IEEE Communications Magazine 53, 3 (2015), 63–70.
[35] Shi, W., Cao, J., Zhang, Q., Li, Y., and Xu, L. Edge computing: Vision and challenges. IEEE Internet of Things Journal 3, 5 (2016), 637–646.
[36] Shi, W., and Dustdar, S. The promise of edge computing. Computer 49, 5 (2016), 78–81.
[37] Triggs, R. How far we've come: A look at smartphone performance over the past 7 years. https://www.androidauthority.com/smartphone-performance-improvements-timeline-626109/, 2015.
[38] Wei, W., Gao, H. T., Xu, F., and Li, Q. Fast Mencius: Mencius with low commit latency. In Proceedings of 2013 IEEE International Conference on Computer Communications (2013), INFOCOM '13, pp. 881–889.
[39] Wood, G. Ethereum: A secure decentralised generalised transaction ledger. http://gavwood.com/paper.pdf, 2014.
[40] Xiao, Y., and Zhu, C. Vehicular fog computing: Vision and challenges. In Proceedings of 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (2017), PerCom Workshops '17, pp. 6–9.
[41] Yi, S., Hao, Z., Qin, Z., and Li, Q. Fog computing: Platform and applications. In Proceedings of the Third IEEE Workshop on Hot Topics in Web Systems and Technologies (2015), HotWeb '15, pp. 73–78.
[42] Yi, S., Hao, Z., Zhang, Q., Zhang, Q., Shi, W., and Li, Q. LAVEA: Latency-aware video analytics on edge computing platform. In Proceedings of the Second ACM/IEEE Symposium on Edge Computing (2017), SEC '17, pp. 15:1–15:13.
[43] Yi, S., Li, C., and Li, Q. A survey of fog computing: Concepts, applications and issues. In Proceedings of the 2015 Workshop on Mobile Big Data (2015), MoBiData '15, pp. 37–42.
[44] Yi, S., Qin, Z., and Li, Q. Security and privacy issues of fog computing: A survey. In Proceedings of the 10th International Conference on Wireless Algorithms, Systems, and Applications (2015), WASA '15, pp. 685–695.
[45] Zhang, I., Sharma, N. K., Szekeres, A., Krishnamurthy, A., and Ports, D. R. K. Building consistent transactions with inconsistent replication. In Proceedings of the 25th Symposium on Operating Systems Principles (2015), SOSP '15, pp. 263–278.
[46] Zhao, M., and Figueiredo, R. J. Application-tailored cache consistency for wide-area file systems. In Proceedings of the 26th IEEE International Conference on Distributed Computing Systems (2006), ICDCS '06, pp. 41–41.