
Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 171 (2020) 1439–1448
www.elsevier.com/locate/procedia

Third International Conference on Computing and Network Communications (CoCoNet’19)

Self-Managed Block Storage Scheduling for OpenStack-based Cloud

Akash A Malla*, Sumedha Shinde, Narayan D G, Mohammed Moin Mulla

School of Computer Science and Engineering, KLE Technological University, Hubballi, 580031, India

* Corresponding author. E-mail address: akashmalla000@gmail.com

Abstract

Software-defined storage (SDS) enables cloud providers to abstract storage resources from the underlying physical hardware by making storage resources programmable. It enhances the scalability and management of storage resources for better efficiency. The software-defined block storage component in OpenStack is referred to as cinder. The storage scheduling algorithm in OpenStack cinder is inefficient and one-dimensional, as it considers the capacity of storage disks as the only parameter to schedule a request. This compromises the QoS delivered to the customer. This paper presents the design of a self-managed block storage scheduling model. The proposed algorithm considers performance attributes such as the read and write IOPS of storage hosts and classifies hosts by generating a quality status using machine learning classifiers. Further, this status is used in conjunction with a best-fit algorithm to make the final scheduling decision. A private cloud test-bed with 5 cinder hosts is set up to test scheduler performance. Results clearly show that the proposed model outperforms the default scheduler by evenly distributing the volume requests across the cinder nodes. The proposed model also maintains the QoS by providing the best available storage backend for every volume request.

© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the Third International Conference on Computing and Network Communications (CoCoNet’19).

Keywords: SDS, OpenStack, Cinder, storage scheduling, classification, logistic regression, SVM, random forest, Fio.
1. Introduction

Cloud computing has emerged as one of the most significant technologies enabling organizations to host and deliver services over the internet. Cloud vendor enterprises can create public, private and hybrid clouds to provide services such as computation, storage and networking to users. Cloud customer enterprises can focus on production and not worry
about issues like scalability, availability, and cost. Cloud customers and vendors agree on a service level agreement (SLA) specifying the requirements [1]. It is often observed that a majority of vendors do not provide clear service level objectives, as the performance of services deviates from the standard behavior.

OpenStack is an open-source infrastructure-as-a-service (IaaS) platform used to create private and hybrid clouds. In OpenStack, a significant amount of research has been done on virtual machine (VM) scheduling [7][9] and storage scheduling, but there has not been much improvement, as the platform still uses a worst-fit algorithm by default to schedule requests. The block storage service in OpenStack (OpenStack cinder) allows users to request persistent volume blocks for their VMs; these volume blocks are created on capable hosts and then attached to the requesting VM. The default volume scheduling algorithm in OpenStack does not consider host performance, resulting in poor cinder host allocation and, in turn, reduced QoS. To achieve performance-based scheduling, storage properties such as read and write IOPS can be used to determine the performance of the hosts. The IOPS of a storage disk depends on the disk type (HDD or SSD) and disk health (the number of good blocks on the disk). If a well-performing host is assigned to the customer, the QoS increases, strengthening the relationship between customers and vendors. With the advances in machine learning (ML), various ML classifiers such as logistic regression, support vector machines and random forests can be used to predict the performance of backend hosts. Furthermore, prediction techniques such as the Markov model [10] can be used for classification.
In this paper, the Flexible I/O tester (Fio) is used to measure the IOPS of the cinder hosts. Observations show that the IOPS value fluctuates over time; to tackle this, machine learning is introduced to monitor host performance over time and conclude whether a host's performance is Good or Bad. We use this performance metric as an extra dimension in the best-fit algorithm, making it a Modified Best-Fit algorithm for scheduling volume requests.
The paper is organized as follows. Section 2 describes the inefficiency of the default scheduler in OpenStack and discusses related work on volume scheduling. Section 3 provides a detailed description of the proposed scheduling model. Section 4 describes the implementation of the proposed scheduler in OpenStack. In Section 5, we present the results and analysis of our implementation. Finally, Section 6 concludes the work and presents the future scope.

2. Related work
This section discusses the related work done in this area to improve the efficiency of the volume scheduler.
2.1 Resource scheduling in cloud
Singh et al. [2] presented a survey on resource scheduling in cloud computing, discussing various issues and challenges in resource scheduling. The survey gives a brief idea of performance-based storage scheduling. An elementary approach to the volume scheduling problem would be to take the performance factor (read and write IOPS) of the cinder nodes and incorporate it into the provisioning decision. IOPS is the input/output performance measurement of storage devices. Memory operations fall into two categories, sequential and random. Sequential operations access disk locations in a contiguous manner, whereas random operations access memory in a non-contiguous manner. Total IOPS is the total number of I/O operations per second.
Z. Yao et al. [3] proposed a new algorithm called multi-dimensional vector bin packing. The paper shows that the proposed algorithm provides better scheduling performance in a simulated environment. (i) The takeaway from this research is that it explored the I/O throughput, CPU usage and network bandwidth of cinder hosts as multiple dimensions for scheduling volume requests. Yao et al. [4] presented a similar paper on SLA-aware resource scheduling. The paper shows the poor performance of the default scheduler on incoming volume requests, and the solution also minimized the SLA violation rate. (ii) The takeaway from this paper is that, to reduce the SLA violation rate, the authors considered the IOPS of hosts in the scheduling decision. Ravandi et al. [5] and Ravandi and Papapanagiotou [6] illustrated ML algorithms for volume scheduling in OpenStack. These papers show that decision trees and Bayesian
networks are the two best machine learning algorithms for storage scheduling. (iii) The implementation shows the advantages of machine learning in selecting the optimal host from a host pool.
From (i), (ii) and (iii), we conclude that the IOPS of a backend host can be considered as a parameter for scheduling volume requests. A parameter such as network bandwidth is dropped, as we consider that it will not influence a good scheduling decision. Further, machine learning can be employed for the training and classification of hosts based on IOPS.
2.2 Scheduling in OpenStack

The OpenStack architecture [8] is made up of a series of interrelated open-source projects that deliver a massively scalable cloud operating system. Table 1 shows the seven major components constituting the infrastructure that handles orchestration, dashboards, messaging, bare-metal provisioning, etc.
Table 1. OpenStack components

Service          Software Component
Compute          Nova
Image            Glance
Object Storage   Swift
Dashboard        Horizon
Identity         Keystone
Network          Quantum
Volume Storage   Cinder

Persistent volume provisioning is the process of attaching a volume by identifying the most suitable host for the volume allocation. In the default scheduler, choosing a host is done in two steps.

i. Filtering – a capable host list is prepared by eliminating the hosts which cannot accommodate the volume request.
ii. Weighing – the host with the highest available capacity is chosen from the capable host list.

Once this process is done, the volume is created on the chosen host and attached to the VM that requested it. The major drawback is that the algorithm does not consider backend host performance for scheduling, which results in wrong host selection. The second drawback is that this solution does not take the service level agreement into account.
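For illustration, the sketch below captures this capacity-only filter-and-weigh behaviour in Python. It is not the actual Cinder code; the Host class, the host names and the free_capacity_gb field are assumptions made for the example (the later sketches in Section 4 reuse them).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    name: str
    free_capacity_gb: float

def filter_hosts(hosts: List[Host], request_gb: float) -> List[Host]:
    # Filtering: keep only hosts that can accommodate the requested volume.
    return [h for h in hosts if h.free_capacity_gb >= request_gb]

def weigh_hosts(capable: List[Host]) -> Optional[Host]:
    # Weighing: the default scheduler effectively picks the host with the
    # largest free capacity, ignoring performance entirely.
    return max(capable, key=lambda h: h.free_capacity_gb, default=None)

hosts = [Host("cinder1", 10), Host("cinder2", 15), Host("cinder3", 20),
         Host("cinder4", 30), Host("cinder5", 25)]
print(weigh_hosts(filter_hosts(hosts, 8)).name)   # -> cinder4, the largest host

Because the weigher always favours the host with the most free space, the largest hosts keep absorbing requests regardless of how well they perform, which is exactly the behaviour criticised above.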

3. Methodology
In this section, the proposed performance-aware storage scheduling model is discussed. The system model employs a machine learning module to determine host performance. Further, this performance is used to determine the optimal cinder host on which to provision a volume request. The proposed system model is divided into two major components:
i. The Machine learning module (MLM) periodically gathers performance features from the backend cinder nodes. Using these parameters, its job is to determine whether the performance of each node is Good or Bad.
ii. The Modified Best-Fit scheduling module (MBFSM) takes in the user requirements and performs initial filtering based on capacity. It then takes the performance of the backend nodes from the MLM and decides the ideal cinder node on which to allocate the volume.
As shown in Fig. 1, volume creation requests are gathered from various cloud customers at the controller node. In the controller node, the MBFSM filter discards hosts based on capacity and generates a filtered host list that is provided to the MBFSM weighing phase. In the weighing phase, the capacity and performance of the hosts are used to determine the scheduling decision. The decision is then used to create the volume on the appropriate host and attach it to the cloud customer's VM.

Fig. 1. Proposed System Model.

The MLM consists of two modules. (1) The performance parameter generator is responsible for generating training data: it periodically collects performance factors from the backend hosts at regular time intervals, predicts the performance of each host and appends the result to the training data. The prediction is based on a logistic regression model; other classification models were also tested, but their accuracy turned out to be lower, as explained in detail in Section 5. (2) The performance backend decision maker module generates a final performance status by combining the decisions over the complete dataset; this status is provided to the MBFSM. Fig. 2 contrasts the proposed host provisioning procedure with the default scheduler, showing the changes made in the filtering and weighing process. For an incoming volume request of 100 GB, the filtering process eliminates hosts based on the available capacity of each backend host. The weighing phase then considers both performance and the requested capacity to choose the capable host.

4. Implementation
In this section, we illustrate the experimental setup deployed in our private cloud. We also elucidate various
algorithms used to implement our proposed methodology.

Fig. 2. Host selection process in proposed scheduler.



4.1 Experimental setup


The scenario in Fig. 2 is deployed in a private cloud. The setup is described in Table 2.
Table 2. Resources used in system model deployment

Nodes          IP             Cinder volume   RAM    Disk Size
Cinder host 1  192.168.31.10  10 GB           4 GB   50 GB
Cinder host 2  192.168.31.12  15 GB           4 GB   50 GB
Cinder host 3  192.168.31.13  20 GB           4 GB   50 GB
Cinder host 4  192.168.31.14  30 GB           4 GB   50 GB
Cinder host 5  192.168.31.15  25 GB           4 GB   50 GB
Controller     192.168.31.2   -               4 GB   50 GB

4.2 Algorithm
The algorithm used in the deployment scenario is a composition of four sub-algorithms.
1. Performance parameter generation algorithm (Machine learning based).
2. Performance backend decision making algorithm.
3. Filtering phase algorithm.
4. Weighing phase: modified best-fit decision-making algorithm.

Algorithm 4.2.1 Performance Parameter Generation


Input: hostList.
Output: recorded data.
IrThreshold ← 500 // Initial read IOPS threshold
IwThreshold ← 500 // Initial write IOPS threshold
Itime ← 15 minutes // Initial training data generation period
tDataset [ ] // Training dataset
recordedData [ ] // Predicted data to be used in further steps
for each host in the hostList do:
    riops ← host.getbackendReadIO() // riops is read IOPS
    wiops ← host.getbackendWriteIO() // wiops is write IOPS
    if time is less than Itime minutes do:
        // for first Itime minutes generate initial training data
        if riops > IrThreshold and wiops > IwThreshold then do:
            P_backend ← "GOOD" // performance of backend host
        else do:
            P_backend ← "BAD"
        end if
        tDataset.append(hostId, riops, wiops, P_backend)
    else do:
        // ML prediction of performance after Itime
        row ← LRegression.predict.P_backend(riops, wiops, tDataset)
        recordedData.append(hostId, row)
    end if
end for

Algorithm 4.2.1, the first component of the MLM, generates the training data for the logistic regression model and predicts the performance of each backend host at that timestamp. Fio is used to measure the performance of the hosts. Since there is no initial training dataset, we gather data from the backend nodes for the first 15 minutes and consider a host's performance to be Good if both its read and write IOPS are greater than 500. After running Fio tests on the cinder hosts, the majority of IOPS values fell around 500, so this value was taken to segregate the hosts. These thresholds can be changed based on the cloud provider's requirements. Once the initial training data is generated, it is used as the training dataset by the logistic regression model, which then classifies the new incoming performance factors after the first 15 minutes. Each prediction is recorded and periodically appended to the training dataset. In our implementation, the following mixed read-write test was run each time the disk performance was measured; it provides detailed I/O statistics from which the read and write IOPS were extracted and written to a file. Other benchmark types, such as read-intensive and write-intensive tests, are also available. A job size of 500 MB was used as it was suitable for all the nodes, took little time to execute and produced reliable output.

sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=random_read_write.fio --bs=4k --iodepth=64 --size=500M \
    --readwrite=randrw --rwmixread=75 > file.txt
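A minimal Python sketch of how Algorithm 4.2.1 could be realised is given below. It runs the same Fio job, extracts the read and write IOPS, bootstraps the labelled dataset using the 500-IOPS thresholds for the first 15 minutes and then switches to logistic regression. The use of scikit-learn, the function and variable names, and the JSON field layout (which assumes Fio's --output-format=json output) are our assumptions rather than the original implementation.

import json
import subprocess
from sklearn.linear_model import LogisticRegression

def measure_iops(test_file="random_read_write.fio"):
    # Run the mixed random read/write Fio job and parse its JSON output.
    out = subprocess.run(
        ["fio", "--randrepeat=1", "--ioengine=libaio", "--direct=1",
         "--gtod_reduce=1", "--name=test", f"--filename={test_file}",
         "--bs=4k", "--iodepth=64", "--size=500M",
         "--readwrite=randrw", "--rwmixread=75", "--output-format=json"],
        capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    return job["read"]["iops"], job["write"]["iops"]

IR_THRESHOLD = IW_THRESHOLD = 500   # initial read/write IOPS thresholds
train_X, train_y = [], []           # training dataset (tDataset)
recorded = []                       # predictions consumed by Algorithm 4.2.2

def record_sample(host_id, riops, wiops, bootstrap_phase):
    if bootstrap_phase:
        # First 15 minutes: label samples with the static thresholds.
        label = "GOOD" if riops > IR_THRESHOLD and wiops > IW_THRESHOLD else "BAD"
        train_X.append([riops, wiops])
        train_y.append(label)
    else:
        # Afterwards: classify the new sample with logistic regression trained on tDataset.
        label = LogisticRegression().fit(train_X, train_y).predict([[riops, wiops]])[0]
    recorded.append((host_id, riops, wiops, label))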

Algorithm 4.2.2 is the second component of the MLM. It takes the recorded dataset from Algorithm 4.2.1 as input and generates the final decision on the volume backend performance. This is done by counting, for each host, the fraction of Good records in the recorded set. This fraction (level) is compared with a threshold of 70 percent; if the level surpasses the threshold, that backend host is termed Good, otherwise it is termed Bad.

Algorithm 4.2.2 Performance backend decision making.


Input: recorded dataset
Output: volume backend performance list [ ]
SetThreshold ← 0.7
status ← Null
hostLevelList ← recorded dataset group by hostId count "GOOD"
for each host in hostLevelList do:
    if host's level of "GOOD" > SetThreshold then do:
        status ← "GOOD"
        volumeBackendPerformanceList.append(hostId, status)
    else:
        status ← "BAD"
        volumeBackendPerformanceList.append(hostId, status)
    end if
end for
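A compact sketch of this decision step, reusing the recorded list and names from the previous illustrative sketch, could look as follows.

from collections import defaultdict

def backend_performance(recorded, threshold=0.7):
    # Count GOOD predictions per host and mark a host GOOD only if the
    # fraction of GOOD records exceeds the 70% threshold.
    counts = defaultdict(lambda: [0, 0])        # host_id -> [good, total]
    for host_id, _riops, _wiops, label in recorded:
        counts[host_id][1] += 1
        if label == "GOOD":
            counts[host_id][0] += 1
    return {host: ("GOOD" if good / total > threshold else "BAD")
            for host, (good, total) in counts.items()}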

Algorithm 4.2.3 is the pseudo code for filtering the hosts. It is the first component of the MBFSM. It retrieves the cloud customer's volume request and creates a filtered list of hosts that can satisfy the request; it is a capacity-based filtering process.

Algorithm 4.2.3 Filtering hosts

Input: customer volume request, hostList [ ]


Output: backend host filtered list (FilteredList [ ] )
for each host in the hostList do:
    capacity ← host.getBackendCapacity()
    if capacity > customer volume requirement then do:
        FilteredList.append(hostId, capacity)
    else:
        continue // ignore the host and carry on with the remaining hostList
    end if
end for
if FilteredList is null then do:
    print that the volume request cannot be satisfied
end if

Algorithm 4.2.4 is the second component of the MBFSM. It takes the filtered hosts from Algorithm 4.2.3, the customer volume request and the volume backend performance list from Algorithm 4.2.2 as inputs. Using these inputs, the algorithm first maps each host to its available capacity and backend performance. The mapped list is then sorted in ascending order of capacity, and the algorithm picks the smallest good-performing host that can satisfy the volume request. If the scheduler does not find a host with good performance, it chooses a host from the pool of bad-performing hosts using the plain best-fit algorithm. As performance is used as another dimension along with capacity to schedule the request, it is termed the modified best-fit scheduling algorithm.

Algorithm 4.2.4 Modified Best Fit algorithm.


Input: FilteredList [ ], customer volume requirement,
volumeBackendPerformanceList [ ]
Output: chosen optimal host to allocate the volume request.
flag ← false
for each node in FilteredList do:
    cinderHosts ← map node capacity to volumeBackendPerformance
end for
cinderHosts.sort.Ascending()
for each cHost in cinderHosts do:
    if cHost.capacity > customer volume requirement and
       cHost.volumeBackendPerformance equals "GOOD" then do:
        schedule the volume request on cHost
        flag ← true
        break
    end if
end for
if flag equals false then do:
    schedule the volume request on the first host in cinderHosts
end if
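A possible Python rendering of this modified best-fit step is shown below; it reuses the illustrative Host objects from the sketch in Section 2.2 and the status dictionary from the Algorithm 4.2.2 sketch, and is a sketch of the idea rather than the production scheduler code.

def modified_best_fit(filtered_hosts, request_gb, performance):
    # Sort the capacity-filtered hosts by free capacity (best fit) and pick the
    # smallest host whose MLM status is GOOD; otherwise fall back to plain
    # best fit over the remaining (BAD) hosts.
    candidates = sorted(filtered_hosts, key=lambda h: h.free_capacity_gb)
    for host in candidates:
        if host.free_capacity_gb >= request_gb and performance.get(host.name) == "GOOD":
            return host
    return candidates[0] if candidates else None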

5. Results and Discussions


In this section, we discuss the results obtained: (A) average IOPS of cinder hosts, (B) selection of the optimal classifier, (C) volume distribution on different cinder nodes, (D) percentage of good volumes allocated, and (E) mean good volume allocation.

Table 3. Prediction accuracy for 5 iterations.

Iteration   Logistic Regression   SVM     Random Forest
1           92.11                 84.21   89.47
2           90.48                 83.33   90.48
3           91.49                 87.23   89.36
4           92.16                 80.39   92.16
5           91.07                 85.71   87.50

Table 4. Test cases tested on the setup.

Test Cases    Input Volume Requests
Test Case 1   4, 8, 10, 12, 14
Test Case 2   16, 17, 18, 5, 7
Test Case 3   2, 2, 1, 1, 6, 10, 15, 15
Test Case 4   5, 6, 11, 12, 16
Test Case 5   1, 1, 1, 9, 9, 14, 14

5.1 Average IOPS of cinder hosts.


Fig. 3 shows the average IOPS of the hosts for the initial 100 records of data. It is evident from the graph that, although the setup has a homogeneous hardware configuration across nodes, the average read and write IOPS of the backends differ. These contrasting IOPS values would affect the performance of volume blocks, underscoring the need to use this feature in the volume placement process.

Fig. 3. IOPS of the various cinder nodes.

5.2 Selection of optimal classifier


Since classification is the prime factor in determining the performance of a cinder host at periodic time intervals, the selection of an effective predictive model is important. We trained several ML models with the initial training data and took the average accuracy of each model over 5 iterations. As shown in Fig. 4, the averaged accuracy of the logistic regression model turned out to be better than that of the other classifiers. Hence, we chose logistic regression for the further training of data.

Fig. 4. Classification accuracy of various classifiers.
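This comparison can be reproduced along the following lines. The sketch uses scikit-learn; train_X and train_y are the bootstrapped IOPS samples and labels from the generator sketch in Section 4.2, and the 5-fold cross-validation is an assumption standing in for the paper's 5 iterations.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Average accuracy of each candidate classifier on the labelled IOPS data,
# mirroring the comparison in Table 3 and Fig. 4.
models = {
    "Logistic Regression": LogisticRegression(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, np.array(train_X), np.array(train_y), cv=5)
    print(f"{name}: mean accuracy {100 * scores.mean():.2f}%")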

5.3 Volume distribution on different cinder nodes.


In order to check the volume distribution on hosts by both provisioning algorithms, we sampled random input test cases of volume requests, given in Table 4. The reason for considering 5 cases is to check whether the distribution behaviour holds for a variety of input requests. The observations made are represented in Fig. 5.

Fig. 5. Volume Distribution across hosts by Schedulers.

Consider test case 1: OpenStack's default scheduler (DS) in Fig. 5 places the majority of volume requests on the cinder hosts with the larger capacities (cinder hosts 4 and 5 in our case). The scheduler is unaware that the performance of hosts 4 and 5 is poor compared with cinder hosts 1, 2 and 3. Test cases 2, 3, 4 and 5 show a similar pattern of distribution. Two significant flaws of the default scheduler can be noticed: (1) uneven distribution of volume requests, and (2) no importance given to the performance of the cinder node, as most volume requests are scheduled on bad hosts. It is also noticeable in Fig. 5 that the logistic regression based proposed scheduler (PS) distributes the volume requests relatively evenly across the cinder nodes for all test input samples. Flaws (1) and (2) are rectified by providing better volume placement across hosts and scheduling the majority of block allocations on good cinder hosts (cinder hosts 1, 2 and 3).

5.4 Percentage no. of good volume allocated

For each test case in Table 4, the total percentage of good blocks allocated by both schedulers is represented in Fig. 6. For instance, in test case 1, one major difference can be observed: using the DS algorithm, only 20% of the requested volumes were provisioned to good cinder backends, whereas 80% of the volumes were allocated to bad cinder hosts. On the other hand, the PS algorithm scheduled 60% of the volume requests to good cinder backends. Similar behaviour can be observed in the rest of the test cases.

Fig. 6. No. of Good/Bad volumes allocated.
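The metric behind Fig. 6 is simply the share of requests placed on hosts whose MLM status is GOOD. A small illustrative helper is sketched below; the allocation list in the comment is a hypothetical example, not measured data.

def good_allocation_pct(allocations, performance):
    # allocations: host names chosen for the volume requests of one test case
    # performance: host name -> "GOOD"/"BAD" status from the MLM
    good = sum(performance.get(host) == "GOOD" for host in allocations)
    return 100.0 * good / len(allocations)

# e.g. good_allocation_pct(["cinder4", "cinder5", "cinder1", "cinder4", "cinder2"], status)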

5.5 Mean good volume allocation

Fig. 7 illustrates the comparison of mean good volume allocation by both schedulers over the 5 test cases. It can be concluded that the modified scheduler performs better than the default one, as its average is higher. The standard deviations (σ) for the two algorithms were 9.79 and 6.57, respectively. The lower σ in the case of the proposed scheduler indicates less spread in the data points, suggesting that it would perform similarly on future test inputs.

Fig. 7. Mean good volume allocation.

6. Conclusion and Future Scope

Efficient allocation of storage hosts with the least energy consumption in a cloud data center is an important research issue. This work presents a new method of provisioning volume requests, called the Self-Managed block storage scheduler, for OpenStack-based clouds. It describes a strategy for predicting a host's backend performance using machine learning; this performance is combined with the capacity available on the backend cinder nodes to improve the scheduling decision. The system model was built and deployed in a private cloud testbed and evaluated with volume requests. Observations clearly show that the logistic regression based scheduler outperforms the default scheduler in OpenStack by distributing the volumes across the cinder nodes and allocating volumes to good cinder hosts.

As future work, we plan to include the SLA as a scheduling parameter in the proposed scheduling model. Other performance factors, such as CPU utilization and network bandwidth, can also be incorporated into the scheduling decision.

References

[1] P. Parakh, D. G. Narayan, M. Moin Mulla and V. P. Baligar, (2018) "SLA-aware Virtual Machine Scheduling in OpenStack-based Private
Cloud," 2018 3rd International Conference on Computational Systems and Information Technology for Sustainable Solutions (CSITSS),
Bengaluru, India, 2018, pp. 259-264.
[2] Singh, Sukhpal, and Inderveer Chana (2016). "A survey on resource scheduling in cloud computing: Issues and challenges." Journal of grid
computing 14.2 (2016): 217-264.
[3] Z. Yao, I. Papapanagiotou and R. D. Callaway, (2015) "Multi-dimensional scheduling in cloud storage systems," 2015 IEEE International
Conference on Communications (ICC), London, 2015, pp. 395-400.
[4] Yao, Zhihao, Ioannis Papapanagiotou, and Robert D. Callaway. (2014) "SLA-aware resource scheduling for cloud storage." Cloud
Networking (CloudNet), 2014 IEEE 3rd International Conference on. IEEE, 2014.
[5] Ravandi, Babak, Ioannis Papapanagiotou, and Baijian Yang. (2016) "A black-box self-learning scheduler for cloud block storage
systems." Cloud Computing (CLOUD), 2016 IEEE 9th International Conference on. IEEE, 2016.
[6] Ravandi, Babak, and Ioannis Papapanagiotou. (2017) "A Self-Learning Scheduling in Cloud Software Defined Block Storage." Cloud
Computing (CLOUD), 2017 IEEE 10th International Conference on. IEEE, 2017.
[7] Corradi, Antonio, Mario Fanelli, and Luca Foschini. (2014) "VM consolidation: A real case based on OpenStack Cloud." Future
Generation Computer Systems 32 (2014): 118-127.
[8] Sheela, P. Sai, and Monika Choudhary. (2017) "Deploying an OpenStack cloud computing framework for university campus." In 2017
International Conference on Computing, Communication and Automation (ICCCA), pp. 819-824. IEEE, 2017.
[9] Zellner, Samuel N. (2008) "Methods, systems, and storage mediums for providing information storage services." U.S. Patent 7,334,090,
issued February 19, 2008.
[10] Melhem, Suhib Bani, Anjali Agarwal, Nishith Goel, and Marzia Zaman. (2018) "Markov prediction model for host load detection and VM
placement in live migration." IEEE Access 6 (2018): 7190-7205.
