
“PRIORITY WEIGHTED ROUND ROBIN ALGORITHM FOR LOAD

BALANCING IN THE CLOUD”

A Thesis

by
SADISKUMAR VIVEKANANDHAN

B. TECH, Anna University, 2009

Submitted in Partial Fulfillment of the Requirements for the Degree of

MASTER OF SCIENCE
in

COMPUTER SCIENCE

Texas A&M University-Corpus Christi


Corpus Christi, Texas
December 2017
© SADISKUMAR VIVEKANANDHAN

All Rights Reserved


“PRIORITY WEIGHTED ROUND ROBIN ALGORITHM FOR LOAD
BALANCING IN THE CLOUD”

A Thesis

by
SADISKUMAR VIVEKANANDHAN

This thesis meets the standards for scope and quality of


Texas A&M University-Corpus Christi and is hereby approved.

Dr. Ajay K. Katangur
Chair

Dr. Dulal C. Kar
Committee Member

Dr. Longzhuang Li
Committee Member

December 2017
ABSTRACT

Cloud computing, which helps in sharing resources through networks, has become one of the
most widely used technologies in recent years. Vast numbers of organizations are moving to the
cloud since it is more cost-effective and easy to maintain. An increase in the number of consumers
using the cloud, however, results in increased traffic, which leads to the problem of load
balancing. Numerous algorithms have been proposed and implemented to handle these loads in
different ways. Various dynamic algorithms, which work better than static algorithms, are used by
Amazon, Windows Azure, Rackspace, HP, GoGrid, etc. [16]. The performance of these dynamic
algorithms is evaluated using different parameters, such as response time, throughput, utilization,
and efficiency. Each algorithm outperforms the others on different parameters. The weighted
round-robin algorithm is one of the most widely used load balancing algorithms. The proposed
algorithm is an improvement of the weighted round-robin algorithm, which considers the priority
of every task before assigning the tasks to different virtual machines (VMs). Usually, the priority
of tasks in the weighted round-robin algorithm is used to sort incoming tasks from high to low
priority. The proposed algorithm uses the priority of tasks to decide to which VMs the tasks should
be assigned dynamically. The same process is used to migrate the tasks from overloaded VMs to
under-loaded VMs. The simulations are conducted using CloudSim by varying cloud resources
such as data centers, hosts, VMs and different cloudlets for performance analysis. Simulation
results show that the proposed algorithm performs equivalent to the dynamic weighted round robin
algorithm in all the QoS factors, but it shows significant improvement in handling high-priority
tasks.

TABLE OF CONTENTS
CONTENTS PAGE
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
TABLE OF CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
CHAPTER I: INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Cloud Computing Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Software as a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Platform as a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Infrastructure as a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
CHAPTER II: RELATED WORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
CHAPTER III: SYSTEM MODEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.1 Priority Weighted Round Robin Algorithm . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Waiting Time Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Resource Load Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
CHAPTER IV: SYSTEM IMPLEMENTATION . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1 Hardware and Software Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 CloudSim Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Primary Components of CloudSim . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.3 Implementation of Round-Robin Algorithm . . . . . . . . . . . . . . . . . . . . . 17
4.4 Implementation of Weighted Round-Robin Algorithm . . . . . . . . . . . . . . . . 18
4.5 Implementation of Dynamic Weighted Round-Robin Algorithm . . . . . . . . . . . 18
4.6 Implementation of Priority Weighted Round-Robin Algorithm . . . . . . . . . . . 19
CHAPTER V: RESULTS AND EVALUATION . . . . . . . . . . . . . . . . . . . . . . . . 21
5.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Execution Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

5.3 Total Completion Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.4 Total Waiting Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.5 Total Turnaround Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.6 Average Execution Time of a Cloudlet . . . . . . . . . . . . . . . . . . . . . . . . 24
5.7 Execution Time of Priority Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.8 Experiment Results and Performance Analysis . . . . . . . . . . . . . . . . . . . . 26
CHAPTER VI: CONCLUSION AND FUTURE WORK . . . . . . . . . . . . . . . . . . . 36
6.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
APPENDIX A: STEPS TO CREATE SIMULATION ENVIRONMENT IN NETBEANS . . 39
APPENDIX B: STEPS TO IMPLEMENT ALGORITHMS USING CLOUDSIM . . . . . . 44

LIST OF FIGURES
FIGURES PAGE
1.1 Cloud Computing Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.2 Scheduler and Load Balancer of a Cloud Network . . . . . . . . . . . . . . . . . . . . 9
4.3 CloudSim Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
5.4 Execution Cost of RR, WRR, DWRR and PWRR . . . . . . . . . . . . . . . . . . . . 27
5.5 Total Completion Time (in ms) of RR, WRR, DWRR and PWRR . . . . . . . . . . . 28
5.6 Total Waiting Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR . . . . . . 29
5.7 Total Turnaround Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR . . . 31
5.8 Average Execution Time of a Cloudlet (in ms) of RR, WRR, DWRR and PWRR . . . 32
5.9 Completion Time of Level 1 Cloudlets (in ms) of RR, WRR, DWRR and PWRR . . . 33
5.10 Completion Time of Level 2 Cloudlets (in ms) of RR, WRR, DWRR and PWRR . . . 34
5.11 Completion Time of Level 5 Cloudlets (in ms) of RR, WRR, DWRR and PWRR . . . 35
7.12 CloudSim Files and Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
7.13 Creating New Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.14 Assigning Project Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.15 Project Source Folder. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.16 Adding JAR Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.17 Sample Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

LIST OF TABLES
TABLES PAGE
5.1 Data Center Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.2 Configurations of Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.3 Configurations of VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.4 Configurations of Cloudlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5 RR Performance with different VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.6 DWRR Performance with different VMs . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.7 PWRR Performance with different VMs . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.8 Execution Cost of RR, WRR, DWRR and PWRR . . . . . . . . . . . . . . . . . . . . 27
5.9 Total Completion Time (in ms) of RR, WRR, DWRR and PWRR . . . . . . . . . . . 28
5.10 Total Waiting Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR . . . . . . 29
5.11 Total Turnaround Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR . . . 30
5.12 Average Execution Time of a Cloudlet (in ms) of RR, WRR, DWRR and PWRR . . . 32
5.13 Completion Time of Level 1 Cloudlets (in ms) of RR, WRR, DWRR and PWRR . . . 33
5.14 Completion Time of Level 2 Cloudlets (in ms) of RR, WRR, DWRR and PWRR . . . 34
5.15 Completion Time of Level 5 Cloudlets (in ms) of RR, WRR, DWRR and PWRR . . . 35

CHAPTER I: INTRODUCTION

Cloud computing is a technology that helps to provide services through the internet. SaaS
(Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service) are the three
types of services provided by the cloud [19]. These services provide resources, such as memory, in-
frastructure, software, virtual machines, and so on, through the internet. This has encouraged many
organizations to move to the cloud since they do not need to build resources on their own; instead,
they can consume the resources through the network. This not only reduces costs compared to
building their own resources, but also provides resources that are easy to maintain. Companies
only need to pay for the level of resources they use, and the elasticity of the cloud will help them
to scale resources up and down as necessary. Due to various benefits of cloud computing, increas-
ingly more organizations are moving towards the cloud [16]. As a result of this growth, the usage
and traffic across the cloud become high, which increases the need for more research in load bal-
ancing.
Load balancing is a method of distributing the entire load of a network across all nodes in the cloud.
The goal of load balancing is to achieve maximum efficiency of the network by increasing both
the throughput and the performance of each node [17]. To achieve this, many algorithms have
been formulated and implemented. The main factors that need to be considered when designing a
load balancing algorithm are the total load in the network, the processing power of each node, the
load at each node, the size of each task, priority levels, etc. Earlier algorithms are implemented to
address different problems. For example, some algorithms are implemented to increase network
throughput, some aim to improve network performance, and some aim to increase the maximum
efficiency of the network.
The two most widely used algorithms to schedule tasks are round-robin (RR) and weighted
round-robin (WRR) algorithms [18]. The round-robin algorithm is one of the most common static
algorithms; it distributes tasks to virtual machines in cyclic order. The RR algorithm does not
consider the processing capacity, length, or priority of the tasks, but it is one of the most successful
algorithms in terms of static distribution. The weighted round-robin algorithm is an improved
form of the RR. The WRR considers the processing capacity of virtual machines (VMs) and
assigns more tasks to the VMs with higher processing capacity, based on the weight given
to each VM. This algorithm is the base for many improved versions of load balancing algorithms.
The proposed algorithm, which is the priority weighted round-robin algorithm (PWRR) is also an
improved version of the WRR algorithm.
In this work, the priority of the tasks is considered as an important factor to efficiently select
the appropriate VM for scheduling and migrating the tasks, ranging from low to high priority. The
proposed algorithm also considers length of the task, processing capacity of each node, and num-
ber of tasks assigned to the nodes. In general, when the priority of tasks is considered, it is used
to arrange incoming tasks from high to low priority. This ensures that the high-priority tasks are
executed before other tasks that come under different priorities. However, in real time, it is not
possible to arrange the tasks based on priority since all requests will not arrive at the same time. In
the above case, if the tasks are not arranged based on priority, then the consideration of priority as
a factor will never provide any improvement. Thus, in the proposed algorithm, the priority of the
task is considered along with its length and processing capacity of the nodes, allowing for dynamic
scheduling. In this algorithm, an incoming task is allocated to the VM with the least waiting time
and the fewest tasks of equal or higher priority. Similarly, during task preemption, a high-priority
task from an overloaded VM is migrated to the VM that has the fewest tasks of equal or higher
priority and the least waiting time.

1.1 The Cloud Computing Stack

Cloud computing is widely referred to as a stack [19], as a result of various services being
built on top of each other under a common term, the cloud. As discussed earlier, cloud computing
provides services through the internet to various types of customers. Common characteristics of
the cloud include on-demand resource access, resource pooling across many customers, elasticity
through easy scalability, pay-per-use pricing, and so on. The stack, or the services, is widely
classified into SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure
as a Service) [4][8], as shown in Figure 1.1.

Software as a Service

SaaS is defined as a service providing access to applications running on servers through the
internet [19]. Users can access these applications from any kind of device, such as computers,
mobile phones, or tablets, provided the device has internet access. Most software providers enable
access to the user through a pay-as-you-go subscription or
for free [19]. Pay-as-you-go simply means the users will be charged based on their usage. In the
case of free access, providers will not charge users, but instead generate profits with the help of
advertisements and other means.

Platform as a Service

PaaS provides access to the platform for software development through the internet. It provides
an easy way to create web applications without investing time or money in software and hardware
[19]. PaaS is akin to SaaS, but instead of providing software through the internet, PaaS provides the
environment to develop software through the internet [19]. This service is growing in popularity
among startups, which can develop their products without large initial costs. PaaS also
supports the trending concept of parallel programming, by which developers can simultaneously
work on their software from different places, with less complexity [19].

Infrastructure as a Service

IaaS is the process of providing a complete cloud infrastructure, such as servers, data storage,
network, and OS through the internet [19]. Rather than buying servers, storage, software, and so
on, organizations can expand their resource capabilities by getting all these resources on demand
through IaaS [19]. Infrastructure providers serve the organizations through private or public clouds.

Figure 1.1: Cloud Computing Services

In private clouds, the infrastructure will be dedicated to only one organization, but in public clouds,
it will be shared among many organizations. When a private and public cloud are combined to
efficiently utilize resources, it is called a hybrid cloud [19].

CHAPTER II: RELATED WORK

Due to drastic growth in cloud computing and explosive growth in the number of users, re-
searchers have increasingly turned their focus towards load balancing and scheduling [2][11][12].
Load balancing lies at the heart of cloud computing. The overall performance of an entire cloud
network completely relies on the load balancing techniques [14][15][17]. Load Balancer as a Ser-
vice (LBaaS) [16] is one of the more popular methods; rather than building a load balancer for
each cloud, companies prefer to consume LBaaS. In this method, the load balancer will be utilized
through the network, and it requires information about the entire network, including information
about every node within it. The majority of LBaaS providers use round-robin or weighted round-
robin algorithms as their static algorithm [16]. These two algorithms work better than the first-
come-first-served (FCFS) algorithm, the shortest-job-first (SJF) algorithm, the MaxMin algorithm,
the MinMin algorithm, etc. [18].
Round-Robin Algorithm: The RR algorithm uses the concept of distributing the request to
the nodes in circular fashion [3]. When the first request arrives, the RR algorithm sends the request
to any random VM in the list of VMs available in the network and then distributes the subsequent
requests in circular order. Whenever a VM is assigned a request, it is pushed to the end of the list.
This algorithm behaves like the FCFS algorithm and provides the same performance when the time
quantum is high; otherwise, RR works better than FCFS [18]. The weighted round-robin algorithm
is an improved version of RR, in which each VM is assigned a weight [1] calculated based on the processing
capacity of each node. Based on the weight, VMs are ordered from high to low. VMs with more
weight are assigned more requests compared to VMs with less weight. This approach produces
better performance and throughput.
Weighted Round-Robin Algorithm: Many researchers have also worked on successive refinements
of the WRR algorithm by considering different parameters, such as node processing power, queue
length, job size, and so on, which makes the WRR algorithm a dynamic load balancing technique
[5][6][9][13]. This helps the algorithm decide where a request should land by frequently calculating
the load on each VM. In this process, the weight of each VM is calculated from its load, which
makes the algorithm dynamic. Along with these parameters, the inclusion of priority reduces the
waiting time for requests with higher priorities [10][13]. The priority of tasks is considered while
ordering the incoming tasks: the job queue is ordered from high priority to low priority, and the
high-priority tasks are allocated first to VMs, which are weighted based on their processing power
and load. This ensures that the
incoming high-priority tasks are processed before the low-priority tasks and reduces the wait time
for high-priority tasks to get completed. This approach, however, is not feasible in real time, since
it is impossible to arrange incoming requests in order. If the incoming requests are not ordered,
then the priority parameter will never be useful in this type of task scheduling.
Dynamic Weighted Round-Robin Algorithm: Recently, an improved WRR algorithm was
proposed that lifts the WRR algorithm to the next level by handling the unevenness of VM loads in
the network [6][20]. The dynamic form of the WRR algorithm considers the load of the VMs, along
with the processing capacity and the length of the task, to decide to which VM the next job should
be allocated. In some cases, processing a job may take longer than expected at runtime. In this
condition, the load balancer helps to move a waiting task from an overloaded
VM to under-loaded VMs. The load balancer receives the information about the under-loaded VM
and chooses the VM to which the waiting task should be moved. If no under-loaded VMs are avail-
able, then no action is performed, because all VMs in the network must be overloaded or ideally
loaded.
In another study, an algorithm was proposed to achieve minimum completion time by con-
sidering priority as a factor [13][21]. In this algorithm, instead of ordering the incoming tasks
based on priority, the priority of each task is used to identify the VM to which the task should be
migrated. In this proposed honey bee algorithm [21], when the task scheduler finds unevenness in
the load among the nodes in the network, it migrates a high-priority task from the overloaded VM
to a VM with a lower number of similar-priority tasks. This algorithm also uses a preemptive
method of migrating dependent tasks, which improves the minimum completion time and
maximizes utilization.

CHAPTER III: SYSTEM MODEL

Figure 3.2 depicts the overall idea of the process of scheduling and load balancing in a cloud
network. The scheduler and load balancer are the two most important components of any cloud
network. The task manager helps to collect information about each task arriving through the task
queue. The scheduler and load balancer are the components that use a load balancing algorithm
to distribute the loads evenly across the resources. The scheduler is a component that helps in
deciding the appropriate VM to which an arrived task should be allocated. It ensures that the task
is assigned to a VM that takes less time to complete the task by considering the amount of load in
that specific VM and the total time needed to complete the entire load [6].
The load balancer is another important component in this system; it ensures that the load in the
cloud network is evenly distributed. If a load balancer identifies any of the VMs as overloaded, it
immediately migrates the task from an overloaded VM to an under-loaded or ideally loaded VM.
The resource manager provides the information about each VM by getting the information from
its resource probe. The resource manager then gives information about the load of the VM, its
processing power, the number of tasks in its queue, and the priority of each task [6].
The incoming task is provided into the system through different types of interfaces, such as
web applications, servers, and so on. The task first reaches the task manager, which identifies the
length and priority of the task. The priority of each task is given as an integer (1-10); a task with
a lower priority value is considered a higher-priority task. The incoming tasks are then passed to
the scheduler, which helps in allocating the incoming tasks to an appropriate VM by using the
proposed algorithm. The scheduler makes use of the information about the resources from the
resource manager to find the appropriate VM to handle the incoming task. Each VM has an
incoming task queue, and the tasks allocated to a VM are held in its incoming task queue [6].
The resource manager also helps to calculate the weight of each VM by using various information,
such as the processing capacity of VMs, the number of tasks in each VM incoming task queue,

Figure 3.2: Scheduler and Load Balancer of a Cloud Network

and total time required to complete the tasks of each priority. The weight of each VM is calculated
every time the scheduler receives an incoming request. With the knowledge of this information,
the scheduler finds an appropriate VM for each incoming task.
The load balancer helps in finding load unevenness in the system by calculating the ratio be-
tween the total number of jobs in the queue and the number of available VMs. If the value is less
than 1, the load balancer requests the scheduler to find an appropriate VM for the job execution [6].
Otherwise, it calculates the load on each VM and, based on the calculations, decides which VM
is less utilized and communicates with the scheduler to allocate the incoming task to that VM. The
load on each VM is also measured to ensure it is not overloaded. If any VM is overloaded, then
the job in the queue is moved to the under-loaded VM. In case of job migration, the load balancer
considers the priority of the task to be migrated to identify which VM is under-loaded and has the
least number of tasks of equal or higher priorities.

3.1 Priority Weighted Round Robin Algorithm

The two widely used load balancing algorithms are round-robin and weighted round-robin
algorithms. However, these algorithms are static algorithms, meaning they are not dynamically
updated based on the incoming requests. The RR algorithm allocates the incoming requests to the
available resources in a circular fashion, irrespective of the length of the task or the processing
capacity of resources. The WRR algorithm considers the processing capacity of resources, and a
higher number of incoming tasks are assigned to the resource with more processing power. These
two algorithms are used to compare the performance of the proposed priority weighted round-robin
algorithm. Along with these algorithms, the dynamic WRR algorithm is also used for comparison.
In this algorithm, whenever a new request is received, it finds an appropriate VM by calculating
the waiting time.
The proposed priority weighted round-robin algorithm uses the work of the WRR algorithm as
a base, but it shifts the algorithm from static to dynamic. This algorithm helps to find an appropriate
VM with the help of information that includes the processing power of VM, length of the tasks,
priority of incoming tasks, and processing time required for VMs to complete the tasks of equal or
higher priorities. Once a task arrives in the system, information about the task, such as the length
and the priority of the task is noted. Then the weight of each VM is calculated, as mentioned
earlier. Next, the incoming task is forwarded to the VM with the highest weight. The weight
of a VM is derived from the amount of time that specific VM requires to complete the tasks of
equal or higher priority in its queue; the weight of a VM increases as this completion time decreases,
and vice versa. In general, the time taken to process a task can vary at runtime; in that situation,
task migration is performed. The high-priority task from the queue of the overloaded VM is migrated
to an under-loaded or ideally loaded VM by finding the VM that has the fewest tasks of equal or
higher priority. This helps high-priority tasks complete faster, and the migration improves the
utilization of resources.

3.2 Waiting Time Calculation

In the proposed algorithm, each VM is assigned a weight based on the waiting time to process
the next incoming task. The weight of a VM is inversely proportional to its waiting time; the weight
of a VM is higher if the waiting time is lower, and vice versa. In the proposed algorithm, waiting
time is calculated based on the priority of an incoming task. The priority of each task ranges from
1 to 10, and a task with a lower integer value is considered a high-priority task.
The waiting time of each VM, WT_{vm}, is calculated as follows:

WT_{vm} = \sum_{p=1}^{i} T_p \qquad (3.1)

where p is the priority level of a task and T_p is the execution time of the queued tasks of priority p.
The above equation is used to calculate the waiting time of a VM when it receives an incoming task
of priority i. In this experiment, a cloudlet with a small priority value is considered a high-priority
task, and a cloudlet with a large priority value is considered a low-priority task. For example, if an
incoming task of priority 5 is received, then the waiting time is calculated over the tasks with
priorities 1, 2, 3, 4, and 5; that is, the total execution time of all queued tasks whose priority is
equal to or higher than that of the incoming task.
The execution time of a task is calculated as follows:

T = \frac{S}{V_m} \qquad (3.2)

where S represents the cloudlet size in MI (Million Instructions) and V_m represents the processing
capacity of a VM in MIPS (Million Instructions Per Second).
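To make equations (3.1) and (3.2) concrete, the following Java sketch estimates the priority-based
waiting time of a single VM. The class and field names (VmInfo, queuedLengthByPriority, mips) are
illustrative assumptions, not part of the CloudSim classes extended in this work.

import java.util.Map;
import java.util.TreeMap;

// Minimal sketch of equations (3.1) and (3.2); names are illustrative only.
class VmInfo {
    double mips;                                   // processing capacity V_m in MIPS
    // total queued length (in MI) per priority level (1 = highest priority)
    Map<Integer, Double> queuedLengthByPriority = new TreeMap<>();

    VmInfo(double mips) { this.mips = mips; }

    // Equation (3.2): execution time T = S / V_m
    double executionTime(double lengthMi) {
        return lengthMi / mips;
    }

    // Equation (3.1): WT_vm = sum of execution times of queued tasks whose
    // priority value p is <= i, i.e., of equal or higher priority.
    double waitingTimeFor(int incomingPriority) {
        double wt = 0.0;
        for (Map.Entry<Integer, Double> e : queuedLengthByPriority.entrySet()) {
            if (e.getKey() <= incomingPriority) {
                wt += executionTime(e.getValue());
            }
        }
        return wt;
    }
}

For instance, a 250 MIPS VM holding 400 MI of priority-1 work and 800 MI of priority-7 work would
report a waiting time of 400 / 250 = 1.6 s for an incoming priority-5 task, since only the priority-1
work counts toward the sum.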

3.3 Resource Load Calculation

To perform task migration between virtual machines, the load of all VMs should be calculated,
and they should be classified into three categories: overloaded, under-loaded, or balanced [6]. To
perform this classification, the total processing capacity of all VMs (C) should be calculated, the
threshold variances should be determined, and finally the threshold range should be calculated for
each VM. Two variances, Vmin and Vmax, are used to calculate the threshold range. These values are the
percentage utilization of a machine; in this experiment, we have used 0.7 as the minimum value
and 0.9 as the maximum value.
Total capacity of all virtual machines is calculated as follows:

C = \sum_{i=1}^{k} c_i \qquad (3.3)

where k represents the number of available virtual machines and c represents the processing ca-
pacity of a VM. Threshold for each VM is calculated as follows:

T_{min} = \frac{c}{C} \cdot V_{min} \cdot L \cdot n \qquad (3.4)

where Vmin represents the minimum variance, L represents the total capacity of a node, and n
represents the total number of virtual machines. Tmin represents the minimum threshold value of a
VM.
T_{max} = \frac{c}{C} \cdot V_{max} \cdot L \cdot n \qquad (3.5)

where Vmax represents the maximum variance and Tmax represents the maximum threshold value
of a VM.

The VMs are classified by using the following equations:

WT_{vm} < T_{min}, Underloaded \qquad (3.6)

WT_{vm} > T_{max}, Overloaded \qquad (3.7)

WT_{vm} \in [T_{min}, T_{max}], Balanced \qquad (3.8)

If any VM rises above the maximum threshold level, then the VM is considered an overloaded VM.
The task is migrated from the overloaded VM to an under-loaded VM whose load is less than the
threshold level. The under-loaded VM is ready to accept the task until it reaches the threshold
level. If there are no under-loaded VMs, then no migration is performed. As per the proposed
algorithm, while choosing the under-loaded VM, it selects the VM that has the lowest number of
tasks with equal or higher priorities.
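As a concrete illustration of equations (3.3) through (3.8), the Java sketch below classifies a VM as
under-loaded, balanced, or overloaded. The class and method names are illustrative assumptions rather
than the actual implementation used in this work.

// Illustrative sketch of the threshold-based classification in equations (3.3)-(3.8).
class LoadClassifier {
    static final double V_MIN = 0.7;   // minimum variance used in this experiment
    static final double V_MAX = 0.9;   // maximum variance used in this experiment

    enum State { UNDERLOADED, BALANCED, OVERLOADED }

    // load = current load WT_vm of the VM being classified; vmCapacity = its capacity c;
    // capacities[i] = capacity c_i of VM i; nodeCapacity = total capacity L of the node.
    static State classify(double load, double vmCapacity, double[] capacities, double nodeCapacity) {
        double totalCapacity = 0.0;                       // C = sum of c_i            (3.3)
        for (double c : capacities) totalCapacity += c;
        int n = capacities.length;                        // number of virtual machines
        double share = vmCapacity / totalCapacity;        // c / C
        double tMin = share * V_MIN * nodeCapacity * n;   // T_min                     (3.4)
        double tMax = share * V_MAX * nodeCapacity * n;   // T_max                     (3.5)
        if (load < tMin) return State.UNDERLOADED;        //                           (3.6)
        if (load > tMax) return State.OVERLOADED;         //                           (3.7)
        return State.BALANCED;                            //                           (3.8)
    }
}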

CHAPTER IV: SYSTEM IMPLEMENTATION

To implement the proposed priority weighted round-robin algorithm and to compare its effec-
tiveness with the existing algorithms, the CloudSim 3.0.3 (hereinafter referred to as "CloudSim")
simulation tool has been utilized. CloudSim is a framework used to simulate the cloud environ-
ment and its services [7]. This section provides information about the hardware and software used
for the implementation. It also provides a detailed explanation of the architecture of CloudSim and
important components involved, followed by the implementation of the round-robin algorithm,
weighted round-robin algorithm, dynamic weighted round-robin algorithm, and priority weighted
round-robin algorithm.

4.1 Hardware and Software Used

The following hardware and software components are used for the implementation:

• Processor: Intel Core i5 @ 2.9 GHz

• RAM: 8 GB

• Operating System: macOS High Sierra

• IDE: NetBeans 8.2

• Framework: CloudSim 3.0.3 Toolkit

• Programming Language: Java version 8

4.2 CloudSim Architecture

CloudSim is a tool that replaces proof-of-concept deployments in a public cloud environment.
Simulating cloud computing concepts in a public cloud requires access to a large number of servers,
storage systems, operating systems, and networks, and performing such experiments would be
costly even in a pay-per-use environment. To avoid this complication and efficiently perform
various simulations related to cloud operation, many tools are available. CloudSim was chosen
because of the following advantages.

• Flexibility: CloudSim allows for performing simulation with various configurations. In the
public cloud environment, it would be very difficult to implement various configurations of
resources.

• Simplicity: CloudSim makes it very easy to implement and customize the environment.

• Cost-effectiveness: It would be expensive to build a new cloud environment to test the ap-
plication and to perform the test again by modifying the environment. The CloudSim simu-
lation makes this easy with an already provided model that is easily modifiable.

Figure 4.3: CloudSim Architecture

Figure 4.3 shows the CloudSim architecture and its layers, which help in modeling different cloud
environments [7]. The CloudSim architecture consists of three main layers: the system layer, the
core, and the user-level middleware, which represent the SaaS, PaaS, and IaaS layers respectively.
The user layer provides access for the user to modify the basic configuration of cloud models, such
as the number of servers and their configurations, users, loads and their specifications, and
scheduling policies.

Primary Components of CloudSim

The important components of CloudSim architecture used to design cloud models are explained
in detail below.

• Regions: This component is used to specify the geographical locations of the cloud model
where the resources are allocated. In general, CloudSim provides six regions, corresponding to
different continents.

• Data center: The data center provides the infrastructure to the cloud model generated by
CloudSim. Each data center consists of a group of hosts, and each host consists of a number
of VMs. All the components (data centers, hosts, and VMs) have their own specifications.

• Data center characteristics: These characteristics help to configure the specifications for
each data center.

• Data center broker: The data center broker acts as a liaison between the user and the
provider. It gets the information about the cloud model with the help of a Cloud Information
Service (CIS). Whenever a request is generated by a user, it finds the suitable provider with
the help of the information it collects from the CIS. This is used to specify the requirements
in the simulation.

• Host: The host carries information about both computational capabilities in MIPS and mem-
ory capacity in MB. It also defines the scheduling policies to balance the incoming user
requests to the available VMs.

• User base: The user base contains information about the number of users, which in turn are
responsible for generating incoming traffic.

• Cloudlet: A cloudlet is also known as a user request. It contains a lot of information, such
as the source where the request originated (user base), the size of the request, number of
processing elements required to complete the execution, status of the request, start time,
finish time, and so on. The user request can be given to the cloud model by using this class
or through a file.

• Service broker: The service broker helps in deciding which data center should be used to
service the requests generated from the user.

• VM allocation policy: The VM allocation policy is a strategy that defines how VMs are
allocated to hosts.

• VM scheduler: The VM scheduler uses either a space-shared or a time-shared policy to
allocate the processing cores to VMs.

• Task scheduling: Task scheduling is a strategy used to define which task is to be scheduled
to which VM.
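As a rough illustration of how these components fit together, the following sketch wires up a single
data center, broker, VM, and cloudlet with the CloudSim 3.0.3 API. The parameter values are
placeholders chosen for brevity; the configurations actually used in the experiments are listed in
Chapter V.

import java.util.*;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;

// Minimal CloudSim 3.0.3 example: one data center, one host, one VM, one cloudlet.
public class MinimalCloudSimExample {
    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false);   // 1 cloud user, no event tracing

        Datacenter datacenter = createDatacenter("Datacenter_0");
        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        // One VM: 250 MIPS, 1 core, 512 MB RAM, 1000 Mbps, 10000 MB image, Xen, time-shared.
        Vm vm = new Vm(0, broker.getId(), 250, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());
        broker.submitVmList(Collections.singletonList(vm));

        // One cloudlet: 400 MI, 1 PE, 300-byte input and output files.
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400, 1, 300, 300, full, full, full);
        cloudlet.setUserId(broker.getId());
        broker.submitCloudletList(Collections.singletonList(cloudlet));

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            System.out.println("Cloudlet " + c.getCloudletId() + " finished on VM "
                    + c.getVmId() + " in " + c.getActualCPUTime() + " s");
        }
    }

    private static Datacenter createDatacenter(String name) throws Exception {
        // One host with one 1000-MIPS core, 16 GB RAM, 1 TB storage, 100 Gbps bandwidth.
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(16384),
                new BwProvisionerSimple(100000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));

        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0,   // arch, OS, VMM, hosts, time zone
                3.0, 0.05, 0.01, 0.1);                   // costs: processing, memory, storage, BW
        return new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);
    }
}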

4.3 Implementation of Round-Robin Algorithm

Algorithm 1 lists the step-by-step approach for the round-robin algorithm [3]. This is one of
the most widely used static algorithms. In this algorithm, the incoming requests are assigned to
VMs in cyclic order. The first request is assigned to any random VM, and the following requests
are processed in cyclic order. This algorithm is widely used among cloud providers for ideal static
conditions.

Algorithm 1 Round-Robin Algorithm Pseudocode
(1) The scheduler moves the first incoming request or cloudlet to the ready queue.
(2) The data center controller selects which VM should get the first cloudlet.
(3) The first cloudlet is assigned to a random VM.
(4) Once the first cloudlet is assigned, the VMs are ordered in a cyclic manner.
(5) The VM that received the first cloudlet is moved to the end of the VM list.
(6) The next cloudlet is assigned to the next VM in cyclic order.
(7) Repeat steps 5 and 6 for each cloudlet request until the scheduler processes all cloudlets.
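A minimal Java sketch of the cyclic selection in Algorithm 1 is shown below; the VMs are represented
only by their indices, and the class name is an illustrative assumption.

// Illustrative round-robin dispatcher: VMs identified by index 0..vmCount-1.
class RoundRobinDispatcher {
    private final int vmCount;
    private int next;

    RoundRobinDispatcher(int vmCount) {
        this.vmCount = vmCount;
        this.next = new java.util.Random().nextInt(vmCount); // first cloudlet goes to a random VM
    }

    // Returns the VM index for the next cloudlet and advances the cyclic pointer.
    int selectVm() {
        int chosen = next;
        next = (next + 1) % vmCount;
        return chosen;
    }
}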

4.4 Implementation of Weighted Round-Robin Algorithm

Algorithm 2 lists the step-by-step approach for the weighted round-robin algorithm [14]. This
algorithm is an improvement of the RR algorithm. In RR, the processing capacity of the processor
or VM is not considered while scheduling the task. But in the WRR algorithm, the weight for each
VM is calculated based on its processing power. VMs with more processing power have more
weight, and VMs with less processing power have less weight. Once the weight is calculated,
VMs are ordered based on weight, and VMs with more weight are assigned more tasks than the
other VMs.

Algorithm 2 Weighted Round-Robin Algorithm Pseudocode


(1) The scheduler moves the first incoming request or cloudlet to the ready queue.
(2) A weight is given to each VM based on its processing power; a VM with more processing
power has more weight.
(3) VMs are arranged in order of decreasing weight, from high to low, in a cyclic fashion.
(4) Assign an incoming task or cloudlet to the VM with the highest weight. More tasks are assigned
to VMs with more weight than to the others.
(5) Once the desired number of cloudlets is assigned to a VM, the scheduler moves on to the other VMs.
(6) The next cloudlet is assigned to the next VM in cyclic order.
(7) Go to step 4 for each cloudlet request until the scheduler processes all cloudlets.
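The sketch below shows one common way to realize the weighting in Algorithm 2: each VM receives a
number of slots per cycle proportional to its MIPS rating, and cloudlets are dealt out over those slots.
The slot construction is an assumption made for illustration, not the exact implementation used in this
work.

import java.util.ArrayList;
import java.util.List;

// Illustrative weighted round-robin dispatcher: VMs with higher MIPS get more slots per cycle.
class WeightedRoundRobinDispatcher {
    private final List<Integer> schedule = new ArrayList<>(); // expanded cyclic schedule of VM indices
    private int next;

    WeightedRoundRobinDispatcher(double[] mipsPerVm) {
        double min = Double.MAX_VALUE;
        for (double m : mipsPerVm) min = Math.min(min, m);
        for (int vm = 0; vm < mipsPerVm.length; vm++) {
            int weight = (int) Math.round(mipsPerVm[vm] / min); // weight proportional to capacity
            for (int s = 0; s < weight; s++) schedule.add(vm);
        }
    }

    // Returns the VM index for the next cloudlet according to the weighted cycle.
    int selectVm() {
        int chosen = schedule.get(next);
        next = (next + 1) % schedule.size();
        return chosen;
    }
}

With the VM capacities of Table 5.3 (250, 500, 500, and 750 MIPS), this yields weights of 1, 2, 2,
and 3, so the 750 MIPS VM receives three cloudlets per cycle.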

4.5 Implementation of Dynamic Weighted Round-Robin Algorithm

Algorithm 3 lists the step-by-step approach for the dynamic weighted round-robin algorithm
[6]. This algorithm is an improved version of the WRR algorithm. In the WRR algorithm, once
the weight is given to the VMs, incoming tasks are assigned in the same order. It does not consider

the current load in each VM. But in the dynamic weighted round-robin algorithm, the waiting time
of each VM is calculated for each request, and an incoming cloudlet is assigned to the VM with
the least waiting time.

Algorithm 3 Dynamic Weighted Round-Robin Algorithm Pseudocode


(1) Identify the length of an incoming task or cloudlet.
(2) Calculate the waiting time of each VM based on its current load.
(3) A weight is given to each VM based on the waiting time; a VM with less waiting time has
more weight.
(4) Assign the incoming task to the VM with the least waiting time.
(5) Once the incoming task is assigned, calculate the threshold and load of each VM.
(6) If any VM is found to be overloaded:
(a) Pick a task from the waiting queue of that VM.
(b) Identify the under-loaded VMs.
(c) Select the under-loaded VM with the least waiting time and migrate the task to it.
(d) Continue the process until no VM remains overloaded.
(7) Go to step 2 for each cloudlet request until the scheduler processes all cloudlets.
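A condensed sketch of the selection step in Algorithm 3 is given below: the cloudlet goes to the VM
whose current queue can be drained soonest. The queue bookkeeping is an illustrative assumption, and
the migration steps, which follow the classification sketched in Section 3.3, are omitted.

// Illustrative dynamic WRR selection: choose the VM with the least estimated waiting time.
class DynamicWrrScheduler {
    private final double[] mipsPerVm;        // processing capacity of each VM
    private final double[] queuedMiPerVm;    // total queued cloudlet length (MI) on each VM

    DynamicWrrScheduler(double[] mipsPerVm) {
        this.mipsPerVm = mipsPerVm;
        this.queuedMiPerVm = new double[mipsPerVm.length];
    }

    // Assign a cloudlet of the given length (MI) to the VM with the smallest waiting time.
    int assign(double cloudletLengthMi) {
        int best = 0;
        double bestWait = Double.MAX_VALUE;
        for (int vm = 0; vm < mipsPerVm.length; vm++) {
            double wait = queuedMiPerVm[vm] / mipsPerVm[vm];  // time to drain the current queue
            if (wait < bestWait) { bestWait = wait; best = vm; }
        }
        queuedMiPerVm[best] += cloudletLengthMi;              // account for the new load
        return best;
    }
}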

4.6 Implementation of Priority Weighted Round-Robin Algorithm

Algorithm 4 lists the step-by-step approach for the priority weighted round-robin algorithm.
This algorithm is an improvement of the dynamic weighted round-robin algorithm. In the dynamic
weighted round-robin algorithm, the waiting time of each VM is calculated for each request, and
an incoming cloudlet is assigned to the VM with the least waiting time. But in this algorithm, the priority
of each task is considered while calculating the waiting time of VMs.

Algorithm 4 Priority Weighted Round-Robin Algorithm Pseudocode
(1) Identify the priority of an incoming task.
(2) Calculate the waiting time of each VM based on the priority of the incoming task.
(3) A weight is given to each VM based on the waiting time; a VM with less waiting time has
more weight.
(4) Assign the incoming task to the VM with the least priority-based waiting time.
(5) Once the incoming task is assigned, calculate the threshold and load of each VM.
(6) If any VM is found to be overloaded:
(a) Pick the high-priority task from the waiting queue of that VM.
(b) Identify the under-loaded VMs.
(c) Select the under-loaded VM with the fewest tasks of equal or higher priority and migrate
the task to it.
(d) Continue the process until no VM remains overloaded.
(7) Process the high-priority task from the waiting queue.
(8) If a low-priority task has been waiting for more than a fixed number of iterations:
(a) Increase the priority of the waiting task by one level.
(9) Allow the VM to pick the next task from the waiting queue; it should select the highest-priority
task from the queue, while the scheduler continues from step 1 in parallel.
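The sketch below condenses the scheduling decision of Algorithm 4: the waiting time of each VM is
computed only over queued tasks of equal or higher priority (equation (3.1)), and the aging of step 8 is
included. The data structures are illustrative assumptions, not the CloudSim classes extended in this
work, and the migration steps are omitted for brevity.

import java.util.ArrayList;
import java.util.List;

// Illustrative PWRR scheduling decision with simple aging of starved low-priority tasks.
class PwrrScheduler {
    static class Task { double lengthMi; int priority; int waitedRounds; }

    private final double[] mipsPerVm;
    private final List<List<Task>> queues = new ArrayList<>();  // waiting queue per VM
    private final int agingLimit;                               // rounds before priority is raised

    PwrrScheduler(double[] mipsPerVm, int agingLimit) {
        this.mipsPerVm = mipsPerVm;
        this.agingLimit = agingLimit;
        for (int i = 0; i < mipsPerVm.length; i++) queues.add(new ArrayList<Task>());
    }

    // Waiting time of a VM for an incoming task of the given priority (equation (3.1)).
    private double waitingTime(int vm, int incomingPriority) {
        double wt = 0.0;
        for (Task t : queues.get(vm)) {
            if (t.priority <= incomingPriority) wt += t.lengthMi / mipsPerVm[vm];
        }
        return wt;
    }

    // Assign the task to the VM with the least priority-based waiting time, then age the rest.
    int assign(Task task) {
        int best = 0;
        double bestWait = Double.MAX_VALUE;
        for (int vm = 0; vm < mipsPerVm.length; vm++) {
            double wt = waitingTime(vm, task.priority);
            if (wt < bestWait) { bestWait = wt; best = vm; }
        }
        queues.get(best).add(task);
        for (List<Task> queue : queues) {                       // step 8: aging of waiting tasks
            for (Task t : queue) {
                if (t != task && ++t.waitedRounds > agingLimit && t.priority > 1) {
                    t.priority--;                               // a lower value means higher priority
                    t.waitedRounds = 0;
                }
            }
        }
        return best;
    }
}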

CHAPTER V: RESULTS AND EVALUATION

This chapter features a detailed discussion about the cloud model and its setup for performing
simulations and explains the results of each simulation. The quality of service (QoS) parameters
used to compare the performance of the proposed algorithm with the existing algorithms are exe-
cution cost, overall execution time of each algorithm in milliseconds (ms), average execution time
for one cloudlet of each algorithm in ms, time taken to complete level 1 priority tasks in ms, time
taken to complete level 2 priority tasks in ms, and time taken to complete level 5 priority tasks in ms.

5.1 Experimental Setup

This experiment focuses on scheduling the incoming tasks based on the priority of the tasks,
along with the task length and the processing capacity of each resource. In CloudSim, the incoming
tasks are also known as cloudlets. Each cloudlet has its own size, length, id, priority, and so on. In
this experiment, different numbers of cloudlets with different sizes and different priorities, ranging
from 1 to 10, are used. Each cloudlet is submitted to the datacenter broker, which is responsible
for submitting the cloudlets to the data centers. Each data center has its own characteristics, such
as architecture, OS, virtual machine monitor (VMM), processing cost, memory cost, storage cost,
and bandwidth cost, which are shown in Table 5.1.
The host is created inside the data center. Each host has its own specifications, such as RAM,
bandwidth, storage and number of cores as shown in Table 5.2. VMs are created inside the hosts.
The configurations for VMs include processing power in MIPS, number of CPUs, RAM, band-
width, size, and VMM, which are shown in Table 5.3. In this experiment, different numbers of VMs
are considered to analyze the performance of the algorithm under different conditions. Load is also
applied to the cloud dynamically by providing varying numbers of cloudlets of different sizes and
priorities, as shown in Table 5.4.

Table 5.1: Data Center Specifications

Architecture   OS      VMM   Processing Cost   Memory Cost   Storage Cost   BW Cost
x86            Linux   Xen   3.0               0.05          0.01           0.1
x86            Linux   Xen   3.0               0.05          0.01           0.1

Table 5.2: Configurations of Hosts

RAM (in MB) Bandwidth (in Mbps) Storage (in MB) No. Of Cores
163,840 100,000 10,000,000 quad-core
163,840 100,000 10,000,000 dual-core

Table 5.3: Configurations of VMs

MIPS   No. of CPUs   RAM (in MB)   Bandwidth (in Mbps)   Image Size (in MB)   VMM
250    1             512           1000                  10000                Xen
500    1             512           1000                  10000                Xen
500    1             512           1000                  10000                Xen
750    1             512           1000                  10000                Xen

Table 5.4: Configurations of Cloudlets

Length (in MI) FileSize (in bytes) OutputSize (in bytes) Priority (range)
400 300 300 1-10
800 300 300 1-10
2000 300 300 1-10
4000 300 300 1-10

As mentioned earlier, the QoS parameters considered in this experiment to compare the pro-
posed algorithm with the existing algorithms are execution cost, total completion time, total wait-
ing time, total turnaround time, average execution time of one cloudlet, and time taken to complete
level 1, level 2 and level 5 priority tasks.

5.2 Execution Cost

The execution cost refers to the total cost of executing a cloudlet. In this experiment, the total
execution cost is calculated for the proposed algorithm in comparison with the existing algorithms.
The total execution cost is equal to the total cost required to complete the execution of all the
cloudlets submitted to the cloud network. This cost involves the costs for storage, processing,
memory, and bandwidth. The execution cost varies for each scheduling policy since each policy
performs differently. Execution cost is calculated as follows:
C_E = \sum_{i=1}^{k} \frac{S_i}{V_m} \cdot C_s \qquad (5.9)

where k represents the number of cloudlets in the cloud network, Si represents the cloudlet size (in
MI), Vm represents the processing capacity of a VM (in MIPS), and Cs represents cost per second
required to process a cloudlet.
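For example, with the processing cost of 3.0 per second from Table 5.1, a single 400 MI cloudlet
executed on a 250 MIPS VM (Tables 5.3 and 5.4) contributes (400 / 250) * 3.0 = 4.8 cost units to C_E;
the totals reported later in Table 5.8 are sums of such terms over all submitted cloudlets.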

5.3 Total Completion Time

Total completion time refers to the total time taken for the cloud network to complete the
executions of all the cloudlets submitted to the network. In this experiment, the total completion
time is calculated for the proposed algorithm in comparison with the existing algorithms. This
involves the addition of total execution time and the total network delay for each cloudlet. The
total completion time in ms is calculated as follows:
T = \sum_{i=1}^{k} \frac{S_i}{V_m} + N_d \qquad (5.10)

where k represents the number of cloudlets in the cloud network, Si represents the cloudlet size
(in MI), Vm represents the processing capacity of a VM (in MIPS), and Nd represents the total
network delay (in ms).

5.4 Total Waiting Time

The total waiting time refers to the total time spent waiting by all the cloudlets in the cloud
network. This parameter has been measured in time-shared processing. In general, the waiting
time of a cloudlet is its completion time minus the sum of its arrival time and burst time. The total
waiting time in ms is calculated as follows:
T_W = \sum_{i=1}^{k} \left( T_{C_i} - (T_{A_i} + T_{B_i}) \right) \qquad (5.11)

where k represents the number of cloudlets in the cloud network, TCi represents completion time
of a cloudlet (in ms), TAi represents arrival time of a cloudlet to the network (in ms), and TBi
represents the burst time of the cloudlet (in ms).

5.5 Total Turnaround Time

Turnaround time refers to the total time taken by all cloudlets to complete processing in the
cloud network. This parameter is measured in time-shared processing. In general, the turnaround
time of a cloudlet is the total time period from the time it is submitted to the network to the time
its execution is completed. Total turnaround time in ms is calculated as follows:
T_{TA} = \sum_{i=1}^{k} (T_{C_i} - T_{A_i}) \qquad (5.12)

where k represents the number of cloudlets in the cloud network, TCi represents completion time
of a cloudlet (in ms), and TAi represents arrival time of a cloudlet to the network (in ms).
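A compact Java sketch of how equations (5.11) and (5.12) can be computed from per-cloudlet timestamps
is shown below; the record fields are illustrative and do not correspond to specific CloudSim getters.

// Illustrative computation of total waiting time (5.11) and total turnaround time (5.12).
class TimingMetrics {
    static class Record { double arrival; double burst; double completion; } // all times in ms

    static double totalWaitingTime(Record[] cloudlets) {
        double sum = 0.0;
        for (Record r : cloudlets) sum += r.completion - (r.arrival + r.burst);   // (5.11)
        return sum;
    }

    static double totalTurnaroundTime(Record[] cloudlets) {
        double sum = 0.0;
        for (Record r : cloudlets) sum += r.completion - r.arrival;               // (5.12)
        return sum;
    }
}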

5.6 Average Execution Time of a Cloudlet

The average execution time of a cloudlet refers to the average time taken to complete the
execution of a cloudlet. In this experiment, the average execution time of a cloudlet is calculated
for the proposed algorithm, and it is compared with the existing algorithms. Average execution

time will be better if the scheduling policy directs the incoming tasks to suitable resources. The
formula to calculate average execution time of a cloudlet is shown below:

T_{Avg} = \frac{T}{N} \qquad (5.13)

where T represents the total time taken to complete all cloudlets (in ms), as given in equation (5.10),
and N represents the total number of cloudlets.

5.7 Execution Time of Priority Tasks

The execution time of priority tasks refers to the total time taken by a cloud model to complete
the execution of all the cloudlets belonging to a specific priority. In this experiment, all level-1
priority, level-2 priority and level-5 priority tasks are considered for comparing and analyzing the
performance of the proposed algorithm against the existing algorithms. This QoS parameter is the
most important in the current experiment, since proving the proposed algorithm's effectiveness on
priority tasks compared to the existing algorithms is its main motive.
To find the best possible number of VMs for the simulation, a performance evaluation of the
different algorithms with different numbers of VMs is performed, as shown in Tables 5.5, 5.6, and
5.7. From these tables, it is evident that the performance is best with 50 VMs for all the compared
algorithms. So, for the rest of the experiments, the same set of 50 VMs is considered for
performance evaluation.

Table 5.5: RR Performance with different VMs

QoS 20 VMs 25 VMs 35 VMs 50 VMs 60 VMs 70 VMs 80 VMs


Execution Cost 23874.99 32767.5 32274.4 23874.99 23874.99 23874.99 23874.99
Tot. Comp. Time 7835.33 10811.71 10644.55 7838.51 7838.51 7838.51 7838.51
Avg. Exec. Time 3.74 4.32 4.25 3.73 3.73 3.73 3.73

Table 5.6: DWRR Performance with different VMs

QoS 20 VMs 25 VMs 35 VMs 50 VMs 60 VMs 70 VMs 80 VMs


Execution Cost 27458.42 27997.2 27833.3 22205.42 22817.8 22932.21 23003.43
Tot. Comp. Time 9068.95 9258.97 9226.8 7364.9 7375.3 7401.92 7432.7
Avg. Exec. Time 3.68 3.7 3.69 3.63 3.64 3.64 3.65

Table 5.7: PWRR Performance with different VMs

QoS 20 VMs 25 VMs 35 VMs 50 VMs 60 VMs 70 VMs 80 VMs


Execution Cost 27511.12 28108.5 27964.5 22299.42 23423.5 23789.1 23791.4
Tot. Comp. Time 9080.86 9296.69 9259 7388.14 7392.72 7421.32 7465.21
Avg. Exec. Time 3.68 3.71 3.7 3.67 3.68 3.68 3.69

5.8 Experiment Results and Performance Analysis

The priority weighted round-robin algorithm has been implemented, and its performance
analysis was performed using the CloudSim tool. The implementation is achieved by extend-
ing the classes already available in the simulation tool. The following illustrations of the execution
cost, total completion time, total waiting time, total turnaround time, average execution time of
a cloudlet, execution time of level-1 priority tasks, execution time of level-2 priority tasks, and
execution time of level-5 priority tasks are analyzed for the round-robin (RR), weighted round-robin
(WRR), dynamic weighted round-robin (DWRR), and priority weighted round-robin (PWRR) al-
gorithms in heterogeneous conditions.
The execution cost of all the algorithms is calculated with the cloud model of 50 VMs and
different sets of heterogeneous cloudlets, as shown in Table 5.8. Figure 5.4 shows the comparative
analysis of all the algorithms. It shows that the execution costs for the RR and WRR algorithms
are higher when compared to the DWRR. The proposed PWRR experiences a similar level of ex-
ecution cost when compared to the DWRR. The inclusion of priority into the scheduling did not
impact the execution cost of the algorithm.

Table 5.8: Execution Cost of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 23874.99 22932.72 22205.42 22299.42
2750 26253.22 25643.12 24425.3 24495.17
3000 29650 28710.5 26631.75 26839.22
3250 32028.22 31100.12 28814.02 28957.67
3500 34425 32615.3 31058.5 31128.15
3750 36803.22 34812.16 33272.72 33350.62
4500 44975 42896.4 39958.22 40096.65
4750 47353.22 45276.12 42165.97 42306.22
5000 49750 47742.6 44454.2 44513.77

Figure 5.4: Execution Cost of RR, WRR, DWRR and PWRR

Total completion times (in ms) of all the algorithms were calculated with the cloud model of
50 VMs and different sets of heterogeneous cloudlets, as shown in Table 5.9. Figure 5.5 shows
the comparative analysis of all the algorithms. It shows that the total time taken by RR and WRR
algorithms is higher when compared to the DWRR. The proposed PWRR took the same amount of
time to complete all the tasks when compared to the DWRR. The inclusion of priority did not
adversely affect the performance of the cloud model in terms of the total time taken to complete
all tasks.

Table 5.9: Total Completion Time (in ms) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 7838.51 7512.02 7364.94 7388.14
2750 8618.64 8321.43 8101.58 8111.27
3000 9404.92 9042.32 8833.82 8891.16
3250 10185.09 9885.56 9556.58 9595.25
3500 11971.36 11389.12 10300.93 10311.03
3750 12451.53 11851.23 11035.38 11041.66
4500 13104.25 12401.56 11751.41 11795.1
4750 15884.36 14913.22 13984.22 14018.27
5000 17670.66 16740.23 14743.06 14742.97

Figure 5.5: Total Completion Time (in ms) of RR, WRR, DWRR and PWRR

Total waiting times (in ms) of all the algorithms were calculated with the cloud model of
50 VMs and different sets of heterogeneous cloudlets, as shown in Table 5.10. Figure 5.6 shows
the comparative analysis of all the algorithms. It shows that the total time taken by RR and WRR

algorithms is higher when compared to the DWRR. The proposed PWRR performed better when
compared to the DWRR. The inclusion of priority did not adversely affect the performance of the
cloud model in terms of the total waiting time of all tasks.

Table 5.10: Total Waiting Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 17535.24 13213.67 8831.28 8753.8
2750 19412.76 15421.77 9721.35 9632.4
3000 21382.34 17832.23 12510.21 12321.67
3250 23854.17 19045.34 15310.04 15112.87
3500 25321.98 21034.56 17652.78 17422.12
3750 27345.22 23456.92 19481.21 19165.34
4500 31078.48 27421.11 21487.28 21021.43
4750 33510.23 29234.91 24776.21 24321.66
5000 35611.36 31823.18 26891.48 26432.12

Figure 5.6: Total Waiting Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR

Total turnaround times (in ms) of all the algorithms were calculated with the cloud model
of 50 VMs and different sets of heterogeneous cloudlets, as shown in Table 5.11. Figure 5.7 shows
the comparative analysis of all the algorithms. It shows that the total time taken by RR and WRR
is higher when compared to the DWRR. The proposed PWRR performed better when compared to
the DWRR. The inclusion of priority did not adversely affect the performance of the cloud model
in terms of the total turnaround time of all tasks.

Table 5.11: Total Turnaround Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 36127.38 28345.8 17607.49 17499.13
2750 38825.52 30843.54 19442.7 19264.8
3000 42764.68 35664.46 25020.42 24643.34
3250 47708.34 38090.68 30620.08 30225.74
3500 50643.96 42069.12 35305.56 34844.24
3750 54690.44 46913.84 38962.42 38330.68
4500 62156.96 54842.22 42974.56 42042.86
4750 67020.46 58469.82 49552.42 48643.32
5000 71222.72 63646.36 53782.96 52864.24

Figure 5.7: Total Turnaround Time (in ms - Time Shared) of RR, WRR, DWRR and PWRR

Average execution times of a cloudlet (in ms) for all the algorithms were calculated with the
cloud model of 50 VMs and different sets of heterogeneous cloudlets, as shown in Table 5.12. Fig-
ure 5.8 shows the comparative analysis of all the algorithms. It shows that the average time taken
by RR and WRR to execute a cloudlet are higher when compared to the DWRR. The proposed
PWRR took the same amount of time to complete a cloudlet when compared to the DWRR. The
inclusion of priority did not affect the performance of the cloud model in terms of the average time
taken to complete a cloudlet.

Table 5.12: Average Execution Time of a Cloudlet (in ms) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 3.73 3.71 3.64 3.67
2750 3.73 3.71 3.64 3.65
3000 3.73 3.71 3.64 3.65
3250 3.73 3.71 3.64 3.64
3500 3.73 3.71 3.63 3.64
3750 3.73 3.71 3.63 3.63
4500 3.73 3.71 3.63 3.63
4750 3.73 3.71 3.63 3.63
5000 3.73 3.71 3.63 3.63

Figure 5.8: Average Execution Time of a Cloudlet (in ms) of RR, WRR, DWRR and PWRR

The time taken to complete level 1, level 2, and level 5 priority tasks (in ms) of all the
algorithms was calculated with the cloud model of 50 VMs and different sets of heterogeneous
cloudlets, as shown in Table 5.13, Table 5.14, and Table 5.15. Figure 5.9, Figure 5.10, and Figure
5.11 show the comparative analysis of all the algorithms. They show that the time taken by RR and
WRR to execute a cloudlet is higher when compared to the DWRR. The proposed PWRR shows a
significant difference in completing the high-priority tasks when compared to the DWRR; the time
taken by PWRR is substantially lower than that of RR, WRR, and DWRR.

Table 5.13: Completion Time of Level 1 Cloudlets (in ms) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 252.29 363.23 183.86 17.8
2750 276.29 382.84 201.3 19.41
3000 300.29 408.14 221.53 19.41
3250 324.29 423.67 238.31 21.03
3500 352.29 481.23 257.56 22.57
3750 376.29 507.92 276.68 22.54
4500 452.28 584.67 331.01 27.42
4750 476.28 607.84 349.35 29.02
5000 500.28 654.14 368.08 29.02

Figure 5.9: Completion Time of Level 1 Cloudlets (in ms) of RR, WRR, DWRR and PWRR

Table 5.14: Completion Time of Level 2 Cloudlets (in ms) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 336.18 580.87 188.78 44.94
2750 368.18 641.65 207.57 44.94
3000 400.18 680.22 231.74 51.42
3250 432.18 732.33 242.82 53.48
3500 469.51 780.23 267.14 57.86
3750 501.51 872.03 280.34 65.79
4500 602.84 1044.81 336.22 77
4750 634.84 1092.86 359.23 80.41
5000 666.83 1160.02 376.37 83.56

Figure 5.10: Completion Time of Level 2 Cloudlets (in ms) of RR, WRR, DWRR and PWRR

Table 5.15: Completion Time of Level 5 Cloudlets (in ms) of RR, WRR, DWRR and PWRR

Cloudlets RR WRR DWRR PWRR


2500 252.29 542.47 183.97 97.99
2750 276.29 608.54 201.99 105.84
3000 300.29 672.18 219.79 112.75
3250 324.29 724.11 238.82 123.94
3500 340.29 773.81 257.92 147.93
3750 376.29 830.43 275.64 148.25
4500 452.28 1052.81 332.01 177.06
4750 476.28 1110.46 349.57 186.65
5000 500.28 1118.42 370.19 186.61

Figure 5.11: Completion Time of Level 5 Cloudlets (in ms) of RR, WRR, DWRR and PWRR

From the results and analysis, the proposed PWRR algorithm is similar to the DWRR algorithm,
which is widely used as a scheduling algorithm by popular cloud providers, in terms of execution
cost, total time taken to execute all the cloudlets, waiting time, turnaround time, and the average
time taken to complete a single cloudlet. But when it comes to handling tasks with priorities, the
PWRR algorithm outperforms the DWRR algorithm by a noticeable margin.

CHAPTER VI: CONCLUSION AND FUTURE WORK

This work provides a new scheduling algorithm, the PWRR algorithm, to efficiently handle
incoming tasks of different priorities. In general, it performs better than the RR and WRR
algorithms in terms of the QoS parameters: execution cost, total waiting time, total turnaround
time, average time taken to complete a single task, and total time taken to complete all tasks. It
also performs equivalently to the DWRR algorithm, which is widely used by cloud providers for task
scheduling, in terms of the above QoS parameters. In handling tasks with priority, the PWRR
algorithm shows significant improvement over all three algorithms (RR, WRR, and DWRR). From the
results, it can be concluded that the PWRR algorithm performs equivalently to the DWRR algorithm in
all the QoS factors, while showing significant improvement over the DWRR algorithm when handling
high-priority tasks. Hence, the PWRR algorithm, which can handle tasks both with and without
priority, is more suitable than the DWRR algorithm for task scheduling.

6.1 Future Work

• The proposed algorithm can be enhanced to handle dynamic priority changes in incoming
tasks. This would help to handle tasks efficiently when their priority changes at runtime.

• Cost optimization can be introduced into the proposed algorithm by incorporating power
management, which would improve energy utilization.

• Adopting dynamic priority changes, or raising the priority of tasks that have been waiting in
the queue for a long time (aging), could completely avoid starvation of low-priority tasks.

• In real-world environments, considering factors such as slow Internet connections, bandwidth
utilization, and network delay can improve the effectiveness of the proposed algorithm.

APPENDIX A: STEPS TO CREATE SIMULATION ENVIRONMENT IN NETBEANS

The simulation is done in the NetBeans IDE using the CloudSim library. NetBeans is an open-source,
cross-platform application development tool for building Java applications; it also supports other
languages and runs on Windows, macOS, Linux, and Solaris. CloudSim is a framework, built on top of
Java, for developing and simulating cloud environments. In this experiment, CloudSim is integrated
with NetBeans to implement and simulate the proposed algorithms.
Step-by-Step Guide to CloudSim Integration in NetBeans:

1) Download CloudSim 3.0.3 from the URL "https://github.com/Cloudslab/cloudsim/releases"
and extract the archive. This URL contains versions for both Windows and macOS.

2) Once the extraction is complete, verify the contents of the CloudSim folder, as shown in Fig. 7.12.

3) Before installing the NetBeans IDE, make sure Java is installed on the machine. Otherwise,
download and install the latest version of the JDK from
"http://www.oracle.com/technetwork/java/javase/downloads".

4) Download NetBeans IDE 8.2 from the URL "https://netbeans.org/downloads/" and select the
bundle that includes all components.

5) Run the downloaded installer to complete the NetBeans IDE installation.

6) Open the NetBeans IDE and click File > New Project. In the New Project window, select "Java
Application" and click Next, as shown in Fig. 7.13.

7) The New Java Application window will open. In this window, provide the project name and click
Finish, as shown in Fig. 7.14.

8) Once the new project is created, copy the "org" folder from CloudSim's "source" folder, as
shown in Fig. 7.12, and paste it inside the "src" folder of the newly created project, as shown
in Fig. 7.15.

9) In the project structure, add the JAR file "commons-math3-3.6.1.jar", which supports the
required mathematical operations, as shown in Fig. 7.16.

10) Now the IDE is ready for the implementation, as shown in Fig. 7.17. A minimal sketch to verify
the setup is given after the figure captions below.

Figure 7.12: CloudSim Files and Folders.

Figure 7.13: Creating New Application.

Figure 7.14: Assigning Project Name.

Figure 7.15: Project Source Folder.

Figure 7.16: Adding JAR Files.

Figure 7.17: Sample Application.
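
To verify the setup, a minimal class such as the following can be compiled and run inside the new
project. This is only a sketch for checking that the CloudSim sources build correctly; the class
name SetupCheck is illustrative and not part of the CloudSim library.

import java.util.Calendar;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.core.CloudSim;

public class SetupCheck {
    public static void main(String[] args) {
        // Initialize the CloudSim library with one user and event tracing disabled
        CloudSim.init(1, Calendar.getInstance(), false);
        Log.printLine("CloudSim initialized successfully inside NetBeans.");
    }
}

If this runs and prints the message, the integration described in the steps above is complete.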

APPENDIX B: STEPS TO IMPLEMENT ALGORITHMS USING CLOUDSIM

1) The CloudSim package should be initialized before the creation of any entities. This
initialization provides access to the different resources, their registration, their information,
etc. It is the first step of any implementation in CloudSim.

int num_user = 1;            // number of grid users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false;  // trace events flag
CloudSim.init(num_user, calendar, trace_flag); // initialize the CloudSim library

2) The second step is to create a data center. At least one data center should be created for the
simulation; data centers are composed of hosts, on which the virtual machines will be created. A
sketch of the createDatacenter helper referenced here is given after this step's code.

Datacenter datacenter0 = createDatacenter("Datacenter_0");

// adding the data center to the data center list
List<Datacenter> dc = new ArrayList<Datacenter>();
dc.add(datacenter0);
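
Step 2 calls a createDatacenter helper; one possible minimal form, modeled on the standard CloudSim
examples, is sketched below. The host configuration (MIPS, RAM, bandwidth, storage, and the cost
figures passed to DatacenterCharacteristics) consists of illustrative values, not the exact values
used in the thesis experiments. It requires imports from org.cloudbus.cloudsim,
org.cloudbus.cloudsim.provisioners, and java.util.

private static Datacenter createDatacenter(String name) {
    // One host with a single processing element (PE); all values are illustrative
    List<Pe> peList = new ArrayList<Pe>();
    peList.add(new Pe(0, new PeProvisionerSimple(1000)));   // 1000 MIPS per PE

    List<Host> hostList = new ArrayList<Host>();
    hostList.add(new Host(0,
            new RamProvisionerSimple(2048),                 // host RAM (MB)
            new BwProvisionerSimple(10000),                 // host bandwidth
            1000000,                                        // host storage (MB)
            peList,
            new VmSchedulerTimeShared(peList)));

    // Architecture, OS, VMM, time zone and resource costs of the data center
    DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
            "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

    Datacenter datacenter = null;
    try {
        datacenter = new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return datacenter;
}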

3) The next step is to create a broker. The broker is responsible for submitting the cloudlets to
the desired VMs with the help of different job-scheduling policies. A sketch of the createBroker
helper is given after this step's code.

DatacenterBroker broker = createBroker();

int brokerId = broker.getId();
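
The createBroker helper in step 3 can be written as follows; this sketch follows the standard
CloudSim examples, and the broker name is illustrative.

private static DatacenterBroker createBroker() {
    DatacenterBroker broker = null;
    try {
        // The broker submits the VM and cloudlet lists on behalf of the user
        broker = new DatacenterBroker("Broker");
    } catch (Exception e) {
        e.printStackTrace();
    }
    return broker;
}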

4) The next step is the creation of the lists of VMs and cloudlets, which are then submitted to the
broker. A sketch of a createVM helper is given after this step's code.

int noCloudlets = 2500;  // number of cloudlets to create (example value)
int noVms = 50;          // number of VMs to create (example value)
vmList = createVM(brokerId, noVms, 0);                    // creating VMs
cloudletList = createCloudlet(brokerId, noCloudlets, 0);  // creating cloudlets
broker.submitVmList(vmList);
broker.submitCloudletList(cloudletList);
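
One possible form of the createVM helper used in step 4 is sketched below; the VM parameters (MIPS,
RAM, bandwidth, image size, VMM name) are illustrative assumptions, and the third argument is
assumed to be an id offset. The corresponding createCloudlet helper can either construct Cloudlet
objects directly with the constructor shown in step 7 or read them from a workload file as in step 8.

private static List<Vm> createVM(int userId, int vms, int idShift) {
    List<Vm> list = new ArrayList<Vm>();
    // Illustrative VM parameters
    int mips = 250;
    int pesNumber = 1;     // number of CPUs per VM
    int ram = 512;         // VM memory (MB)
    long bw = 1000;        // VM bandwidth
    long size = 10000;     // VM image size (MB)
    String vmm = "Xen";

    for (int i = 0; i < vms; i++) {
        list.add(new Vm(idShift + i, userId, mips, pesNumber, ram, bw, size,
                vmm, new CloudletSchedulerTimeShared()));
    }
    return list;
}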

5) Once the cloudlets and VMs are created, the simulation is started.

CloudSim.startSimulation();

6) Print the results after the simulation completes. A sketch of the printCloudletList helper is
given after this step's code.

List<Cloudlet> newList = broker.getCloudletReceivedList();

CloudSim.stopSimulation();
printCloudletList(newList);
printCloudletnetwork(newList, dc, vmList, "spreadsheet.csv");
Log.printLine("CloudSim finished!");
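
printCloudletList in step 6 is a reporting helper; a minimal sketch in the style of the standard
CloudSim examples is shown below (the column set is illustrative). printCloudletnetwork, which
writes the CSV spreadsheet, is a custom helper of this work and is not reproduced here.

private static void printCloudletList(List<Cloudlet> list) {
    String indent = "    ";
    Log.printLine("========== OUTPUT ==========");
    Log.printLine("Cloudlet ID" + indent + "STATUS" + indent + "VM ID" + indent
            + "Time" + indent + "Start Time" + indent + "Finish Time");
    for (Cloudlet cloudlet : list) {
        if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
            Log.printLine(cloudlet.getCloudletId() + indent + "SUCCESS" + indent
                    + cloudlet.getVmId() + indent + cloudlet.getActualCPUTime()
                    + indent + cloudlet.getExecStartTime() + indent
                    + cloudlet.getFinishTime());
        }
    }
}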

7) Cloudlet Class.

public Cloudlet(
        final int cloudletId,
        final long cloudletLength,
        final int pesNumber,
        final long cloudletFileSize,
        final long cloudletOutputSize,
        final UtilizationModel utilizationModelCpu,
        final UtilizationModel utilizationModelRam,
        final UtilizationModel utilizationModelBw,
        final boolean record,
        final List<String> fileList)

private int classType;

/* @param cloudletId          the unique ID, which is >= 0
   @param cloudletLength      the task length (in MI), which is >= 0.0
   @param cloudletFileSize    the input file size (in bytes) >= 1
   @param cloudletOutputSize  the output file size (in bytes) >= 1
   @param utilizationModelCpu the CPU utilization model
   @param utilizationModelRam the RAM utilization model
   @param utilizationModelBw  the bandwidth utilization model
   @param record              records the transaction history for this Cloudlet */

8) Creating cloudlets by reading them from an external workload file.

private static List<Cloudlet> createCloudLets() throws FileNotFoundException {
    List<Cloudlet> cloudletList;
    // Read cloudlets from an external workload file in the SWF format
    WorkloadFileReader workloadFileReader =
            new WorkloadFileReader("SimulationFiles/dataset.swf", 1);
    // Generate cloudlets from the workload file
    cloudletList = workloadFileReader.generateWorkload();
    return cloudletList;
}

9) Implementation of Dynamic Weighted Round-Robin.

public static int GetVmToProcess(int vmCount, int cloudletCount)
{
    int i = 0, j = 0;
    long totassigntasks = 0;              // total length of cloudlets already assigned to a VM
    double waitTime = 0, prevWaitTime = 0;
    int bestVm = 0;
    if (vmCount == 1)
    {
        return bestVm;
    }
    for (i = 0; i < vmCount; i++)
    {
        totassigntasks = 0;
        for (j = 0; j < cloudletCount; j++)
        {
            if (i == cloudletList.get(j).getVmId())
            {
                totassigntasks = totassigntasks
                        + cloudletList.get(j).getCloudletLength();
            }
        }
        // estimated waiting time of VM i = assigned load / processing capacity (MIPS)
        waitTime = totassigntasks / vmList.get(i).getMips();
        Log.printLine("Waiting time of VM " + i + " " + waitTime);
        if (i == 0)
        {
            prevWaitTime = waitTime;
            bestVm = 0;
        }
        else
        {
            if (waitTime < prevWaitTime)
            {
                prevWaitTime = waitTime;
                bestVm = i;
            }
        }
    }
    return bestVm;
}

10) Implementation of the Priority Weighted Round-Robin (PWRR) VM-selection method. A sketch
showing how these selection methods can be invoked when assigning cloudlets follows the code below.

public static int GetVmToProcess(int vmCount, int cloudletCount, int cloudletPriority)
{
    int i = 0, j = 0;
    long totassigntasks = 0;              // total length of counted cloudlets assigned to a VM
    double waitTime = 0, prevWaitTime = 0;
    int bestVm = 0;
    if (vmCount == 1)
    {
        return bestVm;
    }
    for (i = 0; i < vmCount; i++)
    {
        totassigntasks = 0;
        for (j = 0; j < cloudletCount; j++)
        {
            // count only cloudlets assigned to VM i whose priority level is
            // less than or equal to the incoming cloudlet's priority level
            if ((i == cloudletList.get(j).getVmId())
                    && (cloudletPriority >= cloudletList.get(j).getPriority()))
            {
                totassigntasks = totassigntasks
                        + cloudletList.get(j).getCloudletLength();
            }
        }
        // estimated waiting time of VM i = counted load / processing capacity (MIPS)
        waitTime = totassigntasks / vmList.get(i).getMips();
        if (i == 0)
        {
            prevWaitTime = waitTime;
            bestVm = 0;
        }
        else
        {
            if (waitTime < prevWaitTime)
            {
                prevWaitTime = waitTime;
                bestVm = i;
            }
        }
    }
    return bestVm;
}
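
For context, the sketch below shows one way the VM-selection methods above could be invoked while
assigning cloudlets, using the broker's bindCloudletToVm call; the DWRR variant is invoked the same
way but without the priority argument. The loop itself is illustrative and is not taken from the
thesis code; getPriority() and setVmId() follow the accessors already used in the methods above,
with getPriority() assumed to be this work's extension of the Cloudlet class.

// Illustrative assignment loop for PWRR: choose a VM for each cloudlet and bind it
for (Cloudlet cloudlet : cloudletList) {
    int vmId = GetVmToProcess(vmList.size(), cloudletList.size(),
            cloudlet.getPriority());
    cloudlet.setVmId(vmId);                                  // record the assignment
    broker.bindCloudletToVm(cloudlet.getCloudletId(), vmId);
}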
