
Cloud Computing Tutorial

CSN 520

Assignment 1: Case Study

Akshay Jadhav (22535002)


Anubhav Singh (22535005)
Yoginath Islavath (22535014)

Department of Computer Science and Engineering

Indian Institute of Technology Roorkee

06-10-2022
1. Introduction: CloudSim is a new, generalized, and extensible simulation
framework that enables seamless modeling, simulation, and experimentation of
emerging Cloud computing infrastructures and application services.

Installation Steps:
a. Before installing CloudSim, make sure the following are available:
Java Development Kit (JDK), since the CloudSim simulation toolkit is written in the Java programming language.
Eclipse IDE for Java Developers.
The Apache Commons Math library, which provides the required math functions; it can be downloaded from the "Math – Download Apache Commons Math" page. Unzip it into a separate folder.
b. Download the CloudSim source code as a zip file from "Release cloudsim-3.0.3 · Cloudslab/cloudsim · GitHub" and unzip it into a separate directory.
c. Open Eclipse and create a new Java project by navigating to File -> New -> Java Project.
d. In the new project window that opens, fill in the following details:
Project name: CloudSim
Uncheck the default location and set the location to the path where the CloudSim folder was unzipped.
Click on Next to add project settings.
e. Open the Libraries tab and check whether the Commons Math library is present. If it is not, add it by clicking on Add External JARs and selecting the commons-math3-3.x.jar file from the unzipped Commons Math folder downloaded earlier.
f. Click on Finish to complete the installation of CloudSim. A minimal sketch to verify the setup is shown below.
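To verify the setup, a minimal sketch such as the following can be compiled and run inside the new project (VerifySetup is a hypothetical class name, not part of CloudSim; it only checks that the CloudSim and Commons Math classes resolve on the build path):

import java.util.Calendar;

import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.core.CloudSim;

// Hypothetical verification class: initialises the simulation and prints the clock
// to confirm that the CloudSim sources and Commons Math JAR are on the build path.
public class VerifySetup {
    public static void main(String[] args) {
        int numUsers = 1;                          // number of cloud users (brokers)
        Calendar calendar = Calendar.getInstance();
        boolean traceFlag = false;                 // disable detailed event tracing

        CloudSim.init(numUsers, calendar, traceFlag);
        Log.printLine("CloudSim initialised, simulation clock = " + CloudSim.clock());
    }
}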

Architecture: The architecture involves three main layers:


a. User Code - This is the layer exposed to the user. In this layer the user specifies the hardware requirements according to their needs and defines the characteristics of the simulated environment.
b. CloudSim - This layer manages the creation and execution of core entities such as hosts, VMs, and cloudlets. It also handles network-related operations as well as resource allocation and management.
c. CloudSim Core Simulation Engine - This layer provides an interface for managing resources such as memory, VMs, and bandwidth of virtualized datacenters.

Class Diagram

Fig: CloudSim Class Diagram

Datacenter provides the core infrastructure services, i.e., the hardware and software offered by resource providers in the cloud computing paradigm. It supports both homogeneous and heterogeneous resource configurations. Each datacenter component uses a set of policies to allocate bandwidth, memory, and storage devices.

DatacenterBroker acts as an intermediary between users and service providers. It can act on the user's behalf: find a suitable cloud service provider, negotiate prices with providers, and meet QoS and user requirements. A cloud developer can extend this class to create a custom application broker.

SANStorage models a Storage Area Network and is used to store large amounts of data in datacenters. With the help of this class, users can store and retrieve data at any time, subject to the available network bandwidth.

The Virtual Machine (Vm) class is used to create instances of VMs. It also manages VMs and stores VM properties such as memory, processors, and the scheduling policy. The scheduling behaviour is abstracted through the VM's scheduler class.

Cloudlets model cloud-based application services such as content delivery, social networking, and business operations. Application complexity is expressed in terms of computational requirements: each application component has a pre-assigned instruction length and an amount of data to transfer.
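As an illustration of how these entities are instantiated, the sketch below uses the CloudSim 3.0.3 constructors with arbitrary example values (the brokerId parameter, the class name, and the chosen numbers are assumptions for illustration):

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;

// Sketch: building one Vm and one Cloudlet with arbitrary example values.
public class VmCloudletSketch {
    static Cloudlet buildExample(int brokerId) {
        // Vm(id, userId, mips, numberOfPes, ram, bw, size, vmm, cloudletScheduler)
        Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());

        // Cloudlet(id, length, pes, fileSize, outputSize, cpuModel, ramModel, bwModel)
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 40000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);     // associate the cloudlet with the broker
        cloudlet.setVmId(vm.getId());     // optionally pin it to this VM
        return cloudlet;
    }
}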

CloudCoordinator extends a datacenter with federation capability. This class is responsible for communicating with peer CloudCoordinator services and cloud brokers, and it periodically monitors the internal state of the datacenter during the simulation.

BWProvisioner is an abstract class that represents the policy for provisioning network bandwidth to VMs. Developers can extend this class with their own allocation strategies according to their needs.

MemoryProvisioner is another abstract class, used to allocate memory to VMs within a datacenter. A host can accommodate a VM only if this class finds enough free memory.

VMProvisioner represents the policy for allocating VMs to hosts. Its main task is to select, from the available hosts in the datacenter, those that meet the memory, storage, and availability requirements for VM provisioning. Like the classes above, it can also be extended to implement custom, optimized policies.

VMMAllocationPolicy is an abstract class that implements a sharing policy (such as time sharing) for allocating processing power to VMs.

VmScheduler models the shared processor pool and implements the policy used to distribute processor cores among virtual machines.
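To show where the provisioner and scheduler classes plug in, the sketch below builds a single host using the CloudSim 3.0.3 API; the class name and hardware values are arbitrary examples:

import java.util.ArrayList;
import java.util.List;

import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

// Sketch: one host with a single processing element (PE), wired to the
// RAM/bandwidth provisioners and VM scheduler discussed above.
public class HostSketch {
    static Host buildExampleHost() {
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));  // one 1000-MIPS core

        int ram = 2048;          // host memory (MB)
        long bw = 10000;         // host bandwidth
        long storage = 1000000;  // host storage (MB)

        return new Host(0,
                new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw),
                storage,
                peList,
                new VmSchedulerTimeShared(peList));
    }
}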

FogBroker models the broker/agent node, which is primarily in charge of distributing workloads (jobs and tasks) among the network's computing resources.

Controller is responsible for managing the simulation of application module allocation, which involves running the simulation over fog devices and processing tuples through modules. The Controller class places AppModules on network nodes and manages their overall operation.

Experiment - The simulation is run with heterogeneous input parameters for the VMs, cloudlets, and datacenters, read from an external input file. We constructed 20 VMs, 40 cloudlets, and 2 datacenters (the configuration is included along with the code), and ran the simulation under both the time-shared and the space-shared allocation policy. The size parameter was varied per VM, the length parameter was varied per cloudlet, and two heterogeneous datacenters with different MIPS values were built for the VMs and cloudlets, respectively. A sketch of how the two policies are selected is given below.
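In CloudSim, the choice between the two policies comes down to which CloudletScheduler is passed to each VM. The sketch below is a simplified illustration of that switch, not the exact code used in our experiment; the class name, the brokerId and timeShared parameters, and the VM values are example assumptions:

import java.util.ArrayList;
import java.util.List;

import org.cloudbus.cloudsim.CloudletScheduler;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Vm;

// Sketch: the allocation policy is switched per VM through the cloudlet scheduler.
public class PolicySketch {
    static List<Vm> createVms(int brokerId, boolean timeShared) {
        List<Vm> vmList = new ArrayList<Vm>();
        for (int id = 0; id < 20; id++) {
            CloudletScheduler scheduler = timeShared
                    ? new CloudletSchedulerTimeShared()
                    : new CloudletSchedulerSpaceShared();
            // Vm(id, userId, mips, pes, ram, bw, size, vmm, scheduler); values are examples
            vmList.add(new Vm(id, brokerId, 1000, 1, 512, 1000, 10000, "Xen", scheduler));
        }
        return vmList;
    }
}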

Results -
The start time and finish time for each cloud under the timeshare allocation policy are
different but they are the same under the space sharing method, as seen in the two
tables below. Furthermore, time-shared cloudlets require more time to run than
space-shared cloudlets do.

The time-shared allocation policy was used and the results are given in the table below.
Cloudlet ID  Status  Datacenter ID  VM ID  Time  Start Time  Finish Time

0 SUCCESS 2 0 3 0.2 3.2

1 SUCCESS 2 1 3.3 0.2 3.5

2 SUCCESS 2 2 3.6 0.2 3.8

3 SUCCESS 3 8 3.6 0.2 3.8

4 SUCCESS 3 9 3.8 0.2 4

5 SUCCESS 2 3 3.9 0.2 4.1

6 SUCCESS 3 10 4 0.2 4.2

7 SUCCESS 2 4 4.2 0.2 4.4

8 SUCCESS 3 11 4.2 0.2 4.4

9 SUCCESS 3 12 4.4 0.2 4.6

10 SUCCESS 2 5 4.5 0.2 4.7

11 SUCCESS 3 13 4.6 0.2 4.8

12 SUCCESS 2 6 4.8 0.2 5

13 SUCCESS 3 14 4.8 0.2 5

14 SUCCESS 3 15 5 0.2 5.2


15 SUCCESS 2 7 5.1 0.2 5.3

16 SUCCESS 3 8 5.2 0.2 5.4

17 SUCCESS 3 9 5.4 0.2 5.6

18 SUCCESS 3 10 5.6 0.2 5.8

19 SUCCESS 3 11 5.8 0.2 6

20 SUCCESS 3 12 6 0.2 6.2

21 SUCCESS 2 0 6.2 0.2 6.4

22 SUCCESS 3 13 6.2 0.2 6.4

23 SUCCESS 3 14 6.4 0.2 6.6

24 SUCCESS 2 1 6.51 0.2 6.71

25 SUCCESS 3 15 6.6 0.2 6.8

26 SUCCESS 2 2 6.8 0.2 7

27 SUCCESS 2 3 7.1 0.2 7.3

28 SUCCESS 2 4 7.4 0.2 7.6

29 SUCCESS 2 5 7.7 0.2 7.9

30 SUCCESS 2 0 7.81 0.2 8.01


31 SUCCESS 2 6 8 0.2 8.2

32 SUCCESS 2 1 8.11 0.2 8.31

33 SUCCESS 2 7 8.3 0.2 8.5

34 SUCCESS 2 2 8.41 0.2 8.61

35 SUCCESS 2 3 8.7 0.2 8.9

36 SUCCESS 2 4 9 0.2 9.2

37 SUCCESS 2 5 9.3 0.2 9.5

38 SUCCESS 2 6 9.6 0.2 9.8

39 SUCCESS 2 7 9.9 0.2 10.1

The space-shared allocation policy was used and the results are given in the table below.

Cloudlet ID  Status  Datacenter ID  VM ID  Time  Start Time  Finish Time

0 SUCCESS 2 0 1 0.2 1.2

1 SUCCESS 2 1 1.11 0.2 1.31

2 SUCCESS 2 2 1.22 0.2 1.42

3 SUCCESS 2 3 1.33 0.2 1.53


4 SUCCESS 2 4 1.44 0.2 1.64

5 SUCCESS 2 5 1.55 0.2 1.75

6 SUCCESS 2 6 1.66 0.2 1.86

7 SUCCESS 2 7 1.77 0.2 1.97

8 SUCCESS 3 8 1.8 0.2 2

9 SUCCESS 3 9 1.91 0.2 2.11

10 SUCCESS 3 10 2.02 0.2 2.22

11 SUCCESS 3 11 2.13 0.2 2.33

12 SUCCESS 3 12 2.24 0.2 2.44

13 SUCCESS 3 13 2.35 0.2 2.55

14 SUCCESS 3 14 2.46 0.2 2.66

15 SUCCESS 3 15 2.57 0.2 2.77

16 SUCCESS 2 0 2.6 1.2 3.8

17 SUCCESS 2 1 2.7 1.31 4.01

18 SUCCESS 2 2 2.8 1.42 4.22

19 SUCCESS 2 3 2.9 1.53 4.43


20 SUCCESS 2 4 3 1.64 4.64

21 SUCCESS 2 5 3.1 1.75 4.85

22 SUCCESS 2 6 3.2 1.86 5.06

23 SUCCESS 2 7 3.3 1.97 5.27

24 SUCCESS 3 8 3.4 2 5.4

25 SUCCESS 3 9 3.5 2.11 5.61

26 SUCCESS 3 10 3.6 2.22 5.82

27 SUCCESS 3 11 3.7 2.33 6.03

28 SUCCESS 3 12 3.8 2.44 6.24

29 SUCCESS 3 13 3.9 2.55 6.45

30 SUCCESS 3 14 4 2.66 6.66

31 SUCCESS 3 15 4.1 2.77 6.87

32 SUCCESS 2 0 4.2 3.8 8

33 SUCCESS 2 1 4.3 4.01 8.31

34 SUCCESS 2 2 4.4 4.22 8.62

35 SUCCESS 2 3 4.5 4.43 8.93


36 SUCCESS 2 4 4.6 4.64 9.24

37 SUCCESS 2 5 4.7 4.85 9.55

38 SUCCESS 2 6 4.8 5.06 9.86

39 SUCCESS 2 7 4.9 5.27 10.17


2A. Introduction

FogWorkflowSim is an automated simulation toolkit for workflow performance evaluation in Fog computing.

Installation:
Steps to install the FogWorkflowSim tool are as follows:
a. Download the zip file containing the tool from the GitHub link https://github.com/ISEC-AHU/FogWorkflowSim.
b. Extract/unzip the file into a separate directory/folder.
c. Create a new Java project in the Eclipse IDE by navigating to File -> New -> Java Project.
d. In the new project window, enter the project name FogWorkflowSim, unselect the 'Use default location' option, and browse to the location where the FogWorkflowSim tool was extracted. Click on Next to add project settings.
e. Open the Libraries tab and check whether the Commons Math library is present. It should already be there, since it was added while installing CloudSim. If it is not, add it by clicking on Add External JARs and selecting the commons-math3-3.x.jar file from the unzipped Commons Math folder downloaded earlier.
f. Click on Finish to complete the installation of FogWorkflowSim.

FogWorkflowSim Architecture

FogWorkflowSim is built on the modeling of a Fog Computing environment combined with a workflow system. Rather than developing these features from scratch, FogWorkflowSim inherits them from iFogSim and WorkflowSim; the resource allocation was adjusted and the iFogSim and WorkflowSim routines were integrated. The Fog Computing environment layer, the workflow system layer, and the resource management layer are the three layers of FogWorkflowSim depicted in the architecture diagram. Each layer is in charge of a certain task to help the other layers operate more efficiently. A description of each layer is given below.
1. Fog Computing environment layer
The Fog Computing environment is made up of three levels: the End Device layer, the Fog Node layer, and the Cloud Server layer. The FogDevice class is used to represent all the different kinds of resources in the fog. By altering hardware parameters such as processing power, storage capacity, and uplink/downlink bandwidth, it can be used to emulate different devices. Additionally, it offers the interface for controlling and distributing these hardware resources. Methods for simulating the allocated workflow tasks and the computing and storage resources are also included in this class.
2. Workflow system layer
The workflow system layer is made up of the Planner module, the Parser module, the Cluster module, the Engine module, and the Scheduler module. The Planner module controls the launch of the simulation. The Parser module is in charge of converting the input workflow file's XML format into the Task class, which the system uses to represent workflow tasks (see the parsing sketch after this list). The Cluster module is in charge of grouping several tasks into a job according to a chosen clustering technique. The Engine module submits jobs to the Scheduler module according to the dependencies between tasks, reschedules unsuccessful tasks, and stops the simulation once all tasks are completed. The Scheduler module is the most crucial one: it serves as the entry point of the workflow scheduling algorithm, and it is also responsible for the creation of Virtual Machines (VMs) and the submission, update, and return of workflow tasks.
3. Resource management layer
The resource management layer is made up of the Resource module, the Offloading module, the Scheduling module, and the Controller module. The Resource module is a virtualized pool of the various kinds of computing and storage resources. The Offloading and Scheduling modules are extensible libraries providing a variety of task scheduling and computation offloading strategies. The Controller module serves as the foundation for the models of the various performance indicators used to assess the effectiveness of the workflow applications being executed.
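As a conceptual illustration of what the Parser module does, the sketch below reads job elements from a DAX-style XML workflow file into a simple task list using the standard Java DOM API. The SimpleTask class and the element/attribute names are assumptions for illustration only; this is not the actual WorkflowSim/FogWorkflowSim parser.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical minimal task representation (not the WorkflowSim Task class).
class SimpleTask {
    final String id;
    final double runtime;
    SimpleTask(String id, double runtime) { this.id = id; this.runtime = runtime; }
}

public class DaxParserSketch {
    // Parses <job id="..." runtime="..."/> elements from a DAX-like XML file.
    static List<SimpleTask> parse(String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(path));
        NodeList jobs = doc.getElementsByTagName("job");

        List<SimpleTask> tasks = new ArrayList<SimpleTask>();
        for (int i = 0; i < jobs.getLength(); i++) {
            Element job = (Element) jobs.item(i);
            tasks.add(new SimpleTask(job.getAttribute("id"),
                    Double.parseDouble(job.getAttribute("runtime"))));
        }
        return tasks;
    }
}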
Class Diagram

Broker/Agent: primarily in charge of assigning workloads (jobs and tasks) among the network's computing resources.

Controller: This class is responsible for handling the simulation of application module
allocation, which comprises simulating over fog devices and processing tuples via
modules. The controller class will plan the placement of AppModules on network nodes
and manage their overall operation.

Data Centers: infrastructure services are easy to model. A datacenter contains a number of homogeneous or heterogeneous hosts and servers, depending on the hardware selection. The configuration of datacenter resources, and all of the properties of a datacenter, can be easily modeled and viewed.

VmScheduler: schedules a policy to assign processor cores to virtual machines while modeling the shared processor pool.

DatacentreBroker: represents a broker acting on behalf of a user. It hides VM management operations such as VM creation, cloudlet submission to these VMs, and VM destruction.
WorkflowPlanner: supports dynamic planning; global and static algorithms are planned for the future. WorkflowPlanner is where the WorkflowSim simulation begins, and a planning algorithm is chosen based on the configuration.
Workflow Engine: manages jobs based on the dependencies between them, ensuring that a job can only be released once all of its parent jobs have successfully completed. The Workflow Engine sends only unclaimed jobs to the Scheduler. In the actual traces that we examined, DAGMan served as the Workflow Engine.
Experiment
The existing algorithms Min-Min, Max-Min, FCFS, RoundRobin, PSO, and GA are compared for the different objective functions: time, energy, and cost. We use the parameter settings below for 2 Fog Nodes, 3 Cloud Servers, and 1 End Device.

Parameters                      Mobile Device   Fog Device   Cloud Server
MIPS                            1000            1300         1600
Working Power (mW)              700             0            0
Idle Power (mW)                 30              0            0
Data Transmission Power (mW)    100             0            0
Data Receiving Power (mW)       25              0            0
Task Execution Cost ($)         0               0.48         0.96
Uplink and downlink bandwidths (End Device layer): 20 Mbps and 40 Mbps

Fig: MIPS and cost parameter settings for 3 cloud servers, 2 fog nodes, and 1 end device

For the PSO algorithm, the number of particles is 30, the learning factors C1 and C2 are both 2, and the inertia weight is 1. For the GA algorithm, the population size is 50, and the crossover and mutation rates are 0.8 and 0.1, respectively. The number of iterations for both PSO and GA is 100.
Fig: Algorithm settings for PSO

Fig: Algorithm settings for GA
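For reference, the standard PSO velocity and position update that these parameters control is sketched below. This is a generic illustration in Java, not the FogWorkflowSim implementation; the class and method names are placeholders.

import java.util.Random;

// Generic PSO update step for one particle in a 1-D search space,
// using the parameter values reported above (w = 1, c1 = c2 = 2).
public class PsoStepSketch {
    static final double W = 1.0, C1 = 2.0, C2 = 2.0;
    static final Random RNG = new Random();

    // Returns the new velocity given the current state of the particle.
    static double updateVelocity(double velocity, double position,
                                 double personalBest, double globalBest) {
        double r1 = RNG.nextDouble();
        double r2 = RNG.nextDouble();
        return W * velocity
                + C1 * r1 * (personalBest - position)   // cognitive component
                + C2 * r2 * (globalBest - position);    // social component
    }

    static double updatePosition(double position, double velocity) {
        return position + velocity;
    }
}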


Results
After running all the existing algorithms (Min-Min, Max-Min, FCFS, RoundRobin, PSO, and GA) with the parameters mentioned above, we obtain the following results for workflow makespan (time), energy consumed, and total cost for each algorithm. The table below compares the results of all the algorithms (rounded off to 3 decimals).

Algorithm    Workflow Makespan   Energy Consumed (J)   Total Cost
FCFS         452.264             13.642                582.822
RoundRobin   448.841             27.649                499.625
MaxMin       420.206             26.750                511.166
MinMin       118.518             6.932                 83.365
PSO          412.994             19.739                504.030
GA           94.242              6.204                 83.368

2B. Introduction

Cloudlets are tasks/jobs, and Virtual Machines are resources. In two different scenarios,
we tested the algorithms' performance: in the first, we fixed the number of virtual
machines while varying the number of cloudlets; in the second, we fixed the number of
cloudlets while varying the number of virtual machines. Tables displaying the
makespans that the algorithms generate have been shown together with the associated
graphs.
In the first scenario, the number of virtual machines is fixed at 10, while the number of cloudlets is varied in steps of 10, from 10 to 40.
VMs fixed: 10, cloudlets varying

Method Used         10     20     30     40
Improved Genetic    8      26.1   60.9   113.5
Standard Genetic    12.4   44.7   86.5   146.8

Table: Makespans for fixed VMs and varying cloudlets
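As a rough illustration of how a genetic scheduler scores a candidate schedule in this setting, the sketch below computes the makespan of a cloudlet-to-VM assignment, taking each cloudlet's run time as its length divided by the MIPS of its assigned VM. The encoding and the class name are assumptions for illustration, not the implementation of the algorithms compared above.

// Hypothetical fitness evaluation for a GA-based scheduler:
// assignment[i] = index of the VM that cloudlet i is mapped to.
public class MakespanSketch {
    static double makespan(double[] cloudletLengths, double[] vmMips, int[] assignment) {
        double[] vmFinishTime = new double[vmMips.length];
        for (int i = 0; i < cloudletLengths.length; i++) {
            int vm = assignment[i];
            vmFinishTime[vm] += cloudletLengths[i] / vmMips[vm];  // sequential execution per VM
        }
        double max = 0;
        for (double t : vmFinishTime) {
            max = Math.max(max, t);   // makespan = latest VM finish time
        }
        return max;                   // the GA minimises this value
    }
}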

You might also like