
Ex No. 1 Install VirtualBox / VMware Workstation with different flavours of Linux or
Windows OS on top of Windows 7 or 8

Aim

To install VirtualBox / VMware Workstation with different flavours of Linux or Windows
OS on top of Windows 7 or 8.

Introduction

❖ Virtualization:
• Virtualization is the creation of virtual servers, infrastructures,
devices and computing resources.
• Virtualization changes the hardware-software relationship and is
one of the foundational elements of cloud computing
technology, helping utilize the capabilities of cloud computing
to the full.
• Virtualization techniques allow companies to virtualize their
networks, storage, servers, data, desktops and applications.

❖ Hypervisor or Virtual Machine Monitor (VMM)


A hypervisor or virtual machine monitor (VMM) is a piece
of computer software, firmware or hardware that creates and runs
virtual machines. A computer on which a hypervisor runs one or
more virtual machines is called a host machine, and each virtual
machine is called a guest machine. The hypervisor presents the
guest operating systems with a virtual operating platform and
manages the execution of the guest operating systems. Multiple
instances of a variety of operating systems may share the
virtualized hardware resources.

❖ Types of Virtualization
• Operating-system-level virtualization - is a server-
virtualization method where the kernel of an operating system
allows for multiple isolated user-space instances, instead of
just one. Such instances (sometimes called containers,
software containers, virtualization engines (VE), virtual
private servers (VPS), or jails) may look and feel like a real
server from the point of view of its owners and users.
• Platform / Hardware virtualization - Hardware virtualization
or platform virtualization refers to the creation of a virtual
machine that acts like a real computer with an operating
system. Software executed on these virtual machines is
separated from the underlying hardware resources. For
example, a computer that is running Microsoft Windows may
host a virtual machine that looks like a computer with the
Ubuntu Linux operating system; Ubuntu-based software can
be run on the virtual machine.
• In hardware virtualization, the host machine is the actual
machine on which the virtualization takes place, and the guest
machine is the virtual machine. The words host and guest are
used to distinguish the software that runs on the physical
machine from the software that runs on the virtual machine.
Different types of hardware virtualization include:

o Full virtualization: Almost complete simulation
of the actual hardware to allow software, which
typically consists of a guest operating system, to
run unmodified.
o Partial virtualization: Some but not all of the
target environment is simulated. Some guest
programs, therefore, may need modifications to
run in this virtual environment.
o Paravirtualization: A hardware environment is not
simulated; however, the guest programs are executed
in their own isolated domains, as if they are running
on a separate system. Guest programs need to be
specifically modified to run in this environment.
• Application virtualization is software technology that
encapsulates computer programs from the underlying operating
system on which they are executed. A fully virtualized application is
not installed in the traditional sense, although it is still executed as
if it were.

❖ Oracle Virtualbox

o VirtualBox is a general-purpose full virtualizer for x86
hardware, targeted at server, desktop and embedded use. Each
virtual machine can execute its own operating system,
including versions of Microsoft Windows, Linux, BSD, and
MS-DOS. VirtualBox is developed by Oracle, while VMware
Workstation is developed and sold by VMware, Inc., a
division of EMC Corporation.

❖ Ubuntu

o Ubuntu is an operating system like any other, and it is free and
open source. This means that we can download it freely and
install it on as many computers as we like. By the term open
source, it means that we can actually see its code. To provide a
more secure environment, the "sudo" tool is used to
assign temporary privileges for performing administrative
tasks. Ubuntu comes installed with a wide range of software
that includes LibreOffice, Firefox and Thunderbird.

Software Requirements:

1. VMware Workstation

2. Windows XP disk image

Procedure :

1. File -> choose Create a New Virtual Machine.

2. The New Virtual Machine Wizard appears -> choose the type of configuration -> here we will
choose Custom -> click Next.

3. Choose -> hardware compatibility -> Workstation 10.0 -> click Next.

4. In the "Install from" option choose -> Installer disc image file.

5. Browse and select the Windows XP disc image file (or any OS image file) -> click Open.

6. Click Next.

7. Type the Windows product key (it comes with the Windows disc image file) -> click Next
(no need to give a password).

8. Change the location from C: to E: -> click Next (do not use the drive on which the host OS is
installed).

9. Change the processors -> number of processors -> 1 and number of cores per processor -> 1 ->
click Next.

10. Change the RAM size to a compatible one -> 512 MB.

11. Choose any one of the options below -> use bridged networking (by default, Network Address
Translation (NAT) is used) -> click Next.

12. I/O controller type -> choose BusLogic -> click Next.

13. Virtual disk type -> select IDE -> click Next.

14. In the disk option choose -> Create a new virtual disk -> click Next.

15. In the disk capacity wizard -> assign the maximum disk size -> 40 GB or above (it can be
changed, for example to 300 GB) -> choose Split virtual disk into multiple files -> click Next.

16. Browse and select -> the Windows XP Professional.vmdk disk file -> click Next.

17. Click the Finish button.


Wait for some time; the installation takes a while. After installation, the Windows XP screen
appears. This is also called the guest OS or virtual machine OS.

Result:

VMware Workstation is installed and a Windows XP guest OS is set up on
top of Windows 8 successfully.
Ex No. 2 Install a C compiler in the virtual machine created using virtual box
and execute Simple Programs

Aim:

To Install a C compiler in the virtual machine and execute a sample program.

Procedure :

1. Follow the steps of the "Find procedure to run the virtual machine of different configuration.
Check how many virtual machines can be utilized at a particular time" exercise.

2. Copy the Turbo C compiler setup or exe file into the guest OS and paste it in the C: drive.

3. Install Turbo C like a normal installation.

4. After installing the C compiler, type and execute the C program.


Source Code:

Sum of two numbers

#include <stdio.h>

int main()
{
    int a, b, c;
    printf("Enter two nos:");
    scanf("%d%d", &a, &b);
    c = 0;
    c = a + b;
    printf("Sum of two nos is: %d", c);
    return 0;
}

Step 3: Compile the Program

Step 4: Run the Program

Expected Output:
Enter two nos: 2 3
Sum of two nos is: 5
Result:

The simple C program is executed with the C compiler in the
virtual machine successfully and the output is verified.
Ex No. Install Google App Engine. Create hello world app and other simple web
3&4 applications using python/java. Use GAE launcher to launch the web applications

Aim
To install Google App Engine, create a hello world app and other simple web applications using
Python/Java, and to use the GAE launcher to launch the web applications.

Procedure
Introduction

➢ Google Cloud Platform (GCP)


o Google Cloud Platform (GCP), offered by Google, is a suite of cloud
computing services that runs on the same infrastructure that Google uses
internally for its end-user products, such as Google Search, Gmail, file storage,
and YouTube.
o Alongside a set of management tools, it provides a series of modular cloud
services including computing, data storage, data analytics and machine learning.
o Google Cloud Platform provides infrastructure as a service, platform as a
service, and serverless computing environments.

➢ Platform as a Service (PaaS)


o Cloud computing service which provides a computing platform and a solution
stack as a service.
o Consumer creates the software using tools and/or libraries from the provider.
o Provider provides the networks, servers, storage, etc.

➢ Google App Engine:

o Google App Engine was first released as a beta version in April 2008.
o It is a Platform as a Service (PaaS) cloud computing platform for
developing and hosting web applications in Google-managed data centers.
o Google's App Engine opens Google's production infrastructure to any person in the world at
no charge.
o Google App Engine is software that facilitates the user to run his web
applications on Google infrastructure.
o It is more reliable because failure of any server will not affect either the
performance of the end user or the service of Google.
o It virtualizes applications across multiple servers and data centers.
▪ Other cloud-based platforms include offerings such as Amazon Web
Services and Microsoft's Azure Services Platform.
➢ Introduction of Google App Engine
• Google App Engine lets you run your web applications on Google's
infrastructure. App Engine applications are easy to build, easy to maintain, and
easy to scale as your traffic and data storage needs grow. With App Engine,
there are no servers to maintain: You just upload your application, and it's ready
to serve your users.
• You can serve your app from your own domain name (such as
https://www.example.com/) using Google Apps. Or, you can serve your app
using a free name on the appspot.com domain. You can share your application
with the world, or limit access to members of your organization.
• Google App Engine supports apps written in several programming languages.
With App Engine's Java runtime environment, you can build your app using
standard Java technologies, including the JVM, Java servlets, and the Java
programming language—or any other language using a JVM-based interpreter
or compiler, such as JavaScript or Ruby. App Engine also features a dedicated
Python runtime environment, which includes a fast Python interpreter and the
Python standard library. The Java and Python runtime environments are built to
ensure that your application runs quickly, securely, and without interference
from other apps on the system.
• With App Engine, you only pay for what you use. There are no set-up costs and
no recurring fees. The resources your application uses, such as storage and
bandwidth, are measured by the gigabyte, and billed at competitive rates. You
control the maximum amounts of resources your app can consume, so it always
stays within your budget. App Engine costs nothing to get started. All
applications can use up to 500 MB of storage and enough CPU and bandwidth
to support an efficient app serving around 5 million page views a month,
absolutely free. When you enable billing for your application, your free limits
are raised, and you only pay for resources you use above the free levels.
➢ Architecture of Google App Engine

➢ Features of Google App Engine


➢ GAE Application Environment:

• Google App Engine makes it easy to build an application that runs reliably, even
under heavy load and with large amounts of data. App Engine includes the
following features:
• Persistent storage with queries, sorting and transactions
• Automatic scaling and load balancing
• APIs for authenticating users and sending email using Google Accounts
• Task queues for performing work outside of the scope of a web request
• Scheduled tasks for triggering events at specified times and regular intervals
• Dynamic web serving, with full support for common web technologies

➢ Java Runtime Environment

• You can develop your application for the Java runtime environment using
common Java web development tools and API standards. Your app interacts
with the environment using the Java Servlets standard, and can use common
web application technologies such as Java Server Pages
• The Java runtime environment uses Java 6. The App Engine Java SDK supports
developing apps using either Java 5 or 6. The environment includes the Java SE
Runtime Environment (JRE) 6 platform and libraries. The restrictions of the
sandbox environment are implemented in the JVM. An app can use any JVM
byte code or library feature, as long as it does not exceed the sandbox
restrictions. For instance, byte code that attempts to open a socket or write to a
file will throw a runtime exception.
• Your app accesses most App Engine services using Java standard APIs. For the
App Engine data store, the Java SDK includes implementations of the Java Data
Objects (JDO) and Java Persistence API (JPA) interfaces. Your app can use the
JavaMail API to send email messages with the App Engine Mail service. The
java.net HTTP APIs access the App Engine URL fetch service.
• App Engine also includes low-level APIs for its services to implement
additional adapters, or to use directly from the application. See the
documentation for the data store, memcache, URL fetch, mail, images and
Google Accounts APIs. Typically, Java developers use the Java programming
language and APIs to implement web applications for the JVM. With the use
of JVM-compatible compilers or interpreters, you can also use other languages
to develop web applications, such as JavaScript, Ruby.
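
The bullets above note that a Java app interacts with App Engine through the Java Servlets
standard. As an illustrative sketch only (not code taken from the App Engine SDK samples), a
minimal servlet might look like the following; the class name HelloWorldServlet and its URL
mapping in WEB-INF/web.xml are assumptions made here for the example.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet class; it must be mapped to a URL pattern in WEB-INF/web.xml
// before App Engine (or any servlet container) will route requests to it.
public class HelloWorldServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Write a plain-text response back to the client.
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello, World from the App Engine Java runtime");
    }
}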

➢ Workflow of Google App Engine


Step1 : Login to www.cloud.google.com

Step2 : Goto Console


Step 3 : Google Cloud Platform is shown

Step 4 : Click Dashboard in the Google Cloud Platform


Step 5 : Dashboard in the Google Cloud Platform

Step 6 : Click New Project and give unique Project Name.

Example : kcet-cloud-project
Step 7 : Google App Engine is initiated

Step 8 : Click create Application


Step 9 : Create app and Select Language Python

Step 10 : Python app is created in Google App Engine


Step 11 : Python app Engine application is created

Step 12 : Click Cloud Shell in the created project (here, kcet-cloud-project)


Step 13 : Create a Directory PythonProject using mkdir command

Syntax : mkdir PythonProject

Step 14 : Click Editor to create Python application


Step 15 : Click New File in the PythonProject Folder (Python file)

Step 16 : Create main.py file


main.py file

import logging

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World'

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)

Step 17 : Create app.yaml file

app.yaml

runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
python_version: 3
Step 18 : Create requirements.txt file

requirements.txt

Flask==0.11.1
gunicorn==19.6.0

Step 19 : Move to the Cloud Shell environment to run the application


Step 20 : Deploy the application from the Cloud Shell

Syntax : gcloud app deploy

Continue the deployment. It enables the App Engine service on the given project.

It then starts building the application, fetching the storage objects for the created application,
and updating the service.
The application is successfully deployed and URL is

https://expanded-curve-289413.uc.r.appspot.com
Step 21 : Run your program in the browser

Step 22 : Hello World Program is successfully run in the browser

Result:

Thus Google App Engine is set up successfully, a web application to display hello world is
developed using Python and deployed in GAE, and the GAE launcher is used to launch the web
application.
Ex No. 5 a Simulate a cloud scenario using CloudSim

Aim
To Simulate a cloud scenario using CloudSim

Procedure

Introduction:

❖ CloudSim
• A Framework for modeling and simulation of Cloud Computing
Infrastructures and services
• Originally built at the Cloud Computing and Distributed Systems (CLOUDS)
Laboratory, The University of Melbourne, Australia
• It is completely written in Java
❖ Main Features of CloudSim
o Modeling and simulation
o Data centre network topologies and message-passing applications
o Dynamic insertion of simulation elements
o Stop and resume of simulation
o Policies for allocation of hosts and virtual machines
❖ Cloudsim – Essentials
• JDK 1.6 or above http://tinyurl.com/JNU-JAVA
• Eclipse 4.2 or above http://tinyurl.com/JNU-Eclipse
• Alternatively NetBeans: https://netbeans.org/downloads
• Up & Running with cloudsim guide: https://goo.gl/TPL7Zh
❖ Cloudsim-Directory structure
• cloudsim/ -- top level CloudSim directory
• docs/ -- CloudSim API Documentation
• examples/ -- CloudSim examples
• jars/ -- CloudSim jar archives
• sources/ -- CloudSim source code
❖ Cloudsim - Layered Architecture
❖ Cloudsim - Component model classes
o CloudInformationService.java
o Datacenter.java, Host.java, Pe.java
o Vm.java, Cloudlet.java
o DatacenterBroker.java
o Storage.java, HarddriveStorage.java, SanStorage.java

❖ Cloudsim - Major blocks/Modules


o org.cloudbus.cloudsim
o org.cloudbus.cloudsim.core
o org.cloudbus.cloudsim.core.predicates
o org.cloudbus.cloudsim.distributions
o org.cloudbus.cloudsim.lists
o org.cloudbus.cloudsim.network
o org.cloudbus.cloudsim.network.datacenter
o org.cloudbus.cloudsim.power
o org.cloudbus.cloudsim.power.lists
o org.cloudbus.cloudsim.power.models
o org.cloudbus.cloudsim.provisioners
o org.cloudbus.cloudsim.util

❖ Cloudsim - key components


o Datacenter
o DataCenterCharacteristics
o Host
o DatacenterBroker
o RamProvisioner
o BwProvisioner
o Storage
o Vm
o VmAllocationPolicy
o VmScheduler
o Cloudlet
o CloudletScheduler
o CloudInformationService
o CloudSim
o CloudSimTags
o SimEvent
o SimEntity
o CloudSimShutdown
o FutureQueue
o DeferredQueue
o Predicate and associative classes.
CloudSim Elements/Components
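
To show how the key components listed above fit together, the following is a hedged sketch of a
minimal CloudSim 3.0.x scenario (one datacenter with one host, one broker, one VM and one
cloudlet). It is written against the public CloudSim 3.0.3 API but is not the CloudsimExample1
shipped with the toolkit; the class name and all parameter values are illustrative assumptions.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSimScenario {
    public static void main(String[] args) throws Exception {
        // Initialise the simulation kernel: 1 cloud user, current calendar, no trace events.
        CloudSim.init(1, Calendar.getInstance(), false);

        // One host with a single 1000-MIPS processing element, 2 GB RAM, 10000 units of bandwidth.
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048), new BwProvisionerSimple(10000),
                1000000, peList, new VmSchedulerTimeShared(peList)));

        // Datacenter characteristics: architecture, OS, VMM, hosts, time zone and cost model.
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // The broker mediates between the user and the datacenter.
        DatacenterBroker broker = new DatacenterBroker("Broker");

        // One VM and one cloudlet, both submitted through the broker.
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                "Xen", new CloudletSchedulerTimeShared());
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(broker.getId());

        broker.submitVmList(Arrays.asList(vm));
        broker.submitCloudletList(Arrays.asList(cloudlet));

        // Run the discrete-event simulation and report which VM executed the cloudlet.
        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            Log.printLine("Cloudlet " + c.getCloudletId() + " finished on VM " + c.getVmId());
        }
    }
}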

Procedure to import Eclipse, Cloudsim in your system

Step 1: Link to download Eclipse and download Eclipse for Windows 64bit into your Local
machine

https://www.eclipse.org/downloads/packages/release/kepler/sr1/eclipse-ide-
java-developers

Windows x86_64
Step 2: Download cloudsim-3.0.3 from git hub repository in your local machine

https://github.com/Cloudslab/cloudsim/releases/tag/cloudsim-3.0.3

Cloudsim-

Step 3: Download commons-maths3-3.6.1 from git hub repository in your local machine

https://commons.apache.org/proper/commons-math/download_math.cgi

Commons-
maths3-3.6.1-
bin.zip

Step 4: Download Eclipse, cloudsim-3.0.3 and Apache Commons Math 3.6.1 to
your local machine, and extract cloudsim-3.0.3 and Apache Commons Math 3.6.1
Downloaded Files

Step 5: First of all, navigate to the folder where you have unzipped the eclipse folder and
open Eclipse.exe

Step 6: Now within the Eclipse window navigate the menu: File -> New -> Project, to open the
new project wizard
Step 7: A 'New Project' wizard should open. There are a number of options displayed and
you have to find & select the 'Java Project' option; once done, click 'Next'

Step 8: Now a detailed new project window will open, here you will provide the project name and
the path of CloudSim project source code, which will be done as follows:

Project Name: CloudSim.


Step 9: Unselect the 'Use default location' option and then click on 'Browse' to open the path
where you have unzipped the CloudSim project, and finally click Next to set the project settings.

Step 10: Make sure you navigate the path till you can see the bin, docs, examples etc. folders in
the navigation pane.
Step 11: Once done, finally click 'Next' to go to the next step, i.e. setting up of the project
settings

Step 12: Now open the 'Libraries' tab and, if you do not find commons-math3-3.x.jar (here 'x'
means the minor version release of the library, which could be 2 or greater) in the list, then
simply click on 'Add External JARs' (commons-math3-3.x.jar will be included in the project
from this step)
Step 13: Once you have clicked on 'Add External JARs', open the path where you have
unzipped the commons-math binaries, select 'commons-math3-3.x.jar' and click on Open.

Step 14: Ensure the external jar that you opened in the previous step is displayed in the list and
then click on 'Finish' (your system may take 2-3 minutes to configure the project)
Step 15: Once the project is configured you can open the 'Project Explorer' and start exploring
the CloudSim project. Also, for the first time, Eclipse automatically starts building the workspace
for the newly configured CloudSim project, which may take some time depending on the
configuration of the computer system.

The following is the final screen which you will see after CloudSim is configured.
Step 16: Now, just to check, within the 'Project Explorer' navigate to the
'examples' folder, then expand the package 'org.cloudbus.cloudsim.examples' and double
click to open 'CloudsimExample1.java'
Step 17: Now navigate to the Eclipse menu 'Run -> Run' or directly use the keyboard
shortcut 'Ctrl + F11' to execute 'CloudsimExample1.java'.

Step 18: If it is successfully executed, it should display the following type of output in
the console window of the Eclipse IDE.
Result:

Thus CloudSim is simulated using the Eclipse environment successfully.


Ex No. 5 b Simulate a cloud scenario using CloudSim and running a scheduling
algorithm

Aim
To Simulate a cloud scenario using CloudSim and running a scheduling algorithm

Procedure to import Eclipse, running scheduling algorithms in your system

Step 1: Link to download Eclipse and download Eclipse for Windows 64bit into your Local
machine

https://www.eclipse.org/downloads/packages/release/kepler/sr1/eclipse-ide-
java-developers

Windows x86_64

Step 2: Download scheduling source code cloudsim-code-master from git hub repository in
your local machine

https://github.com/shiro873/Cloudsim-Code

Src-scheduling source
files
Step 3: Download commons-maths3-3.6.1 from git hub repository in your local machine

https://commons.apache.org/proper/commons-math/download_math.cgi

Commons-maths3-
3.6.1-bin.zip

Step 4: Download Eclipse, the Cloudsim-Code-master source and Apache Commons Math 3.6.1 to
your local machine, and extract Cloudsim-Code-master and Apache Commons Math 3.6.1

Downloaded Files

Step 5: First of all, navigate to the folder where you have unzipped the eclipse folder and
open Eclipse.exe
Step 6: Now within the Eclipse window navigate the menu: File -> New -> Project, to open the
new project wizard

Step 7: A 'New Project' wizard should open. There are a number of options displayed and
you have to find & select the 'Java Project' option; once done, click 'Next'
Step 8: Now a detailed new project window will open, here you will provide the project name
and the path of CloudSim-master-code project source code, which will be done as follows:

Project Name: CloudSim


Step 9: Unselect the 'Use default location' option and then click on 'Browse' to open the path
where you have unzipped the Cloudsim-code-master project, and finally click Next to set the project
settings.

Step 10: Make sure you navigate the path till you can see the bin, docs, examples etc. folders in
the navigation pane.

Step 11: Once done, finally click 'Next' to go to the next step, i.e. setting up of the project
settings
Step 12: Once the project is configured you can open the 'Project Explorer' and start
exploring the CloudSim project. Also, for the first time, Eclipse automatically starts building the
workspace for the newly configured CloudSim project, which may take some time depending on
the configuration of the computer system.

The following is the final screen which you will see after CloudSim is configured.

Step 13: Now, just to check, within the 'Project Explorer' navigate to the
'src' folder, then expand the 'default package' and double click to open
'RoundRobin.java'.
Step 14: Now navigate to the Eclipse menu 'Run -> Run' or directly use the keyboard shortcut
'Ctrl + F11' to execute 'RoundRobin.java'. If it is successfully executed, it should display
the following type of output in the console window of the Eclipse IDE.
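
The actual scheduling logic lives in the downloaded RoundRobin.java and is not reproduced here.
Purely as an illustrative sketch of the round-robin idea (the class and method names below are
assumptions, not the repository's code), cloudlets can be pinned to VMs in rotating order with the
broker's bindCloudletToVm() call before the simulation is started:

import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.Vm;

// Hypothetical helper: assign cloudlets to VMs in round-robin order.
// Call it after submitting the VM and cloudlet lists and before CloudSim.startSimulation().
public class RoundRobinBinder {
    public static void bind(DatacenterBroker broker, List<Vm> vms, List<Cloudlet> cloudlets) {
        int vmIndex = 0;
        for (Cloudlet cloudlet : cloudlets) {
            // Pin this cloudlet to the next VM in the rotation.
            broker.bindCloudletToVm(cloudlet.getCloudletId(), vms.get(vmIndex).getId());
            vmIndex = (vmIndex + 1) % vms.size();   // wrap back to the first VM
        }
    }
}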
Result:

Thus the scheduling algorithm is executed in CloudSim using the Eclipse
environment successfully.
Ex No. 6 Procedure to File Sharing from one virtual machine to another virtual machine

Aim:
To transfer a file one VM to another Virtual Machine

Procedure:

1. Choose the virtual machine and select Player > Manage > Virtual Machine Settings

2. Go to the Options tab and select the Shared Folders option

3. Under Folder sharing, choose a sharing option. Two options are available:
• Always enabled – keeps folder sharing enabled, even when the virtual machine is shut
down, suspended, or powered off.
• Enabled until next power off or suspend – enable folder sharing temporarily, until the
virtual machine is shut down, suspended, or powered off. Note that if the virtual machine is
restarted, shared folders will remain enabled.
4. (Optional) You can also map the drive to the Shared Folders directory that contains all shared
folders. To do that, select the Map as a network drive in Windows guests option.
5. Click Add to add a shared folder

6. The Add Shared Folder Wizard opens. Click Next to Continue:

7. Type the path on the host system to the directory you want to share and specify its name:
8. Select shared folder options:

• Enable this share – enables the shared folder. Deselect this option when you want to
disable the shared folder without deleting it from the virtual machine configuration.
• Read-only – makes the shared folder read-only. The virtual machine can view and copy
files from the shared folder, but it cannot add, change, or remove files.

View a shared folder


A shared folder appears differently, depending on the guest operating system type and version. To view a
shared folder in a Windows guest OS, go to Windows Explorer > Network. You should see a list of
computers available on the network, including a computer named vmware-host:

Double click vmware-host and browse to the shared folder:

Result:
Thus the procedure to share the files among the host and guest VM is done and verified.
Ex No. 7 Find a procedure to launch a virtual machine using OpenStack

Aim
To find a procedure to launch a virtual machine using OpenStack

Introduction:
❖ OpenStack was introduced by Rackspace and NASA in July 2010.
❖ OpenStack is an Infrastructure as a Service, also known as a cloud operating system, that
takes resources such as compute, storage, network and virtualization technologies and
controls those resources at a data center level
❖ The project is building an open source community - to share resources and technologies
with the goal of creating a massively scalable and secure cloud infrastructure.
❖ The software is open source and is not locked to proprietary APIs such as Amazon's.

The following figure shows the OpenStack architecture

OpenStack architecture
• It is modular architecture
• Designed to easily scale out
• Based on (growing) set of core services
The major components are
1. Keystone
2. Nova
3. Glance
4. Swift
5. Neutron (formerly Quantum)
6. Cinder
• KEYSTONE :
o Identity service
o Common authorization framework
o Manage users, tenants and roles
o Pluggable backends (SQL,PAM,LDAP, IDM etc)

• NOVA
o Core compute service comprised of
▪ Compute Nodes – hypervisors that run virtual machines
• Supports multiple hypervisors KVM,Xen,LXC,Hyper-V
and ESX
▪ Distributed controllers that handle scheduling, API calls, etc
• Native OpenStack API and Amazon EC2 compatible
API
• GLANCE
o Image service
o Stores and retrieves disk images (Virtual machine templates)
o Supports RAW,QCOW,VHD,ISO,OVF & AMI/AKI
o Backend Storage : File System, Swift, Gluster, Amazon S3
• SWIFT
o Object Storage service
o Modeled after Amazon's S3 service
o Provides simple service for storing and retrieving arbitrary data
o Native API and S3 compatible API
• NEUTRON
o Network service
o Provides framework for Software Defined Network
o Plugin architecture
▪ Allows integration of hardware and software based network
solutions
• Open vSwitch, Cisco UCS,Standard Linux
Bridge,NiCira NVP
• CINDER
o Block Storage (Volume) service
o Provides block storage for Virtual machines(persistent disks)
o Similar to Amazon EBS service
o Plugin architecture for vendor extensions
▪ NetApp driver for cinder
• HORIZON
o Dashboard
o Provides simple self service UI for end-users
o Basic cloud administrator functions
▪ Define users, tenants and quotas
▪ No infrastructure management
• HEAT OpenStack Orchestration
o Provides template driven cloud application orchestration
o Modeled after AWS Cloud Formation
o Targeted to provide advanced functionality such as high availability
and auto scaling
o Introduced by Red Hat
• CEILOMETER – OpenStack Monitoring and Metering
o Goal: To Provide a single infrastructure to collect measurements from
an entire OpenStack Infrastructure; Eliminate need for multiple agents
attaching to multiple OpenStack Projects
o Primary targets metering and monitoring: Provided extensibility

❖ Steps in Installing Openstack


Step 1:
• Download and Install Oracle Virtual Box latest version & Extension
package
o https://virtualbox.org/wiki/downloads

Step 2:

• Download CentOS 7 OVA(Open Virtual Appliance) from


o Link : https://linuxvmimages.com/images/centos-7
• Import CentOS 7 OVA(Open Virtual Appliance) into Oracle Virtual Box
Step 3: Login into CentOS 7

• Login Details
o User name : centos
o Password : centos
• To change into root user in Terminal

# sudo su -

Step 4: Installation Steps for OpenStack

Step5: Command to disable and stop firewall

# systemctl disable firewalld

#systemctl stop firewalld


Step 6: Command to disable and stop Network Manager

# systemctl disable NetworkManager

# systemctl stop NetworkManager

Step 7: Enable and start Network

#systemctl enable network

#systemctl start network


Step 8: OpenStack will be deployed on your Node with the help of PackStack
package provided by rdo repository (RPM Distribution of OpenStack).In order to enable
rdo repositories on Centos 7 run the below command.

#yum install -y https://rdoproject.org/repos/rdo-release.rpm

Step 9: Update Current packages

#yum update -y
Step 10:Install OpenStack Release for CentOS

#yum install -y openstack-packstack

Step 11: Start packstack to install OpenStack Newton

#packstack --allinone

Step 12:Note the user name and password from keystonerc_admin

#cat keystonerc_admin
Step 13: Click the URL and enter the user name and password to start OpenStack

OpenStack is successfully launched in your machine


Result:

Thus the OpenStack Installation is executed successfully.


Ex. No. 8a Install Hadoop single node cluster

Aim:
To find the procedure to set up a one-node Hadoop cluster.

Procedure:
Step 1:

Installing Java is the main prerequisite for Hadoop. Install Java 1.7.

$sudo apt-get update


$sudo apt-get install openjdk-7-jdk
$sudo apt-get install openjdk-7-jre
$ java -version

java version "1.7.0_79"

OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)

OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

Step 2:

SSH Server accepting password authentication (at least for the setup time).

To install, run:

student@a4cse196:~$ su

Password:

root@a4cse196:/home/student# apt-get install openssh-server

Step 3:

Generate the ssh key

root@a4cse196:/home/student# ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

Generating public/private rsa key pair.

Created directory '/root/.ssh'.

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.


The key fingerprint is:

77:a1:20:bb:db:95:6d:89:ce:44:25:32:b6:81:5d:d5 root@a4cse196

The key's randomart image is also displayed (ASCII art omitted here).

Step 4:

If the master also acts as a slave (`ssh localhost` should work without a password):

root@a4cse196:/home/student# cat $HOME/.ssh/id_rsa.pub >>$HOME/.ssh/authorized_keys

Step 5:

Create hadoop group and user:

Step 5.1 root@a4cse196:/home/student# sudo addgroup hadoop

Adding group `hadoop' (GID 1003) ...

Done.

Step 5.2 root@a4cse196:/home/student# sudo adduser --ingroup hadoop hadoop

Adding user `hadoop' ...

Adding new user `hadoop' (1003) with group `hadoop' ...

Creating home directory `/home/hadoop' ...

Copying files from `/etc/skel' ...

Enter new UNIX password:


Retype new UNIX password:

passwd: password updated successfully

Changing the user information for hadoop

Enter the new value, or press ENTER for the default

Full Name []:

Room Number []:

Work Phone []:

Home Phone []:

Other []:

Is the information correct? [Y/n] y

root@a4cse196:/home/student#

Step 6:

Copy the Hadoop tar file (hadoop-2.7.0.tar.gz) to the home directory.

Step 7:

Extracting the tar file.

root@a4cse196:/home/student# sudo tar -xzvf hadoop-2.7.0.tar.gz -C /usr/local/lib/

Step 8:

Changing the Ownership

root@a4cse196:/home/student# sudo chown -R hadoop:hadoop /usr/local/lib/hadoop-2.7.0

Step 9:

Create HDFS directories:

root@a4cse196:/home/student# sudo mkdir -p /var/lib/hadoop/hdfs/namenode

root@a4cse196:/home/student# sudo mkdir -p /var/lib/hadoop/hdfs/datanode

root@a4cse196:/home/student# sudo chown -R hadoop /var/lib/hadoop

Step 10:

Check where your Java is installed:

root@a4cse196:/home/student# readlink -f /usr/bin/java

/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Step 11:

Open ~/.bashrc in gedit:

root@a4cse196:/home/student# gedit ~/.bashrc

Add to ~/.bashrc file:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

export HADOOP_INSTALL=/usr/local/lib/hadoop-2.7.0

export PATH=$PATH:$HADOOP_INSTALL/bin

export PATH=$PATH:$HADOOP_INSTALL/sbin

export HADOOP_MAPRED_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_HOME=$HADOOP_INSTALL

export HADOOP_HDFS_HOME=$HADOOP_INSTALL

export YARN_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"

Step 12:

Reload source

root@a4cse196:/home/student# source ~/.bashrc

Step 13:

Modify JAVA_HOME in /usr/local/lib/hadoop-2.7.0/etc/hadoop/hadoop-env.sh:

root@a4cse196:/home/student# cd /usr/local/lib/hadoop-2.7.0/etc/hadoop

root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# gedit hadoop-env.sh

export JAVA_HOME=${JAVA_HOME}

Changed this to below path

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

Step 14:

Modify /usr/local/lib/hadoop-2.7.0/etc/hadoop/core-site.xml to have something like:

root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# gedit core-site.xml

<configuration>
<property>

<name>fs.default.name</name>

<value>hdfs://localhost:9000</value>

</property>

</configuration>

Step 15:

Modify /usr/local/lib/hadoop-2.7.0/etc/hadoop/yarn-site.xml to have something like:


root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# gedit yarn-site.xml

<configuration>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

<property>

<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

<value>org.apache.hadoop.mapred.ShuffleHandler</value>

</property>

</configuration>

Step 16:

Create /usr/local/lib/hadoop-2.7.0/etc/hadoop/mapred-site.xml from template:


root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# cp /usr/local/lib/hadoop-
2.7.0/etc/hadoop/mapred-site.xml.template /usr/local/lib/hadoop-2.7.0/etc/hadoop/mapred-site.xml

Step 17:

Modify /usr/local/lib/hadoop-2.7.0/etc/hadoop/mapred-site.xml to have something like:


root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# gedit mapred-site.xml

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>
</configuration>

Step 18:

Modify /usr/local/lib/hadoop-2.7.0/etc/hadoop/hdfs-site.xml to have something like:

root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# gedit hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/var/lib/hadoop/hdfs/namenode</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/var/lib/hadoop/hdfs/datanode</value>

</property>

</configuration>

Step 19:

Make changes in /etc/profile

$gedit /etc/profile

JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

PATH=$PATH:$JAVA_HOME/bin

export JAVA_HOME

export PATH

$source /etc/profile
Step 20:

root@a4cse196:/usr/local/lib/hadoop-2.7.0/etc/hadoop# hdfs namenode -format


Step 21:

Switch to the hadoop user and start the Hadoop daemons:


start-dfs.sh

(answer "yes" to the SSH host authenticity prompts when asked)

start-yarn.sh

root@a4cse196:/home/hadoop# jps

6334 SecondaryNameNode

6498 ResourceManager

6927 Jps

6142 DataNode

5990 NameNode

6696 NodeManager

Step 22:

Browse the web interface for the Name Node; by default it is available at:

http://localhost:50070
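
Optionally, the running single-node cluster can also be checked from Java code. The following is a
hedged sketch using the Hadoop FileSystem API; it assumes the Hadoop 2.7.0 client jars are on the
classpath, and the class name HdfsSmokeTest, the /smoketest path and the file name are chosen here
only for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative check that HDFS is reachable: create a directory and write a small file into it.
public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same NameNode URI that was configured as fs.default.name in core-site.xml.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/smoketest");
        fs.mkdirs(dir);

        FSDataOutputStream out = fs.create(new Path(dir, "hello.txt"));
        out.writeUTF("HDFS single-node cluster is reachable");
        out.close();

        System.out.println("File exists: " + fs.exists(new Path(dir, "hello.txt")));
        fs.close();
    }
}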

Result:

Thus the procedure to set up the one node Hadoop cluster was successfully done and verified.
Ex. No. 8b Word Count Program Using Map And Reduce

Aim:

To Count the number of words using JAVA for demonstrating the use of Map and Reduce
tasks.

Procedure:

1. Analyze the input file content


2. Develop the code
a. Writing a map function
b. Writing a reduce function
c. Writing the Driver class
3. Compiling the source
4. Building the JAR file
5. Starting the DFS
6. Creating Input path in HDFS and moving the data into Input path
7. Executing the program

Program: WordCount.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Save the program as

WordCount.java

Step 1: Compile the java program

For compilation we need this hadoop-core-1.2.1.jar file to compile the mapreduce program.

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-core/1.2.1

Assuming both jar and java files in same directory run the following command to compile

root@a4cseh160:/#javac -classpath hadoop-core-1.2.1.jar WordCount.java

Step 2: Create a jar file

Syntax:

jar cf jarfilename.jar MainClassName*.class

Output:

root@a4cseh160:/#jar cf wc.jar WordCount*.class

Step 3: Make directory in hadoop file system

Syntax:

hdfs dfs -mkdir directoryname

Output:

root@a4cseh160:/# hdfs dfs -mkdir /user


Step 4: Copy the input file into hdfs
Syntax:

hdfs dfs -put sourcefile destpath

Output:

root@a4cseh160:/#hdfs dfs -put /input.txt /user

Step 5: To a run a program

Syntax:

hadoop jar jarfilename main_class_name inputfile outputpath

Output:

root@a4cseh160:/#hadoop jar wc.jar WordCount /user/input.txt /user/out

Input File: (input.txt)

Cloud and Grid Lab. Cloud and Grid Lab. Cloud Lab.

Output:

Cloud 3
Grid 2
Lab. 3
and 2

Step 6: Check the output in the Web UI at http://localhost:50070.

In the Utilities tab, select "Browse the file system" and navigate to the /user directory.

The output is available inside the output folder /user/out.


Step 7: To Delete an output folder

Syntax:

hdfs dfs -rm -R outputpath

Output:

root@a4cseh160:/#hdfs dfs -rm -R /user/out

Result:

Thus the number of words was counted successfully using Map and
Reduce tasks.
