
HIGH LEVEL DESIGN

Name

Project Title

Location

Date

Effective security in
Hybrid Cloud
Computing

Team members

Pooja.P
V.Vijayalakshmi
R.Pavithra

Chennai

Project Guide

Mr. N. R. Rejin Paul

Chennai

Distribution List

Unisys

Version No.

Table of Contents
Test Objective
Requirements
Assumptions
Test Data
Test Cases

Test Objective

The test objective is to verify that OpenNebula has been installed correctly and that our system is authorised to use the cloud application.

Requirements
Test cases are written to verify that the software works correctly on our system. Test cases are used at the time of:

Installation
Registration of the system to the cloud
Proper functioning.

Assumptions
The assumptions that are made are:

The system is already registered to use OpenNebula
Ubuntu Linux is installed
The system has hardware virtualisation enabled
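To verify the last assumption, one quick check (a sketch, assuming an Intel or AMD CPU under Linux; the vmx/svm flags are generic Linux CPU indicators, not specific to this project):

# A count greater than 0 means the CPU advertises VT-x/AMD-V support
egrep -c '(vmx|svm)' /proc/cpuinfo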

Test Data
Dependencies:

OpenNebula installation
Libvirt installation.
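A minimal installation sketch for these dependencies on Ubuntu (the package names opennebula and libvirt-bin are assumptions based on typical Ubuntu archives; check the names for your release):

sudo apt-get update
# OpenNebula daemons and command-line tools
sudo apt-get install opennebula
# libvirt virtualisation API used by the hypervisor drivers
sudo apt-get install libvirt-bin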

Front-end

Requirements for the Front-End are:

ruby >= 1.8.7
Passwordless SSH from the front-end to all the hosts, including the front-end itself (a setup sketch follows this list).
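A common way to set up passwordless SSH (a sketch, assuming OpenSSH and the same user account on every host; replace <hostname> with each of your hosts):

# Generate a key pair with an empty passphrase (press Enter at the prompts)
ssh-keygen -t rsa
# Install the public key on every host, including the front-end itself
ssh-copy-id localhost
ssh-copy-id <hostname>
# Verify: this must log in without prompting for a password
ssh <hostname> true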

Test Cases

To start testing all of this, start OpenNebula and add the EC2 host with:
lgonzalez@machine:one$ one start
oned and scheduler started
lgonzalez@machine:one$ onehost create ec2 im_ec2 vmm_ec2
lgonzalez@machine:one$ onehost list
 HID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 ec2                         0    100    100    100       0       0   on
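The ec2.template file used in the next step is not reproduced in this document; a plausible sketch, reconstructed from the attributes that onevm show reports later in this section, is:

# ec2.template - requests one m1.small EC2 instance on the ec2 host
CPU    = 1
MEMORY = 1700
EC2 = [
  AMI              = "ami-dcb054b5",
  KEYPAIR          = "gsg-keypair",
  ELASTICIP        = "75.101.155.97",
  AUTHORIZED_PORTS = "2225",
  INSTANCETYPE     = "m1.small"
]
REQUIREMENTS = 'HOSTNAME = "ec2"'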

Submit the created EC2 template to launch an EC2 instance like this:
lgonzalez@machine:one$ onevm create ec2.template
ID: 0

Later, the scheduler will deploy the machine on EC2:


lgonzalez@machine:one$ onevm list
  ID     NAME STAT CPU     MEM        HOSTNAME        TIME
   0    one-0 pend   0       0                 00 00:00:05

lgonzalez@machine:one$ onevm list
  ID     NAME STAT CPU     MEM        HOSTNAME        TIME
   0    one-0 boot   0       0             ec2 00 00:00:15

And then you can see more detailed information (like the IP address of this machine):
lgonzalez@machine:one$ onevm show 0
VID            : 0
AID            : -1
TID            : -1
UID            : 0
STATE          : ACTIVE
LCM STATE      : RUNNING
DEPLOY ID      : i-1d04d674
MEMORY         : 0
CPU            : 0
PRIORITY       : -2147483648
RESCHEDULE     : 0
LAST RESCHEDULE: 0
LAST POLL      : 1216647834
START TIME     : 07/21 15:42:47
STOP TIME      : 01/01 01:00:00
NET TX         : 0
NET RX         : 0
....: Template :....
CPU            : 1
EC2            : AMI=ami-dcb054b5,AUTHORIZED_PORTS=2225,ELASTICIP=75.101.155.97,INSTANCETYPE=m1.small,KEYPAIR=gsg-keypair
IP             : ec2-75-101-155-97.compute-1.amazonaws.com
MEMORY         : 1700
NAME           : one-0
REQUIREMENTS   : HOSTNAME = "ec2"

In this case the assigned IP is ec2-75-101-155-97.compute-1.amazonaws.com.
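As a quick sanity check you can try logging in to the new instance (a sketch, assuming SSH is among the authorized ports and that the private key for gsg-keypair is stored locally as ~/.ssh/gsg-keypair):

# Log in to the EC2 worker using the keypair named in the template
ssh -i ~/.ssh/gsg-keypair root@ec2-75-101-155-97.compute-1.amazonaws.com uptime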


Now we check that the machines are running in our cluster: we have one machine running locally on our Xen resources (local01) and one machine running in EC2 (workernode0).
oneserver:~# qstat -f
queuename                      qtype used/tot. load_avg arch       states
----------------------------------------------------------------------------
all.q@local01                  BIP   0/1       0.05     lx24-x86
----------------------------------------------------------------------------
all.q@workernode0              BIP   0/1       0.04     lx24-x86
----------------------------------------------------------------------------

To test the cluster, submit some jobs to SGE via qsub <script.sh>. Before that, we need to switch to the nistest account, since that is the user we configured for NIS and SGE.
oneserver:~# su - nistest

nistest@oneserver:~$ qsub test_1.sh; qsub test_2.sh;
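The scripts test_1.sh and test_2.sh are not reproduced in this document; any small batch script will do for this test. A minimal sketch (the hostname and sleep commands are illustrative, not taken from the original scripts):

#!/bin/bash
# Trivial SGE test job: report where the job ran, then hold the slot briefly
hostname
sleep 60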

Now we see how jobs are scheduled and launched into our hybrid cluster.
nistest@oneserver:~$ qstat -f
queuename                      qtype used/tot. load_avg arch       states
----------------------------------------------------------------------------
all.q@local01                  BIP   0/1       0.02     lx24-x86
----------------------------------------------------------------------------
all.q@workernode0              BIP   0/1       0.01     lx24-x86
----------------------------------------------------------------------------

############################################################################
 - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS
############################################################################
   1180 0.00000 test_1.sh  nistest      qw    07/21/2008 15:26:09
   1181 0.00000 test_2.sh  nistest      qw    07/21/2008 15:26:09

nistest@oneserver:~$ qstat -f
queuename                      qtype used/tot. load_avg arch       states
----------------------------------------------------------------------------
all.q@local01                  BIP   1/1       0.02     lx24-x86
   1181 0.55500 test_2.sh  nistest      r     07/21/2008 15:26:20
----------------------------------------------------------------------------
all.q@workernode0              BIP   1/1       0.07     lx24-x86
   1180 0.55500 test_1.sh  nistest      r     07/21/2008 15:26:20
----------------------------------------------------------------------------

The interesting property here is the scalability provided by EC2: you can launch any number of instances on EC2 and add them as worker nodes to your SGE Virtual Private Cluster managed by OpenNebula.
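For example, scaling out further is just a matter of submitting more copies of the same template (a sketch; each onevm create call requests one additional EC2 worker, which SGE sees as a new node once it registers):

lgonzalez@machine:one$ onevm create ec2.template
lgonzalez@machine:one$ onevm create ec2.template
lgonzalez@machine:one$ onevm list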
