Eucalyptus Components –
Eucalyptus consists of five main components that work together to provide the requisite cloud services.
Cloud Controller (CLC) - Within a Eucalyptus cloud, this is the main controller component, responsible for
managing the entire system. It is the main entry point into the Eucalyptus cloud for all users and
administrators. All clients communicate only with the CLC, using the SOAP (Simple Object Access
Protocol) or REST (Representational State Transfer) based API. The CLC is responsible for passing
requests on to the right component, collecting the responses, and sending them back to the client. It is
the public face of the Eucalyptus cloud, and it also provides a web UI (an https server on port 8443).
Cluster Controller (CC) - The controller component within Eucalyptus responsible for managing the entire
virtual instance network. Requests are communicated to the CC using the SOAP or REST-based
interface. The CC maintains all the information about the Node Controllers that run in the system and is
responsible for controlling the life cycle of the instances. It routes requests for starting virtual instances to
the Node Controller with available resources.
Walrus (W) - Walrus is a storage service that implements Amazon's S3 interface. Its primary use is to
store VM images, called Eucalyptus Machine Images (EMIs). The images can be public or private and are
initially stored in compressed and encrypted form; an image is decrypted only when a node needs to start
a new instance and requests access to it. Requests are communicated to Walrus using the SOAP or
REST-based interface.
Storage Controller (SC) - The storage service within Eucalyptus that implements Amazon's Elastic Block
Store (EBS) interface, providing persistent block storage volumes and snapshots to the instances running
within its cluster.
Node Controller (NC) - Runs on every machine designated to host VM instances. The NC controls the
hypervisor (Xen or KVM, in our setup) and manages the life cycle of the instances on its node, as
directed by the CC.
Setup Architecture –
In any Eucalyptus cloud installation there are 2 top-level components: the Cloud Controller (CLC) and
Walrus. These 2 components manage the various clusters, where a cluster is a set of physical machines
that host the virtual instances. In each cluster there are 2 components that interact with the top-level
components: the Cluster Controller (CC) and the Storage Controller (SC); CC and SC are cluster-level
components. Each cluster is composed of various nodes, i.e. physical machines. Each node runs a
Node Controller (NC) that controls the hypervisor managing the virtual instances.
For this setup, we have implemented a Single-Cluster Installation, where all the components except the
NC are co-located on one machine. As per the Eucalyptus documentation, this co-located system is called
the front end.
The Node Controller uses Xen as the hypervisor; the NC service runs on the Domain-0 kernel in the Xen
setup.
Hardware
Intel Core 2 Duo Processor 1.8 GHz (VT enabled) with 4 GB RAM, 160 GB HDD.
# export VERSION=2.0.0
# export ARCH=x86_64
# export http_proxy=http://username:******@proxy.domainname:8080
Packages are available from our yum repository. To use this option, create '/etc/yum.repos.d/euca.repo'
file with the following four lines:
[euca]
name=Eucalyptus
baseurl=http://www.eucalyptussoftware.com/downloads/repo/eucalyptus/2.0.0/yum/centos/
enabled=1
Prerequisites
Front-end, node(s), and client machine system clocks must be synchronized (e.g., using NTP).
The front end needs Java, a command to manipulate a bridge (bridge-utils), and the binaries for a DHCP
server (install the binaries only; do not configure or run the DHCP server on the CC yourself):
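The package installation itself is not shown above; on CentOS 5 it was presumably along these lines (the package names follow the stock Eucalyptus 2.0 install guide, so treat this as a sketch):

```shell
# Front-end dependencies: Java, bridge utilities, and the DHCP server binaries
yum install -y java-1.6.0-openjdk ant ant-nodeps dhcp bridge-utils

# Front-end Eucalyptus components (CLC, Walrus, CC, SC)
yum install -y eucalyptus-cloud.$ARCH eucalyptus-walrus.$ARCH \
    eucalyptus-cc.$ARCH eucalyptus-sc.$ARCH --nogpgcheck
```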
# /etc/init.d/eucalyptus-cloud start
Starting Eucalyptus services: walrus sc cloud done.
# /etc/init.d/eucalyptus-cc start
Starting Eucalyptus cluster controller: done.
Start NC
# /etc/init.d/eucalyptus-nc start
You should have at least 32 loop devices
Starting Eucalyptus services:
Enabling IP forwarding for eucalyptus.
# /etc/init.d/eucalyptus-nc status
You should have at least 32 loop devices
So we have to create at least 32 loop devices to get rid of this message.
# vi /etc/modprobe.conf
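The line added to /etc/modprobe.conf is not shown above; the usual fix on CentOS 5, where the loop driver defaults to 8 devices, is a max_loop option, so treat this as the assumed edit:

```
# /etc/modprobe.conf
options loop max_loop=64
```

After saving, reload the loop module (rmmod loop && modprobe loop) or reboot so the new limit takes effect.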
Now, when I restart the service, I no longer see that message -
# /etc/init.d/eucalyptus-nc start
Starting Eucalyptus services:
Enabling bridge netfiltering for eucalyptus.
done.
For euca2ools, create a similar repository definition:
[euca2ools]
name=Euca2ools
baseurl=http://www.eucalyptussoftware.com/downloads/repo/euca2ools/2.0.0/yum/centos/
enabled=1
# export VERSION=1.3.1
# export ARCH=x86_64
# yum install euca2ools.$ARCH --nogpgcheck
# iptables -F
# /etc/init.d/iptables stop
Better to use chkconfig or the /etc/rc.local file to make this persistent across reboots. I prefer using /etc/rc.local:
/etc/init.d/iptables stop
/sbin/iptables -F
Visit https://frontend:8443
Log in using the password you set (if you used the GUI earlier) or admin/admin
Select the "Credentials" tab
Select "Download Credentials"
# mkdir ~/.euca
# chmod 700 ~/.euca
# ls -l euca2-admin-x509.zip
# mv euca2-admin-x509.zip ~/.euca
# cd ~/.euca
# unzip euca2-admin-x509.zip
Archive: euca2-admin-x509.zip
To setup the environment run: source /path/to/eucarc
inflating: eucarc
inflating: cloud-cert.pem
inflating: jssecacerts
inflating: euca2-admin-3947cb34-pk.pem
inflating: euca2-admin-3947cb34-cert.pem
# chmod 600 *
"ERROR: failed to register Walrus, please log in to the admin interface and check cloud status."
The issue turned out to be that the proxy variables in root's shell were causing communication issues
between the processes, since Eucalyptus uses wget internally in some places. So if you hit this issue,
make sure the following variables are unset:
http_proxy
ftp_proxy
HTTP_PROXY
FTP_PROXY
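In practice, run this in root's shell before (re)starting the Eucalyptus services:

```shell
# Clear proxy settings in root's shell; Eucalyptus calls wget internally
# and these variables make those calls go through the proxy and fail
unset http_proxy ftp_proxy HTTP_PROXY FTP_PROXY
```

Also check /root/.bashrc and /etc/profile.d for proxy exports so the variables do not come back on the next login.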
==========================================================
INFO: We expect all nodes to have eucalyptus installed in / for key synchronization.
INFO: We expect all nodes to have eucalyptus installed in / for key synchronization.
# euca-describe-availability-zones verbose
AVAILABILITYZONE Cluster01 xx.xx.xx.xx
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0002 / 0002 1 128 2
AVAILABILITYZONE |- c1.medium 0002 / 0002 1 256 5
AVAILABILITYZONE |- m1.large 0001 / 0001 2 512 10
AVAILABILITYZONE |- m1.xlarge 0001 / 0001 2 1024 20
AVAILABILITYZONE |- c1.xlarge 0000 / 0000 4 2048 20
==========================================================
Add Images
# wget http://eucalyptussoftware.com/downloads/eucalyptus-images/euca-centos-5.3-x86_64.tar.gz
# tar zxvf euca-centos-5.3-x86_64.tar.gz
# cd euca-centos-5.3-x86_64
# ls -lrt
total 1026040
-rw-r--r-- 1 root root 1049624576 Apr 24 2009 centos.5-3.x86-64.img
drwxr-xr-x 2 root root 4096 May 13 2009 xen-kernel
drwxr-xr-x 2 root root 4096 May 13 2009 kvm-kernel
# euca-bundle-image -i xen-kernel/vmlinuz-2.6.27.21-0.1-xen --kernel true
Traceback (most recent call last):
File "/usr/bin/euca-bundle-image", line 39, in ?
from euca2ools import Euca2ool, FileValidationError, Util, \
ImportError: No module named euca2ools
NOTE: I get the above error for all euca2ools commands - every euca-* command fails with it. Here is
what I could figure out: CentOS still ships Python 2.4 by default, while euca2ools depends on Python 2.5.
So python2.5 gets installed, and euca2ools is installed as a library for python2.5. However, the euca2ools
scripts are still run with the python2.4 interpreter, which cannot find the euca2ools library.
As part of the solution I have two options - either downgrade to euca2ools 1.2, or change the Python interpreter the euca-* scripts use. I went with the latter, using a small script that rewrites their shebang lines:
#!/bin/sh
find /usr/bin -type f -name 'euca-*' | xargs sed -i 's/#!\/usr\/bin\/python/#!\/usr\/bin\/python25/g'
find /usr/sbin -type f -name 'euca-*' | xargs sed -i 's/#!\/usr\/bin\/env python/#!\/usr\/bin\/env python25/g'
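The substitution can be tried safely on a scratch file first; this is the same sed expression the script applies, run against a throwaway script (the /tmp path is illustrative):

```shell
# Create a scratch script with the stock shebang
mkdir -p /tmp/euca-demo
printf '#!/usr/bin/python\nprint "hello"\n' > /tmp/euca-demo/euca-test

# The same substitution the fix script applies to the real euca-* tools
sed -i 's/#!\/usr\/bin\/python/#!\/usr\/bin\/python25/g' /tmp/euca-demo/euca-test

head -1 /tmp/euca-demo/euca-test   # prints #!/usr/bin/python25
```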
Okay, after executing the above script the euca-* commands run successfully, BUT later I again
encountered a lot of issues with the euca-register commands and was getting an error like -
# euca-register centos-kernel-bucket/vmlinuz-2.6.27.21-0.1-xen.manifest.xml
Warning: failed to parse error message from AWS: :1:0: syntax error
EC2ResponseError: 400 Bad Request
Failure: 400 Bad Request
Failed to bind the following fields:
Location = centos-kernel-bucket/vmlinuz-2.6.27.21-0.1-xen.manifest.xml
# euca-register centos-kernel-bucket/vmlinuz-2.6.27.21-0.1-xen.manifest.xml
IMAGE eki-8E3416F8
# euca-register centos-ramdisk-bucket/initrd-2.6.27.21-0.1-xen.manifest.xml
IMAGE eri-889216D6
# euca-register centos-image-bucket/centos.5-3.x86-64.img.manifest.xml
IMAGE emi-1D2D15B1
Command line –
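The command-line steps were roughly the following; a sketch using standard euca2ools commands, where the keypair name my-key and the use of the default security group are assumptions:

```shell
# Create a keypair and allow SSH into the default security group
euca-add-keypair my-key > ~/.euca/my-key.priv
chmod 600 ~/.euca/my-key.priv
euca-authorize default -P tcp -p 22 -s 0.0.0.0/0

# Launch an instance from the image registered above
euca-run-instances emi-1D2D15B1 -k my-key -t m1.small
```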
GUI –
After a moment the instance should have an IP address, BUT in our case I was not that lucky and I got this –
# euca-describe-instances
RESERVATION r-4CC5098C admin default
INSTANCE i-402E07E7 emi-1CDA15AF 0.0.0.0 0.0.0.0 terminated my-key 0 m1.small 2010-11-02T12:56:52.574Z Cluster01 eki-8DE616F2 eri-883D16CD
Okay, finally I got it into the running state. The issue was insufficient disk space! So I changed the
INSTANCE_PATH variable in /etc/eucalyptus/eucalyptus.conf.
CAVEATS - In this mode, as mentioned previously, VMs are simply started with their Ethernet
interfaces attached to the local Ethernet without any isolation. Practically, this means that you
should treat a VM the same way that you would treat a non-VM machine running on the network.
Eucalyptus does its best to discover the IP address that was assigned to a running VM via a third-
party DHCP server, but can be unsuccessful depending on the specifics of your network (switch
types/configuration, location of CC on the network, etc.). Practically, if Eucalyptus cannot
determine the VM's IP, then the user will see '0.0.0.0' in the output of 'describe-instances' in both
the private and public address fields. The best workaround for this condition is to instrument your
VMs to send some network traffic to your front end on boot (after they obtain an IP address). For
instance, setting up your VM to ping the front-end a few times on boot should allow Eucalyptus to
discover the VM's IP.
We have to tweak our VM images so that they ping the front-end server when the VM boots. Let's see
how to do that –
# euca-terminate-instances i-3D0D082D
INSTANCE i-3D0D082D
# euca-describe-instances
RESERVATION r-4C2E08BC admin default
INSTANCE i-3D0D082D emi-1CDA15AF 0.0.0.0 0.0.0.0 shutting-down my-key 0 m1.small 2010-11-02T15:41:52.758Z Cluster01 eki-8DE616F2 eri-883D16CD
# euca-describe-instances
RESERVATION r-4C2E08BC admin default
INSTANCE i-3D0D082D emi-1CDA15AF 0.0.0.0 0.0.0.0 terminated my-key 0 m1.small 2010-11-02T15:41:52.758Z Cluster01 eki-8DE616F2 eri-883D16CD
# euca-describe-images
IMAGE eki-8DE616F2 centos-kernel-bucket/vmlinuz-2.6.27.21-0.1-xen.manifest.xml admin available public x86_64 kernel instance-store
IMAGE emi-1CDA15AF centos-image-bucket/centos.5-3.x86-64.img.manifest.xml admin available public x86_64 machine eki-8DE616F2 eri-883D16CD instance-store
# euca-deregister eki-8DE616F2
IMAGE eki-8DE616F2
# euca-deregister eri-883D16CD
IMAGE eri-883D16CD
# euca-deregister emi-1CDA15AF
IMAGE emi-1CDA15AF
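The edit to the image itself is not shown; a sketch, assuming the image is loop-mounted on /mnt (the front-end address is masked here the same way as elsewhere in this document):

```shell
# Loop-mount the root filesystem of the disk image so we can edit it
mount -o loop centos.5-3.x86-64.img /mnt/

# Ping the front end a few times at boot so Eucalyptus can discover the VM's IP
echo 'ping -c 5 xx.xx.xx.xx' >> /mnt/etc/rc.local

# Set a root password inside the image so we can log in on the console
chroot /mnt /usr/bin/passwd root
```

After this the image is unmounted (the umount /mnt/ step below) and is ready to be re-bundled.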
# umount /mnt/
Okay, now the image has been modified to send ping requests to the front-end server at boot and to set a
root password.
Now recreate the images using euca-* commands. [Jump to - Add Images section]
Now run the instance using euca-* commands. [Jump to - Run instances section]
Great news - it's working! Whoo hooo! I'm in. OK, a newbie victory but a victory all the same.
# euca-describe-instances
RESERVATION r-4E3208ED admin default
INSTANCE i-3EA5078C emi-1CF415A8 xx.xx.xx.xx xx.xx.xx.xx running my-key 0 m1.small 2010-11-03T07:00:58.114Z Cluster01 eki-8E5B1702 eri-886D16CE
# useradd nj0001
# id nj0001
uid=501(nj0001) gid=501(nj0001) groups=501(nj0001)
# passwd nj0001
Changing password for user nj0001.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
$ pwd
/home/nj0001
$ ls -l
total 8
-rw-rw-r-- 1 nj0001 nj0001 4942 Nov 3 17:22 euca2-nj0001-x509.zip
$ mkdir -p ~/.euca/certs
$ chmod -R 700 ~/.euca/certs
$ mv euca2-nj0001-x509.zip ~/.euca/certs
$ cd ~/.euca/certs
$ unzip euca2-nj0001-x509.zip
Archive: euca2-nj0001-x509.zip
To setup the environment run: source /path/to/eucarc
inflating: eucarc
inflating: cloud-cert.pem
inflating: jssecacerts
$ euca-describe-images -a
IMAGE eri-886D16CE centos-ramdisk-bucket/initrd-2.6.27.21-0.1-xen.manifest.xml admin available public x86_64 ramdisk instance-store
IMAGE eki-8E5B1702 centos-kernel-bucket/vmlinuz-2.6.27.21-0.1-xen.manifest.xml admin available public x86_64 kernel instance-store
IMAGE emi-1CF415A8 centos-image-bucket/centos.5-3.x86-64.img.manifest.xml admin available public x86_64 machine eki-8E5B1702 eri-886D16CE instance-store
$ euca-describe-instances
RESERVATION r-4FE9088F nj0001 default
INSTANCE i-4AF50836 emi-1CF415A8 10.88.88.231 10.88.88.231 running my_key 0 m1.small 2010-11-03T12:08:20.476Z Cluster01 eki-8E5B1702 eri-886D16CE
In case you're not comfortable on the command line, you can use a Firefox plug-in known as Hybridfox to
manage the VMs.
To create your own customized kernel, ramdisk and disk images, follow the procedure below: install a new
VM, extract the kernel and ramdisk from the newly installed VM, transfer them to the front-end server
along with the disk image, and then use euca2ools to bundle, upload, register and run them.
# export ARCH=x86_64
# dd if=/dev/zero of=/home/images/centos5.5-base.img bs=1M count=10240
# /sbin/mke2fs -F -j /home/images/centos5.5-base.img
# mount -o loop /home/images/centos5.5-base.img /home/images/Baseimage
# mkdir -p /home/images/Baseimage/{etc,dev,proc,sys,var/lock/rpm}
# vi /home/images/Baseimage/etc/fstab
Next, create a yum configuration to use while populating the image:
[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
exclude=*-debuginfo
gpgcheck=0
obsoletes=1
reposdir=/dev/null
[base]
name=CentOS 5 - $basearch - Base
baseurl=http://ftp.belnet.be/packages/centos/5.5/os/x86_64/
enabled=1
[updates-released]
name=CentOS 5 - $basearch - Released Updates
baseurl=http://ftp.belnet.be/packages/centos/5.5/updates/x86_64/
enabled=1
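This yum configuration (reposdir=/dev/null keeps the host's own repos out of the way) is presumably saved to a file of its own and used to populate the image tree via --installroot. A sketch, where the config file path is an assumption:

```shell
# Bootstrap a base CentOS 5.5 system into the loop-mounted image tree;
# the config path /home/images/yum-image.conf is hypothetical
yum -c /home/images/yum-image.conf --installroot=/home/images/Baseimage -y groupinstall base
```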
# vi /home/images/Baseimage/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
# vi /home/images/Baseimage/etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
DEVICE=eth0
# vi /home/images/Baseimage/etc/resolv.conf
search persistent.co.in
nameserver xx.xx.xxx.xxx
nameserver 10.77.224.100
Now we are ready to register kernel image, ramdisk image and disk image.
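The bundle/upload/register commands are not shown for these; for the kernel the sequence would presumably be the following (the kernel file name and bucket name are illustrative; repeat with --ramdisk true for the ramdisk, and with no flag for the disk image):

```shell
# Bundle the custom kernel, upload it to Walrus, and register it
euca-bundle-image -i vmlinuz-2.6.18-194.el5xen --kernel true
euca-upload-bundle -b centos55-kernel-bucket \
    -m /tmp/vmlinuz-2.6.18-194.el5xen.manifest.xml
euca-register centos55-kernel-bucket/vmlinuz-2.6.18-194.el5xen.manifest.xml
```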
Kernel image -
IMAGE eki-9EBB177F
Ramdisk image –
IMAGE eri-001F18DD
Disk image –
IMAGE emi-F744159C
Solution - You can increase the VM type's resources through the admin web interface ("Configuration"
tab). Make sure the values stay ordered and change one VM type at a time.
# euca-describe-instances
RESERVATION r-44B2087D admin default
INSTANCE i-2D89056A emi-F7761599 10.88.88.190 10.88.88.190 running my-key 0 m1.xlarge 2010-11-13T13:44:17.482Z Cluster01 eki-9EAF1784 eri-19C41922
At this point you can also use HybridFox to access the VM launched via Eucalyptus private cloud.
Important Note: To use multiple hypervisors with Eucalyptus you will have to separate them into
different availability zones, i.e. different subnets/networks, with a separate CC component installed for
each. The reason is that Eucalyptus is not intelligent enough to route a user request to the hypervisor
that understands the source image format; at the very least you will have to make sure the images you
have created are compatible with the hypervisor they will run on (KVM, Xen or VMware).
Install CentOS 5.5 with KVM as the default hypervisor. We will add this server to our Eucalyptus cloud
setup as a new NC using the KVM hypervisor.
NOTE: If you have only one NIC on your KVM host, don't assign an IP to that NIC; instead, attach eth0
to the bridge and give the bridge a static IP. The bridge IP can then be used to connect to the host over
SSH and VNC.
# cat /etc/sysconfig/network-scripts/ifcfg-br0
# Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe
DEVICE=br0
BOOTPROTO=static
BROADCAST=xx.xx.xx.xxx
HWADDR=xx.xx.xx.xx.xx.xx
IPADDR=xx.xx.xx.xx
IPV6INIT=no
NETMASK=xxx.xxx.xxx.x
NETWORK=xx.xx.xx.x
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe
DEVICE=eth0
BRIDGE=br0
BOOTPROTO=none
HWADDR=xx.xx.xx.xx.xx.xx
ONBOOT=yes
TYPE=Ethernet
# ifconfig | more
br0 Link encap:Ethernet HWaddr xx.xx.xx.xx.xx.xx
inet addr:xx.xx.xx.xx Bcast:xx.xx.xx.xxx Mask:xxx.xxx.xxx.x
inet6 addr: fe80::223:aeff:fe85:ee3e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:61880 errors:0 dropped:0 overruns:0 frame:0
TX packets:1665 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3640272 (3.4 MiB) TX bytes:416773 (407.0 KiB)
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0023ae85ee3e no eth0
virbr0 8000.000000000000 yes
NTP setup –
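The NTP steps themselves are not spelled out; on CentOS 5 they would presumably be along these lines (the NTP server name is an example):

```shell
# Keep the new node's clock in sync with the rest of the cloud
yum install -y ntp
ntpdate pool.ntp.org      # one-shot sync before starting the daemon
chkconfig ntpd on
service ntpd start
```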
Install NC
Install CC –
Start NC
# The hypervisor that the Node Controller will interact with in order
# to manage virtual machines. Currently, supported values are 'kvm'
# and 'xen'.
HYPERVISOR="kvm"
Now we will create a new VM so that we can get the .img file, the initrd file and the kernel file.
After this it will create a domain, and then you will have to follow the regular installation flow.
Okay, so Fedora 11 has been successfully installed; the next task is to configure the VM with the few
security settings shown below –
Check the network related files to verify they are correct and as expected.
Append -
Copy over the initrd, kernel and image file to the front-end server from the KVM NC.
Note: Before copying the image file to the front-end server, make sure the VM is shut down.
Now we are ready to register kernel image, ramdisk image and disk image.
IMAGE eki-29F2189C
IMAGE eri-8E9819E5
IMAGE emi-FB1615B0
NOTE: ERROR
There is no such group (libvirtd) present in CentOS 5.5 after installing the libvirtd daemon, so I added it
manually (groupadd):
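The commands themselves are not shown; presumably something like the following, with the group name matching the unix_sock_group setting in /etc/libvirt/libvirtd.conf:

```shell
# Create the group libvirt's socket will be owned by, and add the
# eucalyptus user so the NC can talk to libvirt without being root
groupadd libvirt
usermod -a -G libvirt eucalyptus
```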
# ls -alh /var/run/libvirt
total 40K
drwxr-xr-x 4 root root 4.0K Aug 13 17:25 .
drwxr-xr-x 23 root root 4.0K Aug 13 17:25 ..
srwxrwx--- 1 root libvirt 0 Aug 13 17:25 libvirt-sock
srwxrwxrwx 1 root libvirt 0 Aug 13 17:25 libvirt-sock-ro
drwxr-xr-x 2 root root 4.0K Aug 11 00:06 network
drwxr-xr-x 2 root root 4.0K Aug 11 00:06 qemu
Then I added the eucalyptus user to the libvirt group, and adjusted /etc/libvirt/libvirtd.conf this way:
# Set the UNIX domain socket group ownership. This can be used to
# allow a 'trusted' set of users access to management capabilities
# without becoming root.
#
# This is restricted to 'root' by default.
unix_sock_group = "libvirt"
# If not using PolicyKit and setting group ownership for access
# control then you may want to relax this to:
unix_sock_rw_perms = "0770"
I had to put some symlinks in place too, according to the nc.log error messages (I find it incredible that
the CentOS kvm package does not put these in place correctly):
After this fix you will see the node is provisioned and ready for use.
There is a lot more to explore and I will keep working on it; however, this document should be enough to
at least get started with the experiment! Hope this helps.