
Lab. Deployment of a virtual HPC cluster using Rocks
High Performance Infrastructures (HPC Master)
Roberto R. Expósito
Department of Computer Engineering
Universidade da Coruña
Lab. Deployment of a virtual HPC cluster using Rocks

• This lab consists of deploying a computing cluster for running HPC
applications in a virtual environment
• This document provides a step-by-step guide to deploy such a cluster
environment using the following technologies:
• Rocks, as an example of an all-in-one cluster middleware
• http://www.rocksclusters.org
• VirtualBox, as the underlying virtualization platform
• https://www.virtualbox.org
• Specific software versions used in this document:
• Rocks 7.0 (codename Manzanita), based on CentOS 7.4
• Oracle VirtualBox 5.1.38 and the corresponding VM Extension Pack
• Prerequisites:
• A working VirtualBox installation on your PC/laptop (version >= 4)
• You must have a 64-bit capable CPU that provides hardware virtualization
support
• Check that this support is actually enabled in the BIOS/UEFI
2
Lab. Deployment of a virtual HPC cluster using Rocks

• Overall deployment of the virtual HPC cluster:


1. Create and configure the VM for the head node of the cluster
2. Install Rocks distribution on the head node
3. For each computing node:
• Create and configure the corresponding VM
• Install Rocks over the internal network of the cluster using PXE
• Minimum VM requirements for the head node
• 1 CPU core, 1 GB of memory and 30 GB of disk space
• Minimum VM requirements for each computing node
• 1 CPU core, 2 GB of memory and 30 GB of disk space
• Minimum cluster deployment: head node + 1 computing node
• If your PC/laptop has enough resources, you can install 2 computing nodes and/or
provide more CPU cores to them
• Head node must have 2 network interfaces for the public (external) and private
(internal) networks of the cluster
• Computing nodes must only have 1 network interface for the private (internal)
network of the cluster
3
IMPORTANT NOTES
Lab. Deployment of a virtual HPC cluster using Rocks

• This lab will NOT be evaluated


• BUT, it is the required basis for the next lab, which is related to the
cluster that you are about to deploy
• So, the next lab will propose the activities/tasks to be actually evaluated
• Those tasks must be performed on your virtual cluster
• It is mandatory to use the corresponding hostname for your cluster in
slide 34, following this naming scheme:
• myrockscluster.X.udc.es
• Where X is your given and family names separated by a dot
• Avoid using accent marks (e.g. á) or any other character that does not belong to
the English alphabet (e.g. ñ, ö)
• Example for María Pérez Acuña:
• myrockscluster.maria.acuna.udc.es
• Example for George Bähr:
• myrockscluster.george.bahr.udc.es
• The hostname is actually the Fully-Qualified Domain Name (FQDN) of your
cluster. It is an important parameter, so write it down
5
VIRTUALBOX CONFIGURATION
Lab. Deployment of a virtual HPC cluster using Rocks

• Check the correct installation of the VM Extension Pack on your
VirtualBox platform (File -> Preferences -> Extensions)
• The extension pack is required to support PXE boot on some NICs

7
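If you prefer the command line, the same check can be done with VBoxManage (a sketch; the exact output format may vary between VirtualBox versions):

    VBoxManage list extpacks    # should report "Oracle VM VirtualBox Extension Pack"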
Lab. Deployment of a virtual HPC cluster using Rocks

• Check that you have an existing NAT network (File -> Preferences ->
Network -> NAT Networks)
• Otherwise, you must create a new NAT network

Remember (or write down) the name and the network range of your NAT network

8
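If no NAT network exists yet, one can also be created from the command line (a sketch; the name NatNetwork and the range 10.0.2.0/24 are just examples, any free range works):

    VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable --dhcp on
    VBoxManage natnetwork list    # shows the name and network range to write down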
HEAD NODE VM
Lab. Deployment of a virtual HPC cluster using Rocks

• We will create a new VM for the head node with:


• 1 CPU core
• 1 GB of memory
• 30 GB of disk space (dynamically allocated)
• 2 vNICs

10
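For reference, a roughly equivalent VM can also be created with VBoxManage (a sketch; the VM name rocks-head, the internal network name rocks-private and the NAT network name NatNetwork are example values, adjust them to your setup):

    # create and register the VM (64-bit Red Hat family guest)
    VBoxManage createvm --name "rocks-head" --ostype RedHat_64 --register
    # 1 CPU core, 1 GB of memory, PAE/NX on, boot from optical disk then hard disk
    VBoxManage modifyvm "rocks-head" --cpus 1 --memory 1024 --pae on \
        --boot1 dvd --boot2 disk --boot3 none --boot4 none \
        --nic1 intnet --intnet1 "rocks-private" \
        --nic2 natnetwork --nat-network2 "NatNetwork"
    # 30 GB dynamically allocated disk attached to a SATA controller
    VBoxManage createmedium disk --filename rocks-head.vdi --size 30720 --variant Standard
    VBoxManage storagectl "rocks-head" --name "SATA" --add sata
    VBoxManage storageattach "rocks-head" --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium rocks-head.vdi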
Lab. Deployment of a virtual HPC cluster using Rocks

• Head node VM

16
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the head node VM


• Boot order:
1. Optical disk
2. Hard disk

17
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the head node VM


• Enable PAE/NX feature

18
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the head node VM


• Check that hardware virtualization support is enabled

19
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the head node VM


• Download the Rocks bootable kernel ISO image from:
• http://central-7-0-x86-64.rocksclusters.org/isos/kernel-7.0-0.x86_64.disk1.iso
• Load the bootable kernel ISO into the optical device

20
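The ISO can also be fetched and attached from the command line (a sketch; it assumes the rocks-head VM name used before and an IDE controller for the optical drive, which is what the VirtualBox wizard usually creates):

    wget http://central-7-0-x86-64.rocksclusters.org/isos/kernel-7.0-0.x86_64.disk1.iso
    VBoxManage storagectl "rocks-head" --name "IDE" --add ide    # only if the VM has no IDE controller yet
    VBoxManage storageattach "rocks-head" --storagectl "IDE" --port 1 --device 0 \
        --type dvddrive --medium kernel-7.0-0.x86_64.disk1.iso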
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the head node VM


• First vNIC adapter must be the private (internal) network of the cluster
• Use the “Internal Network” type provided by VirtualBox

21
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the head node VM


• Second vNIC adapter must be the public (external) network of the cluster
• Use the “NAT Network” type provided by VirtualBox
• Choose the NAT network you created before (slide 8)

22
Lab. Deployment of a virtual HPC cluster using Rocks

• Head node VM: final configuration

23
ROCKS INSTALLATION ON THE
HEAD NODE
Lab. Deployment of a virtual HPC cluster using Rocks

• Start the head node VM and choose “Install Rocks 7.0”

25
Lab. Deployment of a virtual HPC cluster using Rocks

• Select the appropriate language during the installation process

26
Lab. Deployment of a virtual HPC cluster using Rocks

• The grayed-out selections indicate that something else must be done first
• In this case, the public network and hostname/FQDN must be configured first
• So, click on “Network & Hostname” (System section)

27
Lab. Deployment of a virtual HPC cluster using Rocks

• Remember that the vNIC adapter for the public interface is the second
one (see slide 22)
• At this moment, this adapter should be in a disconnected state
• Click on “Configure”

Configure only the interface for the public network

28
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure network settings for the public interface


• You need to determine: IP, netmask, gateway and DNS configuration
• As IP, you can use any valid (and unused!) address according to the network
range of your NAT Network (see slide 8)
• Using the command “VBoxManage list dhcpservers” you can check the
starting address used by the DHCP server on your NAT Network

Starting address used by the DHCP server on the NAT Network
29
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure network settings for the public interface


• E.g. for 10.0.2.0/24 (see slide 8):
• A valid IPv4 address for a VM is any unused address from .4 to .254
• Netmask is 255.255.255.0 (note the /24 part of the network range)
• The gateway is the IPv4 address ending in .2
• As DNS, you can use any of the servers provided by Google (e.g. 8.8.8.8)

30
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure network settings for the public interface


• Enter your IPv4 settings accordingly

31
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure network settings for the public interface


• Finally, configure IPv6 settings to “Link-local only” and now you can click “Save”
to return to the network screen

32
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure network settings for the public interface


• At this point, click the on/off slider to activate the network interface
• If all goes well, you should see IP address, netmask, gateway and DNS server

33
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure the hostname (i.e. FQDN) for your cluster


• It is mandatory to use the appropriate hostname/FQDN for your cluster,
following the naming scheme explained in slide 5

DO NOT forget to click “Apply” after changing the hostname!
34
Lab. Deployment of a virtual HPC cluster using Rocks

• If you are successful, your home screen should now reflect that the
network is up and configured
• Now, it is time to configure the private network of the cluster

35
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure network settings for the private interface


• Note that the interface you chose for your public network is not available on the
drop-down menu, so only one physical interface should appear in the list

You may select any IPv4 subnet you want. However, conflicts with the public
network are not checked! This means that your public and private IP
subnetworks must not overlap.

Leaving the default settings here is strongly recommended
36
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure your cluster (Cluster Config section)


• Any items that need to be completed are highlighted in red, while grey areas are
automatically filled with data gathered from other parts of the installer

NOTE: set “Cluster Name” the same as your hostname/FQDN BUT without “.udc.es”

The remaining fields that can be changed are not relevant. Leave the default values

37
Lab. Deployment of a virtual HPC cluster using Rocks

• Selection of packages to be installed (Rocks Rolls section)


• Rocks includes several core bundles, or “rolls”, that provide the OS and other
applications

Check that you are using this URL to fetch rolls. Then, click on “List Available
Rolls” and wait until the rolls appear below

38
Lab. Deployment of a virtual HPC cluster using Rocks

• Selection of packages to be installed (Rocks Rolls section)


• Select kernel, base, core, CentOS, Updates-CentOS, python, ganglia and sge
• Click “Add selected rolls” and wait until they are added. Click “Done” to finish

DO NOT select any rolls other than the ones specified above

39
Lab. Deployment of a virtual HPC cluster using Rocks

• Selection of packages to be installed (Rocks Rolls section)


• At this point, the home screen should look like this
• The last step is to configure partitioning on your head node (System section)

40
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure partitioning (System section)


• Select “I will configure partitioning” and click “Done”

41
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure partitioning (System section)


• Now you are in Anaconda's partitioning screen

42
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure partitioning (System section)


• Replicate the following partitioning scheme
• Click “Done” and “Accept changes” when asked

First, create the /boot and swap partitions with 1 GB and 2 GB, respectively.
Finally, create the root partition (/) with the remaining space by leaving the
desired capacity blank

43
Lab. Deployment of a virtual HPC cluster using Rocks

• Disable kdump (System section)

44
Lab. Deployment of a virtual HPC cluster using Rocks

• Disable kdump (System section)


• Click “Done” to return to the home screen

45
Lab. Deployment of a virtual HPC cluster using Rocks

• Configuration of the head node is finished


• Click “Begin Installation”. There is no going back. Up until this point, all changes
are held in memory. After this, partitions are formatted

46
Lab. Deployment of a virtual HPC cluster using Rocks

• Installation begins
• During installation, it is time to set up the root password

47
Lab. Deployment of a virtual HPC cluster using Rocks

• Setup root password


• Choose any password you want, but use one that you remember!
• Click “Done”

48
Lab. Deployment of a virtual HPC cluster using Rocks

• Let the installation progress… and have a coffee


• Installation is now automated, but there is a lot of work to do. Partitions must be formatted,
rolls must be downloaded from the Internet, packages must be installed, and then your system
is configured. Be patient!

DO NOT create a user now. We will see later how users can be created after
cluster installation

49
Lab. Deployment of a virtual HPC cluster using Rocks

• Once the installation has finished, click “Reboot”


• Remember to remove the bootable ISO from the optical drive of the VM in order
to boot from the hard disk

50
Lab. Deployment of a virtual HPC cluster using Rocks

• The head node should come up


• It is time for the initial setup performed by Rocks. Click “Finish Configuration”

51
Lab. Deployment of a virtual HPC cluster using Rocks

• And finally…the head node is up & running!


• Log in to the head node as root

52
HEAD NODE CONFIGURATION
Lab. Deployment of a virtual HPC cluster using Rocks

• To configure the keyboard in another language (if you need it)


• Open a terminal and execute “system-config-keyboard”
• Check Internet connection
• Open a terminal and execute “ping www.google.es”
• Check WordPress
• Open a browser and visit:
• http://localhost
• Check the Ganglia web interface
• Open a browser and visit:
• http://localhost/ganglia
• If you obtain an error with Ganglia, open a terminal and execute as root:
• /etc/init.d/gmetad restart
• Check the network configuration executing in a terminal
• ifconfig
• rocks list network
54
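The same checks, collected as a terminal session (run as root on the head node; the ping count flag is just a convenience):

    ping -c 3 www.google.es       # Internet connection
    ifconfig                      # NIC configuration
    rocks list network            # private and public networks known to Rocks
    /etc/init.d/gmetad restart    # only if the Ganglia web page shows an error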
Lab. Deployment of a virtual HPC cluster using Rocks

• Check Internet connection

55
Lab. Deployment of a virtual HPC cluster using Rocks

• Check WordPress

56
Lab. Deployment of a virtual HPC cluster using Rocks

• Check the Ganglia web interface

57
Lab. Deployment of a virtual HPC cluster using Rocks

• Check the network configuration

NIC of the private network

NIC of the public network

58
Lab. Deployment of a virtual HPC cluster using Rocks

• Check the network configuration

Private (internal) and public (external) networks of the cluster

59
Lab. Deployment of a virtual HPC cluster using Rocks

• During installation, Rocks copies the distribution (rolls + configuration
files) to a local path on the head node
• /export/rocks/install
• It is possible to customize certain aspects of the Rocks distribution
• Add/remove rolls to the distribution
• Update packages with newer versions
• Customize the installation of the computing nodes
• Changing their default partitioning scheme
• Enabling the automatic installation of additional packages that are not
installed by default
• Disabling their automatic reinstallation upon a hard reboot
• When a computing node experiences a hard reboot (e.g., when the
computing node is reset by pushing the power button or after a power
failure), this feature will reformat its root file system and reinstall its
base operating environment
• This feature is interesting in production environments, but it can be a bit
annoying in our testing environment (reinstalling a node requires time)
• We will customize the Rocks distro to disable this feature
60
Lab. Deployment of a virtual HPC cluster using Rocks

• Location of the Rocks distro

61
Lab. Deployment of a virtual HPC cluster using Rocks

• Disable the automatic reinstallation of the computing nodes


• Open a terminal as root and change directory to /export/rocks/install
• Copy the file to be modified (auto-kickstart.xml), which allows you to customize
this feature
• cp rocks-dist/x86_64/build/nodes/auto-kickstart.xml site-profiles/7.0/nodes/replace-auto-kickstart.xml
• Edit the file you have just copied (i.e. replace-auto-kickstart.xml)
• E.g. vim site-profiles/7.0/nodes/replace-auto-kickstart.xml
• Remove the following line from the file
• <package>rocks-boot-auto</package>
• When the Rocks distro is modified in any way, it must be rebuilt using the
following command:
• rocks create distro
• This command must be executed from the root location of the Rocks distro (i.e.
you must be located at /export/rocks/install)
• Execute the command to rebuild the distro and wait
• Rebuilding can take 5-10 minutes
62
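The whole customization, collected as a terminal sketch (run as root; it uses sed to delete the line instead of editing the file by hand):

    cd /export/rocks/install
    cp rocks-dist/x86_64/build/nodes/auto-kickstart.xml site-profiles/7.0/nodes/replace-auto-kickstart.xml
    # remove the rocks-boot-auto package line from the copied file
    sed -i '/rocks-boot-auto/d' site-profiles/7.0/nodes/replace-auto-kickstart.xml
    rocks create distro    # rebuilding can take 5-10 minutes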
Lab. Deployment of a virtual HPC cluster using Rocks

• If you plan to install 2 computing nodes, skip this slide and the next one
• Otherwise, you must add the head node as an execution host for Son of Grid
Engine (SGE)
• SGE is a Distributed Resource Management (DRM) software used for job scheduling
• More details about DRM systems will be given in lectures
• Adding the head node as an execution host allows SGE to use it for running applications
• But note that this is not the common approach in real clusters!
• This is only done in your testing environment in order to have at least two nodes available
where applications can be executed using SGE (head node + 1 computing node)
• To add the head node as an execution host for SGE:
• Open the file /opt/gridengine/default/common/host_aliases and add an alias for the FQDN
of your cluster (see slides 5 and 34 to remember your FQDN, and see next slide for an
example of this file). If the alias for your FQDN already exists in this file, do not make any
modification to it and continue to the next step
• Execute: “./install_execd” (be sure to accept all default answers during installation!)
• After installation, execute: “source /opt/gridengine/default/common/settings.sh”
• Some services must be restarted:
• /etc/init.d/sgemaster.myrockscluster restart
• /etc/init.d/sgeexecd.myrockscluster restart
• Finally, check that you now have one execution host with the command qstat -f (see the sketch below)
63
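The same sequence, collected as a terminal sketch (run as root on the head node; it assumes the SGE root is /opt/gridengine, as in the paths above, and the myrockscluster service names shown above):

    vim /opt/gridengine/default/common/host_aliases     # add the FQDN alias only if it is missing
    cd /opt/gridengine
    ./install_execd                                      # accept all default answers
    source /opt/gridengine/default/common/settings.sh
    /etc/init.d/sgemaster.myrockscluster restart
    /etc/init.d/sgeexecd.myrockscluster restart
    qstat -f                                             # should now list one execution host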
Lab. Deployment of a virtual HPC cluster using Rocks

• Head node as an execution host in SGE with 1 core/slot

The FQDN of this cluster is: myrockscluster.dec.udc.es

After being modified as indicated in the previous slide, the content of the
file /opt/gridengine/default/common/host_aliases is:

myrockscluster.local myrockscluster myrockscluster.dec.udc.es

The alias that has been added is the FQDN at the end of the line
(myrockscluster.dec.udc.es). Your file must have the corresponding values
according to your unique FQDN (myrockscluster.X.udc.es)

64
Lab. Deployment of a virtual HPC cluster using Rocks

• Your cluster is almost ready to install the computing nodes, but…


• How are computing nodes actually installed in Rocks? (short answer)
• Computing nodes are installed over the internal network of the cluster using the
Preboot eXecution Environment (PXE) and the Kickstart mechanism
• https://en.wikipedia.org/wiki/Preboot_Execution_Environment
• https://en.wikipedia.org/wiki/Kickstart_(Linux)
• How are computing nodes actually installed in Rocks? (long answer)
• The computing node must be configured to boot from the network using PXE
• When the node boots, the DHCP request made by the NIC of the computing node
is served by the DHCP server that is running on the head node
• The DHCP server returns an IP address to the computing node along with the IP
address of the TFTP server, also running on the head node, and the location of
boot files on this server
• When the computing node receives all this information, it contacts the TFTP
server to obtain the boot image (e.g. pxelinux.0)
• The boot image obtains all the required files (e.g. vmlinuz) from the TFTP server
to allow the execution of the CentOS installer (i.e. Anaconda)
• CentOS will be installed on a local disk of the computing node (i.e. Rocks performs
stateful provisioning)
65
Lab. Deployment of a virtual HPC cluster using Rocks

• How are computing nodes actually installed in Rocks? (long answer, cont.)
• In order to perform a non-interactive OS installation, the Anaconda installer uses a
Kickstart file to fully automate the installation
• Anaconda interprets the kickstart file, which is obtained from the head node
• Among other things, a Kickstart file contains answers to all the questions
that would be asked during the installation of CentOS
• The Kickstart file also describes what must be done from disk partitioning to
package installation, containing a list of RPM packages to be installed
• The RPM packages to be installed are also located on the head node
• Remember that the full Rocks distro is stored on the head node in
/export/rocks/install
• Concretely, RPM packages are located at /export/rocks/install/rocks-dist/x86_64/RedHat/RPMS/
• So, these packages are also served over the network
• Therefore, using PXE boot and a Kickstart-based mechanism, CentOS can be
installed on the computing nodes over the network in a fully automated way
• When you install Rocks on the head node, all this stuff is automatically
installed and configured (DHCP and TFTP servers, kickstart files, etc)
66
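Once a computing node has been added (next sections), two Rocks commands may help to see this mechanism at work on the head node (a sketch; compute-0-0 is the default name given to the first computing node):

    rocks list host boot                    # shows whether each node will install or boot its local OS on the next PXE request
    rocks list host profile compute-0-0     # dumps the kickstart/XML profile generated for that node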
COMPUTING NODE VM
Lab. Deployment of a virtual HPC cluster using Rocks

• We will create a new VM for the computing node with:


• 1 CPU core
• 2 GB of memory
• 30 GB of disk space (dynamically allocated)
• 1 vNIC

68
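As with the head node, a roughly equivalent VM can be created with VBoxManage (a sketch; note the 2 GB of memory, the single internal NIC and the network-first boot order; the names are example values):

    VBoxManage createvm --name "rocks-compute-0-0" --ostype RedHat_64 --register
    VBoxManage modifyvm "rocks-compute-0-0" --cpus 1 --memory 2048 --pae on \
        --boot1 net --boot2 disk --boot3 none --boot4 none \
        --nic1 intnet --intnet1 "rocks-private"
    VBoxManage createmedium disk --filename rocks-compute-0-0.vdi --size 30720
    VBoxManage storagectl "rocks-compute-0-0" --name "SATA" --add sata
    VBoxManage storageattach "rocks-compute-0-0" --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium rocks-compute-0-0.vdi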
Lab. Deployment of a virtual HPC cluster using Rocks

• Computing node VM

74
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the computing node VM


• Boot order:
1. Network
2. Hard disk

75
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the computing node VM


• Enable PAE/NX feature

76
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the computing node VM


• Check that hardware virtualization support is enabled

77
Lab. Deployment of a virtual HPC cluster using Rocks

• Settings of the computing node VM


• The available vNIC adapter is the private (internal) network of the cluster
• Use the “Internal Network” type provided by VirtualBox

78
Lab. Deployment of a virtual HPC cluster using Rocks

• Computing node VM: final configuration

79
ROCKS INSTALLATION ON THE
COMPUTING NODE
Lab. Deployment of a virtual HPC cluster using Rocks

• In order to allow the installation of computing nodes, Rocks provides the
following command: insert-ethers
• This command enables all the services (DHCP, TFTP) required to allow the
installation of computing nodes over the network as explained before

81
Lab. Deployment of a virtual HPC cluster using Rocks

• Select “Compute” as appliance type:


• As you can see, Rocks supports installing other types of nodes such as Login
nodes, NAS servers, etc

82
Lab. Deployment of a virtual HPC cluster using Rocks

• At this moment, the DHCP server on the head node is listening for requests on
the internal network of the cluster

83
Lab. Deployment of a virtual HPC cluster using Rocks

• Now, the computing node VM can be started


• PXE booting allows the NIC to make a DHCP request
• This request will be served (after a few seconds) by the DHCP server running on the
head node

84
Lab. Deployment of a virtual HPC cluster using Rocks

• DHCP request is served


• At this moment, the computing node receives the IP address (among other things)
• From now on, this node is part of the cluster (although it is not yet ready to use)

85
Lab. Deployment of a virtual HPC cluster using Rocks

• You can check on the head node that the new computing node has been
detected

86
Lab. Deployment of a virtual HPC cluster using Rocks

• After a while, you will see the new computing node with a hostname assigned

87
Lab. Deployment of a virtual HPC cluster using Rocks

• Meanwhile, the computing node has started the OS installation


• Actually, you are probably not seeing this screen…
• Instead, you have obtained an error during installation…(see next slide)

88
Lab. Deployment of a virtual HPC cluster using Rocks

• Computing nodes in Rocks 7 require a minimum of 4 GB of memory, but note
that we have configured the VM with only 2 GB!
• This was done on purpose to allow the execution of the head node (1 GB) and the
computing node (2 GB) on a host system with only 4 GB of memory, leaving 1
GB free for the host OS
• If your system has more than 4 GB of memory (e.g. 8 GB), then you can
shutdown the computing node VM that has failed, increase its memory to 4
GB in VirtualBox and boot it again
• The computing node will retry the OS installation, which should now work
• Skip next slide
• Even if your system has more than 4 GB of memory, you can also follow the next
instructions to allow the installation of the computing node with only 2 GB
• If your system has 4 GB of memory, jump to next slide for a workaround
that allows you to install the computing node with only 2 GB of memory
• If your system has less than 4 GB of memory, and you do not have access to
any other computer, in any way, with at least that amount of memory
• Email me (roberto.rey.exposito@udc.es) to receive an alternative

89
Lab. Deployment of a virtual HPC cluster using Rocks

• Rocks 7 has changed the way computing nodes are installed compared to
previous versions
• Rocks 7 downloads the RPM packages from the head node and then creates a
local repository in a RAM disk to set up a tracker that allows re-serving the RPMs
that computing nodes have already downloaded
• This means that a computing node must have enough memory to store the
installer itself (Anaconda) and the RPM packages that it will install
• And the minimum OS installation requires at least 4 GB of memory for both the
installer and the packages
• However, the tracker can be disabled to make Rocks 7 behave as in previous
versions, where the packages are stored on disk instead of in memory
• Disabling this feature decreases the minimum memory requirement
of computing nodes to 2 GB
• In order to disable the tracker on compute nodes, execute on the head node:
• rocks set host installaction compute-0-0 action="install notracker"
• rocks set host attr compute-0-0 UseTracker False
• Then, boot the computing node VM again to retry the OS installation
90
Lab. Deployment of a virtual HPC cluster using Rocks

• Finally, you should get this screen, with the computing node being installed

91
Lab. Deployment of a virtual HPC cluster using Rocks

• At this point, the head node already knows that the computing node obtained
the kickstart file successfully
• This means that the node is now installing the OS
• See the symbol “*” in brackets and compare with slide 87

NOTE: at this point, you can stop the insert-ethers command by pressing F8

92
Lab. Deployment of a virtual HPC cluster using Rocks

• After a while, the computing node should be finishing the installation


• If not, then you can have another coffee 

93
Lab. Deployment of a virtual HPC cluster using Rocks

• When the installation finishes, the computing node reboots and should be up
& running!
• Note that the computing node will continue to boot over the network using PXE each time it
is rebooted (DO NOT change the boot order in your VM)
• However, the head node already knows that the computing node is part of the cluster, so
the response will order the compute node to boot from its local disk instead of performing
an OS reinstallation

94
LAST STEPS
Lab. Deployment of a virtual HPC cluster using Rocks

• Execute the following useful commands on the head node:


• To see all the available nodes in the cluster:
• rocks list host
• If you want to execute the same command on all the available nodes:
• rocks run host “command”
• E.g. rocks run host “hostname”
• List the network configuration for all the nodes
• rocks list host interface
• Check that now you have 2 execution hosts for SGE
• qstat -f
• Finally, check ssh connectivity with one computing node
• ssh compute-0-0

96
Lab. Deployment of a virtual HPC cluster using Rocks

Note that, at this point, you are logged in to the computing node

97
Lab. Deployment of a virtual HPC cluster using Rocks

• Optional step
• All the tasks you will have to perform in the next lab can be resolved
using only the terminal, so the GUI available on the head node is
completely useless
• But it still consumes useful system resources!
• Remember that the head node VM only has 1 CPU core and 1 GB
of memory, while computing node VMs have more memory but
they do not have any GUI installed by default
• When executing the terminal within the GUI of the head node to
perform the tasks, you may notice that the VM runs rather slow
• It would be interesting to be able to connect to the head node from your
PC/laptop through ssh
• This would allow a faster response from the terminal
• To do so, you need to configure the port forwarding of the NAT Network
you are using in VirtualBox for your head node
• See the next slides if you are interested in configuring this (recommended)
• Otherwise, you can jump to slide 102
98
Lab. Deployment of a virtual HPC cluster using Rocks

• Configure port forwarding for the NAT network you used for your
head node (see slide 8)
• Go to File -> Preferences -> Network -> NAT Networks
• Click “Port Forwarding”

99
Lab. Deployment of a virtual HPC cluster using Rocks

• Add a new port forwarding rule


• Protocol: you must choose TCP (ssh relies on this protocol)
• Host IP: you must use the IP of the localhost interface (127.0.0.1)
• Host Port: you can choose any unused port of your system (>1024)
• Guest IP: you must use the IP of the public interface of your head node (see slides
29-31)
• Guest Port: you must use port 22 (ssh listens on this port by default)

100
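The same rule can also be added from the command line (a sketch using the example values from the next slide: NAT network NatNetwork, host port 5679 and head node IP 10.0.2.4; adjust them to your own setup):

    VBoxManage natnetwork modify --netname NatNetwork \
        --port-forward-4 "ssh-head:tcp:[127.0.0.1]:5679:[10.0.2.4]:22"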
Lab. Deployment of a virtual HPC cluster using Rocks

• Check ssh connectivity to the head node from your PC/laptop


• When connecting to port 5679 on 127.0.0.1, the ssh connection is forwarded by
VirtualBox to port 22 on 10.0.2.4, where my head node is listening for ssh
requests

101
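From a terminal on your PC/laptop, the check boils down to (using the example values above; replace 5679 with your chosen host port):

    ssh -p 5679 root@127.0.0.1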
Lab. Deployment of a virtual HPC cluster using Rocks

Congrats, your virtual HPC cluster based on Rocks is up & running!

102
Lab. Deployment of a virtual HPC cluster using Rocks

It is strongly recommended to make a snapshot of your cluster VMs right now to
save the current state and have a restore point in case of any future issue!

103
Lab. Deployment of a virtual HPC cluster using Rocks

• Select the appropriate VM and click “Snapshots”


• Then you are able to manage the snapshots for this VM (create, delete, restore…)

104
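Snapshots can also be taken from the command line (a sketch; the VM and snapshot names are examples):

    VBoxManage snapshot "rocks-head" take "cluster-installed"
    VBoxManage snapshot "rocks-compute-0-0" take "cluster-installed"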
END
