Lab. Deployment of a virtual HPC cluster using Rocks
• Check that you have an existing NAT network (File -> Preferences -> Network -> NAT Networks)
• Otherwise, you must create a new NAT network (a CLI alternative is sketched below)
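If you prefer the command line, a minimal sketch using VBoxManage is shown below; the network name “NatNetwork” and the address range are only examples (assumptions), adjust them to match what you see or create in the GUI:

  # list the NAT networks already defined in VirtualBox
  VBoxManage natnetwork list

  # create a new NAT network if none exists (example name and range)
  VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable --dhcp on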
HEAD NODE VM
• Head node VM
ROCKS INSTALLATION ON THE HEAD NODE
• The grayed-out selections indicate that something else must be done first
• In this case, the public network and the hostname/FQDN must be configured first
• So, click on “Network & Hostname” (System section)
• Remember that the vNIC adapter for the public interface is the second one (see slide 22)
• At this moment, this adapter should be in a disconnected state
• Click on “Configure”
• NOTE: configure only the interface for the public network
Starting address used by the DHCP server on the NAT Network
DO NOT forget to click “Apply” after changing the hostname!
• If you are successful, your home screen should now reflect that the network is up and configured
• Now it is time to configure the private network of the cluster
NOTE: set “Cluster Name” to the same value as your hostname/FQDN BUT without “.udc.es”. The remaining fields that can be changed are not relevant; leave the default values.
DO NOT select any roll other than the ones specified above
• Installation begins
• During the installation, set the root password
DO NOT create a user now. We will see later how users can be created after the cluster installation.
HEAD NODE CONFIGURATION
• Check WordPress
NIC of the private network
NIC of the public network
Private (internal) and public (external) networks of the cluster
• If you plan to install 2 computing nodes, skip this slide and the next one
• Otherwise, you must add the head node as an execution host for Son of Grid Engine (SGE)
• SGE is a Distributed Resource Management (DRM) software used for job scheduling
• More details about DRM systems will be given in lectures
• Adding the head node as an execution host allows SGE to use it for running applications
• But note that this is not the common approach in real clusters!
• This is only done in your testing environment in order to have at least two nodes available where applications can be executed using SGE (head node + 1 computing node)
• To add the head node as an execution host for SGE (a consolidated command sketch is shown after this list):
• Open the file /opt/gridengine/default/common/host_aliases and add an alias for the FQDN of your cluster (see slides 5 and 34 to remember your FQDN, and see the next slide for an example of this file). If the alias for your FQDN already exists in this file, do not make any modification to it and continue to the next step
• Execute: “./install_execd” (be sure to accept all default answers during installation!)
• After installation, execute: “source /opt/gridengine/default/common/settings.sh”
• Some services must be restarted:
• /etc/init.d/sgemaster.myrockscluster restart
• /etc/init.d/sgeexecd.myrockscluster restart
• Finally, check that you now have one execution host with the command qstat -f
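A consolidated sketch of the steps above, assuming install_execd is run from the SGE root directory /opt/gridengine (the default location in Rocks 7):

  cd /opt/gridengine
  # edit default/common/host_aliases and add the alias for your FQDN if it is missing
  ./install_execd                                     # accept all default answers
  source /opt/gridengine/default/common/settings.sh
  /etc/init.d/sgemaster.myrockscluster restart
  /etc/init.d/sgeexecd.myrockscluster restart
  qstat -f                                            # should now list one execution host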
After being modified as indicated in the previous slide, the content of the file /opt/gridengine/default/common/host_aliases is shown in the screenshot on this slide, with the added alias highlighted in bold. Your file must have the corresponding values according to your unique FQDN (myrockscluster.X.udc.es).
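Since the screenshot is not reproduced here, the following is only a hypothetical illustration of the file format (each line starts with the name SGE already uses for the host, followed by its aliases); your actual line depends on your FQDN and on what Rocks already wrote in the file:

  # hypothetical example only
  myrockscluster  myrockscluster.X.udc.es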
• How are computing nodes actually installed in Rocks? (long answer, cont.)
• In order to perform a non-interactive OS installation, the Anaconda installer uses a Kickstart file to fully automate the installation
• Anaconda interprets the Kickstart file, which is obtained from the head node
• Among other things, a Kickstart file contains the answers to all the questions that would otherwise be asked during the installation of CentOS
• The Kickstart file also describes what must be done, from disk partitioning to package installation, and contains the list of RPM packages to be installed
• The RPM packages to be installed are also located on the head node
• Remember that the full Rocks distro is stored on the head node in /export/rocks/install
• Concretely, RPM packages are located at /export/rocks/install/rocks-dist/x86_64/RedHat/RPMS/
• So, these packages are also served over the network
• Therefore, using PXE boot and a Kickstart-based mechanism, CentOS can be installed on the computing nodes over the network in a fully automated way (a minimal illustrative Kickstart fragment is shown after this list)
• When you install Rocks on the head node, all this stuff is automatically installed and configured (DHCP and TFTP servers, Kickstart files, etc.)
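As an illustration only (Rocks generates its Kickstart files dynamically, so this is not the actual file a compute node receives), a minimal Kickstart fragment could look like the following; the 10.1.1.1 address is an assumption for the head node's private IP:

  # minimal illustrative Kickstart fragment
  install
  url --url=http://10.1.1.1/install/rocks-dist/x86_64
  lang en_US.UTF-8
  rootpw --iscrypted <hash>
  clearpart --all --initlabel
  autopart
  %packages
  @core
  %end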
COMPUTING NODE VM
• Computing node VM
ROCKS INSTALLATION ON THE COMPUTING NODE
• At this moment, the DHCP server on the head node is listening for requests on
the internal network of the cluster
• You can check on the head node that the new computing node has been
detected
• After a while, you will see the new computing node with a hostname assigned
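As an alternative check from a terminal on the head node, the standard Rocks listing commands (assuming the usual command-line tools are available) should show the new node and the MAC/IP recorded for it:

  rocks list host                           # compute-0-0 should appear in the list
  rocks list host interface compute-0-0     # MAC address and private IP assigned to the node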
• Rocks 7 has changed the way computing nodes are installed compared to previous versions
• Rocks 7 downloads the RPM packages from the head node and then creates a local repository in a RAM disk to set up a tracker that allows RPMs already downloaded by a computing node to be re-served to other nodes
• This means that a computing node must have enough memory to store the installer itself (Anaconda) and the RPM packages that it will install
• And the minimum OS installation requires at least 4 GB of memory for both the installer and the packages
• However, the tracker can be disabled to make Rocks 7 behave as in previous versions, where the packages are stored on disk instead of in memory
• Disabling this feature decreases the minimum memory requirement of computing nodes to 2 GB
• In order to disable the tracker on compute nodes, execute on the head node (an optional verification sketch is shown after this list):
• rocks set host installaction compute-0-0 action="install notracker"
• rocks set host attr compute-0-0 UseTracker False
• Then, boot the computing node VM again to retry the OS installation
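After running the two commands above, the change can optionally be verified from the head node; the listing commands below are assumed to be available in Rocks 7:

  rocks list host compute-0-0                             # the INSTALLACTION column should read "install notracker"
  rocks list host attr compute-0-0 | grep -i usetracker   # should show UseTracker with value False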
• Finally, you should get this screen, with the computing node being installed
• At this point, the head node already knows that the computing node obtained
the kickstart file successfully
• This means that the node is now installing the OS
• See the symbol “*” in brackets and compare with slide 87
• When the installation finishes, the computing node reboots and should be up & running!
• Note that the computing node will continue to boot over the network using PXE each time it is rebooted (DO NOT change the boot order in your VM)
• However, the head node already knows that the computing node is part of the cluster, so the PXE response will order the compute node to boot from its local disk instead of performing an OS reinstallation
LAST STEPS
• Optional step
• All the tasks you will have to perform in the next lab can be done using only the terminal, so the GUI available on the head node is not really needed
• But it still consumes useful system resources!
• Remember that the head node VM only has 1 CPU core and 1 GB of memory, while computing node VMs have more memory but do not have any GUI installed by default
• When executing the terminal within the GUI of the head node to perform the tasks, you may notice that the VM runs rather slowly
• It would be useful to be able to connect to the head node from your PC/laptop through ssh
• This would allow a faster response from the terminal
• To do so, you need to configure the port forwarding of the NAT Network you are using in VirtualBox for your head node
• See the next slides if you are interested in configuring this (recommended)
• Otherwise, you can jump to slide 102
• Configure port forwarding for the NAT network you used for your head node (see slide 8)
• Go to File -> Preferences -> Network -> NAT Networks
• Click “Port Forwarding” (an equivalent CLI sketch is shown below)
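The same forwarding rule can also be created from the command line; the rule name, the host port 2222 and the guest IP 10.0.2.15 below are assumptions, use the IP that your head node actually obtained on the NAT network:

  VBoxManage natnetwork modify --netname NatNetwork \
      --port-forward-4 "ssh-head:tcp:[]:2222:[10.0.2.15]:22"

  # then, from your PC/laptop:
  ssh -p 2222 root@127.0.0.1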
END