Design and Deployment Guide for Windows HPC Server 2008

Microsoft Corporation
Published: September 2008

Abstract
This guide provides detailed information and step-by-step procedures for designing and installing a high performance computing cluster using Windows® HPC Server 2008. You can use this guide to first plan the deployment of your HPC cluster, and when you are ready to deploy you can follow the procedures to configure the head node, add compute nodes to the cluster, and verify that your cluster deployment has been successful.

Copyright Information
Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. © 2008 Microsoft Corporation. All rights reserved. Microsoft Active Directory, Windows, Windows PowerShell, Windows Server, and Windows Vista are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.

Contents
Design and Deployment Guide for Windows HPC Server 2008..........4
Checklist: Deploy an HPC Cluster (Overview)..........4
Step 1: Prepare for Your Deployment..........5
Step 2: Deploy the Head Node..........11
Step 3: Configure the Head Node..........13
Step 4: Add Compute Nodes to the Cluster..........20
Step 5: Run Diagnostic Tests on the Cluster..........26
Step 6: Run a Test Job on the Cluster..........26
Additional Resources..........30
Appendices..........30
Appendix 1: HPC Cluster Networking..........31
Appendix 2: Creating a Node XML File..........42
Appendix 3: Node Template Tasks and Properties..........50
Appendix 4: Job Template Properties..........58
Appendix 5: Scriptable Power Control Tools..........64
Appendix 6: Using HPC PowerShell..........65

Design and Deployment Guide for Windows HPC Server 2008
This guide provides conceptual information for planning the deployment of a high performance computing cluster using Windows® HPC Server 2008. It also provides step-by-step procedures for deploying the head node in your cluster, adding compute nodes, and verifying that your deployment was successful.

Note
You can configure your HPC cluster for high availability by installing the head node in the context of a failover cluster. If the server that is acting as the head node fails, the other server in the failover cluster automatically begins acting as the head node (through a process known as failover). For more information about running an HPC cluster with failover clustering, see http://go.microsoft.com/fwlink/?LinkId=123894.

Checklist: Deploy an HPC Cluster (Overview)
The following checklist describes the overall process of designing and deploying a Windows HPC Server 2008 cluster. Each task in the checklist is linked to the section in this document that describes the steps to perform the task.

Step 1: Prepare for Your Deployment
Before you start deploying your HPC cluster, review the list of prerequisites and initial considerations.

Step 2: Deploy the Head Node
Deploy the head node by installing Windows Server 2008 and HPC Pack 2008.

Step 3: Configure the Head Node
Configure the head node by following the steps in the configuration to-do list.

Step 4: Add Compute Nodes to the Cluster
Add nodes to the cluster by deploying them from bare metal, by importing an XML file, or by manually configuring them.

Step 5: Run Diagnostic Tests on the Cluster
Run diagnostic tests to verify that the deployment of the cluster was successful.

Step 6: Run a Test Job on the Cluster
Run some basic jobs on the cluster to verify that the cluster is operational.

Step 1: Prepare for Your Deployment
The first step in the deployment of your HPC cluster is to make important decisions, such as deciding how you will be adding nodes to your cluster and choosing a network topology for your cluster. The following checklist describes the steps involved in preparing for your deployment.

Checklist: Prepare for your deployment

1.1. Review initial considerations and system requirements
Review the list of initial considerations and system requirements to ensure that you have all the necessary hardware and software components to deploy an HPC cluster.

1.2. Decide how to add compute nodes to your cluster
Decide if you will be adding compute nodes to your cluster from bare metal, as preconfigured nodes, or using an XML file.

1.3. Choose the Active Directory domain for your cluster
Choose the Active Directory® domain to which you will join the head node and compute nodes of your HPC cluster.

1.4. Choose a user account for installation and diagnostics
Choose an existing domain account with enough privileges to perform installation and diagnostics tasks.

1.5. Choose a network topology for your cluster
Choose how the nodes in your cluster will be connected, and how the cluster will be connected to your enterprise network.

1.6. Prepare for multicast (optional)
If you will be deploying nodes from bare metal and want to multicast the operating system image that you will be using during deployment, configure your network switches appropriately.

1.7. Prepare for the integration of scriptable power control tools (optional)
If you want to use your own power control tools to start, shut down, and reboot compute nodes remotely, obtain and test all the necessary components of your power control tools.

1.1. Review initial considerations and system requirements
The following sections list some initial considerations that you need to review, as well as hardware and software requirements for Windows HPC Server 2008.

Initial considerations
Review the following initial considerations before you deploy your HPC cluster.

Compatibility with previous versions
The following list describes compatibility between Windows HPC Server 2008 and Windows Compute Cluster Server 2003:
• Windows HPC Server 2008 provides application programming interface (API)-level compatibility for applications that are integrated with Windows Compute Cluster Server 2003. These applications might, however, require changes to run on Windows Server® 2008. If you encounter problems running your application on Windows Server 2008, you should consult your software vendor.
• Windows HPC Server 2008 supports job submission from Windows Compute Cluster Server 2003 clients, including jobs that are submitted through the use of the Compute Cluster Job Manager, the command-line tools, and the COM APIs.
• The Windows HPC Server 2008 client tools, including the cluster administration console (HPC Cluster Manager), the job scheduling console (HPC Job Manager), the command-line tools, and the APIs, cannot be used to manage or submit jobs to a Windows Compute Cluster Server 2003 cluster.
• A side-by-side installation of Windows HPC Server 2008 and Windows Compute Cluster Server 2003 on the same computer is not supported. This includes the Windows HPC Server 2008 client utilities.
• The upgrade of a Windows Compute Cluster Server 2003 head node to a Windows HPC Server 2008 head node is not supported.
• Clusters that have both Windows Compute Cluster Server 2003 nodes and Windows HPC Server 2008 nodes are not supported.

Server roles added during installation
The installation of HPC Pack 2008 adds the following server roles to the head node:
• Dynamic Host Configuration Protocol (DHCP) Server, to provide IP addresses and related information for compute nodes.
• Windows Deployment Services, to deploy compute nodes remotely.
• File Services, to manage shared folders.
• Network Policy and Access Services, which enables Routing and Remote Access so that network address translation (NAT) services can be provided to the cluster nodes.

Hardware requirements
Hardware requirements for Windows HPC Server 2008 are very similar to those for the 64-bit editions of Windows Server 2008.

Note
For more information about installing Windows Server 2008, including system requirements, see Installing Windows Server 2008 (http://go.microsoft.com/fwlink/?LinkID=119578).

Processor (x64-based):
• Minimum: 1.4 GHz
• Recommended: 2 GHz or faster

RAM:
• Minimum: 512 MB
• Recommended: 2 GB or more

Available disk space:
• Minimum: 50 GB
• Recommended: 80 GB or more

Drive:
• DVD-ROM drive

Network adapters:
• The number of network adapters on the head node and on the compute nodes depends on the network topology that you choose for your cluster. For more information about the different HPC cluster network topologies, see Appendix 1: HPC Cluster Networking.

Software requirements
The following list outlines the software requirements for the head node and the compute nodes in a Windows HPC Server 2008 cluster:
• Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008
• Microsoft HPC Pack 2008

Important
Microsoft HPC Pack 2008 cannot be installed on any edition of Windows Server 2008 R2. It can only be installed on Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008.

To enable users to submit jobs to your HPC cluster, you can install the utilities included with Microsoft HPC Pack 2008 on client computers. Those client computers must be running any of the following operating systems:
• Windows XP Professional with Service Pack 3 or later (x86- or x64-based)

• Windows Vista® Enterprise, Windows Vista Business, Windows Vista Home, or Windows Vista Ultimate
• Windows Server 2003 Standard Edition or Windows Server 2003 Enterprise Edition with Service Pack 2 or later (x86- or x64-based)
• Windows Server 2003, Compute Cluster Edition
• Windows Server 2003 R2 Standard Edition or Windows Server 2003 R2 Enterprise Edition (x86- or x64-based)

1.2. Decide how to add compute nodes to your cluster
There are three ways to add compute nodes to your cluster:
• From bare metal. The operating system and all the necessary HPC cluster components are automatically installed on each compute node as it is added to the cluster. No manual installation of the operating system or other software is required.
• Add preconfigured compute nodes. The compute nodes are already running Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008, and Microsoft HPC Pack 2008 is manually installed on each node.
• Import a node XML file. An XML file that contains a list of all the nodes that will be deployed is used. This XML file can be used to add nodes from bare metal or from preconfigured nodes. For more information about node XML files, see Appendix 2: Creating a Node XML File.

The following is a list of details to take into consideration when choosing how to add nodes to your HPC cluster:
• When deploying nodes from bare metal, Windows HPC Server 2008 automatically generates computer names for your compute nodes. During the configuration process, you will be required to specify the naming convention to use when automatically generating computer names for the new nodes. Compute nodes are assigned their computer name in the order that they are deployed.
• If you want to add compute nodes from bare metal and assign computer names in a different way, you can use a node XML file. For more information about node XML files, see Appendix 2: Creating a Node XML File.
• If you want to add preconfigured nodes to your cluster, you will need to install Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008, on each node (if not already installed), as well as Microsoft HPC Pack 2008.
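As noted above, a node XML file is essentially a list of the nodes to be deployed. The sketch below (Python, for illustration only) shows one way to generate such a list programmatically; the element and attribute names used here (Nodes, Node, Name, Domain) are assumptions for illustration, not the actual schema, which is documented in Appendix 2: Creating a Node XML File.

```python
# Illustrative sketch only: build an XML list of node entries.
# The element and attribute names are placeholders; see Appendix 2
# for the real node XML schema used by Windows HPC Server 2008.
import xml.etree.ElementTree as ET

def build_node_xml(node_names, domain):
    """Return an XML string listing the nodes to be imported."""
    root = ET.Element("Nodes")
    for name in node_names:
        ET.SubElement(root, "Node", Name=name, Domain=domain)
    return ET.tostring(root, encoding="unicode")

xml_text = build_node_xml(["COMPUTE-001", "COMPUTE-002"], "CONTOSO")
print(xml_text)
```

Generating the file from a script like this is useful when the list of nodes is long or comes from an inventory system, because it avoids typing each entry by hand.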


1.3. Choose the Active Directory domain for your cluster
The head node and the compute nodes in your HPC cluster must be members of an Active Directory domain. Before deploying your cluster, you must choose the Active Directory domain that you will use for your HPC cluster.
If you do not have an Active Directory domain to which you can join your cluster, or if you prefer not to join an existing domain, you can install the Active Directory Domain Services role on the head node and then configure a domain controller on that node. For more information about installing the Active Directory Domain Services role on a computer that is running Windows Server 2008, see the AD DS Installation and Removal Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkID=119580).

Caution
If you choose to install and configure an Active Directory domain controller on the head node, consult with your network administrator about the correct way to isolate the new Active Directory domain from the enterprise network, or how to join the new domain to an existing Active Directory forest.

1.4. Choose a user account for installation and diagnostics
During the configuration process of your HPC cluster, you must provide credentials for a domain user account that will be used for installation and diagnostics. You must choose an existing account or create a new account before starting your cluster deployment.
The following is a list of details to take into consideration when choosing the user account:
• The user account that you choose must be a domain account with enough privileges to create Active Directory computer accounts for the compute nodes. Alternatively, you can create the computer accounts manually or ask your domain administrator to create them for you.
• If part of your deployment requires access to resources on the enterprise network, the user account must have the necessary permissions to access those resources (for example, installation files that are available on a network server).
• If you want to restart nodes remotely from the cluster administration console (HPC Cluster Manager), the account must be a member of the local Administrators group on the head node. This requirement is only necessary if you do not have scriptable power control tools that you can use to remotely restart the compute nodes.


1.5. Choose a network topology for your cluster
Windows HPC Server 2008 supports five cluster topologies. These topologies are distinguished by how the compute nodes in the cluster are connected to each other and to the enterprise network. The five supported cluster topologies are:
• Topology 1: Compute Nodes Isolated on a Private Network
• Topology 2: All Nodes on Enterprise and Private Networks
• Topology 3: Compute Nodes Isolated on Private and Application Networks
• Topology 4: All Nodes on Enterprise, Private, and Application Networks
• Topology 5: All Nodes on an Enterprise Network
For more information about each network topology, see Appendix 1: HPC Cluster Networking.

When you are choosing a network topology, you must take into consideration your existing network infrastructure:
• Decide which network in the topology that you have chosen will serve as the enterprise network, the private network, and the application network.
• Do not have the network adapter that is connected to the enterprise network on the head node in an automatic configuration (that is, the IP address for that adapter does not start with 169.254). That adapter must have a valid IP address, dynamically or manually assigned (static).
• If you choose a topology that includes a private network, and you are planning to add nodes to your cluster from bare metal:
  • Ensure that there are no Pre-Boot Execution Environment (PXE) servers on the private network.
  • If you want to use an existing DHCP server for your private network, ensure that it is configured to recognize the head node as the PXE server in the network.
• If you want to enable DHCP server on your head node for the private or application networks and there are other DHCP servers connected to those networks, you must disable those DHCP servers.
• If you have an existing Domain Name System (DNS) server connected to the same network as the compute nodes, no action is necessary, but the compute nodes will be automatically deregistered from that DNS server.
• Contact your system administrator to determine if Internet Protocol security (IPsec) is enforced on your domain through Group Policy. If IPsec is enforced on your domain through Group Policy, you may experience issues during deployment. A workaround is to make your head node an IPsec boundary server so that compute nodes can communicate with the head node during PXE boot.

1.6. Prepare for multicast (optional)
If you will be deploying nodes from bare metal and want to multicast the operating system image that you will be using during deployment, we recommend that you prepare for multicast by doing the following:

• Enabling Internet Group Management Protocol (IGMP) snooping on your network switches, if this feature is available. This will help to reduce multicast traffic.
• Disabling Spanning Tree Protocol (STP) on your network switches, if this feature is enabled.

Note
For more information about these settings, contact your network administrator or your networking hardware vendor.

1.7. Prepare for the integration of scriptable power control tools (optional)
The cluster administration console (HPC Cluster Manager) includes actions to start, shut down, and reboot compute nodes remotely. These actions are linked to a script file (CcpPower.cmd) that performs these power control operations using operating system commands. You can replace the default operating system commands in that script file with your own power control scripts, such as Intelligent Platform Management Interface (IPMI) scripts that are provided by your vendor of cluster solutions.
In preparation for this integration, you must obtain all the necessary scripts, dynamic-link library (DLL) files, and all other components of your power control tools. After you have obtained all the necessary components, test them independently and ensure that they work as intended on the computers that you will be deploying as compute nodes in your cluster. For information about modifying CcpPower.cmd to integrate your own scriptable power control tools, see Appendix 5: Scriptable Power Control Tools.

Step 2: Deploy the Head Node
The next step in the deployment of your HPC cluster is to deploy the head node. The following checklist describes the steps involved in deploying the head node.

Checklist: Deploy the head node

2.1. Install Windows Server 2008 on the head node computer
Install Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008, on the computer that will act as the head node.

2.2. Join the head node computer to a domain
Join the computer that will act as the head node to a Microsoft Active Directory domain.

2.3. Install Microsoft HPC Pack 2008 on the head node computer
Install Microsoft HPC Pack 2008 on the computer that will act as the head node, using the installation media or from a network location.

2.1. Install Windows Server 2008 on the head node computer
To deploy the head node of your HPC cluster, you must start by installing Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008, on the computer that will act as the head node. For more information about installing Windows Server 2008, including system requirements, see Installing Windows Server 2008 (http://go.microsoft.com/fwlink/?LinkID=119578).

Important
We strongly recommend that you perform a clean installation of Windows Server 2008 before installing Microsoft HPC Pack 2008. If you want to install Microsoft HPC Pack 2008 on an existing installation of Windows Server 2008, remove all server roles first and then follow the procedures in this guide.

Note
It is recommended that you obtain the latest device drivers for your head node computer from the Web sites of your hardware vendors.

2.2. Join the head node computer to a domain
As described in the Step 1: Prepare for Your Deployment section, the head node must be a member of an Active Directory domain. After you have installed Windows Server 2008 on the head node, manually join the head node to an existing Active Directory domain.

2.3. Install Microsoft HPC Pack 2008 on the head node computer
After Windows Server 2008 is installed on the head node computer, and the head node is joined to an Active Directory domain, you can install Microsoft® HPC Pack 2008 on the head node.

To install Microsoft HPC Pack 2008 on the head node computer
1. To start the Microsoft HPC Pack 2008 installation wizard on the computer that will act as the head node, run setup.exe from the HPC Pack 2008 installation media or from a network location.

2. On the Getting Started page, click Next.
3. On the Select Installation Type page, click Create a new HPC cluster by creating a head node, and then click Next.
4. On the Microsoft Software License Terms page, read or print the software license terms in the license agreement, and accept or reject the terms of that agreement. If you accept the terms, click Next.
5. Continue to follow the steps in the installation wizard.

Step 3: Configure the Head Node
After you have deployed the head node of your HPC cluster, you must configure the head node by following the configuration to-do list in HPC Cluster Manager.

Checklist: Configure the head node
The following checklist includes the items in the configuration to-do list in HPC Cluster Manager that you need to complete in order to configure your head node.

3.1. Configure the HPC cluster network
Configure the cluster network by using the Network Configuration Wizard.

3.2. Provide installation credentials
Specify which credentials to use for system configuration and when adding new nodes to the cluster.

3.3. Configure the naming of new nodes
Specify the naming convention to use when generating names automatically for new compute nodes.

3.4. Create a node template
Create a template that defines the steps to follow when configuring a compute node.

3.5. Add drivers for the operating system images (optional)
If you will be deploying compute nodes from bare metal and those nodes require special device drivers, add drivers for the operating system images that you created for your node template in the previous task.

3.6. Add or remove users (optional)
If you will be giving access to the cluster to other members of your organization, add or remove users or administrators for your cluster.

3.1. Configure the HPC cluster network
The HPC cluster network configuration is the first step in the configuration process of your head node. The HPC cluster network is configured by following the Network Configuration Wizard in HPC Cluster Manager. When configuring the HPC cluster network, you must choose the network topology that you have selected for your cluster, as described in "1.5. Choose a network topology for your cluster" in Step 1: Prepare for Your Deployment.

Important
Before you start configuring the HPC cluster network in HPC Cluster Manager, ensure that the head node and the computers that you will add as compute nodes to the cluster are physically connected according to the network topology that you have chosen for your cluster. Also, ensure that you are able to identify to which network each one of the network adapters in the head node is connected. Use the IP address, domain information, and Media Access Control (MAC) address of each adapter as a reference.

To configure the HPC cluster network
1. If HPC Cluster Manager is not already open on the head node, open it. Click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In the To-do List, click Configure your network. The Network Configuration Wizard appears.
3. On the Network Topology Selection page, click the topology that you have chosen for your cluster, and then click Next.
4. On the Enterprise Network Adapter Selection page, in the Network adapter list, click the name of the network adapter that is physically connected to your enterprise network, and then click Next.

Important
To ensure that you are selecting the correct network adapter, use the information displayed on this wizard page after you select a network adapter from the list.

5. If you chose topology number 5 for your cluster, jump to step 9 in this procedure. Otherwise, repeat step 4 for the private network adapter.
6. On the Private Network Configuration page, type a static IP address and a subnet mask for the head node, and then select network services for that network:
a. To give access to resources on the enterprise network to compute nodes that are connected to this network, select the Enable network address translation (NAT) on the head node check box.
b. To enable DHCP services for the nodes connected to this network, select the Enable DHCP and define a scope check box, and then type the starting and ending IP addresses for the DHCP scope. If the Gateway and DNS server IP addresses have not been automatically detected, type each of these addresses.

Note
For more information about enabling NAT and DHCP on your cluster network, see "HPC network services" in Appendix 1: HPC Cluster Networking.

7. Click Next after you are done configuring the private network.
8. Repeat steps 5, 6, and 7 for the application network adapter. Click Next after you are done configuring the application network.
9. On the Firewall Setup page, select the firewall setting for the cluster:
a. To apply firewall settings automatically to head nodes and compute nodes on each network, click ON for that network.
b. To disable the firewall on a network, click OFF.
c. If you do not want to change any firewall settings, click Do not manage firewall settings.

Note
For more information about firewall settings for your cluster, see "Windows Firewall configuration" in Appendix 1: HPC Cluster Networking.

10. On the Review page, review the list of configuration items. If you want to change any of the settings, navigate to the appropriate wizard page by clicking it on the navigation pane or by clicking Previous. To apply your settings, verify them and then click Configure.
11. After the network configuration process is completed, on the Configuration Summary page, you can save a report of the network configuration by clicking Save the configuration report.
12. To close the wizard, click Finish.

3.2. Provide installation credentials
Installation credentials must be provided in order to configure new compute nodes. These credentials will be used when installing the operating system and applications, and when adding nodes to the Active Directory domain. Also, these same credentials will be used when running diagnostic tests on the cluster nodes. For more information, see "1.4. Choose a user account for installation and diagnostics" in Step 1: Prepare for Your Deployment.

To provide installation credentials
1. In the To-do List, click Provide installation credentials. The Installation Credentials dialog box appears.
2. Type the user name, including the domain (DOMAIN\User), and then the password for the domain user account that you will use to deploy compute nodes and to run diagnostic tests.

3. To save the specified credentials, click OK.

Important
The account must be a domain account with enough privileges to create Active Directory computer accounts for the compute nodes. Alternatively, you can create the computer accounts manually or ask your domain administrator to create them for you.

Important
If part of your deployment requires access to resources on the enterprise network, the account should have the necessary permissions to access those resources.

Important
If you want to restart nodes remotely from the cluster administration console (HPC Cluster Manager), the account must be added as an HPC cluster administrator on the head node. This requirement is only necessary if you do not have scriptable power control tools that you can use to remotely restart the compute nodes.

3.3. Configure the naming of new nodes
If you deploy compute nodes from bare metal, and you are not using a node XML file to import nodes to the cluster, Windows HPC Server 2008 will automatically generate computer names for the new nodes that are being deployed. You need to specify how those names will be generated by defining a naming series. The naming series is defined by selecting a root name and the starting number that will accompany that name. The starting number is enclosed in percentage signs (%). For example: ClusterNode%1000%.
When you deploy compute nodes from bare metal, nodes will be named in sequence, as they become available. For example, if you deploy three nodes after specifying the following naming series: ClusterNode-%100%, those nodes will be assigned these names:
• ClusterNode-100
• ClusterNode-101
• ClusterNode-102

Important
Compute node names are limited to 15 characters. When specifying the compute node naming series, take into account the number of compute nodes in your deployment and ensure that the series that you specify will not generate names that exceed 15 characters. For example, if your deployment will consist of 1,000 compute nodes and your starting number is 1, your root name cannot have more than 12 characters; otherwise, your node number 1,000 will need a name that consists of 16 characters.
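The naming rules described above can be checked before deployment. The following sketch (Python, for illustration; the helper names are not part of Windows HPC Server 2008) expands a naming series of the form Root%Start% the way the examples above describe, and flags any generated name that would exceed the 15-character limit:

```python
# Sketch: expand a compute-node naming series such as "ClusterNode-%100%"
# and verify that no generated name exceeds the 15-character limit.
import re

def expand_series(series, count):
    """Return the first `count` names generated by a naming series."""
    match = re.fullmatch(r"(.*)%(\d+)%", series)
    if not match:
        raise ValueError("series must look like Root%Start%")
    root, start = match.group(1), int(match.group(2))
    return [f"{root}{start + i}" for i in range(count)]

def check_name_lengths(series, count, limit=15):
    """Return the generated names that exceed the length limit."""
    return [n for n in expand_series(series, count) if len(n) > limit]

names = expand_series("ClusterNode-%100%", 3)
# names == ["ClusterNode-100", "ClusterNode-101", "ClusterNode-102"]
```

Running a check like this against the planned node count reproduces the guidance above: with a 12-character root and a starting number of 1, the 1,000th name is 16 characters long and would be rejected.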

To specify the compute node naming series
1. In the To-do List, click Configure the naming of new nodes. The Specify Compute Node Naming Series dialog box appears.
2. Type the naming series that you want to use. The preview helps you to see an example of how the naming series will be applied to the names of the compute nodes.
3. To save the compute node naming series that you have specified, click OK.

Note
You cannot specify a compute node naming series that consists only of numbers.

3.4. Create a node template
Node templates are new in Windows HPC Server 2008. They define the necessary tasks for configuring and adding compute nodes to your cluster. With a node template, you can deploy an operating system image, add specific drivers and software to compute nodes, or simply add a preconfigured node to your cluster. Because you might have more than one type of compute node, or you may be adding compute nodes to your cluster in different ways, you can create different templates that apply to different nodes or situations.
You can create two types of node templates:
• With an operating system image. This type of template includes a step to deploy an operating system on the compute nodes. Use this type of template when adding compute nodes from bare metal.
• Without an operating system image. This type of template is used to add preconfigured compute nodes to the cluster, or to update existing nodes.
The type of template that you create for the initial deployment of your HPC cluster depends on how you decided to add compute nodes to your cluster. For more information, see "1.2. Decide how to add compute nodes to your cluster" in Step 1: Prepare for Your Deployment.

Important
If you will create a node template with an operating system image, you will need the installation media for Windows Server 2008 HPC Edition or another 64-bit edition of Windows Server 2008, or you must have the installation files available on a network location that is accessible from the head node computer.

To create a node template
1. In the To-do List, click Create a node template. The Create Node Template Wizard appears.
2. On the Specify Template Name page, type a descriptive name for the template, and then click Next.

3. If you will be adding compute nodes to your cluster from bare metal:
a. On the Select Deployment Type page, click With operating system, and then click Next.
b. If the operating system image that you want to use for your deployment is already listed in the Image Name list on the Select Operating System Image page, click that image and then jump to step 3.f.
c. If you want to use a different operating system image, click Add Image. The Add Operating System Image window appears.
d. On the Add Operating System Image window, click Create a new operating system image, and then type or browse to the location of the Windows setup file for one of the 64-bit editions of Windows Server 2008.
e. Type a descriptive name for the new operating system image, and then click OK.
f. After the image is added, on the Select Operating System Image page, in the Image Name list, click the image that you want to use with the template.
g. Optionally, specify if you want to multicast the operating system image during deployment. For more information, see "1.6. Prepare for multicast (optional)" in Step 1: Prepare for Your Deployment.
h. Optionally, specify if you want to include a product key to activate the operating system on the compute nodes, and then type the product key that should be used. Click Next to continue.
i. On the Specify Local Administrator Password for Compute Node page, click Use a specific password, and then type and confirm the password that you want to use. Click Next to continue.
4. If you will be adding preconfigured compute nodes to your cluster, on the Select Deployment Type page, click Without operating system, click Next, and then jump to step 5 in this procedure.
5. On the Specify Windows Updates page, specify if you want to add a step in the template to download and install updates using Microsoft Update or the enterprise Windows Server Update Services (WSUS). Optionally, you can specify specific updates to be added to the template. Click Next to continue.
6. On the Review page, click Create.
Note
The node templates that you create with the Create Node Template Wizard include the most common deployment and configuration tasks. You can add more tasks to the node templates that you create by using the Node Template Editor. For more information, see Appendix 3: Node Template Tasks and Properties.

3.5. Add drivers for the operating system images (optional)
If you will be deploying compute nodes from bare metal and those nodes require special device drivers, you will need to add those drivers during the configuration process of your head node.
Note
It is recommended that you obtain the latest device drivers for your compute nodes from the Web site of your hardware vendors. Drivers must be in the .inf format, and must be accessible from the head node.
To add drivers for the operating system images
1. In the To-do List, click Manage drivers. The Manage Drivers dialog box appears.
2. To add a driver, click Add.
3. Type or browse to the location of the setup information file for the driver that you want to add (.inf format), and then click Open.
4. Repeat the two previous steps for all the drivers that you want to add.
5. After you are done adding drivers, click Close.
Note
The device drivers that you add will be available to all operating system images in the image store.

3.6. Add or remove users (optional)
If you will be giving access to the cluster to other members of your organization, you need to add them as HPC cluster users or HPC cluster administrators. Also, you can remove users or administrators that were added by default during installation.
Important
The Domain Users group is added as an HPC cluster user during installation. If you do not want all users in the domain to have access to your cluster, you can remove the Domain Users group from the list of HPC cluster users, and add a different domain group specifically created for users of your HPC cluster, or you can add individual domain users.
To add or remove users for the cluster
1. In the To-do List, click Add or remove users.
2. To add a user to the cluster:
a. In the Actions pane, click Add User. The Select Users or Groups dialog box appears.
b. Type the user name of the user that you want to add, and then click Check Names. For more information, on the Select Users or Groups window, click examples.
c. Repeat the previous step for all users that you want to add.
d. After you are done adding users, click OK.
3. To add an administrator to the cluster:
a. In the Actions pane, click Add Administrator. The Select Users or Groups dialog box appears.
b. Type the user name of the administrator that you want to add, and then click Check Names. For more information, on the Select Users or Groups window, click examples.
c. Repeat the previous step for all administrators that you want to add.
d. After you are done adding administrators, click OK.
4. To remove a user or administrator, select it on the Users list, and then click Remove.
Note
You cannot remove the domain Administrator account from the list of cluster administrators.

Step 4: Add Compute Nodes to the Cluster
Windows HPC Server 2008 simplifies the deployment process of compute nodes by providing automatic node imaging, automatic naming of nodes, and other capabilities to streamline deployment tasks. Also, it provides tools that you can use to monitor the progress of your deployment.
Important
Unlike its predecessor, Windows Compute Cluster Server 2003, the default in Windows HPC Server 2008 is to respond only to Pre-boot Execution Environment (PXE) requests that come from existing compute nodes. This default setting is automatically changed when you use the Add Node Wizard to add nodes from bare metal. Also, you can manually change this setting in the Options menu, under Deployment Settings.
After creating a node template, you can use the Add Node Wizard to add compute nodes to your HPC cluster. There are three ways by which you can add compute nodes to your cluster:
• Deploy compute nodes from bare metal
• Add compute nodes by importing a node XML file
• Add preconfigured compute nodes

For more information about each of these three node deployment options, see "1.2. Decide how to add compute nodes to your cluster" in Step 1: Prepare for Your Deployment.
In this section:
• 4.1. Deploy compute nodes from bare metal
• 4.2. Add compute nodes by importing a node XML file
• 4.3. Add preconfigured compute nodes
• 4.4. Monitor deployment progress
• 4.5. Cancel the deployment of a node

4.1. Deploy compute nodes from bare metal
The following procedure describes how to add compute nodes to your HPC cluster from bare metal, by using a node template that includes a step to deploy an operating system image.
Important
To complete this procedure, you must have a template that includes a step to deploy an operating system image. If you do not have one, create it by following the steps in "3.4. Create a node template" in Step 3: Configure the Head Node.
Important
Before turning on a compute node for this procedure, verify in the configuration of the BIOS of that computer that the compute node will boot from the network adapter that is connected to the private network, instead of booting from the local hard drive or another device, and that Pre-boot Execution Environment (PXE) boot is enabled for that network adapter.
To deploy compute nodes from bare metal
1. If HPC Cluster Manager is not already open on the head node, open it: click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Node Management, in the Actions pane, click Add Node. The Add Node Wizard appears.
3. On the Select Deployment Method page, click Deploy compute nodes from bare metal using an operating system image, and then click Next.
4. On the Select New Nodes page, in the Node template list, click the name of a node template that includes a step to deploy an operating system image.
5. Turn on the computers that you want to add as compute nodes to your cluster. Computers will be listed on the Add Node Wizard as they contact the head node during PXE boot. They will be named using the naming series that you specified when you configured the head node. For more information, see "3.3. Configure the naming of new nodes" in Step 3: Configure the Head Node.
6. When all computers that you have turned on are listed, click Select all. If you see a node that you do not want to deploy at this time, you can unselect it.
7. Click Next, and then click Deploy.
8. On the Completing the Add Node Wizard page: if you will be deploying more nodes, click Continue responding to all PXE requests; if you will not be deploying more nodes, click Respond only to PXE requests that come from existing compute nodes. To monitor deployment progress, select the Go to Node Management to track progress check box, and then click Finish. For more information, see 4.4. Monitor deployment progress.

4.2. Add compute nodes by importing a node XML file
The following procedure describes how to add compute nodes by importing a node XML file.
Important
To complete this procedure, you must have a valid node XML file. For more information, see Appendix 2: Creating a Node XML File.
To add compute nodes by importing a node XML file
1. If HPC Cluster Manager is not already open on the head node, open it: click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Node Management, in the Actions pane, click Add Node. The Add Node Wizard appears.
3. On the Select Deployment Method page, click Import compute nodes from a node XML file, and then click Next.
4. On the Select Node XML File page, type or browse to the location of the node XML file, and then click Import.
5. On the Completing the Add Node Wizard page, to monitor deployment progress, select the Go to Node Management to track progress check box, and then click Finish. For more information, see 4.4. Monitor deployment progress.

4.3. Add preconfigured compute nodes
A preconfigured compute node is a computer that has HPC Pack 2008 already installed and that is connected to the HPC cluster network according to the network topology that you have chosen for your cluster. After HPC Pack 2008 is installed on all the compute nodes that you want to add to your cluster, you can use the Add Node Wizard on the head node to add the preconfigured nodes to your cluster.

The following procedures describe how to add preconfigured compute nodes to your HPC cluster. The first procedure describes how to install HPC Pack 2008 on the computers that will act as compute nodes, and the second procedure describes how to add the preconfigured compute nodes to the cluster.
Important
The computers that you will add to your cluster as preconfigured compute nodes must already be running Windows Server® 2008 HPC Edition, or another 64-bit edition of the Windows Server 2008 operating system.
Important
We strongly recommend that you perform a clean installation of Windows Server 2008 before installing HPC Pack 2008. If you want to install HPC Pack 2008 on an existing installation of Windows Server 2008, remove all server roles first and then follow the procedures in this guide.
Important
To complete this procedure, you must have a node template that does not include a step to deploy an operating system image. If you do not have a node template that does not include a step to deploy an operating system image, create one by following the steps in "3.4. Create a node template" in Step 3: Configure the Head Node.
To install HPC Pack 2008 on a compute node computer
1. To start the HPC Pack 2008 installation wizard on the computer that will act as a compute node, run setup.exe from the HPC Pack 2008 installation media or from a network location.
2. On the Getting Started page, click Next.
3. On the Microsoft Software License Terms page, read or print the software license terms in the license agreement, and accept or reject the terms of that agreement. If you accept the terms, click Next.
4. On the Select Installation Type page, click Join an existing HPC cluster by creating a new compute node, and then click Next.
5. On the Join Cluster page, type the computer name of the head node on your cluster, and then click Next.
6. On the Select Installation Location page, click Next.
7. On the Install Required Components page, click Install.
8. On the Installation Complete page, click Close.
After HPC Pack 2008 is installed on all the compute nodes that you want to add to your cluster, follow the steps in the Add Node Wizard on the head node to add the preconfigured nodes to your cluster.

To add preconfigured compute nodes to your cluster
1. If HPC Cluster Manager is not already open on the head node, open it: click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Node Management, in the Actions pane, click Add Node. The Add Node Wizard appears.
3. On the Select Deployment Method page, click Add compute nodes that have already been configured, and then click Next.
4. Turn on all the preconfigured nodes that you want to add to your cluster.
5. After all the preconfigured nodes are turned on, on the Select New Nodes page, select the preconfigured compute nodes that you want to add to your cluster. To select all the preconfigured compute nodes, click Select all.
6. To add the selected compute nodes to your cluster, click Add, and then, on the Before Deploying page, click Next.
7. On the Completing the Add Node Wizard page: if you will be deploying more nodes, click Continue responding to all PXE requests; if you will not be deploying more nodes, click Respond only to PXE requests that come from existing compute nodes.
8. To monitor deployment progress, select the Go to Node Management to track progress check box, and then click Finish. For more information, see 4.4. Monitor deployment progress.

4.4. Monitor deployment progress
During the deployment process of a compute node, its state is set to Provisioning. After the deployment process is complete, the state changes to Offline. You must bring compute nodes online before they can process jobs.
You can monitor the progress of the deployment process of compute nodes in Node Management, and bring online nodes that have finished deploying. You can also see detailed information for each deployment operation, and any errors that may have occurred.
To monitor deployment progress
1. If HPC Cluster Manager is not already open on the head node, open it: click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. To view information about the deployment operations:
a. In Node Management, in the Navigation Pane, click Operations.
b. To view more information about a specific operation, click that operation. The Detail Pane will list the log entries for that operation.

3. To view the list of compute nodes that are currently being deployed:
a. To view only compute nodes that are currently being deployed, in Node Management, in the Navigation Pane, under Nodes, under By State, click Provisioning.
b. To review the provisioning log for a node, click the node, and then, in the Detail Pane, click the Provisioning Log tab.
4. If the deployment process of a compute node fails, the state of that node is set to Unknown and the health is set to Provisioning Failed. To determine the reason for the failure, review the provisioning log for that node and the list of operations that were performed:
a. In Node Management, in the Navigation Pane, under Nodes, under By Health, click Provisioning Failed.
b. To view the list of operations related to the deployment failure, double-click that node, and then click the Operations tab. The pivoted view in Node Management will list all the operations related to that node.
c. To view more information about a specific operation, click that operation. The Detail Pane will list the log entries for that operation.
5. To bring online the nodes that have finished deploying:
a. In Node Management, in the Navigation Pane, under Nodes, under By State, click Offline.
b. Select all the nodes that you want to bring online. To select all nodes that are currently offline, on the list of offline nodes, click any node and then press CTRL+A.
c. In the Actions pane, click Bring Online.

4.5. Cancel the deployment of a node
You can stop the deployment process of a compute node from HPC Cluster Manager by canceling the provisioning operations. The deployment process will stop, the node will be moved to the Unknown state, and the health for that node will be changed to Provisioning Failed.
To cancel the deployment of a node
1. In Node Management, in the Navigation Pane, under Nodes, under By State, click Provisioning.
2. In the views pane, click the node that you want to stop deploying.
3. On the Properties tab, click View operations.
4. To cancel the provisioning operations, click Cancel Operations.
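The node monitoring and state changes described above can also be scripted from HPC PowerShell (see Appendix 6: Using HPC PowerShell). The following sketch assumes the Get-HpcNode and Set-HpcNodeState cmdlets included with HPC Pack 2008; the parameter names shown are assumptions, so verify them in your environment before relying on this:

```powershell
# Sketch: list nodes that are still provisioning, then bring all
# offline (finished) nodes online. Assumes the HPC Pack 2008 cmdlets
# Get-HpcNode and Set-HpcNodeState; parameter names are assumptions.
Get-HpcNode -State Provisioning
Get-HpcNode -State Offline | Set-HpcNodeState -State Online
```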

Step 5: Run Diagnostic Tests on the Cluster
After you have configured your head node and added all compute nodes to the cluster, you should run diagnostic tests to validate cluster functionality and troubleshoot any configuration issues.
To run diagnostic tests on the cluster
1. If HPC Cluster Manager is not already open on the head node, open it: click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Configuration, in the Navigation Pane, click To-do List.
3. In the To-do List, click Validate your cluster (under Diagnostics).
4. In the Run Diagnostics dialog box, ensure that the Run all functional tests and All nodes options are selected, and then click Run.
5. To view the progress of the diagnostic tests and the test results, in Diagnostics, in the Navigation Pane, click Test Results.
6. To view detailed information about a test, double-click the test. To expand the information in a section of the test results, click the down arrow for that section.

Step 6: Run a Test Job on the Cluster
After you have finished deploying your cluster, you can run a simple test job to verify that your cluster is fully functional. The following checklist describes the steps involved in running a simple test job on your cluster.
Checklist: Run a test job on the HPC cluster
• 6.1. Create a Job Template: Create a job template by running the Generate Job Template Wizard in HPC Cluster Manager.
• 6.2. Create and Submit a Job: Create and submit a basic job in HPC Cluster Manager.
• 6.3. Create and Submit a Job Using the Command-Line Interface (Optional): Create and submit a basic job by using the HPC command-line tools.
• 6.4. Create and Submit a Job Using HPC PowerShell (Optional): Create and submit a basic job by using the cmdlets in HPC PowerShell.

6.1. Create a job template
Job templates simplify the job management of your HPC cluster by helping you to limit the kinds of jobs that can be submitted to your cluster, what resources are assigned to jobs, and which users can submit jobs. HPC Cluster Manager includes the Generate Job Template Wizard to help you create basic job templates.
To create a simple job template
1. If HPC Cluster Manager is not already open on the head node, open it: click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Configuration, in the Navigation Pane, click Job Templates.
3. In the Actions pane, click New. The Generate Job Template Wizard appears.
4. On the Enter Template Name page, type Test Template for the name of the new job template, and optionally a description. Click Next to continue.
5. On the Limit Run Time page, select the Run jobs no longer than check box, and then specify a maximum run time of one minute. This will limit all jobs that are submitted using this template to run for no longer than one minute. Click Next to continue.
6. On the Set Priorities page, click Next without changing any settings. This will run jobs that are submitted using this template with Normal priority.
7. On the Set Project Names page, click Next without changing any settings. This will allow jobs from any project to be submitted using this template.
8. On the Limit Node Groups page, click Next without changing any settings. This will allow jobs that are submitted using this template to run on any node group.
9. On the Finish page, click Finish.

6.2. Create and submit a job
This section describes how to submit a job in HPC Cluster Manager that:
• Displays a directory list of the files in the C:\Program Files folder of a compute node in your cluster.
• Uses the job template that you created in the previous section, which limits to 1 minute the maximum duration of time that a job can run.
• Runs at low priority.
To create and submit a job
1. In Job Management, in the Actions pane, click New Job.
2. In Job Details, specify the following job parameters:
a. In the Job name box, type Folder Contents.
b. In the Job template list, click Test Template (the template that you created in section "6.1. Create a job template"). When you are prompted if you want to change the job template for the job, click Yes.
c. In the Priority list, click Lowest.
3. To add a new basic task to the job, click Task List, and then specify the following task parameters:
a. To add a task, click Add. The Task Properties window appears.
b. In the Task Properties window, in the Task name box, type a name for the new task.
c. In the Command line box, type dir.
d. In the Work directory box, type c:\Program Files.
e. To add this task, click Save.
4. To limit the job so that it only runs on a specific compute node in your HPC cluster, click Resource Selection, and then specify the following resource parameters:
a. Select the Run this job only on nodes in the following list check box.
b. Select the check box for one of the nodes in your HPC cluster.
5. To submit the job, click Submit.
6. If you are prompted to enter your credentials, type your user name and password, and then click OK.
7. To view the progress and the results of the job that you submitted:
a. In Job Management, in the Navigation Pane, click All Jobs.
b. In the views pane, click the job that you submitted.
c. In the Details Pane, double-click the task that you created in step 3.
d. When the state of the job is Finished, in the Results tab, the Output box will display the directory list of c:\Program Files for the compute node that you selected in step 4.
e. If you want to copy the results to the clipboard, click Copy output to clipboard.

6.3. Create and submit a job using the command-line interface (optional)
You can create and submit a job similar to the job that you created and submitted in the previous section, using the command-line interface tools that are included with Windows HPC Server 2008.
To create and submit a job using the command-line interface
1. Open a Command Prompt window: click Start, point to All Programs, click Accessories, and then click Command Prompt.
2. To create a new job, type the following command:
job new /jobname:"Folder Contents" /priority:"Lowest" /RunTime:0:0:1 /requestednodes:"<ComputeNodeName>"
Where <ComputeNodeName> is the name of a compute node in your HPC cluster.
3. To add a task to the job, type the following command:
job add <JobID> /workdir:"C:\Program Files" dir
Where <JobID> is the identification number for the job, as displayed on the command-line interface after typing the command in step 2.
4. To submit the job, type the following command:
job submit /id:<JobID>
Where <JobID> is the identification number for the job, as displayed on the command-line interface after typing the command in step 2.
5. If you are prompted to enter your credentials, type your password, and then press ENTER.

6.4. Create and submit a job using HPC PowerShell (optional)
You can also create and submit the same job that you created and submitted in the previous section, using HPC PowerShell.
To create and submit a job using HPC PowerShell
1. On the head node, click Start, point to All Programs, and then click Microsoft HPC Pack. Right-click HPC PowerShell, and then click Run as administrator.
2. If you are prompted by Windows PowerShell if you want to run the ccppsh.format.ps1xml script, type A, and then press ENTER.
3. To create a new job, type the following cmdlet:
$j = New-HpcJob -Name "Folder Contents" -Priority Lowest -RunTime "0:0:1" -RequestedNodes "<ComputeNodeName>"
Where <ComputeNodeName> is the name of a compute node in your HPC cluster.
4. To add a task to the job, type the following cmdlet:
$j | Add-HpcTask -WorkDir "C:\Program Files" -CommandLine "dir"
5. To submit the job, type the following cmdlet:
$j | Submit-HpcJob
6. If you are prompted to enter your credentials, type your password, and then press ENTER.
Note
For more information about HPC PowerShell, see Appendix 6: Using HPC PowerShell.
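For reference, the command-line session described above can be condensed into a single transcript. This is an illustrative sketch only: the job ID shown here as 42 is hypothetical (use the ID that job new prints), and <ComputeNodeName> must be replaced with a real node name:

```
REM Illustrative sketch of the command-line session; 42 stands in for the job ID
job new /jobname:"Folder Contents" /priority:"Lowest" /RunTime:0:0:1 /requestednodes:"<ComputeNodeName>"
job add 42 /workdir:"C:\Program Files" dir
job submit /id:42
```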

Note
You can also type all three cmdlets in one line:
New-HpcJob -Name "Folder Contents" -Priority Lowest -RunTime "0:0:1" -RequestedNodes "<ComputeNodeName>" | Add-HpcTask -WorkDir "C:\Program Files" -CommandLine "dir" | Submit-HpcJob
Where <ComputeNodeName> is the name of a compute node in your HPC cluster.

Related documents
• For more information about creating and submitting jobs, see the Submitting Jobs in Windows HPC Server 2008 Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkId=121888).
• For more information about configuring job submission and scheduling policies, see the Configuring Job Submission and Scheduling Policies in Windows HPC Server 2008 Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkId=121887).

Additional Resources
• Additional online resources, including step-by-step guides, technical reference documentation, and troubleshooting guides, are available on the Windows HPC Server 2008 Technical Library (http://go.microsoft.com/fwlink/?LinkId=119594).
• HPC Cluster Manager includes comprehensive Help documentation. This documentation is available from the user interface by clicking any of the in-context help links or by pressing F1.

Appendices
In this section:
• Appendix 1: HPC Cluster Networking
• Appendix 2: Creating a Node XML File
• Appendix 3: Node Template Tasks and Properties
• Appendix 4: Job Template Properties
• Appendix 5: Scriptable Power Control Tools
• Appendix 6: Using HPC PowerShell

Appendix 1: HPC Cluster Networking
Windows HPC Server 2008 supports five cluster topologies designed to meet a wide range of user needs and performance, scalability, manageability, deployment, and access requirements. These topologies are distinguished by how the compute nodes in the cluster are connected to each other and to the enterprise network. You must choose the network topology that you will use for your cluster well in advance of setting up an HPC cluster.
This section includes the following topics:
• HPC cluster networks
• Supported HPC cluster network topologies
• HPC network services
• Windows Firewall configuration

HPC cluster networks
The following table lists and describes the networks to which an HPC cluster can be connected.

Network Name: Enterprise network
Description: An organizational network to which the head node is connected, and optionally the compute nodes. The enterprise network is often the network that most users in an organization log on to when performing their job. All intra-cluster management and deployment traffic is carried on the enterprise network unless a private network (and optionally, an application network) also connects the cluster nodes.

Network Name: Private network
Description: A dedicated network that carries intra-cluster communication between nodes. This network carries management and deployment traffic, and application traffic if no application network exists. Depending on the network topology that you choose for your cluster, certain network services, such as Dynamic Host Configuration Protocol (DHCP) and network address translation (NAT), can be provided by the head node to the compute nodes.

Network Name: Application network
Description: A dedicated network, preferably with high bandwidth and low latency. These characteristics are important so that this network can perform latency-sensitive tasks, such as carrying parallel Message Passing Interface (MPI) application communication between compute nodes.

Supported HPC cluster network topologies
There are five cluster topologies supported by Windows HPC Server 2008:
• Topology 1: Compute Nodes Isolated on a Private Network
• Topology 2: All Nodes on Enterprise and Private Networks
• Topology 3: Compute Nodes Isolated on Private and Application Networks
• Topology 4: All Nodes on Enterprise, Private, and Application Networks
• Topology 5: All Nodes on an Enterprise Network

Topology 1: Compute nodes isolated on a private network
The following image illustrates how the head node and the compute nodes are connected to the cluster networks in this topology. The following table lists and describes details about the different components in this topology:

Component: Network adapters
Description:
• The head node has two network adapters.
• Each compute node has one network adapter.
• The head node is connected to both an enterprise network and to a private network.
• The compute nodes are connected only to the private network.

Component: Traffic
Description:
• The private network carries all communication between the head node and the compute nodes, including deployment, management, and application traffic (for example, MPI communication).
• Network traffic between compute nodes and resources on the enterprise network (such as databases and file servers) passes through the head node, and depending on the amount of traffic, this might impact cluster performance.

Component: Network services
Description:
• The default configuration for this topology is NAT enabled on the private network in order to provide the compute nodes with address translation and access to services and resources on the enterprise network.
• DHCP is enabled by default on the private network to assign IP addresses to compute nodes.
• If a DHCP server is already installed on the private network, then both NAT and DHCP will be disabled by default.

Component: Security
Description:
• The default configuration on the cluster has the firewall turned ON for the enterprise network and turned OFF for the private network.

Component: Considerations when selecting this topology
Description:
• Compute nodes are not directly accessible by users on the enterprise network. This has implications when developing and debugging parallel applications for use on the cluster.
• Cluster performance is more consistent because intra-cluster communication is routed onto the private network.

Topology 2: All nodes on enterprise and private networks
The following image illustrates how the head node and the compute nodes are connected to the cluster networks in this topology:

The following table lists and describes details about the different components in this topology:

Network adapters
• The head node has two network adapters.
• Each compute node has two network adapters.
• All nodes in the cluster are connected to both the enterprise network and to a dedicated private cluster network.

Traffic
• Communication between nodes, including deployment, management, and application traffic, is carried on the private network in this topology.
• Traffic from the enterprise network can be routed directly to a compute node.

Network services
• The default configuration for this topology has DHCP enabled on the private network to provide IP addresses to the compute nodes.
• NAT is not required in this topology because the compute nodes are connected to the enterprise network, so this option is disabled by default.

Security
• The default configuration on the cluster has the firewall turned ON for the enterprise network and turned OFF for the private network.

Considerations when selecting this topology
• This topology offers more consistent cluster performance because intra-cluster communication is routed onto a private network.
• This topology provides faster access to enterprise network resources by the compute nodes.
• This topology provides easy access to compute nodes by users on the enterprise network.
• This topology is well suited for developing and debugging applications because all compute nodes are connected to the enterprise network.

Topology 3: Compute nodes isolated on private and application networks

The following image illustrates how the head node and the compute nodes are connected to the cluster networks in this topology:

The following table lists and describes details about the different components in this topology:

Network adapters
• The head node has three network adapters: one for the enterprise network, one for the private network, and another for the application network.
• Each compute node has two network adapters: one for the private network, and a high-speed adapter that is connected to the application network.

Traffic
• The private network carries deployment and management communication between the head node and the compute nodes.
• Jobs running on the cluster use the high-performance application network for cross-node communication.

Network services
• The default configuration for this topology has both DHCP and NAT enabled for the private network, to provide IP addressing and address translation for compute nodes.
• DHCP is enabled by default on the application network, but not NAT.
• If a DHCP server is already installed on the private network, then both NAT and DHCP will be disabled by default.

Security
• The default configuration on the cluster has the firewall turned ON for the enterprise network and turned OFF on the private and application networks.

Considerations when selecting this topology
• This topology offers more consistent cluster performance because intra-cluster communication is routed onto the private and application networks.
• Compute nodes are not directly accessible by users on the enterprise network in this topology.

Topology 4: All nodes on enterprise, private, and application networks

The following image illustrates how the head node and the compute nodes are connected to the cluster networks in this topology:

The following table lists and describes details about the different components in this topology:

Network adapters
• The head node has three network adapters.
• All compute nodes have three network adapters.
• The network adapters are for the enterprise network, the private network, and a high-speed adapter for the high-performance application network.

Traffic
• The private cluster network carries only deployment and management traffic.
• The application network carries latency-sensitive traffic, such as MPI communication between nodes.
• Network traffic from the enterprise network reaches the compute nodes directly.

Network services
• The default configuration for this topology has DHCP enabled for the private and application networks to provide IP addresses to the compute nodes on both networks.
• NAT is disabled for the private and application networks because the compute nodes are connected to the enterprise network.

Security
• The default configuration on the cluster has the firewall turned ON for the enterprise network and turned OFF on the private and application networks.

Considerations when selecting this topology
• This topology offers more consistent cluster performance because intra-cluster communication is routed onto a private and application network.
• This topology provides faster access to enterprise network resources by the compute nodes.
• This topology provides easy access to compute nodes by users on the enterprise network.
• This topology is well suited for developing and debugging applications because all cluster nodes are connected to the enterprise network.

Topology 5: All nodes on an enterprise network

The following image illustrates how the head node and the compute nodes are connected to the cluster networks in this topology:

The following table lists and describes details about the different components in this topology:

Network adapters
• The head node has one network adapter.
• All compute nodes have one network adapter.
• All nodes are on the enterprise network.

Traffic

• This topology. • Because all nodes are connected only to the enterprise network.Component Description application. • This topology provides faster access to enterprise network resources by the compute nodes. is well suited for developing and debugging applications because all cluster nodes are connected to the enterprise network. • This topology offers easy access to compute nodes by users on the enterprise network. like topologies 2 and 4. Network services • This topology does not require NAT or DHCP because the compute nodes are connected to the enterprise network. • The default configuration on the cluster has the firewall turned ON for the enterprise network. • This topology provides easy access to compute nodes by users on the enterprise network. Security Considerations when selecting this topology HPC network services Depending on the network topology that you have chosen for your HPC cluster. • Access of resources on the enterprise network by individual compute nodes is faster. This maximizes access to the compute nodes by users and developers on the enterprise network. the following network services can be provided by the head node to the compute nodes connected to the different cluster networks: 39 . is carried over the enterprise network. you cannot use Windows Deployment Services to deploy compute node images using the new deployment tools in Windows HPC Server 2008. and enterprise traffic.

• Network Address Translation (NAT)
• Dynamic Host Configuration Protocol (DHCP) server

This section describes these HPC network services.

Network address translation (NAT)

Network address translation (NAT) provides a method for translating Internet Protocol version 4 (IPv4) addresses of computers on one network into IPv4 addresses of computers on a different network. Enabling NAT on the head node enables compute nodes on the private or application networks to access resources on the enterprise network. You do not need to enable NAT if you have another server providing NAT or routing services on the private or application networks. Also, you do not need NAT if all nodes are connected to the enterprise network.

DHCP server

A DHCP server assigns IP addresses to network clients. Depending on the detected configuration of your HPC cluster and the network topology that you choose for your cluster, the compute nodes will receive IP addresses either from the head node running DHCP, from a dedicated DHCP server on the private network, or via DHCP services coming from a server on the enterprise network.

Windows Firewall configuration

Windows HPC Server 2008 opens firewall ports on the head node and compute nodes to enable internal services to run. By default, Windows Firewall is enabled only on the enterprise network, and disabled on the private and application networks to provide the best performance and manageability experience.

Firewall ports required by Windows HPC Server 2008

The following table lists all the ports that are opened by Windows HPC Server 2008 for communication between cluster services on the head node and the compute nodes.

Important
If you have applications that require access to the head node or to the cluster nodes on specific ports, you will have to manually open those ports in Windows Firewall.

Port Number (TCP) — Required By
• 5969 — Required by the client tools on the enterprise network to connect to the HPC Job Scheduler Service on the head node.
• 5970 — Used for communication between the HPC Management Service on the compute nodes and the HPC Job Scheduler Service on the head node.
• 9892, 9893 — Used by the HPC Management Service on the compute nodes to communicate with the HPC System Definition Model (SDM) Service on the head node.
• 8677 — Used for communication between the HPC MPI Service on the head node and the HPC MPI Service on the compute nodes.
• 1856, 5800, 5801, 5999, 6729, 9087, 9088, 9089, 9794 — Used by the remaining cluster services:
  • Communication between the client application on the enterprise network and the services provided by the Windows Communication Foundation (WCF) broker node.
  • Communication between ExecutionClient.exe on the compute nodes and the HPC Management Service on the head node. ExecutionClient.exe is used during the deployment process of a compute node. It performs tasks such as imaging the computer, installing all the necessary HPC components, and joining the computer to the domain.
  • The remote node service on the enterprise network, to enumerate nodes in a node group, or to bring a node online or take it offline.
  • Management services traffic coming from the compute nodes to the head node or WCF broker node.
  • Communication between the HPC Job Scheduler Service on the head node and the HPC Node Manager Service on the compute nodes.
  • Communication between the HPC command-line tools on the enterprise network and the HPC Job Scheduler Service on the head node.
  • Communication between HPC Cluster Manager on the enterprise network and the HPC Job Scheduler Service on the head node.
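When troubleshooting connectivity, it can be useful to confirm from a computer on the enterprise network that one of the ports listed in the table is actually reachable on the head node. The following is a minimal sketch using only the Python standard library; it is an illustration, not part of HPC Pack, and the host name "headnode" is a placeholder for your own head node's name.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unresolvable host names.
        return False

# Example: test whether the HPC Job Scheduler port from the table above
# is reachable ("headnode" is a placeholder for your head node's name).
print(port_open("headnode", 5969))
```

A False result only means that no TCP connection could be established from this client; it does not distinguish between a closed firewall port and a stopped service.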

• 443 — Used by the clients on the enterprise network to connect to the HPC Basic Profile Web Service on the head node.

Appendix 2: Creating a Node XML File

A node XML file contains a list of compute nodes that you want to add to your cluster. The compute nodes can be deployed both from bare metal or as preconfigured nodes. This list includes:
• When adding compute nodes from bare metal: a computer name for identification purposes and a hardware identification parameter for each compute node, such as the System Management BIOS (SMBIOS) GUID or the Media Access Control (MAC) address.
• When adding preconfigured nodes that are already running one of the 64-bit editions of the Windows Server 2008 operating system, and HPC Pack 2008 has been installed: computer names will already be associated with a specific SMBIOS GUID or MAC address (or both).

Examples of properties that can be associated with compute nodes are: location (including data center, rack, and chassis), a Windows product key, node templates, or tags that are used to automatically create node groups.

Benefits of using a node XML file for deployment

The following list outlines some of the benefits of using a node XML file when adding compute nodes to your cluster:
• You can pre-stage a PXE deployment of compute nodes for your HPC cluster by importing a node XML file with a list of all the computers that you will be adding to the cluster, without having to worry about powering them on in a specific order.
• You can give specific computer names (NetBIOS names) to compute nodes that are deployed from bare metal.
• Preconfigured nodes that are added to your HPC cluster using a node XML file do not need to be manually approved into the cluster. This makes the deployment process more efficient and streamlined.
• Importing a node XML file is a simple and efficient way for you to associate properties with compute nodes, such as the physical location of each compute node and the Windows product key that should be used to activate the operating system.

The node XML file schema

The node XML file is based on an XML Schema Definition (XSD) language file: NodeConfigurationFile.xsd. This XSD file is available on the head node, in the Bin folder of the installation path for HPC Pack 2008. For example, if you are using the default installation path, the XSD file is available here:

C:\Program Files\Microsoft HPC Pack\Bin\NodeConfigurationFile.xsd

The following table lists and describes the attributes and elements that are defined in the node XML file schema:

Location
• Optional element.
• Contains attributes with information about the location of the compute node.

Location:DataCenter
• Optional attribute of the Location element.
• Specifies the name of the data center where the compute node is located.

Location:Rack
• Optional attribute of the Location element.
• Specifies the name or number of the server rack where the compute node is located.

Location:Chassis
• Optional attribute of the Location element.
• Specifies the name or number of the chassis that is used for the compute node.

Template
• Optional element.
• Contains attributes with information about the node template that will be used to deploy the compute node.
• This element is required when deploying compute nodes from bare metal.

Template:Name
• Required attribute of the Template element.
• This attribute is required only when a Template element is included.

• Specifies the name of the node template that will be used to deploy the compute node.
• If you are deploying compute nodes from bare metal, this attribute must specify the name of a node template that includes a step to deploy an operating system image, or your deployment will fail.
• If the specified node template name does not exist on the head node, the deployment will fail.

Template:Provisioned
• Optional attribute of the Template element.
• Specifies if the node is a preconfigured node (Provisioned=“true” or Provisioned=“1”), or not (Provisioned=“false” or Provisioned=“0”).
• If Provisioned=“true” or Provisioned=“1” is specified, the node template will be applied to the node when the node is added to the cluster.
• If Provisioned=“false” or Provisioned=“0” is specified, or if this attribute is not specified, the node is not considered a preconfigured node; the node template will not be applied to the node when the node is added to the cluster, but the node will be imported with that node template associated with it.
• If you are deploying compute nodes from bare metal, this attribute must be Provisioned=“false”, Provisioned=“0”, or must not be specified. Also, the node template must include a step to deploy an operating system image, or the deployment will fail.

MacAddress
• Optional element.
• Specifies the MAC address of the network adapter that will be used by the compute node.
• If you are deploying compute nodes from bare metal, you must specify this element or the MachineGuid parameter. You must also specify this element if the cluster nodes in your system have SMBIOS GUIDs that are not unique (that is, two or more nodes in the node XML file have the same value for the MachineGuid parameter).
• There can be multiple instances of this element, if the compute node uses more than one adapter.
• Ensure that you specify only those MAC addresses that exist in the compute node. Specifying a MAC address that does not exist in a compute node might cause the import of that node to fail.

Important
When you specify a MAC address in the node XML file, do not include any blank spaces, hyphens (“-”), colons (“:”), or dots (“.”). Include only the twelve hexadecimal digits for the MAC address. For example, the following MAC address is correctly specified: 00301B445F02.

Tag
• Optional element.
• Specifies the name of the node group to which the compute node should be added during deployment.
• There can be multiple instances of this element, if the compute node should be added to more than one node group.

Name
• Required attribute.
• Specifies the computer name (NetBIOS name) of the compute node.
• If you are deploying compute nodes from bare metal, this attribute specifies the computer name that will be assigned to the node during deployment.
• If you are deploying preconfigured nodes, this attribute specifies the current computer name of the compute node.
• If the specified name is that of a preconfigured node that has already been added to the cluster (that is, it is not in the Unknown state), the node XML file will fail to import.

Domain
• Optional attribute.
• Specifies the Active Directory® domain to which the compute node should be added.
• If this attribute is not specified, the Active Directory domain of the head node is used.

MachineGuid
• Optional attribute.
• Specifies the SMBIOS GUID of the computer where the compute node is deployed.
• If you are deploying compute nodes from bare metal, you must specify this parameter or the MacAddress element.

ManagementIpAddress
• Optional attribute.
• Specifies information that is required for the integration of scriptable power control tools like Intelligent Platform Management Interface (IPMI) scripts.
• You only need to specify this attribute if you are using scriptable power control tools to manage power on your cluster.

ProductKey
• Optional attribute.
• Specifies the Windows product key that will be used to activate the operating system on the compute node.
• The product key is used during the activation task of a node template that includes a step to deploy an operating system image.
• The product key that you specify must match the edition of the operating system in the image that is used by the node template, or the node XML file will fail to import.

Important
You must specify a Windows product key if you are using an operating system image of a retail version of Windows Server 2008, or the evaluation version of Windows Server 2008 HPC Edition.

Creating a node XML file for deployment from bare metal

The node XML file can be created in any XML editor or text editor, but it must follow the node XML file schema. Also, a node XML file can be created from an HPC cluster that is already configured, by exporting it from HPC Cluster Manager.
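The MAC address format required by the node XML file (exactly twelve hexadecimal digits, with no spaces, hyphens, colons, or dots) is easy to violate when copying addresses from tools that print them with separators. A small helper such as the following can normalize and validate addresses before you paste them into a node XML file; this is an illustrative sketch, not part of HPC Pack.

```python
def normalize_mac(mac):
    """Strip separators and validate the 12-hex-digit form required by
    the node XML file (for example, 00301B445F02)."""
    cleaned = mac.strip()
    for sep in (" ", "-", ":", "."):
        cleaned = cleaned.replace(sep, "")
    cleaned = cleaned.upper()
    if len(cleaned) != 12 or not all(c in "0123456789ABCDEF" for c in cleaned):
        raise ValueError("not a valid MAC address: %r" % mac)
    return cleaned

print(normalize_mac("00-30-1B-44-5F-02"))  # 00301B445F02
```

For example, normalize_mac("00:30:1b:44:5f:02") also yields "00301B445F02", while an address with a missing or non-hexadecimal digit raises an error instead of silently producing an invalid node XML file.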

Note
For detailed information about creating a node XML file, see the Creating a Node XML File in Windows HPC Server 2008 Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkId=139371).

When creating a node XML file for a deployment from bare metal, you will need a hardware identification parameter for each compute node. This parameter can be the SMBIOS GUID or the MAC address of the computer.

When creating a node XML file for deployment from bare metal:
• Specify the SMBIOS GUID of a compute node in the MachineGuid attribute for that compute node. If for some reason you do not have access to the SMBIOS GUID of a node, you can use only the MAC address.
• Specify the MAC address of a compute node in the MacAddress attribute for that compute node. Ensure that you specify only those MAC addresses that exist in each compute node. Specifying a MAC address that does not exist in a compute node might cause the import of that node to fail.
• If both the SMBIOS GUID and the MAC address of a compute node are specified, the SMBIOS GUID is used.
• You must specify a node template for each compute node listed, and that node template must include a step to deploy an operating system image. If you do not specify a node template, or if you specify a node template that does not include a step to deploy an operating system image, the deployment will fail.
• Ensure that the node template names that are specified in the node XML file match the names of the node templates listed on the head node.
• You must include a Windows product key if you are using an operating system image of a retail version of Windows Server 2008, or the evaluation version of Windows Server 2008 HPC Edition.
• If your integration of scriptable power control tools requires a BMC IP address for each compute node, it can be added to the node XML file.
• Specify any location information that you want to be attached to the node.
• If you want nodes to be automatically added to specific node groups during deployment, specify the Tag element with the name of the node group for each compute node.

Sample node XML file

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<Nodes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:xsd="http://www.w3.org/2001/XMLSchema"
       xmlns="http://schemas.microsoft.com/HpcNodeConfigurationFile/2007/12">

<Node Name="ComputeNodeName1" Domain="CONTOSO" MachineGuid="{4c4c4544-0038-5710-804b-c6c04f464331}">
<Location DataCenter="Data Center 1" Rack="2" Chassis="1" />
<Template Name="Default ComputeNode Template" Provisioned="true" />
<MacAddress>00301B445F02</MacAddress>
<MacAddress>001B2104EDF5</MacAddress>
<Tag>ComputeNodes</Tag>
<Tag>Rack2</Tag>
</Node>
</Nodes>

Appendix 3: Node Template Tasks and Properties

You can use the Node Template Editor in HPC Cluster Manager to add deployment, configuration, and maintenance tasks to a node template. Some of these tasks are added to new templates when you create them using the Create Node Template Wizard. For more information about each node task and its parameters, see Available node template tasks.

To add tasks to a node template using the Node Template Editor
1. If HPC Cluster Manager is not already open on the head node, open it. Click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Configuration, in the Views pane, click Node Templates.
3. Click the node template to which you want to add tasks. In the Actions pane, click Edit. The Node Template Editor appears.
4. To add a task, click Add Task, and then click the task that you want to add from the list of available tasks.
5. Repeat the previous step for all tasks that you want to add to the node template.
6. After you are done adding tasks, click Save.
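Before importing a node XML file, it can help to sanity-check its contents programmatically. The following sketch (an illustration, not an HPC Pack tool) parses a document with the structure of the sample above using the Python standard library; note that the elements live in the HpcNodeConfigurationFile XML namespace, so lookups must be namespace-qualified.

```python
import xml.etree.ElementTree as ET

# Namespace of the node XML file, as declared on the <Nodes> root element.
NS = {"hpc": "http://schemas.microsoft.com/HpcNodeConfigurationFile/2007/12"}

sample = """<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<Nodes xmlns="http://schemas.microsoft.com/HpcNodeConfigurationFile/2007/12">
  <Node Name="ComputeNodeName1" Domain="CONTOSO"
        MachineGuid="{4c4c4544-0038-5710-804b-c6c04f464331}">
    <Location DataCenter="Data Center 1" Rack="2" Chassis="1" />
    <Template Name="Default ComputeNode Template" Provisioned="true" />
    <MacAddress>00301B445F02</MacAddress>
    <MacAddress>001B2104EDF5</MacAddress>
    <Tag>ComputeNodes</Tag>
    <Tag>Rack2</Tag>
  </Node>
</Nodes>"""

root = ET.fromstring(sample)
for node in root.findall("hpc:Node", NS):
    macs = [m.text for m in node.findall("hpc:MacAddress", NS)]
    tags = [t.text for t in node.findall("hpc:Tag", NS)]
    print(node.get("Name"), macs, tags)
```

The same loop could be extended to check, for example, that every MAC address has exactly twelve hexadecimal digits, or that every Template Name matches a node template that exists on the head node.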

Available node template tasks

There are four types of node template tasks:
• Provisioning
• Configuration
• Deployment
• Maintenance

Provisioning

Provisioning tasks are performed on the head node before the deployment process of the compute nodes takes place. The following table lists the provisioning task that you can add or modify on a node template, and the properties that are associated with it.

Create Computer Account — Creates a computer account in Active Directory for the compute node.
• Domain (optional): specifies the name of the domain on which the computer account will be created. If this property is not specified, the domain of the head node is used.
• ComputerPath (optional): specifies the path in Active Directory where the computer account will be created. If this property is not specified, the computer path of the head node is used. The default path is cn=Computers.

Important
When you specify one of these two properties, you must also specify the other one. If you only specify one property, it will be ignored and the domain and computer path of the head node will be used.

Configuration

Configuration tasks are performed on a compute node after the compute node is booted into Windows Preinstallation Environment (Windows PE) at the beginning of the deployment process. The following table lists the configuration tasks that you can add or modify on a node template and the properties that are associated with each.

Run Windows PE Command — Runs a command in Windows PE.
• Command (required): specifies the Windows PE command that you want to run.
• ErrorWhiteList (optional): specifies the return error codes that should be ignored for the command. The default return code that is expected from the command when it runs successfully is zero (0). If the command returns a success code other than zero, then you must add that return code to the list of error codes that should be ignored, or it will be interpreted as an error code and the task will fail.
• ContinueOnFailure (optional): if True is selected, the configuration task will not fail if the command fails to run successfully. If False is selected, the configuration task will fail if the command fails to run successfully.

Multicast Copy — Copies a file from the head node using the multicast protocol.
• SourceFile (required): specifies the name and path of the file to copy, relative to the Microsoft HPC Pack\Data\InstallShare folder.
• DestFile (required): specifies the absolute path to the drive of the compute node to which the file will be copied. If you have added the Partition Disk task to the node template, ensure that you specify a path that is valid for the partitions that will be created with that task.
• UnicastFallback (optional): if True is selected and multicast fails, the file will be copied using the Server Message Block (SMB) protocol. If False is selected and multicast fails, the task will fail.

Unicast Copy — Copies a file from the head node using the Server Message Block (SMB) protocol.
• Source (required): specifies the name and path of the file to copy, relative to the Microsoft HPC Pack\Data\InstallShare folder.
• Destination (required): specifies the absolute path to the drive of the compute node to which the file will be copied. If you have added the Partition Disk task to the node template, ensure that you specify a path that is valid for the partitions that will be created with that task.
• Directory (optional): specifies if a file (False) or a folder (True) is being copied.

Partition Disk — Partitions the disk on the compute node using a script for Diskpart.
• DiskPartScript (required): specifies the name and path of the script to use with Diskpart.

Mount Share — Shares a folder during the Windows PE phase of the operating system installation.
• Path (required): specifies the name and path of the folder that will be shared.
• DriveLetter (optional): specifies the drive letter where the folder will be shared. If you have added the Partition Disk task to the node template, ensure that you specify a drive letter that is valid for the partitions that will be created with that task.
• User (optional): specifies the user name to use when sharing the folder.
• UserPassword (optional): specifies the password to use when sharing the folder.

Install Windows — Installs the Windows Server operating system on the compute node.
• Image (required): specifies the image to use for the installation of the operating system.
• Installation Drive (optional): specifies the drive letter where the Windows Server operating system will be installed. If you have added the Partition Disk task to the node template, ensure that you specify a path that is valid for the partitions that will be created with that task.
• Product Key (optional): specifies the product key to use with this node template for the activation of the operating system.
• Custom Unattend File (optional): specifies the absolute path to the custom unattend file to use for installation.
• Autogenerate Local Admin Password (required): if True is selected, the password for the local Administrator account is automatically generated. After a password is automatically generated, it is secret and cannot be recovered. If False is selected, then you should specify a password using the Local Administrator password attribute.
• Local Administrator password (optional): specifies the password for the local Administrator account on the compute node.

Restart — Restarts the compute node.
• None

Deployment

Deployment tasks are performed on a compute node after the operating system has been installed. The following table lists the deployment tasks that you can add or modify on a node template and the properties that are associated with each.

Apply WIM Image — Extracts the files in a WIM file to a local disk on the compute node.
• WimPath (required): specifies the path on the compute node where the WIM file that will be extracted is stored.
• DestinationPath (required): specifies the path on the compute node where the files in the Windows Imaging Format (WIM) file will be extracted.

Unicast Copy — Copies a file from the head node using the Server Message Block (SMB) protocol.
• Source (required): specifies the name and path of the file to copy, relative to the Microsoft HPC Pack\Data\InstallShare folder.
• Destination (required): specifies the absolute path on the drive of the compute node where the file will be copied.
• Directory (optional): specifies if a file (False) or a folder (True) is being copied.

Run OS command — Runs a command as the local Administrator.
• Command (required): specifies the command that you want to run as Administrator.
• ErrorWhiteList (optional): specifies the return error codes that should be ignored for the command. The default return code that is expected from the command when it runs successfully is zero (0). If the command returns a success code other than zero, then you must add that return code to the list of error codes that should be ignored, or it will be interpreted as an error code and the task will fail.
• ContinueOnFailure (optional): if True is selected, deployment will not fail if the command fails to run successfully. If False is selected, deployment will fail if the command fails to run successfully.

Install HPC Pack — Installs HPC Pack on the compute node.
• Setup Source Directory (required): specifies the location of the installation files for HPC Pack.

Mount Share — Shares a folder on the compute node.
• DriveLetter (optional): specifies the drive letter where the folder will be shared.
• User (optional): specifies the user name to use when sharing the folder.

• UserPassword (optional): specifies the password to use when sharing the folder.
• Path (required): specifies the name and path of the folder that will be shared.

Maintenance

Maintenance tasks are performed on a compute node when you select a node in Node Management and then click Maintain. The following table lists the maintenance tasks that you can add or modify on a node template and the properties that are associated with each.

Post Install Command — Runs a command on the compute node after HPC Pack has been installed.
• Command (required): specifies the command to run. This command runs using the installation credentials that were provided during the configuration process of the head node.
• WorkingDirectory (optional): specifies the folder where the command runs.
• Timeout (optional): specifies the number of seconds before the command times out. If this property is not specified, the default timeout value is 60 seconds.
• ContinueOnFailure (optional): if True is selected, the maintenance task will not fail if the command fails to run successfully. If False is selected, the maintenance task will fail if the command fails to run successfully.

Join Domain — Joins the compute node to an Active Directory domain.
• Domain (optional): specifies the name of the domain to which the compute node will be joined. If this property is not specified, the domain of the head node is used.

Apply Updates — Applies updates to the compute node from Microsoft Update or Windows Server Update Services (WSUS).
• Patches (optional): specifies the list of updates that will be applied to the compute node.
• Categories (required): specifies the type of updates that will be applied to the compute node.

Activate Operating System — Activates the operating system on the compute node.
• None

Restart — Restarts the compute node.
• None

Log Off — Logs off the compute node.
• None

Appendix 4: Job Template Properties

You can use the Job Template Editor in HPC Cluster Manager to add properties to a job template. Some of these properties are added to new templates when you create them using the Generate Job Template Wizard. For more information about each job template property, see Available job template properties.

To add properties to a job template using the Job Template Editor
1. If HPC Cluster Manager is not already open on the head node, open it. Click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC Cluster Manager.
2. In Configuration, in the Views pane, click Job Templates.
3. Click the job template to which you want to add properties. In the Actions pane, click Edit. The Job Template Editor appears.
4. To add a property, click Add, and then click the property that you want to add from the list of available properties.
5. Repeat the previous step for all properties that you want to add to the job template.
6. After you are done adding properties, click Save.

Available job template properties

The following table lists all the job properties that you can set in a job template. The properties that you set in a job template constrain the properties that a cluster user can choose when submitting a job to the cluster using that template.

Job Property: Auto Calculate Maximum
Description: If True is selected as the only valid value, the cluster user cannot specify the maximum number of resources (cores, sockets, or nodes) assigned to the job, and resources will be automatically calculated based on the tasks in the job. If False is selected as the only valid value, the cluster user must specify the maximum number of resources assigned to the job. If True and False are selected as valid values, the cluster user can choose to specify the maximum number of resources, or to have them automatically calculated.

Job Property: Auto Calculate Minimum
Description: If True is selected as the only valid value, the cluster user cannot specify the minimum number of resources (cores, sockets, or nodes) assigned to the job, and resources will be automatically calculated based on the tasks in the job. If False is selected as the only valid value, the cluster user must specify the minimum number of resources assigned to the job. If True and False are selected as valid values, the cluster user can choose to specify the minimum number of resources, or to have them automatically calculated.

Job Property: Exclusive
Description: If True is selected as the only valid value, no other jobs can run on a compute node at the same time as the job being submitted by the cluster user. If False is selected as the only valid value, the cluster user cannot select this property when submitting the job. If True and False are selected as valid values, the cluster user can select or unselect this property when submitting the job.

Job Property: Fail on Task Failure
Description: If True is selected as the only valid value, the failure of any task in the job that is submitted by the cluster user will cause the entire job to fail immediately. If False is selected as the only valid value, the cluster user cannot select this property when submitting the job. If True and False are selected as valid values, the cluster user can select or unselect this property when submitting the job.

Job Property: Job Name
Description: Specifies a list of names that the cluster user can select for the job. If you set this property, the cluster user cannot specify a job name that is not on the list.

Job Property: Licenses
Description: Specifies a list of licenses that the cluster user can select for the job. If you set this property, the cluster user cannot specify a license that is not on the list. Licenses in this list can be validated by a job activation filter that is defined by the cluster administrator.

Job Property: Maximum Cores
Description: Specifies a range of values for the maximum number of cores that the cluster user can assign to the job. This property has no effect if any of these conditions is true:
• The Auto Calculate Maximum property has been added to the template with True selected as the only valid value
• The Unit Type property has been added to the template and the list of the type of resources that can be assigned to the job does not include Core
• The user selects a different type of resource to assign to the job (that is, sockets or nodes)

Job Property: Maximum Nodes
Description: Specifies a range of values for the maximum number of cluster nodes that the cluster user can assign to the job. This property has no effect if any of these conditions is true:
• The Auto Calculate Maximum property has been added to the template with True selected as the only valid value
• The Unit Type property has been added to the template and the list of the type of resources that can be assigned to the job does not include Node
• The user selects a different type of resource to assign to the job (that is, sockets or cores)

Job Property: Maximum Sockets
Description: Specifies a range of values for the maximum number of sockets that the cluster user can assign to the job. This property has no effect if any of these conditions is true:
• The Auto Calculate Maximum property has been added to the template with True selected as the only valid value
• The Unit Type property has been added to the template and the list of the type of resources that can be assigned to the job does not include Socket
• The user selects a different type of resource to assign to the job (that is, nodes or cores)

Job Property: Minimum Cores
Description: Specifies a range of values for the minimum number of cores that the cluster user can assign to the job. This property has no effect if any of these conditions is true:
• The Auto Calculate Minimum property has been added to the template with True selected as the only valid value
• The Unit Type property has been added to the template and the list of the type of resources that can be assigned to the job does not include Core
• The user selects a different type of resource to assign to the job (that is, sockets or nodes)

Job Property: Minimum Nodes
Description: Specifies a range of values for the minimum number of cluster nodes that the cluster user can assign to the job. This property has no effect if any of these conditions is true:
• The Auto Calculate Minimum property has been added to the template with True selected as the only valid value
• The Unit Type property has been added to the template and the list of the type of resources that can be assigned to the job does not include Node
• The user selects a different type of resource to assign to the job (that is, sockets or cores)

Job Property: Minimum Sockets
Description: Specifies a range of values for the minimum number of sockets that the cluster user can assign to the job. This property has no effect if any of these conditions is true:
• The Auto Calculate Minimum property has been added to the template with True selected as the only valid value
• The Unit Type property has been added to the template and the list of the type of resources that can be assigned to the job does not include Socket
• The user selects a different type of resource to assign to the job (that is, nodes or cores)

Job Property: Node Groups
Description: Specifies a list of node groups that the cluster user is required to select for the job. If you set this property, the cluster user can still specify a node group that is not on the list, but cannot remove any of the node groups that are listed as required.

Job Property: Node Ordering
Description: Specifies the ordering to use when assigning nodes to run the job. This property gives preference to nodes with specific attributes. The node ordering options are:
• Memory Size (Ascending). The job will be assigned first to nodes that have the smallest amount of memory.
• Memory Size (Descending). The job will be assigned first to nodes that have the largest amount of memory.
• Processor Number (Ascending). The job will be assigned first to nodes that have the smallest number of cores.
• Processor Number (Descending). The job will be assigned first to nodes that have the largest number of cores.

Job Property: Preemptable
Description: If True is selected as the default value (Default Value parameter for this property), the job being submitted by the cluster user can be preempted by another job that has a higher priority, if there are not enough resources to run the higher priority job. If False is selected as the default value, the job being submitted by the cluster user cannot be preempted by another job.

Note
This property has no effect if the preemption policy for the HPC Job Scheduler Service is set to No pre-emption. To review or configure the preemption policy for the HPC Job Scheduler Service, in HPC Cluster Manager, click Job Scheduler Configuration from the Options menu.

Note
If True and False are selected as valid values for this property (Valid Value parameter), a cluster user that is using the job template can submit a job that cannot be preempted by using the HPC API, independently of the value that was selected as the default value. It is not possible to specify that a job cannot be

preempted by submitting it using HPC Cluster Manager, HPC Job Manager, HPC PowerShell, or the HPC command-line tools. It is only possible to do this by using the HPC API. For more information about the HPC API, see the Microsoft HPC Pack (http://go.microsoft.com/fwlink/?LinkID=123849).

Note
If you want to allow only certain cluster users to submit jobs that cannot be preempted, create a job template that includes this property with False selected as the default and the only valid value. Then, select which users can submit jobs with the job template by setting permissions for the template. Permissions for a template can be set in HPC Cluster Manager, in Configuration, under Job Templates (Set Permissions action).

Job Property: Priority
Description: Specifies a list of priority values that the user can select for the job. If you set this property, the cluster user cannot specify a priority value that is not on the list.

Job Property: Project
Description: Specifies a list of project names that the cluster user can select for the job. If you set this property, the cluster user cannot specify a project name that is not on the list.

Job Property: Requested Nodes
Description: Specifies a list of nodes that the cluster user can select to run the job. If you set this property, the cluster user cannot select a node that is not on the list.

Job Property: Run Time
Description: Specifies a range of values for the amount of time that the cluster user can specify the job is allowed to run. If a task in the job is still running after the specified run time is reached, the task is stopped and the job is automatically canceled by the HPC Job Scheduler Service.

Job Property: Run Until Canceled
Description: If True is selected as the only valid value, the job runs until it is canceled or until its run time expires. If False is selected as the only valid value, the cluster user cannot select this property when submitting the job. If True and False are selected as valid values, the cluster user can select or unselect this property when submitting the job.

Job Property: Service Name
Description: Specifies a list of service names that the cluster user can select for a service-oriented architecture (SOA) job. If you set this property, the cluster user cannot specify a service name that is not on the list.

Job Property: Unit Type
Description: Specifies a list of the type of resources (cores, sockets, or nodes) that can be assigned to the job. If you set this property, the cluster user cannot specify a resource type that is not on the list.

Appendix 5: Scriptable Power Control Tools

The cluster administration console (HPC Cluster Manager) includes actions to start, shut down, and restart compute nodes remotely: Start, Reboot, and Shut Down in the Actions pane in Node Management. These actions are linked to the CcpPower.cmd script, which performs these power control operations using operating system commands, with the exception of the Start action, which is not enabled. You can replace the default operating system commands in CcpPower.cmd with custom power control scripts, like Intelligent Platform Management Interface (IPMI) scripts.

CcpPower.cmd is available in the Bin folder of the installation path for HPC Pack 2008. For example, if you are using the default installation path, the file is available here:

C:\Program Files\Microsoft HPC Pack\Bin\CcpPower.cmd

The default CcpPower.cmd file has the following code:

@setlocal
@echo off

if L%1 == Lon goto on
if L%1 == Loff goto off
if L%1 == Lcycle goto cycle
echo "usage:CcpPower.cmd [on|off|cycle] nodename [ipaddress]"
goto done

:on
exit /b 1
goto done

:off
shutdown /s /t 0 /f /m \\%2
goto done

:cycle
shutdown /r /t 0 /f /m \\%2
goto done

:done
exit /b %ERRORLEVEL%
endlocal

To enable scriptable power control tools for the Shut Down and Reboot actions in HPC Cluster Manager, replace the entries of the shutdown command in CcpPower.cmd with the name and path of your tool or tools for shutting down and restarting the node. To enable tools for the Start action, replace the exit entry in the :on section with the name and path of your tool for this action.

Also, you must associate a management IP address with each compute node in the cluster (for example, the IP address for the baseboard management controller (BMC) of the compute node). The management IP address is the third string (%3) that is passed to the CcpPower.cmd script by HPC Cluster Manager, and should be provided to your power control tools when you add them in CcpPower.cmd.

A management IP address can be associated with each compute node in the cluster in the following ways:
• When compute nodes are deployed using a node XML file, by specifying the ManagementIpAddress attribute for each node. For more information, see Appendix 2: Creating a Node XML File.
• By using the Set-HpcNode cmdlet in HPC PowerShell, with the ManagementIpAddress parameter. For more information about this cmdlet, in HPC PowerShell, type Get-Help Set-HpcNode.

Appendix 6: Using HPC PowerShell

HPC PowerShell is built on Microsoft Windows PowerShell™ technology, and provides a powerful command-line interface and a scripting platform to enable the automation of administrative tasks. For more information, see the Windows HPC Server 2008 PowerShell Reference (http://go.microsoft.com/fwlink/?LinkID=120725).

To start HPC PowerShell on the head node
1. Click Start, point to All Programs, and then click Microsoft HPC Pack.
2. Right-click HPC PowerShell, and then click Run as administrator.
3. If you are prompted by Windows PowerShell to choose if you want to run the ccppsh.format.ps1xml script, type A, and then press ENTER.

HPC PowerShell is installed by default on the head node, and can also be installed on a client computer as part of the utilities available with Windows HPC Server 2008.
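With HPC PowerShell open, the cluster management cmdlets become available. For example, the management IP address association described in Appendix 5 can be made with the Set-HpcNode cmdlet; the node name and BMC address shown here are placeholders for illustration:

# Associate a management IP address (for example, a BMC address) with a node
Set-HpcNode -Name "COMPUTENODE01" -ManagementIpAddress "10.0.0.101"

For the full parameter list for this cmdlet, type Get-Help Set-HpcNode.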

To start HPC PowerShell on a client computer
1. Click Start, point to All Programs, click Microsoft HPC Pack, and then click HPC PowerShell.
2. If you are prompted by Windows PowerShell to choose if you want to run the ccppsh.format.ps1xml script, type A, and then press ENTER.

You can also add the HPC PowerShell snap-in from Windows PowerShell.

To add HPC PowerShell from Windows PowerShell
1. Click Start, point to All Programs, click Windows PowerShell 1.0, and then click Windows PowerShell.
2. In Windows PowerShell, type the following cmdlet:
Add-PsSnapin Microsoft.HPC

Add the HPC PowerShell snap-in to your Windows PowerShell profile

If you have a Windows PowerShell profile, you can add the HPC PowerShell snap-in to it so that it is available in every PowerShell session under your user name. For more information about Windows PowerShell profiles, see Windows PowerShell Profiles (http://go.microsoft.com/fwlink/?LinkID=119587).

To add the HPC PowerShell snap-in to your Windows PowerShell profile
1. Open Windows PowerShell: click Start, point to All Programs, click Windows PowerShell 1.0, and then click Windows PowerShell.
2. To edit your profile in Notepad, type:
notepad $profile
3. Type the following cmdlet as a new line in the profile:
Add-PsSnapin Microsoft.HPC
4. To save the profile, in the File menu, click Save.
5. To close Notepad, in the File menu, click Exit.

View Help in HPC PowerShell

In-context help is available for HPC PowerShell cmdlets:
• To view a list of the cmdlets that are available in HPC PowerShell, type the following cmdlet:
Get-Command -PSSnapin Microsoft.HPC
• To view basic help information for a specific cmdlet, type:
Get-Help <cmdlet>
Where <cmdlet> is an HPC PowerShell cmdlet.
• To view detailed information for a specific cmdlet, type:
Get-Help <cmdlet> -Detailed
Where <cmdlet> is an HPC PowerShell cmdlet.
• To view the help information on the screen one page at a time, type | More at the end. For example, if you type:
Get-Help New-HpcJob -Detailed | More
Only the first page of the detailed information for the New-HpcJob cmdlet will be initially displayed. As you press SPACE or ENTER, more information will be displayed. To stop viewing the help information, press Q.