
Technical guide

Windows HPC Server 2016 for LS-DYNA – How to set up
Reference system setup – v1.0
2018-02-17, © 2018 DYNAmore Nordic AB



Contents
1 Introduction - Running LS-DYNA on Windows HPC cluster
2 Assumptions
3 Nomenclature
4 Net resources
5 Reference Windows HPC cluster
5.1 HPC Cluster software components
5.2 Hardware
5.3 File server and File Share setup
6 mpp/LS-DYNA and MS MPI – Message Passing Interface
7 Using the HPC-Cluster from a CAE-user perspective
8 License server for LS-DYNA
9 Verify that the system works
10 Alternative hardware and MPI software
11 Copyright




1 Introduction - Running LS-DYNA on Windows HPC cluster


The purpose of this guide is to aid in choosing hardware for, setting up, and using a Windows HPC cluster for LS-DYNA, for explicit and implicit analysis, for small to medium-sized workgroups of CAE-users, i.e. 2-20 users, and small to medium-sized clusters, i.e. 40 to about 500 cores. To this end, a reference system within this scope is described.

This guide assumes that LS-DYNA is used in a pure Microsoft Windows environment and that the reader has good knowledge of Windows, Windows Server 2016, networking, and user administration using Active Directory.

The described Windows HPC reference system is of course also useful for other CAE software, but that is not covered here. Please note that there are alternative Windows HPC setups that may be more suitable depending on the situation.

2 Assumptions
An all-Windows environment is assumed:

• Workstations for the CAE-users running Windows Professional or Enterprise (version 7, 8, or 10). The reference system assumes Windows 10 Professional.

• Windows user management using Active Directory.

• HPC Server for LS-DYNA running Microsoft Server 2016 with Microsoft HPC Pack 2016.

Analysis types

• Implicit or explicit analysis with smp and mpp/LS-DYNA.

• No analyses with extreme IO requirements, such as out-of-core implicit analysis or large metal forming analysis with adaptive mesh generation. If such analyses are needed, some modification of the software configuration may be required to reach optimal performance.

3 Nomenclature
As far as possible, the nomenclature from the Microsoft HPC Pack 2016 documentation is used.

• HPC – High Performance Computing

• MPI – Message Passing Interface, a standardized message passing interface for parallel computing designed to function on a wide variety of networking hardware, software, and operating systems.

• mpp (as in mpp/LS-DYNA) – message passing parallel; mpp software uses message passing to solve a problem in parallel across multiple cores, CPUs, and/or Compute nodes.




4 Net resources
• Microsoft HPC Pack 2016 documentation at technet.microsoft.com

• Microsoft HPC Pack 2016 Update 1, available at technet.microsoft.com

5 Reference Windows HPC cluster


The reference system is shown in Figure 1 below.

[Figure 1: Reference system – the Windows HPC Cluster (Head node with Fileserver, InfiniBand switch, Compute node(s)) together with the CAE Workstations and the company Active Directory server, connected by Gigabit Ethernet.]

5.1 HPC Cluster software components


Components, software and function:

• Clients (from which jobs are submitted to the HPC Cluster): Workstations with Windows 10
Pro, LSTC WinSuite, Microsoft HPC-Pack 2016 Update 1 (Client utilities installation, which
installs the Job Manager and tools needed by LSTC WinSuite).

o Function: On the Client/Workstation the simulation model (“input file”) is created, stored on the File server in a suitable folder, and submitted as a simulation job to the Head node. Results are viewed on the Client/Workstation.

• HPC Head node with Fileserver – Microsoft Server 2016 Standard, LS-DYNA License Server,
Microsoft HPC-Pack 2016 Update 1 (Head node Installation).

o Function: The HPC Head node receives the simulation jobs from the Clients, puts them in a queue, and then starts them as soon as sufficient resources are available on the Compute nodes. The results from the simulation jobs are stored on the File server.




• Compute node(s) – Microsoft Server 2016 Standard, Microsoft HPC-Pack 2016 Update 1 (Compute node installation).

o Function: The Compute nodes read the simulation job data from the File server,
run the simulation jobs, and store the result files on the File server.

All above servers/Workstations and their users are registered in the company Active Directory.

The users that can access and start jobs on the HPC Cluster are referred to as the CAE-users. Usually a group, e.g. “HPCgroup”, is created in the Active Directory so that all users belonging to this group have appropriate access to the HPC Head node, File server shares, and Compute node(s). Jobs are submitted to the HPC Head node, so this is the only network server name the CAE-users need to know, e.g. “HPCSrv”.
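
As an illustration, such a group could be created and populated from a machine with the Active Directory PowerShell module roughly as follows. This is a minimal sketch: the group name “HPCgroup” matches the example above, while the user name is a placeholder.

    # Minimal sketch: create the CAE-user group and add a user (example names)
    Import-Module ActiveDirectory
    New-ADGroup -Name "HPCgroup" -GroupScope Global -GroupCategory Security
    Add-ADGroupMember -Identity "HPCgroup" -Members "john"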

Notes:

• Microsoft HPC Pack 2016 Update 1 is a single installation package that contains options to install the Client utilities, Head node, Compute node(s), etc.

• The reference HPC Cluster system has a file server dedicated to the HPC Cluster; this is often a good choice as the HPC Cluster can generate a lot of IO and data.

• It is assumed that all servers and Compute node(s) in the HPC Cluster are attached to the company network and are reachable from the Workstations. Other options are possible, e.g. with a private network for the HPC Cluster, but these are not explored here.

• LSTC WinSuite is a complete installation of LSTC products for Windows 7, 8, and 10 computers. It contains LS-DYNA, LS-PrePost, LS-TaSC, LS-OPT, LS-Run, manuals, training material, etc. LS-Run is a “command center” used e.g. to start and queue LS-DYNA simulations on the local workstation or on remote Windows and Linux HPC Clusters. LSTC WinSuite can be installed remotely on Workstations.

5.2 Hardware
The hardware selection was made in Q1 2018.

Workstations

• Windows 10, 32 GB RAM, Professional level graphics card for CAD (OpenGL), 1 TB Hard
drive, single 4-core Xeon CPU, Gigabit Ethernet card, Full HD display

HPC Head node with File server

• 32 GB RAM, single 8-core Xeon CPU, 6x4 TB SAS hard drives, RAID 10 controller card for SAS, Gigabit Ethernet card; the hard drives are configured as RAID 10 for performance.

Compute node(s)




• 192 GB RAM, 2x1 TB Hard drives (RAID 1), dual Xeon SP 6148 CPUs (20 cores/CPU), Mellanox ConnectX-3 InfiniBand cards, Gigabit Ethernet card

InfiniBand Switch used for MPI

• Mellanox SX6005 12-port InfiniBand switch, QSFP+ connector cables for connection of Switch and Compute nodes

Notes

• For implicit analysis, more memory per node may be needed on the Compute node(s).

• Instead of Hard drives on the Compute nodes, using SSDs may provide significantly better performance for certain types of analyses, but not for the types assumed here; see Section 2.

• In the system, InfiniBand is only used for MPI communication between the LS-DYNA processes. All other network traffic (SMB, TCP, UDP, etc.) is carried by the Gigabit Ethernet network.

5.3 File server and File Share setup


The following file shares are available on the Head node/File server “\\HPCSrv” (a PowerShell sketch of creating them follows the list):

• \\HPCSrv\projects with subfolders for each CAE-user or project to store input files and simulation results. This share is also used by the Compute nodes during the simulations. This share should be accessible by all CAE-users on their Workstations as well as on the Compute nodes; otherwise they will not be able to use the HPC Cluster.

• \\HPCSrv\software contains subfolders

o lsdyna: the LS-DYNA executables, both smp- and mpp-versions in double and single precision. Microsoft HPC Pack 2016 uses MS-MPI, so the mpp/LS-DYNA binaries installed should be those whose label includes the phrase “msmpi”, e.g. ls-dyna_mpp_s_r920_winx64_ifort131_msmpi.exe.

o installation: Windows HPC Pack 2016 Update 1 installation file and LSTC WinSuite
installation file. This is for convenience when adding new Clients (Workstations).
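
As an illustration, the shares could be created on the Head node/File server with PowerShell roughly as follows. This is a minimal sketch: the local folder paths under D:\HPC and the Active Directory group name “COMPANY\HPCgroup” are assumptions and must be adapted to the actual environment.

    # Create local folders and share them to the CAE-users (example paths and group name)
    New-Item -ItemType Directory -Path "D:\HPC\projects", "D:\HPC\software" | Out-Null
    New-SmbShare -Name "projects" -Path "D:\HPC\projects" -FullAccess "COMPANY\HPCgroup"
    New-SmbShare -Name "software" -Path "D:\HPC\software" -ReadAccess "COMPANY\HPCgroup"
    # Note: the NTFS permissions on the folders must also grant HPCgroup the corresponding access.

The Workstations and Compute nodes then access the shares via the UNC paths \\HPCSrv\projects and \\HPCSrv\software.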

6 mpp/LS-DYNA and MS MPI – Message Passing Interface


The parallel software mpp/LS-DYNA works by splitting the simulation model into N equally sized pieces, each of which is then handled by a separate mpp/LS-DYNA process. For efficiency, only one mpp/LS-DYNA process should be run on each physical core (it is therefore generally recommended to turn off hyperthreading, or alternatively to use methods such as pinning, described in the MPI documentation). The different mpp/LS-DYNA processes need to communicate to solve the total problem, and to that end MPI is used. As fast communication is crucial for performance, the MPI communication uses a fast network, such as InfiniBand, between Compute nodes, and methods such as user-space memory copy between processes on the same Compute node.

The reference HPC Cluster uses the MPI implementation included with Microsoft HPC Pack 2016: MS-MPI (Microsoft MPI). Experience shows that it is safe to upgrade the MS-MPI version on the HPC Cluster as newer versions, with e.g. bug fixes, are released by Microsoft. For more information on MPI and MS-MPI, see technet.microsoft.com (search for MS MPI).
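
For illustration, starting mpp/LS-DYNA under MS-MPI directly from a command prompt on a single Compute node could look roughly like the sketch below. The core count, memory setting, and paths are example values only; on the cluster, LS-Run and the HPC Pack scheduler normally issue the corresponding mpiexec call for you.

    # Minimal sketch: run mpp/LS-DYNA on 40 cores with MS-MPI (example values)
    mpiexec -n 40 \\HPCSrv\software\lsdyna\ls-dyna_mpp_s_r920_winx64_ifort131_msmpi.exe i=main.k memory=200m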

7 Using the HPC-Cluster from a CAE-user perspective


The CAE-users use the system in the following manner:

1. Create the LS-DYNA input file, e.g. main.k, using LS-PrePost or another pre-processor.

2. Save the input file in the user's project folder on the Head node/File server, e.g. “\\HPCSrv\projects\John\Project34\Crashsim12\main.k”.

3. Open LS-Run (part of LSTC WinSuite), select the appropriate LS-DYNA binary, solution options, and the input file “\\HPCSrv\projects\John\Project34\Crashsim12\main.k”. Submit the simulation to the “HPCSrv” HPC Cluster queue; see Figure 2 for an illustration and the command-line sketch after this list. As soon as resources, i.e. Compute nodes, are available the simulation will be started.

4. The progress of the job can be monitored using LS-Run. When the job has finished, the results are available for postprocessing in “\\HPCSrv\projects\John\Project34\Crashsim12” (the default location).
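
For reference, a submission through LS-Run corresponds roughly to handing the HPC Pack scheduler on “HPCSrv” an mpiexec task. A hedged sketch using the generic HPC Pack job command line (not the LS-Run mechanism itself; the option names and values shown are examples and should be checked against the HPC Pack documentation) could look like:

    # Minimal sketch: submit an mpp/LS-DYNA run on 40 cores to the HPCSrv scheduler (example values)
    job submit /scheduler:HPCSrv /numcores:40 mpiexec \\HPCSrv\software\lsdyna\ls-dyna_mpp_s_r920_winx64_ifort131_msmpi.exe i=\\HPCSrv\projects\John\Project34\Crashsim12\main.k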

Notes:

• Multiple jobs can be started; they are then run as soon as Compute node resources are available.

• More information on how to start an LS-DYNA analysis using LS-Run is available under the
Help-menu in LS-Run.

• Jobs submitted by LS-Run to a Windows HPC Cluster are submitted to the standard Windows HPC server queue and thus cooperate/co-exist with other jobs submitted to the HPC Cluster, e.g. from other CAE-software.




[Figure 2: LS-Run]

8 License server for LS-DYNA


The network license is installed by running LS-Run on the license server, e.g. the Head node. All Compute nodes must be able to access the license server and vice versa. The license is installed from the License option in LS-Run by importing the license file. To be able to use LS-DYNA on the cluster, it is necessary to specify “License type = network” and “Server hostname = LicenseServerHostname” in the License menu of LS-Run on the computer submitting the job.
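
LS-DYNA also reads the standard LSTC network-license settings from environment variables, which can be useful when a run is started outside LS-Run. A minimal sketch, assuming the Head node “HPCSrv” also acts as the license server:

    # Point LS-DYNA to the network license server (example hostname)
    $env:LSTC_LICENSE = "network"
    $env:LSTC_LICENSE_SERVER = "HPCSrv"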

9 Verify that the system works


To verify that the system works, run a benchmark model on one or more cores. Suitable benchmarks, e.g. for explicit analysis, are included with LSTC WinSuite or can be found at www.dynaexamples.com. Explicit simulation models are recommended as a first benchmark. Check that the runtime is reduced as the number of cores is increased, including when the job spans multiple Compute nodes. For large explicit analyses (more than about 10,000-50,000 elements per core used), a near-linear reduction of runtime is expected when scaling from 1 to more than 100 cores. For larger models, LS-DYNA can effectively use more than 10,000 cores in a single analysis.

To verify node connectivity and the performance of the MPI/InfiniBand interconnect, one can also use the mpipingpong test included in HPC Pack 2016 (see the HPC Pack documentation at technet.microsoft.com).
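
A rough sketch of starting the ping-pong test through the HPC Pack scheduler is shown below; the option names, and the assumption that mpipingpong is on the Compute nodes' PATH, should be verified against the HPC Pack documentation:

    # Rough sketch: run the MPI ping-pong test on two Compute nodes via the scheduler (example values)
    job submit /scheduler:HPCSrv /numnodes:2 mpiexec -n 2 mpipingpong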

10 Alternative hardware and MPI software


There are alternatives to the hardware and software components selected for the HPC Cluster described here. An incomplete list:

• MPI: Intel MPI, IBM Platform MPI

• CPU: AMD EPYC

• Network for MPI: Intel Omni-Path




11 Copyright
All trademarks, service marks, trade names, product names and logos appearing in this document are the property of their respective owners, including in some instances Livermore Software Technology Corporation (LSTC).

