Contents
1 Introduction - Running LS-DYNA on Windows HPC cluster
2 Assumptions
3 Nomenclature
4 Net resources
5 Reference Windows HPC cluster
5.1 HPC Cluster software components
5.2 Hardware
5.3 File server and File Share setup
6 mpp/LS-DYNA and MS MPI – Message Passing Interface
7 Using the HPC-Cluster from a CAE-user perspective
8 License server for LS-DYNA
9 Verify that the system works
10 Alternative hardware and MPI software
11 Copyright
1 Introduction - Running LS-DYNA on Windows HPC cluster
It is assumed in this guide that LS-DYNA is used in a pure Microsoft Windows environment, and that the reader has good knowledge of Windows, Windows Server 2016, networking, and user administration using Active Directory.
The described Windows HPC reference system is of course useful for other CAE-software as well, but that is not covered here. Please note that there are alternative setups using Windows HPC that may be more suitable depending on the situation.
2 Assumptions
An all-Windows environment is assumed:
• HPC Server for LS-DYNA running Microsoft Windows Server 2016 with Microsoft HPC Pack 2016.
Analysis types
• No analyses with extreme IO requirements, such as out-of-core implicit analysis or large metal forming analysis with adaptive mesh generation. If this is the case, some modification of the software configuration may be needed to reach optimal performance.
3 Nomenclature
As far as possible the nomenclature from the documentation of Microsoft Windows HPC-Pack 2016
is used.
• MPI – Message Passing Interface, a standardized message-passing interface for parallel computing, designed to function on a wide variety of networking hardware, software, and operating systems.
• mpp (as in mpp/LS-DYNA) – message passing parallel; an mpp program uses message passing to solve a problem in parallel, spanning multiple cores, CPUs, and/or Compute nodes.
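To make the mpp idea above concrete, here is a toy illustration (not real MPI, and not how mpp/LS-DYNA is implemented): the work is split across "ranks", each rank computes a partial result, and the partial results are combined by passing messages back to a root. Python threads and a queue stand in for processes and the MPI transport.

```python
# Toy sketch of the message-passing-parallel idea: split work across
# "ranks", each sends its partial result back as a "message".
# Illustration only; real mpp codes use MPI across processes/nodes.
from queue import Queue
from threading import Thread

def mpp_sum(values, nranks=4):
    """Split `values` across nranks workers; each worker sends its
    partial sum over a message queue, and the root reduces them."""
    inbox = Queue()
    chunks = [values[r::nranks] for r in range(nranks)]  # round-robin split

    def worker(rank, chunk):
        inbox.put((rank, sum(chunk)))   # "message" back to the root

    threads = [Thread(target=worker, args=(r, c)) for r, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Root collects one message per rank and combines the partial sums.
    return sum(part for _, part in (inbox.get() for _ in range(nranks)))
```

The decomposition pattern (partition, compute locally, reduce via messages) is the same one an mpp solver applies to a finite element model split across Compute nodes.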
4 Net resources
• Microsoft HPC Pack 2016 documentation at technet.microsoft.com
5 Reference Windows HPC cluster
Figure 1: Reference system layout – CAE Workstations and the Company Active Directory server connected to the HPC Cluster via Gigabit Ethernet.
5.1 HPC Cluster software components
• Clients (from which jobs are submitted to the HPC Cluster): Workstations with Windows 10
Pro, LSTC WinSuite, Microsoft HPC-Pack 2016 Update 1 (Client utilities installation, which
installs the Job Manager and tools needed by LSTC WinSuite).
• HPC Head node with Fileserver – Microsoft Server 2016 Standard, LS-DYNA License Server,
Microsoft HPC-Pack 2016 Update 1 (Head node Installation).
o Function: The HPC Head node receives the simulation jobs from the Clients, puts them in a queue, and then starts them as soon as sufficient resources are available on the Compute nodes. The results from the simulation jobs are stored on the File server.
• Compute node(s) – Microsoft Server 2016 Standard, Microsoft HPC-Pack 2016 Update 1 (Compute node installation).
o Function: The Compute nodes read the simulation job data from the File server,
run the simulation jobs, and store the result files on the File server.
All above servers/Workstations and their users are registered in the company Active Directory.
The users that can access and start jobs on the HPC Cluster are referred to as the CAE-users. Usually a Group, e.g. "HPCgroup", is created in the Active Directory so that all users belonging to this group have appropriate access to the HPC Head node, File server Shares, and Compute node(s). Jobs are submitted to the HPC Head node, and thus this is the only network server name the CAE-users need to know, e.g. "HPCSrv".
Notes:
• Microsoft HPC Pack 2016 Update 1 is a monolithic installation file that contains options to install the Client utilities, Head node, Compute node(s), etc.
• The reference HPC Cluster system has a file server dedicated to the HPC Cluster; this is often a good choice, as the HPC Cluster can generate a lot of IO and data.
• It is assumed that all servers and Compute node(s) in the HPC Cluster are attached to the company network and are reachable from the Workstations. Other options are possible, e.g. a private network for the HPC Cluster, but these are not explored here.
5.2 Hardware
The hardware selection was made in Q1 2018.
Workstations
• Windows 10, 32 GB RAM, Professional level graphics card for CAD (OpenGL), 1 TB Hard
drive, single 4-core Xeon CPU, Gigabit Ethernet card, Full HD display
File server
• 32 GB RAM, single 8-core Xeon CPU, 6x4 TB SAS Hard drives, RAID 10 controller card for SAS, Gigabit Ethernet card. The Hard drives use RAID 10 for performance.
Compute node(s)
• 192 GB RAM, 2x1 TB Hard drives (RAID 1), dual Xeon SP 6148 CPUs (20 cores/CPU), Mellanox ConnectX-3 InfiniBand cards, Gigabit Ethernet card
Network
• Mellanox SX6005 12-port InfiniBand switch, QSFP+ connector cables for connecting the Switch and Compute nodes
Notes
• For implicit analysis, more memory per node may be needed on the Compute node(s).
• Instead of Hard drives on the Compute Nodes, using SSDs may provide significantly better
performance for certain types of analyses, but not for the types assumed here, see Section
2.
• In the system, InfiniBand is only used for MPI communication between the LS-DYNA processes. All other network traffic (SMB, TCP, UDP, etc.) is carried by the Gigabit Ethernet network.
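From the hardware list above, the per-node capacity follows directly: dual 20-core CPUs give 40 cores per node, and 192 GB RAM works out to 4.8 GB per core. A small sketch of that arithmetic (the node count is a hypothetical parameter; only the per-node figures come from the specification above):

```python
# Capacity arithmetic for the reference Compute nodes described above:
# dual Xeon SP 6148 (20 cores each) and 192 GB RAM per node.
CORES_PER_NODE = 2 * 20              # dual 20-core CPUs -> 40 cores/node
RAM_PER_NODE_GB = 192
RAM_PER_CORE_GB = RAM_PER_NODE_GB / CORES_PER_NODE   # 4.8 GB/core

def cluster_capacity(nodes):
    """Total (cores, RAM in GB) for a hypothetical count of Compute nodes."""
    return nodes * CORES_PER_NODE, nodes * RAM_PER_NODE_GB

print(CORES_PER_NODE, RAM_PER_CORE_GB)   # 40 4.8
```

The RAM-per-core figure is the number to watch when sizing nodes for implicit analyses, per the note above.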
5.3 File server and File Share setup
• \\HPCSrv\projects with subfolders for each CAE-user or project to store input files and simulation results. This share is also used by the Compute nodes during the simulations. This share should be accessible by all CAE-users on their Workstations as well as on the Compute nodes, otherwise they will not be able to use the HPC Cluster.
• A File share for software, with subfolders:
o lsdyna: the LS-DYNA executables, both smp and mpp versions in double and single precision. Microsoft HPC Pack 2016 uses MS-MPI, thus the mpp/LS-DYNA binaries installed should be those whose label includes the phrase "msmpi", e.g. ls-dyna_mpp_s_r920_winx64_ifort131_msmpi.exe.
o installation: the Windows HPC Pack 2016 Update 1 installation file and the LSTC WinSuite installation file. This is for convenience when adding new Clients (Workstations).
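A quick sanity check of the folder layout can save debugging later. The sketch below assumes, for illustration, that the subfolders named above ("projects", "lsdyna", "installation") all sit under one root path; adjust the names and root to your own share layout.

```python
# Sketch: check that the expected subfolders exist under the file-share
# root before submitting jobs. Folder names follow the layout described
# above; the single-root assumption is for illustration only.
import os

REQUIRED_SUBFOLDERS = ("projects", "lsdyna", "installation")

def check_share_layout(share_root):
    """Return the list of required subfolders missing under share_root,
    e.g. check_share_layout(r'\\\\HPCSrv')."""
    return [name for name in REQUIRED_SUBFOLDERS
            if not os.path.isdir(os.path.join(share_root, name))]
```

Run from a Workstation and from a Compute node, this also confirms the share is reachable from both sides, which the section above notes is required.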
6 mpp/LS-DYNA and MS MPI – Message Passing Interface
In mpp/LS-DYNA, multiple processes cooperate to solve a common problem, and to that end MPI is used. As quick communication is crucial for performance, the MPI communication is made using a fast network such as InfiniBand between Compute nodes, and methods such as user-space memory copy between processes on the same Compute node.
In the reference HPC Cluster, the MPI implementation included with Microsoft HPC Pack 2016 is used: MS-MPI (Microsoft MPI). Experience shows that it is safe to upgrade the MS-MPI version on the HPC Cluster as newer versions, with e.g. bug fixes, are released by Microsoft. For more information on MPI and MS-MPI, see technet.microsoft.com (search for MS MPI).
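Under HPC Pack the scheduler (via LS-Run) normally launches the MPI job for you, but it can help to see the pieces of the underlying command line. The sketch below assembles one: `-n` is the standard mpiexec rank-count flag and `i=` is LS-DYNA's input-file argument; the UNC paths are hypothetical examples, not paths from this guide's setup.

```python
# Sketch: assemble an mpiexec command line for mpp/LS-DYNA with MS-MPI.
# Normally the HPC Pack scheduler builds and runs this for you; shown
# here only to make the pieces visible. Paths are hypothetical.
def build_mpiexec_cmd(nranks, solver_exe, input_file):
    """Return the argv list: mpiexec -n <ranks> <solver> i=<input>."""
    return ["mpiexec", "-n", str(nranks), solver_exe, f"i={input_file}"]

cmd = build_mpiexec_cmd(
    40,  # e.g. all cores of one reference Compute node
    r"\\HPCSrv\software\lsdyna\ls-dyna_mpp_s_r920_winx64_ifort131_msmpi.exe",
    r"\\HPCSrv\projects\John\Project34\Crashsim12\main.k",
)
```

Note that the solver binary must be the "msmpi" build mentioned in Section 5.3, since the MPI runtime and the binary's MPI library must match.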
7 Using the HPC-Cluster from a CAE-user perspective
1. Create the LS-DYNA input file, e.g. main.k, using a preprocessor, e.g. LS-PrePost.
2. Save the input file in the user's project folder on the Head Node/File Server, e.g. "\\HPCSrv\projects\John\Project34\Crashsim12\main.k".
3. Open LS-Run (part of LSTC WinSuite), select the appropriate LS-DYNA binary, solution options, and the input file "\\HPCSrv\projects\John\Project34\Crashsim12\main.k". Submit the simulation to the "HPCSrv" HPC Cluster queue; see also Figure 2 for an illustration. As soon as resources, i.e. Compute nodes, are available, the simulation will be started.
4. The progress of the job can be monitored using LS-Run; when finished, the results are available for postprocessing in "\\HPCSrv\projects\John\Project34\Crashsim12" (the default location).
Notes:
• Multiple jobs can be started; they are then run as soon as Compute node resources are available.
• More information on how to start an LS-DYNA analysis using LS-Run is available under the
Help-menu in LS-Run.
• Jobs submitted by LS-Run to a Windows HPC Cluster are submitted to the standard Win-
dows HPC server queue and thus cooperate/co-exist with other jobs submitted to the HPC
Cluster, e.g. from other CAE-software.
Figure 2: LS-Run
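The queueing behavior described in the steps and notes above can be sketched as a toy model: jobs wait in FIFO order and a batch of jobs starts whenever enough cores are free. This is an illustration only; the real scheduling policy is HPC Pack's, and core counts here are hypothetical.

```python
# Toy model of the Head node behavior described above: queued jobs
# start as soon as sufficient Compute node cores are free. Sketch only;
# real scheduling is done by the HPC Pack scheduler.
from collections import deque

def schedule(jobs, total_cores):
    """jobs: list of (name, cores_needed), in submission order.
    Returns 'waves': each wave is the list of jobs started together."""
    pending, waves = deque(jobs), []
    while pending:
        free, wave = total_cores, []
        # Start queued jobs in order while cores remain.
        while pending and pending[0][1] <= free:
            name, need = pending.popleft()
            free -= need
            wave.append(name)
        if not wave:
            raise ValueError("job needs more cores than the cluster has")
        waves.append(wave)
    return waves
```

For example, three 40-core jobs on an 80-core cluster run as two waves: two jobs immediately, the third when cores free up.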
9 Verify that the system works
To verify node connectivity and the performance of the MPI/InfiniBand interconnect, one can also use the mpipingpong test included in HPC Pack 2016 (see the HPC Pack documentation at technet.microsoft.com).
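What a ping-pong test measures is simply the round-trip time of a small message between two endpoints. mpipingpong does this over MPI between real nodes; the sketch below reproduces only the measurement pattern, with two threads and queues standing in for the nodes and the interconnect.

```python
# Sketch of the ping-pong measurement pattern: time many small-message
# round trips and report the mean. Threads/queues stand in for the MPI
# ranks and InfiniBand fabric that mpipingpong actually exercises.
import time
from queue import Queue
from threading import Thread

def pingpong_roundtrip(iterations=1000):
    """Mean round-trip time in seconds for one small message."""
    to_peer, from_peer = Queue(), Queue()

    def peer():
        for _ in range(iterations):
            from_peer.put(to_peer.get())   # echo each message back

    t = Thread(target=peer)
    t.start()
    start = time.perf_counter()
    for i in range(iterations):
        to_peer.put(i)       # "ping"
        from_peer.get()      # "pong"
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / iterations
```

On a healthy InfiniBand fabric the real mpipingpong latencies are in the microsecond range; a node that stands out with much higher latency is the one to investigate.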
11 Copyright
All trademarks, service marks, trade names, product names and logos appearing in this document are the property of their respective owners, including in some instances Livermore Software Technology Corporation (LSTC).