
Nuclear Technology

Journal homepage: https://ans.tandfonline.com/loi/unct20

Method of Characteristics for 3D, Full-Core Neutron Transport on Unstructured Mesh

Derek R. Gaston, Benoit Forget, Kord S. Smith, Logan H. Harbour, Gavin K. Ridley & Guillaume G. Giudicelli

To cite this article: Derek R. Gaston, Benoit Forget, Kord S. Smith, Logan H. Harbour, Gavin K. Ridley & Guillaume G. Giudicelli (2021) Method of Characteristics for 3D, Full-Core Neutron Transport on Unstructured Mesh, Nuclear Technology, 207:7, 931-953, DOI: 10.1080/00295450.2021.1871995

To link to this article: https://doi.org/10.1080/00295450.2021.1871995

This material is published by permission of the Battelle Energy Alliance, LLC, for the DOE under Contract No. DE-AC07-05ID14517. The US Government retains for itself, and others acting on its behalf, a paid-up, non-exclusive, and irrevocable worldwide licence in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.

Published online: 09 Jul 2021.


Full Terms & Conditions of access and use can be found at


https://ans.tandfonline.com/action/journalInformation?journalCode=unct20
NUCLEAR TECHNOLOGY · VOLUME 207 · 931–953 · JULY 2021
DOI: https://doi.org/10.1080/00295450.2021.1871995

Method of Characteristics for 3D, Full-Core Neutron Transport on Unstructured Mesh

Derek R. Gaston,a* Benoit Forget,b Kord S. Smith,b Logan H. Harbour,a Gavin K. Ridley,a and Guillaume G. Giudicellia

aIdaho National Laboratory, 1955 N. Fremont Avenue, Idaho Falls, Idaho 83415
bMassachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

Received July 30, 2020


Accepted for Publication December 31, 2020

Abstract — A myriad of nuclear reactor designs are currently being considered for next-generation power production. These designs utilize unique geometries and materials and can rely on multiphysics effects for safety and operation. This work develops a neutron transport tool, MOCkingbird, capable of three-dimensional (3D), full-core reactor simulation for previously intractable geometries. The solver is based on the method of characteristics, utilizing unstructured mesh for the geometric description. MOCkingbird is built using the MOOSE multiphysics framework, allowing for straightforward linking to other physics in the future. A description of the algorithms and implementation is given, and solutions are computed for two-dimensional/3D C5G7 and the Massachusetts Institute of Technology BEAVRS benchmark. The final result shows the application of MOCkingbird to a 3D, full-core simulation utilizing 1.4 billion elements and solved using 12 000 processors.

Keywords — Method of characteristics, unstructured mesh, MOCkingbird, MOOSE, BEAVRS.

Note — Some figures may be in color only in the electronic version.

I. INTRODUCTION

Modeling and simulation of nuclear reactors play a vital role in their life cycle. The existing fleet has relied heavily on modeling and simulation based on informed approximations from many experiments and decades of operating experience. The existing tools have proven themselves to be up to the task for simulation of existing reactor designs. These tools have heavily relied on nodal calculations,1–3 with support from two-dimensional (2D) method of characteristics (MOC) solvers for lattice calculations.4–7 However, most designs among the next generation of advanced reactors would benefit from three-dimensional (3D), full-core neutron transport calculations. One of the primary drivers for this is the need to be geometrically agnostic in order to model complex, heterogeneous axial geometry. In addition, multiphysics effects, such as core "flowering" in sodium-cooled fast reactors,8 can also be captured using flexible 3D neutron transport tools. Unfortunately, high-fidelity, 3D neutron transport calculation has largely remained on the sidelines due to large computational requirements.9 Recently, groundbreaking research in 3D, full-core neutron transport has led to some of the first viable solution methods for reactor design.9–15

*E-mail: derek.gaston@inl.gov
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.
This article has been corrected with minor changes. These changes do not impact the academic content of the article.
This work develops a MOC neutron transport method on unstructured mesh that is accurate, scalable, geometrically flexible, capable of full-core 3D calculation, and able to respond to arbitrary multiphysics interactions, such as geometric deformation. To achieve these goals, a new neutron transport solver, named MOCkingbird, was developed utilizing the MOOSE framework.16,17 MOOSE provides MOCkingbird with the underlying geometrical representation and a simplified linking to other physics within the reactor core. Utilizing unique algorithms for parallelization of MOC, MOCkingbird can efficiently solve both 2D and 3D neutron transport problems.

This paper is composed as follows: Sec. II presents the state of the art of the underlying numerical method in MOCkingbird, the MOC, and its use with unstructured mesh. Section III summarizes the main advanced features of MOCkingbird, while Sec. IV dives into some of the fundamentals of the method and presents some of the technical choices made during implementation. Finally, Sec. V presents numerical light water reactor benchmarks solved with MOCkingbird, providing both validation and an evaluation of its scaling and performance.

II. BACKGROUND

II.A. MOC for Neutron Transport

Transport simulations can predict the flow of neutrons through materials, enabling predictive modeling of radiation shielding, neutron detectors, critical experiments, and reactor cores. The MOC, further described in Sec. IV, is a solution method that is used in many fields and has seen broad adoption in the nuclear engineering community for solving the neutron transport equation, starting from the work of Ref. 18. MOC has become broadly used in reactor analysis, supplanting other transport solution methodologies for many applications, including lattice physics.19

The MOC is traditionally solved using constructive solid geometry20,21 (CSG). CSG represents the surfaces within a reactor using a parametric representation of simple shapes, which are then operated on with Boolean logic. This scheme has been particularly successful for modeling light water reactors due to its ability to perfectly represent the concentric cylinders placed within the lattices inherent to that geometry. Early efforts successfully utilized MOC for cross-section and discontinuity factor calculations for input into full-core nodal calculations.2,4 Extensive research utilizing this method for 2D radial calculations has led to efficient, accelerated,22,23 parallel20,21,24 codes capable of full-core calculations.25 Lately, breakthrough work has led to some of the first 3D, full-core MOC calculations.9–12 In the last 15 years, considerable work has been done to successfully leverage couplings of 2D MOC solutions, in what is known as 2D/one-dimensional (1D) transport,26,27 to provide 3D transport solutions at lower cost and accuracy than 3D MOC.

II.B. Unstructured Mesh MOC State of the Art

While CSG is ideal for ray tracing, it can be difficult to use for representing complicated geometries and geometric deformation.28 An alternative is unstructured mesh. Most commonly used in finite element solvers, unstructured mesh builds geometry out of discrete "elements" that are made of simple shapes such as triangles, quadrilaterals, hexahedra, and tetrahedra. Many of these small elements can be combined to create complicated geometries that can then be straightforwardly deformed. This is the approach MOCkingbird uses for geometrical representation. In addition to the flexibility gained through utilizing unstructured mesh, by abstracting ray tracing to be across an element, MOCkingbird gains the ability to work in 1D, 2D, and 3D. Unstructured mesh is also the basis for numerous non-MOC finite element transport solvers.29,30

Utilization of unstructured mesh as the geometric representation for MOC is not, in itself, a new concept. Several previous studies have applied this methodology. Two of the most successful implementations of MOC utilizing unstructured mesh were developed at Argonne National Laboratory: MOCFE (Ref. 31) and PROTEUS-MOCEX (Ref. 32). While these two codes share a similar geometric discretization, including the ability to solve in 3D, they vary greatly in implementation detail.

Of paramount importance to MOC on unstructured mesh is the ability to handle communication of angular flux moments across domain decomposed geometry. In light of this, MOCFE utilized a novel matrix-free generalized minimum residual method (GMRES) solver to accelerate a nonimplicit global sweep iterative algorithm. To further reduce parallel communication due to domain decomposition, MOCFE also employed angle- and energy-based decomposition. MOCFE utilized a distributed-mesh back-projection33 (among other schemes) to generate 3D trajectories (trajectories out of the radial plane), allowing for MOC solutions in 3D geometries. Ghosting for track generation and load balancing for the computationally expensive solution phase were explored. The back-projection of the geometry being different for each angle proved challenging for load balancing.

PROTEUS-MOCEX utilized a wholly different scheme for the solution of 3D transport problems. Instead of directly generating 3D trajectories, PROTEUS-MOCEX built on the extensive research into so-called 2D/1D methods that utilize CSG (Refs. 34 and 35). Broadly, 2D/1D methods employ transport solvers (such as MOC) in the radial direction and lower-order methods in the axial direction (such as diffusion or SN). This scheme can be incredibly effective due to reactors often having much higher geometric and material variability in the radial direction than in the axial direction.
PROTEUS-MOCEX, in addition to utilizing unstructured mesh, went one step further: It replaced the 1D axial solve and the axial leakage term in 2D with a 1D finite element treatment and effectively 3D sources in the 2D MOC solve.32 The 2D MOC sweep was then coupled into the axial solve as a line source. This provided the efficiency needed to solve complex 3D transport problems.36

Several other research efforts have also utilized unstructured mesh with MOC. The MOC on Unstructured Mesh37 (MOCUM) project is heavily focused on automatically converting CSG geometry into triangular elements. A Delaunay triangulation algorithm was created, allowing for a simplified workflow from reactor geometry discretization to MOC solution. Parallelism in MOCUM was handled using OpenMP. MoCha-Foam38 was developed using the OpenFOAM open-source, partial differential equation (PDE) solver framework, allowing for future expansion into multiphysics problems. Finally, an effort to utilize linear source representation on unstructured meshes39 was undertaken at Bhabha Atomic Research Center in Mumbai. Similar to MOCUM, a triangulation was utilized to produce the geometry; however, fewer elements were needed due to the higher-order representation of the source. One unique aspect of this effort was that it utilized curved triangle edges to reduce geometric representation error.

Each of these research efforts pushed the state of the art forward for MOC on unstructured mesh. However, many aspects still remain unsolved. These include accurate reflective boundary condition treatment,31,40 memory issues,32 parallelization,33,37,38 and parallel track generation.33 This project aims to alleviate these issues through the development of MOCkingbird.

III. MOCKINGBIRD

MOCkingbird has been developed using MOOSE, which is a framework for building multiphysics tools, and therefore is a natural fit for a code that will ultimately perform multiphysics analysis of reactors. MOOSE provides important infrastructure to MOCkingbird, including finite element mesh, ray-tracing capabilities, input/output, online postprocessing, auxiliary field computations, code timing, memory management, testing infrastructure, and straightforward coupling to other MOOSE-based applications.

The other significant library dependency is OpenMOC, which MOCkingbird relies on for track generation.40 OpenMOC contains routines for the efficient generation of cyclic 2D and 3D tracks. The ability to selectively generate 3D tracks based on a unique one-dimensional index is critical to efficient track generation and claiming. In addition, said track generation also computes the necessary quadrature weights to use with the tracks for angular and spatial integration.

III.A. Summary of Main Features

MOCkingbird uses the conventional flat source MOC formulation as detailed in Ref. 19. Long characteristic tracks (domain boundary to domain boundary) are used in both serial and parallel, providing the same solver behavior regardless of the number of message passing interface41 (MPI) ranks used. At this time, MOCkingbird uses flat source, isotropic scattering, isotropic fission source, and multigroup cross-section approximations. MOCkingbird does not contain cross-section generation capabilities; thus, all cross sections must be generated externally. However, a flexible interface exists to allow for the reading of many different types of cross-section databases. One in-progress aspect of MOCkingbird is its acceleration capability based on DSA,23 rather than CMFD,22,42 which is more commonly used in MOC programs. Acceleration in MOCkingbird is still under heavy development and will be the subject of future publications.

Some interesting or unique aspects of MOCkingbird are

1. unstructured mesh for geometry definition and spatial discretization
2. no geometric assumptions made in the solver
3. capable of working with moving mesh, e.g., from thermal expansion or fracture
4. same solution behavior in serial and parallel
5. parallel, scalable cyclic track generation, starting point location, and track distribution
6. asynchronous sparse data exchange algorithms during problem setup
7. scalable communication routines during problem setup
8. efficient, robust element traversal
9. scalable, domain decomposed ray tracing
10. smart buffering for messages
11. scalable memory usage
12. object pool utilization to reduce memory allocation and deallocation
13. weighted partitioning for load balance

14. high parallel scalability
15. developed using MOOSE for simplified linking to other physics codes.

III.B. Ray-Tracing Capabilities

Asynchronous parallel ray tracing allows for the integration of a complete track across the entire domain decomposed geometry without any intermediate global synchronization or iteration. This sets MOCkingbird apart from other MOC codes and aids in convergence. Because such a capability could be useful for solving many physical problems, e.g., radiative heat transfer or shock waves, MOCkingbird's ray-tracing capability has been abstracted into a MOOSE module that will be part of the open source release in the near future. The details of this parallel algorithm are beyond the scope of the current discussion and will be detailed in a future publication.

As is mentioned in Sec. IV.E, many MOC codes employ modular ray tracing (MRT) and modular spatial domain decomposition (SDD) (Refs. 9 and 43) in parallel. When using a modular decomposition, tracks are only integrated from one partition boundary to another within one source iteration, creating a block-Jacobi-like method. As noted in Ref. 33, this idea is untenable for arbitrarily decomposed unstructured mesh that is partitioned with a mesh partitioner. Figure 1 shows why this is the case. With unstructured-mesh partitioning there are many jagged/reentrant corners along partition boundaries. If a track were to pass through the domain in such a way that it follows a partition boundary, then a block-Jacobi-like tracing of that track would take many source iterations before boundary information is propagated across the domain.9 In a large 3D mesh for a full reactor core with millions of tracks, this situation frequently occurs. Similar issues have been studied for the domain decomposition of discrete ordinates sweeps in finite element neutron transport.44,45

Utilizing an asynchronous communication strategy, MOCkingbird efficiently traces the track shown in Fig. 1c in one source iteration without any intermediate global synchronization. This provides multiple advantages: behavior is the same in parallel and serial, storage of partition angular fluxes is not required (therefore scalable in memory), and the convergence rate is not reduced as observed in Ref. 9.

IV. MOC IN MOCKINGBIRD

The MOCkingbird code utilizes a traditional form of MOC that makes use of flat spatial source regions and transport-corrected-P0 scattering cross sections. These simplifications can be removed in the future, but provide a good basis to start from. Many detailed treatments of this formulation are found in the literature.4,9,20,43,46 This section gives an overview of the characteristic form of the steady-state Boltzmann neutron transport equation as used by MOCkingbird. Sections IV.A through IV.F start from the multigroup, isotropic scattering approximation and review the reduction to a problem that is discrete in both angle and space. Ultimately, a source iteration scheme is developed for the iterative solution of the k-eigenvalue problem.47

IV.A. Characteristic Equation

We start from the isotropic scattering multigroup neutron multiplication eigenproblem, the derivation of which can be found in any reactor physics book, e.g., Ref. 48. This equation reads:

$$\left( \mathbf{\Omega} \cdot \nabla + \Sigma_{t,g}(\mathbf{r}) \right) \psi_g(\mathbf{r}, \mathbf{\Omega}) = \frac{1}{4\pi} \sum_{g'=1}^{n_g} \Sigma_{s0,g' \to g}(\mathbf{r}) \int_{4\pi} \psi_{g'}(\mathbf{r}, \mathbf{\Omega}') \, d\Omega' + \frac{\chi_g(\mathbf{r})}{4\pi k_{\mathrm{eff}}} \sum_{g'=1}^{n_g} \nu\Sigma_{f,g'}(\mathbf{r}) \int_{4\pi} \psi_{g'}(\mathbf{r}, \mathbf{\Omega}') \, d\Omega' . \quad (1)$$

Equation (1) represents a balance between production and loss of neutrons, with the fundamental eigenvalue keff balancing the system that might otherwise not have a steady state. The groupwise angular fluxes ψg are the dependent variables in Eq. (1). Solving for ψg allows for the computation of reaction rates throughout the core. Σt,g, Σs0,g'→g, and νΣf,g are the groupwise total, scatter, and nu-fission macroscopic cross sections, respectively. A position in 3D space is denoted by r. The direction of neutron travel is represented by Ω, a point on the unit sphere in R³, with the subscript g denoting the neutron group of interest. The cumulative fission emission spectrum is represented by χg in an equilibrium of prompt and delayed emissions. Last, ng is the total number of neutron groups. The group structure partitions the possible energies the neutrons may take on. These are conventionally grouped into contiguous bands of neutron energies.

The conventional method to solve the resulting eigenproblem is called source iteration. The source terms from fission and scattering on the right side of Eq. (1) are "lagged." The lagging of the scattering source makes the scheme an inexact power iteration, but is a generally more stable and efficient numerical scheme.49 This algorithm finds the eigenpair of maximum eigenvalue (inverse of keff) and its associated eigenvector, the solution scalar flux.


Fig. 1. The mesh is split for three MPI ranks, and one track is considered. With a block-Jacobi-like algorithm, the boundary angular flux would only propagate to the other side of the domain after all source iterations. In MOCkingbird, the entire track is traced in one iteration without any intermediate global synchronization.

The right side of Eq. (1) is the total neutron source, denoted hereafter for brevity as

$$Q_g(\mathbf{r}) = \frac{1}{4\pi} \left( \sum_{g'=1}^{n_g} \Sigma_{s0,g' \to g}(\mathbf{r}) \, \phi_{g'}(\mathbf{r}) + \frac{\chi_g(\mathbf{r})}{k_{\mathrm{eff}}} \sum_{g'=1}^{n_g} \nu\Sigma_{f,g'}(\mathbf{r}) \, \phi_{g'}(\mathbf{r}) \right) , \quad (2)$$

with

$$\phi_g(\mathbf{r}) = \int_{4\pi} \psi_g(\mathbf{r}, \mathbf{\Omega}) \, d\Omega . \quad (3)$$

This source Qg is isotropic. While an isotropic fission source is conventionally regarded as a good assumption, isotropic scattering generally is not, especially for collisions with light isotopes such as hydrogen. To account for this, a transport correction, as discussed in depth by Ref. 48, is applied to the total cross sections.
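As an illustration, evaluating Eq. (2) in a single FSR amounts to two group loops. The sketch below assumes a dense scattering-matrix layout and hypothetical names; it is not MOCkingbird's actual implementation.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the flat-source evaluation of Eq. (2) for one FSR.
// sigma_s[gp][g] holds Sigma_{s0,g'->g}; one entry per group elsewhere.
std::vector<double> computeSource(const std::vector<std::vector<double>> & sigma_s,
                                  const std::vector<double> & nu_sigma_f,
                                  const std::vector<double> & chi,
                                  const std::vector<double> & phi,
                                  double k_eff)
{
  const std::size_t ng = phi.size();
  std::vector<double> q(ng, 0.0);

  // Total fission production in this region (the group sum in Eq. (2))
  double fission = 0.0;
  for (std::size_t gp = 0; gp < ng; ++gp)
    fission += nu_sigma_f[gp] * phi[gp];

  constexpr double four_pi = 4.0 * 3.14159265358979323846;
  for (std::size_t g = 0; g < ng; ++g)
  {
    // In-scattering into group g from all groups g'
    double scatter = 0.0;
    for (std::size_t gp = 0; gp < ng; ++gp)
      scatter += sigma_s[gp][g] * phi[gp];

    q[g] = (scatter + chi[g] / k_eff * fission) / four_pi; // Eq. (2)
  }
  return q;
}
```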


With this notation the multigroup isotropic transport equation is

$$\left( \mathbf{\Omega} \cdot \nabla + \Sigma_{t,g}(\mathbf{r}) \right) \psi_g(\mathbf{r}, \mathbf{\Omega}) = Q_g(\mathbf{r}) . \quad (4)$$

The early work on MOC by Askew18 developed a robust discretization for solving this equation approximately by identifying the characteristics of the PDE. It should be noted, however, that characteristics-based methods date back to the late 1950s with the work of Vladimirov in Ref. 50, according to Ref. 51. MOCkingbird uses a flat source MOC discretization that matches the formulation given in Ref. 19.

IV.B. Tracks, Segmentation, and Ray Tracing

The scalar flux in each flat source region (FSR) is defined by numerically approximating the spatial and angular integral of the scalar fluxes and the volumes using the tracks crossing each FSR. It is necessary to have a sufficient number of tracks (indexed by k) crossing each FSR to ensure accurate evaluation of the scalar flux. Figure 2a shows an illustrative set of a few tracks. The intersections of a track on the boundary of each successive FSR define the segments, as shown in Fig. 2b. Using the fact that each track has an assigned spacing/width ωk, the scalar flux is

$$\phi_{i,g} = \frac{4\pi}{\Sigma_{t,i,g}} \left[ Q_{i,g} + \frac{1}{V_i} \sum_k \sum_p \omega_{m(k)} \, \omega_p \, \omega_k \, \Delta\psi_{k,i,g,p} \right] , \quad (5)$$

where

m = indices for azimuthal quadrature
p = indices for polar quadrature
ω = weights.

In addition, MOCkingbird utilizes an "as-tracked" approximation of the FSR volumes, with the volumes of the FSRs approximated using track integrations.

Traditionally, segmentation is carried out as a preprocessing step by MOC-based codes,20,52 where the tracks are traced and segments are stored in memory for later use during the iterative solution scheme. However, the number of segments required in 3D calculations can overwhelm available computer resources; therefore, recent advancements have been made in "on-the-fly" segmentation.9 In this mode, ray tracing is performed each time the track is used to integrate the angular flux across the domain. This provides savings in memory,9 but also requires additional computational work. MOCkingbird can be run in either mode: saving the segments during the first sweep or completely on-the-fly computation to save memory.

Using these approximations, it is possible to compute the scalar flux by accumulating each Δψk,i,g,p computed on each segment across each FSR. Many different options exist for determining the tracks, including cyclic tracking,40 MRT (Ref. 43), once-through,53 and back-projection.33 The track laydown algorithm used by MOCkingbird is cyclic, global tracking as developed within OpenMOC (Ref. 40).

Cyclic tracking creates track laydowns that form cycles through the domain. That is, starting at the origin of one track, it is possible to move along all connected tracks in the cycle and arrive back at the starting position. Cyclic tracking is desirable for its ability to accurately represent reflected, periodic, or rotational boundary conditions. As can be seen in Fig. 3, at the edges of the domain each track meets the next track in the cycle. This allows the angular flux to pass from one track to the next. Without this feature, the incoming angular flux at the beginning of the tracks would have to be approximated or known. In full-core calculations, the incoming angular flux is often zero (vacuum), but having cyclic tracking capability makes the code much more flexible for the calculation of symmetric problems (e.g., pin cells and assembly-level lattice calculations).

Fig. 2. Example of equally spaced tracks and the segments that would lie along one such track.
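Concretely, the per-segment work implied by Eq. (5), attenuating the angular flux across a segment and tallying Δψ into the crossed FSR, can be sketched as follows. The exponential update is the standard flat-source relation (Ref. 19); all names and the data layout are illustrative, not MOCkingbird's actual API.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One flat source region's data, one value per energy group.
struct FSR
{
  std::vector<double> sigma_t;   // total cross section
  std::vector<double> q;         // isotropic flat source (per steradian)
  std::vector<double> phi_tally; // accumulates the bracketed sum in Eq. (5)
};

// psi: angular flux entering the segment (updated in place)
// length: 3D segment length; weight: combined track/quadrature weight
void traverseSegment(FSR & fsr, std::vector<double> & psi, double length, double weight)
{
  for (std::size_t g = 0; g < psi.size(); ++g)
  {
    // Standard flat-source solution along a characteristic:
    //   delta_psi = (psi_in - Q/Sigma_t) * (1 - exp(-Sigma_t * length))
    const double sig_t = fsr.sigma_t[g];
    const double delta_psi =
        (psi[g] - fsr.q[g] / sig_t) * (1.0 - std::exp(-sig_t * length));

    psi[g] -= delta_psi;                    // attenuated flux continues down the track
    fsr.phi_tally[g] += weight * delta_psi; // contribution to Eq. (5)
  }
}
```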


Fig. 3. (a) Two-dimensional and (b) three-dimensional tracks, with the 3D tracks defined as "stacks" above the 2D tracks (from Ref. 40).

The cyclic tracking code utilized in MOCkingbird was initially developed for OpenMOC, as detailed in Ref. 40. As shown in Fig. 3, the algorithms produce 2D tracks by specifying the number of azimuthal angles and the azimuthal spacing (space between parallel tracks). For 3D tracks, the 2D tracks are generated first. Then, "stacks" are made above each 2D track to create the 3D tracks. However, just as in segmentation, the storage of all of the 3D track information can be prohibitive for a full-core 3D calculation. Each 3D track is uniquely identified by the index of the 2D track it projects to, the index of its polar angle, and its index in the z-stack of tracks. This allows MOCkingbird to create a 3D track on the fly, enabling parallelizable track generation.
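A minimal sketch of this on-the-fly reconstruction is shown below. The struct layouts and the geometry arithmetic are assumptions for illustration only, not OpenMOC's or MOCkingbird's actual data structures.

```cpp
#include <cmath>
#include <cstdint>

struct Track2D { double x0, y0, phi, length; };
struct Track3D { double x0, y0, z0, phi, theta, length; };

// Three small indices fully identify a 3D track, so it never needs storing.
struct Track3DId
{
  std::uint32_t track2d; // index of the 2D track this 3D track projects onto
  std::uint16_t polar;   // index of its polar angle
  std::uint16_t z_stack; // index within the "stack" above the 2D track
};

Track3D buildTrack3D(const Track2D & t2d, const Track3DId & id,
                     const double * polar_angles, double z_spacing)
{
  Track3D t;
  t.x0 = t2d.x0; // radial start comes from the underlying 2D track
  t.y0 = t2d.y0;
  t.z0 = id.z_stack * z_spacing;    // elevation within the stack (schematic)
  t.phi = t2d.phi;                  // azimuthal angle of the 2D track
  t.theta = polar_angles[id.polar]; // polar angle from the quadrature
  t.length = t2d.length / std::sin(t.theta); // schematic; real cycles clip at z-boundaries
  return t;
}
```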
MOCkingbird includes support for stochastic angular quadratures in addition to the traditional discrete ordinates angular quadrature. The Random Ray Method54 (TRRM) integrates the angular flux across noncyclical tracks chosen randomly within the domain. The premise of a stochastic quadrature in MOC was originally put forward by Davidenko and Tsibulsky in the Russian-language work of Ref. 51 and later recapitulated in English in Appendix B of Ref. 55, giving promising results on the C5G7 benchmark. The idea of a stochastic quadrature had not enjoyed further analysis or examination in English-language works up until Tramm et al.'s recent introduction of TRRM. We have utilized Tramm et al.'s research by including a TRRM mode in MOCkingbird. One significant advantage of TRRM is reduced memory usage when compared to deterministic track laydowns. This MOCkingbird capability will be discussed in a future publication.

IV.C. Source Iteration and Transport Sweeps

MOCkingbird utilizes a source iteration scheme for the angular fluxes along each track, the scalar fluxes in each FSR, and the eigenvalue that balances the system. The scheme iterates between sweeping the angular fluxes across the tracks and updating the flat sources using the scalar fluxes computed during sweeping.

Algorithm 1 outlines the source iteration scheme. First, the fluxes are initialized, then the scalar fluxes are utilized to compute the initial source Q. The scalar and domain boundary angular fluxes are then normalized by dividing by the sum of the fission source (which is computed using an MPI_Allreduce in parallel):

$$F = \sum_i \sum_g \nu\Sigma_{i,f,g} \, \phi_{i,g} . \quad (6)$$

Normalization is necessary to keep the source values from continually increasing/decreasing (depending on the value of keff).

Algorithm 1: Source Iteration Algorithm.
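In outline, the loop described in this section (normalization, sweep, eigenvalue update, and stopping checks, detailed below) has the following structure. All helpers are hypothetical stand-ins for MOCkingbird internals; this is a sketch, not the actual implementation.

```cpp
#include <cmath>

struct Problem; // opaque handle to the discretized problem (illustrative)

// Hypothetical stand-ins for MOCkingbird internals:
void initializeFluxes(Problem &);
void updateSources(Problem &, double k_eff);   // evaluate Eq. (2) in every FSR
void normalizeByFissionSource(Problem &);      // divide fluxes by F of Eq. (6)
double fissionSource(Problem &);               // Eq. (6), MPI_Allreduce across ranks
void transportSweep(Problem &);                // Algorithm 2: tally new scalar fluxes
double fissionSourceRMSChange(Problem &);      // Eq. (8)

double sourceIteration(Problem & p, double f_rms_tol, double dk_tol)
{
  double k_eff = 1.0;
  initializeFluxes(p);

  while (true)
  {
    updateSources(p, k_eff);
    normalizeByFissionSource(p);

    const double f_old = fissionSource(p);
    transportSweep(p); // sweep all tracks, accumulating scalar fluxes
    const double f_new = fissionSource(p);

    const double k_new = k_eff * f_new / f_old; // Eq. (7)

    // Stopping criteria (typically 1e-4 on F_RMS, 1e-6 on the k change;
    // the text allows either criterion to terminate the iteration)
    if (fissionSourceRMSChange(p) < f_rms_tol ||
        std::fabs(k_new - k_eff) < dk_tol)
      return k_new;

    k_eff = k_new;
  }
}
```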


The scalar flux and FSR volumes are then zeroed for accumulation during the transport sweep. The transport sweep itself is shown in Algorithm 2. Each segment of each track is iterated over, the angular flux is updated, and the contribution to the scalar flux is tallied according to Eq. (5). When integration across a track meets a partition boundary, the current angular flux is packed into a data structure and communicated to the neighboring process via an asynchronous MPI communication algorithm, which will be detailed in a later publication.

Algorithm 2: Transport Sweep. The k tracks are iterated over, intersecting them with the geometry. The angular fluxes are propagated along the tracks and contribute to the source region's scalar flux.
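The hand-off at a partition boundary can be pictured as follows. The real asynchronous algorithm is deferred to a future publication, so this sketch only illustrates the pack-and-forward idea, with invented names and buffer layout.

```cpp
#include <mpi.h>
#include <vector>

// State needed to resume tracing a ray on the neighboring rank (illustrative).
struct RayState
{
  long track_id;           // which track this angular flux belongs to
  double position[3];      // where the ray crossed the partition boundary
  std::vector<double> psi; // angular flux for every energy group
};

void forwardRay(const RayState & ray, int neighbor_rank, MPI_Comm comm,
                std::vector<double> & buffer, MPI_Request & request)
{
  // Pack the state into a flat buffer: id, position, then group fluxes
  buffer.clear();
  buffer.push_back(static_cast<double>(ray.track_id));
  buffer.insert(buffer.end(), ray.position, ray.position + 3);
  buffer.insert(buffer.end(), ray.psi.begin(), ray.psi.end());

  // Non-blocking send: the local sweep continues with other tracks while
  // the neighboring rank picks this ray up and keeps tracing it.
  MPI_Isend(buffer.data(), static_cast<int>(buffer.size()), MPI_DOUBLE,
            neighbor_rank, /*tag=*/0, comm, &request);
}
```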
After the transport sweep, the eigenvalue is updated using the ratio of the old and new fission sources [as computed using Eq. (6), and utilizing MPI_Allreduce in parallel]:

$$k_{\mathrm{eff,new}} = k_{\mathrm{eff,old}} \, \frac{F_{\mathrm{new}}}{F_{\mathrm{old}}} , \quad (7)$$

and then convergence criteria are checked. MOCkingbird convergence criteria are based on the root-mean-squared (RMS) change of the element fission source:

$$F_{\mathrm{RMS}} = \sqrt{ \frac{1}{N_{\mathrm{fissionable}}} \sum_{\mathrm{fissionable},\,i} \left( \frac{\sum_g \Sigma_{f,i,g} \, \phi_{i,g,\mathrm{new}} - \sum_g \Sigma_{f,i,g} \, \phi_{i,g,\mathrm{old}}}{\sum_g \Sigma_{f,i,g} \, \phi_{i,g,\mathrm{old}}} \right)^{\!2} } , \quad (8)$$

where fissionable refers to FSRs that contain fissionable material, and N_fissionable is the number of fissionable elements. The sum over fissionable elements is computed in parallel utilizing MPI_Allreduce. The change in keff is also monitored:

$$\Delta k = \left| k_{\mathrm{eff,new}} - k_{\mathrm{eff,old}} \right| . \quad (9)$$

Once FRMS or Δk is small enough (typically 10⁻⁴ and 10⁻⁶, respectively), the iteration is terminated. It should be noted that these conditions are sometimes referred to as "stopping criteria" rather than "convergence criteria," as these quantities are not true residuals.
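The parallel evaluation of Eq. (8) requires only one small reduction. A minimal sketch follows, assuming f_new/f_old hold each local fissionable element's group-collapsed fission source (the inner sums of Eq. (8)); names are illustrative.

```cpp
#include <mpi.h>
#include <cmath>
#include <cstddef>
#include <vector>

// Each rank sums the squared relative change of its local fissionable FSR
// fission sources; one MPI_Allreduce combines both the sum and the count.
double fissionSourceRMSChange(const std::vector<double> & f_new,
                              const std::vector<double> & f_old,
                              MPI_Comm comm)
{
  double local[2] = {0.0, static_cast<double>(f_new.size())};
  for (std::size_t i = 0; i < f_new.size(); ++i)
  {
    const double rel = (f_new[i] - f_old[i]) / f_old[i];
    local[0] += rel * rel;
  }

  double global[2];
  MPI_Allreduce(local, global, 2, MPI_DOUBLE, MPI_SUM, comm);

  return std::sqrt(global[0] / global[1]); // Eq. (8)
}
```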
IV.D. Transport Correction and Stabilization

Transport-corrected cross sections have been shown to sometimes cause the source iteration scheme outlined previously to diverge.56 The transport correction is a modification to the cross sections to take anisotropic effects into account.57 Using ΔΣtr,g as the correction, it is applied through modification of the total and scattering cross sections:

$$\Sigma_{tr,g} = \Sigma_{t,g} - \Delta\Sigma_{tr,g} \quad (10)$$

and

$$\tilde{\Sigma}_{s,g' \to g} = \Sigma_{s0,g' \to g} - \Delta\Sigma_{tr,g} \, \delta_{g',g} . \quad (11)$$

Making these cross-section modifications can lead to a system that is unstable under source iteration. For an in-depth analysis see Refs. 9 and 56. MOCkingbird stabilizes this using the damping scheme of Ref. 56. This is achieved by defining a damping factor:

$$D_{i,g} = \begin{cases} \rho \, \dfrac{\left| \tilde{\Sigma}_{s,i,g \to g} \right|}{\Sigma_{tr,i,g}} , & \text{for } \tilde{\Sigma}_{s,i,g \to g} < 0 \\[1ex] 0 , & \text{otherwise} , \end{cases} \quad (12)$$

where ρ > 0 controls the amount of damping to apply for FSRs that have a negative in-group scattering cross section. Larger values of ρ lead to more damping. Too much damping, though, will slow the convergence rate.9

The application of this damping factor occurs immediately after the transport sweep. The new scalar fluxes φi,g,new are combined with the scalar fluxes from the previous iteration φi,g,old via a weighted average:

$$\phi_{i,g} = \frac{\phi_{i,g,\mathrm{new}} + D_{i,g} \, \phi_{i,g,\mathrm{old}}}{1 + D_{i,g}} . \quad (13)$$

The new keff is computed using this damped flux. Damping is only applied if ρ is specified to be a positive number; thus, it is only used when problems require it.
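A sketch of how Eqs. (12) and (13) might be applied after each sweep follows. The flat per-element-per-group arrays are an assumed layout; rho <= 0 disables damping, matching the behavior described above.

```cpp
#include <cstddef>
#include <vector>

// Damping stabilization of Eqs. (12) and (13) (illustrative data layout).
void applyDamping(std::vector<double> & phi_new,          // flux from this sweep
                  const std::vector<double> & phi_old,    // flux from last sweep
                  const std::vector<double> & sigma_s_gg, // corrected in-group scatter
                  const std::vector<double> & sigma_tr,   // transport-corrected total
                  double rho)
{
  if (rho <= 0.0)
    return; // damping only engages when the user requests it

  for (std::size_t n = 0; n < phi_new.size(); ++n)
  {
    // Eq. (12): damp only where the corrected in-group scattering is negative
    const double d =
        sigma_s_gg[n] < 0.0 ? rho * (-sigma_s_gg[n]) / sigma_tr[n] : 0.0;

    // Eq. (13): weighted average of the new and previous scalar fluxes
    phi_new[n] = (phi_new[n] + d * phi_old[n]) / (1.0 + d);
  }
}
```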


IV.E. Parallelism and Domain Decomposition

The MOC is a highly parallelizable algorithm. The majority of the computational effort is located within the transport sweep, and each independent track can be swept simultaneously. This inherent parallelism is, in most scalable MOC codes, exploited by a combination of shared and distributed memory parallelism. OpenMP (Ref. 58) is the usual shared memory choice for MOC, used in Refs. 9, 20, 31, 43, and 45, even though other models like pthreads59 could be used. Distributed memory parallelism for MOC, taking place on multiple computers connected over a network, universally uses MPI (Ref. 60) as its programming model. Graphics processing units have also been used to parallelize MOC (Ref. 46), which hold their own parallelization challenges.

A natural way to use a distributed memory cluster is to split the reactor geometry among the MPI ranks. During a transport sweep, each rank is responsible for segment integration across tracks that intersect the assigned portion of the domain. The partitioning of the domain is called SDD (Ref. 61) and has been applied to reactor geometries.9,43 Because compute work scales with partition volume and communication scales with partition surface area, Cartesian decompositions should be effective, so long as each partition volume and contained surface area is approximately equal.43 This has the caveat of being inflexible in the number of compute nodes to be used. Consider the partitioning of a cube into equal volumes; only 1, 8, 27, 64, and so on partitions are compatible with this scheme assuming the partitions are also cube shaped.

Another restriction is that MRT (Ref. 43) is often employed with SDD, in which case the rays in each partition are the same and meet each other at boundaries. This allows for a simplified communication pattern with only nearest neighbors. Within each source iteration, the incoming angular flux is set, and all rays are traced from one edge of the partition boundary to the next partition boundary and finally communicated to the neighbor. The outcome of this process is that angular flux data only move across one partition during each source iteration. Without the presence of acceleration methods, information traverses the domain more slowly than when following the track completely at each iteration, inhibiting convergence, as confirmed by Ref. 9.

MOCkingbird utilizes both MPI and OpenMP for its parallelization, with an arbitrary SDD using MPI and optional OpenMP threads to process the work on each spatial domain. MOCkingbird benefits from its integration in the MOOSE framework to leverage its existing partitioning algorithms. Because MOCkingbird utilizes neither Cartesian SDD nor MRT, it is not subject to partition multiplicity restrictions or degraded transport convergence without acceleration. An arbitrary partition multiplicity may be used. Moreover, because rays are traced completely through the domain on each iteration, convergence properties for highly decomposed geometries do not differ from serial computations. This property is important for simulation reproducibility across computing platforms.

IV.F. Memory

Full-core reactor 3D simulations require enormous compute resources, but lie well within the capabilities of modern high-performance computing. For the 3D pressurized water reactor core calculation in the BEAVRS benchmark,62 presented in Sec. V.D, there are 1.4 × 10⁹ FSRs, 300 × 10⁶ tracks, and 70 energy groups. The storage of the scalar fluxes and the domain boundary angular fluxes is approximately 140 × 10⁹ double-precision floating-point numbers (1 TB of memory). Ideally, as the problem is decomposed into smaller partitions, the total memory used should be roughly constant. However, with MRT, this is not the case.
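The quoted 140 × 10⁹ figure is consistent with the problem dimensions above if one assumes one double per FSR per group for the scalar flux and one per track endpoint per group for the boundary angular flux:

$$\underbrace{1.4 \times 10^9 \times 70}_{\text{scalar flux}} + \underbrace{300 \times 10^6 \times 2 \times 70}_{\text{boundary angular flux}} = (98 + 42) \times 10^9 = 140 \times 10^9 \ \text{doubles} \approx 1.1\ \text{TB} .$$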
As the number of partitions grows with MRT, so does the memory usage, as the angular flux is stored at the beginning and end of each track on each partition, and the total partition surface area increases. To compute an estimate of the total memory that would be used by MRT for the 3D BEAVRS problem, a few assumptions are made: a 360 × 360 × 460 cm domain size, 32 azimuthal angles, four polar angles, isotropic tracks, and a domain partitioning using cubic numbers of MPI ranks. Using these assumptions, the graph in Fig. 4 is produced, showing the amount of memory used to store the partition boundary angular fluxes. The growth of memory with MPI ranks with MRT could represent an issue for usability with large-scale compute jobs. However, this could be somewhat alleviated by using hybrid distributed shared memory, in which MPI ranks are used across nodes and local (on-node), non-MPI processes are created that can share on-node memory as needed (for example, for angular fluxes between partitions on the same node).

In contrast, MOCkingbird only stores angular fluxes on the global domain boundaries, as characteristics are followed across the entire domain at every sweep, so intermediate storage is not required. Therefore, there is no increase in angular flux storage as the problem is decomposed into smaller partitions. MOCkingbird still allocates a memory pool of rays and angular fluxes in each domain, but it is much smaller and only serves to avoid constant allocation and de-allocation of angular flux arrays.


Fig. 4. Growth of angular flux storage versus scalar flux storage when using MRT. Computed using partitioning into cubes of MPI ranks (from Ref. 43). [Axes: aggregate storage (GB) versus MPI ranks.]
17 × 17 fuel pins and a water reflector. One of the
V. BENCHMARK PROBLEMS

Section IV described the algorithms that MOCkingbird utilizes and their parallel performance. This section explores their efficacy for solving steady-state, k-eigenvalue, neutron transport benchmarks. Specifically, four benchmarks are analyzed: the 2D C5G7 (Ref. 55), the 3D C5G7 Rodded B,63 the 2D BEAVRS, and the 3D BEAVRS (Ref. 62).

We abbreviate the average absolute pin power error as "AVG," the pin peaking factor-weighted average absolute pin power error as "MRE," and the root-mean-square pin power error as "RMS." The metric "nanoseconds per integration" (NSI) is also introduced, measuring the mean segment traversal time per unknown, as

$$\mathrm{NSI} \equiv \frac{T}{\sum_k \sum_i N_g \, N_p} , \quad (14)$$

where

T = time for a source iteration (in nanoseconds)
k = tracks
i = segments on each track
Ng = number of energy groups
Np = number of polar angles in the polar quadrature.

When considering scalability, if the number of ranks is doubled, it would ideally be the case that the time T would be cut in half, and therefore NSI would also be reduced by a factor of 2. With this, the metric "levelized nanoseconds per integration" (LNSI) is also introduced as

$$\mathrm{LNSI} \equiv \mathrm{NSI} \cdot N_{\mathrm{procs}} , \quad (15)$$

where Nprocs is the number of ranks used. This scales NSI back to a more universal metric; if a code is perfectly scaling, the LNSI stays constant. This metric has been utilized in other MOC papers.9,20,64
lished results.20 In addition, the normalized absolute
T = time for a source iteration (in nanoseconds) relative pin power error, shown in Fig. 7, is in line with
the results found in Ref. 65, showing larger error in the
k = tracks
fuel pins near the reflector.
i = segments on each track These simulations utilized 160 MPI ranks spread
Ng = number of energy groups across four nodes of the Lemhi cluster at Idaho
National Laboratory (INL). Table II details the perfor­
Np = number of polar angles in the polar quadrature. mance of each run for the angular refinement study.
“Solve time” is the total time to convergence in seconds
When considering scalability, if the number of ranks is spent doing source iterations. The number of iterations
doubled, it would ideally be the case that the time T would be and intersections per iteration are also shown in Table II.
cut in half, and therefore NSI would also be reduced by The LNSI can be seen to fall and then remain constant as

NUCLEAR TECHNOLOGY · VOLUME 207 · JULY 2021


Fig. 5. (a) Quarter-core C5G7 geometry and (b) mesh detail for the bottom right corner of the core. Colors represent sets of fuel mixtures and the moderator.

TABLE I
The 2D C5G7 Results with Increasing Azimuthal Angles and an Azimuthal Spacing of 0.01 cm

Angles   Sweeps   keff      Δρ (pcm)   AVG    MRE    RMS
4        648      1.18552   −102       1.95   1.78   2.42
8        656      1.18484   −169       0.43   0.38   0.53
16       656      1.18512   −139       0.27   0.20   0.38
32       655      1.18638   −14        0.28   0.22   0.40
64       654      1.18664   11         0.29   0.24   0.42
128      654      1.18676   23         0.30   0.24   0.43

V.B. 3D C5G7 Benchmark

The C5G7 benchmark has been extended to a full suite of 3D benchmarks.63 There are three primary configurations in the 3D benchmark: Unrodded, Rodded A, and Rodded B, of which this study focuses on only the Rodded B because it offers the largest challenge by having multiple banks of control rods partially inserted.

Four meshes were created with 50, 100, 200, and 400 axial layers, which contain 5.6, 11.2, 22.4, and 44.8 M elements, respectively. The meshes were extruded from our C5G7 mesh, with 32 azimuthal angles with a spacing of 0.1 cm and six polar angles with 0.3 cm axial spacing chosen. This resulted in the generation of 53.9 M tracks. These simulations utilized 400 MPI ranks and ten nodes on the Lemhi cluster (two processors per node with 20 cores per processor and 192 GB memory) at INL. The convergence criterion was a 10⁻⁵ relative change in the fission source. The results of this study can be found in Table III with a representative 3D scalar flux solution shown in Fig. 8.

With these results, it is seen that MOCkingbird achieves a keff within 50 pcm of the benchmark solution on the finest grid. It is clear that axial fidelity plays a large role in accuracy, as the coarsest mesh has an eigenvalue that is 464 pcm too low.

A polar angle study was also completed. The 400-axial-layer mesh was used with the same configuration noted previously, save for the number of polar angles. The detailed results can be viewed in Table IV. While the error is already low by six polar angles, a stationary point is not met until ten polar angles.

Table V details the performance of MOCkingbird for the solution of the C5G7 Rodded B. One interesting trend in Table V is that the LNSI increases as the number of axial layers increases.


Fig. 6. The 2D C5G7 results with 128 azimuthal angles and an azimuthal spacing of 0.01 cm.

Fig. 7. The 2D C5G7 absolute relative error (in percent) in normalized pin powers with an azimuthal spacing of 0.01 cm: (a) 128 azimuthal angles and (b) 4 azimuthal angles.

All solves were performed using 400 MPI ranks; therefore, increasing the mesh density should, in theory, increase the amount of local work to be done and decrease the LNSI. However, additional cache misses will occur due to the greater number of elements traversed in an unstructured manner during ray tracing.

V.C. 2D BEAVRS Benchmark

The BEAVRS benchmark62 was developed at the Massachusetts Institute of Technology (MIT) as a realistic benchmark with real-world measurements for comparison. It is a full-core benchmark containing 193 fuel assemblies, each with an array of 17 × 17 fuel rods, guide tubes, and burnable poisons. Two fuel cycles are contained within the benchmark; however, the current study focuses on the fresh core configuration from cycle 1. This section solves a 2D configuration of the BEAVRS benchmark while Sec. V.D explores a fully 3D solution.

The layout of the assemblies for cycle 1 can be seen in Fig. 9. There are three different levels of enrichment: 3.1%, 2.4%, and 1.6%, and multiple layouts of burnable poisons (6, 12, and 16). Each pin cell is fully specified by the benchmark, without any homogenization.


TABLE II
The 2D C5G7 Performance Characteristics

Angles   Solve Time (s)   Iterations   Intersections   LNSI
4        66               648          12 014 062      41
8        77               656          22 442 732      31
16       115              656          44 105 097      25
32       194              655          88 073 264      24
64       362              654          175 991 664     24
128      770              654          351 892 634     24

TABLE III
The 3D C5G7 Rodded B Results with Increasing Axial Elevations

Axial Layers   keff       Δρ (pcm)   AVG     RMS
50             1.073129   −464       0.892   1.160
100            1.076307   −146       0.571   0.825
200            1.077239   −53        0.492   0.739
400            1.077498   −27        0.470   0.713

Fig. 8. The 3D C5G7 thermal scalar flux with 50 axial layers.

V.C.1. Meshing

The 2D mesh is built using pin cells generated from Cubit,68 as shown in Fig. 10, which are then utilized in the MeshGenerator system to create assemblies and ultimately the core. In addition to the fuel assemblies, the baffle and bypass water are meshed as part of "water assemblies" in Cubit, which are then rotated to fit around the core and added to the core pattern. Figure 11 shows the baffle and water meshes. A view of the full geometry can be found in Fig. 12. Ultimately, the mesh used for this study contained 10 553 408 elements.

V.C.2. Cross Sections and Reference Values

The BEAVRS benchmark does not include a set of cross sections; rather, it specifies the materials in the reactor used to generate the cross sections. However, a detailed OpenMC model exists69 that can be utilized to generate such cross sections for the Hot Zero Power isothermal case at 975 ppm boron and 560 K. For this study, Guillaume Giudicelli worked with Zhaoyuan Liu at MIT using OpenMC and the Cumulative Migration Method57,70 to build transport-corrected 70-energy-group cross sections that are spatially averaged by material, with the reflector distinct from the moderator.

OpenMC was used to compute a normalized set of assembly powers. Those normalized powers, collapsed to one of the symmetric octants of the core, can be found in Fig. 13; they are used as reference values for comparison with the calculations done with MOCkingbird. Also, using this set of cross sections with OpenMOC for a detailed 2D calculation without axial leakage produced a keff of 1.00188 in two dimensions. That result utilized a flat source model with the pin cell moderator discretized into eight rings and eight sectors and the fuel into four rings and four sectors. The OpenMOC calculation utilized 64 azimuthal angles with a 0.05-cm spacing and TY (Ref. 67) polar quadrature with three angles. The 300 pcm difference between OpenMOC and OpenMC is a resonance self-shielding error and can be resolved by introducing equivalence factors.71–73


TABLE IV
The 3D C5G7 Rodded B Polar Angle Study Results

Angles   Sweeps   keff    Δρ (pcm)   AVG     MRE     RMS
2        708      1.082   465        1.519   1.250   2.259
4        706      1.078   29         0.602   0.482   0.960
6        709      1.077   −27        0.470   0.386   0.713
8        710      1.077   −43        0.428   0.358   0.629
10       711      1.077   −49        0.408   0.344   0.587
12       711      1.078   −50        0.396   0.336   0.563

TABLE V
The 3D C5G7 Performance Characteristics

Axial Layers   Solve Time (s)   Iterations   LNSI   Intersections/Iteration
50             15 488           700          99     12 386 466 041
100            18 575           707          110    13 380 319 921
200            24 561           709          126    15 370 859 231
400            33 659           709          138    19 351 886 010

V.C.3. Results

Utilizing this problem setup, MOCkingbird was run on the Lemhi cluster at the INL to solve the 2D full-core eigenvalue problem. The problem settings and computational requirements can be found in Table VI. With these settings, a keff of 1.00231 was computed, 43 pcm above the OpenMOC solution of Giudicelli mentioned previously. The difference can be explained by the difference between the source discretizations. In addition, as shown in Fig. 14, the normalized assembly power differences are all within 1% of the OpenMC results. The in-out tilt in the assembly power error distribution is believed to stem from shortcomings of the transport correction.11,70

V.C.4. Scalability

The 2D BEAVRS benchmark problem represents an ideal case to test scalability with realistic geometry and angular/spatial quadratures. A scalability study was performed with a mesh that did not represent the inter-assembly gap and therefore had 10 343 424 total elements. The results of the strong scaling study can be viewed in Fig. 16 with a representative solution shown in Fig. 15. The NSIs shown in Fig. 16a are close to ideal out to 1024 MPI processes, with performance gradually tapering off after that point. This behavior is shown in Fig. 16b, where parallel efficiency goes from over 80% at 1024 MPI processes down to 40% at 18 432 MPI processes. At 18 432 processes, each MPI rank is only responsible for 550 elements. This broad range in scalability makes MOCkingbird a flexible tool for full-core analysis.

V.D. 3D BEAVRS Benchmark

Three-dimensional simulation of the BEAVRS core is challenging due to the size of the domain and the intricate axial detail that must be modeled. In all, 34 separate axial zones must be accounted for. In addition, as shown in Refs. 9, 11, and 74 and in Sec. V.B, a fine axial discretization is needed for accuracy: optimally < 2.0-cm mesh layers. With the core being 460 cm in height, that is more than 230 layers in the axial mesh. Due to constraints imposed by the geometrical axial elevations, 241 axial layers were required. If the full-core mesh from Sec. V.C is used as a template and extruded on 241 elevations, that is more than 2.5 billion elements.

V.D.1. Geometry and Meshing

Before running the 3D problem, the 3D mesh needs to be generated. This is accomplished using the MOOSE mesh generation capability for "extrusion." However, due to the size and complexity of the 3D mesh for BEAVRS, this is an intricate process involving several steps. The target number of cores used for solving the 3D BEAVRS problem was chosen to be 12 000. Therefore, the mesh generation process ultimately needs to create a mesh suitable for running with that number of MPI ranks.


Fig. 9. BEAVRS benchmark assembly layout for cycle 1 (Ref. 62).

Within the axial elevations detailed within the BEAVRS benchmark, there are some short layers. Several are below 1 cm, with the smallest being 0.394 cm. If a mesh similar to the one from Sec. V.C is extruded using those elevations, then tiny material regions, like the helium gap in a fuel pin cell, end up as tiny 3D volumes requiring a fine track laydown. To combat this, several of the small axial layers were merged, going from 34 to 26 layers. Because of similar constraints, the 40-μm assembly gaps were also neglected. A fully geometrically detailed model with angular and spatial convergence will be examined in a future report.

Generating the 3D mesh for BEAVRS used these steps:

1. Generate the 2D mesh.
2. Partition/split the 2D mesh for the number of MPI ranks to be used when running the 3D problem.
3. Using the number of MPI ranks the 3D problem will be run with, read in the split 2D mesh.
4. In parallel, extrude the mesh, changing the material definitions for each elevation.
5. Repartition the 3D mesh using the weighted hierarchical partitioner.
6. Write out the individual partitions to separate files.

Step 2 is critical because there is not enough memory for every MPI rank to read the entire 2D mesh at the same time. Therefore, each MPI process reads a small piece of it. A new parallel extrusion capability was created for step 4. Even for the 250 layers needed to create the 3D mesh, the parallel extrusion is nearly instantaneous (just seconds) due to using 12 000 MPI ranks. Without step 5, the mesh partitioning would simply be a huge number of columns (corresponding to the partitioning of the 2D mesh). Once these steps are complete, individual files, one for each MPI rank, are created that contain a portion of the 3D mesh. When MOCkingbird is ultimately run to solve the k-eigenvalue problem, each MPI rank reads its particular part of the 3D mesh.

When this process is carried out for a quarter-core 3D mesh with 241 axial layers, it generates 12 000 files containing a total of 652 854 540 elements. Those files are 162 GB in total size. To control the size of the full-core problem and make it tractable to run within the available compute time, 128 axial layers were used. This brought the 12 000 mesh files to 320 GB and 1 386 977 280 elements. That number of elements is the largest mesh ever used by a MOOSE-based application.
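As a consistency check on these counts: 652 854 540/241 = 2 708 940 elements per extruded layer for the quarter core, while 1 386 977 280/128 = 10 835 760 per layer for the full core, exactly four times as many, as expected in going from a quarter-core to a full-core radial template.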



Fig. 10. Pin cell meshes and inter-assembly gap mesh.

V.E. Three-Dimensional Core Simulations

The full-core, 3D BEAVRS problem represents an enormous challenge for any neutron transport tool. Many groups have simulated it,75–81 some using homogenization81 and many others using Monte Carlo-based codes.77–80 However, to date only one code9,64 has utilized MOC for the full-core 3D analysis of BEAVRS, and Ref. 9 also astutely made use of the extruded nature of BEAVRS to gain efficiency and reduce memory use. Also, another effort74 solved a full-core problem similar to BEAVRS using TRRM, which differs somewhat from conventional MOC. There has yet to be a direct MOC calculation of full-core, 3D BEAVRS by a fully general, unstructured-mesh MOC code without any geometrical assumptions and with the ability to perform geometric deformation. While this study is able to complete this analysis, it does so for only one mesh configuration. More work with a detailed mesh refinement study and acceleration is needed in the future.

Before simulating the full-core system, a 3D quarter-core problem was used for a strong scaling study with 12 000, 6000, 3000, and 1500 MPI ranks. Figure 18 shows the results of this strong scaling study. MOCkingbird performs well, with the LNSI staying essentially constant over a large range of MPI processes. At 12 000 MPI ranks, there are, on average, over 50 000 elements per MPI rank. This suggests, based on previous scaling studies, that this problem could still scale well to around 100 000 MPI ranks. MOCkingbird's parallel efficiency from small jobs to large makes it an efficient tool.

The full-core problem represents a significant increase in computational needs. To make the problem tractable by MOCkingbird with available computational resources, the axial mesh for the full-core simulation uses 128 layers. This brings the full-core mesh, as shown in Fig. 17, to 1 386 977 280 elements. In addition, as shown in Table VII, the polar quadrature is reduced to six polar angles with a polar spacing of 2 cm. The overall effect of coarsening both spatially and angularly is that the number of tracks stays almost constant, and the number of intersections per iteration grows by less than 2×. However, solution accuracy could suffer.


METHOD OF CHARACTERISTICS FOR 3D, FULL-CORE NEUTRON TRANSPORT · GASTON et al. 947

Fig. 13. Normalized assembly powers for the BEAVRS


benchmark as computed by OpenMC . OpenMC computed
eigenvalues: 2D keff ¼ 1:00491 and 3D keff ¼ 1:00024.

TABLE VI
Problem Settings and Computational Requirements for the 2D BEAVRS

Number of elements                      10 558 033
Fission source convergence              1 × 10⁻⁵
Azimuthal angles                        64
Azimuthal spacing                       0.05 cm
TY (Ref. 67) polar quadrature angles    3
MPI processes                           4000
Solve time                              4801 s
LNSI                                    19
Intersections                           2 035 381 032
Number of transport sweeps              2200

Fig. 11. Example of the baffle and water mesh surrounding the core.

Fig. 12. As-meshed geometry for the 2D BEAVRS core.


Fig. 14. Normalized assembly power differences for the 2D BEAVRS solution computed by MOCkingbird compared to the OpenMC result.

The full-core problem represents a significant increase in computational needs. To make the problem tractable by MOCkingbird with the available computational resources, the axial mesh for the full-core simulation uses 128 layers. This brings the full-core mesh, shown in Fig. 17, to 1 386 977 280 elements. In addition, as shown in Table VII, the polar quadrature is reduced to six polar angles with a polar spacing of 2 cm. The overall effect of coarsening both spatially and angularly is that the number of tracks stays almost constant while the number of intersections per iteration grows by less than 2×; however, solution accuracy could suffer.

The thermal flux solution for the full-core problem is visible in Fig. 19a. The spacer grids are plain to see as thermal flux depressions. The keff is low at 0.99402, but that is to be expected given the coarse angular discretization and coarse axial layers. In addition to discretization error, convergence was limited (with 3× fewer power iterations than the 2D problem), and some geometric approximations were made (neglecting the inter-assembly gaps and homogenizing some small axial zones), both of which contribute to the degraded solution fidelity shown in Fig. 19b.
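To put the eigenvalue discrepancy in perspective, it can be expressed as a reactivity difference against the OpenMC 3D reference from Fig. 13. The snippet below is a worked example of this standard conversion, not MOCkingbird code.

```cpp
// Worked example: express the keff discrepancy as a reactivity difference
// in pcm, using the two eigenvalues quoted in this section.
#include <cstdio>

int main()
{
  const double k_mockingbird = 0.99402; // full-core 3D result (Table VII)
  const double k_openmc      = 1.00024; // OpenMC 3D reference (Fig. 13)

  // delta-rho = (1/k_ref - 1/k) * 1e5, about -626 pcm here.
  const double drho_pcm = (1.0 / k_openmc - 1.0 / k_mockingbird) * 1.0e5;
  std::printf("delta rho = %.0f pcm\n", drho_pcm);
  return 0;
}
```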


Fig. 15. Thermal flux (0 to 9.87 eV) for the 2D BEAVRS benchmark.

Fig. 16. Strong scaling results for 2D BEAVRS.

This solution, though not perfect, represents a large step forward for unstructured-mesh MOC. The result and the performance of 33 LNSI show that unstructured-mesh MOC is a viable capability for full-core reactor simulation. Once the in-progress development of an acceleration method is complete, this tool will prove to be useful for high-fidelity neutron transport calculations; at that time, this solution will be revisited.


TABLE VII
Problem Settings and Computational Requirements for the Full-Core 3D BEAVRS Solution

Number of elements            1 386 977 280
Axial layers                  128
Average element height        3.6 cm
Number of tracks              302 341 016
Fission source convergence    2 × 10⁻⁴
keff convergence              2.1 × 10⁻⁶
Azimuthal angles              32
Azimuthal spacing             0.1 cm
Polar angles                  6
Polar spacing                 2 cm
MPI processes                 12 000
Solve time                    33.1 h
LNSI                          33
Intersections                 722 139 821 713
Number of transport sweeps    800
keff                          0.99402
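Table VII lists two separate convergence criteria. The sketch below shows how such criteria are commonly evaluated after each power iteration; the particular norms used (RMS of the relative fission source change and the absolute change in keff) are an assumption here, not necessarily MOCkingbird's exact tests.

```cpp
// Generic end-of-power-iteration convergence check; the specific norms are
// common choices and assumed here, not taken from MOCkingbird.
#include <cmath>
#include <cstddef>
#include <vector>

bool converged(const std::vector<double> & source,
               const std::vector<double> & old_source,
               double keff, double old_keff,
               double source_tol, // e.g., 2e-4 per Table VII
               double keff_tol)   // e.g., 2.1e-6 per Table VII
{
  // RMS of the relative change in the fission source over all regions.
  double sum_sq = 0.0;
  std::size_t n = 0;
  for (std::size_t i = 0; i < source.size(); ++i)
    if (old_source[i] != 0.0)
    {
      const double rel = (source[i] - old_source[i]) / old_source[i];
      sum_sq += rel * rel;
      ++n;
    }
  const double source_rms =
      n ? std::sqrt(sum_sq / static_cast<double>(n)) : 0.0;

  return source_rms < source_tol && std::fabs(keff - old_keff) < keff_tol;
}
```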

Fig. 17. Slices showing the detail in the top and midplane of the as-meshed full-core geometry.

Fig. 18. Strong scaling for the 3D quarter-core BEAVRS problem (levelized ns/integration versus MPI ranks). A constant LNSI equates to 100% parallel efficiency.

VI. CONCLUSIONS AND FUTURE WORK

A new implementation of MOC using unstructured mesh as the underlying geometrical description, named MOCkingbird, was presented. This implementation overcomes many of the shortcomings of previous research in this area, including non-volume-preserving meshes, serial track generation, approximations for boundary conditions, issues with reentrant parallel ray tracing, excessive memory usage, and load imbalances. The tool was applied to several benchmark problems to show both fidelity and scalability. Ultimately, a full-core, 3D BEAVRS benchmark calculation was achieved using 12 000 MPI ranks in 12.26 h. This result shows that the tool is capable of high-fidelity, full-core, 3D calculations. However, it should be noted that in order for MOCkingbird to be usable for design and analysis, a sufficient acceleration scheme needs to be developed. The next steps in this research involve completing the implementation of the acceleration method, adding transient capability, further research on mesh partitioning, generalization of the ray-tracing routines for use by other MOOSE-based codes, and enhancements to the communication algorithms.

Acknowledgments

This work was funded under multiple programs and organizations: the INL Laboratory Directed Research and Development program and the U.S. Department of Energy (DOE) Nuclear Energy Advanced Modeling and Simulation and Consortium for Advanced Simulation of Light Water Reactors programs. This research made use of the resources of the High Performance Computing Center at INL, which is supported by the DOE Office of Nuclear Energy and the Nuclear Science User Facilities. This manuscript has been authored by Battelle Energy Alliance, LLC under contract number DE-AC07-05ID14517 with the DOE. The U.S. government retains and the publisher, by accepting this article for publication, acknowledges that the U.S. government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. government purposes.


Fig. 19. Thermal flux for the full core shown on two slices through the core.

References

1. T. DOWNAR et al., PARCS V3.0 US NRC Core Neutronic Simulator User Manual, University of Michigan (2012).

2. T. BAHADIR and S.-Ö. LINDAHL, “Studsvik’s Next Generation Nodal Code SIMULATE-5,” Proc. ANFM-2009 Conf. Advances in Nuclear Fuel Management IV, Hilton Head Island, South Carolina, April 12–15, 2009.

3. M. B. IWAMOTO and M. TAMITANI, “Methods, Benchmarking, and Applications of the BWR Core Simulator Code AETNA,” Proc. Int. Conf. Advances in Nuclear Fuel Management III, Hilton Head Island, South Carolina, October 5–7, 2003.

4. K. SMITH, “CASMO-4 Characteristics Methods for Two-Dimensional PWR and BWR Core Calculations,” Trans. Am. Nucl. Soc., 83, 322 (2000).

5. R. SANCHEZ et al., “APOLLO II: A User-Oriented, Portable, Modular Code for Multigroup Transport Assembly Calculations,” Nucl. Sci. Eng., 100, 3, 352 (1988); https://doi.org/10.13182/NSE88-3.

6. J. CASAL, “HELIOS: Geometric Capabilities of a New Fuel-Assembly Program,” Proc. Int. Topl. Mtg. Advances in Mathematics, Computations and Reactor Physics, Pittsburgh, Pennsylvania, April 28–May 2, 1991, Vol. 2, 10.2.1 1 (1991); https://ci.nii.ac.jp/naid/10016744853/en/ (current as of July 30, 2020).

7. D. KNOTT and E. WEHLAGE, “Description of the LANCER02 Lattice Physics Code for Single-Assembly and Multibundle Analysis,” Nucl. Sci. Eng., 155, 3, 331 (2007); https://doi.org/10.13182/NSE155-331.

8. B. FONTAINE et al., “Description and Preliminary Results of PHENIX Core Flowering Test,” Nucl. Eng. Des., 241, 10, 4143 (2011); https://doi.org/10.1016/j.nucengdes.2011.08.041.

9. G. GUNOW, “Full Core 3D Neutron Transport Simulation Using the Method of Characteristics with Linear Sources,” PhD Thesis, Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (2018).

10. G. GUNOW, B. FORGET, and K. SMITH, “Full Core 3D Simulation of the BEAVRS Benchmark with OpenMOC,” Ann. Nucl. Energy, 134, 299 (2019); https://doi.org/10.1016/j.anucene.2019.05.050.

11. G. GIUDICELLI, “A Novel Equivalence Method for High Fidelity Hybrid Stochastic-Deterministic Neutron Transport Simulations,” PhD Thesis, Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (2020).


12. S. SANTANDREA, L. GRAZIANO, and D. SCIANNANDRONE, “Accelerated Polynomial Axial Expansions for Full 3D Neutron Transport MOC in the APOLLO3 Code System as Applied to the ASTRID Fast Breeder Reactor,” Ann. Nucl. Energy, 113, 194 (2018); https://doi.org/10.1016/j.anucene.2017.11.010.

13. B. LINDLEY et al., “Developments Within the WIMS Reactor Physics Code for Whole Core Calculations,” presented at the Int. Conf. on Mathematics & Computational Methods Applied to Nuclear Science & Engineering, Jeju, Korea, April 16–20, 2017.

14. P. ARCHIER et al., “Validation of the Newly Implemented 3D TDT-MOC Solver of APOLLO3 Code on a Whole 3D SFR Heterogeneous Assembly,” presented at PHYSOR, Sun Valley, Idaho, May 1–5, 2016.

15. J. R. TRAMM et al., “ARRC: A Random Ray Neutron Transport Code for Nuclear Reactor Simulation,” Ann. Nucl. Energy, 112, 693 (2018); https://doi.org/10.1016/j.anucene.2017.10.015.

16. D. GASTON et al., “MOOSE: A Parallel Computational Framework for Coupled Systems of Nonlinear Equations,” Nucl. Eng. Des., 239, 10, 1768 (2009); https://doi.org/10.1016/j.nucengdes.2009.05.021.

17. C. J. PERMANN et al., “MOOSE: Enabling Massively Parallel Multiphysics Simulation,” SoftwareX, 11, 100430 (2020); https://doi.org/10.1016/j.softx.2020.100430.

18. J. ASKEW, “A Characteristics Formulation of the Neutron Transport Equation in Complicated Geometries,” United Kingdom Atomic Energy Authority (1972).

19. D. KNOTT and A. YAMAMOTO, “Lattice Physics Computations,” Handbook of Nuclear Engineering, pp. 913–1239, D. G. CACUCI, Ed., Springer (2010); https://doi.org/10.1007/978-0-387-98149-9.

20. W. BOYD et al., “The OpenMOC Method of Characteristics Neutral Particle Transport Code,” Ann. Nucl. Energy, 68, 43 (2014); https://doi.org/10.1016/j.anucene.2013.12.012.

21. B. KOCHUNAS et al., “Overview of Development and Design of MPACT: Michigan Parallel Characteristics Transport Code,” Proc. 2013 Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013), Sun Valley, Idaho, May 5–9, 2013.

22. K. S. SMITH, “Nodal Method Storage Reduction by Nonlinear Iteration,” Trans. Am. Nucl. Soc., 44, 265 (1983).

23. R. E. ALCOUFFE, “Diffusion Synthetic Acceleration Methods for the Diamond-Differenced Discrete-Ordinates Equations,” Nucl. Sci. Eng., 64, 2, 344 (1977); https://doi.org/10.13182/NSE77-1.

24. C. RABITI et al., “Parallel Method of Characteristics on Unstructured Meshes for the UNIC Code,” Proc. Int. Conf. Physics of Reactors, Interlaken, Switzerland, September 14–19, 2008.

25. K. S. SMITH and J. D. RHODES, “Full-Core, 2-D, LWR Core Calculations with CASMO-4E,” presented at PHYSOR 2002, Seoul, Korea, October 7–10, 2002.

26. T. M. EVANS et al., “Denovo: A New Three-Dimensional Parallel Discrete Ordinates Code in SCALE,” Nucl. Technol., 171, 2, 171 (2010); https://doi.org/10.13182/NT171-171.

27. A. ZHU et al., “Transient Methods for Pin-Resolved Whole Core Transport Using the 2D-1D Methodology in MPACT,” Proc. M&C 2015, Nashville, Tennessee, April 19–23, 2015, p. 19 (2015).

28. M. W. REED, “The ‘Virtual Density’ Theory of Neutronics,” PhD Thesis, Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (2014); https://dspace.mit.edu/handle/1721.1/87497 (current as of July 30, 2020).

29. Y. WANG, “Nonlinear Diffusion Acceleration for the Multigroup Transport Equation Discretized with SN and Continuous FEM with Rattlesnake,” Proc. 2013 Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science Engineering (M and C 2013), Sun Valley, Idaho, May 5–9, 2013.

30. M. L. ADAMS and E. W. LARSEN, “Fast Iterative Methods for Discrete-Ordinates Particle Transport Calculations,” Prog. Nucl. Energy, 40, 1, 3 (2002); https://doi.org/10.1016/S0149-1970(01)00023-3.

31. S. SANTANDREA et al., “A Neutron Transport Characteristics Method for 3D Axially Extruded Geometries Coupled with a Fine Group Self-Shielding Environment,” Nucl. Sci. Eng., 186, 3, 239 (2017); https://doi.org/10.1080/00295639.2016.1273634.

32. A. MARIN-LAFLECHE, M. SMITH, and C. LEE, “Proteus-MOC: A 3D Deterministic Solver Incorporating 2D Method of Characteristics,” Proc. 2013 Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science Engineering (M and C 2013), Sun Valley, Idaho, May 5–9, 2013.

33. M. SMITH et al., “Method of Characteristics Development Targeting the High Performance Blue Gene/P Computer at Argonne National Laboratory,” Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science Engineering (M&C 2011), Rio de Janeiro, Brazil, May 8–12, 2011.

34. N. CHO, G. LEE, and C. PARK, “Fusion Method of Characteristics and Nodal Method for 3D Whole Core Transport Calculation,” Trans. Am. Nucl. Soc., 86, 322 (2002).

35. B. COLLINS et al., “Stability and Accuracy of 3D Neutron Transport Simulations Using the 2D/1D Method in MPACT,” J. Comput. Phys., 326, 612 (2016); https://doi.org/10.1016/j.jcp.2016.08.022.


36. C. LEE et al., “Simulation of TREAT Cores Using High-Fidelity Neutronics Code PROTEUS,” Proc. M&C 2017, Jeju, Korea, April 16–20, 2017, p. 16 (2017).

37. X. YANG and N. SATVAT, “MOCUM: A Two-Dimensional Method of Characteristics Code Based on Constructive Solid Geometry and Unstructured Meshing for General Geometries,” Ann. Nucl. Energy, 46, 20 (2012); https://doi.org/10.1016/j.anucene.2012.03.009.

38. P. COSGROVE and E. SHWAGERAUS, “Development of MoCha-Foam: A New Method of Characteristics Neutron Transport Solver for OpenFOAM,” presented at the Int. Conf. Mathematics & Computational Methods Applied to Nuclear Science Engineering (M&C 2017), Jeju, Korea, April 16–20, 2017.

39. T. MAZUMDAR and S. DEGWEKER, “Solution of the Neutron Transport Equation by the Method of Characteristics Using a Linear Representation of the Source Within a Mesh,” Ann. Nucl. Energy, 108, 132 (2017); https://doi.org/10.1016/j.anucene.2017.04.011.

40. S. SHANER et al., “Theoretical Analysis of Track Generation in 3D Method of Characteristics,” presented at the Joint Int. Conf. Mathematics and Computation (M&C), Nashville, Tennessee, April 19–23, 2015.

41. MPI FORUM, “MPI: A Message-Passing Interface Standard, Version 3.0” (2015); https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf (accessed Aug. 23, 2019).

42. W. WU et al., “Improvements of the CMFD Acceleration Capability of OpenMOC,” Nucl. Eng. Technol., 52, 10, 2162 (2020); https://doi.org/10.1016/j.net.2020.04.001.

43. B. M. KOCHUNAS, “A Hybrid Parallel Algorithm for the 3-D Method of Characteristics Solution of the Boltzmann Transport Equation on High Performance Compute Clusters,” PhD Thesis, Nuclear Engineering and Radiological Sciences, University of Michigan (2013).

44. G. COLOMER et al., “Parallel Algorithms for Sn Transport Sweeps on Unstructured Meshes,” J. Comput. Phys., 232, 1, 118 (2013); https://doi.org/10.1016/j.jcp.2012.07.009.

45. J. I. VERMAAK et al., “Massively Parallel Transport Sweeps on Meshes with Cyclic Dependencies,” J. Comput. Phys., 425, 109892 (2021); https://doi.org/10.1016/j.jcp.2020.109892.

46. W. BOYD, “Reactor Agnostic Multi-Group Cross Section Generation for Fine-Mesh Deterministic Neutron Transport Simulations,” PhD Thesis, Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (2017).

47. G. I. BELL and S. GLASSTONE, “Nuclear Reactor Theory,” U.S. Atomic Energy Commission, Washington, District of Columbia (1970).

48. A. HÉBERT, Applied Reactor Physics, 2nd ed., Presses Internationales Polytechnique (2009).

49. S. STIMPSON, B. COLLINS, and B. KOCHUNAS, “Improvement of Transport-Corrected Scattering Stability and Performance Using a Jacobi Inscatter Algorithm for 2D-MOC,” Ann. Nucl. Energy, 105, 1 (2017); https://doi.org/10.1016/j.anucene.2017.02.024.

50. V. VLADIMIROV, “Numerical Solution of the Kinetic Equation for the Sphere,” Vycisl. Mat., 3, 3 (1958).

51. V. DAVIDENKO and V. TSIBULSKY, “The Method of Characteristics with Angle Directions Stochastic Choice,” Math. Mod., 15, 8, 75 (2003); http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=mm&paperid=411&option_lang=rus (accessed July 14, 2020).

52. J. RHODES, K. SMITH, and D. LEE, “CASMO-5 Development and Applications,” Proc. Int. Conf. Physics of Reactors (PHYSOR2006), Vancouver, Canada, September 10–14, 2006.

53. B. LINDLEY et al., “Developments Within the WIMS Reactor Physics Code for Whole Core Calculations,” Proc. M&C, Jeju, Korea, April 16–20, 2017.

54. J. R. TRAMM et al., “The Random Ray Method for Neutral Particle Transport,” J. Comput. Phys., 342, 229 (2017); https://doi.org/10.1016/j.jcp.2017.04.038.

55. E. LEWIS et al., “Benchmark Specification for Deterministic 2-D/3-D MOX Fuel Assembly Transport Calculations Without Spatial Homogenization (C5G7 MOX),” Nuclear Energy Agency/Nuclear Science Committee (2001).

56. G. GUNOW, B. FORGET, and K. SMITH, “Stabilization of Multi-Group Neutron Transport with Transport-Corrected Cross-Sections,” Ann. Nucl. Energy, 126, 211 (2019); https://doi.org/10.1016/j.anucene.2018.10.036.

57. Z. LIU et al., “Cumulative Migration Method for Computing Rigorous Diffusion Coefficients and Transport Cross Sections from Monte Carlo,” Ann. Nucl. Energy, 112, 507 (2018); https://doi.org/10.1016/j.anucene.2017.10.039.

58. L. DAGUM and R. MENON, “OpenMP: An Industry Standard API for Shared-Memory Programming,” IEEE Comput. Sci. Eng., 5, 1, 46 (1998); https://doi.org/10.1109/99.660313.

59. B. NICHOLS, D. BUTTLAR, and J. FARRELL, PThreads Programming: A POSIX Standard for Better Multiprocessing, O’Reilly Media, Inc. (1996).

60. W. GROPP et al., “A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard,” Parallel Comput., 22, 6, 789 (1996); https://doi.org/10.1016/0167-8191(96)00024-5.

61. B. KELLEY et al., “CMFD Acceleration of Spatial Domain-Decomposition Neutron Transport Problems,” Proc. Int. Conf. Physics of Reactors (PHYSOR2012), Knoxville, Tennessee, April 15–20, 2012.


62. N. HORELIK et al., “Benchmark for Evaluation and Validation of Reactor Simulations (BEAVRS), V1.0.1,” Proc. Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science & Engineering, Sun Valley, Idaho, May 5–9, 2013.

63. “Benchmark on Deterministic Transport Calculations Without Spatial Homogenisation, MOX Fuel Assembly Extension Case,” Organisation for Economic Co-operation and Development/Nuclear Energy Agency (2005).

64. G. GIUDICELLI et al., “Adding a Third Level of Parallelism to OpenMOC, an Open Source Deterministic Neutron Transport Solver,” presented at the Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science and Engineering, Portland, Oregon, August 25–29, 2019.

65. R. M. FERRER and J. D. RHODES III, “A Linear Source Approximation Scheme for the Method of Characteristics,” Nucl. Sci. Eng., 182, 2, 151 (2016); https://doi.org/10.13182/NSE15-6.

66. X. YANG, R. BORSE, and N. SATVAT, “MOCUM Solutions and Sensitivity Study for C5G7 Benchmark,” Ann. Nucl. Energy, 96, 242 (2016); https://doi.org/10.1016/j.anucene.2016.05.030.

67. A. YAMAMOTO et al., “Derivation of Optimum Polar Angle Quadrature Set for the Method of Characteristics Based on Approximation Error for the Bickley Function,” J. Nucl. Sci. Technol., 44, 2, 129 (2007); https://doi.org/10.1080/18811248.2007.9711266.

68. T. D. BLACKER, W. J. BOHNHOFF, and T. L. EDWARDS, “CUBIT Mesh Generation Environment. Volume 1: Users Manual,” Sandia National Laboratories (1994).

69. “PWR Benchmarks” (2019); https://github.com/mit-crpg/PWR_benchmarks (current as of July 30, 2020).

70. Z. LIU et al., “Conservation of Migration Area by Transport Cross Sections Using Cumulative Migration Method in Deterministic Heterogeneous Reactor Transport Analysis,” Prog. Nucl. Energy, 127, 103447 (2020); https://doi.org/10.1016/j.pnucene.2020.103447.

71. G. GIUDICELLI, K. SMITH, and B. FORGET, “Generalized Equivalence Methods for 3D Multi-Group Neutron Transport,” Ann. Nucl. Energy, 112, 9 (2018); https://doi.org/10.1016/j.anucene.2017.09.024.

72. H. PARK and H. G. JOO, “Practical Resolution of Angle Dependency of Multigroup Resonance Cross Sections Using Parametrized Spectral Superhomogenization Factors,” Nucl. Eng. Technol., 49, 6, 1287 (2017); https://doi.org/10.1016/j.net.2017.07.015.

73. G. GIUDICELLI, K. SMITH, and B. FORGET, “Generalized Equivalence Theory Used with Spatially Linear Sources in the Method of Characteristics for 3D Neutron Transport,” Nucl. Sci. Eng., 194, 1044 (2020); https://doi.org/10.1080/00295639.2020.1765606.

74. J. TRAMM, “Development of the Random Ray Method of Neutral Particle Transport for High-Fidelity Nuclear Reactor Simulation,” PhD Thesis, Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (2018).

75. J. LEPPÄNEN, R. MATTILA, and M. PUSA, “Validation of the Serpent-ARES Code Sequence Using the MIT BEAVRS Benchmark—Initial Core at HZP Conditions,” Ann. Nucl. Energy, 69, 212 (2014); https://doi.org/10.1016/j.anucene.2014.02.014.

76. M. RYU et al., “Solution of the BEAVRS Benchmark Using the nTRACER Direct Whole Core Calculation Code,” J. Nucl. Sci. Technol., 52, 7–8, 961 (2015); https://doi.org/10.1080/00223131.2015.1038664.

77. D. J. KELLY et al., “Analysis of Select BEAVRS PWR Benchmark Cycle 1 Results Using MC21 and OpenMC,” Proc. Int. Conf. Physics of Reactors (PHYSOR2014), Kyoto, Japan, September 28–October 3, 2014.

78. H. J. PARK et al., “Real Variance Analysis of Monte Carlo Eigenvalue Calculation by McCARD for BEAVRS Benchmark,” Ann. Nucl. Energy, 90, 205 (2016); https://doi.org/10.1016/j.anucene.2015.12.009.

79. K. WANG et al., “Analysis of BEAVRS Two-Cycle Benchmark Using RMC Based on Full Core Detailed Model,” Prog. Nucl. Energy, 98, 301 (2017); https://doi.org/10.1016/j.pnucene.2017.04.009.

80. Z. WANG et al., “Validation of SuperMC with BEAVRS Benchmark at Hot Zero Power Condition,” Ann. Nucl. Energy, 111, 709 (2018); https://doi.org/10.1016/j.anucene.2017.09.045.

81. M. ELLIS et al., “Initial Rattlesnake Calculations of the Hot Zero Power BEAVRS,” Idaho National Laboratory (2014).
