12. Neural network potentials
Jinzhe Zeng (a,b), Liqun Cao (a,b), and Tong Zhu (a,b,*)
(a) Shanghai Engineering Research Center of Molecular Therapeutics & New Drug Development, School of Chemistry and Molecular Engineering, East China Normal University, Shanghai, China
(b) NYU-ECNU Center for Computational Chemistry at NYU Shanghai, Shanghai, China
(*) Corresponding author. E-mail address: tzhu@lps.ecnu.edu.cn
Abstract
Recently, artificial neural network-based methods for the construction of potential energy surfaces and molecular dynamics (MD) simulations based on them have been increasingly used in theoretical chemistry. Neural network potentials (NNPs) strike a good balance between accuracy and computational efficiency relative to quantum chemical calculations and MD simulations based on classical force fields. Thus, the NNP is becoming a powerful tool for studying the structure and function of molecules. In this chapter, we introduce the basic theory of NNPs. The construction steps and the usage of NNPs are also introduced in detail, with the MD simulation of methane combustion as an example. We hope that this chapter can help readers who are new to this field but interested in entering it.
Keywords: Neural network, Potential energy surface, Molecular dynamics simulation, Chemical reaction
Introduction
Molecular dynamics (MD) simulation has been a key theoretical tool for studying the structural and dynamical properties of a wide range of chemical systems. One of the essential elements determining the reliability of MD simulations is the underlying potential energy surface (PES), which describes the intra- and intermolecular interactions. There are two main ways to construct the PES for MD. The first is to perform quantum mechanical (QM) calculations on-the-fly, which is known as ab initio MD simulation (AIMD) [1]. The other is to use empirical interatomic potentials (force fields) based on classical molecular mechanics (MM) [2,3]. Force fields undoubtedly have the advantage in terms of computational efficiency. However, their functional forms can only describe classical physical interactions accurately and do not perform well when the motion of electrons must be taken into account. Thus, their accuracy is often questioned, as they lack important quantum effects such as polarization and charge transfer. This is also the reason that only a few force fields can describe chemical reactions. AIMD is more accurate in principle; however, its applications are significantly limited by its computational cost. It thus seems difficult to ensure both the efficiency and the accuracy of a PES at the same time.
In the past two decades, the development of machine learning potentials (MLPs) has provided a way to resolve this contradiction. An MLP employs machine learning models, rather than empirical formulas or the Schrödinger equation, to establish the relationship between the potential energy of a system and its structure and chemical information. Compared with empirical formulas, ML models have greater fitting power and are therefore more accurate. They have been used since the 1990s [4–9]. Following the work of Behler and Parrinello [10], a series of MLP construction approaches have been proposed for a wide variety of chemical systems. For example, Csányi and co-workers [11] proposed the Gaussian Approximation Potentials, Müller et al. proposed the GDML, DTNN, and SchNet methods [12–14], Hammer et al. proposed the kCON model [15], E and co-workers developed the Deep Potential (DeePMD) model [16,17], Jiang et al. developed the EANN model [18], and Dral and co-workers proposed the KREG method [19,20]. Several machine learning methods can be used to build a PES. Among them, the artificial NN is widely used due to its better computational scaling in terms of performance and memory requirements [21,22]. Recently, universal NNPs covering many elements have been proposed [23,24].
Compared with AIMD, the efficiency of the MLP is substantially improved (typically by
more than 3 orders of magnitude). With code optimization or on dedicated hardware, MD
simulation with MLP has been able to simulate systems containing 100 million atoms [17].
However, MLPs are still about an order of magnitude less efficient than classical force fields. Therefore, they have mainly been used to study scientific problems that traditional force fields cannot handle or that require high accuracy, such as the design/discovery of new materials and homogeneous catalysis [25–39]. The latest developments and applications in this field can be found in several recent comprehensive reviews [40–51].
Recently, we have used the neural network-based potentials (NNP) to simulate a number
of typical chemical systems, such as the hydration of zinc ions [52], metalloproteins [53], and
combustion reactions [54,55] (Fig. 1). Metals, especially some transition metals, are important
cofactors in the regulation of protein structure and function. However, important quantum
effects associated with metal coordination are often not accurately described by classical force
fields. We found that, benefiting from its powerful fitting ability, the NNP can easily achieve higher accuracy than classical force fields in these systems. Combustion is an even more complex chemical system that involves thousands of chemical reactions and generates hundreds of molecular species during the process. Traditionally, MD simulations based on reactive force fields such as ReaxFF [56] were used to investigate combustion reaction processes. However, the accuracy of ReaxFF has often been criticized [57,58]. We have developed NNPs as accurate as the reference DFT method to achieve efficient simulations of methane combustion and long-chain alkane pyrolysis. From the MD trajectories, one can not only obtain detailed
reaction networks but also discover new reactions. With this chapter, we hope to pass on our
experience to help more beginners better understand and use NNP and the MD simulations
based on it.
FIG. 1 (A) Radial distribution function (RDF) of the Zn–O distance (solid lines) and the corresponding integration of the RDF (dashed lines) calculated from the trajectory simulated by the NNP. The gray area is the experimental measurement of the RDF [52]. P1 and P2 denote the experimental RDF peaks of the first and second solvation shells, respectively, while N1 and N2 denote the experimental coordination numbers of the first and second solvation shells, respectively. (B) The workflow to construct the NNP PES for zinc proteins [53]. (C) Snapshots of the partial combustion system extracted from the reactive MD simulation of methane combustion [54]. Panel (A) reprinted with permission from M. Xu, T. Zhu, J.Z.H. Zhang, Molecular dynamics simulation of zinc ion in water with an ab initio based neural network potential, J. Phys. Chem. A 123 (2019) 6587–6595. Copyright 2019 American Chemical Society. Panel (B) taken from M. Xu, T. Zhu, J.Z.H. Zhang, Automatically constructed neural network potentials for molecular dynamics simulation of zinc proteins, Front. Chem. 9 (2021) 692200. Panel (C) taken from J. Zeng, L. Cao, M. Xu, T. Zhu, J.Z.H. Zhang, Complex reaction processes in combustion unraveled by neural network-based molecular dynamics simulation, Nat. Commun. 11 (2020) 5713.
Methods
A good NNP model should meet the following requirements: (1) Its accuracy must be very close to that of the quantum chemistry method used to label the data. (2) The only inputs needed by the model should be the element information, atomic coordinates, charges, and spin state. (3) If used for MD simulation, the model should be smooth and differentiable. (4) In most cases, the PES model should preserve the translational, rotational, and permutational symmetries of the system, which is essential to guarantee the transferability of the model.
In most, but not all, NNP models, the potential energy of the system is expressed as the sum of atomic contributions to the energy (Fig. 2):
FIG. 2 The neural network model that generates the potential energy surface for MD simulation. This figure is taken
from J. Zeng, L. Cao, M. Xu, T. Zhu, J.Z.H. Zhang, Complex reaction processes in combustion unraveled by neural network-
based molecular dynamics simulation, Nat. Commun. 11 (2020) 5713.
$$E = \sum_{i=1}^{N} E_i \qquad (1)$$
The energy of atom i is determined by its chemical environment, and the relationship between them is described by a neural network $\mathcal{N}$:

$$E_i = \mathcal{N}\left(\mathcal{D}\left(\mathcal{R}_i^e\right)\right) \qquad (2)$$

The chemical environment refers to the element information and relative positions $\mathcal{R}_i^e$ of atom i and all atoms within a pre-defined distance/angular cut-off centered at atom i. To satisfy requirement (4) mentioned above, mathematical or machine learning methods are used to transform the Cartesian coordinates of the atoms in the chemical environment. The representation obtained after the transformation is called a molecular descriptor $\mathcal{D}$.
A deep neural network function $\mathcal{N}$ is the composition of multiple layers $L_i$:

$$\mathcal{N} = L_n \circ L_{n-1} \circ \cdots \circ L_1 \qquad (3)$$
A layer L usually takes the form

$$y = L(x; w, b) = \phi\left(x^T w + b\right), \qquad (4)$$

where x is the input vector and y is the output vector. w and b are weights and biases, respectively, both of which are trainable. $\phi$ is the activation function, such as ReLU, softplus, sigmoid, tanh, or GELU.
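To make Eqs. (1)–(4) concrete, here is a minimal NumPy sketch of a per-atom feed-forward network whose outputs are summed into a total energy. The layer sizes, random parameters, and descriptor values are illustrative assumptions, not those of any particular NNP implementation:

```python
import numpy as np

def layer(x, w, b, phi=np.tanh):
    """One dense layer y = phi(x^T w + b), Eq. (4)."""
    return phi(x @ w + b)

def atomic_energy(d_i, params):
    """Compose layers (Eq. 3) to map a descriptor D(R_i^e) to E_i (Eq. 2)."""
    x = d_i
    for w, b in params[:-1]:
        x = layer(x, w, b)
    w, b = params[-1]
    return (x @ w + b).item()   # linear output layer for the atomic energy

def total_energy(descriptors, params):
    """Sum the atomic contributions, Eq. (1)."""
    return sum(atomic_energy(d, params) for d in descriptors)

# Toy usage: 3 atoms, descriptor dimension 8, one hidden layer of 16 nodes.
rng = np.random.default_rng(0)
params = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 1)), np.zeros(1))]
descriptors = rng.normal(size=(3, 8))
print(total_energy(descriptors, params))
```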
One of the simplest descriptors is the Coulomb matrix [59]:

$$C_{ij} = \begin{cases} 0.5\, Z_i^{2.4}, & i = j, \\ \dfrac{Z_i Z_j}{\left|\mathbf{R}_i - \mathbf{R}_j\right|}, & i \neq j, \end{cases} \qquad (5)$$
where Z is the nuclear charge. Since all elements of the matrix are determined only by the relative positions of the atoms, the translational and rotational symmetries are well preserved, while permutation invariance can be achieved by sorting the rows and columns by their respective norms. However, the sorted Coulomb matrix cannot ensure the smoothness and differentiability of the PES model, and it can only be used when the total energy is learned directly, without decomposition into atomic contributions. A minimal sketch of this descriptor is given below.
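The following NumPy sketch implements the (sorted) Coulomb matrix of Eq. (5); the water geometry in the usage example is an approximate, illustrative one:

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix, Eq. (5): 0.5*Z_i^2.4 on the diagonal,
    Z_i*Z_j/|R_i - R_j| off the diagonal."""
    n = len(Z)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, j] = 0.5 * Z[i] ** 2.4
            else:
                C[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return C

def sorted_coulomb_matrix(Z, R):
    """Permutation invariance: sort rows/columns by their row norms."""
    C = coulomb_matrix(Z, np.asarray(R, dtype=float))
    order = np.argsort(-np.linalg.norm(C, axis=1))
    return C[np.ix_(order, order)]

# Toy usage: a water molecule (O, H, H), coordinates in Angstrom.
Z = np.array([8, 1, 1])
R = [[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]]
print(sorted_coulomb_matrix(Z, R))
```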
Currently, many molecular descriptors are used in the construction of NNPs. Some of them [18] are mentioned or introduced in other chapters of this book (such as Chapters 11, 12, and 19). Here we introduce three of them: the atom-centered symmetry functions (ACSFs) [60], the SchNet model [14], and the Deep Potential-Smooth Edition (DeepPot-SE) [61] method.
ACSFs

In the ACSF approach [60], the radial environment of atom i is captured by functions of the form

$$G_i^1 = \sum_{j \neq i} e^{-\eta \left(R_{ij} - R_s\right)^2} f_c\left(R_{ij}\right), \qquad (7)$$

and the angular environment by

$$G_i^2 = 2^{1-\zeta} \sum_{j,k \neq i} \left(1 + \lambda \cos\theta_{ijk}\right)^{\zeta} e^{-\eta\left(R_{ij}^2 + R_{ik}^2 + R_{jk}^2\right)} f_c\left(R_{ij}\right) f_c\left(R_{ik}\right) f_c\left(R_{jk}\right), \qquad (8)$$

where $R_s$, $\lambda$, $\eta$, and $\zeta$ are hyperparameters, $\cos\theta_{ijk} = \mathbf{R}_{ij} \cdot \mathbf{R}_{ik} / \left(R_{ij} R_{ik}\right)$, and the cut-off function $f_c$ is defined by
$$f_c\left(R_{ij}\right) = \begin{cases} \dfrac{1}{2}\left[\cos\left(\dfrac{\pi R_{ij}}{R_c}\right) + 1\right], & \text{if } R_{ij} \le R_c, \\[1ex] 0, & \text{if } R_{ij} > R_c. \end{cases} \qquad (9)$$
$R_c$ is the user-defined cut-off. The cut-off function ensures that atoms outside the cut-off radius contribute to neither $G^1$ nor $G^2$. A minimal sketch of these functions is given below.
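The cut-off and radial symmetry functions are straightforward to implement. The sketch below assumes the standard Behler form of the radial function $G^1$ (Eq. 7) [60]; all hyperparameter values are illustrative:

```python
import numpy as np

def f_c(r, r_c):
    """Cosine cut-off, Eq. (9): decays smoothly to zero at r = R_c."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def g1(neighbor_distances, eta, r_s, r_c):
    """Radial symmetry function G^1 of atom i, Eq. (7)."""
    r = np.asarray(neighbor_distances, dtype=float)
    return float(np.sum(np.exp(-eta * (r - r_s) ** 2) * f_c(r, r_c)))

# Toy usage: neighbors at 1.0, 2.5 and 7.0 Angstrom; the last one lies
# outside the 6 Angstrom cut-off and contributes nothing.
print(g1([1.0, 2.5, 7.0], eta=0.5, r_s=1.0, r_c=6.0))
```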
On this basis, Gastegger et al. proposed the weighted atom-centered symmetry functions (wACSFs) [62], which introduce element-dependent weighting functions $g$ and $h$ into the symmetry functions (Eqs. 7–8):

$$G_i^1 = \sum_{j \neq i} g\left(Z_j\right) e^{-\eta \left(R_{ij} - R_s\right)^2} f_c\left(R_{ij}\right), \qquad (10)$$

$$G_i^2 = 2^{1-\zeta} \sum_{j,k \neq i} h\left(Z_j, Z_k\right) \left(1 + \lambda \cos\theta_{ijk}\right)^{\zeta} e^{-\eta\left[\left(R_{ij} - R_s\right)^2 + \left(R_{ik} - R_s\right)^2 + \left(R_{jk} - R_s\right)^2\right]} f_c\left(R_{ij}\right) f_c\left(R_{ik}\right) f_c\left(R_{jk}\right). \qquad (11)$$
SchNet
It is worth mentioning that the ACSFs (and their variants) are fixed before training an NNP. In contrast, other molecular descriptors such as SchNet [14] and DeepPot-SE [61] can be learned during the training process. SchNet contains a trainable convolutional NN. Its descriptor $\mathcal{D}$ is a NN function $\mathcal{N}$ of the interatomic distances $R_{ij}$ and atomic types $\alpha$ of the given chemical system:

$$\mathcal{D} = \mathcal{N}\left(\alpha; R_{ij}\right) \qquad (13)$$
The NN function contains both dense layers $L_d$ and convolutional layers $L_c$:

$$\mathcal{N}\left(x; R_{ij}\right) = I^n\left(x; R_{ij}\right), \qquad (14)$$

$$I\left(x; R_{ij}\right) = L_d \circ L_c \circ L_d \circ \phi \circ L_d\left(x; R_{ij}\right) + x, \qquad (15)$$

where x is an input tensor that can be either $\alpha$ or the output of the previous layer, n is a hyperparameter that decides how many times I is applied to x, $\phi$ is the softplus activation, and the convolutional layers $L_c$ are given by

$$L_c\left(x; R_{ij}\right) = \sum_j x_j \circ W\left(R_{ij}\right), \qquad (16)$$

where W is trainable and $\circ$ denotes the element-wise product. $L_d$ is defined in Eq. (4). Details of this model can be found in Ref. [14]. A rough sketch of the convolution step follows.
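Below is a rough NumPy sketch of the continuous-filter convolution of Eq. (16). In the actual SchNet architecture the filter W(R_ij) is generated by a small dense network acting on a radial-basis expansion of the distance; here a single linear map (W_filter) stands in for it, and all sizes and names are illustrative assumptions:

```python
import numpy as np

def rbf_expand(r, centers, gamma=10.0):
    """Radial-basis expansion of a scalar distance."""
    return np.exp(-gamma * (r - centers) ** 2)

def cfconv(x, R, W_filter, centers):
    """Continuous-filter convolution, Eq. (16): atom i aggregates the
    element-wise product of its neighbors' features with a filter W(R_ij)."""
    out = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if i == j:
                continue
            w_ij = rbf_expand(np.linalg.norm(R[i] - R[j]), centers) @ W_filter
            out[i] += x[j] * w_ij   # element-wise product, summed over j
    return out

# Toy usage: 3 atoms, 4 features per atom, filter built from 8 RBFs.
rng = np.random.default_rng(1)
x = rng.normal(size=(3, 4))
R = rng.normal(size=(3, 3))
centers = np.linspace(0.0, 3.0, 8)
W_filter = rng.normal(size=(8, 4))
print(cfconv(x, R, W_filter, centers))
```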
DeepPot-SE
In DeepPot-SE, the atomic contribution $E_i$ is expressed by a neural network consisting of three hidden layers. The input layer is the molecular descriptor $\mathcal{D}\left(\mathcal{R}_i^e\right)$, which is determined by the “environment matrix” $\mathcal{R}_i^e$, the “embedding” matrix $\mathcal{G}_i$, and a reduced-dimension embedding matrix $\mathcal{G}_i^<$ [64]:
$$\mathcal{D}\left(\mathcal{R}_i^e\right) = \left(\mathcal{G}_i^<\right)^T \cdot \mathcal{R}_i^e \cdot \left(\mathcal{R}_i^e\right)^T \cdot \mathcal{G}_i \qquad (17)$$

$$\left(\mathcal{R}_i^e\right)_{ja} = \begin{cases} s\left(R_{ij}\right), & a = 1, \\ s\left(R_{ij}\right) X_{ij}/R_{ij}, & a = 2, \\ s\left(R_{ij}\right) Y_{ij}/R_{ij}, & a = 3, \\ s\left(R_{ij}\right) Z_{ij}/R_{ij}, & a = 4, \end{cases} \qquad (18)$$
$$s\left(R_{ij}\right) = \begin{cases} \dfrac{1}{R_{ij}}, & \text{if } R_{ij} \le R_{on}, \\[1ex] \dfrac{1}{R_{ij}}\left\{u^3\left(-6u^2 + 15u - 10\right) + 1\right\}, & \text{if } R_{on} < R_{ij} \le R_{off}, \\[1ex] 0, & \text{if } R_{ij} > R_{off}, \end{cases} \qquad (19)$$

where $u = \left(R_{ij} - R_{on}\right)/\left(R_{off} - R_{on}\right)$.
s(R_ij) is a switched reciprocal distance function that controls the range of the chemical environment to be considered. If an atom is separated from atom i by a distance greater than $R_{off}$, it is not included in the chemical environment of atom i. If a neighboring atom is within a distance of $R_{on}$, it is given full weight in the descriptor. The weight changes smoothly between $R_{on}$ and $R_{off}$. The environment matrix is an $N_{neigh} \times 4$ array, where $N_{neigh}$ is the number of atoms within $R_{off}$. A sketch of Eqs. (18) and (19) is given below.
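A minimal sketch of Eqs. (18)–(19); the geometry and cut-off radii in the usage example are arbitrary illustrative values:

```python
import numpy as np

def s(r, r_on, r_off):
    """Switched reciprocal distance, Eq. (19)."""
    if r <= r_on:
        return 1.0 / r
    if r <= r_off:
        u = (r - r_on) / (r_off - r_on)
        return (u ** 3 * (-6 * u ** 2 + 15 * u - 10) + 1.0) / r
    return 0.0

def environment_matrix(R, i, r_on, r_off):
    """Environment matrix of atom i, Eq. (18): one row
    (s, s*x/R, s*y/R, s*z/R) per neighbor inside R_off."""
    rows = []
    for j in range(len(R)):
        if j == i:
            continue
        d = R[j] - R[i]
        r = np.linalg.norm(d)
        w = s(r, r_on, r_off)
        if w > 0.0:
            rows.append([w, w * d[0] / r, w * d[1] / r, w * d[2] / r])
    return np.array(rows)

# Toy usage: 4 atoms; R_on = 2 A, R_off = 6 A. The atom at 8 A is excluded.
R = np.array([[0., 0., 0.], [1.5, 0., 0.], [0., 3., 0.], [8., 0., 0.]])
print(environment_matrix(R, 0, r_on=2.0, r_off=6.0))
```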
The “embedding” matrix $\mathcal{G}_i$ is actually another neural network $\mathcal{N}$, called the “embedding network”:

$$\mathcal{G}_i = \mathcal{N}\left(s\left(R_{ij}\right)\right) \qquad (20)$$

The form of the neural network $\mathcal{N}$ is given in Eq. (3). Usually, we put three hidden layers in the embedding network. Each row of the embedding matrix corresponds to one neighbor. If there are $M_3$ nodes in the third layer, the size of the embedding matrix is $N_{neigh} \times M_3$.
The reduced-dimension embedding matrix $\mathcal{G}_i^<$ has the same values as $\mathcal{G}_i$, but only the first $M_0$ columns are stored, where $M_0$ is smaller than $M_3$. The main purpose of using $\mathcal{G}_i^<$ is to reduce the computational cost. A sketch of the descriptor contraction is given below.
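A sketch of the descriptor contraction of Eq. (17). For brevity, the three-hidden-layer embedding network is replaced here by a single random nonlinear map, so this demonstrates only the matrix algebra and the shapes involved, not a trained model:

```python
import numpy as np

def descriptor(R_env, embed_net, m_reduced):
    """DeepPot-SE descriptor, Eq. (17): D = (G^<)^T R R^T G, where each row
    of G is the embedding of one neighbor's s(R_ij) (Eq. 20)."""
    G = embed_net(R_env[:, :1])           # (N_neigh, M3), built from s(R_ij)
    G_lt = G[:, :m_reduced]               # keep only the first M0 columns
    return G_lt.T @ R_env @ R_env.T @ G   # (M0, M3) descriptor matrix

# Toy stand-in for the embedding network: one random tanh layer.
rng = np.random.default_rng(2)
W = rng.normal(size=(1, 16))

def embed_net(sr):
    return np.tanh(sr @ W)

R_env = rng.normal(size=(5, 4))           # environment matrix, 5 neighbors
D = descriptor(R_env, embed_net, m_reduced=4)
print(D.shape)                            # (4, 16)
```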
In their recent work [22], Pinheiro Jr. et al. systematically benchmarked a series of molecular descriptors in different application contexts and gave selection recommendations based on the target and the size of the training set. We strongly recommend that readers of this book read this work before constructing their own NNPs.
FIG. 3 The workflow of reference dataset construction. This figure is taken from J. Zeng, L. Cao, M. Xu, T. Zhu, J.Z.H.
Zhang, Complex reaction processes in combustion unraveled by neural network-based molecular dynamics simulation, Nat.
Commun. 11 (2020) 5713.
Constructing a reference dataset that covers the relevant chemical space while being as small as possible is the most critical and difficult step. When one cannot predict the chemical space that the MD will explore, as when simulating combustion reactions, the construction of the reference dataset becomes even more difficult. A feasible approach is to sample while running the MD simulation. Previous studies suggested using multiple NNP models to identify poorly sampled regions of the configuration space [65,66]. In this method, several NNP models are trained on the same reference dataset. During the MD simulation, a large number of trial configurations are evaluated by all of these models. If a given structure differs obviously from all of the training data, the predictions of these models should differ significantly, and the structure is then added to the dataset. Conversely, if the training set already contains structures similar to the given one, the predictions of these models should be consistent. This algorithm is called “active learning” or “learning on-the-fly” and has been used in many works [66–76], see Chapter 14; a minimal sketch of the selection criterion is given after this paragraph. Fig. 3 shows the “active learning” workflow we used in the simulation of methane combustion.
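The sketch below illustrates such an ensemble-deviation criterion. The deviation measure (standard deviation of the per-atom forces across models) and the trust-window thresholds are illustrative assumptions; concurrent-learning tools such as DP-GEN [75] use closely related but not identical definitions:

```python
import numpy as np

def max_force_deviation(forces):
    """Maximum over atoms of the ensemble standard deviation of the force.
    `forces` has shape (n_models, n_atoms, 3)."""
    mean_f = forces.mean(axis=0)
    per_atom = np.sqrt(((forces - mean_f) ** 2).sum(axis=-1).mean(axis=0))
    return per_atom.max()

def select_for_labeling(forces, lo=0.1, hi=1.0):
    """Keep a structure only if its deviation falls inside a trust window:
    too small means already covered, too large means likely unphysical."""
    d = max_force_deviation(forces)
    return lo <= d <= hi

# Toy usage: an ensemble of 4 models evaluated on a 10-atom structure.
rng = np.random.default_rng(3)
forces = rng.normal(size=(4, 10, 3))
print(max_force_deviation(forces), select_for_labeling(forces))
```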
First, we prepared an initial dataset from a short MD simulation with the ReaxFF force field. For each atom in each snapshot of the trajectory, we built a molecular cluster containing this atom and the species within a specified cut-off centered on it. Mini-batch k-means clustering [77] was then used to remove redundancy (a sketch of this step is given after this paragraph). Starting from the initial dataset, four different NNP models were trained on the dataset from the last step, and several short MD simulations were performed with one of these models. During the simulation, the atomic forces on the central atom of each molecular cluster were evaluated by all four NNP models simultaneously. For a specific atom, if the forces predicted by the four models are consistent with each other, similar molecular clusters should already be included in the dataset. On the contrary, if the predictions of the four models are inconsistent and the deviation between them falls within a specific range, the corresponding molecular cluster is added to the dataset. The dataset is updated until the predictions of the four models are always consistent or the target length of the MD simulation is reached. More technical details can be found in our previous study [54].
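A sketch of the redundancy-removal step with scikit-learn's MiniBatchKMeans, assuming each molecular cluster has already been mapped to a fixed-length feature vector; the feature construction and the number of clusters are illustrative:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def deduplicate(features, n_keep):
    """Cluster configurations in feature space and keep one representative
    per cluster, removing redundancy from the initial dataset."""
    km = MiniBatchKMeans(n_clusters=n_keep, n_init=3, random_state=0)
    km.fit(features)
    keep = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        if len(members):
            # representative: the member closest to the cluster center
            d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
            keep.append(members[np.argmin(d)])
    return np.array(keep)

# Toy usage: 1000 configurations, 32-dimensional fingerprints, keep 50.
rng = np.random.default_rng(4)
features = rng.normal(size=(1000, 32))
print(deduplicate(features, n_keep=50).shape)
```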
Case studies
axis_neuron represents the size of the embedding matrix and was set to 12. The options start_lr, decay_rate, and decay_steps in the learning_rate module control the learning rate for the nth batch according to the following formula:

$$lr(n) = start\_lr \times decay\_rate^{\,n/decay\_steps} \qquad (21)$$
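Eq. (21) is easy to verify numerically; in the minimal sketch below, the decay_rate value is an illustrative assumption, since the text specifies only the initial learning rate and the decay interval:

```python
def lr(n, start_lr=0.001, decay_rate=0.95, decay_steps=400):
    """Exponentially decayed learning rate for the nth batch, Eq. (21).

    decay_rate = 0.95 is a placeholder; the actual value comes from the
    learning_rate module of the training input file."""
    return start_lr * decay_rate ** (n / decay_steps)

# One decay interval lowers the rate by exactly one factor of decay_rate.
print(lr(0), lr(400), lr(4000))
```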
The initial learning rate was set to 0.001, and it decays every 400 steps. The loss function is defined as

$$L\left(p_\varepsilon, p_f\right) = \frac{p_\varepsilon}{N}\, \Delta\varepsilon^2 + \frac{p_f}{3N} \sum_i \left|\Delta F_i\right|^2 \qquad (22)$$
where $\Delta\varepsilon$ and $\Delta F_i$ represent the differences between the NNP predictions and the labeled energy and forces, respectively. $p_\varepsilon$ and $p_f$ are prefactors that decay exponentially during training. By assigning different random seeds in the neural network initialization process, one can train multiple models at the same time. A direct transcription of this loss is sketched below.
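A direct transcription of Eq. (22) for a single structure of N atoms, assuming NumPy arrays; the prefactor values in the usage example are illustrative, since in real training $p_\varepsilon$ and $p_f$ are scheduled rather than fixed:

```python
import numpy as np

def loss(e_pred, e_ref, f_pred, f_ref, p_e, p_f):
    """Weighted energy/force loss of Eq. (22) for one N-atom structure."""
    n_atoms = f_ref.shape[0]
    e_term = p_e / n_atoms * (e_pred - e_ref) ** 2
    f_term = p_f / (3 * n_atoms) * np.sum((f_pred - f_ref) ** 2)
    return e_term + f_term

# Toy usage: a 5-atom structure with random reference and predicted forces.
rng = np.random.default_rng(5)
f_ref = rng.normal(size=(5, 3))
f_pred = f_ref + 0.1 * rng.normal(size=(5, 3))
print(loss(-10.0, -10.2, f_pred, f_ref, p_e=1.0, p_f=100.0))
```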
After the training is completed, the predictive power of the model must be checked first. As shown in Fig. 4, the mean absolute errors (MAEs) of the potential energy are only 0.04 eV/atom and 0.14 eV/atom in the training set and the test set, respectively.
A. Energy prediction errors for the reference set (training set / test set):

Structure number: 578,731 / 13,315
MAE (eV/atom): 0.041 / 0.140
RMSE (eV/atom): 0.073 / 0.240

B. [Scatter plot: atomic forces predicted by the NN potential vs. atomic forces calculated by DFT (eV/Å), colored by counts; MAE = 0.12 eV/Å, RMSE = 0.30 eV/Å.]
FIG. 4 (A) Energy prediction errors for the reference set. The mean absolute errors (MAEs) and root mean squared errors (RMSEs) are in eV/atom. (B) The correlation of atomic forces between NN predictions and DFT calculations. This figure is taken from J. Zeng, L. Cao, M. Xu, T. Zhu, J.Z.H. Zhang, Complex reaction processes in combustion unraveled by neural network-based molecular dynamics simulation, Nat. Commun. 11 (2020) 5713.
As for the atomic forces, the correlation coefficient between the predicted and labeled values is 0.999 and the MAE is 0.12 eV/Å.
Conclusions

In this chapter, we introduced the basic concepts and ideas about the construction and usage of NNPs from a beginner's perspective. In the past years, machine learning-based molecular dynamics simulations have seen significant developments and advances that have changed the research paradigm throughout the theoretical chemistry community. In the process of growing from beginner to expert, we believe that the following issues need to be considered [84,85]. (1) Data is at the heart of all machine learning methods; therefore, the quest for better methods to build reference datasets will never stop. Some recent useful discussions can be found in the literature [86,87]. (2) The selection of proper molecular descriptors that represent the chemical environment well with minimal complexity. Although well-established molecular descriptors such as ACSFs, EANN, and DeepPot-SE are already available, one may still want to design new descriptors for specific systems of interest to further enhance the transferability of the NNP model and to improve the accuracy and efficiency of the training process and the MD simulation. (3) The selection of hyper-parameters for the NN. Many hyper-parameters, such as the learning rate and the network structure parameters, have a huge impact on the accuracy and efficiency of NNPs. Although some automated methods have been proposed [60,87,88], the choice of these parameters is still largely empirical; when computational resources are limited, one needs to carefully compare and select an appropriate set of parameters. (4) The treatment of long-range interactions. As mentioned above, distance cut-offs (usually around 5 Å) are used in defining the chemical environment. Their purpose is to avoid having too many degrees of freedom in the chemical environment, which would greatly increase the difficulty of the training process as well as the required size of the reference dataset. For small systems with periodic boundaries, the long-range interactions can be effectively folded into the short-range interactions. However, for large and/or non-periodic systems in condensed phases, such as biomolecules like proteins, long-range interactions must be considered explicitly. Only very recently has some encouraging progress been made in this direction [89,90].
Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grants No. 22173032, 21933010). J.Z.
was supported in part by the National Institutes of Health (GM107485) under the direction of Darrin M. York. We also
thank the ECNU Multifunctional Platform for Innovation (No. 001) and the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant ACI-1548562 (specifically, the resource EXPANSE at SDSC through allocation TG-CHE190067), for providing supercomputer time.
References
[1] R. Iftimie, P. Minary, M.E. Tuckerman, Ab initio molecular dynamics: concepts, recent developments, and future
trends, Proc. Natl. Acad. Sci. U. S. A. 102 (2005) 6654–6659.
[2] J.A. Harrison, J.D. Schall, S. Maskey, P.T. Mikulski, M.T. Knippenberg, B.H. Morrow, Review of force fields and
intermolecular potentials used in atomistic computational materials research, Appl. Phys. Rev. 5 (2018), 031104.
[3] P.E. Lopes, O. Guvench, A.D. MacKerell Jr., Current status of protein force fields for molecular dynamics sim-
ulations, Methods Mol. Biol. 1215 (2015) 47–71.
[4] T.B. Blank, S.D. Brown, A.W. Calhoun, D.J. Doren, Neural network models of potential energy surfaces, J. Chem.
Phys. 103 (1995) 4129–4137.
[5] G.E. Moyano, M.A. Collins, Molecular potential energy surfaces by interpolation: strategies for faster
convergence, J. Chem. Phys. 121 (2004) 9769–9775.
[6] M.J.T. Jordan, K.C. Thompson, M.A. Collins, Convergence of molecular potential energy surfaces by interpolation: application to the OH + H2 → H2O + H reaction, J. Chem. Phys. 102 (1995) 5647–5657.
[7] K.C. Thompson, M.J.T. Jordan, M.A. Collins, Polyatomic molecular potential energy surfaces by interpolation in
local internal coordinates, J. Chem. Phys. 108 (1998) 8302–8316.
[8] G.G. Maisuradze, D.L. Thompson, Interpolating moving least-squares methods for fitting potential energy sur-
faces: illustrative approaches and applications, J. Phys. Chem. A 107 (2003) 7118–7124.
[9] C. Qu, Q. Yu, J.M. Bowman, Permutationally invariant potential energy surfaces, Annu. Rev. Phys. Chem.
69 (2018) 151–175.
[10] J. Behler, M. Parrinello, Generalized neural-network representation of high-dimensional potential-energy sur-
faces, Phys. Rev. Lett. 98 (2007), 146401.
[11] A.P. Bartok, M.C. Payne, R. Kondor, G. Csanyi, Gaussian approximation potentials: the accuracy of quantum
mechanics, without the electrons, Phys. Rev. Lett. 104 (2010), 136403.
[12] S. Chmiela, A. Tkatchenko, H.E. Sauceda, I. Poltavsky, K.T. Schutt, K.R. Muller, Machine learning of accurate
energy-conserving molecular force fields, Sci. Adv. 3 (2017), 1603015.
[13] K.T. Schutt, F. Arbabzadah, S. Chmiela, K.R. Muller, A. Tkatchenko, Quantum-chemical insights from deep ten-
sor neural networks, Nat. Commun. 8 (2017), 13890.
[14] K.T. Schutt, H.E. Sauceda, P.J. Kindermans, A. Tkatchenko, K.R. Muller, SchNet - a deep learning architecture for
molecules and materials, J. Chem. Phys. 148 (2018), 241722.
[15] X. Chen, M.S. Jorgensen, J. Li, B. Hammer, Atomic energies from a convolutional neural network, J. Chem. The-
ory Comput. 14 (2018) 3933–3942.
[16] L.F. Zhang, J.Q. Han, H. Wang, R. Car, E. Weinan, Deep potential molecular dynamics: a scalable model with the
accuracy of quantum mechanics, Phys. Rev. Lett. 120 (2018), 143001.
[17] D. Lu, H. Wang, M. Chen, L. Lin, R. Car, E. Weinan, W. Jia, L. Zhang, 86 PFLOPS deep potential molecular dy-
namics simulation of 100 million atoms with ab initio accuracy, Comput. Phys. Commun. 259 (2021), 107624.
[18] Y. Zhang, C. Hu, B. Jiang, Embedded atom neural network potentials: efficient and accurate machine learning
with a physically inspired representation, J. Phys. Chem. Lett. 10 (2019) 4962–4967.
[19] P.O. Dral, F. Ge, B.X. Xue, Y.F. Hou, M. Pinheiro Jr., J. Huang, M. Barbatti, MLatom 2: an integrative platform for
atomistic machine learning, Top. Curr. Chem. 379 (2021) 27.
[20] P.O. Dral, MLatom: a program package for quantum chemical research assisted by machine learning, J. Comput.
Chem. 40 (2019) 2339–2347.
[21] J. Behler, Perspective: machine learning potentials for atomistic simulations, J. Chem. Phys. 145 (2016), 170901.
[22] M. Pinheiro, F. Ge, N. Ferre, P.O. Dral, M. Barbatti, Choosing the right molecular machine learning potential,
Chem. Sci. 12 (2021) 14396–14413.
[23] J.S. Smith, B.T. Nebgen, R. Zubatyuk, N. Lubbers, C. Devereux, K. Barros, S. Tretiak, O. Isayev, A.E. Roitberg,
Approaching coupled cluster accuracy with a general-purpose neural network potential through transfer learn-
ing, Nat. Commun. 10 (2019) 2903.
[24] C. Devereux, J.S. Smith, K.K. Davis, K. Barros, R. Zubatyuk, O. Isayev, A.E. Roitberg, Extending the applicability
of the ANI deep learning molecular potential to sulfur and halogens, J. Chem. Theory Comput. 16 (2020)
4192–4202.
[25] J. Behler, First principles neural network potentials for reactive simulations of large molecular and condensed
systems, Angew. Chem. Int. Edit. 56 (2017) 12828–12840.
[26] T. Morawietz, J. Behler, A full-dimensional neural network potential-energy surface for water clusters up to the
hexamer, Z. Phys. Chem. 227 (2013) 1559–1581.
[27] T. Morawietz, J. Behler, A density-functional theory-based neural network potential for water clusters including
van der Waals corrections, J. Phys. Chem. A 117 (2013) 7356–7366.
[28] M. del Cueto, X. Zhou, L. Zhou, Y. Zhang, B. Jiang, H. Guo, New perspectives on CO2–Pt(111) interaction with a
high-dimensional neural network potential energy surface, J. Phys. Chem. C 124 (2020) 5174–5181.
[29] C. Hu, Y. Zhang, B. Jiang, Dynamics of H2O adsorption on Pt(110)-(1×2) based on a neural network potential energy surface, J. Phys. Chem. C 124 (2020) 23190–23199.
[30] X. Lu, X. Wang, B. Fu, D. Zhang, Theoretical investigations of rate coefficients of H + H2O2 → OH + H2O on a full-dimensional potential energy surface, J. Phys. Chem. A 123 (2019) 3969–3976.
[31] H. Wang, W. Yang, Toward building protein force fields by residue-based systematic molecular fragmentation
and neural network, J. Chem. Theory Comput. 15 (2019) 1409–1417.
[32] H. Wang, W.T. Yang, Force field for water based on neural network, J. Phys. Chem. Lett. 9 (2018) 3232–3240.
[33] O.T. Unke, D. Koner, S. Patra, S. Käser, M. Meuwly, High-dimensional potential energy surfaces for molecular
simulations: from empiricism to machine learning, Mach. Learn.: Sci. Technol. 1 (2020), 013001.
[34] A. Jonayat, A.C. Van Duin, M.J. Janik, Discovery of descriptors for stable monolayer oxide coatings through ma-
chine learning, ACS Appl. Energy Mater. 1 (2018) 6217–6226.
[35] A.C. Rajan, A. Mishra, S. Satsangi, R. Vaish, H. Mizuseki, K.-R. Lee, A.K. Singh, Machine-learning-assisted ac-
curate band gap predictions of functionalized MXene, Chem. Mater. 30 (2018) 4031–4038.
[36] G. Pilania, J.E. Gubernatis, T. Lookman, Multi-fidelity machine learning models for accurate bandgap predic-
tions of solids, Comput. Mater. Sci. 129 (2017) 156–163.
[37] R. Jinnouchi, F. Karsai, G. Kresse, On-the-fly machine learning force field generation: application to melting
points, Phys. Rev. B 100 (2019), 014105.
[38] K. Yao, J.E. Herr, D.W. Toth, R. McKintyre, J. Parkhill, The TensorMol-0.1 model chemistry: a neural network
augmented with long-range physics, Chem. Sci. 9 (2018) 2261–2269.
[39] S. Kondati Natarajan, T. Morawietz, J. Behler, Representing the potential-energy surface of protonated water
clusters by high-dimensional neural network potentials, Phys. Chem. Chem. Phys. 17 (2015) 8356–8371.
[40] B. Jiang, J. Li, H. Guo, High-fidelity potential energy surfaces for gas-phase and gas–surface scattering processes
from machine learning, J. Phys. Chem. Lett. 11 (2020) 5120–5131.
[41] S. Manzhos, T. Carrington, Neural network potential energy surfaces for small molecules and reactions, Chem.
Rev. 121 (2021) 10187–10217.
[42] M. Meuwly, Machine learning for chemical reactions, Chem. Rev. 121 (2021) 10218–10239.
[43] J. Behler, Four generations of high-dimensional neural network potentials, Chem. Rev. 121 (2021) 10037–10072.
[44] M. Ceriotti, C. Clementi, O. Anatole von Lilienfeld, Introduction: machine learning at the atomic scale, Chem.
Rev. 121 (2021) 9719–9721.
[45] V.L. Deringer, A.P. Bartók, N. Bernstein, D.M. Wilkins, M. Ceriotti, G. Csányi, Gaussian process regression for
materials and molecules, Chem. Rev. 121 (2021) 10073–10141.
[46] A. Glielmo, B.E. Husic, A. Rodriguez, C. Clementi, F. Noe, A. Laio, Unsupervised learning methods for molec-
ular simulation data, Chem. Rev. 121 (2021) 9722–9758.
[47] B. Huang, O.A. von Lilienfeld, Ab initio machine learning in chemical compound space, Chem. Rev. 121 (2021)
10001–10036.
[48] F. Musil, A. Grisafi, A.P. Bartók, C. Ortner, G. Csányi, M. Ceriotti, Physics-inspired structural representations for
molecules and materials, Chem. Rev. 121 (2021) 9759–9815.
[49] A. Nandy, C. Duan, M.G. Taylor, F. Liu, A.H. Steeves, H.J. Kulik, Computational discovery of transition-metal
complexes: from high-throughput screening to machine learning, Chem. Rev. 121 (2021) 9927–10000.
[50] O.T. Unke, S. Chmiela, H.E. Sauceda, M. Gastegger, I. Poltavsky, K.T. Schütt, A. Tkatchenko, K.-R. Müller, Machine learning force fields, Chem. Rev. 121 (2021) 10142–10186.
[51] J. Westermayr, P. Marquetand, Machine learning for electronically excited states of molecules, Chem. Rev. 121
(2021) 9873–9926.
[52] M. Xu, T. Zhu, J.Z.H. Zhang, Molecular dynamics simulation of zinc ion in water with an ab initio based neural
network potential, J. Phys. Chem. A 123 (2019) 6587–6595.
[53] M. Xu, T. Zhu, J.Z.H. Zhang, Automatically constructed neural network potentials for molecular dynamics sim-
ulation of zinc proteins, Front. Chem. 9 (2021), 692200.
[54] J. Zeng, L. Cao, M. Xu, T. Zhu, J.Z.H. Zhang, Complex reaction processes in combustion unraveled by neural
network-based molecular dynamics simulation, Nat. Commun. 11 (2020) 5713.
[55] J. Zeng, L. Zhang, H. Wang, T. Zhu, Exploring the chemical space of linear alkane pyrolysis via deep potential
GENerator, Energy Fuel 35 (2021) 762–769.
[56] A.C. Van Duin, S. Dasgupta, F. Lorant, W.A. Goddard, ReaxFF: a reactive force field for hydrocarbons, J. Phys.
Chem. A 105 (2001) 9396–9409.
[57] E. Wang, J. Ding, Z. Qu, K. Han, Development of a reactive force field for hydrocarbons and application to iso-
octane thermal decomposition, Energy Fuel 32 (2017) 901–907.
[58] T. Cheng, A. Jaramillo-Botero, W.A. Goddard, H. Sun, Adaptive accelerated ReaxFF reactive dynamics with val-
idation from simulating hydrogen combustion, J. Am. Chem. Soc. 136 (2014) 9434–9442.
[59] M. Rupp, A. Tkatchenko, K.R. Muller, O.A. von Lilienfeld, Fast and accurate modeling of molecular atomization
energies with machine learning, Phys. Rev. Lett. 108 (2012), 058301.
[60] J. Behler, Atom-centered symmetry functions for constructing high-dimensional neural network potentials, J.
Chem. Phys. 134 (2011), 074106.
[61] L. Zhang, J. Han, H. Wang, W.A. Saidi, R. Car, E. Weinan, End-to-end symmetry preserving inter-atomic poten-
tial energy model for finite and extended systems, NIPS’18: Proceedings of the 32nd International Conference on
Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, 2018, pp. 4441–4451.
[62] M. Gastegger, L. Schwiedrzik, M. Bittermann, F. Berzsenyi, P. Marquetand, wACSF-weighted atom-centered
symmetry functions as descriptors in machine learning potentials, J. Chem. Phys. 148 (2018), 241709.
[63] J.S. Smith, O. Isayev, A.E. Roitberg, ANI-1: an extensible neural network potential with DFT accuracy at force
field computational cost, Chem. Sci. 8 (2017) 3192–3203.
[64] J. Zeng, T.J. Giese, S. Ekesan, D.M. York, Development of range-corrected deep learning potentials for fast, ac-
curate quantum mechanical/molecular mechanical simulations of chemical reactions in solution, J. Chem. The-
ory Comput. 17 (2021) 6993–7009.
[65] J. Behler, Neural network potential-energy surfaces in chemistry: a tool for large-scale simulations, Phys. Chem.
Chem. Phys. 13 (2011) 17930–17955.
[66] J.S. Smith, B. Nebgen, N. Lubbers, O. Isayev, A.E. Roitberg, Less is more: sampling chemical space with active
learning, J. Chem. Phys. 148 (2018), 241733.
[67] L.F. Zhang, D.Y. Lin, H. Wang, R. Car, E. Weinan, Active learning of uniformly accurate interatomic potentials
for materials simulation, Phys. Rev. Mater. 3 (2019), 023804.
[68] W. Wang, T. Yang, W.H. Harris, R. Gomez-Bombarelli, Active learning and neural network potentials accelerate
molecular screening of ether-based solvate ionic liquids, Chem. Commun. (Camb.) 56 (2020) 8920–8923.
[69] Q. Lin, Y. Zhang, B. Zhao, B. Jiang, Automatically growing global reactive neural network potential energy sur-
faces: a trajectory-free active learning strategy, J. Chem. Phys. 152 (2020), 154104.
[70] S.J. Ang, W. Wang, D. Schwalbe-Koda, S. Axelrod, R. Gómez-Bombarelli, Active learning accelerates ab initio
molecular dynamics on reactive energy surfaces, Chem (2021) 738–751.
[71] Z. Li, J.R. Kermode, A. De Vita, Molecular dynamics with on-the-fly machine learning of quantum-mechanical
forces, Phys. Rev. Lett. 114 (2015), 096405.
[72] B. Huang, O.A. von Lilienfeld, Quantum machine learning using atom-in-molecule-based fragments selected on
the fly, Nat. Chem. 12 (2020) 945–951.
[73] E.V. Podryabinkin, A.V. Shapeev, Active learning of linearly parametrized interatomic potentials, Comput. Ma-
ter. Sci. 140 (2017) 171–180.
[74] N.J. Browning, R. Ramakrishnan, O.A. von Lilienfeld, U. Roethlisberger, Genetic optimization of training sets for
improved machine learning models of molecular properties, J. Phys. Chem. Lett. 8 (2017) 1351–1359.
[75] Y. Zhang, H. Wang, W. Chen, J. Zeng, L. Zhang, H. Wang, E. Weinan, DP-GEN: a concurrent learning platform
for the generation of reliable deep learning based potential energy models, Comput. Phys. Commun. 253 (2020),
107206.
[76] Q. Lin, L. Zhang, Y. Zhang, B. Jiang, Searching configurations in uncertainty space: active learning of high-
dimensional neural network reactive potentials, J. Chem. Theory Comput. 17 (2021) 2691–2701.
[77] D. Sculley, Web-scale k-means clustering, in: Proceedings of the 19th international conference on World Wide
Web, 2010, ACM, 2010, pp. 1177–1178.
[78] H. Wang, L. Zhang, J. Han, E. Weinan, DeePMD-kit: a deep learning package for many-body potential energy
representation and molecular dynamics, Comput. Phys. Commun. 228 (2018) 178–184.
[79] M. Frisch, G. Trucks, H. Schlegel, G. Scuseria, M. Robb, J. Cheeseman, G. Scalmani, V. Barone, G. Petersson, H.
Nakatsuji, Gaussian 16, Revision A. 03, Gaussian Inc., Wallingford CT, 2016.
[80] H.S. Yu, X. He, S.L. Li, D.G. Truhlar, MN15: a Kohn–Sham global-hybrid exchange–correlation density func-
tional with broad accuracy for multi-reference and single-reference systems and noncovalent interactions,
Chem. Sci. 7 (2016) 5032–5051.
[81] D. Lu, W. Jiang, Y. Chen, L. Zhang, W. Jia, H. Wang, M. Chen, DP Train, then DP Compress: Model Compression
in Deep Potential Molecular Dynamics, arXiv preprint, 2021. arXiv:2107.02103.
[82] H.M. Aktulga, J.C. Fogarty, S.A. Pandit, A.Y. Grama, Parallel reactive molecular dynamics: numerical methods
and algorithmic techniques, Parallel Comput. 38 (2012) 245–259.
[83] J. Zeng, L. Cao, C.H. Chin, H. Ren, J.Z.H. Zhang, T. Zhu, ReacNetGenerator: an automatic reaction network gen-
erator for reactive molecular dynamics simulations, Phys. Chem. Chem. Phys. 22 (2020) 683–691.
[84] V. Botu, R. Batra, J. Chapman, R. Ramprasad, Machine learning force fields: construction, validation, and
outlook, J. Phys. Chem. C 121 (2017) 511–522.
[85] J. Behler, Constructing high-dimensional neural network potentials: a tutorial review, Int. J. Quantum Chem. 115
(2015) 1032–1050.
[86] J.F. Xia, Y.L. Zhang, B. Jiang, Efficient selection of linearly independent atomic features for accurate machine
learning potentials, Chin. J. Chem. Phys. 34 (2021) 695–703.
[87] M. Xu, T. Zhu, J.Z.H. Zhang, Automated construction of neural network potential energy surface: the enhanced
self-organizing incremental neural network deep potential method, J. Chem. Inf. Model. 61 (2021) 5428–5437.
[88] G. Imbalzano, A. Anelli, D. Giofre, S. Klees, J. Behler, M. Ceriotti, Automatic selection of atomic fingerprints and
reference configurations for machine-learning potentials, J. Chem. Phys. 148 (2018), 241730.
[89] T.W. Ko, J.A. Finkler, S. Goedecker, J. Behler, A fourth-generation high-dimensional neural network potential
with accurate electrostatics including non-local charge transfer, Nat. Commun. 12 (2021) 398.
[90] A. Grisafi, M. Ceriotti, Incorporating long-range physics in atomic-scale machine learning, J. Chem. Phys. 151
(2019), 204105.