Challenges and Advances in Computational Chemistry and Physics, Volume 28
Series Editor: Jerzy Leszczynski
Department of Chemistry and Biochemistry, Jackson State University, Jackson, MS, USA

Computational Approaches for Chemistry Under Extreme Conditions
This book series provides reviews of the most recent developments in computational chemistry and physics, covering both method development and applications. Each volume consists of chapters devoted to a single research area. The series highlights the most notable advances in applications of computational methods. Topics covered include nanotechnology, materials science, molecular biology, structure and bonding in molecular complexes, and atmospheric chemistry. The authors are recruited from among the most prominent researchers in their fields. As computational chemistry and physics are among the most rapidly advancing scientific areas, such timely overviews are sought by chemists, physicists, molecular biologists, and materials scientists. The books are intended for graduate students and researchers.
All contributions to edited volumes should undergo standard peer review to
ensure high scientific quality, while monographs should be reviewed by at least two
experts in the field. Submitted manuscripts will be reviewed and decided upon by the
series editor, Prof. Jerzy Leszczynski.
Editor
Nir Goldman
Materials Science Division
Lawrence Livermore National Laboratory
Livermore, CA, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book presents recently developed computational approaches for the study of
reactive materials under extreme conditions, with an emphasis on atomistic meth-
ods or those derived from atomistic calculations. Our intention is to include
state-of-the-art developments in a single source, spanning high pressures (e.g., tens
to hundreds of GPa), high temperatures (up to thousands of kelvin), and even
strong electric fields. The methods presented here include ab initio approaches
such as Density Functional Theory (DFT) and semi-empirical quantum simulation
methods (spanning nanometer length scales and picosecond timescales) as well as
reactive force field and coarse-grained approaches (spanning microns, nanoseconds,
and beyond). These approaches are readily applied across a broad range of fields,
including prebiotic chemistry in impacting comets, studies of planetary interiors,
high-pressure synthesis of new compounds, and detonations of energetic materials.
We have aimed to emphasize the strong connection with experiments throughout
the book. Our hope is that this will prove to be a useful reference for both com-
putational scientists wishing to learn more about specific subfields in this area as
well as experimental scientists wishing to gain familiarity with well-known com-
putational approaches for simulation of their experiments.
The book can be subdivided in a number of ways that will hopefully prove
instructive to both experts and the uninitiated. Broadly speaking, Chaps. 1–5 focus
on some sort of quantum simulation approach, mainly with Kohn–
Sham DFT as the starting point. Chapters 1, 4, 6, 7, and 8 discuss force field
simulation and development for classical molecular dynamics calculations.
Chapters 8, 9, and 10 discuss coarse-graining approaches that extend predictions of
physical and chemical properties closer to experimental time and length scales. In
more detail, Chap. 2 focuses on crystal composition and structure prediction
through evolutionary algorithms. Chapters 3 and 4 discuss semi-empirical quantum
method developments, which can yield significantly longer timescales for molec-
ular dynamics simulations while still providing information about electronic states.
Chapter 4 places an emphasis on refining both semi-empirical and classical
molecular dynamics simulations through force matching to DFT-MD trajectories,
including approaches for speeding up the fitting process through linear least-squares
fitting. Chapters 4, 5, and 6 discuss extending quantum simulations to long time-
scales through free energy calculations and accelerated simulation approaches.
Chapter 8 focuses in part on novel reactive coarse-graining approaches for
shock-induced volume collapsing reactions. Chapters 9 and 10 place additional
emphasis on the use of machine-learning methods to extend physical–chemical
simulation approaches closer to experimental time and length scales.
The chapters in this book can also be neatly subdivided by application area.
Chapter 1 discusses hydrocarbon polymers under extremely high pressures and
temperatures. Chapters 2, 8, and 10 explore energetic materials, with Chaps. 8 and
10 exploring reactive conditions. Chapters 4, 5, and 6 explore prebiotic chemistry,
and the role extreme conditions can play in the synthesis of simple, life-building
precursors. Chapters 3 and 8 focus on the chemistry of simple organics under shock
compression conditions. Chapter 9 investigates the reactivity of jet fuels under
combustion (high temperature) conditions.
Ultimately, we wish for this volume to be a useful pedagogical tool for a wide
variety of researchers in extreme physics and chemistry. Though our specific
emphasis has been on elevated conditions, the work presented here can be gener-
alized very easily to other materials, pressures, and temperatures. Our collective
efforts can be broadly applied to any number of scientific efforts spanning many
different types of compounds and reactive conditions. Thus, the aim for our book is
to be impactful for any research area that relies on atomistic simulation approaches
to guide and elucidate experimental studies and materials discovery.
1.1 Introduction
For foams, the use of classical molecular dynamics (MD) simulations with reactive
force fields has given new insights into the formation of hotspots and other aspects
of shock dynamics in heterogeneous materials.
The outline of the chapter is as follows. The first section discusses first-principles
simulations using density functional theory (DFT), including post-processing to
analyze the chemical composition of the material. The second section presents
classical MD simulations of hydrocarbon polymers and foams; we show how the
shock propagates in the foam and how local hotspots are formed. In the third
section, we discuss Z experimental results on PMP plastic and demonstrate how Z
and simulations together give a complete picture of the PMP shock response. The
chapter concludes with a summary and conclusions.
e2 − e1 = ½ (P2 + P1)(v1 − v2)    (1.3)
where e is the specific internal energy, P is the pressure, and ρ is the density. The
subscripts 1 and 2 designate the initial and final states, respectively. The result is a
thermodynamic relationship between pressure, specific volume (v = 1/ρ), and internal
energy of the final and initial states: the so-called Rankine–Hugoniot relationship
(1.3) [6].
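As a concrete illustration, the Rankine–Hugoniot energy relation (1.3) can be evaluated directly. The sketch below uses SI units; the numbers are illustrative, not experimental values.

```python
def hugoniot_energy_jump(p1, v1, p2, v2):
    """Specific internal energy jump across a shock, Eq. (1.3):
    e2 - e1 = (1/2) * (p2 + p1) * (v1 - v2)."""
    return 0.5 * (p2 + p1) * (v1 - v2)

# Illustrative values only: a 100 GPa shock compressing a material
# from v1 = 1.0e-3 to v2 = 0.5e-3 m^3/kg (density doubles, v = 1/rho).
p1, p2 = 0.0, 100e9       # pressures, Pa
v1, v2 = 1.0e-3, 0.5e-3   # specific volumes, m^3/kg
de = hugoniot_energy_jump(p1, v1, p2, v2)
print(de)  # energy deposited by the shock, J/kg (25 MJ/kg here)
```

Note that the relation involves only the end states, which is what makes the control-volume analysis of Fig. 1.1 possible without resolving the shock front itself.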
Fig. 1.1 Shock wave in a material: the shock is supersonic and causes a discontinuous transition
from thermodynamic state 1 to state 2. The shock velocity is us and the net particle velocities before
and after the shock are u1 and u2. The initial and final states are linked by conservation laws; the
analysis can, for example, be made using a thermodynamic control volume
4 T. R. Mattsson et al.
Fig. 1.2 The end states of shock compression of polyethylene [9] (black) and three release paths
from different initial shock pressures. Note the shoulder in the Hugoniot between 100 and 150
GPa caused by dissociation of the polymer. Above 150 GPa, there are no longer any molecular
fragments; see Fig. 1.7
Fig. 1.3 Obtaining thermodynamic information from a long MD simulation. The moving average
is a useful measure of when the periodic oscillations in the simulation no longer influence the
result. A drift toward lower pressure during the simulation, for example, would indicate relaxation
in the system and that a longer simulation time is needed
Fig. 1.4 Species-dependent pair correlation function for ethane at 0.571 g/cm3 and 300 K. The
pair correlation function plateaus at 1.00 because it is normalized to the random probability of
finding an atom in space; without any correlation, the value is 1.00
The next step is to determine the amount of time atoms must be in close proximity
to each other to be considered bonded. To start, we run the tracker on the simulation
in question and vary the C–C bond distance, C–H bond distance, H–H bond distance,
and time requirement.
At 1.5 g/cm3 and 3100 K, we varied the required bond lengths and times in order to
determine that there is no sensitive dependence on any particular cutoff parameter. The
results from this sensitivity study are shown in Fig. 1.6. The hydrogen–hydrogen cutoff is not
shown, as varying it made no difference. Each plot uses a different carbon–carbon bond length,
varied between 1.60 and 1.72 Å. The abscissa is the carbon–hydrogen bond length
and is the same for all figures. Finally, the different colored lines denote different
proximity time requirements, from 60 to 120 fs. In a transient system such as this,
as time goes to infinity, we expect the free carbon and free hydrogen to approach
their atomic ratios (carbon to 33% and hydrogen to 67%). It can also be seen that
there appear to be no clear breakpoints for picking one bond distance over another;
hence, we use the pair correlation functions as a guide. If there were distinct features
in Fig. 1.6, we would need to investigate their cause before continuing.
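The cutoff sensitivity study amounts to a grid sweep over the three parameters. The sketch below mirrors only the scan structure behind Fig. 1.6; `toy_tracker` is a hypothetical stand-in for the real bond tracker, which would re-analyze the MD trajectory for each cutoff choice.

```python
import itertools

def toy_tracker(cc_cut, ch_cut, t_req):
    """Hypothetical stand-in for the bond tracker: the real code would
    reprocess the trajectory with these cutoffs and return the perceived
    free-atom fraction. This toy version simply varies smoothly."""
    return 0.30 + 0.10 * (cc_cut - 1.60) - 0.0005 * (t_req - 60)

def sensitivity_scan(tracker,
                     cc_cuts=(1.60, 1.64, 1.68, 1.72),  # C-C cutoffs, Angstrom
                     ch_cuts=(1.10, 1.20, 1.30),         # C-H cutoffs, Angstrom
                     t_reqs=(60, 80, 100, 120)):         # bond-time requirement, fs
    """Evaluate the tracker over a grid of cutoff choices. Smooth variation
    across the grid indicates no sensitive dependence on any single cutoff."""
    return {(cc, ch, t): tracker(cc, ch, t)
            for cc, ch, t in itertools.product(cc_cuts, ch_cuts, t_reqs)}

results = sensitivity_scan(toy_tracker)
print(len(results))  # 48 grid points
```

In practice one would plot these results, as in Fig. 1.6, and look for breakpoints or discontinuities before settling on final cutoffs.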
We use atom velocity as a guide to select a suitable duration. For this example,
we used the kinetic theory of gases and the ideal gas law as a crude approximation to
the velocity.
(1/2) m v² = (3/2) kB T
where kB is Boltzmann's constant, T is the absolute temperature, m is the mass
(in our case, of a single atom), and v is the average Maxwellian speed. With T =
3100 K, for carbon we get a speed of 2538 m/s, or 0.02538 Å/fs. Therefore, over
100 fs, we can expect a single carbon atom to move about 2.5 Å; hydrogen, lighter
by a factor of about twelve, moves faster by √12 ≈ 3.5, or roughly 9 Å over the same
interval. We assume that for C–H, the atoms must be bonded if the hydrogen
does not rapidly move away from the carbon. Similarly, unless both carbons are
moving in parallel, they would still drift apart quickly enough to fail the time/distance
Fig. 1.6 Change in perceived composition as a function of bond distances for ethane at 1.5 g/cm3
and 3100 K. The changes are smooth, so the selection of a particular bond cutoff does not change
qualitative behavior
constraint. We use the vibrational period as a reference for our bond time calculation.
A carbon–carbon triple bond has a full period of about 18 fs. For a double bond, such
as in ethylene, the period is about 21 fs, and the period of a single bond in ethane
is about 33 fs. The 100 fs time cutoff was chosen to be significantly longer than
multiple vibration periods. The tracking method applies a hard reset to a given pair
interaction: if two atoms move apart farther than the distance cutoff, they
must satisfy the distance constraint for the entire time constraint again before they are
considered bonded. This allows us to better distinguish between a scattering event
and a bond.
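The kinetic-theory estimate above can be checked numerically (a rough estimate, as in the text; physical constants from CODATA):

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def thermal_speed(mass_amu, temperature_k):
    """Speed from (1/2) m v^2 = (3/2) kB T, i.e. v = sqrt(3 kB T / m)."""
    return math.sqrt(3.0 * KB * temperature_k / (mass_amu * AMU))  # m/s

v_c = thermal_speed(12.011, 3100.0)  # carbon at 3100 K
d_c = v_c * 1e-5 * 100.0             # displacement over 100 fs, in Angstrom
print(round(v_c), round(d_c, 2))     # ~2540 m/s and ~2.5 Angstrom, as in the text
```

The conversion factor 1e-5 takes m/s to Å/fs (1 m/s = 1e10 Å / 1e15 fs), so the 100 fs proximity window corresponds to a few ångströms of free flight for carbon.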
1 Simulations of Hydrocarbon Polymers Related … 9
After identifying all of the bonded atoms, we can begin to identify molecules. Our
method uses a recursive algorithm capable of identifying branched molecules beyond
a nearest-neighbor picture. One way to conceptualize this information is as a
three-dimensional construct in which the first two dimensions form a matrix of atom-
to-atom bonds (designated by either a 0 or a 1, though more information can be stored)
and the third dimension is time. The algorithm walks through the time-dependent
upper diagonal, following all bonded pairs until no more bonds are found in that
chain (at which point it starts with the next atom not yet in a chain) or until all atoms
are accounted for.
The recursive nature of the algorithm makes it fast and reliable. The ability to
look beyond nearest neighbors is particularly important for tracking changing
chemistry. In the case of polyethylene, the polymer structure is never recovered on
cooling, but other species may be formed, which helps us to understand how the
system evolves from a metastable polymer to thermodynamic equilibrium along a
given trajectory in the phase diagram.
Figure 1.7 shows the stoichiometry for four strands of polyethylene along the
principal Hugoniot, calculated according to the method described above. This series
of simulations was performed with four (CH2)n strands, each 16 carbon atoms long and
with an extra hydrogen atom at each end to cap the chain and prevent cross bonding
(for a total of 200 atoms). For densities of 1.1 to 1.7 g/cm3, all four strands are intact.
At 2.6 g/cm3, the system is almost entirely an atomic gas. Between 1.7 and 2.6 g/cm3
lies the dissociation regime responsible for the inflection in the Hugoniot. Outside of
this regime, the stoichiometry is easy to calculate, but inside it the variety of species and
molecules is much harder to characterize, and additional constraints are often required,
not on the bond tracking program itself but on the results. For example, one might
Fig. 1.7 Polyethylene stoichiometry along the principal Hugoniot. These simulations used four strands
of 16 carbons and 32 hydrogens along the chain, plus two hydrogens (one at each end) to cap the strand
and prevent cross bonding. Densities from 1.1 g/cm3 through 1.7 g/cm3 show only C16 H34, which
indicates that all four strands are intact. At 2.6 g/cm3, the system is almost entirely an atomic gas
see a C1 and C7 H5 for a few time steps and then see a C2 and C6 H5 for a few time
steps. These are transients that do exist but are difficult to quantify in a meaningful
way given the very small number of atoms present in a DFT simulation. Figure 1.7
shows only the most persistent of the species.
With this tool, we are able to carefully follow the dynamic chemistry of poly-
mer dissociation as a function of shock strength. Accurate accounting of species is
important for understanding the role of bond breaking in release of internal energy,
for comparisons with classical molecular dynamics simulations using reactive force
fields, and for comparison with chemical reactivity models.
Fig. 1.8 Snapshots from initial configurations of molecular dynamics simulations for semicrys-
talline polyethylene (top left), poly(4-methyl 1-pentene), i.e., PMP (top right), and a model PMP
foam with density of 0.30 g/cm3 (bottom)
Fig. 1.9 Classical molecular dynamics Hugoniot response compared to experiments [36] and DFT
data for full-density polyethylene (left) and Poly(4-methyl 1-pentene) (right) [38]
Fig. 1.10 Dissociation, as percent carbon chain, for DFT (20 and 50 fs bond criteria in blue and
red, respectively) and ReaxFF classical MD (green) [38]. The maximum percent fraction is 87.5%
for the C16 polyethylene chains and 95% for the C44 chains shown
While foams and porous materials are frequently useful and necessary in shaping
high-pressure waves, their response is notoriously difficult to characterize. Experi-
mental error bars are usually large due to sample-to-sample foam variation, as well
as to inhomogeneities within individual samples. Issues of sample control combined
with a lack of direct experimental measures of temperature and chemistry make anal-
ysis difficult. Root et al. conducted Z machine shock experiments on PMP foams
with average density of 0.3 g/cm3 [49]. The experimental results were published with
extensive atomistic and continuum simulations exploring the effects of porosity on
average temperature, vaporization/ejecta, and wave propagation profiles. Subsequent
work [30, 31] explored the temporal evolution of void collapse and local temperature
hotspot formation.
A foam Hugoniot comparison between Z machine experiments, MD simulations,
and equation of state (EOS) calculations is shown in Fig. 1.11a. These results show
that the final densities of these compressed systems do not come close to the
compaction that the fully dense polymer samples reach at these pressures. This is because
void collapse in foams leads to significant heating, which causes expansion in
the final states. Because of this, we remain well within the density range for which
ReaxFF has been shown to be reliable.
The Hugoniot curves for PMP foam have two distinct regions. At low pressures,
the foam compacts to approximately four times its initial density, at which point
the curve turns steeply vertical, with little increased density for large increases in
pressure. In both the EOS calculations and the MD simulations, the data are tightly
aligned along near-vertical lines. The experimental data, colored to match the initial
densities of the MD and EOS data, show significant horizontal variation. The
simulations help build confidence that this is due to variation in foam density
across different samples.
Fig. 1.11 Comparison of the shock Hugoniot from classical MD and experiments on Z [48, 49].
Equation of state results are also shown for SESAME 7171 [12, 19]
Figure 1.11b shows the final average temperatures for the same shot data. Experimental
temperatures are not available. The data show the strong dependence of
final temperature on the initial density of the foam. The fully dense polymer temperatures
are almost an order of magnitude lower than those of the foams, and the slope of
temperature increase with pressure grows with decreasing initial density. This
is consistent with the conclusion that increased void size and number leads to larger
hotspots and higher temperatures. One also sees in the plots that the MD temperatures
show the same trends as the EOS temperatures and agree at lower pressures.
However, at higher pressures, the SESAME 7171 values are consistently lower than
the MD temperatures, even when the pressure versus density values agree. This is likely a
result of the fact that the EOS model does not account for polymer dissociation,
which would indicate that the MD temperatures are more dependable. It should,
however, be noted that these temperatures are well above the range usually explored
with classical molecular dynamics.
The molecular dynamics snapshots in Fig. 1.12 demonstrate one significant mech-
anism, which was not observed in full-density polymer shock studies. We see sig-
nificant vaporization and ejecta within the collapsing void. This vaporized material
is produced at pressures and densities well below the onset of dissociation in dense
polymer, because the temperatures in the collapsing voids are high enough to break
bonds. Once the material is volatilized, it can travel ballistically ahead of the shock
front. In closed foams, these ejected particles impact the far side of the void, pre-
heating the un-shocked material. In an open foam, the vaporized particles can travel
far ahead of the shock front through the interconnected void space. The ejected par-
ticles travel at velocities as high as twice the bulk particle velocity. This observation
explains features seen in the wave profiles. In both simulation and experiment, shock
fronts are broader than expected, and the rise time of the front appears to be correlated
to the void size.
These results indicate that classical molecular dynamics can be used effectively
to model Z machine experiments in polymer shock compression. Specifically, with
the right potentials, quantitative agreement can be achieved with DFT and experimental
Fig. 1.12 Snapshots of a propagating shock front showing vaporized ejecta from the collapsing
void. Atoms are colored by velocity in the shock direction. (left) piston velocity of 10 km/s, giving
pressure of ~40 GPa (right), piston velocity of 25 km/s, giving pressure of ~240 GPa. Dissociated
polymer is seen extending ahead of the shock front [31]
results over a useful range of compression. Because MD allows for the
simulation of much larger features than can be captured with DFT, it
can explore the mechanisms and processes associated with larger length-scale structure
in polymers (i.e., long-range order, defects, or voids). Moreover, aspects of the
compression which are difficult or impossible to observe experimentally, including
void collapse and temperature production, are readily studied with MD. For these
reasons, classical molecular dynamics has proven to be a useful tool to augment our
analysis and understanding of Z machine experiments.
In the 1990s, Sandia researchers developed the Z machine as a pulsed power source
for generating X-rays for radiation–material interaction studies and as an X-ray drive
for experiments relevant to inertial confinement fusion studies [40, 52]. In 2007,
the Z machine was refurbished [51], increasing the maximum current delivered to a
target to approximately 20 MA over a pulse length of 100–600 ns, making Z capable of
producing magnetic pressures up to 600 GPa. Early on, researchers [15] determined
that the current pulse could be used to drive a smoothly increasing compression
wave into a target load where the time-dependent magnitude of the pressure loading
is given as
Fig. 1.13 Illustration of the current drive on the anode. The magnetic field penetrates only a fraction
of the flyer; the shockless compression front outruns the magnetic field
where P is the pressure, B(t) is the time-dependent magnetic field, J(t) is the time-
dependent current density in units of amps per unit length, I(t) is the time-dependent
current, and W(t) is a time-dependent scaling factor that depends on the load geometry.
This led to the first shockless compression experiments to measure quasi-
isentropes in materials [15]. From prior work at Sandia on the hyper-velocity launcher
(HVL) gun, which used a shockless compression wave to launch flyer plates [8], the idea
of using the Z machine to launch flyer plates quickly followed [16, 24].
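The magnetic drive pressure follows the standard relation P = B²/(2μ0). The sketch below assumes an idealized planar current sheet with B = μ0 J and folds the geometry factor W(t) into an effective load width; both are simplifying assumptions for illustration, not the actual Z circuit model.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def magnetic_pressure(current_a, width_m):
    """Magnetic drive pressure P = B^2 / (2 mu0), with B = mu0 * J and
    J = I / width (A/m): an idealized planar-sheet approximation."""
    b = MU0 * current_a / width_m   # magnetic field, T
    return b * b / (2.0 * MU0)      # pressure, Pa

# ~20 MA over an assumed 2 cm effective load width (illustrative geometry):
p = magnetic_pressure(20e6, 2e-2)
print(p / 1e9)  # ~630 GPa, the order of the ~600 GPa figure quoted above
```

The quadratic dependence on current is why pulse shaping of I(t) translates directly into shaping of the compression-wave loading history.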
Figure 1.13 shows the basic concept for either shockless compression or flyer plate
experiments on Z. Current is driven in a loop along the anode and cathode through a
shorting cap at the top of the target. This current loop generates a magnetic field between
the cathode and the anode. The combination of current and magnetic field creates a
Lorentz force (F = J × B) that drives a shockless compression wave through the
anode. A sample mounted directly to the anode will be shocklessly compressed as
the compression wave transits into it. If the anode is left with a free surface, just as
in the HVL, then when the shockless compression wave reaches the anode free surface,
the anode is accelerated outward from the cathode. Using a shockless compression
wave minimizes the temperature rise in the anode; however, the high current causes
Joule heating on the inner surface of the anode. This causes the conductivity to drop,
and the current begins to diffuse into the anode, creating a magnetic diffusion
front that propagates behind the compression wave front. The Joule heating melts
and vaporizes the inner surface of the anode [10].
Fig. 1.14 Illustration of a Z coaxial geometry flyer plate experiment. The load is about 5 cm tall
and 1 cm wide
The ability to shape the current pulse along with developments in 1D and 2D
magnetohydrodynamics (MHD) simulations greatly improved the capabilities to use
the Z machine for quasi-isentropic compression experiments [10] and for flyer plate
experiments [32, 33]. By proper tailoring of the current pulse, the flyer plate is
shocklessly accelerated to the desired velocity. Experiments are designed to ensure
a steady shock through the sample and to avoid contamination in the measurement
from release waves reflecting from the Joule-heated, melted portion of the flyer.
Further refinements to the Z target geometry and improvements in MHD and Z circuit
modeling have led to the successful launching of aluminum flyer plates up to around
40 km/s [34].
Figure 1.14 illustrates a rectangular target geometry used for flyer plate experi-
ments. In this geometry, rectangular flyer plates are placed on opposite sides of the
cathode. The flyer plates are typically aluminum, but Z has also used Al/Cu layered
flyers where Cu is the impactor. The distance between the flyer plates and cathode,
called the AK gap, is asymmetric—one side has a larger AK gap than the other
side. This results in two different magnetic pressures that create two different flyer
velocities on each side of the target, and thus, two different Hugoniot states in one
experiment. MHD simulations are used to determine the flight gap so that the flyer
plate reaches a terminal velocity prior to impact while retaining a few hundred
microns of solid-density material on the impact face [32, 33]. The target frame is designed to
hold as many samples as possible to maximize the data return.
The Z machine has been used to study the high-pressure response of the hydrocarbon
polymer poly(4-methyl-1-pentene) up to 985 GPa [50]. Poly(4-methyl-
1-pentene) plastic (PMP) is a CH-based plastic in which the carbon atoms are sp3-
hybridized. PMP is often referred to by its trade name TPX® (Mitsui Chemicals,
Inc.) and is available in several different variations. PMP can be machined and
is transparent, which makes it a valuable window material for shock-release studies
on the Z machine. For the work in [50], the PMP samples were the DX845 variant of
TPX® with a density of 0.833 g/cm3 and a melting temperature of 232 °C. The index
of refraction of PMP is relatively constant over a large range of wavelengths. For the
Z experiments, we use a 532 nm wavelength for our laser interferometry system. At
that wavelength, PMP has an index of 1.461 ± 0.001.
The laser interferometry system at Z is a velocity interferometer system for any
reflector (VISAR) that was originally developed at Sandia National Laboratories in
the late 1960s and early 1970s. Details of the VISAR technique for shock wave
research are found in [2, 3, 14, 17]. The VISAR measures flyer plate velocities and
shock velocities during the Z experiment.
Figure 1.15 shows a schematic illustration of the experimental configuration
along with two sample VISAR measurements. An aluminum flyer plate is accel-
erated toward the PMP samples. Two different sample configurations were used
for measuring the high-pressure response of PMP (inset): the top configuration is a
thick sample of PMP used to measure the Hugoniot. The bottom configuration is a
PMP/quartz window stack used to measure the Hugoniot and the reshock state in
PMP. In the experiment, VISAR tracks the flyer plate trajectory, directly measuring
the flyer plate velocity up to impact. The flyer plates are shocklessly accelerated
to the desired velocity. At impact, a shock is produced in the PMP sample. The
shock strength is high enough that the shock front in the PMP becomes reflective
and the VISAR laser reflects off the shock front. This allows for a direct, precise
measurement of the shock velocity as the shock wave travels through the sample.
Figure 1.15 (red) shows that the shock through the PMP is steady for tens of ns. At
late times in the single PMP sample, the velocity begins to decrease. The velocity
decrease is caused by reflected release waves from the melted portion of the flyer
(Fig. 1.13). In the quartz-backed PMP configuration, a steady wave is observed in
the quartz for tens of ns as well. At later times, a second shock is observed in the
quartz. This is caused by the shock reflected from the PMP/quartz interface and the
PMP/aluminum impactor interface. Again, release waves from the melted portion of
the flyer cause the decrease in the shock velocity.
The advantage of using the Z machine is that it launches solid-density flyers at
the targets. Thus, the flyer plate is well characterized at impact. Combined with
the direct measurement of the shock velocity in the PMP, the Hugoniot state is
easily and accurately determined through impedance matching [6]. A Monte Carlo
impedance matching (MCIM) method [50] is used to calculate the Hugoniot state and
uncertainty. The MCIM method accounts for uncertainty in the measurements, initial
densities, and the uncertainty in the Hugoniot of the flyer plate. Figure 1.16 shows
the compiled Hugoniot data for PMP along with the QMD calculated Hugoniot for
PMP discussed earlier in the chapter.
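A simplified sketch of Monte Carlo impedance matching is shown below: sample the measured shock velocity and flyer velocity within their uncertainties, and solve the pressure/particle-velocity match at the flyer/sample interface. The linear Us–up flyer Hugoniot and the reflected-Hugoniot approximation for the flyer release are simplifying assumptions; the aluminum-like numbers are illustrative, not those of [50].

```python
import random

def impedance_match(us_sample, v_flyer, rho0_s, rho0_f, c0_f, s_f):
    """Match P = rho0_s*Us*up (sample, from the measured shock velocity)
    against the flyer branch P = rho0_f*(c0_f + s_f*(vf - up))*(vf - up)
    (reflected-Hugoniot approximation). Quadratic in up; the physical
    root satisfies 0 < up < vf."""
    a = rho0_f * s_f
    b = -(rho0_f * c0_f + 2.0 * rho0_f * s_f * v_flyer + rho0_s * us_sample)
    c = rho0_f * v_flyer * (c0_f + s_f * v_flyer)
    up = (-b - (b * b - 4.0 * a * c) ** 0.5) / (2.0 * a)
    return up, rho0_s * us_sample * up  # particle velocity (m/s), pressure (Pa)

def mcim(us_mean, us_err, vf_mean, vf_err, rho0_s, rho0_f, c0_f, s_f,
         n=20000, seed=1):
    """Monte Carlo propagation of measurement uncertainties into the
    Hugoniot state: returns mean pressure and its standard deviation."""
    rng = random.Random(seed)
    samples = [impedance_match(rng.gauss(us_mean, us_err),
                               rng.gauss(vf_mean, vf_err),
                               rho0_s, rho0_f, c0_f, s_f)[1]
               for _ in range(n)]
    mean = sum(samples) / n
    std = (sum((p - mean) ** 2 for p in samples) / (n - 1)) ** 0.5
    return mean, std

# Symmetric sanity check: aluminum-like flyer into an identical "sample"
# (rho0 = 2700 kg/m^3, Us = 5350 + 1.34*up m/s, illustrative values),
# for which the matched particle velocity must be vf/2 by symmetry.
up, p = impedance_match(18750.0, 20000.0, 2700.0, 2700.0, 5350.0, 1.34)
print(round(up), round(p / 1e9))  # up = 10000 m/s, ~506 GPa
```

The full MCIM method of [50] additionally accounts for uncertainties in initial densities and in the flyer Hugoniot itself; the structure above shows only the core sampling loop.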
Fig. 1.15 VISAR measured velocity traces from two different experiments on PMP plastics show-
ing the flyer velocity, PMP shock velocity, and the quartz shock velocity. Inset: Configuration for
experiments using the direct shock experiment (top sample) and the shock—reshock experiment
(bottom sample)
Fig. 1.16 Hugoniot of PMP plastic in density–pressure space. The left panel shows the low-pressure
range where there was previous data. The QMD simulations are in excellent agreement with data
over a large range
Figure 1.16 shows the Z experimental data range from 100 to 550 GPa along the
principal Hugoniot. The QMD calculations are in good agreement up to 400 GPa.
Starting at about 150 GPa, the principal Hugoniot steepens dramatically compared to
the Hugoniot below 100 GPa. The sharp upturn suggests dissociation of the molecular
system into an atomic one. Similar behavior along the Hugoniot was observed
in the molecular systems CO2 [48, 49] and ethane (C2 H6) [35].
The bond tracking analysis method discussed earlier was used to assess dissocia-
tion along the Hugoniot. Figure 1.17 shows the results of the bond tracking analysis.
At low shock pressures of approximately 20 GPa, the polymer chain begins to break
down and complex Cx Hy constituents are observed. Increasing the shock pressure
increased the amount of free hydrogen. At a shock pressure of approximately 150
GPa, all H atoms have dissociated from the C chains and the number of free C
atoms begins to increase. Complete dissociation into C and H atoms requires a shock
pressure of approximately 315 GPa.
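A distance-cutoff criterion of the kind typically used in such bond-tracking analyses can be sketched as follows. The coordinates and cutoff radii below are illustrative placeholders, and a production analysis would read MD trajectory frames, apply periodic boundary conditions, and require a bond to persist for roughly a vibrational period before counting it.

```python
import numpy as np

# Hypothetical snapshot: atomic symbols and Cartesian coordinates (Angstrom).
symbols = ["C", "C", "H", "H", "H", "H"]
coords = np.array([
    [0.00, 0.0, 0.0],
    [1.54, 0.0, 0.0],   # a typical C-C single-bond length away
    [-0.6, 0.9, 0.0],
    [-0.6, -0.9, 0.0],
    [2.14, 0.9, 0.0],
    [5.00, 5.0, 5.0],   # a far-away H: counted as a free atom
])

# Hypothetical cutoff radii (Angstrom) for declaring a bond per species pair.
cutoffs = {("C", "C"): 1.8, ("C", "H"): 1.3, ("H", "H"): 0.9}

def bonded_pairs(symbols, coords, cutoffs):
    """Return index pairs closer than the species-dependent cutoff."""
    pairs = []
    for i in range(len(symbols)):
        for j in range(i + 1, len(symbols)):
            key = tuple(sorted((symbols[i], symbols[j])))
            if np.linalg.norm(coords[i] - coords[j]) < cutoffs[key]:
                pairs.append((i, j))
    return pairs

pairs = bonded_pairs(symbols, coords, cutoffs)
bonded = {i for p in pairs for i in p}
free = [i for i in range(len(symbols)) if i not in bonded]
print("bonds:", pairs)
print("free atoms:", [symbols[i] for i in free])
```

Histogramming the resulting CxHy fragments and free atoms frame by frame as the shock pressure increases yields dissociation curves of the kind shown in Fig. 1.17.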
At 315 GPa, the compression factor μ = 1 − ρ0/ρ is μ = 0.68. At 160 GPa,
where the system is primarily atomic C and H with a small percentage (~6%) of
C–C clusters, μ = 0.66. Interestingly, this compression factor of μ = 0.66–0.68
for complete dissociation is consistent with other C–H systems with sp3-hybridized
carbon. Complete dissociation for ethane (C2H6) occurs at μ = 0.69 [35],
for polyethylene at μ = 0.62 [9], and for polystyrene at μ = 0.62 [54]. For each system, the
pressure and temperature at complete dissociation are different, but the compression
factors are all similar. This suggests that the C–C bonds are governed more by
compression than by pressure or temperature, since compression reduces the gap between
bonding and anti-bonding orbitals, making it easier to occupy the anti-bonding states
of the carbon.
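The compression factor is straightforward to evaluate. As a quick check, assuming a nominal initial PMP density of about 0.83 g/cm^3 (a handbook value, not quoted in the chapter), μ = 0.66–0.68 corresponds to roughly threefold compression:

```python
def compression_factor(rho0, rho):
    """mu = 1 - rho0/rho: fractional volume change across the shock front."""
    return 1.0 - rho0 / rho

# Invert the definition: the density reached at a given mu is rho0 / (1 - mu).
rho0 = 0.83                    # nominal initial PMP density, g/cm^3 (assumed)
for mu in (0.66, 0.68):
    rho = rho0 / (1.0 - mu)
    print(f"mu = {mu:.2f} -> rho = {rho:.2f} g/cm^3 "
          f"({rho / rho0:.2f}x compression)")
```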
Hydrocarbon polymers are commonly used in shock physics research, and their
behavior is complex. Strong compression of hydrocarbon polymers involves a series
of physical mechanisms, together resulting in the Hugoniot response shown in
Fig. 1.2. Under progressively stronger shock loading, the hydrocarbon polymers discussed
in this chapter (linear polyethylene and polymethylpentene with side chains)
first respond softly as the void space between polymer chains is compressed, resisted
only by the relatively weak intermolecular van der Waals interactions. During this
phase, the covalent bonds of the polymer remain intact (Fig. 1.7). Once the voids
use of the monks in their devotions. This summary of the first
contents of the library is taken by Hardy from the Gesta Abbatum, a
chronicle of St. Albans.
The literary interests of Paul were, it appears, continued by a large
proportion at least of his successors, and many of these made
important contributions to the library. Geoffrey, the sixteenth abbot,
gave to the scriptorium a missal bound in gold, and another missal in
two volumes, both incomparably illuminated in gold and written in an
open and legible script. He also gave a precious illuminated psalter,
a book containing the benediction and sacraments, a book of
exorcism, and a collectory. (The description is taken from the Gesta.)
Ralph, the seventeenth abbot, was said to have become a lover of
books after having heard Wodo of Italy expound the Scriptures. He
collected with diligence a large number of valuable manuscripts.
Robert de Gorham, who was called the reformer of the liberty of the
Church of St. Albans, after becoming prior, gave many books to the
scriptorium, more than could be mentioned by the author of the
Gesta. Simon, who became abbot in 1166, caused to be created the
office of historiographer. Simon had been educated in the abbey, and
did not a little to add to its fame as a centre of literature. He repaired
and enlarged the scriptorium, and he kept two or three scribes
constantly employed in it. The previous literary abbots had for the
most part brought from without the books added to the collection, but
it was under Simon that the abbey became a place of literary
production as well as of literary reproduction. He had an ordinance
enacted to the effect that every abbot must support out of his
personal funds one adequate scribe. Simon presented to the abbey
a considerable group of books that he had himself been collecting
before his appointment as abbot, together with a very beautiful copy
of the Old and New Testaments.
The next literary abbot was John de Cell, who had been educated
in the schools of Paris, and who was profoundly learned in grammar,
poetry, and physics. On being elected abbot, he gave over the
management of the temporal affairs of the abbey to his prior,
Reymond, and devoted himself to religious duties and to study.
Reymond himself was a zealous collector, and it was through him
that was secured for the library, among many other books, a copy of
the Historia Scholastica cum Allegoriis of Peter Comestor. The
exertions of these scholarly abbots and priors won for St. Albans a
special distinction among the monasteries of Britain, and naturally
led to the compilation of the historic annals which gave to the abbey
a continued literary fame. Hardy is of opinion that these historic
annals date from the administration of Simon, between the years
1166 and 1183.
Roger of Wendover, who succeeded Walter as historiographer,
compiled, between the years 1230 and 1236, the Flores Historiarum,
one of the most important of the earlier chronicles of England. Hardy
points out that so great a work could have been completed within the
term of six years only on the assumption that Roger found available
much material collected by Walter, and it is also probable that other
compilations were utilised by Roger for the
work bearing his name. It is to be borne in mind that the monastic
chronicles were but seldom the production of a single hand, as was
the case with the chronicles of Malmesbury and of Beda. The greater
number of such chronicles grew up from period to period, fresh
material being added in succeeding generations, while in every
monastic house in which there were transcribers, fresh local
information was interpolated until the tributary streams had grown
more important than the original current. In this manner, the
monastic annals were at one time a transcript, at another time an
abridgment, and at another an original work. “With the chronicler,
plagiarism was no crime and no degradation. He epitomised or
curtailed or adapted the words of his predecessors in the same path
with or without alteration (and usually without acknowledgment),
whichever best suited his purpose or that of his monastery. He did
not work for himself but at the command of others, and thus it was
that a monastery chronicle grew, like a monastery house, at different
times, and by the labour of different hands.”
Of the heads that planned such chronicles or of the hands that
executed them, or of the exact proportion contributed by the several
writers, no satisfactory record has been preserved. The individual is
lost in the community.
In the earlier divisions of Wendover’s chronicle, covering the
centuries from 231 down to about 1000, he certainly relied,
says Hardy, upon some previous compilation. About the year 1014,
that narrative, down to the death of Stephen, showed a marked
change in style, giving evidence that after this period some other
authority had been adopted, while there was also a larger
introduction of legendary matter. From the accession of Henry II., in
1235, when the Flores Historiarum ends, Wendover may be said to
assume the character of an original author. On the death of Roger,
the work of historiographer was taken up by Matthew Paris. His
Lives of the Two Offas and his famous Chronicles were produced
between the years 1236 and 1259.
In certain of the more literary of the English monasteries, the
divine offices were moderated in order to allow time for study, and,
under the regulations of some foundations, “lettered” persons were
entitled to special exemption from the performance of certain daily
services, and from church duty.[145]
At a visitation of the treasury of St. Paul's, made in the year 1295,
by Ralph de Baudoke, the Dean (afterwards Bishop of London),
there were found twelve copies of the Gospels adorned, some with
silver, and others with pearls and gems, and a thirteenth, the case
(capsa) containing which was decorated not merely with gilding but
with relics.[146] The treasury also contained a number of other
divisions of the Scriptures, together with a Commentary of Thomas
Aquinas. Maitland says that the use of relics as a decoration was an
unusual feature. He goes on to point out that the practice of keeping
a manuscript in a decorated case not infrequently caused the case
to be more valuable than the manuscript itself, so that it would be
mentioned among the treasures of the church when the book
contained in it was not sufficiently important even to be specified.
The binding of the books which were in general use in the English
monasteries for reference was usually in parchment or in plain
leather. The use of jewels, gold, or silver for the covers, or for the
capsæ, was, with rare exceptions, limited to the special copies
retained in the church treasury. William of Malmesbury in the
account which he gives of the chapel made at Glastonbury by King
Ina, mentions that twenty pounds and sixty marks of gold were used
in the preparation of the Coöpertoria Librorum Evangelii.[147]
The Earlier Monastery Schools.—At the time when
neither local nor national governments had assumed any
responsibilities in connection with elementary education, and when
the municipalities were too ignorant, and in many cases too poor, to
make provision for the education of the children, the monks took up
the task as a part of the regular routine of their duty. The Rule of S.
Benedict had in fact made express provision for the education of
pupils.
An exception to the general statement concerning the neglect of
the rulers to make provision for education should, however, be made
in the case of Charlemagne, whose reign covered the period 768 to
814. It was the aim of Charlemagne to correct or at least to lessen
the provincial differences and local barbarities of style, expression,
orthography, etc., in the rendering of Latin, and it was with
this end in view that he planned out his great scheme of an imperial
series of schools, through which should be established an imperial or
academic standard of style and expression. This appears to have
been the first attempt since the time of the Academy of Alexandria to
secure a scholarly uniformity of the standard throughout the civilised
world, and the school at Tours may be considered as a precursor of
the French Academy of modern times. For such a scheme the
Emperor was dependent upon the monks, as it was only in the
monasteries that could be found the scholarship that was required
for the work. He entrusted to Alcuin, a scholarly English Benedictine,
the task of organising the imperial schools. The first schools
instituted by Alcuin in Aachen and Tours, and later in Milan, were
placed in charge of Benedictine monks, and formed the models for a
long series of monastic schools during the succeeding centuries.
Alcuin had been trained in the cathedral schools founded in York by
Egbert, and Egbert had been brought up by Benedict Biscop in the
monastery of Yarrow, where he had for friend and fellow pupil the
chronicler Bede. The results of the toilsome journeys taken by
Biscop to collect books for his beloved monasteries of Wearmouth
and Yarrow[148] were far-reaching. The training secured by Alcuin as
a scribe and as a student of the Scriptures, the classics, and the
“seven liberal arts” was more immediately due to his master Ælbert,
who afterward succeeded Egbert as archbishop.
The script which was accepted as the standard for the imperial
schools, and which, transmitted through successive Benedictine
scriptoria, served seven centuries later as a model for the first type-
founders of Italy and France, can be traced directly to the school at
York.
Alcuin commemorated his school and its master in a descriptive
poem On the Saints of the Church at York, which is quoted in full by
West.[149] In 780, Alcuin succeeded Ælbert as master of the school,
and later, was placed in charge of the cathedral library, which was at
the time one of the most important collections in Christendom. In one
of his poems he gives a kind of metrical summary of the chief
contents of this library. The lines are worth quoting because of the
information presented as to the authors at that time to be looked for
in a really great monastic library. The list includes a distinctive
though very restricted group of Latin writers, but, as West points out,
the works “by glorious Greece transferred to Rome” form but a
meagre group. The catalogue omits Isidore, although previous
references make clear that the writings of the great Spanish bishop
were important works of reference in York as in all the British
schools. It is West’s opinion that the Aristotle and other Greek
authors referred to were probably present only in Latin versions.
These manuscripts in the York library were undoubtedly for the most
part transcripts of the parchments collected for Wearmouth and
Yarrow by Biscop.
The Library of York Cathedral.