
OPINION PAPER

Challenges of Building Logic Trees for Probabilistic Seismic Hazard Analysis

Julian J. Bommer,a) M.EERI

In the current practice of probabilistic seismic hazard analysis (PSHA), logic
trees are widely used to represent and capture epistemic uncertainty in each
element of the models for seismic sources and ground-motion prediction.
Construction of a logic tree involves populating the branches with alternative
models or parameter values, and then assigning weights, which together must
represent the underlying continuous distribution. The logic tree must capture
both the best estimates of what is known and the potential range of alternatives
in light of what is currently not known. There are several scientific challenges
involved in both populating the logic tree branches (for which new models often
need to be developed) and in assigning weights to these branches. The most
serious challenge facing this field now, however, may be a shortage of suitably
qualified and experienced experts. [DOI: 10.1193/1.4000079]

INTRODUCTION
Probabilistic seismic hazard analyses (PSHA) conducted to define the seismic loading to
be used in the earthquake-resistant design of safety-critical facilities (such as nuclear power
plants) must demonstrate that epistemic uncertainties have been captured in order to provide
regulatory assurance. The tool ubiquitously used for this purpose is the logic tree, first
introduced to the field of PSHA by Kulkarni et al. (1984). Logic trees are in some ways
a cumbersome tool and it is to be hoped, and expected, that superior devices will be devel-
oped in the future. However, for as long as logic trees are being used it is important that their
application is consistent with their intent, and for this reason it is very useful to question their
meaning (e.g., Bommer and Scherbaum 2008, Scherbaum and Kuehn 2011, Musson 2012).
This Opinion Paper addresses a number of practical issues related to the construction of
logic trees for defining the input to PSHA calculations, including correcting a recommenda-
tion made in an earlier publication and also clarifying the views presented in a previous
Opinion Paper that has been interpreted and applied in ways that were not intended. Another
purpose of this current article is to highlight an aspect of logic tree construction that has not,
in my view, received sufficient attention, namely that the process generally involves creating
new models for seismic sources and also for ground-motion prediction. The population of
logic trees with models that represent both the best estimates on the basis of what is known
and alternative models that represent what could occur in light of our lack of knowledge,
requires expertise and experience in both the relevant earth science disciplines and in the
characterization of uncertainty. And as the demand for PSHA studies conducted to the

a) Civil & Environmental Engineering Dept., Imperial College London, South Kensington, SW7 2AZ, U.K.

Earthquake Spectra, Volume 28, No. 4, pages 1723–1735, November 2012; © 2012, Earthquake Engineering Research Institute

highest standards for critical facilities grows, we are facing a potential shortage of suitably
qualified professionals with experience in this field.

THE PURPOSE OF LOGIC TREES IN PSHA


Why do we construct logic trees for the input to PSHA? Regardless of the quantity and
quality of data gathered for a seismic hazard analysis, there will always be epistemic uncer-
tainty associated with the seismic source characterization (SSC) and ground-motion charac-
terization (GMC) models developed for the PSHA. This is because of two simple facts: There
will always be multiple interpretations of the available data and models, many of which can
be technically justified, and the PSHA calculations will invariably include earthquake sce-
narios for which there are no data to constrain the models. In order to represent this uncer-
tainty in the hazard estimates (by displaying the mean hazard curve and associated fractiles),
logic trees that represent the alternative models and parameter values for all the key com-
ponents of the SSC and GMC models are constructed.
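To illustrate the downstream use of these trees, the sketch below (a hypothetical helper, not from any published code) combines end-branch hazard curves and their weights into the mean hazard curve and weighted fractiles mentioned above; the function name and interface are assumptions for illustration only.

```python
import numpy as np

def mean_and_fractiles(branch_curves, weights, fractiles=(0.05, 0.5, 0.95)):
    """Combine end-branch hazard curves into mean and fractile curves.

    branch_curves: (n_branches, n_levels) annual frequencies of exceedance,
    one row per logic-tree end branch; weights: branch weights summing to 1.
    """
    curves = np.asarray(branch_curves, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean_curve = w @ curves  # weighted average at each ground-motion level

    frac_curves = {}
    for f in fractiles:
        vals = []
        for j in range(curves.shape[1]):
            order = np.argsort(curves[:, j])      # sort branches by value
            cum = np.cumsum(w[order])             # cumulative branch weight
            i = min(np.searchsorted(cum, f), len(cum) - 1)
            vals.append(curves[order[i], j])      # weighted f-quantile
        frac_curves[f] = np.array(vals)
    return mean_curve, frac_curves
```

Note that the fractiles are quantiles of the weighted distribution of branch values at each ground-motion level, which is only meaningful if the weights are treated as probabilities, a point taken up in the following sections.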
Scherbaum and Kuehn (2011) summarize succinctly the process of logic tree construc-
tion as follows: “Assigning branch weights to a set of candidate models is equivalent to the
construction of a discrete probability distribution to represent epistemic uncertainties on the
model set,” (p. 1239). In this regard it is worth recalling that for many elements of the PSHA
input, it is often possible to define continuous probability distributions rather than using
discrete branches with alternative models, and this can offer advantages in terms of clarity
of purpose of the logic tree. It is interesting to note that in the original guidelines produced by
the Senior Seismic Hazard Analysis Committee (SSHAC), it was envisaged that epistemic
uncertainty would be represented in this way (Budnitz et al. 1997). For GMC models, it was
envisaged that experts would define distributions on the median values of ground-motion
amplitudes and distributions on the standard deviations associated with the predicted med-
ians. The more widely used practice of developing logic trees in which discrete branches
carry individual ground-motion prediction equations (GMPEs) complicates the process to
some extent, not least because the relationship between weights on models and weights
on ground-motion amplitudes can be obscure. For the purposes of the current discussion,
it is assumed that the focus is on building logic trees with individual branches that at
each node collectively form a discrete approximation to the intended continuous distribution.
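As an illustration of what such a discrete approximation can look like, the sketch below discretizes a normal distribution on a parameter such as maximum magnitude into three branches using the extended Pearson-Tucker scheme of Keefer and Bodily (1983), which places weights of 0.185, 0.63, and 0.185 on the 5th, 50th, and 95th percentiles; the function itself is a hypothetical example, not drawn from the paper.

```python
from statistics import NormalDist

def three_point_branches(mean, sd):
    """Discretize a normal distribution (e.g., on maximum magnitude) into
    three logic-tree branches: the 5th, 50th, and 95th percentiles with
    weights 0.185, 0.63, 0.185 (extended Pearson-Tucker scheme).
    """
    dist = NormalDist(mu=mean, sigma=sd)
    percentiles = (0.05, 0.50, 0.95)
    weights = (0.185, 0.630, 0.185)
    return [(dist.inv_cdf(p), w) for p, w in zip(percentiles, weights)]
```

For a maximum magnitude assessed as normal with mean 7.0 and standard deviation 0.3, this yields branches at roughly Mw 6.5, 7.0, and 7.5, making explicit the link between the continuous distribution the evaluators have in mind and the discrete branches the hazard code actually samples.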
The SSHAC guidelines were essentially focused on how and why epistemic uncertainty
must be considered and captured in PSHA for nuclear facilities. Musson (2012) argues that
the objectives stated in the SSHAC guidelines regarding the capture of uncertainty may not
always be realistic because most projects have limited resources. This is a reasonable argu-
ment and it may well reflect the case in many seismic hazard assessments, but for those
conducted for nuclear power plant sites it becomes imperative to make concerted efforts
to identify and quantify epistemic uncertainties, and to demonstrate that they have been
captured in the logic tree. After the events at Fukushima, Japan in March 2011, I would
contend that the onus to provide both regulatory and public assurance renders indisputable
the requirement for epistemic uncertainty to be well captured in hazard assessments for
nuclear facilities.
The capture of uncertainty can be thought of as building SSC and GMC logic trees that
represent distributions that reflect, in the wording used in the U.S. Nuclear Regulatory

Commission guidelines for implementing SSHAC processes (USNRC 2012, p. 36), “the
center, body, and range of technically defensible interpretations” (or the CBR of the
TDI, to satisfy the insatiable desire of this field for more acronyms). These distributions
are defined by the models or parameter values on the branches of the logic tree, and simul-
taneously the weights assigned to those branches. The key point to make at this stage is that
the discrete distribution represented by the branches must both include the best estimates that
the evaluators can develop for each component of the SSC and GMC models and simulta-
neously capture the range of alternatives (in terms of activity rates, earthquake locations,
maximum magnitudes, ground-motion amplitudes, etc.) that should be contemplated in
view of the limitations of the data and the knowledge currently available.

THE NATURE OF LOGIC TREE BRANCH WEIGHTS


Scherbaum and Kuehn (2011) point out that although logic trees have been very widely
used in recent decades, there has been surprisingly little discussion of what the branch
weights actually represent. A discussion of this issue has begun in recent years, with
some viewing the weights as simply expressions of the relative merit of each alternative
in the assessment of the analyst (Abrahamson and Bommer 2005) and others interpreting
the weights as probabilities (McGuire et al. 2005, Musson 2005). In order to try to move
beyond that specific debate, I can say that I am persuaded by Scherbaum and Kuehn
(2011) that the weights are treated as probabilities downstream, in the representation and
application of the PSHA output. Consequently, I also accept their assertion that if this is
the case, it makes sense for those constructing the logic tree to be cognizant of this fact
from the outset and assign weights in a way that is consistent with their subsequent treatment
as probabilities.
The probabilistic nature of logic tree branch weights has implications for the question of
whether the mean hazard is the most appropriate output to use from a logic tree (Abrahamson
and Bommer 2005, McGuire et al. 2005, Musson 2005). Part of my motivation to yield on the
issue of interpreting branch weights as probabilities is the fact that the mean hazard curve, for
which this is a prerequisite, remains well established as the standard reference for nuclear
applications, to the extent that the debate regarding its use is probably closed for the immedi-
ate future. At the same time, interpretations of the views expressed in Abrahamson and
Bommer (2005) in practice have frequently been reduced to a choice between the mean
and median hazard estimates, which was never the intention of the original paper that trig-
gered the discussion (in fact, we specifically exhorted readers not to draw this conclusion). If
this were the choice, I would unreservedly support the mean unless a defensible rationale
(such as low consequences of failure) was to be provided for choosing the median rather than
a higher fractile.
Returning to the nature of the weights, if one accepts the idea that the weights on the
branches of logic trees are probabilities, then as Musson (2012) states, the following question
arises: probability of what? Scherbaum and Kuehn (2011, p. 1238) make the following state-
ment in this regard: “We therefore see the branch weights as subjective estimates for the
degree-of-certainty or degree-of-belief—expressed within the framework of probability
theory—that the corresponding model is the one that should be used.” Musson (2012) offers
a subtly different, but actually quite radical, view that the weights are the probability of each

model being “the best model available” (p. 1293) carefully pointing out that this does not mean
the probability of the model being true. I can fully accept the definition given by Scherbaum
and Kuehn (2011), and I think it is perhaps the clearest definition that has yet been proposed.
I could also live with the definition of Musson (2012) were it not for the implicit assumption
that for populating the branches of logic trees we are limited to the available models. This is
reflected in the assertion by Musson (2012) that if only one model for a particular feature
were available it should be assigned a weight (probability) of 1.0 since, by definition, in such
circumstances that model would be the best one available. To my mind, this somewhat
ignores the purpose of the logic tree, which is to capture what we know (our best estimate)
and what we do not know (the epistemic uncertainty). If there are insufficient models avail-
able to populate a logic tree that captures epistemic uncertainty, then new models must be
developed as part of the PSHA. The onus is on the hazard analyst to both determine the best
models for the SSC and GMC inputs and the range of uncertainty associated with the models,
and this must be done regardless of how many or how few alternatives happen to have been
published in the technical literature. This issue is discussed in more detail later in the paper.
In passing, a minor issue to note is that it may frequently be the case that the two branches
emanating from a single node will be assigned the same weight, which means that we would
view them as equally likely to be the best model (or, in other words, we have no way of
distinguishing either of them as superior). Another point to note is that if one wishes to adopt
a truly purist interpretation of the weights as probabilities, then the weights on GMC
branches would need to be the conditional probabilities associated with each prediction
model being the best model given the particular SSC logic tree to which the GMC branches
are being added.
Although I think that Scherbaum and Kuehn (2011) have put forward an excellent defi-
nition for logic tree branch weights, I think it may be useful to have a definition that relates to
the whole logic tree, considering all the branches and their associated weights. Such a defini-
tion might state that a logic tree defines the probability distribution of potential ground-
motion amplitudes at a given location due to possible future earthquakes in the region.
By virtue of the nature of sigma in ground-motion prediction (e.g., Strasser et al. 2009),
each GMPE defines a distribution of ground-motion amplitudes for a given earthquake sce-
nario; a logic tree therefore defines a composite distribution due to multiple GMPEs and
multiple earthquake scenarios.
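This composite view can be made concrete with a small sketch (hypothetical code, assuming each GMPE branch is summarized by a weight, a median, and a logarithmic standard deviation): for a single scenario, the probability of exceeding an amplitude z is a weighted mixture of the lognormal distributions implied by the individual GMPEs.

```python
import math
from statistics import NormalDist

_PHI = NormalDist().cdf  # standard normal CDF

def prob_exceedance(z, gmpe_branches):
    """Composite probability that ground motion exceeds amplitude z for one
    earthquake scenario: a weighted mixture of the lognormal distributions
    implied by each GMPE branch. gmpe_branches: (weight, median, sigma_ln)
    tuples, with sigma_ln the aleatory standard deviation in ln units.
    """
    total = 0.0
    for w, median, sigma_ln in gmpe_branches:
        eps = (math.log(z) - math.log(median)) / sigma_ln  # epsilon of z
        total += w * (1.0 - _PHI(eps))
    return total
```

Summing such mixtures over all earthquake scenarios, weighted by their rates, is precisely what the hazard integral does, which is why the whole logic tree can be read as a single composite distribution of ground-motion amplitudes.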

THE MEANING OF LOGIC TREE BRANCH WEIGHTS SET TO ZERO


An interesting point that follows from the discussion in the previous section is how to
interpret a weight of zero assigned to a particular model when constructing a logic tree. The
answer must be conditional on the context in which the question is addressed, one case being
at the time of looking at the available models and deciding which to include in the logic tree.
Musson (2012) addresses the issue from the perspective of his definition of weights as
probabilities of being the best model. In doing so, there is a potential danger in not including
a model simply because we view it as not having any chance of being better than the other
models available. If we only needed to represent our best estimate with the logic tree, this
approach would be fine, but we need to also represent possible alternatives; although Musson
(2012) issues a valid warning against including some models with low weights simply

because those models are available, the challenge to capture the full range of epistemic uncer-
tainty must still be addressed.
Scherbaum and Kuehn (2011), on the other hand, state that a branch weight of zero “must
correspond to the case in which the analyst is certain that the corresponding model should not
be used” (p. 1238). If we have already populated the branches of our logic tree and are now
testing these models against data, then we may indeed assign zero weights to those models
which simply appear incompatible with local or regional recordings or observations. In
essence, these zero weights correspond to pruning these branches from the logic tree. How-
ever, in the context of considering available models that might be included, it is perfectly
feasible that several models (for example, published GMPEs) will be effectively assigned
weights of zero not because of a belief that they should not be used but simply because
they are not needed in order to construct the distribution of uncertainty. By way of illustra-
tion, suppose a PSHA is being conducted for a region that is not unambiguously defined as
stable or active, and consequently the logic tree is being populated with GMPEs from both
western and eastern North America. In such a case, the spread of the model predictions may
be so large that the relatively smaller differences among the NGA (Next Generation of
Attenuation) models (Abrahamson et al. 2008) could make it unnecessary to include all
five of those GMPEs.

OPTIONS FOR ASSIGNING WEIGHTS


On the question of how weights should be assigned to the branches of a logic tree, par-
ticularly if the weights are to reflect their assumed probabilistic nature, Scherbaum and
Kuehn (2011) have raised some important points. One of these is a fully justified criticism
of the use of grading matrices, in which rankings are assigned to alternative models according
to several different criteria, and then summed or, as was proposed by Bommer et al. (2005),
multiplied, and finally normalized to yield values summing to unity (consistent with their
subsequent treatment as probabilities). As lead author of the paper that proposed the use of
grading matrices, I am obliged to acknowledge, without reservation, the flaws that Scher-
baum and Kuehn (2011) identify in such an approach to assigning weights, and I would
discourage hazard analysts from applying grading matrices of any kind to the development
of logic tree branch weights.
Now, having publicly recanted, I should explain why the grading matrix approach was
proposed in the first place. To put it simply, it seemed like a good idea at the time. The
approach emerged from a project that began a decade ago, and the GMPEs available to
us at the time were not of the same caliber as those at our disposal today. In building a
logic tree from the mixed bag of GMPEs available to hazard analysts conducting a
PSHA for a region that was neither eastern nor western North America, many adjustments
were made to render the GMPEs compatible in terms of the parameter definitions they
used (and those that underwent several adjustments were often down-weighted to reflect
how ‘contaminated’ they may have become in the process). Moreover, weights varied
with magnitude and distance bins reflecting varying constraint of each model for different
ranges of these parameters. On top of this, the grading scheme was designed to separate our
assessments regarding the inherent quality of each equation from our judgments regarding
how appropriate each equation might be to the application at hand. Happily, ten years later

such complications can generally be avoided, partly because GMPEs are generally derived
using similar parameter definitions, and this obviates the need to apply adjustments. More-
over, it has become possible to apply quality criteria to the selection of, rather than the
weighting of, models (e.g., Bommer et al. 2010). The grading matrix approach can now
be laid to rest: weights can be assigned to GMPEs by addressing only the key
issue of the analyst’s degree-of-belief in each equation being the most appropriate for
the region and site being studied.
Within this new perspective (for GMC modelers), Scherbaum and Kuehn (2011) recom-
mend that weights be applied to branches sequentially, starting with the probability the ana-
lyst wishes to associate to the model in which he or she has greatest confidence. There is no
doubt that sequential (as opposed to parallel) weighting facilitates genuinely treating the
weights as probabilities throughout the process. In such an approach, which Scherbaum
and Kuehn (2011) also refer to as the stick-breaking method, one begins by assigning a
weight (P) to the most favored model, then the weights assigned to all the other branches
must be taken from the remainder (1-P). I suspect, however, that an analyst will naturally take
cognizance of how many branches are going to share the (1-P) weights, and P will likely be
set smaller if the remainder is to be spread over a large number of alternative branches (in the
same way that the central value of a three-point approximation of a given distribution will
differ from the central value of a seven-point approximation of the same distribution). If this
is the case, then it might be that it is not possible to avoid some element of parallel weighting.
Scherbaum and Kuehn (2011) acknowledge this to some degree by accepting that the process
might need to be iterative, with several sequences applied (presumably with different starting
weights on the most favored model) to the logic tree until the analyst feels that the distribution
reflects the full degree of epistemic uncertainty intended (in other words, avoiding unjusti-
fiably narrow distributions). I do not have any clear proposal as to how it should be achieved,
but I would argue that we need to find a way to consider population of the branches and
assigning the weights in unison, to create the intended distribution. Encouragingly, new
ideas for assigning branch weights that follow the rules of probability calculus are already
emerging (Runge and Scherbaum 2012).
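The stick-breaking procedure described above can be sketched as follows (a minimal illustration, not code from Scherbaum and Kuehn): each input is the fraction of the still-unassigned weight given to the next-most-favored model, and the last model takes whatever remains, so the weights automatically sum to one.

```python
def stick_breaking_weights(conditional_fractions):
    """Sequential ('stick-breaking') weight assignment: each entry is the
    fraction of the remaining weight given to the next-most-favored model;
    the final model receives whatever remains, so the weights sum to 1.
    """
    weights, remainder = [], 1.0
    for frac in conditional_fractions:
        w = frac * remainder
        weights.append(w)
        remainder -= w
    weights.append(remainder)  # final branch takes the rest of the stick
    return weights

# e.g., stick_breaking_weights([0.4, 0.5]) gives weights 0.4, 0.3, 0.3
```

The iteration suggested above would correspond to rerunning this with different starting fractions until the resulting discrete distribution is judged wide enough.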
One of the arguments that Scherbaum and Kuehn (2011) advance in favor of sequential
weighting is that following other approaches (parallel weighting) could render the mean
hazard highly insensitive to the actual weights, as was found by Sabetta et al. (2005)
and Scherbaum et al. (2005). Clearly this insensitivity implies that there is a danger of
the hazard analyst feeling confident about having captured epistemic uncertainty simply
by including many branches populated by several available models. The use of visualization
tools such as those proposed by Scherbaum et al. (2010) could assist here because the insen-
sitivity to branch weights will be heightened if several of the models cluster in ground-motion
space (i.e., predict broadly similar levels of motion). This would help to clarify the often
obscure relationship between weights on models and the resulting distribution of ground-
motion amplitudes, and ultimately on hazard estimates. One way this obscurity could be
circumvented entirely would be to construct the GMC logic tree by adopting or creating
a backbone GMPE as the central branch and then creating other branches carrying other
GMPEs obtained by scaling up and down the predicted values from the first equation
(e.g., Bommer and Scherbaum 2008). This approach, which may appear unorthodox to
some, has already been used in practice, as noted in the next section.
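A minimal sketch of the backbone idea, with an assumed interface (the function names and the choice of log-space offsets are hypothetical, not taken from any of the cited studies), might look like this: each branch simply multiplies the backbone median by a fixed factor, applied in natural-log space so that the branches are symmetric about the backbone.

```python
import math

def backbone_branches(backbone_median, ln_offsets, weights):
    """Build GMC branches by scaling a backbone GMPE's median predictions
    up and down in natural-log space. backbone_median(scenario) returns the
    backbone median for a scenario; each (ln_offset, weight) pair defines
    one scaled branch.
    """
    def make_branch(ln_offset):
        return lambda scenario: backbone_median(scenario) * math.exp(ln_offset)
    return [(make_branch(d), w) for d, w in zip(ln_offsets, weights)]
```

With, say, offsets of -0.3, 0.0, and +0.3 ln units, the relationship between branch weights and the implied distribution of ground-motion amplitudes is immediately transparent, which is exactly the obscurity this approach is intended to remove.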

BUILDING THE BRANCHES OF A LOGIC TREE


The development of logic trees for input to PSHA calculations, in the SSHAC frame-
work, consists of two phases: evaluation of the available data, methods, and models, followed
by integration of data, methods, and models into a distribution that captures the CBR of the
TDI. The integration phase will often involve more than simply populating the branches of a
logic tree with existing models and then assigning weights to these branches; it generally
involves building new models to populate the branches. This is generally the case for
SSC logic trees, since multiple seismic source characterization models suitable for a
site-specific analysis will rarely already be available; I acknowledge that
there are cases of simply adopting existing source zonations, sometimes generated for hazard
mapping purposes, as the basis for site-specific studies, but this is not good practice. Con-
sequently, the SSC analysts compile, collect, process, and analyze instrumental and historical
earthquake catalogues and geological and geophysical data, and develop models for seismic
sources that characterize the possible locations and recurrence rates of future earthquakes of
different magnitudes. Although it seems to be widely accepted that constructing an SSC logic
tree will generally involve building new models, I suspect that it is less commonly appre-
ciated that this is very often the case for GMC logic trees as well.
In the case of GMC logic trees, for the simplest hazard studies it may be the case that the
branches are populated with a selection of published GMPEs and the hazard analyst then
invests her or his efforts in assigning weights to these branches. However, for more serious
applications, and particularly those conducted for nuclear facilities, such an approach is unli-
kely to be considered acceptable. For example, ground motions from crustal earthquakes in
California are now estimated using the NGA GMPEs (Abrahamson et al. 2008), which were
derived from a common strong-motion database and which yield similar predictions for sce-
narios well represented in the database. Moreover, all of the data for earthquakes of magni-
tude greater than Mw 7.2 come from regions outside California (Alaska, Iran, Turkey, and
Taiwan), with the exception of a single recording from the 1952 Kern County event. In order
to compensate for this possible underestimation of epistemic uncertainty in PSHA for the
Diablo Canyon nuclear power plant, additional branches were added to the GMC logic
tree by creating new equations obtained by scaling the predictions from the NGA model
up and down by factors related to the sparseness of the database in different magnitude-dis-
tance bins; a similar procedure was used for the logic tree for crustal seismicity in the western
United States for the 2008 national hazard maps (Petersen et al. 2008).
Even more pertinent examples come from projects affected by subduction-zone earth-
quakes. Musson (2012) claims that a logic tree for ground-motion predictions due to sub-
duction earthquakes constructed in 2001 would necessarily consist of a single branch—
consequently with a weight of 1.0—carrying the only subduction GMPE for response spec-
tral ordinates published at that time, namely Youngs et al. (1997). Again, if one is limited to
using only the available models, then the assertion is reasonable (apart from the fact that it
overlooks the Crouse (1991) model for subduction earthquakes), but one would have to
accept that in such a case no attempt is being made to capture epistemic uncertainty. In
the PSHA conducted for the U.S. Department of Energy Hanford site in the 1990s, the
Crouse (1991) and Youngs et al. (1997) equations were both included in the GMC logic
tree, together with two Japanese equations for PGA from subduction earthquakes that

were used to scale average spectral shapes from the other two models. More recently, in the
BC Hydro SSHAC Level 3 PSHA, for which several GMPEs were available for predicting
spectral accelerations due to subduction earthquakes (e.g., Arango et al. 2012), it would have
been possible to construct a logic tree from published GMPEs without neglecting epistemic
uncertainty. However, the GMC technical integrator (TI) team in the BC Hydro project con-
cluded from their evaluation that none of these models was suitable because none was up to
date, and proceeded to compile a global database of recordings from subduction earthquakes
to derive a new GMPE. Additional branches for the GMC logic tree were created by model-
ing alternative magnitudes at which the scaling changed for the largest events, and by scaling
the median predictions in proportion to the range of average between-event residuals for
recordings from the Cascadia subduction zone.
New GMC models may be created through regressions on strong-motion data sets
assembled as part of the project, through stochastic simulations using parameters obtained
from weak-motion inversions (e.g., Boore 2003), or by making hybrid-empirical adjustments
(Campbell 2003). Even if a project does not derive new equations in one of these ways, the
GMC component of a PSHA is still likely to involve the creation of new models even if only
through the application of adjustments to make the existing equations more applicable to the
site (e.g., Cotton et al. 2006, Van Houtte et al. 2011) or to create additional versions of the
models to better cover the range of epistemic uncertainty (e.g., Petersen et al. 2008).
The apparent lack of appreciation of the general need to develop new GMC models as
well as new SSC models in PSHA studies for critical facilities may partly have been the result
of the highest profile studies to date—Yucca Mountain (Stepp et al. 2001) and PEGASOS,
which is the acronym for the German name of the project (Abrahamson et al. 2002, Renault
et al. 2010)—being conducted as Level 4 studies. In an SSHAC Level 4 study, the evalua-
tions and integrations are performed by a panel of experts making individual assessments that
are coordinated by a technical facilitator/integrator (TFI). New models may be commissioned
from contractors, as happened in the Yucca Mountain study, or created by virtue of applying
adjustments and extensions to existing equations, as happened in the PEGASOS and
PEGASOS Refinement Projects, but this organizational structure is not conducive to the
members of the ground-motion panel working together to develop new models for the
logic tree branches. In an SSHAC Level 3 process, where the TI teams work to develop
a single consensus SSC and GMC logic tree, new models are likely to be the norm, as
was the case in the BC Hydro PSHA. SSHAC Level 3 PSHA studies underway currently
for the Diablo Canyon plant in California, for the Hanford site in Washington, and for a new-
build site in South Africa are all likely to see the TI teams developing new models.
Musson (2012) effectively states, quite reasonably, that such endeavors to create new
models may be beyond the scope of hazard studies conducted over short schedules and
with limited resources, for those applications in which such hazard assessments are consid-
ered acceptable. However, if SSHAC Level 3 PSHA studies are to be conducted for safety-
critical facilities at multiple sites within a country or region, then the benefits could be spread
to other applications by conducting SSHAC Level 3 studies to develop regional SSC and
GMC models, which can then be refined for site-specific studies (adding any local seismic
sources and modifying the ground motions for the near-surface geo-materials) using a Level
2 study (e.g., Coppersmith and Bommer 2012). This is the rationale behind the recently

completed CEUS SSC (Central and Eastern United States Seismic Source Characterization
for Nuclear Facilities) Project, which will no doubt also be used as input to regional and site-
specific hazard assessments for non-nuclear applications.

SEEING THE FOREST THROUGH THE (LOGIC) TREES


A peril to be faced in any PSHA earnestly trying to capture epistemic uncertainty is that
the logic trees can become unwieldy and the number of branches can begin to obscure the
underlying distributions. This is particularly the case in SSHAC Level 4 projects, where each
expert creates his or her own logic tree and these are then assembled into a single structure. In
the PEGASOS project, which had expert panels for SSC, GMC, and site response, the final
number of branch combinations reached the order of 10^26! A complex logic tree may not
necessarily mean effective capture of epistemic uncertainty, even if it superficially creates
this impression. At least at the early stages of a project, there may be merit in keeping the
logic trees relatively simple, so that large numbers of sensitivity analyses can be conducted to
provide meaningful insight for the SSC and GMC experts to ascertain the key issues in terms
of impact on the hazard. In an SSHAC Level 3 study, this might apply to the preliminary SSC
and GMC models constructed for hazard runs that provide feedback to the experts on sen-
sitivities at the final workshop. In projects of a more limited scope, where the resources do not
support the development of new models, sensitivity calculations are also important. In such
cases, measures that facilitate the performance and interpretation of more sensitivity analyses
are valuable, an example of which might be eliminating the need to generate virtual faults
within area source zones (e.g., Bommer and Akkar 2012), at least for those sources that are
more remote from the site.
A useful device to avoid excessively complex logic trees is to bear in mind what is being
represented: for the SSC logic tree the underlying possible spatial distributions and rates of
future earthquakes of different magnitudes, and for the GMC logic tree the distributions of
ground-motion amplitudes from these earthquakes. The implied distributions from the logic
trees can be explored and visualized independently. The purpose of the logic tree is not to
sample the full range of possible methods and models. For example, a decision to include
alternative branches for both area source zones and smoothed seismicity in an SSC logic tree
should be taken on the basis of uncertainty about the spatial stationarity of seismicity, not
simply because there are two different approaches available for modeling seismicity not
explicitly associated with fault sources. What matters is the resulting distribution of earthquakes
in the PSHA calculations, not the number of modeling techniques that are represented in the
formulation of the logic tree.
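The point can be made concrete with a minimal sketch (the branch medians, sigmas, and weights below are invented for illustration): however many GMC branches the tree contains, their weighted combination implies a single mixture distribution of ground-motion amplitude for a given scenario, and that implied distribution can be summarized directly.

```python
import math

# Three hypothetical GMPE branches for one scenario, each giving a median
# PGA (g) and a log-standard deviation. The logic tree implies a weighted
# mixture distribution, which can be explored independently of how many
# branches produced it.
branches = [
    # (weight, median_pga_g, sigma_ln) -- illustrative values only
    (0.5, 0.15, 0.55),
    (0.3, 0.20, 0.60),
    (0.2, 0.10, 0.65),
]
assert abs(sum(w for w, _, _ in branches) - 1.0) < 1e-12

def mixture_exceedance(a):
    """P(PGA > a) under the weighted mixture implied by the logic tree."""
    total = 0.0
    for w, med, sig in branches:
        z = (math.log(a) - math.log(med)) / sig
        total += w * 0.5 * math.erfc(z / math.sqrt(2.0))
    return total

# Implied median of the mixture, found by bisection on the exceedance
# probability: this single curve is what the tree represents, regardless
# of the number of branches behind it.
lo_a, hi_a = 1e-4, 10.0
for _ in range(60):
    mid = math.sqrt(lo_a * hi_a)  # bisect in log space
    if mixture_exceedance(mid) > 0.5:
        lo_a = mid
    else:
        hi_a = mid
print(f"implied median PGA ~ {lo_a:.3f} g")
```

Plotting `mixture_exceedance` over a range of amplitudes is one simple way to visualize the distribution the GMC logic tree actually encodes, as suggested in the text.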

THE EXPERTS NEEDED FOR THE JOB


The challenge of identifying and critically evaluating all available data, methods, and
models as the first stage of an SSHAC Level 3 PSHA suggests that the participants must
necessarily possess expertise in the relevant technical fields (geology, geophysics, seismology, ground-motion prediction, etc.). The level of expertise required is emphasized even more
by the integration phase of the work, for which the team members require not only extensive
knowledge of the relevant subject matters but also a deep appreciation of how different ele-
ments of SSC and/or GMC models influence the PSHA results. If one accepts the idea that
hazard analysis will generally entail the development of new models, then the level of
technical expertise required in the relevant subjects becomes even more demanding. This
expertise is acquired through both academic study and practical experience, benefits
from some engagement with research, and requires continuous updating through participa-
tion in scientific meetings and contributions (as authors and/or reviewers) to the literature.
The global pool of experts in these fields (as they are applied to PSHA) is relatively small,
and the various projects mentioned earlier have absorbed a significant proportion of those
available for engagement in high-level seismic hazard studies. If the global nuclear renais-
sance continues to advance, and with the expectations and requirements for the rigor afforded
by Level 3 and 4 assessments raised by the impact of the Tohoku earthquake at Fukushima,
then there is going to be a genuine challenge to assemble teams for all of the studies that
might be undertaken. A positive move to address this issue is the involvement of several
young people in active roles in most of the projects that have been mentioned, which provides
training and preparation.
In passing, it is worth noting that there are two vital roles in an SSHAC Level 3 study: the evaluator-integrator role covered by the TI team, and the participatory peer review panel (PPRP), which undertakes continuous review of the process and
the technical bases for all decisions throughout the project. Membership of a PPRP requires
detailed technical knowledge in key subject matters (collectively, the panel must cover all
relevant disciplines), extensive experience in PSHA, and a thorough understanding of the
SSHAC process. There is as much need to expand the pool of experts for this role as
for populating the TI teams on SSHAC Level 3 projects.
However, the expertise of the TI team and PPRP members in earth sciences and PSHA
may not be sufficient by itself, since ultimately the SSHAC Level 3 process is defining logic
trees that represent distributions that will be used as probability distributions. Scherbaum and
Kuehn (2011) raise the issue of the need for those constructing logic trees to possess what
they call “normative expertise,” questioning whether this might be “often outside the range of
competence and/or interest of hazard analysts” (p. 1239). These philosophical aspects have
clearly not been a focus of interest for many seismic hazard practitioners, and it is a useful
contribution of Scherbaum and Kuehn (2011) to bring attention to these often neglected
issues. However, whether this is beyond the competence of those who conduct seismic
hazard analyses really depends on the degree of normative expertise that is required.
I would assert that what is needed for PSHA is a good grasp of the fundamentals of probability and a working appreciation of their application, in which case, with modest effort, all
hazard analysts should be able to equip themselves with the required knowledge. Scherbaum
and Kuehn (2011) assert only that normative expertise needs to be present in PSHA projects
and rigorously applied, rather than requiring all hazard analysts to become normative experts.
I would contend that all hazard analysts should become conversant with the axioms of prob-
ability since they will be assigning logic tree branch weights. Scherbaum and Kuehn (2011)
provide a useful definition for what is meant by normative expertise: the ability “to design
unbiased and logically consistent probability estimates on models” (p. 1246). When technical
experts are assembled for an SSHAC Level 3 project, the early stages (and the first sessions
of most workshops) will include ‘training’ on the SSHAC process, the responsibilities and duties assigned to each role, and the perils of cognitive bias. These preparatory sessions
may also include the basics of PSHA, and, taking the message of Scherbaum and Kuehn
(2011) on board, I would argue that fundamentals of probability should also be added to
the core curriculum of the general training on SSHAC procedures and seismic hazard
analysis.
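As a minimal illustration of the kind of normative check involved (the tree structure, branch names, and weights below are invented for illustration), one can verify that a nested logic tree satisfies the probability axioms at every node, so that the products of weights along each path form a valid probability distribution over the end branches:

```python
# A logic tree as nested dicts: each node maps alternative branches to
# (weight, subtree-or-leaf). A logically consistent tree needs non-negative
# weights summing to one at every node.
tree = {
    "zonation_A": (0.6, {
        "Mmax_6.5": (0.7, "leaf"),
        "Mmax_7.0": (0.3, "leaf"),
    }),
    "zonation_B": (0.4, {
        "Mmax_6.8": (0.5, "leaf"),
        "Mmax_7.2": (0.5, "leaf"),
    }),
}

def path_probabilities(node, prefix=(), p=1.0):
    """Validate weights at each node and return {branch path: probability}."""
    weights = [w for w, _ in node.values()]
    assert all(w >= 0 for w in weights), "negative weight"
    assert abs(sum(weights) - 1.0) < 1e-9, "node weights must sum to one"
    out = {}
    for name, (w, child) in node.items():
        if child == "leaf":
            out[prefix + (name,)] = p * w
        else:
            out.update(path_probabilities(child, prefix + (name,), p * w))
    return out

probs = path_probabilities(tree)
assert abs(sum(probs.values()) - 1.0) < 1e-9  # a proper distribution
for path, prob in sorted(probs.items()):
    print(" -> ".join(path), f"= {prob:.2f}")
```

Checks of this kind are mechanical; the harder normative question, as Scherbaum and Kuehn (2011) emphasize, is whether the weights were elicited in a way that makes treating them as probabilities defensible in the first place.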

CONCLUDING REMARKS
Building logic trees for PSHA involves populating the branches with models (or para-
meter values) and their associated weights, so that the two together yield the distribution of
the predicted feature (seismicity distributions and rates or ground-motion amplitudes) that
reflects the center, body, and range of technically defensible interpretations. Ultimately,
we need to create logic trees that will capture what we do know in terms of our best estimates
and what we do not know about the parameters being predicted, as discrete approximations to
continuous distributions. High-level PSHA studies for critical facilities involve exciting tech-
nical challenges, but there is now also an urgent need to expand the pool of experts available
to conduct such studies. The experts need in the first instance to be established in one or more
of the disciplines required to build SSC or GMC models for PSHA, and then acquire experi-
ence in PSHA as well as specifically of participation in SSHAC Level 3 or 4 projects. Under-
standing of cognitive bias, the fundamentals of probability theory, the engineering
application of earthquake ground motion in design and fragility analysis, and nuclear reg-
ulation also need to be acquired. Clear guidelines on how to implement SSHAC Level 3
processes are now available (USNRC 2012) and these will no doubt be refined and updated
in the light of experience; the urgent challenge now is to develop a much broader pool of
experts to make the implementation possible. Bringing young and enthusiastic scientists and
engineers into major PSHA projects, to acquire the necessary experience, is essential if the
demands of future projects are to be met.
This is the seventh Opinion Paper related to the use of logic trees in PSHA to appear in
Earthquake Spectra since 2005. To my mind, this confirms that while logic trees have been
widely and effectively deployed in seismic hazard analyses, some of the more subtle aspects
of their construction and application have not been considered in sufficient depth and, consequently, the state of the practice is still being defined. I hope that others will feel inspired—
or provoked—to enter the debate and bring alternative perspectives and ideas to these
exchanges in order to enrich and broaden the conclusions that will eventually emerge.
Another contribution that would be of great benefit would be for the innovations and
ideas developed for the construction of logic trees in SSHAC Level 3 and 4 PSHA projects
to be published in the mainstream literature (to the degree that confidentiality requirements
allow) in order to share experience and developments beyond the teams of participants in each
of these projects.

ACKNOWLEDGMENTS
I am grateful to all those with whom I have enjoyed stimulating discussions of these
topics and whose ideas and questions have challenged my own views and therefore contrib-
uted to their evolution, especially Norm Abrahamson and the experts in the PEGASOS and
PEGASOS Refinement Projects. In this regard, special mention is due to Frank Scherbaum,
with whom I am in violent agreement on many issues and with whom I truly enjoy debating
our minor differences of opinion (and invariably learn a great deal in the process). My sincere
thanks are also due to Roger Musson for providing a preprint of his paper and for continuing
these discussions in a friendly and constructive spirit. I am very grateful to Peter Stafford for
constructive feedback on the first draft of this paper and for excellent insights and suggestions
regarding aspects of probability, and to Frank Scherbaum and Nico Kuehn for stimulating
discussions triggered by an early version of this manuscript. Frank Scherbaum, Roger
Musson, and an anonymous reviewer all provided constructive and encouraging reviews
of the original submission, which are gratefully acknowledged.

REFERENCES
Abrahamson, N., Atkinson, G., Boore, D., Bozorgnia, Y., Campbell, K., Chiou, B., Idriss, I.M.,
Silva, W., and Youngs, R., 2008. Comparison of the NGA ground-motion relations. Earth-
quake Spectra 24, 45–66.
Abrahamson, N. A., Birkhauser, P., Koller, M., Mayer-Rosa, D., Smit, P., Sprecher, C., Tinic, S.,
and Graf, R., 2002. PEGASOS: A comprehensive probabilistic seismic hazard assessment for
nuclear power plants in Switzerland, Paper No. 633, Twelfth European Conference on Earth-
quake Engineering, 9–13 September 2002, London, England.
Abrahamson, N. A., and Bommer, J. J., 2005. Probability and uncertainty in seismic hazard ana-
lysis, Earthquake Spectra 21, 603–607.
Arango, M. C., Strasser, F. O., Bommer, J. J., Cepeda, J. M., Boroschek, R., Hernandez, D. A.,
and Tavera, H., 2012. An evaluation of the applicability of current ground-motion models to
the South and Central American subduction zones, Bulletin of the Seismological Society of
America 102, 143–168.
Bommer, J. J., and Akkar, S., 2012. Consistent source-to-site distance metrics in ground-motion
prediction equations and seismic source models for PSHA, Earthquake Spectra 28, 1–15.
Bommer, J. J., Douglas, J., Scherbaum, F., Cotton, F., Bungum, H., and Fäh, D., 2010. On the
selection of ground-motion prediction equations for seismic hazard analysis, Seismological
Research Letters 81, 794–801.
Bommer, J. J., and Scherbaum, F., 2008. The use and misuse of logic-trees in probabilistic seis-
mic hazard analysis, Earthquake Spectra 24, 997–1009.
Bommer, J. J., Scherbaum, F., Bungum, H., Cotton, F., Sabetta, F., and Abrahamson, N. A., 2005.
On the use of logic trees for ground-motion prediction equations in seismic-hazard analysis,
Bulletin of the Seismological Society of America 95, 377–389.
Boore, D. M., 2003. Simulation of ground motion using the stochastic method, Pure & Applied
Geophysics 160, 635–676.
Budnitz, R. J., Apostolakis, G., Boore, D. M., Cluff, L. S., Coppersmith, K. J., Cornell, C. A., and
Morris, P. A., 1997. Recommendations for Probabilistic Seismic Hazard Analysis: Guidance
on uncertainty and use of experts, NUREG/CR-6372, two volumes, U.S. Nuclear Regulatory
Commission, Washington D.C.
Campbell, K. W., 2003. Prediction of strong ground motion using the hybrid empirical method
and its use in the development of ground motion (attenuation) relations in eastern North
America, Bulletin of the Seismological Society of America 93, 1012–1033.
Coppersmith, K. J., and Bommer, J. J., 2012. Use of the SSHAC methodology within regulated
environments: Cost-effective application for seismic characterization at multiple sites, Nuclear
Engineering and Design 245, 233–240.
Cotton, F., Scherbaum, F., Bommer, J. J., and Bungum, H., 2006. Criteria for selecting and
adjusting ground-motion models for specific target regions: applications to Central Europe
and rock sites, Journal of Seismology 10, 137–156.
Crouse, C. B., 1991. Ground-motion attenuation equations for earthquakes on the Cascadia sub-
duction zone, Earthquake Spectra 7, 210–236.
Kulkarni, R. B., Youngs, R. R., and Coppersmith, K. J., 1984. Assessment of confidence intervals
for results of seismic hazard analysis, in Proceedings, Eighth World Conference on Earth-
quake Engineering, Vol. 1, International Association for Earthquake Engineering, Tokyo,
Japan, 263–270.
McGuire, R. K., Cornell, C. A., and Toro, G. R., 2005. The case for the mean hazard curve,
Earthquake Spectra 21, 879–886.
Musson, R. M. W., 2005. Against fractiles, Earthquake Spectra 21, 887–891.
Musson, R., 2012. On the nature of logic trees in probabilistic seismic hazard assessment,
Earthquake Spectra 28, 1291–1296.
Petersen, M. D., Frankel, A. D., Harmsen, S. C., Mueller, C. S., Haller, K. M., Wheeler, R. L., Wesson,
R. L., Zeng, Y., Boyd, O. S., Perkins, D. M., Luco, N., Field, E. H., Wills, C. J., and Rukstales, K. S.,
2008. Documentation for the 2008 Update of the United States National Seismic Hazard Maps, U.S.
Geological Survey Open-File Report 2008-1128, USGS, Reston, VA.
Renault, P., Heuberger, S., and Abrahamson, N. A., 2010. PEGASOS Refinement Project: An
improved PSHA for Swiss nuclear power plants, Paper No. ST4-0991, 14th European Con-
ference on Earthquake Engineering, 30 August–3 September 2010, Ohrid, Macedonia.
Runge, A., and Scherbaum, F., 2012. The quantification of consistent logic tree branch weights
for PSHA (abstract), Seismological Research Letters 83, DOI: 10.1785/gssrl.83.2.316.
Sabetta, F., Lucantoni, A., Bungum, H., and Bommer, J. J., 2005. Sensitivity of PSHA results to
ground motion prediction relations and logic-tree weights, Soil Dynamics & Earthquake Engi-
neering 25, 317–329.
Scherbaum, F., Bommer, J. J., Bungum, H., Cotton, F., and Abrahamson, N. A., 2005. Composite
ground-motion models and logic-trees: methodology, sensitivities and uncertainties, Bulletin of
the Seismological Society of America 95, 1575–1593.
Scherbaum, F., and Kuehn, N. M., 2011. Logic tree branch weights and probabilities: Summing
up to one is not enough, Earthquake Spectra 27, 1237–1251.
Scherbaum, F., Kuehn, N. M., Ohrnberger, M., and Koehler, A., 2010. Exploring the proximity of
ground-motion models using high-dimensional visualization techniques, Earthquake Spectra
26, 1117–1138.
Stepp, J. C., Wong, I., Whitney, J., Quittemeyer, R., Abrahamson, N., Toro, G., Youngs, R.,
Coppersmith, K., Savy, J., Sullivan, T., and Yucca Mountain PSHA Project Members,
2001. Probabilistic seismic hazard analyses for ground motions and fault displacements at
Yucca Mountain, Nevada, Earthquake Spectra 17, 113–151.
Strasser, F. O., Abrahamson, N. A., and Bommer, J. J., 2009. Sigma: Issues, insights, and chal-
lenges, Seismological Research Letters 80, 40–56.
U.S. Nuclear Regulatory Commission (USNRC), 2012. Practical Implementation Guidelines for
SSHAC Level 3 and 4 Hazard Studies, NUREG 2117, Washington D.C.
Van Houtte, C., Drouet, S., and Cotton, F., 2011. Analysis of the origins of κ (kappa) to compute
hard rock to rock adjustment factors for GMPEs, Bulletin of the Seismological Society of
America 101, 2926–2941.
Youngs, R., Chiou, S. -J., Silva, W. J., and Humphrey, J. R., 1997. Strong ground motion attenua-
tion relationships for subduction zone earthquakes, Seismological Research Letters 68, 58–73.
(Received 3 March 2012; accepted 17 May 2012)
