
Bulletin of the Seismological Society of America, Vol. 72, No. 4, pp. 1427-1432, August 1982

COMMENTS ON "THE CORNER FREQUENCY SHIFT, EARTHQUAKE SOURCE MODELS, AND Q," BY T. C. HANKS

BY CHARLES A. LANGSTON

INTRODUCTION

Hanks (1981) has recently offered a rather broad-based criticism of selected time-domain methods for modeling earthquake sources. From the nature of and reasoning behind Hanks' arguments, I perceive that there may be several fundamental misunderstandings regarding the objectives and techniques of using point-source or various finite-source models for earthquake data. Rather than reply directly to the many qualitative arguments he has presented concerning opinions of model fits, I would like to take this opportunity to examine what I feel is good scientific methodology in the treatment of seismic sources and the resulting interpretations.

In what follows, I will present a discussion of the various objectives that different source studies address and the assumptions involved. In doing so, and in keeping with the major themes presented by Hanks (1981), I would like to demonstrate the genetic similarity between those studies which primarily employ spectral techniques for interpretation purposes and those which use time-domain techniques. It is obvious that a digitally sampled seismogram is equally well represented by its time series or by its complex spectrum; the differences are only a matter of form and not content. Similarly, the differences of opinion generated by those who work with these different data forms should be easily reducible to a common quantitative ground by recognizing this fact. Toward the end of this commentary, I would like to suggest ways whereby issues such as those brought up by Hanks (1981) may be incorporated into a more quantitative framework so that there can be some hope for their resolution.
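The equivalence referred to here is simply the invertibility of the discrete Fourier transform. For a record of N samples,

X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n/N}, \qquad x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \, e^{2\pi i k n/N},

so the time series and its complex spectrum carry identical information; information is lost only when, for example, the phase of X_k is discarded and the amplitude spectrum alone is retained.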
SOURCE MODELING

As in all other sciences, the data that are available to a seismologist do not supply the questions to be posed. It is entirely up to the scientist to construct the context in which questions may be asked. The context of questions in source studies has been built up over the years through an interplay of theoretical results and observational interpretation. Because of the unfortunate nature of trade-offs between source parameters and earth-structure parameters, much of the progress in source studies has come about only after a significant increase in knowledge about particular aspects of wave propagation in the earth. Thus, there is a large body of "fact" outside the strict domain of source modeling which involves knowledge about earth structure. The quotation marks around "fact" are simply meant to emphasize that occasionally some results are revised after further study of new data. This information derived from structural studies serves as an outside constraint available for use in source modeling.

Context in source studies is also obtained by the construction of reasonable and physical hypotheses. All seismologists would agree that the basis for such hypotheses lies in the application of Newton's laws and thermodynamics through continuum mechanics. However, and this is where the trouble occurs, not all seismologists agree on what is reasonable. This is undoubtedly due to the situation that the earth is far more complex than we let on and that all theories constructed to explain geophysical phenomena are clearly approximations.

There will always be some deficiency (commonly attributed to "noise") in every theory. Agreement on a particular theory or hypothesis in seismology usually occurs when an undefined number of people consider it most consistent with the available data in some statistical sense. This, of course, does not mean the theory is correct; at best it is only an approximation of the particular physical system.

The development of context in treatments of seismic data implicitly assumes that the seismologist has some objective in mind. After all, there is always a reason, no matter how trivial, for doing a study. The objectives of a particular modeling experiment can usually be plainly seen in the type of theory used or the hypotheses proposed to explain the data. For example, if source depth is the desired parameter of a particular earthquake study, then source depth must figure prominently as a parameter in the theory. Of course, there may be higher-order objectives which are based on interpretation of modeling parameters. These may even seem to be independent of the modeling process because of their attached importance, an example being the evaluation of earthquake risk. Unfortunately, higher-order objectives, although more noble than the first-order parameters of a source model, tend to magnify the effects of errors made at the lower level since more subjective or interpretive decisions are made.

To pursue the objectives of a source study, one ostensibly uses the scientific method: a hypothesis is proposed, an experiment is set up, and the hypothesis is tested by the experiment. If one is lucky, or the experiment is exceptionally well designed, then the hypothesis is disproved or proved consistent with the data. If the result is ambiguous, then the process is repeated, recursively, ad infinitum. In seismology and, in particular, in source studies, there are major problems with the experimental aspect of the process. The information that is commonly desired, such as exact fault geometry and slip as a function of space and time, is only crudely contained within the available data. Most seismic data, in fact, are the result of poorly designed experiments if exact source quantities are desired. The seismologist is, therefore, forced to construct passive experiments based on the availability of any data. This is usually a painstaking process, since almost everything must be known about the data in order to obtain those few extra parameters contained in the hypothesis. For example, if one wants to apply a source theory involving P and S waves, then the P and S waves must be unambiguously identified on a seismogram and their structural interactions completely known. Unfortunately, because of the earth structure-source parameter trade-offs and the underparameterization of geophysical systems, there is no certainty that assumptions involving the nature of the data are correct. Thus, these assumptions must be constantly reevaluated to be made consistent with outside constraints and the data.

In light of these comments, let us now discuss the several themes that Hanks (1981) presented. First, consider the basic methods which were discussed. There is the frequency-domain technique for determining the two parameters in the source model suggested by Brune (1970). The techniques for applying this procedure to teleseismic P and S data were given by Hanks and Wyss (1972) and to regional seismograms by Thatcher and Hanks (1973).
The basic assumptions made in these techniques were that:
1. the radiation pattern of the source can be represented by a point-shear dislocation situated in a homogeneous, isotropic whole-space;
2. geometric spreading is taken for the direct P or S body wave in the appropriate vertically inhomogeneous earth model; and
3. the data contain only the direct P or S wave.
The technique is applied by fitting a low- and a high-frequency asymptote to the amplitude spectrum of a windowed portion of the seismogram. The intersection of the asymptotes is the corner frequency, and the zero-frequency limit is related to the seismic moment of the point dislocation.
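To make the procedure just described concrete, the assumed relations can be summarized schematically. This is only a sketch of the Brune (1970) spectral shape and of the standard interpretive formulas, written without free-surface, attenuation, or instrument corrections; the numerical constants apply to the S-wave case and vary somewhat among studies. The amplitude spectrum of the windowed wave is fit with

\hat{\Omega}(f) = \frac{\Omega_0}{1 + (f/f_c)^2},

whose asymptotes, \Omega_0 at low frequency and \Omega_0 (f_c/f)^2 at high frequency, intersect at the corner frequency f_c. The long-period level is then mapped into a seismic moment through a relation of the form

M_0 = \frac{4\pi \rho c^3 R \, \Omega_0}{R_{\theta\varphi}},

where \rho is density, c the P- or S-wave velocity, R the distance, and R_{\theta\varphi} the radiation-pattern coefficient, while f_c is mapped into a source radius, stress drop, and average slip via

r = \frac{2.34\,\beta}{2\pi f_c}, \qquad \Delta\sigma = \frac{7 M_0}{16 r^3}, \qquad \bar{u} = \frac{M_0}{\mu \pi r^2}.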


What Hanks (1981) calls P and S waves are modeled separately. Indeed, each individual windowed seismogram is modeled separately, essentially giving a different time-function spectrum for each "P" and "S" wave.

Langston and Helmberger (1975) discuss a modeling method based on the assumption of a point-shear dislocation in a homogeneous, isotropic, plane-layered medium. An approximate method for computing the teleseismic response of P and S waves from a point dislocation was presented which assumed that:
1. the effects of structure near the source and near the receiver are separable; and
2. geometric spreading is taken for the direct P or S body wave in the appropriate vertically inhomogeneous earth model.
To apply these methods, one has to determine the appropriate structure model from outside constraints or, more dangerously, recursively. Synthetic seismograms are then computed for a source model and compared directly with the time-domain data. Although a point-source model was stressed in the paper for simplicity, it was mentioned that finite sources could be built up by summing concatenated arrays of point-source solutions.

The connection between the two methods is clear. Both use solutions for a point-shear dislocation at their core. Indeed, I am totally at a loss to understand Hanks' (1981) obvious dislike of point-source methods, even in the expedient of treating the P and S waves separately as he seems to prefer. The methods outlined by Langston and Helmberger (1975) can be considered more realistic and helpful in source modeling, since assumptions concerning the structure response have to be explicitly included in the calculations. Comparison of the synthetics with features of the data ensures that these assumptions are constantly evaluated. In contrast, spectral methods, as commonly applied, do not have any consistency checks on assumptions, since the phase is ignored. High- and low-frequency asymptotes can be fit to almost any band-limited data regardless of wave type.

Hanks (1981) also seems to disagree with a basic tenet of modeling philosophy. He makes the rather rigid assertion that even if a point-source model fits the available data, it still has to be wrong. As implied above, all geophysical models are ultimately incorrect because they are underparameterized. If a simply parameterized model, such as a point source, fits the data just as well as a more complex model, then there are obvious problems with the parameter resolution of the complex model. Both models are equally "correct," but we are led to disbelieve or ignore the meaning of the more complex of the two. In other words, the complex model is not needed to explain the data. It will not yield any new information that is not already contained within the simple model. This is why point-source models are extensively used. If an earthquake can be approximated sufficiently well by a point source, there is no justification for adding new parameters. Often, however, it is hard to judge what "sufficiently well" is. If deviations from the point source are systematic, these deviations may indicate the type of new parameterization to try, e.g., going to a finite propagating source.
More often than not, the deviations in the data are more complex than simple models would allow, including the simple expansion of the S-wave pulse relative to P as Hanks (1981) would like.


Upon reaching this point, the modeler usually gives up, having to reconcile the deviations with his lack of knowledge of other parameters.

The various modeling techniques often have differing objectives. The primary objectives of the frequency-domain technique are clear; there are only two parameters to invert. Determined for each wave train, the corner frequency can be used to interpret other quantities such as stress drop or fault radius, and the point-source moment can yield the average slip. The time-domain technique, on the other hand, is totally general. Indeed, because the method is general, it is just as easy to include the Brune (1970) point source as any other approximate parameterization of the far-field time function. Since source and structure are treated together, there is the possibility that source and/or structure parameters can be investigated. In addition to estimates of the duration of the source-time function, the many teleseismic waveform studies critiqued by Hanks (1981) were also performed to obtain estimates of source orientation and source depth. Although there are always trade-offs involved among these parameters, it has usually been my experience that small variations in time-function width do little to the estimates of the other parameters. It has been shown in several studies (e.g., Burdick and Mellman, 1976) that the precise choice of source model (point or finite) does not affect these other parameters.

This brings us to the interpretation of modeling results and how to discuss them. Hanks (1981) rather forcibly argues that the S-wave corner frequency shift (S waves having lower corner frequencies than P waves) is a general observation of most spectral studies and can easily be seen in the time domain as well. Upon analysis, I find that this tenet is generally ambiguous and strongly tied to the interpretive technique used to formulate the statement. First, we have to ask what is meant by "P" and "S" waves. Without a doubt, the wave trains which are called "S" by Hanks and others in the many frequency-domain interpretations are often clearly longer period than those called "P." Amplitude spectra computed from these wave trains clearly show this. To obtain a uniformly good estimate of corner frequency, assuming the Brune (1970) model is sufficient, the structure response for the P and S waves must either be composed of a single delta function or a statistically random set of arrivals which yields a white spectrum. In the first case, the original modeling assumption of a homogeneous whole-space is correct, and in the second, a random scatterer of a particular kind is invoked (the random scatterer must also preserve the same power level for the signal as if the arrival were the simple direct wave). It has been shown in several studies concerning structure effects for shallow earthquakes that these basic assumptions can be severely violated, yielding moments and corner frequencies unrelated to the source model or yielding biases in the frequency content between P and S wave trains (Helmberger, 1974; Helmberger and Malone, 1975; Heaton and Helmberger, 1978; Langston, 1978a). The problem comes about because what are called "P" and "S" waves (thinking of the simple whole-space ideal) are actually composed of a complex series of interfering arrivals. Thus, it would seem advantageous to carefully consider the structure effects before modeling the source.
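The argument of the preceding paragraphs can be stated compactly; the notation here is illustrative rather than that of any of the papers cited. A teleseismic record is, schematically, the convolution

u(t) \approx s(t) * g(t) * q(t) * i(t),

of the far-field source time function s(t), the layered-structure response g(t) (including near-source and near-receiver interactions), an attenuation operator q(t), and the instrument response i(t); in the frequency domain this factors as |U(f)| = |S(f)|\,|G(f)|\,|Q(f)|\,|I(f)|. The time-domain studies model g(t) explicitly, whereas a corner frequency or long-period level read directly from |U(f)| characterizes the source alone only if the non-source factors are effectively flat over the band of interest, that is, only if the structure response is a single delta-like arrival or is spectrally white. A reverberative g(t) shifts the apparent corner and biases the moment, which is precisely the effect documented in the studies cited above.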
In those spectral studies where the source is deep enough, or the receiver is close enough, that the uncontaminated far-field P- and S-wave arrivals can be clearly distinguished, the spectral method may be appropriate. Hanks (1981) has taken selected figures from several time-domain studies to seemingly prove his point that S waves have lower corner frequencies than P waves. I find it difficult to understand Hanks' method of determining corner frequency for a wave train in the time domain. I suspect he is confusing high-frequency content or phase effects with corner frequency, which are not at all synonymous.


It is also not clear how the concept of corner frequency, as derived from Brune's (1970) model, applies to complex geometries and rupture models such as Langston's (1978b) San Fernando model. His presentation does point out a major problem in communication, however: how can results be shown and compared? In the many time-domain studies, some of which were discussed by Hanks (1981), the usual technique is to construct a deterministic source model and compute synthetic seismograms to compare with windowed data. The quality of fit is usually defined within the context of the study. For example, a "good" fit may only consist of having wave polarities be consistent, or it may consist of a quantitative objective function which is minimized in an inversion. Likewise, deviations from the fit are also discussed within the context of the study. Hanks (1981) points out what he considers deviations in the source models used in several time-domain studies. However, there is no room for meaningful discussion because of a lack of agreement on context. On the one hand, Hanks (1981) shows what he considers to be the deviations and suggests in a qualitative way that the problems must be in the source model. On the other hand, the time-domain modeling studies employ specific and quantitative source and structure models in which source effects interact strongly with structure effects. It is not so clear where the deviations of fit lie. In other words, to determine whether a hypothesis is at least consistent with the data, one has to test it as rigorously as possible. The context of a dialogue between proponents of one model over another lies in a quantitative comparison of the same data in the same form. Hanks (1981) may be entirely correct in his assessment of deficiencies in point-source models, but he has not presented any useful quantitative evidence to demonstrate his point.
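To indicate what a quantitative comparison of the same data in the same form might look like in practice, here is a minimal sketch in Python of two simple fit measures of the kind minimized or maximized in waveform inversions: a normalized least-squares misfit and a peak normalized cross-correlation. The arrays observed and synthetic are hypothetical, and this is not the particular error function used by Burdick and Mellman (1976).

import numpy as np

def waveform_fit(observed, synthetic, dt):
    """Compare an observed and a synthetic waveform sampled at interval dt (s).

    Returns (l2_misfit, peak_correlation, lag_seconds): a normalized
    least-squares misfit (0 for a perfect fit), the peak of the normalized
    cross-correlation (1 for identical shapes), and the lag at which the
    correlation peaks.
    """
    obs = np.asarray(observed, dtype=float)
    syn = np.asarray(synthetic, dtype=float)
    n = min(len(obs), len(syn))          # compare over a common window
    obs, syn = obs[:n], syn[:n]

    # Normalized least-squares misfit of the aligned records.
    l2_misfit = np.sum((obs - syn) ** 2) / np.sum(obs ** 2)

    # Normalized cross-correlation over all lags; its peak measures waveform
    # similarity independent of absolute amplitude.
    xcorr = np.correlate(obs, syn, mode="full")
    xcorr /= np.sqrt(np.sum(obs ** 2) * np.sum(syn ** 2))
    k = int(np.argmax(xcorr))
    lag_seconds = (k - (n - 1)) * dt     # zero lag corresponds to index n - 1

    return l2_misfit, xcorr[k], lag_seconds

Whether one prefers a least-squares measure, a correlation measure, or simple polarity checks is itself part of the context of a study; the point is only that the measure be stated explicitly so that competing models can be compared on the same footing.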
A prime example of this disparity is Hanks' (1981) discussion of results concerning the 1968 Borrego Mountain earthquake. In it, he basically attacks some underlying assumptions made in the studies performed by Helmberger (1974) and Burdick and Mellman (1976). Challenging assumptions is always a good way to learn something new, especially when it comes to the basic ambiguities inherent in source modeling, and every study should certainly be scrutinized carefully. However, in this case, Hanks (1981) has been somewhat less than careful in his presentation of the data, comparing P waveforms of the Borrego Mountain and El Golfo earthquakes at only two stations (his Figure 5). The two stations that he has chosen are located very near P-wave nodes for the Borrego Mountain event and do not demonstrate the effects of the P wave [as Burdick and Mellman (1976) interpret it] changing polarity and amplitude. In any case, his multiple-source idea is an easily testable one. I would prefer to disparage a model by first presenting results. These comments are especially applicable to Hanks' (1981) attenuation discussion. Burdick (1982) discusses these points in satisfying detail.

CONCLUSIONS

I hope this commentary has helped toward the resolution of the issues Hanks (1981) has brought up. The treatment of the seismic source is difficult and fraught with many problems. Indeed, the comments of Hanks (1981) notwithstanding, I have always found that the data never behave perfectly and that there are always unexplainable deviations from theory in every study. However, I firmly believe that progress in the field will not come about by proposing qualitative arguments for one model over another. Many of the important assumptions made in all source studies are testable, and testable in a quantitative way.


Most of the disagreements I have with Hanks (1981) seem to lie in the basic philosophy one follows to model seismic sources. Because it is impossible to be aware of every factor or parameter which may be important in explaining the source, it seems reasonable to take a conservative approach in modeling. I believe that it is good practice to process the seismic data as little as possible and to construct deterministic models which are as simple as possible in an effort to fit the data. The effect of structure assumptions will then be plainly visible, so that decisions on source parameters can be made more unambiguously. In particular, the use of point sources in seismic modeling serves as an excellent starting point for the modeling of any event. Higher-order effects such as finiteness or directivity are much more difficult to assess in the seismic data. Usually, these effects also overparameterize the problem for the available data, so that uniqueness problems become more severe.

REFERENCES

Brune, J. N. (1970). Tectonic stress and spectra of seismic waves from earthquakes, J. Geophys. Res. 75, 4997-5009.
Burdick, L. J. (1982). Comments on "The corner frequency shift, earthquake source models, and Q," by T. C. Hanks, Bull. Seism. Soc. Am. 72, 1417-1424.
Burdick, L. J. and G. R. Mellman (1976). Inversion of the body waves from the Borrego Mountain earthquake to the source mechanism, Bull. Seism. Soc. Am. 66, 1485-1499.
Hanks, T. C. (1981). The corner frequency shift, earthquake source models, and Q, Bull. Seism. Soc. Am. 71, 597-612.
Hanks, T. C. and M. Wyss (1972). The use of body-wave spectra in the determination of seismic-source parameters, Bull. Seism. Soc. Am. 62, 561-589.
Heaton, T. H. and D. V. Helmberger (1978). Predictability of strong ground motion in the Imperial Valley; modeling the M4.9, November 4, 1976 Brawley earthquake, Bull. Seism. Soc. Am. 68, 31-48.
Helmberger, D. V. (1974). Generalized ray theory for shear dislocations, Bull. Seism. Soc. Am. 64, 45-64.
Helmberger, D. V. and S. D. Malone (1975). Modeling local earthquakes as shear dislocations in a layered half space, J. Geophys. Res. 80, 4881-4888.
Langston, C. A. (1978a). Moments, corner frequencies, and the free surface, J. Geophys. Res. 83, 3422-3426.
Langston, C. A. (1978b). The February 9, 1971 San Fernando earthquake: a study of source finiteness in teleseismic body waves, Bull. Seism. Soc. Am. 68, 1-29.
Langston, C. A. and D. V. Helmberger (1975). A procedure for modelling shallow dislocation sources, Geophys. J. 42, 117-130.
Thatcher, W. and T. C. Hanks (1973). Source parameters of southern California earthquakes, J. Geophys. Res. 78, 8547-8576.
DEPARTMENT OF GEOSCIENCES
PENNSYLVANIA STATE UNIVERSITY
UNIVERSITY PARK, PENNSYLVANIA 16802

Manuscript received September 14, 1981
