From: John Lanzante <John.Lanzante@noaa.gov>
To: santer1@llnl.gov, John Lanzante <John.Lanzante@noaa.gov>
Subject: Re: Updated Figures
Date: Sat, 12 Jan 2008 13:20:26 -0500
Reply-to: John.Lanzante@noaa.gov
Cc: Melissa Free <Melissa.Free@noaa.gov>, Peter Thorne
<peter.thorne@metoffice.gov.uk>, Dian Seidel <dian.seidel@noaa.gov>, Tom Wigley
<wigley@cgd.ucar.edu>, Karl Taylor <taylor13@llnl.gov>, Thomas R Karl
<Thomas.R.Karl@noaa.gov>, Carl Mears <mears@remss.com>, "David C. Bader"
<bader2@llnl.gov>, "'Francis W. Zwiers'" <francis.zwiers@ec.gc.ca>, Frank Wentz
<frank.wentz@remss.com>, Leopold Haimberger <leopold.haimberger@univie.ac.at>,
"Michael C. MacCracken" <mmaccrac@comcast.net>, Phil Jones <p.jones@uea.ac.uk>,
Steve Sherwood <Steven.Sherwood@yale.edu>, Steve Klein <klein21@mail.llnl.gov>,
Susan Solomon <Susan.Solomon@noaa.gov>, Tim Osborn <t.osborn@uea.ac.uk>, Gavin
Schmidt <gschmidt@giss.nasa.gov>, "Hack, James J." <jhack@ornl.gov>
After returning to the office earlier in the week, following a couple of weeks
off during the holidays, I had the best of intentions of responding to
some of the earlier emails. Unfortunately, it has taken the better part of
the week for me to shovel out from under my avalanche of email. [This has a
lot to do with the remarkable progress that has been made -- kudos to Ben and
others who have made this possible.] At this point I'd like to add my 2 cents'
worth (although with the declining dollar, I'm not sure it's worth that much
anymore) on several issues, some from earlier emails and some from the last
day or two.
The first item I'd suggest adding to Ben's earlier outline concerns the
framing of the paper. While I'm not suggesting anything beyond a short paper,
it might be possible to "spin" this in more general terms as a brief update
that addresses Douglass et al. along the way. We could begin the introduction
by saying that this general topic has been much studied and debated in the
recent past [e.g., NRC (2000), the Science (2005) papers, and CCSP (2006)],
but that new developments since those works warrant revisiting the issue.
Douglass et al. could then be treated as one of several new developments. We
could perhaps title the paper something like "Revisiting temperature trends
in the atmosphere". The main conclusion would be that, in stark contrast to
Douglass et al., the new evidence from the last couple of years has
strengthened the conclusion of CCSP (2006) that there is no meaningful
discrepancy between models and observations.
The second item that I'd suggest be added to Ben's earlier outline (perhaps
as item 5) is a discussion of the issues that Susan raised in earlier emails.
The main point is that there is now some evidence that inadequacies in the
AR4 model formulations pertaining to the treatment of stratospheric ozone may
contribute to spurious cooling trends in the troposphere.
Ben wrote:
> So why is there a small positive bias in the empirically-determined
> rejection rates? Karl believes that the answer may be partly linked to
> the skewness of the empirically-determined rejection rate distributions.
[NB: this is in regard to Ben's Fig. 3, which shows that the rejection rate
in simulations using synthetic data appears to be slightly positively biased
relative to the nominal (expected) rate.]
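As a rough illustration of how an empirical rejection rate can drift above
the nominal level, here is a minimal sketch in Python. It assumes a
paired-trends test of the Santer et al. (2000) form, with standard errors
adjusted for lag-1 autocorrelation via an effective sample size; the AR(1)
parameter, series length, and block sizes are illustrative choices, not
Ben's actual setup. Because the lag-1 autocorrelation of the residuals is
itself estimated (and tends to be underestimated) from short samples, the
adjusted standard errors come out a bit too small, which is one plausible
route to a small positive bias in the rejection rate:

    import numpy as np

    rng = np.random.default_rng(42)

    def reject_equal_trends(x, y):
        # Paired-trends test: OLS slopes, standard errors inflated for
        # lag-1 autocorrelation via an effective sample size. Returns
        # True if the null of equal trends is rejected at a nominal
        # two-sided 5% level.
        n = len(x)
        t = np.arange(n, dtype=float)
        t -= t.mean()
        sxx = np.sum(t**2)
        slopes, var_b = [], []
        for s in (x, y):
            b = np.sum(t * (s - s.mean())) / sxx          # OLS slope
            e = s - s.mean() - b * t                      # residuals
            r1 = np.corrcoef(e[:-1], e[1:])[0, 1]         # lag-1 autocorr.
            neff = max(n * (1.0 - r1) / (1.0 + r1), 3.0)  # effective n
            slopes.append(b)
            var_b.append((np.sum(e**2) / (neff - 2.0)) / sxx)
        d = (slopes[0] - slopes[1]) / np.sqrt(var_b[0] + var_b[1])
        return abs(d) > 1.96

    def ar1_series(n, rho):
        # Zero-trend AR(1) noise, so the null of equal trends is true.
        z = np.empty(n)
        z[0] = rng.normal() / np.sqrt(1.0 - rho**2)
        for i in range(1, n):
            z[i] = rho * z[i - 1] + rng.normal()
        return z

    n, rho = 240, 0.7              # e.g. 20 years of monthly anomalies
    rates = []
    for _ in range(100):           # 100 synthetic rejection rates, each
        hits = sum(                # based on 100 independent tests
            reject_equal_trends(ar1_series(n, rho), ar1_series(n, rho))
            for _ in range(100))
        rates.append(hits / 100.0)
    rates = np.array(rates)
    print("mean empirical rejection rate:", rates.mean())  # tends to sit above 0.05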
I would note that the distribution of rejection rates is like the
distribution of precipitation in that it is bounded below by zero. A
quick-and-dirty way to explore this possibility, borrowing a "trick" from
precipitation analysis, is to apply a square-root transformation to the
rejection rates, average the transformed values, and then reverse-transform
the average. The square-root transformation should yield data that are more
nearly Gaussian than the untransformed data.
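A minimal sketch of that trick follows; the gamma distribution here is just
a stand-in for a right-skewed, zero-bounded sample of rejection rates, with
parameters chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a right-skewed, zero-bounded rejection-rate sample.
    rates = rng.gamma(shape=2.0, scale=0.035, size=10_000)

    plain_avg = rates.mean()
    # Square-root transform, average in the transformed space, then
    # reverse the transformation.
    backtransformed_avg = np.sqrt(rates).mean() ** 2

    print(f"plain average:            {plain_avg:.4f}")
    print(f"back-transformed average: {backtransformed_avg:.4f}")

By Jensen's inequality (the square root is concave), the back-transformed
average is always less than or equal to the plain average, so it sits closer
to the bulk of a right-skewed sample and should damp the apparent positive
bias.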
Ben wrote:
> Figure 3: As Mike suggested, I've removed the legend from the interior
> of the Figure (it's now below the Figure), and have added arrows to
> indicate the theoretically-expected rejection rates for 5%, 10%, and
> 20% tests. As Dian suggested, I've changed the colors and thicknesses
> of the lines indicating results for the "paired trends". Visually,
> attention is now drawn to the results we think are most reasonable -
> the results for the paired trend tests with standard errors adjusted
> for temporal autocorrelation effects.
Peter also raised the point about trends being derived differently for
different datasets. To the extent possible, it would be desirable to have
things done the same way for all datasets. This is especially true for using
the same time period and the same method to perform the regression. Another
issue is the conversion of station data to area-averaged data. It's usually
easier to ensure consistency if one person computes the trends from the raw
data using the same procedures, rather than having several people provide
their own trend estimates (see the sketch below).
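A minimal sketch of what that might look like; the dataset labels are real
product names used only as labels here, and the series themselves are
synthetic placeholders for whatever the raw data would actually provide:

    import numpy as np

    def ols_trend_per_decade(series, per_year=12):
        # One regression routine, applied identically to every dataset:
        # OLS slope per time step, scaled to degrees per decade.
        t = np.arange(len(series), dtype=float)
        t -= t.mean()
        slope = np.sum(t * (series - series.mean())) / np.sum(t**2)
        return slope * per_year * 10.0

    # Placeholder monthly anomalies over a common 1979-1999 period; in
    # practice these would be computed from the raw data for each dataset.
    rng = np.random.default_rng(1)
    n = 21 * 12
    datasets = {name: 0.0015 * np.arange(n) + rng.normal(0.0, 0.2, n)
                for name in ("RSS", "UAH", "RATPAC")}

    # Same period, same method, computed in one place for all datasets.
    for name, series in datasets.items():
        print(f"{name}: {ols_trend_per_decade(series):+.3f} K/decade")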
Given the tiny sample sizes, I'm not sure one can make any meaningful
statements regarding differences between models, particularly with regard to
some measure of variability such as is implied by the width of a
distribution. This raises another issue regarding Fig. 2 -- why show the
results separately for each model? That does not seem relevant to this
project. Our objective is to show that the models as a collection are not
inconsistent with the observations -- not that any particular model is more
or less consistent with the observations. Furthermore, showing results for
different models tempts the reader to make exactly such comparisons. Why not
just aggregate the results over all models and produce a histogram (as
sketched below)? This would also simplify the figure.
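A minimal sketch of that aggregation; the model names and trend values are
placeholders, with one entry per ensemble realization:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical per-model trend estimates (K/decade).
    model_trends = {
        "model_A": [0.18, 0.21, 0.16],
        "model_B": [0.25, 0.22],
        "model_C": [0.12, 0.15, 0.19, 0.14, 0.17],
    }

    # Pool all realizations from all models into a single sample and
    # plot one histogram, rather than one panel per model.
    pooled = np.concatenate([np.asarray(v) for v in model_trends.values()])

    plt.hist(pooled, bins=8, edgecolor="black")
    plt.xlabel("Trend (K/decade)")
    plt.ylabel("Number of realizations")
    plt.title("All models pooled")
    plt.savefig("pooled_trend_histogram.png", dpi=150)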
Best regards,
_____John