
CHAPTER I

INTRODUCTION

Motivation of the study

It is the dream of every researcher and forecaster to issue a forecast that verifies perfectly. The India Meteorological Department (IMD) issues its Long Range Forecast based on several parameters. This forecast is derived essentially from statistical techniques, whose scale is synoptic and which applies at the all-India level.

India is an agricultural country, with about 64% of its population living in villages. The Indian community generally regards rainfall as the most important meteorological parameter affecting its economic and social activities. Farmers depend on the monsoon rain and are invariably anxious in April and May about the prospects of the coming monsoon. Owing to its peculiar structure, forecasting the Indian monsoon is a very difficult task. India, bounded by oceans and by the great Himalayan mountains, experiences both the advantages and the disadvantages of this geographical position.

Over the middle and higher latitudes forecasting is not as complicated as it is over the tropical region. For short- as well as medium-range forecasting, numerical weather prediction models are used all over the globe. These models require the meteorological parameters at regular grid intervals. Since numerical weather prediction is both an initial and a boundary value problem, it requires the best possible initial data.

The basic aim of this study is to develop various schemes of objective analysis over India and the adjoining region. The schemes used at different research and operational centres have been suitably modified for the Indian and adjoining region. These schemes are tested, or are being tested, for various meteorological parameters, viz. geopotential height, mean sea level pressure, temperature, moisture, winds and relative humidity. The merits and demerits of the different schemes are discussed, and depending upon the synoptic situation and the availability of data, suitable schemes are chosen to perform the analyses.

1.1 Introduction

Media dissemination of weather forecasts to the public is the final step in a complex process. The first step is to collect all the atmospheric observations from the entire globe for a given time. Second, these observations are diagnosed or analysed to produce a regular, coherent spatial representation of the atmosphere at that time. Third, this analysis becomes the initial condition for the time integration of a numerical weather prediction model based on the governing differential equations of the atmosphere. Finally, the numerical weather prediction is used by a human forecaster as the basis for the public forecast (Daley, 1991).

Numerical prediction models store information about the atmosphere on a regular grid. The forecast given by these models is based on initial conditions in the form of gridded analyses of current weather conditions. With the advent of sophisticated computers, numerical weather prediction became of practical utility, as meteorological parameters could be estimated at points on a regular grid. There are various methods of describing the current atmospheric conditions as a first step in running a numerical forecast. These methods involve an objective analysis of the observations, but many go a step beyond, to what is called four-dimensional data assimilation.

The best analysis of the meteorological variables should not only fit the data but should also provide initial fields that give rise to the best possible forecast by a given forecast model. An analysis that fits very closely with accurately measured observations need not give rise to the best forecast, because of limitations of the grid size and the unrepresentativeness of the observations (however accurately measured) with respect to the scales of the meteorological systems that the given prediction model is capable of resolving.

Hence, the objective of any data assimilation scheme is to provide a sequence of three-dimensional fields of the dependent variables which satisfy the dynamical relationships between them as well as the physical laws expressed by the basic equations governing atmospheric motions. This was first recognized by Charney et al. (1969), who suggested that current and past data be combined in an explicit dynamical model such that the model's prognostic equations provide time continuity and dynamical coupling among the various fields. This concept has come to be known as four-dimensional data assimilation.

There are various definitions of objective analysis in the literature. A few of them are mentioned below.

1.2 Various definitions of Objective Analysis

1. It is a programmable method for estimating meteorological parameters (winds, temperatures, etc.) at points where no observations exist. Given a set of meteorological observations unevenly distributed in space and time, objective analysis interpolates those data onto a regular grid of points.
2. An analysis that is free from any direct subjective influences resulting from human experience, interpretation or bias.
3. The process of interpolating data in an objective way from observations at irregularly spaced observing stations to regularly spaced grid points at equal intervals of time.

In general, an objective analysis scheme should perform several functions, viz. interpolation, removal of erroneous data and smoothing, and in most applications it should contain some method of ensuring internal consistency.

1.3 Different schemes of Objective Analysis

Objective analyses are invaluable in observational studies of the atmospheric general circulation. Many important components of the atmospheric energy, heat, moisture and chemical constituent budgets, such as boundary fluxes, generation, dissipation and conversion terms, can be calculated from objective analyses.

The purpose of objective analysis is to provide gridded estimates of variables from observations that are irregularly distributed in space and/or time. There are several reasons why gridded fields may be desired. One reason is the simple purpose of displaying and contouring the data; most contouring packages require gridded input. In addition, some of the simpler analysis schemes used today (e.g. Cressman, 1959; Barnes, 1964 and 1973) have predictable and controllable response characteristics that allow unwanted scales to be suppressed or virtually eliminated from the analysis. A second reason is that gridded data are required by the widely used techniques for diagnostic calculations that involve derivative estimates.

The third and most important reason for objective analysis is to provide the initial conditions for a numerical forecast model. Although more sophisticated techniques are popular for today's operational and research models (e.g. optimum interpolation, Gandin, 1963, and variational methods, Sasaki, 1958), simple objective analysis schemes are still used on occasion in the research environment because of their simplicity and efficiency. Lorenc (1986) rightly states that no method for obtaining gridded estimates of a meteorological variable is ideal in all respects. For some purposes the sophistication of optimum interpolation or variational schemes may be desired; for others the simplicity of traditional successive correction schemes may be preferred. For example, the use of sophisticated schemes that involve balance constraints may be more appropriate than simpler schemes when the analysis acts as the initial condition for a numerical forecast model: unless a suitable balance exists between the initial mass and motion fields, the generation of high-frequency gravity-inertial waves within the model may render the forecast useless. On the other hand, simple successive correction schemes may be quite satisfactory, or even preferable, for diagnostic studies.

Owing to the proven benefits of simple successive correction objective analysis schemes for meteorological studies, two schemes have remained popular among diagnosticians for decades: the Cressman (1959) and Barnes (1964, 1973) analysis schemes. These schemes have been used to analyse all sorts of meteorological data, including surface, rawinsonde, satellite, profiler, radar and aircraft data. Both schemes assign grid point values based on the distance from the grid point to each member of the set of observations; nearby observations affect a grid point value more than distant ones. The schemes are typically iterated until the analysis converges, to some extent, to the observations. Barnes (1994), however, showed that the goodness of fit to the observations is a poor measure of the quality of an analysis.

Schemes such as the Cressman, Optimum Interpolation (OI), variational and Barnes schemes have been established all over the globe at various operational as well as research forecasting centres. The work presented here comprises modified versions of the OI, Cumulative Semivariogram, variational and Barnes schemes over India and the adjoining region. The performance of these schemes for various meteorological parameters, viz. height, wind, temperature, rainfall etc., has been tested over the Indian region. Since the prediction of weather is a very difficult and challenging task over the tropics, and in particular over the Indian region, these schemes have to be modified according to the regional constraints. As already noted with reference to various authors, it is very difficult to advocate any particular objective analysis scheme, since each has its own merits and demerits. In the next section some of the schemes are discussed briefly to give an idea of how the development of analysis schemes took place. The subsequent sections describe the most popular methods for objectively analysing meteorological observations.

1.4 Commonly used analysis techniques

Three major objective analysis techniques are presented in this section:

• Surface fitting methods. A mathematical surface (two-dimensional function) that fits the irregularly spaced observations is determined in the region of interest. From the function defining the mathematical surface, one can then compute values of the parameters at grid points where observations are not available.
• Empirical linear interpolation of observed values to grid points.
• Statistical objective analysis. A method for estimating meteorological parameters at grid points, in which correlations among the analysed variables determine the weights applied to the observations.

These techniques are briefly described below.

1.4.1 Surface Fitting

The earliest attempts to analyse meteorological data involved fitting surfaces to the data. The surfaces were defined by means of two-dimensional polynomials, one for each sub-region. The surfaces were joined smoothly at the edges of each sub-region.

Panofsky (1949) made the first attempt to fit mathematical functions to two-dimensional meteorological data. He arbitrarily divided his map of reports into sub-regions, each containing 10-20 observations. In each sub-region, Panofsky fitted the observations with a third-degree polynomial. Using a least squares minimization, he sought a good fit between the analysed and observed height fields; moreover, he sought to minimize the angle between the observed wind and the analysed height gradient. The coefficients were assumed to vary smoothly from one sub-region to another; they were also expressed as polynomials in x and y, with the degree depending upon the number of sub-regions. In this way, discontinuities were avoided at the boundaries between regions.
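To make the idea concrete, the following is a minimal sketch (not Panofsky's actual formulation) of least squares fitting of a low-order two-dimensional polynomial surface to scattered observations and its evaluation on a regular grid; the function names, the cubic degree and the synthetic data are illustrative assumptions only.

```python
import numpy as np

def fit_polynomial_surface(x, y, z, degree=3):
    """Least squares fit of a 2-D polynomial surface z ~ f(x, y) to scattered observations."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]  # monomials with i + j <= degree
    A = np.column_stack([x**i * y**j for i, j in terms])    # design matrix at the observation points
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)          # minimize ||A c - z||^2
    return coeffs, terms

def evaluate_surface(coeffs, terms, xg, yg):
    """Evaluate the fitted surface at the grid points (xg, yg)."""
    return sum(c * xg**i * yg**j for c, (i, j) in zip(coeffs, terms))

# Illustrative use: 15 scattered "height" observations interpolated onto a regular grid.
rng = np.random.default_rng(0)
x_obs, y_obs = rng.uniform(0.0, 10.0, 15), rng.uniform(0.0, 10.0, 15)
z_obs = 5800.0 + 2.0 * x_obs - 1.5 * y_obs + rng.normal(0.0, 1.0, 15)   # synthetic heights
coeffs, terms = fit_polynomial_surface(x_obs, y_obs, z_obs, degree=3)
xg, yg = np.meshgrid(np.arange(0.0, 11.0), np.arange(0.0, 11.0))
z_grid = evaluate_surface(coeffs, terms, xg, yg)                        # gridded estimates
```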

Methods of surface fitting


• Ordinary least squares fitting
• Weighted least squares fitting
• Splines: Although polynomials of sufficiently high degree can be made to pass
through all the data points, the resulting surfaces are not the smoothest possible.
Since most meteorologists prefer the smoothest possible analysis consistent with
the data, the use of splines has become more popular.

Advantages and Disadvantages of Surface Fitting

Surface fitting is an attractive method for small numbers of observations, especially when the observing network is fixed. No background (first guess) field is required. It is possible to account for observational error. However, there are several disadvantages:

• No incorporation of information from a background field is possible. Thus, meteorological knowledge about the situation is ignored except when variational constraints are imposed.
• One must be wary of underfitting, overfitting or using the wrong set of functions. If the data are underfitted (not enough functions in the polynomial expansion), important details resolved by the data may be lost in the analysis. If the data are overfitted (too many functions), variability in the analysis may have no meteorological significance, and gradients between observing sites may be completely unrealistic.
• In data-sparse areas or outside the domain of the observations, surface fitting can lead to implausible functional values.
• Surface fitting is computationally expensive when large numbers of observations are considered. In some cases, the problem is ill-conditioned (numerically unstable).

1.4.2 Empirical Linear Interpolation

Another class of analysis methods, labelled "empirical linear interpolation", was popular from the late 1950s to about 1980. Easy to understand and computationally undemanding, this approach was attractive to meteorologists. Bergthorsson and Döös (1955) were the first to introduce the method. They obtained a gridded analysis of the height field, which relied upon many different sources of information: 1) the background height field from a model forecast, 2) climatology and 3) height and wind observations.

Bergthorsson and Döös were also the first to suggest that both climatology and models provide information useful for grid point estimates, and the first to make a crude allowance for the distribution of observations (dense versus sparse).

In the following sections, the Cressman and Barnes methods of empirical linear interpolation are discussed.

1.4.3 Cressman Analysis

Cressman (1959) introduced an interpolation method which corrects the background grid point value (obtained from a forecast model) by a linear combination of the residuals (corrections) between the predicted and observed values. The residuals are weighted depending only upon the distance between the grid point and the observation. The scheme begins with a background field from a numerical forecast. The background value at each grid point is successively adjusted on the basis of nearby observations in a series of scans (usually four to six) through the data. The radius of influence (the size of the circle containing the observations which influence the correction) is reduced on successive scans in order to build smaller-scale information into the analysis where the data density supports it; a schematic sketch of this procedure is given at the end of this subsection. The advantages of the Cressman scheme made it very popular in the 1960s and 1970s:

• The method is simple and computationally fast. (The speed depends upon the
number of scans.)
• The method incorporates forecast information in the background field. (The
forecast is the source of the first guess.)
• The results are generally pleasing.

The disadvantages are:

• The Cressman scheme is not well suited for diverse observations because
observational error is not accounted for.
• It does not account for the distribution of observations relative to each other.

• The scale (detail) of the result varies with observation density.
• There is no obvious way to analyse the wind field based upon height observations.
• Optimum scan radii have to be determined by trial and error.
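The following is a minimal sketch of a Cressman-type successive correction analysis of the kind described above, assuming the classical weight function w = (R² − r²)/(R² + r²) within the radius of influence R; the grid handling, scan radii and function names are illustrative assumptions, not an operational implementation.

```python
import numpy as np

def cressman_analysis(grid_x, grid_y, background, obs_x, obs_y, obs_val,
                      radii=(5.0, 3.0, 2.0, 1.0)):
    """Cressman-type successive correction of a background field of shape (len(grid_y), len(grid_x)).

    On each scan the grid value is adjusted by a distance-weighted mean of the residuals
    (observation minus current analysis), using w = (R^2 - r^2) / (R^2 + r^2) for r < R.
    """
    analysis = background.astype(float).copy()
    for R in radii:                                   # radius of influence shrinks on successive scans
        # Residuals at the stations: observation minus nearest-grid-point analysis value.
        ii = np.abs(grid_x[:, None] - obs_x).argmin(axis=0)
        jj = np.abs(grid_y[:, None] - obs_y).argmin(axis=0)
        residual = obs_val - analysis[jj, ii]
        correction = np.zeros_like(analysis)
        for j, yg in enumerate(grid_y):
            for i, xg in enumerate(grid_x):
                r = np.hypot(obs_x - xg, obs_y - yg)
                inside = r < R
                if inside.any():
                    w = (R**2 - r[inside]**2) / (R**2 + r[inside]**2)
                    correction[j, i] = np.sum(w * residual[inside]) / np.sum(w)
        analysis += correction                        # corrected field becomes the new approximation
    return analysis
```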

1.4.4 Barnes Analysis

First proposed by Barnes (1964), the analysis which bears his name has probably replaced the Cressman analysis as the most-used method of empirical linear interpolation. It forms the basis of GEMPAK (GEneral Meteorology PAcKage), a widely used facility for objective analysis and diagnostic calculations developed at the NASA Goddard Space Flight Centre (Koch et al., 1983). The Barnes method is still popular with researchers who wish to analyse observations but do not have access to a numerical model. It is based on a linear combination of the observations themselves, and the analysis usually involves two passes through the data. Since the weight approaches zero asymptotically, there is no need to specify a radius of influence; however, the number of observations used should be chosen large enough that the datum most distant from the grid point receives only a small weight.
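A minimal two-pass sketch of a Barnes-type analysis is given below, assuming a Gaussian weight exp(−r²/κ) and a second pass applied to the residuals with the length-scale parameter reduced by a convergence factor γ; the parameter values and function names are illustrative assumptions only.

```python
import numpy as np

def barnes_pass(xg, yg, obs_x, obs_y, values, kappa):
    """Single Barnes pass: Gaussian distance-weighted mean of `values` at one grid point."""
    r2 = (obs_x - xg)**2 + (obs_y - yg)**2
    w = np.exp(-r2 / kappa)            # weight only approaches zero, so no cutoff radius is required
    return np.sum(w * values) / np.sum(w)

def barnes_analysis(grid_x, grid_y, obs_x, obs_y, obs_val, kappa=4.0, gamma=0.3):
    """Two-pass Barnes analysis: a first pass on the observations, then a second pass
    on the station residuals with the sharpened length scale gamma * kappa."""
    first = np.array([[barnes_pass(x, y, obs_x, obs_y, obs_val, kappa)
                       for x in grid_x] for y in grid_y])
    # Residuals at the stations after the first pass.
    at_obs = np.array([barnes_pass(x, y, obs_x, obs_y, obs_val, kappa)
                       for x, y in zip(obs_x, obs_y)])
    residual = obs_val - at_obs
    second = np.array([[barnes_pass(x, y, obs_x, obs_y, residual, gamma * kappa)
                        for x in grid_x] for y in grid_y])
    return first + second
```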

The Barnes analysis has several advantages over the Cressman scheme:

• The degree of smoothing in the analysis can be predetermined without experimentation; small-scale irregularities can be suppressed.
• There is no need to set an influence radius.
• Only two passes are necessary.
• No background field (first guess) is required. Therefore analysis can be performed
without the use of a model.
• Time weighting of observations is possible.

The disadvantages are the same as those for the Cressman analysis with the
exception of control of fine-scale analysis. However, all the objective analysis methods
mentioned above have the following common drawbacks:

• They are rather mechanical, without any physical foundation, relying instead on the regional configuration of irregular sites. Any change in the site configuration leads to different results, although the same phenomenon is sampled.
• They do not take into consideration the spatial covariance or correlation structure within the meteorological phenomena concerned.
• They have a constant radius of influence without any directional variations. Hence, spatial anisotropy of the observed fields is ignored. Although some anisotropic distance function formulations have been proposed by Inman (1970) and Shenfield and Bayer (1974), all of them were developed with no explicit quantitative reference to the anisotropy of the observed field structure of the meteorological event.

1.4.5 Optimum Interpolation

The last of the older techniques for four-dimensional data assimilation is a statistical approach called Optimum Interpolation (OI). Gandin (1963) developed and popularized this least squares technique in meteorology, though Eliassen (1954) was the first to suggest it. OI, or variations of it, is still used at major numerical prediction centres around the world.

In OI the weights are chosen so as to minimize the mean square error of the analysis. Without burdening ourselves with the mathematics involved, we can illustrate the working of OI by considering several simple examples. In each example we assume that 1) only one variable is analysed, 2) observations of this variable are made independently at different locations and 3) the background error (the error in the model forecast) is spatially uniform. First, consider a single observation made at the location of a grid point. OI says that the best analysed value is a linear combination of the observed and predicted (background) values, with the weights proportional to the inverse of the respective error variances; that is, the smaller the error variance of a value, the greater the weight given to that value (a numerical illustration of this case is given at the end of this subsection). The advantages of OI include several that are shared by other techniques. For example:

• Differentiation among observing systems and incorporation of error information
specific to each
• Ability to estimate one variable from observations of another
• Use of the analysis method for quality control of the observations

The following advantages are specific to optimum interpolation:

• Relative weighting of observations and background consistent with the performance history of the assimilating model
• Estimates of analysis error produced as a function of the distribution and accuracy of the data

The disadvantages of this method are:

• Computationally more expensive than other commonly used methods
• Scale-dependent correlation models require a long history of numerical forecasts for accurate determination of empirical coefficients
• Not designed for best performance during extreme events
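The single-observation case described above can be illustrated numerically as follows; all values are invented for illustration and do not come from any particular analysis system.

```python
# Single-observation illustration of the OI weighting described above (values are illustrative).
sigma_b2 = 4.0      # background (forecast) error variance
sigma_o2 = 1.0      # observation error variance
x_b = 5820.0        # background height value at the grid point
y_o = 5828.0        # observed height at the same location

# Weights proportional to the inverse of the respective error variances.
w_o = (1.0 / sigma_o2) / (1.0 / sigma_b2 + 1.0 / sigma_o2)   # = sigma_b2 / (sigma_b2 + sigma_o2) = 0.8
x_a = x_b + w_o * (y_o - x_b)                                 # analysed value
sigma_a2 = 1.0 / (1.0 / sigma_b2 + 1.0 / sigma_o2)            # analysis error variance

print(f"analysed value = {x_a:.1f}, analysis error variance = {sigma_a2:.2f}")
# With these numbers the analysis (5826.4) lies closer to the more accurate observation,
# and its error variance (0.80) is smaller than either input variance.
```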

1.4.6 Variational Objective Analysis

The development of the variational calculus in the seventeenth and eighteenth centuries was motivated by the need to find the largest or smallest values of rapidly varying quantities. The appeal of variational procedures is that they consider a system as a whole and do not deal explicitly with the individual components of the system. Thus, in principle, it is possible to derive the behaviour of a system without knowing the details of all the interactions among the various subcomponents. The variational calculus has been employed to advantage in several branches of atmospheric science. In particular, it has stimulated both the theory and the practice of objective analysis and initialization.

Among the many kinds of objective analysis methods, at least two produce analysed data that are in balance. One is the Multivariate Optimum Interpolation (MOI) method and the other is numerical variational objective analysis, first formulated by Sasaki (1958), who applied the calculus of variations by subjecting the meteorological variables to dynamical constraints. The variational optimization technique has long been a common tool in solid mechanics. After Sasaki's initiative, however, the technique attracted many research workers, who used it for the initialization and adjustment of meteorological fields. The basic approach of this method is to optimize a functional over a domain under certain constraints; the constraints may be diagnostic or prognostic relations.
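As a generic illustration of the variational approach (in the spirit of the later three-dimensional variational schemes rather than Sasaki's original constrained formulation), the sketch below minimizes a quadratic cost function measuring the weighted departure of the analysis from a background state and from the observations; the matrices, dimensions and values are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

def variational_analysis(x_b, B_inv, y, H, R_inv):
    """Minimal variational sketch: minimize the quadratic cost
    J(x) = (x - x_b)^T B^-1 (x - x_b) + (H x - y)^T R^-1 (H x - y),
    i.e. a weighted compromise between the background x_b and the observations y."""
    def cost(x):
        db = x - x_b
        do = H @ x - y
        return db @ B_inv @ db + do @ R_inv @ do

    def grad(x):
        return 2.0 * B_inv @ (x - x_b) + 2.0 * H.T @ R_inv @ (H @ x - y)

    result = minimize(cost, x_b, jac=grad, method="L-BFGS-B")
    return result.x

# Illustrative use: a 3-point "state" analysed from 2 observations (all numbers invented).
x_b = np.array([10.0, 12.0, 14.0])                  # background state
B_inv = np.linalg.inv(np.diag([4.0, 4.0, 4.0]))     # inverse background error covariance
H = np.array([[1.0, 0.0, 0.0],                      # observation operator: obs of points 1 and 3
              [0.0, 0.0, 1.0]])
y = np.array([11.0, 13.0])                          # observations
R_inv = np.linalg.inv(np.diag([1.0, 1.0]))          # inverse observation error covariance
x_a = variational_analysis(x_b, B_inv, y, H, R_inv)
```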

The schemes discussed so far have been used extensively by various researchers over the globe at different research and operational laboratories. A more recent development is data assimilation.

1.5 Data Assimilation

Data assimilation is a more sophisticated kind of objective analysis in which a numerical prediction model is combined with observations to give a four-dimensional estimate of the parameters consistent with the constraints of the prediction model. Ideally, data assimilation should find the model state which, in some mathematical sense, most closely fits the observed data. The model provides the time dimension: it can be used to extrapolate the atmospheric state forward from one analysis time to the next.

Using a forecast model in four-dimensional data assimilation has several advantages. The model provides:

• A first guess or background field for each analysis. If few observations are available at analysis time, a short-term forecast, normally of a few hours' duration, can fill the voids between observations. A model background field is usually superior to one provided by simple persistence or climatology.
• Dynamical consistency between mass and wind fields
• Advection of information into data-sparse regions

• Temporal continuity of the fields

The purpose of data assimilation is to provide the initial conditions that will produce the best possible model forecast, not to mimic the details contained in a carefully drawn hand analysis. It must therefore create an analysis consistent with the model numerics, dynamics, physics and resolution. This is done by using the short-range forecast as the basis for the analysis and making a series of small corrections to that forecast based on new information from the observations. Consequently, analyses will be different for different models and will most likely differ from the best estimate of the true state of the atmosphere produced by a hand analysis. The observations and model fields are compared by forming the observation increment, which is simply the difference between an observation and the forecast, after the forecast has been transformed into an observation look-alike.

1.6 Using observation increments to make the analysis

This step is the core of the analysis process. It is the most complex step and has a very significant impact on the quality of the analysis: it carries the information in the observation increments, valid at the observation times and locations, onto the grid of the model forecast that supplies the initial conditions (the model grid). Instead of analysing the observations directly, data assimilation analyses the observation increments, making a grid of "corrections" to the previous short-range model forecast. These corrections are then added to the short-range forecast to form the initial conditions for the next forecast cycle. The analysis is performed using statistical information about the observations and the background fields, so as to extract as much model-usable information from the observations as possible while retaining as much of the structure of the background fields as possible.
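A minimal one-dimensional sketch of this increment-based update is given below; the Gaussian distance weighting used to spread the increments merely stands in for the full statistical analysis step, and the grid, observation operator and values are illustrative assumptions only.

```python
import numpy as np

def increment_update(grid_x, background, obs_x, obs_val, obs_operator, length_scale=2.0):
    """One sketch assimilation step on a 1-D grid:
    1) form observation increments d = y - H(x_b),
    2) spread them onto the grid as distance-weighted corrections,
    3) add the corrections to the short-range forecast (background)."""
    h_xb = obs_operator(background)                    # forecast turned into "observation look-alikes"
    increments = obs_val - h_xb                        # observation increments
    # Gaussian distance weighting stands in for the full statistical analysis step.
    w = np.exp(-((grid_x[:, None] - obs_x) ** 2) / (2.0 * length_scale ** 2))
    total = w.sum(axis=1)
    corrections = np.where(total > 1e-3, (w @ increments) / np.maximum(total, 1e-12), 0.0)
    return background + corrections                    # initial conditions for the next cycle

# Illustrative use: temperature (K) on a 1-D grid, two observations at x = 2.5 and x = 7.0.
grid_x = np.arange(0.0, 10.0, 1.0)
background = np.full_like(grid_x, 300.0)               # short-range forecast (illustrative)
obs_x = np.array([2.5, 7.0])
obs_val = np.array([301.2, 299.1])
H = lambda field: np.interp(obs_x, grid_x, field)      # simple observation operator: interpolation
analysis = increment_update(grid_x, background, obs_x, obs_val, H)
```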

To produce an accurate weather forecast, precise knowledge of the current state of the atmosphere (the 'initial conditions') is needed. This is achieved by taking observations and assimilating them into the model. Many thousands of observations are received each day from a variety of observing systems, e.g. satellites, aircraft, ships, buoys, radiosondes and land stations. Various atmospheric parameters are routinely measured, including temperature, wind (speed and direction) and humidity. Observations can be assimilated into the model by a number of processes, and variational analysis is one of them.

There are insufficient observations at any one time to determine the state of the atmosphere, so a detailed, complete picture of the atmosphere requires additional information. This is available as knowledge of the behaviour and probable structure of the atmosphere. For instance, knowledge of the typical structure of a depression enables a human analyst to draw an "analysis" of the atmospheric state based on scattered observations. To advance beyond this subjective approach, the behaviour of the atmosphere is embodied in a computer model; in particular, knowledge of the evolution with time is embodied in a forecast model. This makes it possible to use observations distributed in time. The model also provides a consistent means of representing the atmosphere. Assimilation is the process of finding the model representation which is most consistent with the observations.

Usually, data assimilation proceeds sequentially in time. The model organizes and propagates forward the information from previous observations, and the information from new observations is used to modify the model state so as to be as consistent as possible with them and with the previous information. Experience with operational assimilation for Numerical Weather Prediction (NWP) shows that there is usually more information in the model state, accumulated from previous observations, than there is in a new batch of observations at a single synoptic time. It is therefore important to preserve this information in the assimilation process; it is not just a question of fitting the new data. Since all information has to be represented within the model, the model should be of sufficiently high resolution, with physically realistic detail, to represent the information observed. Some research is investigating non-sequential data assimilation methods, especially four-dimensional variational assimilation.

Assimilation produces a convenient, comprehensive, high-resolution representation of the atmosphere. It has been clearly demonstrated that the use of a computer model is usually better (i.e. leads to better forecasts) than the subjective human approach. The main practical use of these assimilated "analyses" is for initializing NWP forecasts. They are also useful for climate and general circulation studies, for instance in the calculation of fluxes, which makes use of their high resolution and comprehensive coverage. However, it must be remembered that the blend of observed and modelled information varies according to the accuracy and coverage of the observations, so the analyses must be used with great care for model validation and climate change detection. Very useful secondary products of a data assimilation system are the statistics on the (mis)fit of the observations to the model; these can be used more directly for model (in)validation and for the monitoring of observing systems.

The schemes described above, including data assimilation techniques, have been utilized by various scientists over the globe with suitable modifications, e.g. in grid resolution, spatial variability and physical/dynamical constraints.

1.7 Earlier work done on objective analysis outside Indian region

Weather prediction methods are based on data available from irregularly distributed sites within synoptic regions. A sound objective analysis is the primary prerequisite of successful modelling. Objective analysis studies of meteorological variables started with Panofsky (1949), who attempted to produce contour lines of upper winds by fitting third-order polynomials to the observations at irregular sites using the least squares method. The least squares method leads to predicted field variables that depend on the distribution of the data points when a suitable polynomial is fitted to the full grid.
Optimum analysis procedures were introduced in meteorology by Eliassen (1954) and Gandin (1963). These techniques employ historical data about the structure of the atmosphere to determine the weights to be applied to the observations. The implied assumption is that observations close to each other are highly correlated; hence, as the observations get farther apart, the regional dependence decreases. Gilchrist and Cressman (1954) reduced the domain of polynomial fitting to small areas surrounding each grid point, in which a parabola was fitted. Bergthorsson and Döös (1955) proposed the basis of the successive correction methods, which do not rely on interpolation alone to obtain grid point values; a preliminary guess field is initially specified at the grid points. Cressman (1959) developed a number of further corrected versions based on reported data falling within a specified distance CR of each grid point. The value of CR is decreased with successive scans, and the resulting field of the latest scan is taken as the new approximation. Barnes (1964) summarized the development of a convergent weighted-averaging analysis scheme that can be used to obtain any desired amount of detail in the analysis of a set of randomly spaced data. This scheme, which is described in CHAPTER VI, is based on the supposition that the two-dimensional distribution of an atmospheric variable can be represented by a Fourier integral, i.e. by the summation of an infinite number of independent waves. A comparison of objective methods for sparse data up to 1979 is provided by Goodin et al. (1979). Their study indicated that fitting a second-degree polynomial to each triangular subregion in the plane, with each data point weighted according to its distance from the subregion, provides a compromise between accuracy and computational cost. Koch et al. (1983) presented an extension of the Barnes method designed for an interactive computer scheme. Such a scheme allows real-time assessment both of the quality of the resulting analyses and of the impact of satellite-derived data upon various meteorological datasets.

According to Thiebaux and Pedder's (1987) assessment of the work done by Bergthorsson and Döös, "the most obvious disadvantage of simple inverse distance-weighting schemes is that they fail to take into account the spatial distribution of observations relative to each other". Two observations equidistant from a grid point are given the same weight regardless of their positions relative to each other. This may lead to large operational biases in grid point data when some observations are much closer together than others within the region of influence. Especially after the 1980s, many researchers concentrated on the spatial covariance and correlation structures of regional meteorological variables. Lorenc (1981) developed a methodology in which the grid points in a subregion are first analysed simultaneously using the same set of observations, and the subareas are then combined to produce the analysis for the whole study area.
Some papers are concerned with the determination of the unknown parameters of covariance functions that provide the required weighting for atmospheric data assimilation. Along this line, the idea proposed by Bratseth (1986) depends on the incorporation of the meteorological covariances into the objective analysis. His analysis caused a resurgence of the successive correction method, in which the optimal analysis solution is approached iteratively. Bratseth's (1986) method uses the correlation function of the forecast errors to derive weights that are reduced in regions of higher data density. Buzzi et al. (1991) described a simple and economical method for reducing the errors that can result from the irregular distribution of data points during an objective analysis. They demonstrated that a simple iterative method for correcting the analysis generated by an isotropic distance-weighting scheme applied to an inhomogeneous spatial distribution of observations can not only improve the analysis accuracy but also yield an actual frequency response that closely approximates the theoretical response of the weight-generating function. They also showed that, in the case of inhomogeneous spatial sampling, a Barnes analysis can produce an unrealistic interpolation of the sampled field even when this field is reasonably well resolved by error-free observations. Iteration of a single correction algorithm led to the method of successive correction (Daley, 1991), which has been applied as a means of adaptively tuning the a posteriori weights. Objective analysis schemes are practical attempts to minimize the estimation variance (Thiebaux and Pedder, 1987). Pedder (1993) provided a formulation of a successive correction scheme based on multiple iterations with a constant influence scale, which provided a more effective approach to estimating spatial fields from scattered observations than the more conventional Barnes method, which usually involves varying the influence scale between iterations. Dee (1995) presented a simple scheme for the on-line estimation of covariance parameters in statistical data assimilation systems. The basis of the methodology is a maximum-likelihood approach in which estimates are obtained from a single batch of simultaneous observations. Simple and adaptive Kalman filtering techniques are used for the explicit calculation of the forecast error covariances; however, the computational cost of the scheme is rather high.

There are many more studies on objective analysis, and the modification and development of better schemes are still going on at various research centres.

1.8 Special difficulties with tropical analysis

Analysis of any meteorological parameter over the tropics presents a number of problems. Availability of data is one of the major difficulties in tropical analysis procedures: data sparsity is a serious issue because oceans occupy a major portion of the tropics. The wind observations are found to be more reliable than the height field, and geostrophic balance is not valid near the equator. Another important factor is the tropical flow itself. The climatological variance of the flow is much smaller in the tropics than in mid-latitudes. This means that errors that might be considered small in an extratropical region are large in the tropics and cannot be ignored. The analysis over the Indian region has additional problems because of the surrounding Bay of Bengal, Arabian Sea and, to the south, the Indian Ocean; consequently, data-sparse regions surround the Indian region on all sides.

The third point is that in the tropics cumulus convection and the resulting diabatic heating are very important; consequently, the divergent part of the wind is comparable to the rotational part. Hence, special efforts have to be made in the analysis schemes to retain the divergent part in the wind analysis; otherwise it is smoothed out during the analysis procedure. Possibly for the above reasons, the performance of the analysis schemes of major forecasting centres, viz. the National Centers for Environmental Prediction (NCEP), the European Centre for Medium-Range Weather Forecasts (ECMWF) etc., is comparatively poorer over the tropics than over other regions. Moreover, the analysis schemes at those centres use the error statistics of their forecasting models to determine the weighting factors for the observations; as a result, the 'initial guess' field is relatively poor in the tropics, and the integration deteriorates further from a comparatively poor initial guess. Hence, in order to improve the analysis over the tropics, and specifically over the Indian region, alternative ways and methods should be examined.

Since there is a near balance between the mass and wind fields in the middle and high latitudes, it is possible to use the MOI scheme or the variational analysis scheme there. However, the coupling between these two fields has to be relaxed gradually as the equator is approached. The lack of a simple relationship between them is one of the reasons why analyses over the tropics are not as good as those over the middle and higher latitudes. Also, since the pressure and temperature gradients are weak, the observing techniques often lack the accuracy needed to resolve them.

The u-wind error in the large-scale analysis varies from 1 m/s at 900 hPa to 3 m/s at 10 hPa in mid-latitudes, whereas it varies from 2 to 6 m/s in the tropics. These large analysis errors in the tropics are caused by the tendency of the forecast model to drift rapidly towards its own preferred climate; the data available in the tropics are not sufficient to correct this drift and pull the analysis back to the true state.

1.9 Earlier work done on objective analysis over Indian region

Over the Indian region, important studies of wind, height, relative humidity and temperature analysis have been carried out using successive correction as well as OI and variational methods. Some of the earlier studies are those of Sikka and Ramanathan (1970), Ramanathan and Sikka (1971), Ramanathan et al. (1972, 1973), Datta et al. (1970), Datta and Singh (1973), Sinha et al. (1982a, 1982b), Rajamani et al. (1982, 1983, 1986) and Sinha et al. (1987, 1989, 1990). Ramanathan et al. (1973) computed structure functions of the wind field at 500 hPa for the winter season over India and its neighbourhood and applied them to the wind field analysis following a procedure based on the OI scheme. These authors also compared the analyses produced by the successive correction method, a double Fourier series fitting method and the OI method. Their results showed that the analyses by these methods perform satisfactorily over data-dense regions; the analysis over data-sparse regions poses great problems, and a satisfactory method is yet to be found.

Datta et al. (1970) made use of Cressman's method for the analysis of contour heights. Datta and Singh (1973) also suggested a way of utilizing the relatively denser surface data in the analysis at higher levels. Sinha et al. (1982a, 1982b) made objective analyses of sea level pressure and contour height. Rajamani et al. (1982) used Cressman's successive correction method to study the influence of Monsoon Experiment (MONEX-79) data on the objective analysis of the wind field. It was found that the MONEX-79 data improved the analysis; however, the data from aircraft and ships improved the analysis more than the satellite data. Rajamani et al. (1983) formulated an OI scheme in which they used the climatological characteristics of the wind field over the Indian region for the month of July. Rajamani et al. (1986) determined the optimum grid length with respect to the existing network of upper air observing stations over the Indian region. Prasad and Bansal (1987) applied the NCEP OI analysis scheme to a few synoptic cases. Begum et al. (1987) made objective analyses of relative humidity (RH) and showed that there was an improvement in the analysis when estimated upper air RH data were used. Sinha et al. (1987) carried out objective analysis of the mixing ratio over the Indian region. In another study, Sinha et al. (1989) carried out wind analysis over the Indian region using three different weighting functions for isotropic and anisotropic conditions. It was found that the use of anisotropic weighting functions improved the wind analysis, specifically when there is a strong wind with curvature. Sinha et al. (1990) and Kulkarni et al. (1991) carried out relative humidity analyses using a univariate OI scheme including estimated upper air relative humidity derived from surface observations. Sinha et al. (1992) carried out some experiments with a multivariate objective analysis scheme for heights and winds using the OI approach, in which the height and wind fields were analysed simultaneously. For this, the height-height correlations were calculated using daily height data for four July months (1976-1979) and were further used to derive the other autocorrelations and cross-correlations assuming a geostrophic relationship. A Gaussian function was used to model the autocorrelation function. Since the scheme was multivariate, the regression coefficients (weights) were matrices. Near the equator, the geostrophic approximation relating mass and wind was decoupled in a way similar to Bergman (1979). Sinha et al. (1995) further modified this scheme over the tropics by including the divergent component in the statistical model of the wind forecast errors, as proposed by Daley (1985). With this new scheme the divergent component of the wind field was retained in the analysis, which enhanced the values of the velocity potential and divergence fields. Mahajan et al. (1992) used satellite-derived winds for objective analysis by Cressman's successive correction method. Their study suggests that an objectively analysed wind field that includes winds constructed from cloud motion vectors (CMVs) can be used in NWP. Sinha et al. (1992) also implemented an efficient OI scheme over the Indian region following Tanguay and Robert (1990), so as to minimize the computer time.

Using a three-dimensional variational (3D-Var) analysis scheme, Rizvi et al. (2000) attempted to assimilate directly the geophysical parameters Sea Surface Wind speed (SSW) and Total Precipitable Water Content (TPWC) from the MSMR (Multi-frequency Scanning Microwave Radiometer), an onboard sensor of the IRS-P4 satellite. Mitra et al. (2000) used the 75 km resolution IRS-P4 satellite data of TPWC and studied its representativeness over the Indian Ocean region. They used the T80 National Centre for Medium Range Weather Forecasting (NCMRWF) model's operational analysis of TPWC as the first guess, and the objective analysis was performed by introducing the IRS-P4 water vapour data as observations using the standard Cressman technique. They concluded that this new reanalysis was able to capture the signature of the TPWC data from the IRS-P4 satellite, and that the observed values from IRS-P4 had a positive bias compared to the NCMRWF analysis over the region of the satellite pass. In another study, Mitra et al. (1997) analysed daily rainfall over the Indian monsoon region using the Cressman scheme, combining daily raingauge observations with daily rainfall derived from INSAT IR radiances.

Keeping in view the brief background on objective analysis given in this chapter, the following chapters describe the schemes developed and tested using real-time data sets. CHAPTER II elaborates on the performance of 2-Dimensional and 3-Dimensional Numerical Variational analysis schemes.

