
Unified Model Documentation Paper 000

UM Basic User Guide

UM Version : 10.1
Last Updated : 2015-03-06 (for vn10.1)
Owner : Glenn Greed

Contributors:
T. Allen, V. Bell, N. Bellouin, A. Bodas-Salcedo, P. Cresswell,
S. Dadson, M. Dalvi, J. M. Edwards, P. Faloon, M. Foster,
G. Greed, R. S. R. Hill, A. Jones, R. Jones, A. Lock,
B. Macpherson, J. Manners, C. Morcrette, S. D. Mullerworth,
J. G. L. Rae, S. Reddy, D. L. Roberts, G. Rooney, N. Savage,
P. Selwood, R. Stratton, S. Webster, J. M. Wilkinson, A. Wiltshire,
R. Wong, M. J. Woodage and S. Woodward

Met Office
FitzRoy Road
Exeter
Devon EX1 3PB
United Kingdom

© Crown Copyright 2015


This document has not been published; permission to quote from it must be obtained from the Unified Model
system manager at the above address.

Contents

List of Figures

List of Tables

1 Preface

2 Introduction
  2.1 The Unified Model

3 Understanding the Unified Model
  3.1 Atmospheric Physics
  3.2 Atmospheric Dynamics

4 Code Management and Structure
  4.1 Managing and Modifying Source Code
  4.2 Compilation System
  4.3 The UM on Parallel Computers
  4.4 Atmosphere Model Control Code

5 Using the Unified Model
  5.1 Getting Started
  5.2 Reconfiguration
  5.3 Changing Resolution
  5.4 Atmospheric Tracers
  5.5 Selecting a New LAM Area
  5.6 Science Options
  5.7 Assimilation in the Atmosphere Model
  5.8 Ancillary and Boundary Files
  5.9 Single Column Unified Model
  5.10 NEMO Ocean and CICE Sea Ice Models
  5.11 Coupled Models
  5.12 Diagnostic Output (STASH)
  5.13 Climate Meaning
  5.14 File Utilities
  5.15 Troubleshooting

A Output From the Unified Model
  A.1 Introduction
  A.2 Managing Output

B Error Handling In the Unified Model
  B.1 Introduction
  B.2 Default Exceptions
  B.3 Overriding Default Behaviour
  B.4 Post-failure actions

C Acronyms

D Definitions

References


List of Figures

4.1 Structure of an fcm-make app.
4.2 An example app’s fcm-make.cfg file
4.3 Abstract representation of the UM’s fcm-make directory. Here “executable” refers to the type of
executable(s) to be built, e.g. atmosphere and reconfiguration, or a UM utility.
4.4 An app’s fcm-make.cfg file, with overrides

5.1 Example simple UM Rose suite as depicted by the cylc gui.
5.2 Example Rose Bush view of a model run with links to output.
5.3 Example Rose GUI.
5.4 Example view of the rosie GUI.


List of Tables

5.1 Unified Model Sections - Atmosphere (A), Control (C) and Land (L)
5.2 File Utilities


Chapter 1: Preface

This document is the Basic User Guide for the Unified Model (UM). It gives basic information on how to use the
model and includes references to relevant UM Documentation Papers (UMDPs) where more detailed information
is required.
Chapter 2 provides the introduction. It gives an overview of the UM and its development history.
Chapter 3 contains information describing various aspects of the Unified Model. It is intended to help explain
how the UM works and describe some of its features, but does not explain how to use it. Chapter 4 describes
the UM source code, including its structure, management and compilation.
The main part of the User Guide is chapter 5. It contains instructions on how to use various parts of the UM and
how to perform particular tasks.
The Basic User Guide is intended for use by both Met Office and external users of the UM.
This document was written in LaTeX and is available in HTML, PDF and PostScript forms. The HTML and PDF
forms contain links to allow easy navigation around the document and also out to external documents (such
as UMDPs, source code listings, etc). External users will find that some of these links will not work since they
are specific to the Met Office or have not been configured.
If you have any contributions, corrections, or any other feedback concerning this document please contact the
editor at the following e-mail address: umsysteam@metoffice.gov.uk.

Note from the editors

Some authors have not provided updates for several releases and these sections are now considered out-of-
date, in some cases significantly so. These sections are reproduced verbatim from the previous version of this
document with a comment at the start of the section noting that the text has not been updated for this version.
Any questions regarding these sections should be directed to the relevant code owner.
Sections which have been recently checked or updated and are considered relevant to the latest release of the
UM are not marked in this way, even if they have not been updated for this version of the document.


Chapter 2: Introduction

2.1 The Unified Model
Author: Glenn Greed, Last Updated: 07 Jul 2014

2.1.1 Overview

The Met Office Unified Model (UM) is the name given to the numerical modelling software developed and used at
the Met Office. The UM is “seamless”; the model can be used for prediction across a range of timescales (from
weather forecasting to climate change) and this has been at the heart of the Met Office strategy for weather and
climate prediction since 1990.
The UM is used by the Met Office for operational numerical weather prediction, atmospheric research, and by
the Met Office Hadley Centre for seasonal forecasting and to simulate and predict the Earth’s climate. The UM
is also used by a number of meteorological centres and research organisations worldwide.
Different configurations of the same model are used across all time and space scales, where each configuration
is designed to best represent the processes which have most influence on the timescale of interest. For
example, for accurate climate predictions the use of a coupled ocean model is essential, whilst for short-range
weather forecasting a higher resolution atmospheric model may be more beneficial than running a costly ocean
component.
There are many benefits of using a seamless modelling system, including:
• efficiency: by developing only one system, duplicated effort is reduced
• understanding: a seamless system allows us to learn about climate model performance and error growth
from well constrained, observed and initialised shorter range predictions
• confidence: using the same model at different resolutions gives us confidence that the driving mechanisms
within models are consistent with each other
• synergy: advances in climate science can improve weather forecasts and vice-versa.
The UM was introduced into operational service in 1991 and both its formulation and capabilities have been
developed ever since, taking advantage of our improved understanding of atmospheric processes and steadily
increasing supercomputer power.
The UM supports global and limited area domains, the ability to run with a rotated pole, a variable horizontal
grid, and the capability to couple with various Earth system components/models, i.e. land surface, ocean, wave
and chemistry.

2.1.2 Applications

A typical run of the system consists of an optional period of data assimilation followed by a prediction phase.
Forecasts from a few hours to a few days ahead are required for numerical weather prediction, while for climate
modelling the prediction phase may be for tens, hundreds or even thousands of years.
A separate reconfiguration module allows atmosphere model data to be optionally interpolated to a new grid or
area prior to the start of a run.
Example configurations of the UM employed at the Met Office are listed on the Met Office website:
NWP: http://www.metoffice.gov.uk/research/modelling-systems/unified-model/weather-forecasting
Climate Prediction: http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models


2.1.3 Features and Capabilities

Formulation

Until version 6.6, the UM code included atmospheric, ocean and sea-ice model components. The ocean and
sea-ice components were retired at UM7.0 in favour of coupling the UM atmosphere with the NEMO ocean and
CICE sea-ice models. These are owned and developed separately from the UM, follow completely independent
release cycles and, when coupled to the UM, run as separate components within an MPMD parallel
configuration.

Atmospheric Prediction

The atmospheric prediction component uses a set of equations that describe the time-evolution of the atmosphere.
The equations are solved for the motion of a fluid on a rotating, almost-spherical planet. The main variables are
the three components of wind (westerly, southerly and vertical), potential temperature, Exner pressure, density
and components of moisture (vapour, cloud water and cloud ice). The UM dynamical core is non-hydrostatic,
which allows for vertical accelerations, enabling the model to be used for very high-resolution forecasting. Two
dynamical cores are supported: New dynamics (UMDP-015) and the newer ENDGame (UMDP-016). The Unified
Model uses a grid-point scheme on an Arakawa ‘C’ grid in the horizontal, with model data ordered south to north.
In the vertical, a Charney-Phillips grid is used which is terrain-following near the surface but evolving to constant
height surfaces higher up.
The physical processes represented include:
• atmospheric longwave and shortwave radiation allowing for the effects of clouds, water vapour, ozone,
carbon dioxide and a number of trace gases;
• land surface processes including a multi-layer soil temperature and moisture prediction scheme;
• a treatment of the form drag due to the sub-grid scale variations in orography;
• vertical turbulent transport within the boundary layer;
• large-scale precipitation determined from the water or ice content of a cloud;
• the effects of convection through a scheme based on the initial buoyancy flux of a parcel of air, which
includes entrainment, detrainment and the evaporation of falling precipitation;
• an interactive modelling of the effects of aerosols, such as sulphates, fossil fuel soot, mineral dust and
biomass smoke as well as sea salt, with their transport, mixing, deposition and radiative effects being
represented;
• routing of the total (surface plus subsurface) runoff produced by the surface physics along prescribed river
channels to the sea;
• the effects of tropospheric oxidants.
A range of ancillary data may be incorporated into the model as a run progresses. The options range from lateral
boundary forcing information for regional configurations (a number of regional domains may be nested within
each other), to the periodic updating of fields such as ozone concentration to allow for the effects of seasonal
variations, and surface forcing for ocean-only configurations.

Diagnostic Processing

Diagnostic data output from the model may be defined and output on a timestep by timestep basis. Horizontal
fields, sub-areas or timeseries may be requested, for inspection and further analysis, or as forcing information
for other sub-models such as a sea-state prediction model. Accumulation, time meaning or spatial meaning
of fields may also be specified. The output may also be split across a number of files in order to facilitate the
separate post-processing of classes of diagnostics.


Output

The results of an integration are stored in a format compatible with other file types used within the Unified Model
system: fieldsfiles. Several utilities exist which will convert them into a format suitable for further processing, for
example PP format, GRIB and netCDF.

Computer Requirements

The UM code is written in Fortran. A small number of low-level service routines are written in ANSI C to
facilitate portability.
The source code is managed and built using FCM (based on Subversion; https://github.com/metomi/fcm). The
UM suites are managed, configured and run using the Rose infrastructure (https://github.com/metomi/rose/).
For time-critical production work (NWP and climate prediction) the UM requires a powerful supercomputer.
The UM is written to be efficient on distributed memory massively parallel processing (MPP) computers, e.g.
Cray, optimised to run on vector processing platforms, e.g. NEC, and also on shared memory computers, such
as the IBM Power 7 series. The UM code is bit reproducible independent of the number of processing elements
and the number of threads used.
An I/O server is available, providing both asynchronous diagnostic and asynchronous dump I/O support.
The code may also be run on workstation systems and desktop PCs running the Linux operating system using
the IEEE standard for representing floating point numbers. Workstations and Linux PCs are particularly suited
to development work or research activities requiring the use of low resolution UM configurations.

User Interface

The wide range of options available within the Unified Model system means that a large number of parameters
need to be defined in order to specify and control a particular model configuration. All the UM inputs are
configurable via the Rose GUI.

Reconfiguration

The reconfiguration stage processes the model’s initial data. For the atmosphere model, it allows the horizontal
or vertical resolution of the data to be changed, or a limited area region to be created. Ancillary data may
also be incorporated into the initial data. Fields such as the specification of the height of orography or the
land-sea mask, for example, may be overwritten by more appropriate values if the horizontal resolution has
been changed. Other facilities available within the reconfiguration include the ability to incorporate ensemble
perturbations, tracer fields and the transplanting of prognostic data.

Ancillary Data

New ancillary data may be incorporated as an integration proceeds to allow for the effects of temporally varying
quantities such as surface vegetation, snow cover or wind-stress. The choice of whether to include ancillary data
will depend on the application. For example, the system supports daily updating of sea-surface temperatures
based on climatological or analysed data — this facility is not needed in coupled ocean-atmosphere runs of
the model. In fact options exist to allow most boundary forcing variables to be predicted. However, it is often
necessary to constrain such fields to observed or climatological values for reasons of computational economy
or scientific expediency.

OASIS Coupler

The UM supports the use of the serial OASIS3 (Ocean Atmosphere Sea Ice Soil) coupler, developed at the
European Centre for Research and Advanced Training in Scientific Computation (CERFACS), and the parallel

OASIS3-MCT (Model Coupling Toolkit) coupler. The functionality provides for the coupling of independent
atmosphere and ocean-sea-ice GCMs, and potentially other components such as wave models, in climate,
seasonal and forecast modelling scenarios.


Chapter 3: Understanding the Unified Model

3.1 Atmospheric Physics

When the UM is run at the standard climate resolution, grid-points are about 300 km apart, and even in a
high-resolution forecast run grid-points are still 4 km apart. Many atmospheric processes operate on much smaller
length-scales and therefore cannot properly be resolved in the UM. Such processes are parametrized: it is
assumed that the gross effects of these processes on resolved scales can be predicted using a modelling
scheme which itself operates on the large-scale fields in the model. This assumption seems to be justified in
practice.
The following sections describe the physical processes which are represented in the model.

3.1.1 Cloud Scheme


Author: Cyril Morcrette, Last Updated: 29 Nov 2010
The representation of clouds is an essential part of any general circulation model (GCM) used for numerical
weather prediction (NWP) or climate-change studies. Clouds interact with solar and thermal infra-red radiation,
affecting surface temperatures and the planet’s radiative budget. They affect the large-scale circulation by
redistributing latent heat and as precursors of precipitation they are a key part of the hydrological cycle.
At the resolution of general-circulation models, one cannot assume that a gridbox is either completely cloudy or
completely clear. The purpose of the cloud scheme is to determine how much cloud cover there is within the
gridbox and the condensed water content of those clouds. The cloud fraction and condensate amounts calculated by
the cloud scheme are then used as inputs to the radiation scheme and the microphysics/precipitation scheme.

The diagnostic cloud scheme

The diagnostic cloud scheme described by Smith (1990) [56] considers fluctuations about the grid-box mean
“vapour+liquid” water content and these are parametrized via a probability density function (PDF), which is
assumed to be triangular and symmetric and whose width is specified through the critical relative humidity
(RHcrit), typically around 0.8. When the grid-box mean relative humidity is below RHcrit, there is no cloud. As
the relative humidity increases, so does the cloud fraction, reaching 0.5 when RH = 100%. Complete cloud cover
is reached when RH = 2.0 - RHcrit. There is a close relationship between liquid cloud fraction and liquid water
content.
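As an illustration of the behaviour described above, the short sketch below computes the liquid cloud fraction
implied by a symmetric triangular PDF of half-width (1 - RHcrit) centred on the grid-box mean relative humidity.
It is illustrative only: the operational scheme works with total water content rather than relative humidity, and
the precise formulation is given in UMDP-029.

def liquid_cloud_fraction(rh, rh_crit=0.8):
    # Cloud fraction implied by a symmetric triangular PDF of half-width
    # (1 - rh_crit) centred on the grid-box mean relative humidity 'rh'.
    # Reproduces the behaviour described above: 0 at RH <= RHcrit,
    # 0.5 at RH = 1.0 and 1 at RH >= 2 - RHcrit.
    x = (rh - 1.0) / (1.0 - rh_crit)   # normalised saturation departure
    if x <= -1.0:
        return 0.0                     # whole grid box sub-saturated
    if x >= 1.0:
        return 1.0                     # whole grid box saturated
    if x <= 0.0:
        return 0.5 * (1.0 + x) ** 2    # only the moist tail of the PDF is saturated
    return 1.0 - 0.5 * (1.0 - x) ** 2  # all but the dry tail is saturated

# Example: a grid-box mean RH of 90% with RHcrit = 0.8 gives a cloud fraction of 0.125.
print(liquid_cloud_fraction(0.90))
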
Ice cloud fraction is calculated assuming a diagnostic relationship with the ice water content which has been
calculated by the microphysics scheme and there is no direct link between relative humidity and ice cloud
fractions. The total volume cloud fraction is calculated assuming a specified amount of overlap between the ice
and the liquid clouds. Mixed-phase regions can exist where ice and liquid clouds overlap.
Further details can be obtained from UMDP-029 .

The prognostics cloud scheme (PC2)

The prognostic cloud fraction and prognostic condensate (PC2) cloud scheme recognises that the cloud fraction
and condensate amount at a particular place and time cannot necessarily be diagnosed from the temperature and
humidity at that point and time. Clouds in the atmosphere may be the result of processes, such as convection,
which occurred at a different place, at some point in the past.
There are many physical processes that can impact the evolution of the cloud fraction and condensed water
content. These include: short-wave radiation, long-wave radiation, boundary-layer processes, large-scale ascent,
large-scale precipitation, convection and sub-grid mixing. Each of these processes modifies the cloud fraction
and condensed water content and the updated cloud fields are then moved around by the wind. Mixed-phase
regions can exist where ice and liquid clouds overlap.
PC2 is available from UM version 6.5. Additional details are in UMDP-030 and in Wilson et al (2008a and b)
[66, 67].

3.1.2 Radiation
Author: J. Manners, J. M. Edwards, A. Bodas-Salcedo, Last Updated: 28 March 2012

Shortwave radiation from the Sun is absorbed in the atmosphere and at the Earth’s surface and provides the en-
ergy to drive the climate system. Longwave radiation is emitted from the planet into space and also redistributes
heat within the atmosphere. To represent these processes a GCM must contain a radiation scheme.
Various radiative processes occur in the atmosphere: gases, aerosols, cloud droplets and ice crystals absorb
radiation and emit thermal radiation. Aerosols, cloud particles and air molecules scatter radiation. The surface
absorbs, reflects and emits radiation. The radiative properties of atmospheric constituents may vary rapidly with
frequency and the geometrical arrangement of clouds is an important influence on the radiation budget. The
modelling of atmospheric radiation is therefore potentially very complicated and a radiation scheme suitable for
a GCM must involve some approximations: different approaches lead to different schemes which are discussed
in more detail in UMDP-023 .
The radiation scheme models the radiation field in terms of an upward and a downward flux. External spectral
files, discussed further in the appendix of UMDP-023 , allow freedom in the choice of gases, aerosols and
cloud parametrizations to be included in the run. Typically, water vapour, carbon dioxide, ozone and oxygen
are considered in shortwave calculations, and water vapour, carbon dioxide, ozone, methane, nitrous oxide and
various halocarbons in longwave calculations. See section 3.1.4 for details of the aerosols included. Within
clouds, water droplets and ice crystals are treated separately with optical properties depending on their sizes.
In the case of water droplets, a microphysical scheme allows the size to be related to the number of cloud
condensation nuclei. In the case of ice crystals, the size is related to the temperature. There are various
options for treating the overlap and inhomogeneity of cloud in the column. More details are given in the GUI
metadata/help.

COSP

The Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP) is a
modular piece of software whose main aim is to enable the simulation of data from several satellite-borne
sensors from model variables. The stand-alone version of the software includes simulators for the following
instruments: CloudSat radar, CALIPSO lidar, ISCCP, the Multiangle Imaging SpectroRadiometer (MISR), and
the Moderate Resolution Imaging Spectroradiometer (MODIS). The UM includes an implementation. See [10]
for more details on COSP.

3.1.3 Boundary Layer


Author: A. Lock, Last Updated: 7th July 2010

The atmospheric boundary layer is that part of the atmosphere that directly feels the effect of the Earth’s surface
through turbulent exchange of heat, moisture, momentum, pollutants and other constituents. Its depth can range
from just a few metres to several kilometres depending on the local meteorology. The turbulent motions within
the boundary layer are not resolved by NWP or climate models, but are important to parametrize in order to give
a good representation of, for example, low-level clouds, winds and temperatures.
The boundary layer scheme determines the turbulent fluxes strictly above the surface. At the surface, fluxes
are determined by the surface exchange scheme, described in section 3.1.7. The boundary layer scheme
involves the calculation of vertical eddy diffusivities to represent the unresolved turbulent motions. There are two
independent approaches available in the UM. The first is the long-standing “diagnostic” scheme, that uses the
resolved state of the atmosphere and surface to determine the diffusivities. The following sources of turbulence
are represented: surface heating or wind shear, radiative and evaporative cooling at stratocumulus cloud tops
and local instabilities driven by wind shear. The scheme is also closely coupled to the parametrization of

cumulus convection, described in section 3.1.6. This version is currently used in all operational configurations.
The second version (1A) calculates the diffusivities from prognostic equations for the higher order moments of
the turbulence, including the turbulence kinetic energy (TKE). Three versions of varying complexity are available
but all require relatively short timesteps and high vertical resolution for acceptable accuracy. They are currently
only recommended for research purposes but their prognostic nature makes them particularly suited to regimes
of rapid temporal or spatial variation in surface forcing.
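The essence of the diffusivity approach can be illustrated with a down-gradient (K-theory) flux calculation; the
sketch below is schematic only, with made-up diffusivity values, and is not the UM's actual discretisation.

def downgradient_flux(var, z, k_eddy):
    # Turbulent flux implied by an eddy-diffusivity closure: flux = -K d(var)/dz,
    # evaluated on the interfaces between model levels. The diffusivity profile
    # 'k_eddy' is what the boundary layer scheme itself would provide.
    flux = []
    for i in range(len(var) - 1):
        dvar_dz = (var[i + 1] - var[i]) / (z[i + 1] - z[i])
        flux.append(-k_eddy[i] * dvar_dz)
    return flux

# Example: potential temperature increasing with height implies a downward heat flux.
theta = [300.0, 300.5, 301.5]      # K, on model levels
z = [10.0, 50.0, 150.0]            # m, level heights
k_eddy = [10.0, 5.0]               # m2 s-1, illustrative interface diffusivities
print(downgradient_flux(theta, z, k_eddy))
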

3.1.4 Aerosols
Author: N. Bellouin, J. G. L. Rae, S. Reddy, D. L. Roberts, M. J. Woodage, S. Woodward and A. Jones, Last Updated: 28th
July 2010
The UM includes schemes to interactively model ammonium sulphate, black carbon, biomass-burning, mineral
dust, fossil fuel organic carbon and ammonium nitrate aerosols, including transport, mixing, deposition and
radiative effects. There is also a relatively simple diagnostic scheme for sea-salt aerosol. Scientific details
can be found in UMDP-020 (Aerosol Processes). All aerosol types are independently selectable, though if
modelling their indirect effects on clouds one must include sulphate, as this is the dominant aerosol type for
cloud condensation nuclei (CCN).
The radiative properties of aerosols are specified in the spectral files which are discussed under the radiation
code, see section 3.1.2. The effects of hygroscopic growth are parametrized for hygroscopic aerosols (sulphate,
nitrate, sea-salt, biomass-burning and organic carbon aerosols).
The user may also select non-interactive aerosol climatologies from ancillary files. In the event of an aerosol
climatology being selected for a species for which interactive modelling has also been selected, the latter is
neglected.

3.1.5 Precipitation
Author: J.M. Wilkinson, Last Updated: 15 Apr 2009
Precipitation is represented in two schemes in the Unified Model. The convection scheme removes moisture
generated in subgrid-scale convective systems. The large-scale precipitation scheme, described here, removes
cloud water that is resolved on the grid scale.
The large-scale precipitation scheme is based on the Wilson and Ballard (1999) [65] mixed-phase precipitation
scheme. The basic scheme has four categories of water: vapour, liquid, ice and rain. Snowfall is represented
as the fall of particles in the ice category that are advected downwards.
The microphysical processes that transfer water between categories are parametrized within the large-scale
precipitation scheme.
The scheme is an upgraded version of the original [65] scheme, with greater functionality. It is designed to work
with the PC2 cloud scheme.
Your default choices should be to run with the second ice prognostic, prognostic rain and prognostic graupel all
turned off.
Further information is contained within UMDP-026. Although you should cite Wilson and Ballard (1999) [65]
as the reference for the Unified Model precipitation scheme, please refer to UMDP-026 for any scientific or
technical detail you require, as the code now significantly differs from that described in [65].

3.1.6 Convection
Author: Rachel Stratton, Last Updated: July 2010
The UM convection scheme represents the transport of heat, moisture and momentum associated with cumulus
convection (precipitating and non-precipitating) within a model gridbox. Within a model grid box, an ensemble of
cumulus clouds is represented as a single entraining-detraining plume. Entrainment and detrainment represent
interactions that occur between convective clouds and the environment through which they ascend. The scheme
is described in detail in UMDP-027 .
The scheme can be broken into four main components,

1. Triggering: either determined by the diagnosis scheme used to classify boundary layer types (see [8]) as
deep, congestus or shallow, or, if the instability is above the boundary layer, by the mid-level convection
scheme using the original Gregory-Rowntree scheme.
2. Cloudbase closure, i.e. determining how much convection will occur. The amount of convection is
determined by the mass transported through cloudbase.
3. Environment modification: The transport model determines the changes to the model temperature,
moisture and wind fields due to convection and associated precipitation.
4. Diagnosis of convective cloud (Non-PC2): The convective cloud scheme calculates the amount of cloud
associated with the convection. The convective cloud amount is passed to the radiation scheme. If PC2
is active, the convection scheme provides increments to the prognostic large-scale cloud variables and
these are passed to radiation instead of a separate diagnosed convective cloud.
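The transport step (component 3 above) is built around an entraining-detraining plume. The sketch below shows,
in highly simplified form, how such a plume's mass flux and an in-plume property might be updated level by
level; the entrainment and detrainment rates and the discretisation are illustrative assumptions, not the UM
formulation (see UMDP-027).

def plume_ascent(mass_flux, phi_plume, phi_env, dz, entrain, detrain):
    # Illustrative entraining-detraining plume: the mass flux follows
    # dM/dz = (e - d) M, and the in-plume property phi is diluted towards the
    # environmental profile phi_env by entrainment. Rates are per metre.
    levels = []
    for phi_e in phi_env:
        mass_flux = mass_flux * (1.0 + (entrain - detrain) * dz)
        phi_plume = phi_plume + entrain * dz * (phi_e - phi_plume)
        levels.append((mass_flux, phi_plume))
    return levels

# Example: a buoyant parcel diluting towards a cooler environment as it ascends.
print(plume_ascent(mass_flux=0.1, phi_plume=305.0,
                   phi_env=[303.0, 302.0, 301.0],
                   dz=500.0, entrain=2.0e-4, detrain=1.0e-4))
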
There are currently three variants of the convection scheme, 4A, 5A and 6A. The 4A scheme is used by the
operational model and the development version of the climate model HadGEM3. The 5A version is currently a
development version, with the mass flux, shallow, deep and mid-level schemes of the 4A version plus options
for new turbulence based shallow and deep convection schemes. The 6A version is a second development
including a turbulence based congestus option.
Many options and parameters may be configured via the GUI/namelists. (For the latest advice and information
about the options consult the metadata/GUI help.)

3.1.7 Land Surface and Vegetation


Author: Gabriel Rooney, Last Updated: 15 Nov 2012
The surface scheme calculates the fluxes of heat, moisture, momentum and carbon at the surface-atmosphere
interface, and updates the surface and sub-surface prognostic variables which affect these fluxes. In the current
UM, the surface-atmosphere fluxes are calculated within the JULES submodel. The JULES code is separate from
that of the UM; code from both repositories is extracted and combined in one model compilation.
For further information on JULES, please see:
Best, M. J., Pryor, M., Clark, D. B., Rooney, G. G., Essery, R. L. H., Ménard, C. B., Edwards, J. M., Hendry, M.
A., Porson, A., Gedney, N., Mercado, L. M., Sitch, S., Blyth, E., Boucher, O., Cox, P. M., Grimmond, C. S. B.,
and Harding, R. J.: The Joint UK Land Environment Simulator (JULES), model description - Part 1: Energy and
water fluxes, Geosci. Model Dev., 4, 677-699, doi:10.5194/gmd-4-677-2011, 2011.
Clark, D. B., Mercado, L. M., Sitch, S., Jones, C. D., Gedney, N., Best, M. J., Pryor, M., Rooney, G. G., Essery,
R. L. H., Blyth, E., Boucher, O., Harding, R. J., Huntingford, C., and Cox, P. M.: The Joint UK Land Environment
Simulator (JULES), model description - Part 2: Carbon fluxes and vegetation dynamics, Geosci. Model Dev., 4,
701-722, doi:10.5194/gmd-4-701-2011, 2011.
Blyth, E., Clark, D. B., Ellis, R., Huntingford, C., Los, S., Pryor, M., Best, M., and Sitch, S.: A comprehensive set
of benchmark tests for a land surface model of simultaneous fluxes of water and carbon at both the global and
seasonal scale, Geosci. Model Dev., 4, 255-269, doi:10.5194/gmd-4-255-2011, 2011.
Blyth, E., Gash, J., Lloyd, A., Pryor, M., Weedon, G. P., and Shuttleworth, J., 2010: Evaluating the JULES land
surface model energy fluxes using FLUXNET data, J. Hydrometeor., 11, 509-519, doi:10.1175/2009JHM1183.1.

3.1.8 Gravity Wave Drag


Author: S. Webster, Last Updated: 04 Jan 2011

The gravity wave drag (GWD) parametrization is actually two distinct parametrizations. The first represents
the GWD and flow-blocking drag due to sub-gridscale orography (SSO), whilst the second represents the
drag/acceleration due to all other (i.e. non-orographic) sources of gravity waves.
The default SSO (4a) scheme is described in the open literature [63], with more precise details described in
UMDP-022 and the calculation of the SSO fields described in UMDP-074 . The scheme represents the drag
due to the low-level flow impinging on the SSO. The drag is partitioned into blocked flow and gravity wave
components depending on the low-level Froude number (Fr). Above a user-defined critical Froude number
(Frc), all the drag is attributed to gravity waves. The waves are assumed to be linear hydrostatic and thus the
wave stress is carried upwards and deposited according to a simple saturation hypothesis. This mainly leads to
a drag in the lower stratosphere. When Fr < Frc, a proportion of the drag is attributed to flow blocking, with the
exact proportion being dependent on the exact value of Fr. The flow blocking drag is deposited uniformly from
the ground up to the top of the SSO. In practice, about 80% of the total drag is attributed to flow blocking. Note
a new SSO scheme (5a scheme) is also available but this is for experimental use only.
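Schematically, the drag partitioning described above behaves as in the sketch below; note that the blocking
fraction used here is a simple placeholder, not the dependence on Fr actually used in the scheme (see UMDP-022).

def partition_orographic_drag(total_drag, froude, froude_crit):
    # Schematic split of the sub-grid orographic drag: above the critical
    # Froude number all of the drag goes into gravity waves; below it an
    # increasing fraction is attributed to flow blocking. The linear
    # blocking fraction below is a placeholder, not the UM formula.
    if froude >= froude_crit:
        return {"gravity_wave": total_drag, "flow_blocking": 0.0}
    blocked = 1.0 - froude / froude_crit
    return {"gravity_wave": total_drag * (1.0 - blocked),
            "flow_blocking": total_drag * blocked}

# Example: strongly blocked low-Froude-number flow puts most of the drag into flow blocking.
print(partition_orographic_drag(total_drag=1.0, froude=0.5, froude_crit=2.5))
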
The non-orographic gravity wave scheme is also described in the open literature in [54], with more details of
the code itself described in UMDP-034 . This parametrization represents the spectrum of non-orographic
gravity waves that propagate upwards out of the troposphere into the middle atmosphere. The representation
of these waves is crucial to the simulation of the polar night jet and the quasi-biennial oscillation. Most of the
drag/acceleration from this scheme is exerted in the upper stratosphere and mesosphere. Therefore, to work
effectively, this scheme should only be included in middle atmosphere versions of the UM, i.e. versions that
include a number of model levels in the middle atmosphere. Ideally, as the net parametrized momentum flux
at launch is initialised to zero, the residual GW flux leaving the top model layer should also be zero, but even
with an 85 km lid this momentum conservation condition has to be imposed by selecting ‘Use Opaque model lid
condition’. If the residual flux is relatively large, the opaque lid will result in large accelerations to the flow in
the top layer which may affect stability. The alternative, when ‘Use Opaque’ is not selected, is a transparent lid
condition which allows the residual net flux to leak out of the top of the model.

3.1.9 River Routing


Author: Pete Faloon, Andy Wiltshire, Vicky Bell, Simon Dadson & Richard Jones, Last Updated: 27th Sept 2007

The river routing scheme routes the total (surface + subsurface) runoff produced by the surface physics along
prescribed river channels to the sea. The resulting freshwater outflow is passed to the ocean where it is involved
in the thermohaline circulation. The initial scheme was written in 1990 and simply accumulated the total runoff
for each river basin and input it instantaneously at the prescribed ocean outflow point, which was unrealistic.
It was part of the coupling routines, so it only ran in a coupled Atmosphere/Ocean model. The new scheme is
now incorporated into the Atmospheric part of the UM after the call to Hydrology and can be run either in the
Atmospheric model for validation purposes or in the Coupled Atmosphere/Ocean Model.

The UKMO 1A Global Scheme

The new Global (1A) scheme uses the model “TRIP” (Total Runoff Integrating Pathways), developed by Oki (1998) [47].
TRIP uses a simple advection method [46] to route total (surface and subsurface) runoff along prescribed river
channels. The river channels are prescribed in a river direction and river sequence file. Oki produced these on
a 1 × 1 degree grid and as they could not be interpolated to the GCM Atmosphere grid, the resolution of TRIP
was maintained when coupled, so the inputs and outputs are regridded between the UM and TRIP grids. As the
grids are non-congruent, some manual alteration was required in the river direction and sequence files. The river
channel fields (sequence and direction) and a monthly initial water storage field are read in from single ancillary
files. Total runoff is accumulated over a river routing timestep (currently 1 day) and, after regridding, passed in
to the routing scheme, together with the river channel and water storage files. Due to non-congruence of the
grids, some runoff is regridded to a TRIP seapoint and so is added directly to the river grid seapoint inside TRIP.
After routing the runoff and updating the water storage, TRIP outputs the diagnostics of total inflow, total outflow
and the prognostic of updated water storage for each grid-box on the river routing grid. The total outflow, at
river mouths and seapoints only, is ‘mapped’ on to the associated Atmosphere gridboxes, stored as a diagnostic
and passed to the coupling routines via the dump to be interpolated and passed to the ocean as in the initial
scheme.
Selecting ‘Re-routing inland basin water back to soil moisture’ is only applicable to the global river routing
scheme 1A; the re-routed water is held over until the next timestep and then added to the change in top-level
soil moisture. Further corrections to the land water budget and river routing were applied for HadGEM2-ES at
UM version 6.6.2 and implemented in the UM trunk at version 7.5, including a) allowing for inland basins to be
defined separately, b) correction of the coupling time for inland basins to 24 hours and removal of excess water
from the inland basins (e.g. where you might expect a lake to form), spreading this proportionately over the river
outflow points, c) correction of an out-of-bounds error and d) correction of diagnostic/climate meaning errors.
Ancillary files are available for different global resolutions, although the river routing scheme operates at one
degree.
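The bookkeeping performed each routing timestep can be pictured with the very simplified sketch below:
accumulated runoff is added to each cell's water storage, outflow is passed downstream following the prescribed
river directions, and outflow at river mouths is handed to the ocean. The linear-reservoir outflow used here is an
assumption of the sketch; TRIP itself uses the advection method of Oki described above.

def route_runoff(storage, runoff, downstream, dt, outflow_coeff):
    # Very simplified routing step on a list of cells. downstream[i] is the
    # index of the cell that cell i drains into, or None at a river mouth.
    # The linear-reservoir outflow (a fixed fraction of storage per step) is
    # an illustrative assumption, not the TRIP advection scheme.
    new_storage = [s + r * dt for s, r in zip(storage, runoff)]
    outflow_to_sea = 0.0
    for i, target in enumerate(downstream):
        outflow = outflow_coeff * dt * new_storage[i]
        new_storage[i] -= outflow
        if target is None:
            outflow_to_sea += outflow          # river mouth: passed to the ocean
        else:
            new_storage[target] += outflow     # passed to the next cell downstream
    return new_storage, outflow_to_sea

# Example: three cells draining 0 -> 1 -> 2 -> sea, over a one-day routing step.
storage, to_sea = route_runoff(storage=[1.0e6, 2.0e6, 3.0e6],
                               runoff=[5.0, 3.0, 1.0],
                               downstream=[1, 2, None],
                               dt=86400.0, outflow_coeff=1.0e-6)
print(storage, to_sea)
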


The UKMO 2A Regional Scheme

The new Regional (2A) scheme uses the river routing component (River Flow Model, RFM) of the model “G2G”
(Grid-to-Grid), developed by CEH Wallingford (Bell et al. (2007) [9]; Jones et al. (2007) [33]). The G2G uses a
discrete approximation to the 1-D kinematic wave equation with lateral inflows to route surface and subsurface
runoff along land and river pathways. The flow paths (which generally correspond to observed river networks)
are identified offline using a Digital Terrain Model (DTM), and are configured so that they coincide with the RCM
grid. The purpose of the scheme is to propagate grid-square estimates of runoff laterally to estimate flow at
points along a river as well as discharges to the sea. The flow paths take the form of one of eight directions,
corresponding to points on a compass. All land grid squares have a flow path associated with them. Depending
on the resolution of the RCM, not all grid-squares will be “river”; some may be “non-river”. The demarcation
between non-river and river pixels is determined by applying a threshold area to an ancillary file which gives an
estimate of the land area draining to every pixel. Pixels with larger drainage areas are assumed to correspond
to rivers, although in reality, soil type and relief would influence this demarcation. Inland lakes are assumed
to be “land” points and are provided with a flow direction, so that lake pixels do not act as sinks. The flow
directions and catchment areas are read in from a single ancillary file. Provision has been made to read in files
of slope and initial flows although these are not used in the current version. Total runoff is accumulated over
a river routing timestep (currently 1 day) and passed in to the routing scheme, together with the river channel
and water storage files. After routing the runoff, G2G outputs diagnostics of river outflow and updates the
prognostics of surface storage, sub-surface storage and accumulated surface and sub-surface inflow to each
pixel.

3.1.10 UKCA (United Kingdom Chemistry and Aerosols)


Author: Nicholas Savage, Last Updated: June 2013

Introduction

UKCA is a framework for atmospheric chemistry and aerosols operating in the MetUM environment using stan-
dard MetUM prognostics and diagnostics. It is designed to operate a variety of chemistry and aerosol schemes,
together with the associated interactions between UKCA schemes and other components such as the radiation
and carbon cycle schemes. UKCA was developed as a community model, in a collaboration between NCAS
and The Met Office, with components provided by contributors from The University of Cambridge, University of
Leeds, University of Oxford and The Met Office. A variety of chemistry and aerosol configurations of UKCA are
available. This section will give a brief overview of the main chemistry and aerosol schemes which can be run.
For more complete details see UMDP-084 - UKCA Technical Description.

Chemistry schemes

The following chemistry schemes can be used with UKCA: Standard Tropospheric Chemistry; Standard
Tropospheric Chemistry with parametrized isoprene; Chemistry for Regional Air Quality (RAQ); Stratospheric
Chemistry; a combined Chemistry for the stratosphere and troposphere; Heterogeneous chemistry for the
troposphere; and aerosol chemistry. Emissions are defined by using the 2-D and 3-D user ancillary facilities in
the UM. Some of these schemes are suitable only for modelling the troposphere (Standard Tropospheric
Chemistry, Tropospheric Chemistry with parametrized isoprene, Chemistry for Regional Air Quality and
Heterogeneous chemistry for the troposphere); the Stratospheric Chemistry is used for modelling the
stratosphere only; and the combined Chemistry for the stratosphere and troposphere models both stratospheric
and tropospheric chemistry. Some chemistries (e.g. aerosol chemistry and tropospheric heterogeneous
chemistry) are designed to be run in conjunction with other schemes.

Aerosol schemes

The Global Model of Aerosol Processes (GLOMAP) simulates the evolution of size-resolved aerosol properties,
including processes such as new particle formation, coagulation, condensation (gas-to-particle-transfer) and
cloud processing. Prognostic variables in GLOMAP are particle number and mass concentrations in different

size classes, with processes such as condensation and aqueous sulphate production able to grow particles
by increasing the mass in a size class while conserving particle number. The GLOMAP scheme also has
size-resolved representations of primary emissions and of particle dry deposition, sedimentation, nucleation
scavenging (rainout) and impaction scavenging (washout). GLOMAP-mode is comprehensively described in
Mann et al (2010) and the implementation within TOMCAT (driven by offline oxidant fields from a previous full
chemistry run) is evaluated against a range of global observational datasets.
Depending on the science questions addressed in the research to be carried out, one can resolve the size-
resolved composition and external/internal mixtures to a different level of sophistication. The main GLOMAP-
mode configuration used in MetUM (as described in Johnson et al., 2010) is to have dust treated by the existing
MetUM 6-bin dust scheme (Woodward et al., 2001) with GLOMAP-mode simulating sulphate, Black Carbon,
Organic Carbon and sea-salt in 5 modes (20 aerosol tracers).

Feedbacks

Radiatively active trace gases and aerosols simulated by the tropospheric and stratospheric configurations of
UKCA may participate in the UM radiation and cloud schemes. Additional code has been written in order
to calculate the aerosol optical properties from GLOMAP-mode results, and this scheme is known as UKCA
RADAER.
The cloud droplet number concentration calculated by GLOMAP-mode can also be coupled to the calculation
of cloud radiative properties and to the calculation of autoconversion rate in large-scale precipitation.

3.1.11 Nudging
Author: Mohit Dalvi, Last Updated: 21 July 2010
Nudging, or Newtonian relaxation, is a simple form of data assimilation that adjusts the model dynamical
variables using meteorological re-analysis data to give a realistic representation of the atmosphere at a given time.
The UM dynamical variables that can currently be nudged are U-wind, V-wind and theta (potential temperature).
These variables can be nudged using ECMWF data on hybrid levels, ECMWF data on pressure levels, UM
analysis on model levels or the Japanese 25-year Re-Analysis.
The nudged model was originally developed at the University of Cambridge under a Met Office-NCAS
collaboration. The technical details of the scheme can be found in Telford et al (2008) [57]. The nudging scheme is
used to align a climate model to the actual meteorology in order to compare model results with measurements.
Another use is to reduce variation between two model runs in order to compare the effect of a small change.
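In its simplest form, Newtonian relaxation adds a tendency that pulls each nudged variable towards the analysis
over a chosen relaxation timescale, as in the schematic sketch below; the relaxation timescale and the time
interpolation of the analyses are illustrative assumptions here rather than the scheme's actual inputs.

def nudge(field, analysis, dt, tau):
    # One timestep of Newtonian relaxation: each value is pulled towards the
    # analysis with an e-folding (relaxation) timescale tau. Schematic only;
    # the UM applies this to U, V and theta with time-interpolated analyses.
    return [x + dt * (a - x) / tau for x, a in zip(field, analysis)]

# Example: with tau = 6 hours and dt = 20 minutes each step closes about 5.6% of the gap.
theta_model = [300.0, 295.0, 290.0]
theta_analysis = [301.0, 294.0, 291.0]
print(nudge(theta_model, theta_analysis, dt=1200.0, tau=21600.0))
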


3.2 Atmospheric Dynamics


Author: Tom Allen, Last Updated: 07 July 2014
The following is only a brief explanation, as no updated section has been supplied for some time.
The UM supports two variants of dynamical core; New dynamics and ENDGame. These are detailed in
UMDP-015 and UMDP-016 respectively.
For advice on the best settings for your configuration, seek guidance from the dynamics teams. Initially, try a
configuration similar to an example standard job within rose stem.


Chapter 4: Code Management and Structure

4.1 Managing and Modifying Source Code
Author: Glenn Greed, Last Updated: 7th July 2014

4.1.1 Configuration management

The UM source code is held in a Subversion repository. This may be viewed using the Trac wiki system; the
location depends upon the local installation. At the Met Office it is:
http://fcm2/projects/UM/browser/UM/trunk/src
The code is organised in a logical directory tree structure. The UM scripts, input variable metadata, upgrade
macros and routine model test configurations are held in this same repository project and managed in exactly
the same way as the UM source code.
The UM trunk is continuously under development, with 3 or 4 trunk revisions a year being given a ‘stable’ UM
release version status. Developers make changes against the latest stable release, in branches. Once successfully
documented, tested and reviewed, the changes are merged into the trunk. At predetermined times the code is
frozen and the next UM release is made available to the UM community; this forms the new fixed baseline for
subsequent code development.
More information about the change control mechanisms for source code can be found from the UM Trac pages:
http://fcm2/projects/UM/wiki/WorkingPractices umandrose

4.1.2 About cpp

The UM source makes use of compile-time C preprocessor (cpp) features only to select platform-specific code.
Science choices are made with runtime switches.
• Compile switches.
The list of DEF switches for Fortran/C code is maintained in the UM fcm-make app.
keys_atmos_app=C84_1A=c84_1a C95_2A=c95_2a UM_JULES=um_jules

Here the DEF UM_JULES indicates to the JULES code base that this build includes the UM atmosphere. The
JULES code can be used in a standalone form without being linked with the UM atmosphere. The lower
case um_jules is included for portability.
• Insertion of common code.
Common code is also inserted into the model code by cpp using the #include preprocessor directive. For
example, to include common code from the file argduma.h use
#include "argduma.h"

Such files are often referred to as “include files” or “header files”.

4.1.3 Viewing source code

The Trac wiki system provides an easy way of viewing the central trunk code as well as developer code branches.
At the Met Office this is available using the following link:

https://code.metoffice.gov.uk/trac/um
Then click on the Browse Source button at the top of the page. There are many projects here:
• main (which is the UM source code)
• aux (Auxiliary files for the UM - including spectral files and UKCA files)
• doc (source code for the UM documentation)
Drilling down through main/trunk/src takes you to the UM source code.
The code may also be browsed using a code browser utility, which is maintained as part of the UM admin tools
and must be built locally due to its size. The code browser displays a two-way call tree for the code source with
options to search for specific routine or module names, enabling easy exploration of the model source.

4.1.4 Making a change: Fortran source, C source or script

To make code changes to the UM, the reader is advised to work through three tutorials in order to understand
the underlying concepts of UM code development:
• FCM tutorial: to learn about branches, working copies etc.
• Rose tutorial: to learn about suites, apps, metadata etc. http://metomi.github.io/rose/doc/rose.html
• UM Rose tutorial: gives specific code development examples to work through, based upon the previously
learnt material. https://code.metoffice.gov.uk/doc/um/10.1/um-training/ This training material is also supplied
with the UM external release.


4.2 Compilation System


Author: Paul Cresswell, Last Updated: 4th December 2014

4.2.1 Introduction

The Unified Model’s compilation system is designed around the use of Rose and FCM. The FCM tool fcm make
is used to compile all executables. Configuration files are contained in the UM trunk and are accessed by the
use of apps running in a Rose suite, which can be used to modify and control the details of the compilation. A
UM compilation could refer to the atmosphere model itself, the reconfiguration, the SCM, a coupled atmosphere-
ocean model or any of the UM utilities.
While it is possible to compile the UM without using Rose (by providing any additional inputs and then running
fcm make by another method), or without referring to the central configuration files (i.e. by providing your own),
this document focusses on the supported method of Rose+FCM compilation.
Details of the settings used to control compilations are available in the metadata for fcm-make apps, um-fcm-
make (for UM executables) and nemo-cice-fcm-make (for coupled ocean executables), and are best viewed
using rose config-edit in an fcm-make app. This document is intended to serve as a general overview of the
compilation system and a brief guide to performing user actions such as overriding build settings.
For more details on the requirements for building the UM please refer to UMDP-X04 .

4.2.2 App structure

UM compilations occur when a Rose suite triggers an fcm_make task or, in the case of remote builds, an
fcm_make2 task. All such tasks should begin with fcm_make[2] followed by an optional name to distinguish
between multiple fcm_make[2] tasks, e.g. fcm_make_ga6. The task definition should set the necessary inputs, such as the remote
host and job submission settings, and identify the necessary fcm-make app to use. See the Rose and Cylc
documentation for more help with defining suite tasks.
The fcm-make app referred to by the task will usually also have a name beginning with fcm_make; the task and
app may use the same name in some suites, though this is not possible if several fcm_make tasks share the same app (for
example, as is the case in the UM rose-stem suite).
An fcm-make app has the following structure:

app
  fcm_make_<name>
    rose-app.conf
    file
      fcm-make.cfg
      etc.

Figure 4.1: Structure of an fcm-make app.

Fcm-make apps possess their own metadata and upgrade macros; whenever a UM runtime app is upgraded
care should be taken to upgrade the corresponding fcm-make apps to the same version.

The rose-app.conf file

The rose-app.conf file contains, aside from the metadata identifier, a single [env] section which defines the
variables that control the compilation settings. When viewed through the rose config-edit GUI these options
are divided into related groups across a number of input windows. The input options themselves can be split into
two main categories: those that control which central configuration file is accessed, and so are used only by
the fcm-make.cfg file; and those that are used by the configuration files themselves to alter the compilation settings.

The former category all appear in rose config-edit under the “Configuration file” heading; users bypassing the
Rose interface will not need to define any of these inputs.
Please refer to the metadata for further details of each variable.

The fcm-make.cfg file

The fcm make task calls fcm make on this file, which in turn includes a central configuration file. All the variables
in this file must be defined or the Rose task will exit with an error. A typical UM configuration file in a standalone
suite may look as follows:

use = $prebuild

include = $config_root_path/fcm-make/$platform_config_dir/um-$config_type-$optimisation_level.cfg$config_revision

extract.location{diff}[um] = $um_sources
extract.location{diff}[jules] = $jules_sources

Figure 4.2: An example app’s fcm-make.cfg file

The first statement indicates which, if any, prebuild to use. Compilations can inherit preprocessed source code,
object files etc. from a prebuild to reduce the total compilation time.
The second statement identifies a configuration file, either on the trunk or in a branch or working copy. It is split
into multiple variables to allow for easier manipulation from within rose config-edit.
The final two statements are used to include any branches, working copies etc. that are to be included in the
build. There should be one such statement of the form
extract.location{diff}[<namespace>] = <source_list>
for each project being compiled. Depending on the nature of the app these statements may vary; for example,
a UM utility which does not require JULES may not possess the final statement above, whereas an fcm-make
app for a NEMO-CICE compilation may possess three such lines, one for each of the NEMO, IOIPSL and
CICE projects. It is the user’s responsibility to ensure the necessary statements are present for the current
compilation.
You should note that the configuration file and any sources used in the compilation are specified independently,
e.g. the selected configuration files and build sources may reside in different branches or working copies.
When the fcm-make app is processed by the suite this file will contain a copy of the central configuration files
containing all the instructions for the compilation. Further FCM directives may be inserted here by hand, for
example to add a file override to the build or to completely redefine the library flags being used; see below for
details.

4.2.3 Configuration file structure

The UM’s FCM configuration files are held in an fcm-make directory at the top of the UM project tree; an
fcm-make app can read these files from the trunk, a branch or a working copy. This directory contains sev-
eral site-specific subdirectories, each containing configuration files unique to a given platform, and a platform-
independent directory containing files common to compilations at all sites; see fig. 4.3. The exact contents of
each platform-specific subdirectory will vary depending on that site’s requirements.
To use a configuration file, a user should include the appropriate top-level file in the relevant site-specific di-
rectory. Other files should not be directly accessed but will be automatically included by the relevant top-level
file.
Each site may set up their own site-specific configuration files as necessary for their local UM installation,
although adhering to the general template is recommended so that:

fcm-make/
    inc/
        [Instructions common to all platforms]
    site-platform-compiler/
        [Top-level files for user access]
        project/executable-optimisation.cfg
        inc/
            [Platform/compiler/executable-specific files]
            options/
                [Settings for user-configurable options]
                option_off.cfg
                option_on.cfg
    site-platform-compiler/
    ...

Figure 4.3: Abstract representation of the UM’s fcm-make directory. Here “executable” refers to the type of
executable(s) to be built, e.g. atmosphere and reconfiguration, or a UM utility.

• the configuration files can be kept up to date by the Met Office when changes are made that affect all
sites;
• the config-edit GUI and upgrade macros function as expected;
• the configuration files are compatible with the rose-stem test suite.
The files in fcm-make/inc are used at all sites that employ this structure and they should not be modified without
considering the impact of the change at other sites.

4.2.4 The compilation process

A typical compilation proceeds through some or all of the following steps:


1. An fcm make task is run, accessing the configuration file indicated in the relevant fcm-make app.
2. Source code is extracted from the locations provided.
3. If using a two-step build: The extracted source code is mirrored to the remote platform.
4. FCM performs a dependency analysis to ensure all the source files and libraries necessary for building
have been provided.
5. The source files are pre-processed.
6. The source files are compiled.
7. The resulting executable, and any other build targets (e.g. scripts for running the UM) are copied to a
directory <task name>/build-<class> in the suite’s share directory, where <task name> is the name
of the fcm make task and <class> is the type of executable being created (e.g. atmos or recon).
Steps 4–7 are repeated for each separate build class, e.g. a compilation may include both the atmosphere
model and reconfiguration, or several UM utilities, each of which may be a separate build class.
For further details concerning any of these steps please refer to the FCM Make user guide.
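For example, for an fcm_make task named fcm_make that builds both the atmosphere model and the reconfiguration, the build targets end up under directories such as the following (paths follow the layout described above; the exact contents of each bin directory depend on the build class and UM version):

/cylc-run/(suite-name)/share/fcm_make/build-atmos/bin/
/cylc-run/(suite-name)/share/fcm_make/build-recon/bin/

with, for instance, the um-atmos wrapper script appearing under build-atmos/bin (see section 4.4.2).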

4.2.5 Building from the Science Repository Service

Repositories stored on the Met Office Science Repository Service (MOSRS), such as those for the UM and
JULES, require authentication to access, and a running Rose suite cannot provide that authentication.

This means that when you wish to access an FCM or Subversion URL on the MOSRS in an fcm-make app,
it is necessary instead to access the local mirror of that URL, e.g. fcm:um.xm/trunk. This is true both when
providing a path to a central configuration file, and when providing a path to any sources from which to extract
source code.
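As a sketch, an fcm-make app extracting a branch would therefore point its sources variable at the mirrored location; the branch name below is purely illustrative:

# A URL pointing directly at the MOSRS repository cannot be used here,
# because the running suite cannot authenticate; point at the local
# mirror instead, e.g.:
um_sources=fcm:um.xm/branches/dev/myuser/vn10.1_example_branch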

4.2.6 Modifying compiler options

Using the config-edit GUI and accompanying metadata it is easy to make relatively simple changes such as
modifying which models are being compiled, or changing the trunk revision or which branches should be in-
cluded in the compilation. Making changes to the compilation process itself, for example by modifying compiler
flags or library paths, may be just as easy or rather more complicated, depending on the details of the modifica-
tion. There are up to three ways to make such changes:
• Using the GUI.
• Editing the app’s fcm-make.cfg file.
• Editing (a copy of) the central configs directly.
Which method is the most appropriate will depend on the details of the change being considered; not all methods
may be suitable in all cases.
There are two special rules to consider when modifying an app (either using the GUI or editing the fcm-make.cfg
file directly):
1. rose-app.conf variables may not refer to each other. For example, the following would be unsafe:
lib_path=/usr/lib
lib_flags=-L$lib_path -lfoo
If you need to refer to another variable, such as one containing an include or library path, you must do
so in the fcm-make.cfg file. Rose processes the rose-app.conf file before the fcm-make.cfg file, so all the
variables it contains are then available for use.
2. The position of a statement relative to the include = line is important. If you wish to redefine a
variable used by the central config files, it must appear before this line. If you wish to override an FCM
build statement, or refer to a variable defined in the central configs, it must appear after this line. Consider
the following fcm-make.cfg file, which is similar to the one in figure 4.2 but contains extra statements at
lines [2], [3], [6] and [7]:

[1] use = $prebuild


[2] $ldflags_coupling = -L$UMDIR/oasis/lib
[3] build-atmos.prop{fc.flags}[um/src/atmosphere] = $fcflags_all -Wall
[4] include = $config_root_path/fcm-make/$platform_config_dir/ \
[5] \um-$config_type-$optimisation_level.cfg$config_revision
[6] $fcflags_coupling = -I$UMDIR/oasis/inc
[7] build-atmos.prop{fc.flags}[um/src/control] = $fcflags_all -pedantic
[8] extract.location{diff}[um] = $um_sources
[9] extract.location{diff}[jules] = $jules_sources

Figure 4.4: An app’s fcm-make.cfg file, with overrides

The statement at [2] will successfully redefine the value of $ldflags_coupling used in the central configs.
The statement at [3] is invalid, as $fcflags_all is not defined until after line [4], when the central configs
have been included. Additionally, any FCM declaration in the central configs which overrides the flags for
a file in src/atmosphere/ will replace this statement, meaning it would not be applied to that file.
The statement at [6] does nothing, as $fcflags_coupling has already been evaluated wherever it was
used in the central configs.
The statement at [7] successfully overrides the compiler flags for all files in src/control/. This FCM decla-
ration will replace any file overrides in the central configs which previously applied to any of these files.
Most compiler modifications fall under one of three categories:

• Modifying compiler or linker options, for all files (global overrides).


• Modifying the compiler options for a specific file or directory (file overrides).
• Modifying the path to an existing library (path overrides).

Global overrides

To apply a new compiler option to all files:


GUI Add the new option(s) using the variable fcflags_overrides. This will override any existing options that
affect the same settings.
central configs Add the new option(s) to the value of fcflags_common in the appropriate site-specific compiler
file, e.g. fcm-make/meto-pwr7-xlf/inc/pwr7-mpxlf.cfg
To add new library or linker flags:
GUI Add the new option(s) using the variables ldflags_overrides_prefix and/or ldflags_overrides_suffix.
central configs Add the new option(s) to the value of ldflags in the appropriate site-specific compiler file, as
above.
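For instance, in the app's rose-app.conf [env] section these GUI variables might be set as follows; the specific flags are hypothetical choices for the XLF compiler used by the meto-pwr7-xlf configuration, not recommendations:

fcflags_overrides=-qcheck -g
ldflags_overrides_prefix=
ldflags_overrides_suffix=-lmass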
To completely replace a set of flags (not keeping any of the existing flags) it is generally advisable to replace
the appropriate FCM build declaration. These declarations are made in the appropriate site- and executable-
specific central configuration file, e.g. fcm-make/meto-pwr7-xlf/inc/um-atmos.cfg. For example, to override
all of the library flags for the atmosphere executable:
central config Redefine the value of build-atmos.prop{fc.flags-ld}.
app config Add the above declaration after the include statement, e.g.
build-atmos.prop{fc.flags-ld} = -L$gcom_path/lib -lgcom -L$netcdf_lib_path ...
(c.f. line [7] of fig. 4.4).
Note that in this case it would be unwise or incorrect to define ldflags in the app config: if ldflags was set
before the include statement, none of the default paths such as $gcom_path would yet be defined (c.f. line [3]
of fig. 4.4), and the user would be required to provide every path explicitly. If it were set after the include
statement, the variable would already have been used to set the above FCM declaration before the user redefines it,
and the additional line would do nothing (c.f. line [6] of fig. 4.4). Therefore to avoid mistakes we recommend
that the build declaration itself be edited when overriding an entire set of flags.

File overrides

File overrides in the central configs are stored in the appropriate platform-specific top-level config file (the one
which the app's include statement uses); this makes it clear which override is applied to which file in which
build. These files contain several examples on which to base a new override, e.g.
build-atmos.prop{fc.flags}[um/src/atmosphere/convection/ni_conv_ctl.F90] = \
\ $fcflags_common -O2 -fp-model precise $fcflags_options $fcflags_overrides
Despite their name, file overrides can also apply to entire directories; for example, removing “/ni_conv_ctl.F90”
from the above example would cause the override to be applied to all the files in the convection directory.
To add a new file override:
central config Add the relevant file override to the bottom of the file appropriate to the compiler, model and
optimisation, e.g. fcm-make/meto-pwr7-xlf/um-atmos-safe.cfg.
app config Add the relevant file override after the include statement (c.f. line [7] of fig. 4.4).
The most common uses of file overrides are to change an existing setting such as the optimisation level or
enabling/disabling OpenMP, or to add a new flag to a single file (e.g. to enable array bound checking).
Adding a new flag is the simplest case. All the compiler flags that are applied are stored in a variable
fcflags_all, so to add a new flag we simply append it to this list, e.g.

build-atmos.prop{fc.flags}[um/src/atmosphere/convection/ni_conv_ctl.F90] = \
\ $fcflags_all -fbounds-check
While it's possible to use this method to override existing flags, for example by adding a new compiler
optimisation level flag to the end of the statement, it is advised to expand fcflags_all and replace the flag to
be overridden instead. This is because any given compiler flag may also cause other flags to be applied; the
user-supplied flag may then override the intended flag, but not the other implied flags. For example, setting -O3
may cause additional optimisation settings to be set, which are not removed if the optimisation level is then
reduced to -O2 by a user-appended flag.
The expanded form of fcflags_all for each model is given in the relevant “common” configuration file, e.g. in
fcm-make/inc/um-atmos-common.cfg:
$fcflags_all = $fcflags_common $fcflags_level $fcflags_options $fcflags_overrides
The individual components here are:
fcflags_common Flags common to all builds of this model (for a given platform). These should generally not
be altered by file overrides.
fcflags_level Flags specific to a given optimisation level. To provide flags for a new optimisation level, replace
this with the value of fcflags_level from the top-level configuration file for the appropriate model and
optimisation level.
fcflags_options This contains settings determined by the user (other than optimisation level), such as flags
for OpenMP and DrHook. It is defined in the inc/options/common.cfg file within each platform-specific
directory. The potential values of these flags are contained in the other files in the options directory.
fcflags_overrides This contains global overrides provided by the user in the GUI.
So for example, when building at the -O3 optimisation level the optimisation of a single file could be lowered by
substituting the value of fcflags_level from the appropriate top-level config file:
build-atmos.prop{fc.flags}[um/src/atmosphere/convection/ni_conv_ctl.F90] = \
\ $fcflags_common -O2 -fp-model precise $fcflags_options $fcflags_overrides
When overriding a value that appears in fcflags_options the general method is to apply the expanded form of
this variable and then replace only the flags for the relevant option. For example, if the common.cfg file contains
$fcflags_options = $fcflags_omp $fcflags_drhook then to disable OpenMP for a single file we would
set:
build-atmos.prop{fc.flags}[um/src/atmosphere/convection/ni_conv_ctl.F90] = \
\ $fcflags_common $fcflags_level <flags to disable OpenMP> $fcflags_drhook $fcflags_overrides
In most cases compiling without OpenMP means providing no additional flags, so the second line above would
simply appear as:
\ $fcflags_common $fcflags_level $fcflags_drhook $fcflags_overrides
In this instance, if the user could guarantee that these modified config files would never be used to compile with
DrHook, the fcflags_drhook variable could also be omitted (because the DrHook variables are also blank when it is not
being used). This of course would not be suitable when intending to add a new file override to the configuration
files on the UM trunk.

Path overrides

Path overrides are used to change which libraries, supplementary executables, etc. are used by the executable
being compiled. The default paths are contained in the inc/external_paths.cfg file within each platform-
specific directory. To use a path override:
GUI Provide the appropriate path in the External libraries window of the GUI. (You may need to enable the
“View Latent Pages” option in rose config-edit to see this window.)

central configs Edit the appropriate variable in external_paths.cfg to point to the new location. Variables
should be named descriptively enough to determine their purpose, or fully documented in the appropri-
ate metadata file.
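As a sketch, a path override in the central configs might then look like the following; the variable name is taken from the linker example above, and the path itself is purely illustrative:

# In fcm-make/<site-platform-compiler>/inc/external_paths.cfg
$netcdf_lib_path = /opt/local/netcdf/4.3.0/lib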

Coupling

The flags used for coupling the UM to other models are often a mixture of the above types and deserve special
attention. The following variables are defined for providing flags for use in coupling:
flags_coupling A general purpose variable for coupling flags.
fcflags_coupling Fortran compiler flags for coupling. Defaults to flags_coupling.
fppflags_coupling Fortran preprocessor flags for coupling. Defaults to flags_coupling.
ccflags_coupling C compiler flags for coupling. Defaults to flags_coupling.
cppflags_coupling C preprocessor flags for coupling. Defaults to flags_coupling.
ldflags_coupling Linker flags for coupling.
However, none of them possess metadata and so none of them are available in the GUI. The reason for this
is simple: coupling options will frequently include paths to other libraries or executables but — as has already
been described — variables in the rose-app.conf file may not refer to each other. It is common when modifying
coupling options to also change or add paths to several libraries or executables, so the coupling variables above
are not provided in the GUI to reduce the risk of the user attempting to both define and use a path variable in
the rose-app.conf file.
To use the coupling variables:
app config Add the relevant variable(s) before the include statement (c.f. line [2] of fig. 4.4).
central configs Edit the appropriate variable in the relevant site-specific compiler file, e.g.
fcm-make/meto-pwr7-xlf/inc/pwr7-mpxlf.cfg.
The former solution allows the use of any path variables which have been defined in the GUI; the latter allows
both any variables defined by the user, and any centrally-defined path variables (in external paths.cfg) to be
used when defining the coupling variables.
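For example, to provide the coupling flags via the app config, the relevant variables are defined ahead of the include statement, reusing the illustrative OASIS paths from figure 4.4:

$fcflags_coupling = -I$UMDIR/oasis/inc
$ldflags_coupling = -L$UMDIR/oasis/lib

include = $config_root_path/fcm-make/$platform_config_dir/um-$config_type-$optimisation_level.cfg$config_revision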

4.3 The UM on Parallel Computers


Author: Paul Selwood, Last Updated: 14th April 2011

4.3.1 Overview of the UM Parallelisation

The UM has had the necessary explicit parallelisation code added to allow it to run on a range of parallel
computers.
The basic principle used is that of horizontal domain decomposition. This means the model grid-point space
is divided up into subdomains, each containing a complete set of vertical levels but only a rectangular latitude-
longitude horizontal subsection. Each processor is then responsible for one of these subdomains, and contains
all the data for this subdomain in its own local memory. Regular inter-processor communication is required
so that information can be transferred between neighbouring subdomains. This is achieved by extending the
local subdomain by a number of rows or columns in each dimension, to form a “halo” around the data. This
halo region holds data which is communicated from adjacent processors, so allowing each processor to have
access to neighbouring processors’ edge conditions. The UM uses different halo sizes for different variables.
Variables that have no horizontal dependencies in calculations will have no halos, while those that do will have
single point halos. Some variables (primarily advected quantities) have extended halos, the size of which may
be specified in the GUI/input namelists.
The atmosphere model has a user-definable two-dimensional decomposition (that is, the data are decomposed
in both latitude and longitude).
Additionally, a shared memory parallelisation is available based on OpenMP compiler directive technology. This
works underneath the parallel decomposition layer with each processor being able to utilise a number of threads
to share the work available. This is done by subdivision of independent iterations of the most costly loops.

4.3.2 Inputs of interest for Parallel Computers

When running the UM on a parallel computer, the following UM inputs are the most likely to need to change.
• The ”job submission” method and resource ”directives” required. This is currently set up in the Rose
suite.rc files.
• The number of processors in the North-South / East-West directions. Some experimentation may be re-
quired to find the arrangement giving optimum performance. It is typical (although not universally the case)
for a decomposition with smaller numbers of processors in the East-West direction than in the North-South
direction to give best performance. The atmosphere model is restricted in the possible decompositions in
that (for efficient communications over the poles) the number of processors in the East-West direction has
to be either one, or an even number.
• The number of OpenMP threads. Most model configurations work best with 1 or 2 threads currently, but
this will depend on your computer architecture and more may be useful in some circumstances.
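As an illustration only, a suite might expose these choices as environment variables passed to the atmos task. The variable names below (UM_ATM_NPROCX, UM_ATM_NPROCY) follow a convention used in many UM Rose suites but are an assumption here, so check your own suite.rc and app files:

[runtime]
    [[atmos]]
        [[[environment]]]
            UM_ATM_NPROCX = 4    # processors in the East-West direction
            UM_ATM_NPROCY = 8    # processors in the North-South direction
            OMP_NUM_THREADS = 2  # OpenMP threads per MPI task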

4.3.3 Maximising performance

There are a number of UM inputs that can change model runtimes without altering the science.
• The subroutine timer facility available from the UM should be disabled except for when investigating timing
of specific areas of the model. This is because it has a non-trivial overhead.
• The processor decomposition (mentioned above) can be tuned for best performance. On a vector machine
one may wish to have just one process in the East/West direction to maximise the vector length. In general,
it is better to keep the number of processes in the East/West direction lower than that in the North/South
direction, although this may vary on different platforms.
• The number of OpenMP threads (mentioned above) can be tuned for best performance also. There is little
experience with this yet but 1 or sometimes 2 threads tend to be the best currently.

• Short-Wave, Long-Wave Radiation and Convection schemes give the user the ability to tune either the
number of segments or the segment size. This affects the number of gridpoints that are blocked together
for a calculation and can affect runtime efficiency. For a vector machine one generally wants to use a single
segment to maximise vector length. For a scalar machine one should set the segment size to maximise
cache efficiency. This will vary from architecture to architecture, but for example IBM Power 6 does best
with a segment size of 80, whereas Cray XT5 is better with segment sizes of about 30.
• I/O servers may be utilised to speed up model output. The default (synchronous) mode has been well
tested. The asynchronous mode for STASH and dumps is newer and has had less testing at present, but
may give better performance.
• Section C96 contains a few parameters that may be tuned. The first of these is the GCOM collectives limit
which switches when GCOM or MPI methods of gathers and scatters are used. The tuning program that
comes with GCOM can help in choosing an appropriate value for this parameter. The second choice is in
the type of global summation. If bit-reproducibility is required it is important to ensure that the compiler and
solver options are also bit-reproducible.
• On IBM Power 6 machines it is possible to run with SMT (simultaneous multithreading). This essentially
provides 2 virtual cores on every physical core and thus it is possible to run twice the number of OpenMP
threads or MPI tasks on the same physical resource. Of course these 2 virtual cores may contend for
physical resource so performance gains are often limited. It is however worth experimenting with for your
model configuration. Remember that to use this you will also need to adjust the number of threads or
change your parallel decomposition to provide more parallel work. Using SMT will change results (for a
non-reproducible binary) due to differences in summation order that may result.

4.4 Atmosphere Model Control Code


Author: Glenn Greed, Last Updated: 7th July 2014

4.4.1 Introduction

The atmosphere model control code is designed to allow users to set up Unified Model jobs in many different
configurations from the same code and scripts. This section explains how choices made in the gui/namelists
are passed to a Unified Model run, and outlines the structure of the control level code.

4.4.2 Control Files

The UM is configurable via many UM input namelists. These are collated into a number of input files for the UM
to read.
• SIZES — sizes defining model domain, number of levels etc.
• IOSCNTL — settings for the IO server
• NAMELIST — majority of runtime namelist items, especially science options.
• RECONA — control of the atmosphere reconfiguration
• SHARED — control of atmosphere sub-model and reconfiguration
• STASHC — control of STASH diagnostic output system
The top level script that runs the Unified Model atmosphere is um-atmos and may be found at, for example:
$UMDIR/cylc-run/vn9.1_prebuilds/share/[configuration]/build-atmos/bin/um-atmos
or http://fcm2/projects/UM/browser/UM/trunk/bin

4.4.3 IO Server option

The UM includes the option to use an IO Server. The user may devote a number of processors to handling
STASH and dump IO in parallel with the computational processors. This may improve runtime on some systems
when hundreds or thousands of processors are being employed and synchronising for output from a single
master processor has become a serious issue.
The namelist io_control is relevant here; within the rose gui, visit the IO system settings panel.

4.4.4 The Top Level Control Routine UM SHELL

For a calling tree of Unified Model routines, see e.g. the Trac wiki browser. The main program is UM MAIN,
which calls subroutine UM SHELL. UM SHELL is the top level routine for the atmosphere model, and does the
following.
First there are top level declarations, the reading of values set by environment variables, and initialisation of the
message passing routines under the GCOM wrapper.
Also early on, the namelists are read in by the following routines:-
• READSIZE actually populates the &NLSIZES namelist items by reading the input dump header. The
namelist &NLSIZES is only read from the file SIZES by the RCF or SCM.
• READHIST to read the history file and
• READCNTL to read from the concatenated control related namelists within the file NAMELIST.

4.4.5 The Difference between History File and Control Files

The history file contains only those control variables needed for model restartability. Continuation runs read the
history file generated by the previous run (specified by the environment variable $HISTORY).

4.4.6 UM SHELL continued, and Routine U MODEL

Domain decomposition, even for a single processor job (1x1), is done by the DECOMPOSE ATMOS and
CHANGE DECOMPOSITION routines, followed by the routine DERVSIZE to derive the sizes needed for dy-
namic allocation of memory for all the main model arrays.
Calls to STASH PROC and its subsidiary routines deal with decisions about which primary variables will be
given space in the general D1 array and which diagnostics will be generated by the output processing STASH
sections of the model.
UM INDEX sets up the lengths of all the main storage arrays, allowing them to be dynamically allocated at run
time.
All this preliminary work leads up to the main call to the model itself in routine U MODEL.
Within routine U MODEL, the first ‘include files’ argszsp.h etc., typszsp.h etc., and spindex.h do the dynamic
allocation of memory, along with typspd1.h and the following two dozen include files a little further down the
routine.
The modules history mod.F90 and nlstcall mod.F90 and their components enable the variables in the history
and control namelists to be accessed at the top levels within the model.
After setting up more environment variables and some STASH arrays (see section 5.12), the first real work is
done in the routine INITIAL, which is always called once at the start of each run, whether a normal run (NRUN)
or a continuation run (CRUN). Its functions include initialising more STASH control arrays (INITCTL), reading
the initial model dump (INITDUMP) and setting pointers for the D1 array from STASH/addressing information
(INITHDRS), initialising the model time and checking its consistency (INITTIME), setting up timestep control
switches (SETTSCTL) and opening output files required at the start of the run (PPCTL INIT). There are optional
routines to initialise assimilation, ancillary and boundary updating.
Thereafter in U MODEL we come to the main loop over model timesteps.
Routine INCRTIME increments the model time by one timestep and routine SETTSCTL sets the timestep control
switches appropriately, e.g. LANCILLARY indicates whether any ancillary fields will need to be updated this
timestep. When the values of these control switches are false, parts of the code will be skipped for this timestep.
The re-initialising of output files is controlled by PPCTL REINIT.

4.4.7 The Routine ATM STEP

The main routine for the time integration of the atmosphere model is ATM STEP. Apart from STASH diagnostics
and lateral boundary conditions, this calls the 3 major sections of our atmosphere model, the dynamics, physics
and (optionally) data assimilation. Here we cover control aspects of the dynamics and physics routines called
from ATM STEP. Data Assimilation is documented elsewhere and for details of lateral boundary updating see
UMDP-C71 .
In the New Dynamics version of the Unified Model, the dynamics and physics aspects of the model are inter-
leaved, as follows:-
• Initialisation of Idealised Cases; Update LBCs; Polar Filtering
• Atmos Physics1 — Cloud/Precip/Tracer; SW & LW radiation; Gravity Wave Drag
• Semi-Lagrangian Advection
• Atmos Physics2 — Boundary Layer; Convection; Hydrology
• (optional AC Assimilation)

• Helmholtz solver
• Update variable to new time-level
• (optional Aerosol Modelling)
• (optional IAU Assimilation)
• End of Timestep Diagnostics (sections 15 dynamics, 16 physics, 0 primary variables)
ATM STEP can become unwieldy, with much initialisation and minor processing being done inline. Incidental
work has been extracted out into a series of “ATM STEP . . . ” routines performing specific allocation, initialisation
and diagnostic tasks.
The UM control code supports two dynamical cores, “New Dynamics” and “ENDGame”. To facilitate this, an
ENDGAME-specific version of ATM STEP, known as EG ATM STEP, is introduced, with filename ATM STEP 4A.
Several other routines have ENDGAME variants with the EG prefix and their filenames having a corresponding
4A suffix. ENDGAME will not be described further here (see UMDP-016). For basic details on the
UM dynamical core, see the Dynamics section 3.2 of this user guide.

4.4.8 The Routine U MODEL continued

If LDUMP is true, a model dump is performed by the routine DUMPCTL. To allow bit-reproducibility of model
runs restarting from any of these model dumps (which are normally compressed from 64-bit to 32-bit numbers to
save space), the routine RESETATM is called to recalculate the values of the prognostic variables in the dump.
Following this, on suitable timesteps (LMEAN true), climate mean diagnostics are created using routine MEANCTL.
Whenever dumps are made the history file values are updated and written to the history file using routine TEM-
PHIST.
Routine EXITCHEK checks whether the model run has completed the required number of timesteps and, if so,
starts the model completion process.
Finally, at the end of the timestep, if the control logical LANCILLARY or LBOUNDARY is true, then the routine
UP ANCIL or UP BOUND is called. UP ANCIL calls REPLANCA to update model fields with ancillary field
values, according to instructions passed from the UMUI in namelists in file CNTLATM.
If the model run is not ending this timestep, the statement GOTO 1 loops back to 1 CONTINUE to carry on with
the next timestep of the run.
As they are the heart of the atmosphere model, we look at the semi-Lagrangian dynamics, physics (Atmos
Physics1 & 2) and elliptical solver control routines in more detail.

4.4.9 The Routines Atmos Physics1 & 2

Looking briefly at the control of the many physical parametrizations included in the Unified Model, we note the
following points.
Atmos Physics1/2 — DATA STRUCTURES — The huge argument lists of scalar and array variables are declared
and dimensioned explicitly. A large amount of local workspace storage is dimensioned by lengths passed
through the argument list too.
Atmos Physics1/2 — CONTROL VARIABLES — these logicals and parameters are also passed through the
subroutine argument lists. E.g. L RAD STEP indicates whether this is a radiation timestep.
N.B. As the argument lists of these routines have become unwieldy and difficult to keep within even 99 continuation
lines, work is in progress to put many of the control variables and some of the data arrays into Fortran modules.
ATMOS PHYSICS1 — CODE STRUCTURE — Ignoring calls to diagnostics routines, the basic structure is :-
• Initialisation of increment arrays etc.
• (optional — Add energy correction increments to temperature)
• Call microphysics routines: large-scale cloud, precipitation, tracer scavenging

• (optional — PC2 turbulence)


• Radiation schemes
• (optional — Energy correction for total precipitation)
• Gravity wave drag
• (optional — Tracer source terms)
There is then a break in the physics while the semi-Lagrangian advection of thermodynamic, velocity and tracer
variables is done.
ATMOS PHYSICS2 — CODE STRUCTURE — is :-
• Initialisation
• Diagnosis of convection
• Boundary layer scheme
• Convection scheme
• (optional — Energy correction code)
• Boundary layer implicit solver
• Hydrology (called on land points only)
• (optional — River routing)
• (optional — Vegetation)
A number of the physics routines work with temperature rather than potential temperature; so the Exner pressure
is often needed to convert between them.
On full radiation timesteps (typically every 3 hours in global runs and every hour or more frequently in limited
area runs) NI RAD CTL is called, which controls both short and long wave radiation schemes and necessary as-
tronomy. There are many choices for each radiation scheme, selected using the gui/namelist. See UMDP-023
for details.
See UMDP-024 for details of the boundary layer scheme.
At various stages, SWAP BOUNDS is called for all the prognostic variables that have been updated to make
sure all grid points, including the “halo” points, are correct across all processors of the grid decomposition.
For further information, refer to sections 3.1 & 3.2 of this User Guide. The appropriate Unified Model Documen-
tation Papers give greater detail.
After the second set of physics, data assimilation AC CTL may be called, and NI DIFF CTL for optional diffusion
and divergence damping.
Finally, for the prognostic variables, a Helmholtz equation, in the increments resulting from the semi-Lagrangian
dynamics and the physics parametrizations, is set up and solved thus:-
• NI PG UPDATE to update the pressure gradient term
• POLAR FILTER INCS to filter the near polar rows of global models
• (optional SET LATERAL BOUNDARIES for limited area models)
• NI PE HELMHOLTZ to set up and solve the Helmholtz equation
• NI UPDATE RHO to update density
• UPDATE FIELDS to update prognostic variables
• AERO CTL for aerosol modelling of sulphur, soot and/or biomass if selected
• IAU to call the IAU data assimilation scheme if selected
Lastly there are further consistency checking and diagnostic routines to bring ATM STEP to its close.

Chapter 5:
Using the Unified Model
5.1 Getting Started
Author: Glenn Greed, Last Updated: 7th July 2014

5.1.1 Introduction

The best way to become familiar with the UM is not to read the UMBUG, but rather to actually use the UM. In
this section we take you through the procedure of running an example UM suite and provide a review of some
of the important concepts in a UM app. This section follows the getting started UM rose tutorial example,
and all readers are encouraged to work their way through the complete UM rose tutorial. It takes you through
running your first UM suite, a brief tour of the UM gui and how to make code changes. These are essential
exercises/examples for all UMBUG readers! Please note that the UM Rose course does assume one is familiar
with both FCM concepts and basic Rose concepts.

5.1.2 Before you start

There are a number of things which you must do before using the UM with Rose. These are outlined below. In
what follows, within the Met Office, the remote machine is the IBM Power7 while the host machine will normally
be a Linux desktop. Other users must interpret these names appropriately to their computing system. For
example both could be a Linux desktop.
• read http://metomi.github.io/rose/doc/rose-rug-getting-started.html
• Make sure you have a valid account on the host and remote machine, and any other machines which you
may require access to.
• Rose UM job submission uses ssh so one needs to set up appropriate authorisation.
• At the Met Office there is a centrally managed script ukmo-ssh-setup to do this.
• One can also do this manually if required, using the ssh-keygen on the local and remote machines. (Seek
local advice.)

5.1.3 Overview of UM in rose

Let’s run a simple sample UM rose suite, using the command line.
Please note that the name of the standard suites in the Rosie database will vary between sites, and also change
between releases. Please consult with your local system manager for an appropriate suite.
The following is purely an example of the command line usage.
• rosie checkout mi-ab981@14410 - To check out this suite
• cd /roses/mi-ab981 - go into the checkout directory
• rose suite-run - submit the suite to run.
If all is well, this will bring up the rose cylc gui that depicts suite progress. Figure 5.1 here shows both task and
graph views. Notice that the suite pictorially is made up of 3 separate tasks:
• fcm_make - the extract and build of the UM source code. (grey-completed)
• recon - reconfiguration is used to create a suitable start dump (grey-completed)

• atmos - the actual forecast run of a sample UM science configuration. (green-running)


When submitted each task will turn green when running and grey when completed. If all is successful the cylc
gui will empty and a ”stopped with succeeded” message will appear.

Figure 5.1: Example simple UM rose suite as depicted by the cylc gui.

Congratulations you have run your first UM suite within the Rose environment.

5.1.4 Model Output

Great, we have run the UM, but where did the model run output go?
Output from Rose suites goes to a variety of locations. Rose bush is used to display standard output and errors
from the suite:
• rose suite-log

Figure 5.2: Example rose bush view of model run with links to output.

Figure 5.2 shows the web browser interface to our sample suite. For each task one can click on script (run
script), out (stdout) and err (stderr) to view the output from the suite tasks.
Binary output from the model run usually goes to a couple of locations:
• /cylc-run/(suite-name)/share/data
• /cylc-run/(suite-name)/work/
depending upon the output settings of your apps in the suite.
Compilation output from the fcm_make task may be found in
• /cylc-run/(suite-name)/share/fcm_make/fcm-make.log

5.1.5 Example suite make up

The sample suite run is a very simple suite, with three components: fcm_make, recon and atmos. Each task is
defined by its app config (http://metomi.github.io/rose/doc/rose-rug-introduction.html).
• fcm_make apps are automatically recognised by rose as fcm-make tasks.
– app/fcm_make/rose-app.conf - variables associated with the build, including the trunk revisions of the
UM and JULES which are being built
– In some suites one will notice an optional fcm_make2 task. This mirrors and builds the extracted code
on a remote machine.
• UM apps
– app/um/rose-app.conf - in this suite it defines the configuration of the UM to be run; for both the recon
and the atmos forecast.
A closer inspection of the UM app shows that its first line is
meta=um-atmos/vn9.1
This informs rose of the metadata used by the app. The app and metadata together provide all the information
required to understand what the app settings mean and how to display them within the rose UM gui.
[command]
default=um-atmos
recon=um-recon
This informs rose what command to run related to the app, here one can select to run either the um-atmos or
um-recon wrapper scripts to run the UM forecast or reconfiguration code respectively.
[env]
populates environment variables required by the reconfiguration and/or model run.
[file:xxxx]
collates the following namelists into ’files’ as read in by the recon and/or UM code.
[namelist:xxxx]
defines all the runtime inputs as required by the reconfiguration and/or UM atmosphere model run.
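Putting these pieces together, a heavily abbreviated sketch of a UM app's rose-app.conf is shown below. Only the lines quoted above are taken from the sample suite; the xxxx placeholders and comments simply indicate where the real app lists its environment variables, files and namelists:

meta=um-atmos/vn9.1

[command]
default=um-atmos
recon=um-recon

[env]
# environment variables required by the reconfiguration and/or model run

[file:xxxx]
source=namelist:xxxx

[namelist:xxxx]
# runtime inputs for the reconfiguration and/or atmosphere model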

5.1.6 Configuring a Suite and/or app

Metadata and configuration files at both the suite and app level enable the user to configure what they are
running, via the rose UM gui.
To open the rose gui, simply enter the following in the working directory of the suite:
• rose edit
Take some time finding your way around the contents of the UM app and seeing how the interface works. Revisit
the Rose documentation to understand how the meta-data controls what you view in the gui.

5.1.7 Standard Suites

The UM trunk is routinely tested daily against a number of standard suites, to check they still work and results
are as expected, using rose stem. This same test system is employed by developers to provide supporting
evidence of the impact of their code changes. The standard suites are mentioned in the release notes and all
their app files are maintained alongside the UM source code on the UM trunk. We do not provide the required
input data for all these configurations though.
A simple standalone suite and all its required input files are provided alongside the UM release to aid partners in
testing their installation.

Figure 5.3: Example rose-gui.

External users may obtain Met Office sample jobs from http://collab.metoffice.gov.uk/twiki/bin/view/Support/UnifiedModel
under parallel suites, or by contacting the External Collaboration team or your Met Office collaborator.

5.1.8 Suite discovery and management

Rosie is an optional part of Rose that handles suite storage and discovery. It uses Subversion and a web-
accessed database to version control and keep track of suites. Suggested reading:
http://metomi.github.io/rose/doc/rose-rug-rosie-go.html.

Figure 5.4: Example view of the rosie gui.

5.1.9 Copying a suite

While you can checkout and run anyone else’s suite, what if you want to duplicate and develop your own fork?
• rosie copy (suite-name)
will create a new suite name for you with a copy of the files. You now own a copy of the suite and may commit
changes.

5.1.10 Work through the tutorial

This section has deliberately been sparse in detail. Please work through the FCM, Rose and UMRose tutorials.
You will learn more by doing than reading!

5.2 Reconfiguration
Author: Glenn Greed, Last Updated: 8 July 2014

5.2.1 Introduction

The reconfiguration is part of the suite of UM supporting executables and is a standalone program. The re-
configuration allows the user to modify UM dump files which contain the necessary data to define the initial
conditions for a model run. The format of UM dump files is described in UMDP-F03 . The reconfiguration
program only processes atmospheric data.
The reconfiguration is run as a task within a Rose suite; see section 5.1 for an example of such a suite. The initial
conditions are input in the UM app namelist and are configurable through the UM Rose GUI. Not all UM suites require a reconfiguration
task and thus this may or may not be included in a suite dependency graph as required.

5.2.2 When does a dump file need reconfiguring?

A dump might need reconfiguring for a wide variety of reasons, from simply upgrading a dump to be compatible
with the latest version of the UM to incorporating new prognostic fields or setting up for test cases.
A file needs reconfiguring if:
1. Upgrading to a new model release. This will ensure that the dump corresponds to the format for the new
model release. The reconfiguration will convert any dumps as required. See UMDP-F03 for details of
format. Note downgrading to an earlier release of the model is not guaranteed to work.
2. There is a change of resolution or area. The dump needs to be interpolated onto the new domain or
sub-domain.
3. There is a need or desire for configuration from an ancillary file. The ancillary data may overwrite existing
data or may form a new field.
4. New user prognostics, tracers or transplanted prognostic data are requested. Then the reconfiguration
reserves space in the dump for these fields and initialises them. See UMDP-S01 for details.
5. Initial file is in GRIB format. Currently only a limited selection of fields from ECMWF are supported. (see
UMDP-303 ).

5.2.3 What can the reconfiguration do?

The following facilities are available for atmospheric data:


• Upgrading a dump from earlier releases of the model. Backward compatibility is generally not supported,
but may work in some circumstances. Note also that due to large changes in model addressing the
Reconfiguration cannot upgrade dumps from UM runs prior to version 5.0, a special tool is required for
this purpose.
• Adding fields to or subtracting fields from the dump.
• Initialising or reinitialising fields to zero, constants, ’Missing Data’ or the result of some field calculations.
• Adding user defined fields to the dump.
• Adding tracers to the dump
• Interpolating to a new resolution or sub-domain.
• Initialising fields with ancillary data.
• A transplant option: A facility to replace prognostic data over part of the domain with prognostic data from
another dump.
• Create a UM dump from a suitable ECMWF data file. (see UMDP-303 )

• Initialisation of VAR LS state dumps.


Note that any data from sources external to the model e.g. ancillary files or prognostics in another dump must
be at the same resolution and on the same domain as the output dump. The same holds for fields held on
different grids, e.g. river routing fields which are read from external dump files. The fields used must also be at
the same resolution as the fields they will become in the output dump.

5.2.4 Defining a reconfiguration using the GUI

As the reconfiguration is preparing a dump for a model run, it takes most of the information required from that
provided to run the model itself; hence the recon and atmos tasks usually share an app configuration. Even when
running a suite for reconfiguration purposes only, it is necessary to configure the model science as will be used
in any future forecast suite.

Settings useful for all configurations

• Turning the reconfiguration on. This is defined at the suite level in the dependency graph. In the previous
section the sample suite has the following defined in its suite.rc file:
graph = """ fcm_make_omp_prebuild => \
recon_omp_prebuild => \
atmos_omp_prebuild"""

If the initially supplied data were already suitable for the model run then one could remove the recon
dependency from the graph, as sketched below.
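For instance, using the task names from the sample suite in section 5.1, the graph would then simply read:

graph = """ fcm_make_omp_prebuild => atmos_omp_prebuild """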
Let's say the recon is required. Initial data to the suite is supplied in:
• Supplying initial data (namelist input ainitial) um
-> namelist
--> Reconfiguration and Ancillary control
---> General reconfiguration options

If the initial dump is in GRIB format, the user must reconfigure the dump with the GRIB switch. See the
HOWTO for more detail on reconfiguring from GRIB UMDP-302 .
Here the user can also specify whether to run the reconfiguration to create a VAR LS dump and can
impose checks on, or reset, the data in the dump created by the reconfiguration. These options can often
help reduce model problems after interpolation.
• Specifying the number of processors um
->env
-->Runtime Controls
--->Reconfiguration Only
is where the user can specify how many processors to use for the reconfiguration.
On many modern computers users will rarely have cause to use more than one processor, but users on
some platforms or with some model configurations may need to use more due to memory limitations or for
decreasing runtime on jobs performing a lot of interpolation.

To compile the reconfiguration

• Compilation of the Reconfiguration is controlled by fcm_make


-> env
--> Configuration file
--> Make steps

--> sources
--> Basic compilation
--> Pre-processing

FCM is used to modify the reconfiguration in the same manner as the model.

To use Ancillary files to update standard fields

• The contents of the dump file depend on which scientific sections are requested. For example, SULPHUR
DIOXIDE EMISSIONS (STASH code 58) will only be incorporated into the dump if the Aerosol Modelling
and With Sulphur Cycle sections are included. So the user must specify which science sections
are required for any future model run even if currently only the reconfiguration is to be run (the usual use
case is to run both).
To view and change the scientific sections selected look in the GUI sections
um
-> namelist
--> UM Science Settings
• The user should select which fields are required from the ancillary files.
– Climatological datasets are configured within
um
-> namelist
--> Reconfiguration and Ancillary control
--> Configure ancils and initialise dump fields
See section 5.8 for more information about ancillary files.

To replace or initialise other or new prognostic fields

It is possible to use the reconfiguration to replace the data in any existing field or to create space in the dump
for new fields associated with science development work. To affect any field not covered by the standard
ancillary selection process a branched STASHmaster file will be required. The field may then be initialised using the
aforementioned ”Configure ancils and initialise dump fields” panel.
For more detail see the relevant HOWTO - ‘Initialise a Field via the Reconfiguration’ UMDP-302

5.2.5 Changing the resolution of the dump

The minimum requirement when reconfiguring to a new resolution is to state the number of land points in the
interpolated land-sea mask at the new resolution.
um
-> namelist
--> Reconfiguration and Ancillary control
--->Output dump grid sizes and levels

Then all the fields will be automatically interpolated to the new resolution from the original dump by the recon-
figuration program. If the number of land points is unknown, it can be determined by running the reconfiguration
with an arbitrary number of land points. This reconfiguration will abort with a message stating the number of
land points expected. The suite should then be rerun with the correct number of land points.
For a more realistic representation where more accurate simulations are needed, it is strongly recommended to
use dedicated ancillary files at the new resolution which have been derived from high resolution base data. See
section 5.8 for more details about ancillary files. Note that importing an ancillary land-sea mask may change
the number of land points.

Warning: Care must be taken with special fields such as orography and the land-sea mask. Requesting either
of these from an ancillary file will cause interpolation; interpolation can negate bit comparison as it affects other
fields.
Warning: Care must also be taken to maintain consistency when interpolated fields have a link with fields read
in from ancillary, for example an ancillary orography will not necessarily have land in the same grid squares as
an interpolated land sea mask. The same can be true of Sea Ice fields.

5.2.6 Interpolation

Interpolation is the mechanism by which data are transformed from one resolution to another. The reconfigu-
ration offers a choice of a bilinear technique, an area weighted technique, or nearest neighbour technique, to
interpolate horizontally and assumes a linear relationship with height to interpolate vertically. The exceptions to
this being the vertical interpolation of Exner and Density. For Exner, a surface pressure is calculated from Exner
which is then horizontally interpolated before Exner is recalculated from the result. Density is not interpolated
but is always recalculated.
For general use the bilinear technique is recommended; however, if interpolating onto a coarser grid the area
weighted technique is preferred.
The reconfiguration also has a coastal adjustment step, which adjusts land specific fields at coastal points
where horizontal interpolation uses a mixture of land and sea points. The spiral coastal adjustment scheme is
recommended here as the other scheme may generate inappropriate results when the nearest land points are
far removed geographically from the coastal points. This is explained in UMDP-S01 .

5.3 Changing Resolution


Author: Glenn Greed, Last Updated: 7th July 2014

5.3.1 Introduction

The Unified Model is designed to run at any horizontal resolution from coarse climate resolution with just 10s
of grid-points in each direction to high resolution limited area models with grid-lengths of less than 1km (0.01
degrees), and even down to 100m, provided sensible parameters are chosen.
Section 5.3.2 covers the gui settings involved in changing horizontal resolution.
Changing vertical resolution needs to be done with greater care. Avoid too great a difference between the
thicknesses of nearby levels and make sure the thicknesses of adjacent levels vary smoothly. I.e. specify the
ETA THETA and ETA RHO values to be smoothly varying. There are various sets of vertical levels that have
been tried and tested in standard jobs (see $UMDIR/vn9.1/ctldata/vert).
Section 5.3.3 lists the gui settings involved in changing vertical resolution.
When changing either horizontal or vertical resolution, you may need to change the values of some dynamics
and diffusion parameters too. Seek advice from the dynamics code experts.

5.3.2 Changing Horizontal Resolution

1. horizontal domain
um
-> namelist
--> Reconfiguration and Ancillary control
--->Output dump grid sizes and levels
As well as the number of rows and row length, the number of Land Points is needed too. If you reconfigure
to a different horizontal resolution without a new land sea mask, you may not know the new Number of
Land Points. Use the um-pumf utility on your reconfigured start dump and look at integer header item 25
in the um-pumf head output.
2. timestep
um
-> namelist
--> Top Level Model control
---> Model domain and timestep
Increased resolution normally needs a reduction in timestep to maintain stability. If the ‘period’ is kept as
1 day, this means increasing the Number of timesteps per ‘period’. Check that your STASH diagnostic
requests and the frequency of radiation calls are still integer multiples of the timestep.
Increasing resolution will increase the memory requirements of the model - you may need to increase the
“Number of segments” for Long- and Short-wave Radiation and Convection, to alleviate this. Alternatively,
increase the number of processors used by the job. To do this use:-
3. processors
um
-> env
--> Runtime Controls
--> Atmosphere Only
Choose values for the number of processors in the North-South and East-West directions. For vector
machines it is best to maximise vector lengths by choosing a large number of processors North-South
and a small number East-West (1 or a small even number). For other machines, like the IBM Power7, a
‘squarer’ processor decomposition is more efficient. If running on more than one node, make sure there
are no unused processors (unless being allocated to the I/O server or a coupled model).
4. Ancillaries:
um
-> namelist

--> Reconfiguration and Ancillary control


--> Configure ancils and initialise dump fields
The reconfiguration program interpolates dumps to other resolutions. However, ancillary files can only be
used at their native resolution; if you don’t have the correct ancillary files, then please ensure that you
are not requesting the configuration of said ancillary items here. The model will fail if you try to “Update”
ancillary fields from files at the wrong resolution. Reconfiguration will fail if you try to “Configure” from
ancillary files at the wrong resolution. See section 5.8.3 for notes on making new ancillary files.
As well as changing the processors one may also need to alter the job resources, particularly memory and time.
As a rough guide, scale the memory limit in proportion to the change in number of grid points, or you may have
to use more processors. Scale the time limit in proportion to the change in number of grid points and timesteps
required to complete the forecast (the cost of compilation and ‘start up’ time will remain fairly constant). Check
the actual amounts used and refine your choices.
These are set in the suite files rather than the apps.
Changes to dynamics and diffusion parameters will most likely also be required when changing horizontal
resolution. Seek guidance from the dynamics code owners.
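As a hedged, purely illustrative example of this scaling: halving the grid length in both directions gives roughly
4 times as many grid points and typically requires about twice as many timesteps, so the time limit should be
scaled by roughly 4 x 2 = 8 (for the same processor count), while the memory limit scales by roughly the factor
of 4 in grid points. Treat these factors only as a starting point and refine them from the resources actually used.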

5.3.3 Changing Vertical Resolution

1. Vertical grid information


um
-> namelist
--> Reconfiguration and Ancillary control
--->Output dump grid sizes and levels
Choose the Number of levels, ‘wet’ levels (those at which moisture calculations are done) and ozone
levels. The ETA_THETA and ETA_RHO levels, as well as the height of the top of the model, are specified
in the vertical levels namelist file (a sketch of such a file is given after this list). There are ‘number of
levels + 1’ ETA_THETA values, which must always start at 0.0 and end at 1.0. The parameter
Z_TOP_OF_MODEL determines the height of the atmospheric domain and hence the physical height of
each level. FIRST_CONSTANT_R_RHO_LEVEL is the level at which model levels change from being
terrain following to being surfaces of constant height above sea level. The choice of this parameter within
a level set is governed by the orography in the model: the height of this level should be at least twice the
maximum height of the orography over the whole domain. In general, FIRST_CONSTANT_R_RHO_LEVEL
is the level at a height of around 18 km. The pathname of the vertical levels namelist file is given by
vert_lev. The numbers of boundary layer levels (which may now cover the whole depth of the troposphere
to allow better boundary layer/convection coupling), deep soil levels (usually 4), and cloud levels used in
radiation (which should be the minimum of the number of wet levels and the total number of levels) are
also set.
2. Related levels information
um
-> namelist
-->UM Science Settings
---> General Physics Options
----> Large Scale Cloud

The number of rhcrit values must match the number of model levels defined in the configuration. Limited
area models often use a different set of low level critical humidity ratio values from the standard global
configuration. The diffusion settings may also need to be reviewed as they contain level information.
Increasing vertical resolution will increase the memory requirements - you may need to review the “Number
of segments” for LW, SW and convection sections, although IBM optimisation now suggests using the
“Segment size” option with ‘80’.
Again, it may be necessary to increase the number of processors used and job resources. See Horizontal
Resolution section above.
3. Ancillaries:
um

-> namelist
--> Reconfiguration and Ancillary control
---> Configure ancils and initialise dump fields
The reconfiguration program will interpolate dumps to other resolutions, both horizontally and vertically.
However, ancillary files cannot be used in models at other resolutions. Changes to the number and
spacing of model levels apply particularly to the ozone ancillary file. Again, see the corresponding
Horizontal Resolution section above.
4. Finally, STASH items’ domains may need to be updated for the new levels (including boundary layer levels),
and time profiles may need revising for any timestep changes, via
um
-> namelist
--> Model Input and Output
---> STASH request and profiles
Where possible, specify times in hours, days, etc. rather than in timesteps.
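As referred to in item 1 above, the following is a minimal sketch of a vertical levels namelist file. The values
are purely illustrative (a 5-level set, far too coarse for real use); real level sets, such as those under
$UMDIR/vn9.1/ctldata/vert, contain 38, 70 or more levels, and the exact variable names should be checked
against a standard file from that directory:

&VERTLEVS
 z_top_of_model = 40000.0,
 first_constant_r_rho_level = 4,
 eta_theta = 0.0, 0.1, 0.25, 0.45, 0.70, 1.0,
 eta_rho = 0.05, 0.175, 0.35, 0.575, 0.85,
/

Note that eta_theta contains ‘number of levels + 1’ values, starting at 0.0 and ending at 1.0, while eta_rho
contains one value per level, each lying between the neighbouring eta_theta values.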

5.4 Atmospheric Tracers


Author: Nick Savage, Last Updated: 28 March 2012
Please note that this section is only valid for versions prior to UM9.0 and needs to be updated for UM9.0 onwards.

5.4.1 Introduction

The UM has the functionality to advect a number of atmospheric ‘tracers’. These are variables that represent
the mass mixing ratio (normally kg/kg but in the case of murk µg/kg) of any quantity that the user wants to be
advected using the tracer advection scheme, and are distinct from the basic model prognostic variables. The
tracers are also transported by the convection scheme, and (optionally) by the boundary layer scheme.
For flexibility there are up to 150 ‘free’ tracers available in a prognostic section 33 and a further 150 tracers
reserved for the UKCA chemistry and aerosol model in section 34. The Unified Model and its reconfiguration
program treat fields in these sections (33 & 34) in a similar way to the standard prognostic fields in section 0,
placing them in the model start dump.
Some other specific tracer species such as ‘murk’ for visibility forecasting and those used by the CLASSIC
aerosol scheme remain within section 0.
The tracers are advected using the standard semi-Lagrangian advection scheme. Interpolation options for this
scheme may be chosen for tracers independently of other model variables. The method is designed to ensure
that the advection is done with a chosen level of accuracy (if desired, monotonically) and in such a way that
mass is conserved.
It is possible that concentrations less than or equal to zero may arise and calls to a routine to reset negative
values are included in the tracer transport scheme. This can be particularly important when the tracers represent
concentrations of a chemical compound, especially when they are passed to a model which includes chemical
reactions, as negative concentrations can cause numerical problems.
This section will cover the UMUI aspects of using tracers, some guidelines for getting them into the model (i.e.
ancillary fields), how to set up tracer lateral boundary conditions for LAMs and a brief introduction to aspects of
the code as a start for users wishing to build on the system.

5.4.2 Setting up tracer transport

If murk, CLASSIC aerosol, UKCA or free tracers are used in a job it is necessary to turn on the tracer transport
scheme. There is currently only one version (2A) available. To advect atmosphere tracers this version must be
included through window atmos Science Section Atradv
Atmosphere
=> Scientific Parameters and Sections
-- => Section by section choice
---- => Section 11: Atmospheric tracer advection
A wide range of options in terms of accuracy, monotonicity etc is possible. It is recommended that the highest
accuracy scheme is used, similar to that used for moisture advection. These options may be selected through
the window atmos Science Section Advec
Atmosphere
=> Scientific Parameters and Sections
-- => Section by section choice
---- => Section 12: Primary Field advection

5.4.3 Initialisation of Tracers

To introduce and initialise the selected tracers in the model start dump (unless they already exist in the dump),
it is normal to edit the STASHmaster file and initialise them either to a constant value or from a named ancillary
file. See the relevant panels under Atmosphere/STASH.

Creating an ancillary file for tracer initialisation is relatively straightforward, but does depend on where the data
originate and how they are generated. Programs exist to generate such files from pp format files. Points to note
are:-
• Ensure that the STASH, pp and grid codes are correct, and set the space code to equal 2 so that the item
is included in the dump.
• The reconfiguration step checks that the verification time of the data matches that of the dump being
reconfigured into. It is therefore difficult to include a general ‘test blob’ without recreating the ancillary
field. An FCM changeset can be generated to overcome this.
• It is important that the level dependent constants for the levels and the level thicknesses match the dump.

5.4.4 Schemes using atmospheric tracers

This section describes in more detail the schemes which use atmospheric tracers and how to set up lateral
boundary conditions for LAMs for each scheme. All tracers are held on atmospheric theta points.

MURK

A specific aerosol variable (known as ‘murk’) exists which is advected using the tracer scheme, and may have
sources and sinks applied. This is used to help estimate visibility, and may be adjusted by assimilation of
visibility data. In limited area models the boundaries may also be updated from the boundary conditions file.
The concentrations of murk need to be in the same lateral boundary condition file as the meteorological variables
and on the window atmos InFiles OtherAncil LBC
Atmosphere
=> Ancillary and input data files
-- => Other ancillary files and Lateral Boundary files
---- => Lateral boundary conditions
choose the tick box ’Input LBCs include murk aerosol’.

CLASSIC aerosol scheme

The CLASSIC aerosol scheme runs using tracers in section 0 within the UM. For more details see section 3.1.4
and UMDP-020 .
These can be set via the window atmos Science Section Aero
Atmosphere
=> Scientific Parameters and Sections
-- => Section by section choice
---- => Section 17: Aerosols
As for murk, lateral boundary conditions for CLASSIC aerosols can be included in the LBC file and need to be
turned on via the LBC file input window.

UKCA Model

This model uses tracers in section 34, and these are selected depending on the chemical or aerosol scheme
selected. See section 3.1.10 and UMDP-084 for more details including setting up lateral boundary conditions.

Using free tracers

The ‘free’ tracer system is intended as a basis for testing out any new scheme or for ‘user’ applications that
would not be appropriate for general introduction into the UM. The tracers are thus sometimes referred to as
‘free use’ tracers to remind potential users that they should not write code which hard wires them into some
aspect of the model. The system may be used as a way of marking particular air parcels to track them, or may
be used as the basis for more involved chemical modelling by the addition of extra code.
Once inside the model dump, free tracers will be advected and may be output by STASH using the appropriate
variables in section 33. If you simply wish to use tracers as markers, this will suffice. To use the free tracer
variables as the basis of, say, a chemical or dispersion model, will require linking in your own routines.
For atmosphere tracers select window atmos Config Tracer
Atmosphere
=> Model Configuration
-- => Atmospheric Tracers
Select the first question in the window to set whether tracers are included in the Atmosphere model.
After this question, there is a table that can be scrolled to select the tracers to be used. Tracers are selected by
inserting the following values in the first column against the relevant tracer:-
• 0 : Do not use the tracer.
• 1 : Include from an input dump.
Lateral boundary conditions may optionally be selected if running a LAM by using the second column of the
table. As the input LBCs are held in their own section it is also necessary to turn on section 36 on window
atmos Science Section Tra LBC
Atmosphere
=> Scientific Parameters and Sections
-- => Section by section choice
---- => Section 36:Tracer LBCs
There is a question which selects whether the tracers should only be advected or whether they should be mixed
in the vertical by the boundary layer scheme as well. It is possible, via a branch, to select a subset of tracers to
be mixed, but the UMUI currently selects either all or none.

5.5 Selecting a New LAM Area


Author: Glenn Greed, Last Updated: 8 July 2014
It is assumed here that the developer has a working UM LAM suite and wants to alter or relocate the LAM
domain and/or resolution.

5.5.1 Preliminaries

Setting up the Unified Model for a new limited area domain is relatively straightforward, and will enable you to
run higher resolution suites concentrating on your area of interest. However you should at the outset consider
the time and effort involved.
Expect two days to a week or so of dedicated work, depending on experience, to get a smoothly running
Limited Area Model (LAM) on a new domain. A similar length of time may be needed to create new ancillary
files for the new domain. In addition, running with data assimilation on a new domain requires further effort
because of the work involved in preparing IAU/ACOBS files; if this is required, consult section 5.7.
It is recommended, where possible, that users base their LAMs on one of the Met Office app configurations. This
will greatly reduce the work involved in setting up a LAM over your domain of interest, leaving you to consider:
• the size of domain, horizontal and vertical resolution
• the availability of initial data and boundary conditions
• the length of forecast
• whether ancillary files are available or will need to be created
• what STASH diagnostic output you will require
• and what computer resources you have available.
If the user chooses to build a LAM from scratch, or with different settings (resolution, dynamics and/or physics)
from those used by the Met Office, one will also need to consider:
• horizontal and vertical resolution (the spacing of grid points/model levels)
• the model timestep
• the choice of science parametrizations
• the order and coefficients of diffusion; these will depend on grid length and timestep.

5.5.2 Overview

The work required can be split into 5 areas, some of which can be carried out in parallel. They will be described
in detail below:
1. Choose new LAM domain
2. Create new ancillary files, if necessary
3. Obtain lateral boundary data for the new area
4. Set up the reconfiguration settings within the UM app to produce a suitable initial dump
5. Set up the UM atmos settings to produce the new LAM forecast
Tasks 2 and 3 can be done in parallel.

5.5.3 Choose new LAM domain

The most convenient method is to use the “LAMPOS” graphical utility. For Met Office users, this is available on
Linux. “LAMPOS” is also supplied with the ported Unified Model. Otherwise it may have to be done by spherical
trigonometry or by trial and error.
The following 8 data items are needed to define any Unified Model domain, and they are used in the subsequent
steps for generation of boundary conditions and setting up your new model domain:-
• Co-ordinates of Rotated Pole
– latitude and longitude of the rotated pole in ‘co-ordinates of rotated pole’ section
• Grid
– x and y direction gridlengths — labelled as col and row spacing
– x and y grid dimensions in gridlengths — labelled as No of cols, No of rows
– latitude and longitude of the “bottom left corner” of the domain (relative to the rotated pole) — labelled
as First lat, First lon
Please note that LAMPOS currently only works for New Dynamics based domains. ENDGame domains are
subtly different, as First lat and First lon do not coincide with an actual grid point.
Conceptually it is simplest to think of situations where the equator/meridian crossing of the rotated grid (“+” on
the LAMPOS display) is roughly in the centre of the domain, as with the Met Office NAE model. In the Northern
Hemisphere, the latitude of the rotated pole in degrees is given by (90 - latitude of rotated equator) and the
longitude of the rotated pole by (180 + the eastward longitude of the rotated meridian). For example, for the
NAE the equator/meridian crossing is at 52.5 N, 2.5 W, so the coordinates of the rotated pole are 37.5 N,
177.5 E.
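Expressed as a short worked calculation (Northern Hemisphere case, restating the rule above):

 pole latitude  = 90 - (latitude of rotated equator crossing)
 pole longitude = 180 + (eastward longitude of rotated meridian)

 e.g. crossing at 52.5 N, 2.5 W (i.e. -2.5 E):
 pole latitude  = 90 - 52.5    = 37.5 N
 pole longitude = 180 + (-2.5) = 177.5 E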
However it is not essential for the rotated equator/meridian point to be within the domain, and virtually any
domain location and rotation can be achieved by moving the rotated meridian outside of the domain and
adjusting the coordinates of the bottom left corner accordingly. Navigating the rotated pole to the best location
can be quite tricky, and exploring the possibilities with LAMPOS is a great help.
When working with a regional domain in the Southern Hemisphere, extra care should be taken defining the
domain coordinates. For example, one could specify an “imaginary” latitude, greater than 90N, for the rotated
pole. The Unified Model system (ancillary file creation and forecast model) will accept and run with such an
imaginary latitude, but this can lead to the data in output files (fieldsfiles, PP files, etc) being incorrectly identified
by some utilities and plotting programs, leading to incorrect map backgrounds or worse, so beware!
An alternative and the recommended approach, which gives an identical forecast and a correct map background,
is to “mirror” back to a rotated latitude between 0 and 90N by subtracting from 180, e.g. 110N => 70N. Then
swing the rotated longitude through 180 degrees, and also add 180 degrees to the longitude of the bottom left
corner.
When selecting the new domain, it will save time on tuning the model if you choose a tried and tested resolution
for which diffusion coefficients etc. are known. These are 0.11 degrees for the current UK NAE model and 0.036
degrees for the UK 4km model.
If you intend to run your model in the future within an assimilation cycle, please note that the assimilation step
uses its ‘own’ model grid (the VAR grid, usually taken to be half the resolution of the LAM). It is important that
the numbers of rows and columns in the VAR grid have only 2, 3 and 5 as prime factors, to avoid aliasing
problems when moving between the LAM and VAR grids. It is thus recommended that the rows and columns
used in the UM grid take this into account; otherwise the VAR grid will have to be a sub-domain of the LAM.

5.5.4 Create new ancillary files

This step is not essential as you could simply reconfigure (see section 5.2) from a global dump to get initial
conditions for your run. However, the resulting land/sea mask, orography and other fields after interpolation will
be smooth and lacking in detail. For your new domain, you will almost certainly want to create new land/sea
mask and orography files at your chosen resolution. It is advised that new soil, vegetation and ozone ancillary
files are also created for a new LAM domain.
Atmosphere ancillary files for a new domain can be created using the Central Ancillary Program (CAP) which
has gathered together all the currently available options into a single flexible package. For details on its use see
UMDP-073 . A script to run this program and generate one or more ancillary files is available on the IBM. The
latest stable version is:
$UMDIR/vn8.3/ancil/build/v1.1/bin/AncilScr_top
The script is run on the IBM and its output includes simple maps of the land/sea mask and orography for
you to inspect. Please note: when the script is run to create ancillaries it creates a README file in the target
directory. Make a note of the ‘No of Land Points’, as this value is required by the UM Rose gui when setting up
the reconfiguration task.
This program may not be available directly to external UM users. Please seek advice and assistance from
Ported Unified Model support at the Met Office.
Before using ancillary files for a new domain, it is advisable to have a careful look at the land/sea mask, which
influences all the other ancillary files except ozone, to make sure it meets your requirements. In particular, single
grid point islands or sea inlets could give rise to unsightly or unrealistic effects in surface and boundary layer
fields. The CAP provides the capability to output an ASCII representation of the IGBP derived land sea mask
(LSM). The user may then ‘edit’ this to remove unwanted features. Similarly another option (-k) can be used
to ‘smooth’ features. Once happy with the LSM, create your other chosen ancillaries; for example, orography,
vegetation, soil fields, ozone and soil moisture.
Make sure you enter values to exactly the same precision as you will use in the UM namelists/GUI when setting
up suites. Otherwise your runs may fail internal consistency checks in the model. Also make sure longitudes
are in the range 0 - 360 degrees as this is preferred for the UM namelists.

5.5.5 Obtain lateral boundary data for the new area

This step is essential as the Unified Model cannot run as a limited area model with fixed boundary conditions,
except for idealised suites. Beyond a few hours the forecast will become unrealistic without properly generated
boundary data.
Typically you will obtain boundary conditions from the global model for a limited area model (LAM). However, it
is possible to drive higher resolution LAMs from another LAM. For example, a 1km model could be driven by
boundary data generated by a 4km model.
The only way of generating lateral boundary conditions for a limited area run is by generating them from existing
standard UM dumps or fields files using a utility called MAKEBC. For details of how to use MAKEBC, see
UMDP-F54 .

5.5.6 Setting up the RCF and UM run app for the new LAM forecast

We shall assume that you have taken a copy of a ’standard LAM suite app’ to form the basis of your new LAM
suite. The following is a whistle-stop tour of the UM gui panels that you may need to alter in order for the new
LAM domain to function. As the RCF and UM run share the same app, we shall consider the changes together.
In addition to reconfiguring the initial dump from the driving model (whether Global or LAM) to the new LAM
domain and resolution, the reconfiguration program can be used to transplant data from any new ancillary files
into the LAM start dump. Such data might include a new land/sea mask and orography files at your chosen
resolution and perhaps also new soil and vegetation ancillary files for the new LAM domain. For details of how
to include new ancillary data using the reconfiguration program see section 5.8, especially section 5.8.6.
A special note is necessary regarding the orography for the new LAM domain. The orography at the edge of the
LAM needs to be the same as that which was used in the driving model. This is because LBC data is defined
on the driving model’s levels, and these need to be at the same heights as the LAM levels in the rimwidth where
they are defined. Since the height of a model level depends on the height of the orography beneath it, both
models must have the same orography within the LBC rimwidth. In the model area immediately inside the LBC
rimwidth the orography is slowly relaxed from that of the driving model to the full resolution orography for the
LAM using linear interpolation. The area over which this occurs is called the blending zone, and it includes the
LBC rimwidth. Each gridpoint within the blending zone is designated a weight, with weight=1 indicating that
the orography in the new initial dump will be purely that of the driving model while for weight=0 it will be purely
that of the LAM. The final gridpoint of the blending zone is the last such gridpoint with a non-zero weight. The
interpolation is achieved by running the reconfiguration program with the input dump from the driving model to
create an output dump for the LAM model to be run.
The settings for the number of gridpoints in the blending zone and their weights will depend upon the particular
LAM area and especially the orographic details at the boundary of that area. For example, for the NAE model
the blending zone is set to 25 gridpoints with the first 10 having weight=1 followed by a smooth decrease over
gridpoints 11 to 25 such that gridpoint 26 (i.e. the first point not in the blending zone) has weight=0. For the UK
4km model, the blending zone is 15 grid points, with the first 8 having weight=1 and a smooth decrease over
blending zone gridpoints 9 to 15 such that gridpoint 16 has weight=0. Note that there is a close relationship
between the blending zone discussed here and the rimwidths for prognostic fields discussed in section 5.5.5
above. The orography blending values should all be 1 within the rimwidth and then decrease smoothly outside
of the rimwidth.
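As a worked illustration of the NAE settings just described (the blending-zone width and the linear decrease
are taken from the text above; exactly where and how the weights are entered depends on the reconfiguration
namelist in your copied app):

 blending zone width       : 25 gridpoints
 weights, gridpoints 1-10  : 1.0 (i.e. pure driving-model orography)
 weights, gridpoints 11-25 : linear decrease, 1 - k/16 for k = 1..15, i.e.
                             0.9375, 0.8750, ... , 0.1250, 0.0625
 gridpoint 26 onwards      : 0.0 (pure LAM orography)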
An important point to note is that, whenever the orography of the driving model is changed, this step to recon-
figure the orography for the LAM must be repeated to ensure consistency!
We assume that the model domain is set to LAM within the app config file. To check:
um
-> namelist
--> Top Level Model Control
--->Model Domain and Timestep
Go to window
um
-> namelist
-> Top Level Model Control
--> LAM Configuration
to set delta_lat, delta_lon, frstlata, frstlona, polelata and polelona to the values given by LAMPOS.
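A hedged sketch of the corresponding namelist entries, using illustrative values for an NAE-like 0.11 degree
domain (the values, and the name of the enclosing namelist group, should be taken from LAMPOS and from
your copied app respectively):

 delta_lat = 0.11,
 delta_lon = 0.11,
 frstlata  = -20.0,
 frstlona  = 330.0,
 polelata  = 37.5,
 polelona  = 177.5,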
The developer specifies the width of the blending zone by supplying a list of rimweights and the LBC file to be
read in:
um
-> namelist
--> Top Level Model Control
--->LBC Related Options
To set your run start time and forecast length, visit
um
-> namelist
--> Top Level Model Control
---> Run control and Time Settings
making sure you do not go outside the timespan covered by your boundary data.
To set the number of land points and the vertical level information, visit the window below. Enter the file
containing the vertical level definitions (the VERTLEVS namelist), which must be identical to the file specified
when generating your boundary data.
um
-> namelist
-> Reconfiguration and Ancillary Control
--> Output dump grid sizes and levels
If you are purely interpolating data, or you do not know the number of land points in your land sea mask, it is
possible to find this number by running the reconfiguration program. The reconfiguration will fail with a message
like:
Reconfiguration Error
No of land points in output land_sea mask = 21271
No of land points specified in namelist RECON = 104538
Please correct the number of land points, via the gui, updating LAND_FIELD within SIZES
For further information about making changes to horizontal resolution, see section 5.3.2.
If you have created ancillaries for your new domain you may configure them via
um
-> namelists
--> Reconfiguration and ancillary control
---> configure ancils and initialise data
Many suites will be set up (via env var) to immediately pick up the start dump created by the reconfiguration. It
can be set explicitly at
um
-> namelists
--> Model Input and Output
---> Filenames for model input and output
If you have chosen a non-standard horizontal resolution, you should revise your choice of diffusion coefficients;
consult the dynamics code owners.
um
-> namelist
--> UM Science settings
---> Section 13: Diffusion, Filtering
Check the gui panel
um
-> namelist
--> Data Assimilation
and deselect l_iau and lac_mes unless you have produced valid IAU/ACOBS files for your domain.

5.5.7 Advice

Having set up your new suite, it is best to test it with a short run (but long enough to cover all important
features, e.g. STASH output or ancillary updating) and check the output, before committing yourself to using
large amounts of computer time and resources for your full runs.

5.6 Science Options


Author: Glenn Greed, Last Updated: 8th July 2014

5.6.1 Fundamentals

The key to the UM system is that it supports use in many model configurations, with different choices of
functionality and science code, all available by selecting different options in the UM gui or input namelists, as
well as being a development test-bed for prospective new schemes. Selection of different science options is
enabled by run-time variables: logical switches and numerical parameters initialised as Fortran NAMELIST
variables.
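As a minimal illustration of this mechanism (all names here are invented for the example and do not correspond
to real UM variables), a run-time switch and parameter supplied through a Fortran namelist might look like:

PROGRAM toy_namelist_example
  IMPLICIT NONE
  ! Defaults, overridden by whatever the namelist file provides
  LOGICAL :: l_new_scheme = .FALSE.
  REAL    :: toy_factor   = 1.0
  NAMELIST /run_toy/ l_new_scheme, toy_factor
  OPEN(10, FILE='TOYNAMELIST', STATUS='OLD')
  READ(10, NML=run_toy)
  CLOSE(10)
  IF (l_new_scheme) WRITE(*,*) 'New scheme selected, factor = ', toy_factor
END PROGRAM toy_namelist_example

with a namelist file TOYNAMELIST containing, for example:

&run_toy
 l_new_scheme = .TRUE.,
 toy_factor   = 0.5,
/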

5.6.2 Sub-models, Internal Models and Sections

From a systems point of view, a UM configuration can be described by a hierarchy of Fortran components
forming a composite structure that defines the model as a whole. The model may be composed of one or more
sub-models, such as atmosphere or ocean sub-models, which each have their own data space, holding their
respective state variables. Within a sub-model there may be one or more internal models, for example prior to
UM8.1 we had atmosphere and land (JULES).

Sections

One can sub-divide an internal model further into a number of separate sets of calculations, such as for radiation
or boundary layer processes. These are included lower down the calling tree and are termed “sections”.
This label is used in two ways:
1. To group related routines together in practical sets, for source code management and ownership.
2. To specify groups of diagnostic variables calculated in these routines, to be referenced by the internal
addressing and diagnostic system; STASH (see section 5.12). Each prognostic and diagnostic variable is
indexed by (internal model, section, item) in a master file.
The definition of a “section” has been extended to include specific sets of routines for calculating diagnostics,
such as atmospheric dynamics diagnostics (section 15), and also sets that are independent of internal models,
e.g. for I/O operations (section 95), which have no application for labelling indices of diagnostic variables.

Lists of sections current at vn9.1 are given in Table 5.1.

Plug compatibility and Glue routines

It is a requirement of the UM system that science schemes should be readily interchangeable with those from
other models through the concept of ‘plug compatibility’, such that there is no dependency on data structure
within the schemes. In practice this requires data to be passed down into science subroutines by argument or by
USE’ing a suitable MODULE. Plug compatibility is often achieved within the UM atmosphere science sections,
below an interface known as a glue routine. The glue routine separates control and science layers, and allows
alternative schemes — potentially with different argument lists — to be accessed for the same section.
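The following toy Fortran module (entirely invented names; real UM glue routines have far longer argument
lists) sketches the glue-routine idea: the control layer makes one call, and the glue routine hides the differing
argument lists of the interchangeable science schemes:

MODULE toy_glue_mod
  IMPLICIT NONE
CONTAINS
  SUBROUTINE scheme_2a(t)          ! stands in for one science scheme
    REAL, INTENT(INOUT) :: t(:)
    t = t + 1.0
  END SUBROUTINE scheme_2a

  SUBROUTINE scheme_3b(t, dt)      ! alternative scheme, different argument list
    REAL, INTENT(INOUT) :: t(:)
    REAL, INTENT(IN)    :: dt
    t = t + dt
  END SUBROUTINE scheme_3b

  SUBROUTINE glue_section(i_version, t, dt)
    INTEGER, INTENT(IN) :: i_version   ! run-time choice, e.g. from a namelist
    REAL, INTENT(INOUT) :: t(:)
    REAL, INTENT(IN)    :: dt
    SELECT CASE (i_version)
    CASE (1)
      CALL scheme_2a(t)
    CASE (2)
      CALL scheme_3b(t, dt)
    END SELECT
  END SUBROUTINE glue_section
END MODULE toy_glue_mod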

Section label Description


A01 Short-wave radiation
A02 Long-wave radiation
A03 Boundary layer
A04 Large scale precipitation
A05 Convection
A06 Gravity wave drag
A08 Surface hydrology
A09 Cloud scheme - stratiform cloud amount
A10 Dynamical adjustment and solver
A11 Tracer advection
A12 Dynamics advection
A13 Dynamics filtering and diffusion
A14 Energy budget and energy adjustment
A15 Dynamics diagnostics
A16 Physics diagnostics
A17 Aerosols
A18 Atmosphere assimilation of observations
A19 Vegetation scheme
A20 Fieldcalc generated diagnostics
A21 Electrification scheme
A26 River Routing
A30 Climate diagnostics at end of timestep
A31 Reading LBCs
A33 Free Tracers
A34 UKCA Chemistry Prognostics
A35 Stochastic Physics
A36 Tracer LBCs
A37 UKCA LBCs
A38 UKCA Diagnostics
A39 Nudging in the UM
A50 UKCA Chemistry Diagnostics
A70 Radiation service routines
A71 Control level routines for atmosphere
C70 Control level routines for all sub-models
C72 Control level routines for atmosphere/ocean coupled models
C80 Model dump file I/O routines
C82 Ancillary field initialisation and updating
C84 STASH diagnostics service routines
C90 General configuration-specific service routines
C92 Vertical, horizontal (including LAM) and time interpolation
C94 Miscellaneous service routines
C95 All C code: low level I/O and portable alternative routines
C96 MPP service routines
C97 UM timer routines
C98 OpenMP
L08 JULES

Table 5.1: Unified Model Sections - Atmosphere (A), Control (C) and Land (L)

5.7 Assimilation in the Atmosphere Model


Author: Bruce Macpherson, Last Updated: 02 Aug 2010

5.7.1 Cloud and rainfall data assimilation

The data assimilation scheme adjusts the model state towards observations, providing initial fields for forecasts.
For most applications, analysis increments are calculated outside the UM by the VAR variational scheme and
increments are added to the model at the start of the forecast. When this addition takes place gradually over a
period, to limit noise generation in the model, it is known as an Incremental Analysis Update (IAU). If the analysis
increments are sufficiently ‘well balanced’, they may be added at a single time, and this is the procedure used
operationally in the Met Office global and NAE (North-Atlantic European) models.
For observations of cloud fraction and rainfall rate, however, assimilation may take place inside the UM model
code itself, where observational data are assimilated by an iterative or ‘nudging’ procedure which is carried out
at each timestep of an assimilation ‘time window’ around the analysis time. This technique is referred to as
the Analysis Correction scheme and is currently used operationally in the limited-area NAE and UK4km con-
figurations, for which rain rate observations are available from a radar processing system. Cloud observations
analysed within a nowcasting system were also assimilated operationally via the AC scheme until November
2008, but since then have been assimilated within the VAR scheme. The interface between cloud and rainfall
data and the UM system is known as MOPS (Moisture Observation Pre-Processing System) and the processed
observations as MOPS data.
Users who have access to these MOPS data may run a limited area assimilation configuration including them.
Other users may be contemplating the assimilation of their own cloud and rain data. In either case, users are
advised to seek specialist advice from the Met Office.

5.8 Ancillary and Boundary Files


Author: Glenn Greed, Last Updated: 14 July 2014

5.8.1 Introduction

Whether a particular UM variable is an “ancillary field” depends on the context in which it is used. A ”prognostic”
field is any field which is forecast by the model, and is required to define the model state.
An ancillary field may be defined as a prescribed prognostic variable held in a model dump, which may be
updated during a UM run by values read from an external file, with appropriate time interpolation. The external
file is known as an “ancillary file”. Ancillary fields can thus be thought of as boundary conditions in time for the
model forecast.
For example, snow depth is an ancillary field if it is imposed and either held constant or updated from an ancillary
file. However, if it is allowed to be predicted by the model’s precipitation and hydrology schemes, then it is not
an ancillary field (but it is still a ”prognostic” field).
Some fields, such as orography, are ancillaries in all circumstances. Some ancillaries, such as the parame-
ters which classify soil and vegetation characteristics, are used depending on what hydrology and vegetation
schemes are selected.
There are 3 basic ways of including an ancillary file in a UM run:
• It may be in the initial dump already. Most data fields found in ancillary files are generally already available
in initial dumps. Note that if the dump is reconfigured to a new resolution and/or domain then the ancillary
fields will also be interpolated to the new grid.
• It may be replaced in the initial dump by configuring in from an ancillary file on the correct domain and
resolution.
• It may be added to the initial dump as a new ancillary field by configuring in from an ancillary file on the
correct domain and resolution.
The configuring operations referred to above are carried out by the Reconfiguration program (see section 5.2).
The user should also be aware of the following:
• There are advantages to using ancillary files. The user has more control of the data used, and the
interpolation to the correct resolution and/or domain done within the ancillary creation program is better,
as certain fields are checked against one another for consistency (vegetation and soil parameters for
instance). Also, some fields, especially the land-sea mask, can be edited by hand to reduce the amount
of noise in the lower boundary layer.
• For initial suites with new areas the interpolation performed within the reconfiguration is usually sufficient
and it only becomes necessary to use ancillary files when the new area is finalised.

5.8.2 Standard ancillary files

Many standard ancillary files are available. The locations of Ancil files are specified in ”Ancil version files”. This
is fully explained in the documentation for the Ancil project - available in the doc/ folder relative to the project
root directory.
Ancillary file names include the designation “parm” for constant fields (e.g. orography, land-sea mask) or “clim”
for climatologies, which have time-dependence (e.g. sea surface temperature, soil moisture).

5.8.3 Generation of new ancillary files

Atmosphere ancillary files for a new domain can be created using the Central Ancillary File Making Program
- see the doc/tech_papers/ATDP10.html documentation relative to the root directory of the Ancil project. Its
output includes simple maps of land/sea mask and orography for the user to inspect. (This program is not made
directly available to external UM users. Please seek advice and assistance from UM Collaboration support at
the Met Office).
Before using ancillary files for a new domain, it is advisable to examine the land/sea mask, which influences all
the other ancillary files except ozone, to make sure it meets your requirements.

5.8.4 Model aspects

Ancillary fields may be either “configured”, i.e. replaced or added to a start dump by the reconfiguration program,
or “updated”, i.e. modified at specified time intervals during a model run, using values taken from an external
ancillary file. In the case of “updating”, the routines which read in the values from an ancillary file and carry out
the updating are part of the UM itself.
The interpolation, replacement or addition of ancillary fields by the stand-alone reconfiguration program is cov-
ered separately (see section 5.2).
Within the UM itself, control of ancillary fields is by namelists in the rose-app.conf file. The relevant namelists are
&ANCILCTA and optionally one or more &ITEMS namelists, depending on which fields, if any, are to be updated.
These namelists control whether any given field is to be updated and, if so, at what frequency. The &ITEMS
namelists are also used by the reconfiguration program to control ancillary configuring. &ITEMS namelists
which control ancillary updating contain the entry update_anc=.true.
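As a minimal, hedged sketch of such an entry in rose-app.conf form (update_anc is taken from the text above;
the other field names and values are only indicative of the kind of information supplied and should be checked
against the namelist metadata in your own app; the ancillary path is hypothetical):

[namelist:items(sst_update)]
ancilfilename='$UM_ANCIL_DIR/qrclim.sst'
interval=5
period=1
source=2
stash_req=24
update_anc=.true.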
Ancillary data may be single-time or multi-time, and updating may be regular or irregular, periodic or non-
periodic, or time-series. These choices are controlled by the ancillary file header - see UMDP-F03 for details.
The type of calendar used — Gregorian or 360-day — is specified in the gui. Climate users must be careful to
use ancillary files created for the appropriate year-length.
All other controls, such as:
• inter-field dependencies, e.g. SST and sea-ice,
• time interpolation using simple linear interpolation or controlled linear interpolation (for snow depth and
SST & sea-ice fields),
• whether to update land points, sea points or all points,
are coded within the subroutine REPLANCA (there are two versions of this — one for the reconfiguration and
one for the UM).

5.8.5 Boundary updating files

As an external file which updates model prognostic values during a model run, albeit only at a small number of
lateral boundary points, a boundary data file can reasonably be termed an ancillary file.
Boundary files are generated using the MakeBC utility, which takes dumps or fieldsfiles as input and produces
an LBC file. This is now the only method of generating these files.
For further details, see section 5.5.5 or UMDP-F54 .

5.8.6 User Ancillaries

The UM allows users to add new prognostic and ancillary fields. User prognostics may be set up using any
STASH item number and can be initialised in the reconfiguration.
Setting up user ancillaries is more restrictive and the user must choose from a list of reserved STASH item
numbers for single and multi-level ancillary fields.
• USRANCIL : 20 fields, Stashcodes 301-320
• USRMULTI : 20 fields, Stashcodes 321-340

Ancillary fields provided through these files will be recognised and read in; the user then has to provide appro-
priate code changes to utilise these fields within the model.
Instructions on this procedure can be found via the UM project page. External users who wish to use this facility
may contact the Met Office External Collaboration team for advice.

5.9 Single Column Unified Model


Author: R. Wong, Last Updated: 25 Nov 2010

5.9.1 Introduction

The Single Column Model (SCM) is a tool mainly used in research for the development of physics code. The
SCM can be compiled and run like the UM as a rose suite, though it requires the user to provide forcing data.

Single Column Unified Model

An SCM represents a single atmospheric column within a General Circulation Model (GCM). The treatment of
all physical processes (where implemented) within the SCM is identical to the Met Office UM. However, unlike
the UM, the effects of large-scale horizontal and vertical motion in the SCM must be prescribed via external
time-varying forcings. Data for these forcings are typically obtained via (though not restricted to):
1. Observations
2. Idealised cases
3. Statistical data
There are two main advantages of running a SCM over a GCM:
1. Parametrization development: SCMs are useful in assessing the impact of new parametrizations on the
local climate in response to prescribed atmospheric conditions. SCMs allow these impacts to be assessed
without the complications of large-scale dynamical feedback. However, it is precisely this lack of feedback
which limits a SCM to being a tool to complement full GCM tests.
2. Resource requirements: SCMs use far less computer storage and time to run. The SCM can be implemented
on a Linux desktop, with turnaround times typically shorter than on the supercomputer due to the
absence of network communication and job queues.
There are of course drawbacks in this approach, the most important being the need to prescribe the advective
forcing. This prohibits the use of the model for studying climate change due to large-scale motions.
The UM document UMDP-C09 provides more detailed information on how to set up and run the Single Column
Model.

5.10 NEMO Ocean and CICE Sea Ice Models


Author: Richard S.R. Hill, Last Updated: 06 March 2014

5.10.1 Introduction

The Met Office routinely employs the NEMO ocean [42] and CICE sea ice [32] models in fully coupled
configurations. These codes are owned by IPSL and LANL respectively. The code repositories for these components
reside outside of the UM system and are managed and developed completely independently of the UM, both in
terms of where they are physically stored and in terms of their release cycles which are in no way linked to the
UM release cycles or to each other.
Under Rose (i.e. from UM version 9.0 onwards), the UM no longer supports NEMO-only, CICE-only or combined
NEMO-CICE configurations, other than in the context of fully coupled models involving the UM atmosphere
component. Stand-alone NEMO (versions 3.2, 3.3.1 and 3.4) and CICE (versions 4.0 and 4.1) were most
recently supported by the UMUI at UM version 8.6.
This section is intended to act as a general guide to setting up and running combined NEMO-CICE components
in a coupled model context of the Unified Model system. It is not intended as a guide to development, debugging
or performance assessment of NEMO or CICE. Appropriate guidance can be found in the separate trac systems
and user guides for these models.

5.10.2 NEMO and CICE General Points

Whilst it is not the place of this document to describe in detail the principles and functions underlying NEMO
or CICE, it is worth pointing out some important aspects of controlling NEMO and CICE behaviour via the UM
control system. Unlike the UM, the CICE code and NEMO codes prior to version 3.3.1 are not dynamically
allocatable. In particular, details of CPU arrangements must be provided to the code during compilation. This
is done typically by supplying dimensions and PE configuration details in the form of numeric FPP keys which
are incorporated in the code during compilation.
The consequence of this is that whenever the number or configuration of processors is altered, the NEMO-CICE
executable must be recompiled.
From NEMO vn3.3.1 onwards, dynamic allocation is used, meaning the use of some of these FPP keys is no
longer necessary for the NEMO component. However, the use of static allocation in CICE persists and demands
recompilation if any pertinent details are altered.

5.10.3 Setting Up NEMO and CICE

Setting up Rose-based NEMO and CICE component details within a coupled model is usually best done by
taking a copy of an existing Rose-based coupled model and modifying it, or using Rosebud to convert an
existing UMUI-based job to Rose format. Many details will be common to models of similar type. This approach
avoids the need to visit every relevant potential input item. See the sub-section on standard experiments in
section 5.1.

Before you can run

Before the model can be run, there are a number of things which need to be in place and a number of further
items which may be required under certain model configurations.
1. Directories containing input files — In the case of the Met Office supercomputers, there are often existing
standard input files and directories which can be referenced directly.
2. Initial dump/restart files, input files and configuration settings — The NEMO and CICE restart files are
the equivalent of a UM atmosphere dump file and contain the initial conditions of your model. Note: it is
possible in certain configurations to run without reference to an initial restart file. Various other input files
are required to drive the NEMO and CICE components. Both NEMO and CICE require netcdf input files
containing the grid definition.
See, for example, Rose configuration items:
Coupled
-> env
-> NEMO_START
-> NEMO_VERSION
-> NEMO_ANCIL
-> NEMO_GRID
-> CICE_GRID
-> CICE_START
-> CICE_NPROC
3. NEMO and CICE configuration and FCM options — The NEMO, IOIPSL and CICE versions must be
selected. The UM control system only supports a specific range of NEMO and CICE versions at any given
release. Supported ones will be those available for selection in the following locations.
See Rose configuration items:
fcm_make_ocean
-> env
-> Extract settings for ocean builds
-> ocean_version_file
-> nemo_rev
-> ioipsl_rev
-> cice_rev
The FPP keys for all the necessary options (including the CICE PE configuration) also need to be set:
See Rose configuration items at:
fcm_make_ocean
-> env
-> CPP and FPP keys
The NEMO and CICE components are automatically built into a single executable combining both sets of code,
with an internal direct coupling interface. Hence the OASIS or OASIS3-MCT couplers play no part in this
internal coupling.

5.10.4 NEMO and CICE model output

Following successful completion of a run, the NEMO standard output file, ocean.output, is included in the UM
standard output listing. In the case of a coupled run, the NEMO output appears after the atmosphere output, but
before any job summary statistics. The CICE standard output file, ice_diag.d, is not included in the UM output,
but remains in the job directory.
If a coupled model run fails for any reason, it pays to examine not only the contents of the UM standard output
file, but any files created in the job directory produced by the coupler, NEMO or CICE components (of which
there may be many).

5.11 Coupled Models


Author: Richard S. R. Hill, Last Updated: 06 March 2014

5.11.1 Overview

The UM is able to employ either of the OASIS3 or OASIS3-MCT couplers to support two-way coupling between
the UM atmospheric component and a NEMO-CICE ocean and sea-ice component. All model components are
executed concurrently, in contrast to pre-7.0 versions of the UM where both the atmosphere and ocean models
were contained within a single executable and execution followed the atmosphere-ocean-atmosphere-ocean-...
sequence.
The coupler libraries and, in the case of OASIS3, the coupler executable, have to be pre-built and available
centrally.
For OASIS3, the UM job control system only has to link to the existing coupler executable at run time.
The coupler interface routines (otherwise known as the PRISM System Model Interface Library or PSMILe) are
linked during compilation to the UM atmosphere and NEMO-CICE models.
In order to achieve coupling in general, information is passed periodically from one sub-model to another via
the coupler. This information may be in the form of instantaneous values (e.g. presently predicted SST from the
NEMO component), or time-averaged over the period used for coupling (e.g. the time averaged solar energy flux
from the atmospheric component). In addition to regridding data using a variety of transformation methods, the
couplers have the ability to perform other operations on the regridded data, such as multiplication by or addition
of a constant (e.g. sometimes used when converting temperatures to or from Kelvin, or reversing the sign of a
particular flux so that received fluxes conform to the convention of the target component).
Technical details of coupling are given in UMDP-C02 . See also the OASIS3 3.0 user guide [59] or the OASIS3-
MCT user guide [60].

5.11.2 Time-stepping in Coupled Models

A well-defined information-passing sequence is necessary to carry a coupled model forward in time, since each
sub-model is dependent on other sub-models for forcing. The convention is that the atmosphere typically gen-
erates time averaged fields to force the ocean-seaice model, whereas the ocean-seaice component generates
instantaneous fields to force the atmosphere. The period over which the cycle of generation and exchange of
coupled data repeats is called the coupling period. In principle it would be possible to arrange for models to run
sequentially rather than concurrently. However, the UM does not currently support this and there are currently
no perceived advantages in pursuing such an approach with regard to atmosphere-ocean coupling.
Note: It is common practice to use a 3-hour coupling period in the coupled atmosphere-NEMO-CICE configura-
tion, but this can easily be changed if desired, provided consideration is given to the use of appropriate OASIS
control “namcouple” files and to the period over which any time-meaned coupling fields are accumulated and
made available.

5.11.3 Interpolation Between Sub-models

In general, the spatial grids of sub-models to be coupled will not be congruent. Indeed, the UM uses a regular
lat-long grid while NEMO and CICE typically use a tri-polar grid. The transformation of data from one grid for
use on the other grid is carried out by OASIS3 or OASIS3-MCT. A remapping weights file is required for each
grid transformation undertaken. These files are typically set up prior to the run with links set up to them by the
UM control scripts at run time. See the OASIS3/OASIS3-MCT user guide for full details.
Generation of remapping weights files for lower resolution configurations has traditionally been carried out using
SCRIP directly or the embedded SCRIP functionality within the OASIS couplers. However, for higher resolution
configurations this process can become cripplingly slow. In such cases the use of the ESMF [22] weights
generation tool is recommended.

5.11.4 Special Ancillary Forcing for Coupled Models

In general, coupled models require fewer ancillary fields to be supplied than single-component models (e.g.
atmosphere-only or NEMO-only), since the model predicts many of the fields which otherwise need to be spec-
ified as forcing or boundary conditions.

5.11.5 Coupling Information Required in Rose

In addition to setting up the details for the individual atmosphere, NEMO and CICE models, a coupled model
needs the following:
1. The relationship between the various compilation and run stages of a coupled model.
The Rose file “suite.rc” sets out the various stages involved in compiling and/or running a coupled model.
A typical coupled suite will contain:

suite conf
coupled
fcm_make_ocean
fcm_make_um
2. Coupler settings, including coupling frequency, location of coupler and associated input and control files.
Rose configuration settings:

coupled
-> command
-> command default: set to ‘‘um-coupled’’
-> env
-> COUPLER: ‘‘OASIS3’’ or ‘‘OASIS3-MCT’’
-> PRISM_NPROC: The number of PEs on which to run the coupler
(0 for OASIS3-MCT)
-> RMP_DIR: Location of OASIS remapping (rmp) weights files
-> NAMCOUPLE_DIR: Location of controlling namcouple file(s)
-> NAMCOUPLE_STUB: Stub name of controlling namcouple file(s)
-> file
-> SHARED
-> oasis_couple_freq: Coupling frequency in whole model hours
fcm_make_ocean
-> env
-> Include and library paths for the build
-> prism_path: location of coupler build
fcm_make_um
-> env
-> Include and library paths for the build
-> prism_path: location of coupler build
Note that the coupling frequency specified in namcouple files must match that specified in the UM
oasis_couple_freq value. Namcouple coupling frequencies are not automatically updated based on values
specified in Rose. If these settings are not consistent, then the coupled model is liable to deadlock.
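As a purely schematic illustration of keeping these two settings consistent (the field names and the namcouple
layout shown here are illustrative only; see the OASIS user guides for the real format): with oasis_couple_freq=3
(hours), the corresponding coupling period in the namcouple field entries must be 10800 seconds, e.g. a field
line of the form

 heatflux_atm  heatflux_ocn  1  10800  2  flxat.nc  EXPORTED

where 10800 = 3 x 3600. If the namcouple period were left at, say, 21600 while the UM expected exchanges
every 10800 seconds, one side would wait for data the other never sends, giving the deadlock described above.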

5.11.6 Coupled Models and Performance

OASIS3 is inherently non-parallel and thus a potential performance bottleneck since all coupling fields must ef-
fectively be gathered to and scattered from their full global domains in order to be dealt with by OASIS3. This
gathering or scattering can be managed explicitly within the component models, with communication between
components and OASIS carried out only by a single “master” process. Alternatively, each process in the compo-
nent may communicate its local sub-domain of data with the coupler, leaving OASIS to deal with the gathering
and scattering internally. Control of this aspect of the data exchange is simply via environment variable settings at:

coupled
-> env
-> ATMOS_COUPLE_TYPE:(M)aster or (P)arallel coupling
-> OCEAN_COUPLE_TYPE:(M)aster or (P)arallel coupling
->file
-> SHARED
-> l_couple_master: True if ATMOS_COUPLE_TYPE = M
This is provided as an option in order to easily switch from one method to the other in the case that there
are any perceived performance gains to be had by either approach. At the time of writing, on
the Met Office IBM Power6, performing the gathering/scattering explicitly within the component appears to be
marginally faster than allowing OASIS3 to perform it, particularly with regard to higher
resolution models (N216-ORCA025).
OASIS3 (version 3.0 onwards) supports the concept of “pseudo-parallelism” via multiple OASIS3 instances.
That is, more than one instance of OASIS3 may be initiated and each coupling field may then be assigned
to a particular instance of OASIS3. This effectively employs “task parallelism” rather than data decomposition
to distribute the work. In Met Office models, the pseudo-parallel approach has
been found to be an effective means of improving performance in jobs using higher resolution components (e.g.
N216 atmosphere, ORCA025 ocean) and when employing higher coupling frequencies.
There is no concept of “pseudo-parallelism” when using OASIS3-MCT. OASIS3-MCT performs its coupling
transformations in parallel, directly in the PSMILe library layer, without the need to gather or scatter data to or
from the global domain. Thus, compared with OASIS3, this particular performance bottleneck no longer exists.
Further, OASIS3-MCT requires no separate coupler executable.
Running coupled model components concurrently presents opportunities to achieve an optimum load balance, since each component can independently be given an appropriate number of processors to match the speed of the other components. The total speed of the coupled system will be constrained by:
1. The scalability limit of the slowest component. For example, if the atmosphere scales well at high numbers of PEs but scalability of the NEMO-CICE component starts to degrade as more PEs are added, then there will come a point where adding more processors to the total system brings no overall improvement in elapsed time. In such a scenario, adding further PEs to the atmosphere may speed that component up, but its progress will always be restricted by its need to exchange data with the slower NEMO-CICE which, at the limit of its scalability, by definition, can never be made to go any faster.
2. Potential OASIS bottlenecks. At the time of writing, OASIS3 costs are so small in existing operational coupled models that OASIS3 has not yet been seen to be a significant contributor to the overall coupled model cost; the scalability of the model components still tends to be the limiting factor.
3. Memory requirements of components. If one component, due to memory consumption, needs to run on a certain minimum number of PEs which causes it to run faster than the other components, then the whole system may need to be re-assessed for optimal load balance, and one or other component may have to be run using a less than optimal processor configuration. On existing Met Office IBM Power HPC platforms, OASIS3-MCT allows us to mitigate this by distributing atmosphere and ocean processes among the same nodes to achieve uniform memory use across all nodes.
Running concurrent components also presents something of a challenge in achieving load balance. Standard timing data alone (e.g. produced by UM TIMER or Dr Hook) is often not an adequate means of determining whether the various components are well synchronised. Ideally, one would be able to position coupling operations so that each component could put data to the coupler, perform some useful work while the coupler processes that data, and then get incoming data from the coupler with the minimum of delay. In practice, the code structure of the models means that this is currently difficult to achieve and there tends to be some unavoidable idle time while models wait for each other’s data to arrive. The key is to minimise this time. One of the most effective methods of load balancing has been to instrument the OASIS put and get calls within each component so as to output the exact wall clock time at which each operation starts and ends. This provides details of the relative times at which corresponding puts and gets begin in the sending and receiving components, indicating precisely how synchronised the various model components are.
Use of the “lucia” tool, originally developed by SMHI and provided by CERFACS, also allows a graphical means of analysing load balance in either OASIS3 or OASIS3-MCT. Data for this is gathered by building the coupler with the “balance” CPP key. There is a certain overhead with lucia, which means it will not provide a completely accurate view of an equivalent uninstrumented coupled model. It is, however, useful as an initial stage in assessing load balance.


5.12 Diagnostic Output (STASH)


Author: Glenn Greed, Last Updated: 14 Jul 2014

5.12.1 Introduction

The output of diagnostics from the UM is handled by the Spatial and Temporal Averaging and Storage Handling (STASH) system. STASH is very flexible, allowing users to obtain diagnostics on a timestep-by-timestep basis and to process diagnostics in both time and space. Diagnostics include intercepted fields generated as the model runs, and derived quantities. Users can choose from the wide range of diagnostics available, or interface new diagnostics into the system.
More detail on the STASH system can be found in UMDP-C04.
By default diagnostic output is sent to files with a .pp extension. (See usage profiles in section 5.12.4 for details.) These files are ‘fieldsfiles’ as defined in UMDP-F03. Each field may be called a PP field, where PP means ‘for Post Processing’. The UM output fieldsfile format has all the headers at the beginning of the file followed by all the data fields in one large block: header(1), header(2), ..., header(n) : data(1), data(2), ..., data(n). For operational output and archiving, fieldsfiles are often converted to ‘PP files’ with each data field paired up with its header: header(1):data(1), header(2):data(2), ..., header(n):data(n). The conversion to PP files uses utility programs such as um-convpp, um-ff2pp or um-qxfieldcos (see UMDP-F05 for details).
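For example, a fieldsfile can be converted from the command line as follows; the file names are placeholders and a simple two-argument form is assumed here, so check UMDP-F05 for the exact syntax and options:

# Convert a UM fieldsfile into a sequential PP file
um-convpp atmosa.pp0 atmosa.pp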

5.12.2 STASH in the Rose UM GUI

The main STASH panel is at:


um
-> namelist
--> Model Input and Output
---> STASH Requests and Profiles
----> Stash requests

Here the user selects the diagnostics required in their model run. Each diagnostic has three profiles attached to it: “dom(ain) name”, “tim(e) name” and “use name”. Initially it may be simplest to copy another app’s STASHC namelists streq(:), domain(:), time(:) and use(:) to populate it with diagnostics and profiles already set up.
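For orientation, a single request in the app’s configuration takes roughly the following form; the hash suffix, STASH section/item and profile names below are illustrative assumptions rather than a definitive template:

[namelist:streq(00024_abc12345)]
dom_name='DIAG'
isec=0
item=24
package=''
tim_name='TDAYM'
use_name='UPMEAN'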
At the top of the STASH requests panel there are a number of macros that may be used to tidy and check the
requested stash namelists:
• stashindices.TidyStashTransform - Correct the index of the STASH related namelists
• stashindices.TidyStashTransformPruneDuplicated - Correct the index of the STASH related namelists and
prune duplicates
• stashindices.TidyStashValidate - Check STASH related namelists have the correct index
• stashtestmask.STASHTstmaskValidate - Check that the stash requests are available
Once done, the user is advised to work through the above macros to check that their STASHC namelists are correct prior to submitting a suite.

5.12.3 Setting up the Diagnostic List

To add diagnostics to a job, click the “new” button at the top right, which brings up a list of all the UM diagnostics from which you can select. Once added, one needs to attach the domain, time and use profiles, simply done by clicking on the various columns and picking from the drop-down menus. Essentially, the user chooses to add or delete diagnostics from one of the available sections. A useful feature is the option to disable (rather than delete) various diagnostics so that they can be reactivated later if required. Enabling/disabling is done by clicking on the ‘Include’ checkboxes.


5.12.4 Profiles

After choosing the diagnostics list, the user needs to attach three profiles (Time, Domain and Usage) to each diagnostic. The profiles are identified by their names; the user creates and alters these names, and there is no specific naming convention.
Detailed information on any profile may be viewed either by right-clicking on the profile name in the stash requests panel and selecting view, or by selecting any of the profiles directly from the navigation panel on the LHS of the GUI. Please note that the profile names on the LHS are the ‘hashed’ names rather than the profile names. Rose deliberately hashes every profile/request to ensure they are unique and not duplicated.

Time Profiles

These determine when a diagnostic will be output and the period over which any time processing (e.g. accumu-
lations or means) is done.

Domain Profiles

These determine the geographical area and level(s) for which the diagnostic will be output.

Usage Profiles

These specify the output unit, and hence the fieldsfile, that the diagnostic will be written to. Diagnostics can be sent to more than one output unit by repeating the diagnostic in the list and attaching different usage profiles.
The unit numbers for user fieldsfiles are in the range 60–69 and 151. Output written to units 60, 61, ..., 69 and 151 is stored in files with extensions .pp0, .pp1, ..., .pp9 and .pp10. As an example, output sent to unit 64 from run task ATMOS will go to the file atmos.pp4. Fieldsfiles can also be periodically opened and closed during a model run. This is called reinitialisation and such fieldsfiles have extensions .pa, .pb, ..., .pj and include an automatically coded model date/time.
To select this, visit

um
-> namelist
--> Model Input and Output
---> Model Output Streams

Usage profiles can also direct output to files in connection with climate meaning. For further details, see section
5.13.
Not all diagnostic fields go into a fieldsfile. Some fields are stored internally to be passed between different
sections of the UM during a model run. Through the Usage profile these fields are given a TAG number to
enable the UM to search for them by that TAG number when required.

5.12.5 The STASHmaster file

The STASHmaster file is a centrally held control file which defines the characteristics of every prognostic and diagnostic available to the UM. The metadata record is described fully in UMDP-C04. Typical information stored includes what grid type and level type (model or pressure) the data are stored on and how the data should be packed. It is this information that the STASH panels in the GUI also use to verify whether each diagnostic, and its profiles, have been set up correctly.
Any prognostic or diagnostic used in the UM must be defined in this file.


Altering the stashmaster

One can alter the stashmaster in their branch for their own purposes, for example to add new prognostics,
diagnostics or ancillary fields.
Adding new prognostics or diagnostics usually requires changes to the UM code. A “user ancillary” system exists to allow the user to add new fields and use them as ancillary fields in the UM without the need to modify any top-level code. Of course, code changes are required for these fields to affect the evolution of the model prognostics. These ‘User Ancillaries’ use reserved section 0 STASHcodes:
• 301–320 : For atmosphere single-level ancillaries
• 321–340 : For atmosphere multi-level ancillaries
More details on ‘User Ancillaries’ can be found in section 5.8.6 and the HELP panels.

5.12.6 Initialisation of User Prognostics

After providing extra records through an altered stashmaster, the user may choose to initialise any prognostic
fields for a run. This is done by adding an appropriate &items namelist in

um
-> namelist
--> Reconfiguration and ancillary control
---> Configure ancils and initialise dump fields

and selecting the required ”source”.
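As a hedged sketch only (the namelist item names and the meaning of the source value shown here are assumptions based on typical vn10.x configurations and should be verified against the panel help), initialising a hypothetical single-level user ancillary from an ancillary file might look like:

&items
 stash_req     = 301,                        ! hypothetical user single-level ancillary STASHcode
 source        = 2,                          ! assumed code for 'initialise from an ancillary file'
 ancilfilename = '$ANCIL_DIR/my_user_ancil', ! illustrative path
/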


When a macro is selected, STASH generates the associated diagnostics in a specific fieldsfile during the model run. The help page for each macro shows where the diagnostics go.
The full list of diagnostic fields set up by a macro can be browsed in the STASHC file in the processed job directory after the job has been processed. If you need to add to or alter the diagnostics in a macro, this has to be done through hand-edits to the STASHC file.


5.13 Climate Meaning


Author: S.D. Mullerworth, Last Updated: 21 Jul 2010
The climate meaning system generates long period means of data from the Unified Model atmosphere. The
most common use of the climate means system is to generate monthly, seasonal, annual and decadal means
of diagnostics.
While STASH could theoretically generate such long period means, numerical precision issues arise when summing fields in the STASH system every timestep for months or years. The climate means system avoids such issues by generating, to use a typical example, a monthly mean from three 10-day means provided by STASH, then generating a seasonal mean from the monthly means it has calculated, annual means from the seasonal means and decadal means from the annual means.
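As a worked example of this hierarchy, on a 360-day calendar with STASH supplying 10-day means as the lowest period:

monthly mean  =  3 x 10-day means   =   30 days
seasonal mean =  3 x monthly means  =   90 days
annual mean   =  4 x seasonal means =  360 days
decadal mean  = 10 x annual means   = 3600 days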
The climate means system has some restrictions:
• the lowest period mean must be a multiple of the model dumping period;
• each higher period mean is a multiple of the period below it;
• up to 4 periods are available;
• all climate-meaned diagnostics use a subset of the same periods;
• the climate means system runs continuously throughout the whole of a long run.
When using the Gregorian calendar, only monthly, seasonal and annual means are available.
Typically, it is desirable that seasonal or annual means start from certain months (e.g. December, January, February is a winter seasonal mean). To ensure that this happens even when starting a run in November, it is possible to supply a mean reference date to which all mean periods are aligned.
Once climate meaning is set up, diagnostics have to be set up in STASH and attached to a usage profile that sends them to the climate means system. The usage profile can send fields to all, or just a subset, of the climate mean periods. STASH diagnostic requests for climate meaning should have time profiles that specify output every dump period rather than specifying an absolute time, otherwise problems may occur if the dump period is changed without the time profiles being changed. For example, if the time profile requests a daily mean while the dump period is ten days, the resulting monthly mean will actually be a mean of only days 10, 20 and 30 of the month.
Full documentation of the Climate Means system is available in UMDP-C05 .


5.14 File Utilities


Author: Glenn Greed, Last Updated: 14 July 2014
The utilities are a collection of standalone executables. Depending on your installation they are usually located
both on the remote HPC and the local Linux desktop.
Typically, they require small amounts of memory and can be used interactively. The most useful utilities use wrapper scripts to access the executable; this simplifies usage by setting up the required environment automatically.
Table 5.2 lists the utilities which are available.

Utility Platform Description

um-cumf All Compares two UM files. Useful in testing for bit comparison or highlighting differences between dumps.
um-pumf All Prints out contents of a UM file. Useful for checking headers.
um-ieee All Converts 64-bit UM files to 32-bit IEEE format for transfer to workstations/Linux desktop.
um-convpp All Converts a fieldsfile into a sequential PP-file for PP processing. Allows users to display dump data on workstations/Linux desktop using PP-based software. Must be run on the same platform as where the data will be displayed.
um-makebc All Creates a boundary dataset from model analyses or dumps. For further details see UMDP-F54.
frames All Cuts out required data from fieldsfiles to reduce size for future generation of LBCs. For further details see UMDP-F57.
vomext All Utility used for 3DVOM. For further details see UMDP-F56.
um-fieldcalc All Filters and performs calculations on fieldsfiles. For further details see UMDP-F53.
um-cutout All Cuts out a fixed-resolution domain contained in a variable resolution grid. This is a wrapper for FieldCalc which automatically generates an appropriate namelist. See UMDP-F53.
um-ff2grib All Converts FF to GRIB2 file format. This is dependent upon the user supplying/pointing to a suitable metadata translation table between STASH codes and GRIB2 codes.

Table 5.2: File Utilities

The majority are run by wrapper scripts located in directory $UMDIR/vn{VN}/{TARGET_MC}/utilities. The
environment variable UMDIR is automatically initialised at logon time. Unless otherwise stated in table 5.2, the
syntax for all the utilities is given in UMDP-F05 .
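As a hedged illustration of typical interactive use (the file names are placeholders and the full option sets are given in UMDP-F05):

# Print the headers of a model dump
um-pumf atmos_dump
# Compare two dumps, e.g. to test for bit comparison between runs
um-cumf atmos_dump_run1 atmos_dump_run2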
Unified Model data files are binary files in a format described in UMDP-F03 . They include model dumps,
fieldsfiles (“.pp[0-9]” files output by the UM), ancillary files, boundary datasets and observation files.
Most of these utilities will manipulate files created by earlier versions of the model. Any exceptions are stated explicitly in UMDP-F05. All other executables are either used by the UM during a run or are specialised utilities used within the Met Office. For further information please contact the UM system team; external users can seek help from their local support network.


5.15 Troubleshooting
Author: Glenn Greed, Last Updated: 14 July 2014
The UM is a large and complex system with many opportunities for mistakes to occur, and errors can become apparent at a number of stages. The logical sequence of a successful run of the model starts with the job being defined through the UM GUI/input namelists. The job is launched by Rose on the target machine and the UM program scripts are initiated. A standard pattern for a Rose suite is first to compile the model to generate a new executable (fcm_make task), then invoke the reconfiguration program (recon task) and finally run the model (atmos task). For any particular Rose suite, each execution stage is optional. Of course, these stages may all complete successfully but the model integration may still contain serious errors, as revealed by inspection of the model output fields.
Each UM version has a set of Release Notes, available for Met Office users on the Trac wiki pages. These contain a list of problems and bugs found in that version of the UM, as well as other useful information which is well worth perusing.
The first step in troubleshooting the UM involves examining the relevant output file. The output from a compilation step is contained in something akin to
/cylc-run/(suite name)/share/fcm_make_(app name)/fcm-make.log
The stdout and stderr from the recon and atmos tasks may easily be accessed through the Rose Bush web interface
rose suite-log
by clicking on the appropriate task err or out links. If either task has failed, typically the output will contain an error message stating the problem, although sometimes these messages can be rather opaque and hard to find in the rather verbose UM output. A standard model compilation and run can easily produce 10,000 lines of printout, and it is useful to have some idea of the overall structure of the information and to know what to search for. A job output listing will consist of:
1. Unix messages from scripts;
2. output from Fortran and C routines;
3. Unix housekeeping and job accounting information.
When running in parallel, note that an OUTPUT file is generated for each processor and held temporarily in separate files, optionally deleted at the end of the job. Only the file corresponding to PE0 is included in the job output listing, with the title “PE0 OUTPUT”. If there isn’t an error message in the PE0 output, it is worth checking the output from the other PEs, as it is possible that only a subset of PEs encountered the error.
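For example, assuming the default per-PE output prefix described in appendix A (pe_output/atmos.fort6.pe) and working from the task's work directory, all PEs can be searched at once:

# Search every PE's text output for error messages (prefix assumed as above)
grep -i "error" pe_output/atmos.fort6.pe*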
Runtime output will appear in the following order in job output files. The variables starting “%” can be used for
searching but are also in the listing of SCRIPT if requested.
• Rose Script output:
• %PE0 : all output from PE0
– “Atmosphere run-time constants”: details of science settings.
– “READING UNIFIED MODEL DUMP” : details of the input starting dump
– “Atm Step: Timestep 1” : initialisation completed, atmosphere time-stepping started
– “END OF RUN - TIMER OUTPUT”: timing information
Typical problems include:
1. User errors in model set-up.
• Incorrect filenames specified (including user-specific variables and typos)
• Logic errors in user updates;
• Inappropriate parameter values chosen.
2. UM system errors in control.
3. Science formulation errors.


4. Input data errors.
5. Computer operating system/hardware problems.
If the job appears to complete successfully but there are still errors, utilities such as um-pumf and um-cumf, described in section 5.14 (File Utilities), enable a user to inspect and compare model dumps, boundary and ancillary files, and output pp files. The external tool xconv is particularly useful for a quick examination of a dump (in both graphical and numerical forms).


Appendix A:
Output From the Unified Model
Author: Martyn Foster, Last Updated: 27 Feb 2014

A.1 Introduction

This chapter describes the behaviour and configuration of output from the Unified Model (UM) and its related executables and utilities (the reconfiguration program and the “small executables”). By “output” we mean the actual text output written by the model as it runs. Other files generated by the model, e.g. diagnostic, restart and history files, are documented elsewhere. The UM is an application designed for POSIX environments, and as such generates two output streams: output and error. Additionally, the UM and reconfiguration are parallel programs, using task parallelism (MPI) and thread parallelism (OpenMP) to achieve high performance. Each MPI task has its own output and error stream which, if unmanaged, would be combined in an implementation-dependent fashion by the MPI library. Consequently, the UM uses a separate managing subsystem (“Print Manager”) to output text within the program.
The UM is generally not run directly, but via an operating environment such as Rose. Rose takes responsibility for presenting the user with the relevant output and error from the model run, which in many cases is sufficient for the end user. It is not the purpose of this section to document the functioning of Rose, but it is worth noting that the outputs of Rose labelled OUT and ERR are not the direct output and error streams from the model, but comprise:
• OUT The combined output streams of all MPI tasks before the UM initializes (this output is generally minimal, but contains, for example, the initial GCOM version banner), together with a copy of the output stream of the first MPI task, taken after completion of the job.
• ERR The combined error streams of all MPI tasks. Any fatal error (ereport) messages generated by any task are also presented.

A.2 Managing Output

A.2.1 Location of output files

During initialization, the UM will close the default output stream. Unique files for each MPI process are used for subsequent output, if required. These files will be created as needed if and when the model needs to write to them. The filenames are constructed with the form:
• {$ATMOS_STDOUT_FILE}XXX for the UM.
• {$RECON_STDOUT_FILE}XXX for the Reconfiguration.
• {$STDOUT_FILE}XXX for the parallel UM utility Crmstyle-coarse-grid.
where the $*STDOUT_FILE variables all default to pe_output/atmos.fort6.pe. Unless the $*STDOUT_DIR variables are absolute paths, these directories will be relative to the directory from which the UM is executed, typically the work directory for that Cylc task, $CYLC_TASK_WORK_DIR. Finally, “XXX” is the MPI rank of the process within the communicator holding the model, and uses a fixed-width format depending on the number of processors (so, for example, a task with 32 processes will produce output files ending pe00, pe01, ..., pe31). This communicator contains IO servers and atmosphere tasks, but is not generally MPI_COMM_WORLD, as the atmosphere/IOS combination may be coupled to other models such as NEMO.
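For instance, for the 32-process example above run with the default settings, one would expect a listing along these lines:

ls pe_output/
# atmos.fort6.pe00  atmos.fort6.pe01  ...  atmos.fort6.pe31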
With the exception of the parallel utility named above, the remaining UM utilities do not re-target their output in this way; however, their driving scripts may redirect the output streams to specific files.


A.2.2 Selecting the amount of output generated

The UM can be configured to output different amounts of data via an environment variable ($PRINT_STATUS, or $RCF_PRINTSTATUS for the reconfiguration), with increasing verbosity:
• PrStatus_Min Minimal output
• PrStatus_Norm Normal output
• PrStatus_Oper Operationally required output
• PrStatus_Diag Diagnostic and debugging output
The IO Server component (see UMDP-C10) has a separate, but analogous, set of controls for its output. In the UM and reconfiguration, the level may generally be chosen to reflect the desired level of information required, but some “small executables” rely on a particular level for their normal behaviour; for example, “pumf” normally runs at level PrStatus_Diag in order to output the expected information.
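A minimal sketch, assuming these variables are simply exported in the task environment (for example via the relevant app's env settings):

# Request the most verbose output from the UM and the reconfiguration
export PRINT_STATUS=PrStatus_Diag
export RCF_PRINTSTATUS=PrStatus_Diag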

A.2.3 Error Reports (ereport)

The UM will generate error and warning reports if an unexpected condition occurs. Error reports cause the program to terminate, whereas warnings do not. The associated message will be written to the output stream and, if it is generated by a task where “mype” is zero, also to the error stream. Tasks that have mype equal to zero are the first MPI rank participating in the atmosphere model, and the first MPI rank in any IO Server teams configured.

A.2.4 Controlling output behaviour via the Print Manager (umPrintMgr)

The Print Manager has additional controls to modify its behaviour. These are specified in the namelist “prnt_control”; a sketch of an example namelist follows the list below.
• LOGICAL prnt_src_pref If enabled, this will prefix output with the name of the source file writing the message. It is intended to enable easier debugging by identifying the origin of a message.
• INTEGER prnt_paper_width This specifies a notional width of the output medium.
• LOGICAL prnt_split_lines If enabled, this will cause output lines to be split into multiple lines if they are longer than the width specified above.
• LOGICAL prnt_force_flush If enabled, this will cause output lines to be flushed immediately (i.e. not buffered by the OS or Fortran). It is intended to aid debugging, where buffering may result in output being lost if the program hangs or crashes.
• INTEGER prnt_writers This controls which tasks will output.
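A hedged sketch of such a namelist, with arbitrary example values (prnt_writers is omitted here since its valid settings are not described above):

&prnt_control
 prnt_src_pref    = .true.,   ! prefix each message with its source file name
 prnt_paper_width = 132,      ! notional width of the output medium
 prnt_split_lines = .true.,   ! wrap lines longer than prnt_paper_width
 prnt_force_flush = .true.,   ! flush output immediately; useful if the run hangs
/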


Appendix B:
Error Handling In the Unified Model
Author: Martyn Foster, Last Updated: 27 Feb 2014

B.1 Introduction

The Unified Model provides an exception handler which is invoked in various failure situations. The purpose of the handler is to provide: a) a means of tuning behaviour on a given platform; b) more consistent and useful reporting of errors across a range of platforms; and c) a mechanism for performing post-failure operations.
The error handling code is invoked when the application exits via an error report (ereport) or similar mechanism, or when the application fails in such a way that results in intervention by the operating system. Not all events that result in operating system intervention are intercepted by default, as such a complete set would not be machine portable.
It should be remembered that a failing program is very possibly in an undefined state, and thus there is no guarantee that any signal handler will execute correctly.

B.2 Default Exceptions

By default the subsystem will handle events resulting from the following signals being delivered by the OS:
• SIGTRAP Trace/break-point (notably including integer division by zero under AIX)
• SIGFPE Floating point exception, such as division by zero, overflow, etc.
• SIGILL An illegal instruction was issued.
• SIGBUS An unsupported load or store was attempted.
• SIGSEGV The process attempted to read from, or write to, memory that it did not own.
• SIGXCPU A signal usually delivered by a scheduler when the job's wall clock limit has been reached.

B.3 Overriding Default Behaviour

There are several reasons to override the default list: either to include additional signals, or to disable trapping of some of the defaults. It is possible that external programs, particularly debugging and performance analysis tools, require that they handle such signals themselves and will not function correctly if the UM intercepts them. Additionally, some tools may have better signal handlers dedicated to specific purposes, such as compiler-based array bounds checking; in these situations valuable detailed information may be lost by using the UM's more generic built-in handler.
To override the default set, a colon-separated list of the numeric signal numbers may be provided via the environment variable UM_SIGNALS; e.g. UM_SIGNALS=8:11 will trap floating point exceptions (SIGFPE) and segmentation errors (SIGSEGV) on most machines. Numeric values and behaviour of system signals can usually be found in the manual pages, signal(7) or signal(2). Setting the environment variable to nothing will result in no signals being intercepted, and the OS and/or compiler-based defaults being used. If signals are overridden, this will be reported in the job's error stream.
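A minimal sketch of the two cases described above, assuming UM_SIGNALS is exported in the task environment:

# Trap only SIGFPE (8) and SIGSEGV (11), as on most machines
export UM_SIGNALS=8:11
# Disable UM signal trapping entirely, falling back on OS/compiler defaults
export UM_SIGNALS=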
If the UM is being deployed on a new machine or architecture, and problems are experienced, it is recommended
that signal handling is disabled in this way to ensure there is no interference from the host platform.


B.4 Post-failure actions

Using the signal handling mechanism, it is possible to register additional functions to be called in the event of failure. One application of this is to recover any buffered output from the program at the time of failure. Additional actions may be registered with a call to signal_add_handler(my_handler), where my_handler is a routine that takes no arguments; registered handlers will be called, in the order they were registered, in the case of an error.


Appendix C:
Acronyms
3DVAR: Three dimensional variational analysis (replaced the analysis correction scheme)
4DVAR: Four dimensional variational analysis (the latest data assimilation system which has replaced the 3DVAR
scheme)
AC: Analysis Correction (still used for MOPS data assimilation)
ACOBS: Observation files used for the analysis correction data assimilation component of the UM
ADCP: Acoustic Doppler Current Profiler
ANSI: American National Standards Institute
ASCII: American Standard Code for Information Interchange
AVHRR: Advanced Very High Resolution Radiometer
BUFR: Binary Universal Form for the Representation of meteorological data. It is a standard developed by WMO
for the efficient storage of meteorological observations in a machine independent form, where all the
information to describe the observations is contained within the data.
C90: Cray 90 Supercomputer
CAMMS: Crisis Area Mesoscale Model Service
CAPE: Convective Available Potential Energy
CDC: Control Data Corporation. Brand of supercomputer previously used at the Met Office
CERFACS: Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (European Centre for Advanced Research and Training in Scientific Computing)
CFC: Chloro-Fluoro-Carbons
CFO: Central Forecasting Office
CMT: Convective Momentum Transports
COSMOS: Front end computing system at the Met Office, residing on an IBM mainframe computer, and separate
from the workstation networks and supercomputer.
CR: Climate Research (Hadley Centre)
CRUN: Continuation Run
CX: Columns of observation locations
DEF: Abbreviation for definition used by source code preprocessor
ELF: Equatorial Lat-long Fine-mesh (usually a rotated LAM grid)
ECMWF: European Centre for Medium-Range Weather Forecasts
ERA40: ECMWF Re-Analysis (40+ years)
ERS: European Research Satellite
FCM: Flexible Configuration Management
FLUME: Flexible Unified Model Environment
FOAM: Forecasting Ocean Atmosphere Model
FR: Forecasting Research (defunct), see NWP
FS: Forecasting Systems (old), see OS


GCM: General Circulation Model


GCOM: Generalized Communication package
GFDL: Geophysical Fluid Dynamics Laboratory
GHUI: Generic Hierarchical User Interface
GLOSS: Global locally-processed satellite soundings
GRIB: WMO standard for GRIdded Binary data. It is designed to provide efficient storage of data on a variety of grids
GUI: Graphical User Interface
GWD: Gravity Wave Drag
HADAM: Hadley Centre Atmospheric Model
HADCM: Hadley Centre Coupled Model
HIRLAM: High Resolution Limited Area Model
HP: Hewlett Packard. Brand of workstation previously used at the Met Office.
HTML: Hyper Text Markup Language
IAU: Incremental Analysis Update
IBM: International Business Machines. Brand of computers used at the Met Office.
IEEE: Institute of Electrical and Electronic Engineers
IGBP: International Geosphere-Biosphere Programme
JCL: Job Control Language (IBM)
JCMM: Joint Centre for Mesoscale Meteorology
JULES: Joint UK Land Environment Simulator
LAM: Limited Area Model
LASS: Local Area Sounding System
LBC: Lateral Boundary Condition(s)
LS: Linearisation State
LSM: Land-Sea Mask
LSP-SVAT: Land Surface Process - Soil Vegetation Atmosphere Transfer
MARS: Meteorological Archival and Retrieval System
MASS: Managed Archive Storage System
MES: Mesoscale Model
MetDB: Meteorological Data Base (of observations prior to processing for the UM)
MLD: Mixed Layer Depth
MOPS: Moisture Observation Pre-processing System
MOSES: Met Office Surface Exchange Scheme
MOUSE: Met Office Unified Storage Environment
MPI: Message Passing Interface
MPP: Massively Parallel Processors
MSL: Mean Sea Level
MSLP: Mean Sea Level Pressure (see PMSL)
MVS: Multi Virtual Storage IBM mainframe operating system - in use at the Met Office since 1979


NAE: North Atlantic and Europe


NASA: National Aeronautics and Space Administration
NCAS: National Centre for Atmospheric Science
NEC: Nippon Electric Company. Brand of supercomputer formerly used at the Met Office.
NOAA: National Oceanic and Atmospheric Administration
NRUN: Normal Run
NWP: Numerical Weather Prediction
OASIS: Ocean Atmosphere Sea Ice Soil; a distributed O-AGCM coupler developed at CERFACS.
O-AGCM: Ocean-Atmosphere Global Climate Model
OPS: Observation Processing System
OS: Operational Services
PC2: Prognostic Cloud 2
PE: Processing Element
PL: Programmer Letters
PMSL: Pressure at Mean Sea Level
PP: Post Process (Met Office proprietary file format)
PVP: Parallel Vector Processor
RCS: Revision Control system
RH: Relative Humidity
RFS: Rolling Field Store
Rose: The (Met Office) Environment for Scientific Suites and Applications
SCM: Single Column Model
SGI: Silicon Graphics Inc., a brand of computer.
SMF: StashMaster File
SMP: Symmetric Multi Processor
SSO: Sub-grid Scale Orography
SSS: Sea Surface Salinity
SST: Sea Surface Temperature
STASH: Spatial and Temporal Averaging and Storage Handling (storage handling and diagnostic system)
STOCHEM: 3D Lagrangian tropospheric chemistry model
SQL: Structured Query Language
SW: Short Wave (radiation)
TCL-TK: Tool Command Language/Tool-Kit
TCR: Tropical Convective Rainfall
TIC: Task Identification Code (for administrative accounting)
TKE: Turbulent Kinetic Energy
TRIFFID: Top-down Representation of Interactive Foliage and Flora Including Dynamics
TRIP: Total Runoff Integrating Pathway
UARS: Upper Atmosphere Research Satellite


UI: User Interface


UK4: UK 4 km model.
UKCA: UK Chemistry and Aerosols scheme
UKV: UK Variable resolution model 4km → 1.5km.
UKMO: United Kingdom Meteorological Office
UM: Unified Model
UMACS: Unified Model Analysis Correction Scheme
UMDP: Unified Model Documentation Paper
UMPL: Unified Model Program Library (deprecated)
UMSL: Unified Model Script Library (deprecated)
UMBUG: Unified Model Basic User Guide
UMUI: Unified Model User Interface
UNICOS: Unix-like operating system on Cray supercomputer
VAR: VARiational analysis system
VARUI: VAR User Interface
WAM: Wave Forecast Model (deprecated)
WBPT: Wetbulb Potential Temperature
WGDOS: Working Group on the Development of the Operational Suite
WGDUM: Working Group on Development of the Unified Model
WMO: World Meteorological Organisation


Appendix D:
Definitions
This definitions section was added to the User Guide at Version 8.1, so other parts of the User Guide and documentation may be inconsistent with these definitions. The rest of the model documentation will be checked for consistency with these definitions as those sections are rewritten.
• Diagnostic Variable: A model variable is said to be diagnostic when it can be calculated, at any given time, from other model variables at that time, without reference to the value of the variable at other times.
• Diagnostic Scheme: A diagnostic scheme represents a process within a model by means of only diag-
nostic variables. The diagnosed variables will be calculated using prognostic variables and the scheme
can affect one or more of the model prognostic variables. For example, the UM radiation scheme (sec-
tion 3.1.2) is diagnostic, in that it calculates the fluxes of radiant heat at different locations, levels and
wavelengths through the atmosphere based on a large number of model prognostic variables. These
fluxes then alter some of those variables, but the radiative fluxes themselves are diagnostic, not prognos-
tic.
• Prognostic Variable: A model variable is said to be prognostic if its value at the current and/or previous
time is required in order to calculate it at a future time.
• Prognostic Scheme: A prognostic scheme is formulated so that it represents a process within a model by means of calculating the change over time of one or more prognostic variables. A prognostic scheme can
interact with other prognostic variables, so affecting the evolution of the model as a whole, or it can affect
only the evolution of its own variables. For example the UM Aerosol scheme (section 3.1.4) is prognostic
in that it is concerned with the evolution in time of a set of prognostic variables, namely the aerosol mixing
ratios. Depending on the model configuration in use, these prognostic variables can be used to affect the
evolution of other prognostics through the radiation or cloud microphysics schemes. These effects can be
switched off, or overridden by climatological values, but the scheme is still prognostic.

