Research · May 2015 · DOI: 10.13140/RG.2.2.25980.03207
Nazmi Gendzh, National University of Ireland, Maynooth
Publication page: https://www.researchgate.net/publication/325578958
Uploaded by the author on 05 June 2018.


STATISTICAL ANALYSIS OF HYDROLOGICAL DATA USING
THE HBV-LIGHT MODEL FOR THE BOYNE CATCHMENT

BY
NAZMI GENDZH

MSc Climate Change 2014-15


GY669: Hydrology; Variability and Change

Maynooth University

Faculty of Science
Department of Geography

May 2015
Table of Contents

Preface
Introduction
Background
Task 1 – Manual Calibration of the HBV-light model
Task 2 – Model Parameter Uncertainty
2.1. Monte Carlo Assessment – dotty plots
Task 3 – Calibrate and Validate the HBV-Light model for the Boyne Catchment
3.1. Set up the model ready to run
3.2. GAP Optimisation – Calibration
3.3. Check model performance during validation
3.4. Check model performance for the full period
3.5. Monte Carlo Uncertainty – Comparison of Calibration to Validation
4. Statistical analysis of the HBV-Light Model on the Boyne Catchment at Slane Castle
Discussion and conclusion
References
Glossary of Terms
Glossary of Routines and Parameters

Table of Tables from section 4

Table 4.1. Statistics for the 100 best HBV-Light parameter sets for the Monte Carlo Calibration period.
Table 4.2. Statistics of the 100 best HBV-Light parameter sets simulated using the Monte Carlo Batchfile runs for the Calibration period (1955 – 1979).
Table 4.3. Comparison of the statistics between the calibration, validation and full period for the 100 best HBV-Light parameter sets simulated.
Table of Figures

Figure 1.1. Uncalibrated PTQ plot for the period 01-10-1981 to 01-10-1982.
Figure 1.2. Calibrated PTQ plot for the period 01-10-1981 to 01-10-1982.
Figure 2.1.1. Dotty-plots of the Obj. function Reff, for the Snow Routine parameters.
Figure 2.1.2. Dotty-plots of the Obj. function Reff, for the Soil Routine parameters.
Figure 2.1.3. Dotty-plots of the Obj. function Reff, for the Response Routine parameters.
Figure 4.1. Dotty-plots of the Obj. function (Reff values), for the 100 best HBV-Light parameter sets for the calibration period (01-01-1955 to 31-12-1979).
Figure 4.2. Comparison of Reff and R2 scores between the Calibration period (1955 – 1979) and Validation period (1980 – 2009).
Figure 4.3. Comparison of LogReff scores between the Calibration (1955 – 1979) and Validation period (1980 – 2009).
Figure 4.4. Comparison of LogReff scores greater than 0.6, between the Calibration (1955 – 1979) and Validation period (1980 – 2009).
Figure 4.5. Comparison of the mean annual observed and simulated median, and the Q10 and Q90 flow exceedance percentiles of streamflows for the full period (01-01-1955 to 31-12-2009).
Figure 4.6. Comparison of the mean annual observed and simulated median, and the Q10 and Q90 flow exceedance percentiles of streamflows for the validation year 1980.
Figure 4.7. Comparison of the observed and simulated mean seasonal cycle of streamflows for the full period (01-01-1955 to 31-12-2009).
Figure 4.9. Comparison of the observed and simulated mean, maximum and minimum streamflows for the period (01-01-1955 to 31-12-2009).
Figure 4.10. Comparison of the observed and simulated mean streamflows for the validation year 1980.
Figure 4.9. Comparison of the observed and simulated mean, maximum and minimum streamflows for the month of March for the period (01-01-1955 to 31-12-2009).
Introduction

Background of HBV model


The HBV (Hydrologiska Byråns Vattenbalansavdelning) model (Bergström, 1976) is a
rainfall-runoff model, which includes conceptual numerical descriptions of hydrological
processes at the catchment scale. The general water balance can be described as:

P − E − Q = d/dt [SP + SM + UZ + LZ + lakes]
Where:

P = precipitation
E = evapotranspiration
Q = runoff
SP = snow pack
SM = soil moisture
UZ = upper groundwater zone
LZ =lower groundwater zone
lakes = lake volume
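The water balance above can be checked numerically. A minimal sketch in Python (all state values below are illustrative, not Boyne data):

```python
# Minimal sketch of the HBV water balance: over one time step, the change in
# total storage (snow pack, soil moisture, upper/lower groundwater zones,
# lakes) must equal precipitation minus evapotranspiration minus runoff.
# All values are illustrative daily totals in mm, not Boyne catchment data.

def storage_total(state):
    """Sum of all storage terms: SP + SM + UZ + LZ + lakes (mm)."""
    return state["SP"] + state["SM"] + state["UZ"] + state["LZ"] + state["lakes"]

def balance_residual(state_before, state_after, P, E, Q):
    """Residual of P - E - Q = d/dt [SP + SM + UZ + LZ + lakes] for one day."""
    dS = storage_total(state_after) - storage_total(state_before)
    return (P - E - Q) - dS

before = {"SP": 0.0, "SM": 120.0, "UZ": 15.0, "LZ": 40.0, "lakes": 0.0}
after  = {"SP": 0.0, "SM": 123.0, "UZ": 16.0, "LZ": 40.5, "lakes": 0.0}

residual = balance_residual(before, after, P=8.0, E=1.5, Q=2.0)
print(residual)  # 0.0 when the budget closes
```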

For the purposes of this class journal, the more user-friendly HBV-Light version of the model
will be used.

Manual available at:

http://www.geo.uzh.ch/fileadmin/files/content/abteilungen/h2k/Docs_download/HBV_manual_2005.pdf

Aims and objectives:

Complete Tasks 1, 2 and 3 and section 4: a statistical analysis of the Boyne Catchment using the calibration, validation and full period models in HBV-Light.

Note: terms in italics are defined in the glossary section.


Note: Parameter and Routine descriptions are also in the Glossary section.

Task 1 – Manual Calibration of the HBV-light model

Learning Outcome: Getting to know the model and the functioning of the model
parameters.
Introduction: The HBV model is calibrated for the HBV-land catchment for the period 01-09-1981 to 31-08-1991 (with a warm-up period starting on 01-01-1981). This synthetic catchment behaves exactly like the HBV model, so a perfect fit (Reff = 1) is attainable.
Procedure: The model is calibrated by adjusting the parameters until the simulation matches the observations.
PTQ plot before calibration:

Figure 1.1. Uncalibrated PTQ plot for the period 01-10-1981 to 01-10-1982.

Analysis: The effects that different parameters have on the model output (in terms of model efficiency):
When each of the Snow Routine parameters (TT, CFMAX, SFCF, CFR and CWH),
Response Function parameters (PERC, Alpha, UZL, K0, K1 and K2) and the CET parameter
were decreased, the efficiency of the model output increased.
When each of the Soil Routine parameters (FC, LP and BETA) and the Routing Routine
parameter (MAXBAS) were increased, the efficiency of the model output increased.

PTQ plot after calibration:

Figure 1.2. Calibrated PTQ plot for the period 01-10-1981 to 01-10-1982.

Summary: The model output is improved by adjusting the parameters until the simulation matches the observations. Each parameter therefore provides information about the catchment variables. For example, a threshold temperature (TT) below zero tells us that part of the precipitation accumulated as snow.
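Reff, the efficiency measure used throughout, is the Nash–Sutcliffe model efficiency. A minimal sketch of its computation (the discharge values below are illustrative, not HBV-land data):

```python
# Nash–Sutcliffe model efficiency (Reff): 1 minus the ratio of the squared
# simulation error to the variance of the observations. Reff = 1 is a perfect
# fit; Reff <= 0 means the model is no better than the mean of the observations.
def reff(q_obs, q_sim):
    mean_obs = sum(q_obs) / len(q_obs)
    sse = sum((o - s) ** 2 for o, s in zip(q_obs, q_sim))
    var = sum((o - mean_obs) ** 2 for o in q_obs)
    return 1.0 - sse / var

# Illustrative discharge values (mm/day):
q_obs = [1.2, 2.5, 3.1, 2.0, 1.5]
q_sim = [1.0, 2.7, 3.0, 2.2, 1.4]
print(round(reff(q_obs, q_sim), 3))  # 0.94
```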

Task 2 – Model Parameter Uncertainty


Introduction: HBV-light allows many model runs with randomly generated parameter sets
using the “Monte Carlo Runs” tool. Parameter uncertainty is estimated by allowing all/many
parameters to vary.

2.1. Monte Carlo Assessment –dotty plots


Learning Outcome: Testing the Monte Carlo tool and producing dotty-plots.
Procedure: In the Monte Carlo tool, the maximum and minimum are set for all parameters for the HBV-land catchment. The number of model runs is set to 1000 (for a scientific journal this would be several thousand, but that would take much longer to run). Choose "save only if objective function > 0.6" (to avoid large files), where the objective function is Reff. The dotty-plots are produced by plotting individual parameter values against model efficiency (Reff) in Excel, and are used to visualise the optimal parameters and their ranges.
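Outside HBV-Light, the Monte Carlo procedure above can be sketched as follows. The scoring function here is a stand-in for the model, and the parameter subset and limits are illustrative:

```python
import random

# Toy Monte Carlo assessment: sample each parameter uniformly within its
# limits, score the run, and keep parameter sets whose objective function
# (Reff) exceeds 0.6 -- these are the points shown on a dotty plot.
# The "model" below is a stand-in scoring function, not HBV-Light.
limits = {"TT": (-1.5, 2.5), "FC": (50, 500), "BETA": (1, 6)}

def toy_objective(params):
    # Pretend efficiency peaks near TT = 0, FC = 250, BETA = 3.
    return 1.0 - (abs(params["TT"]) / 4 + abs(params["FC"] - 250) / 900
                  + abs(params["BETA"] - 3) / 10)

random.seed(42)
behavioural = []
for _ in range(1000):
    params = {name: random.uniform(lo, hi) for name, (lo, hi) in limits.items()}
    score = toy_objective(params)
    if score > 0.6:                      # "save only if objective function > 0.6"
        behavioural.append((params, score))

# Each (parameter value, score) pair is one dot on a dotty plot.
print(len(behavioural) > 0)
```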

Parameter   Explanation                                      Min.     Max.    Unit

SNOW ROUTINE
TT          Threshold temperature                            -1.5     2.5     °C
CFMAX       Degree-day factor                                1        10      mm °C-1 d-1
SFCF        Snowfall correction factor                       0.4      1       -
CWH         Water holding capacity                           0        0.2     -
CFR         Refreezing coefficient                           0        0.1     -

SOIL ROUTINE
FC          Maximum of SM (storage in soil box)              50       500     mm
LP          Threshold for reduction of evaporation (SM/FC)   0.3      1       -
BETA        Shape coefficient                                1        6       -
CET         Correction factor for potential evaporation      0        0.3     °C-1

RESPONSE ROUTINE
K0          Recession coefficient (upper box)                0.1      0.5     d-1
K1          Recession coefficient (upper box)                0.01     0.4     d-1
K2          Recession coefficient (lower box)                0.001    0.15    d-1
PERC        Maximal flow from upper to lower box             0        3       mm d-1
UZL         Threshold parameter                              0        70      mm
MAXBAS      Routing, length of weighting function            1        7       d

Table 2.1. Maximum and minimum of parameters for the Monte Carlo Run for HBV-Land.

Figure 2.1.1. Dotty-plots of the Obj. function Reff, for the Snow Routine parameters.

Figure 2.1.2. Dotty-plots of the Obj. function Reff, for the Soil Routine parameters.

Figure 2.1.3. Dotty-plots of the Obj. function Reff, for the Response Routine
parameters.

Analysis: Given the selected maximum and minimum limits, the parameters that are most or least constrained:
For the Snow Routine parameters (Figure 2.1.1), the threshold temperature (TT) is the most constrained parameter, and indeed the most constrained parameter overall. TT denotes the temperature at which incoming precipitation is partitioned into either rainfall or snowfall, and can be influenced by climate variability and change. Precipitation in this catchment frequently falls as snow, hence TT tends to be constrained to temperatures at which precipitation falls as snow (< 0 °C). The dotty-plot of TT shows a well-defined behavioural parameter, i.e. low levels of uncertainty. For the Soil Routine parameters (Figure 2.1.2), FC is the most constrained (though it still has high uncertainty because of its large range of values). For the Response Routine parameters (Figure 2.1.3), K1 is the most constrained, with most of the points (48/61 ≈ 79%) below the median of the limits (though it too retains high uncertainty due to a large range).
All the other parameters are not very constrained, with values ranging randomly across the maximum and minimum limits, i.e. large levels of uncertainty.

Summary: Threshold Temperature (TT) is the most constrained parameter in the model. It
has a well-defined behaviour and low uncertainty. The other parameters are not as well
defined hence they have high levels of uncertainty.

Task 3 – Calibrate and Validate the HBV-Light model for the Boyne
Catchment at Slane Castle.
Introduction: Calibration and validation are essential in testing the accuracy and
performance of the model runs. If the validation shows the model to be inaccurate, the model
is recalibrated until the required model accuracy is reached.
The following files are needed for the model runs:
1. ptq.txt - contains time series of daily precipitation (mm/day), temperature (°C) and discharge (mm/day).
2. EVAP.txt - contains values for the potential evaporation (mm/day).
The template for storing catchment data:
C:\Modelling\Boyne\Data (where the ptq and EVAP data are stored)
C:\Modelling\Boyne\Results (where results are output)
3.1. Set up the model ready to run.

Procedure: Having set up the folder structure, the catchment is opened in HBV-Light. In the model settings, the warm-up period is set from 01-01-1952 to 31-12-1954 and the model is calibrated for the period 01-01-1955 to 31-12-1979. The model is then run. The model efficiency score at the starting point, i.e. without calibration, is 0.4793.
For validation, the model is set to the period 01-01-1980 to 31-12-2009, with a warm-up period of 01-01-1977 to 31-12-1979.
For the full period run, the warm-up period is 01-01-1952 to 31-12-1954 and the model is run for the period 01-01-1955 to 31-12-2009.

Parameter   Explanation                                      Min.     Max.    Unit

SNOW ROUTINE
TT          Threshold temperature                            -2       0.5     °C
CFMAX       Degree-day factor                                0.5      4       mm °C-1 d-1
SFCF        Snowfall correction factor                       0.5      0.9     -
CWH         Water holding capacity                           0.1      0.1     -
CFR         Refreezing coefficient                           0.05     0.05    -

SOIL ROUTINE
FC          Maximum of SM (storage in soil box)              100      550     mm
LP          Threshold for reduction of evaporation (SM/FC)   0.3      1       -
BETA        Shape coefficient                                1        5       -
CET         Correction factor for potential evaporation      0        0.3     °C-1

RESPONSE ROUTINE
K0          Recession coefficient (upper box)                0.1      0.5     d-1
K1          Recession coefficient (upper box)                0.01     0.2     d-1
K2          Recession coefficient (lower box)                0.00001  0.2     d-1
PERC        Maximal flow from upper to lower box             0        4       mm d-1
UZL         Threshold parameter                              0        70      mm
MAXBAS      Routing, length of weighting function            1        7       d

Table 3.1. Maximum and minimum of parameters for the Monte Carlo Run for the Boyne Catchment. Note that these are more constrained than for the HBV-Land catchment.

3.2. GAP Optimisation –Calibration.

The GAP optimisation tool is used to produce model runs that are optimised using a genetic
algorithm. It is an alternative calibration method to the Monte Carlo tool.
Learning Outcome: Use GAP Optimisation on the calibration period and compare model
performances.
Procedure: Using the calibration period in section 3.1: from 01-01-1955 to 31-12-1979.
Where the warm-up period is 01-01-1952 to 31-12-1954. The GAP optimisation tool is then
run with the following settings: Number of parameter sets: 50, Model Runs: 1000
Number of runs for local optimisation (Powell): 1000, Number of times to run the model: 5
Select start to begin the calibration.
Analysis:
Reff LogReff PERC UZL K0 K1 K2 MAXBAS TT
Calibration 1 0.869 0.783 1.447 50.905 0.301 0.124 0.070 2.500 0.385
Calibration 2 0.869 0.807 1.490 53.378 0.500 0.124 0.070 2.500 0.400
Calibration 3 0.869 0.793 1.429 51.373 0.319 0.124 0.069 2.500 0.400
Calibration 4 0.869 0.784 2.086 45.274 0.274 0.130 0.080 2.500 0.447
Calibration 5 0.869 0.768 2.142 45.002 0.261 0.129 0.080 2.500 0.000
Reff LogReff CFMAX SFCF CFR CWH FC LP BETA
Calibration 1 0.869 0.783 3.927 0.845 0.050 0.100 255.826 0.828 5.000
Calibration 2 0.869 0.807 3.336 0.847 0.050 0.100 267.338 0.862 5.000
Calibration 3 0.869 0.793 3.316 0.845 0.050 0.100 258.922 0.838 5.000
Calibration 4 0.869 0.784 3.727 0.900 0.050 0.100 267.302 0.858 5.000
Calibration 5 0.869 0.768 1.090 0.823 0.050 0.100 255.689 0.847 5.000
Reff LogReff PERC UZL K0 K1 K2 MAXBAS TT
Max Calibration 0.869 0.807 2.142 53.378 0.500 0.130 0.080 2.500 0.447
Mean Calibration 0.869 0.787 1.719 49.186 0.331 0.126 0.074 2.500 0.326

Reff LogReff CFMAX SFCF CFR CWH FC LP BETA
Max Calibration 0.869 0.807 3.927 0.900 0.050 0.100 267.338 0.862 5.000
Mean Calibration 0.869 0.807 3.079 0.852 0.050 0.100 261.015 0.847 5.000

Table 3.2. Model Efficiency scores of the 5 model runs from each calibrated parameter
and their maximum and mean scores.
Different parameter values can have the same Reff score, e.g. TT values of 0.0 and 0.4 both correspond to a Reff score of 0.869; i.e. there is a problem of equifinality (too many possible solutions). Looking at other objective functions such as LogReff can therefore help select the most optimal run, and hence which parameter set to use.
Summary: For the calibration period, the maximum and mean Reff are both 0.869, and the maximum LogReff is 0.807.
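The tie-breaking by a second objective function described above can be sketched as follows (Reff and LogReff values taken from Table 3.2):

```python
# When several calibration runs share the same Reff (equifinality), a second
# objective function such as LogReff can break the tie. The Reff/LogReff
# values below are the five GAP calibration runs from Table 3.2.
runs = [
    {"name": "Calibration 1", "Reff": 0.869, "LogReff": 0.783},
    {"name": "Calibration 2", "Reff": 0.869, "LogReff": 0.807},
    {"name": "Calibration 3", "Reff": 0.869, "LogReff": 0.793},
    {"name": "Calibration 4", "Reff": 0.869, "LogReff": 0.784},
    {"name": "Calibration 5", "Reff": 0.869, "LogReff": 0.768},
]

# Rank by Reff first, then LogReff, so equal-Reff runs are ordered by LogReff.
best = max(runs, key=lambda r: (r["Reff"], r["LogReff"]))
print(best["name"])  # Calibration 2, the run with the highest LogReff
```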
3.3. Check model performance during validation.

Learning Outcome: Use GAP Optimisation on the validation period and compare model
performances.
Procedure: The model is set to the validation period from section 3.1 (01-01-1980 to 31-12-2009, with a warm-up period of 01-01-1977 to 31-12-1979). The same settings are used as in section 3.2.
Analysis:

Reff LogReff PERC UZL K0 K1 K2 MAXBAS TT


Calibration 1 0.831 0.775 2.685 67.614 0.105 0.176 0.075 2.500 0.500
Calibration 2 0.831 0.790 2.310 69.690 0.213 0.170 0.069 2.500 0.500
Calibration 3 0.831 0.773 2.672 68.367 0.234 0.175 0.073 2.500 0.500
Calibration 4 0.836 0.816 1.681 16.348 0.106 0.126 0.063 2.500 0.500
Calibration 5 0.837 0.808 1.843 13.378 0.100 0.123 0.066 2.500 0.500
Reff LogReff CFMAX SFCF CFR CWH FC LP BETA
Calibration 1 0.831 0.775 1.799 0.900 0.050 0.100 217.136 1.000 5.000
Calibration 2 0.831 0.790 1.687 0.900 0.050 0.100 211.765 1.000 5.000
Calibration 3 0.831 0.773 1.986 0.900 0.050 0.100 206.035 1.000 5.000
Calibration 4 0.836 0.816 1.799 0.900 0.050 0.100 218.068 1.000 5.000
Calibration 5 0.837 0.808 2.329 0.900 0.050 0.100 215.541 1.000 5.000

Reff LogReff PERC UZL K0 K1 K2 MAXBAS TT


Max Calibration 0.837 0.816 2.685 69.690 0.234 0.176 0.075 2.500 0.500
Mean Calibration 0.833 0.792 2.238 47.079 0.152 0.154 0.069 2.500 0.500

Reff LogReff CFMAX SFCF CFR CWH FC LP BETA


Max Calibration 0.837 0.816 2.329 0.900 0.050 0.100 218.068 1.000 5.000
Mean Calibration 0.837 0.816 1.920 0.900 0.050 0.100 213.709 1.000 5.000

Table 3.3. Model Efficiency scores of the 5 model runs from the validation period and
their maximum and mean scores.
Similarly to the calibration period, different parameter values can correspond to the same Reff score (for example, PERC values of 2.685 and 2.310 both correspond to a Reff score of 0.831, once again a problem of equifinality); however, they correspond to different LogReff scores, hence the LogReff value can be used to select the appropriate parameter set.
Note that the Reff scores have decreased from the calibration period (0.869 to 0.837, a drop of roughly 4%). This suggests that the validation period is harder for the GAP tool to model. Furthermore, the validation runs have a higher number of non-variable optimal parameter values, such as TT (0.5) and SFCF (0.9) for all 5 outputs, compared to the calibration period, in which TT has greater variability (ranging from 0.0 to 0.4); i.e. there are more well-defined behavioural parameter sets, which leads to higher certainty and confidence in the model output.
Summary: For the validation period, the maximum Reff is 0.837 and the maximum LogReff is 0.816.
3.4. Check model performance for the full period.

Learning Outcome: Use GAP Optimisation for the entire period and compare model
performances.
Procedure: The model is set to the full period from section 3.1 (01-01-1955 to 31-12-2009, with a warm-up period of 01-01-1952 to 31-12-1954). The same settings are used as in section 3.2.
Analysis:

LogReff CFMAX SFCF CFR CWH FC LP BETA


Calibration 1 0.805 2.666 0.900 0.050 0.100 246.680 0.985 5.000
Calibration 2 0.812 2.454 0.900 0.050 0.100 248.736 0.984 5.000
Calibration 3 0.766 2.691 0.900 0.050 0.100 197.894 0.832 4.331
Calibration 4 0.815 2.537 0.900 0.050 0.100 251.020 0.987 5.000
Calibration 5 0.796 2.632 0.900 0.050 0.100 240.021 0.939 5.000
Reff PERC UZL K0 K1 K2 MAXBAS TT
Calibration 1 0.834021 2.376308 44.95215 0.203791 0.152507 0.072073 2.499999 0.499739
Calibration 2 0.83393 2.253371 45.90241 0.22477 0.150012 0.069564 2.499999 0.449755
Calibration 3 0.827956 1.841311 48.67546 0.241427 0.150197 0.065424 2.49981 0.499992
Calibration 4 0.833853 2.202466 47.33065 0.257833 0.149351 0.068514 2.499986 0.487255
Calibration 5 0.835845 1.940318 16.74483 0.1 0.124322 0.070906 2.499998 0.415526

Reff PERC UZL K0 K1 K2 MAXBAS TT


Max Calibration 0.835845 2.376308 48.67546 0.257833 0.152507 0.072073 2.499999 0.499992
Mean Calibration 0.833121 2.122755 40.7211 0.205564 0.145278 0.069296 2.499958 0.470453
LogReff CFMAX SFCF CFR CWH FC LP BETA
Max Calibration 0.814758 2.376308 48.67546 0.257833 0.152507 0.072073 2.499999 0.499992
Mean Calibration 0.798678 2.122755 40.7211 0.205564 0.145278 0.069296 2.499958 0.470453

Table 3.4. Model Efficiency scores of the 5 model runs for the full period and their maximum and mean scores.

The parameter values (and Reff and LogReff scores) for the full period are similar to those for the validation period, except with more variability in certain parameters, such as TT (ranging from 0.415 to 0.499).
Summary: For the full period, the maximum Reff is 0.836 (mean 0.833) and the maximum LogReff is 0.815 (mean 0.798).
3.5. Monte Carlo Uncertainty -Comparison of Calibration to Validation.

The Monte Carlo tool is used to calibrate the model by finding the 100 best parameter sets
with the highest Objective function (Reff) score. This is an alternative method to producing
optimal parameters compared to the Monte Carlo run for parameters with Reff scores > 0.6
(as used for HBV-Land) or GAP optimisation.
Learning Outcome: Calibrate and validate the model using Batch Runs from the Monte
Carlo tool and create a hydrograph for 1 year of validation period for the 100 best
simulations.
Procedure: Reset the model to the calibration dates. Using the Monte Carlo tool select 1000
runs and save 100 sets with highest Objective Function (Reff) score. The parameter limits are
the same as the GAP optimisation limits as defined in Table 3.1.
Rename the "Multi.txt" file from Results as "Batch.txt" and place it in the Data folder. Then in the Batch Run tool select the following: Save Qsim in separate files, Save frequency distribution, Save peaks, Save output files with headerline and Run all climate series combinations.
Batch Run tool:
For calibration, set the period from 01-01-1955 to 31-12-1979, with a warm-up period of 01-01-1952 to 31-12-1954. Once again, use the Monte Carlo tool to perform 1000 runs and save the 100 sets with the highest objective function score. Rename the "Multi.txt" file from Results as "Batch.txt", place it in the Data folder, and rerun the Batch Run tool with the same selections. The Batch file contains both the parameter values and the simulation results (goodness-of-fit measures) for each of the model runs done during the Batch simulation.
For validation, set the period from 01-01-1980 to 31-12-2009 and run the Batch Run tool with the previous settings. (The Monte Carlo tool does not need to be rerun for validation.)
For a complete run of the model, select the entire calibration and validation period from 01-01-1955 to 31-12-2009, with a warm-up period of 01-01-1952 to 31-12-1954, and run the Batch Run tool with the previous settings.
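The "save the 100 sets with the highest objective function" step amounts to a simple top-k selection; a sketch with stand-in scores (the random values are placeholders for real model runs):

```python
import random

# Sketch of the "save 100 sets with highest objective function" step: score
# many random parameter sets and keep the top 100 by Reff. The scores here
# are randomly generated stand-ins, not real HBV-Light results.
random.seed(0)
all_runs = [{"run": i, "Reff": random.uniform(-0.5, 0.85)} for i in range(1000)]

best_100 = sorted(all_runs, key=lambda r: r["Reff"], reverse=True)[:100]

print(len(best_100))                                # 100
print(best_100[0]["Reff"] >= best_100[-1]["Reff"])  # True: sorted best-first
```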

Figure 3.5. Hydrograph of observed and 100 best simulated streamflows for the validation year 1980. The dark line represents the observed flow.
The model is capable of reproducing the observed flow quite well. The ensemble spread is
quite low relative to the dynamic range of values. The observations fall within the ensemble
on almost all of the days of the year. However, the simulation does not always pick up the
winter peaks, which are underestimated.
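The statement that the observations fall within the ensemble on almost all days can be quantified as a coverage fraction; a sketch with illustrative flows (not the 1980 Boyne series):

```python
# Fraction of days on which the observed flow falls inside the spread of the
# simulated ensemble (one series per parameter set). Flows are illustrative.
def ensemble_coverage(q_obs, ensemble):
    """ensemble: list of simulated series, one per parameter set."""
    inside = 0
    for day, obs in enumerate(q_obs):
        day_values = [member[day] for member in ensemble]
        if min(day_values) <= obs <= max(day_values):
            inside += 1
    return inside / len(q_obs)

q_obs = [2.0, 5.5, 3.1, 1.2]  # mm/day
ensemble = [
    [1.8, 4.0, 2.9, 1.0],
    [2.2, 5.0, 3.3, 1.4],
    [2.1, 4.5, 3.0, 1.1],
]
print(ensemble_coverage(q_obs, ensemble))  # 0.75: the peak (5.5) is missed
```

As in the hydrograph above, the one day the ensemble misses is the underestimated peak.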
Calibration period
         Reff      R2        LogReff   FlowWeightedReff  VolumeError  LindstromMeasure  MAREMeasure  SpearmanRank  MeanDiff
mean     0.797973  0.809072  0.666496  0.828454          0.944176     0.792391          0.590609     0.920169      3.6
median   0.79725   0.80835   0.76015   0.8308            0.9563       0.792             0.609        0.9245        1
max      0.8498    0.8535    0.8343    0.8769            0.9979       0.8475            0.6836       0.9432        57
min      0.7709    0.7799    -0.604    0.7568            0.8436       0.7569            0.3418       0.8564        -64

Validation period
         Reff      R2        LogReff   FlowWeightedReff  VolumeError  LindstromMeasure  MAREMeasure  SpearmanRank  MeanDiff
mean     0.722541  0.797941  0.601199  0.689664          0.792359     0.701776          0.660341     0.921528      107.45
median   0.7231    0.79855   0.73855   0.70145           0.79535      0.7009            0.66855      0.92725       106
max      0.8053    0.8393    0.8595    0.8218            0.9257       0.7915            0.7507       0.9463        159
min      0.638     0.7544    -0.9739   0.5413            0.6933       0.6118            0.5143       0.8715        38

Full period
         Reff      R2        LogReff   FlowWeightedReff  VolumeError  LindstromMeasure  MAREMeasure  SpearmanRank  MeanDiff
mean     0.752484  0.784354  0.63935   0.734221          0.870635     0.739546          0.628489     0.912793      60.09
median   0.7534    0.7858    0.75385   0.7415            0.8754       0.74025           0.6386       0.91865       58
max      0.8019    0.8231    0.8421    0.833             0.9927       0.7934            0.7034       0.9363        112
min      0.6917    0.7469    -0.7628   0.61              0.7591       0.6676            0.4776       0.8537        -8

Table 3.5. Comparison of the statistics between the calibration, validation and full period for the 100 best HBV-Light parameter sets simulated using the Monte Carlo Batchfile runs.

In comparison to the GAP optimisation tool, the Monte Carlo tool produced lower Reff and LogReff scores. Hence, tighter constraints on the Monte Carlo parameter limits are needed to increase the efficiency of the model.
Summary: GAP optimisation produced better Reff scores than the Monte Carlo run. The hydrograph shows that the model captures the summer months well, while underestimating the winter months.

4. Statistical analysis of the HBV-Light Model on the Boyne
Catchment at Slane Castle.

Figure 4.1. Dotty-plots of the Obj. function (Reff values), for the 100 best HBV-Light parameter sets for the calibration period (01-01-1955 to 31-12-1979). The best value for each variable is highlighted with a black circle.

The large amount of scatter means that the parameters are poorly identified, with close-to-flat optima, both of which increase the level of uncertainty. However, in the case of CWH and CFR (whose limits were fixed at 0.1 and 0.05 respectively) and MAXBAS, there is a high level of parameter identifiability and therefore less uncertainty.

PERC UZL K0 K1 K2 MAXBAS TT_1 CFMAX_1 SFCF_1
mean 2.260954 45.80343 0.264868 0.112718 0.065636 1.92434 -0.5964 2.261238 0.694438
median 2.259143 50.95468 0.238681 0.109876 0.065666 2.027361 -0.54446 2.078881 0.680348
max 3.925113 68.7963 0.497307 0.198508 0.099744 2.486961 0.492934 3.987583 0.897878
min 0.002478 4.616322 0.100614 0.015403 0.000117 1.017102 -1.95834 0.55392 0.502075
CFR_1 CWH_1 FC_1 LP_1 BETA_1 Reff R2 LogReff MeanDiff
mean 0.05 0.1 215.3483 0.670558 3.535479 0.800543 0.812286 0.622399 4.326924
median 0.05 0.1 201.1038 0.682142 3.639156 0.797979 0.813137 0.738428 5.325196
max 0.05 0.1 475.4484 0.972219 4.984828 0.85004 0.857541 0.853493 67.9855
min 0.05 0.1 100.0064 0.31244 1.620912 0.77306 0.780696 -2.03443 -60.9868

Table 4.1. Statistics for the 100 best HBV-Light parameter sets for the Monte Carlo Calibration
period.

PERC UZL K0 K1 K2 MAXBAS TT_1


mean 2.0852951 42.35723 0.277 0.11314433 0.061428976 1.94454922 -0.747894106
median 2.103097 45.23553 0.2687 0.105469378 0.063488425 2.060968849 -0.629360666
max 3.9873062 68.57069 0.4943 0.199588099 0.098742145 2.48631773 0.411605996
min 0.0740428 4.895329 0.101 0.030062043 0.001283167 1.123276967 -1.926296414
CFMAX_1 SFCF_1 CFR_1 CWH_1 FC_1 LP_1 BETA_1
mean 2.2686131 0.707372 0.05 0.1 221.1115756 0.684199805 3.559734875
median 2.446315 0.701737 0.05 0.1 205.4178874 0.708912044 3.598287455
max 3.9463525 0.891347 0.05 0.1 473.8706098 0.991338286 4.912917817
min 0.5180687 0.500753 0.05 0.1 102.6212887 0.301572427 1.587954873

Table 4.2. Statistics of the 100 best HBV-Light parameters sets simulated using the Monte Carlo
Batchfile runs for the Calibration period (1955 – 1979).

Calibration period
         Reff     R2       LogReff  FlowWeightedReff  VolumeError  LindstromMeasure  MAREMeasure  SpearmanRank  MeanDiff
mean     0.79797  0.80907  0.6665   0.828454          0.944176     0.792391          0.590609     0.920169      3.6
median   0.79725  0.80835  0.76015  0.8308            0.9563       0.792             0.609        0.9245        1
max      0.8498   0.8535   0.8343   0.8769            0.9979       0.8475            0.6836       0.9432        57
min      0.7709   0.7799   -0.604   0.7568            0.8436       0.7569            0.3418       0.8564        -64

Validation period
         Reff     R2       LogReff  FlowWeightedReff  VolumeError  LindstromMeasure  MAREMeasure  SpearmanRank  MeanDiff
mean     0.72254  0.79794  0.6012   0.689664          0.792359     0.701776          0.660341     0.921528      107.45
median   0.7231   0.79855  0.73855  0.70145           0.79535      0.7009            0.66855      0.92725       106
max      0.8053   0.8393   0.8595   0.8218            0.9257       0.7915            0.7507       0.9463        159
min      0.638    0.7544   -0.9739  0.5413            0.6933       0.6118            0.5143       0.8715        38

Full Period
         Reff     R2       LogReff  FlowWeightedReff  VolumeError  LindstromMeasure  MAREMeasure  SpearmanRank  MeanDiff
mean     0.75248  0.78435  0.63935  0.734221          0.870635     0.739546          0.628489     0.912793      60.09
median   0.7534   0.7858   0.75385  0.7415            0.8754       0.74025           0.6386       0.91865       58
max      0.8019   0.8231   0.8421   0.833             0.9927       0.7934            0.7034       0.9363        112
min      0.6917   0.7469   -0.7628  0.61              0.7591       0.6676            0.4776       0.8537        -8

Table 4.3. Comparison of the statistics between the calibration, validation and full period for the 100 best HBV-Light parameter sets simulated using the Monte Carlo Batchfile runs.

Tables 4.1 and 4.2 show how some of the parameter sets shift after the Monte Carlo runs are passed through the Batchfile runs to produce the simulated streamflows, most notably the PERC and UZL means and medians. Table 4.3 shows how the Reff scores (among other scores) deteriorate between the calibration and validation periods.

[Scatter plot: Reff values of Calibration vs Validation; regression line y = 0.7644x + 0.1106, R² = 0.1312.]

[Scatter plot: R2 values of Calibration vs Validation; regression line y = 0.6243x + 0.2908, R² = 0.2572.]

Figure 4.2. Comparison of Reff and R2 scores between the Calibration period (1955 – 1979) and Validation period (1980 – 2009); the black line in each panel represents the regression line.

Reff values between the calibration and validation periods have a wide range, resulting in a low R2 correlation score of 0.1312. The R2 values between the two periods have a narrower range, hence the higher correlation score (R2 = 0.2572). Both scores suggest that the model efficiency is not optimal.
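The R² correlation between the two score series is the squared Pearson correlation of an ordinary least-squares fit; a sketch (the score values below are illustrative, not the actual 100 parameter-set scores):

```python
# R-squared of a simple linear regression between two score series, as used
# to compare calibration-period and validation-period Reff values.
# Data points are illustrative, not the actual 100 parameter-set scores.
def linear_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)  # square of the Pearson correlation

reff_cal = [0.78, 0.80, 0.82, 0.79, 0.84]
reff_val = [0.70, 0.75, 0.72, 0.68, 0.74]
print(round(linear_r2(reff_cal, reff_val), 3))
```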

[Scatter plot: LogReff values of Calibration vs Validation; regression line y = 1.3086x - 0.2709, R² = 0.9572.]

Figure 4.3. Comparison of LogReff scores between the Calibration (1955 – 1979) and Validation period (1980 – 2009); the black line represents the regression line.

[Scatter plot: LogReff values > 0.6 of Calibration vs Validation; regression line y = 0.5594x + 0.3272, R² = 0.131.]

Figure 4.4. Comparison of LogReff scores greater than 0.6, between the Calibration
(1955 – 1979) and Validation period (1980 – 2009), where the black line represents the
regression line.

Although the LogReff calibration period appears to be well correlated with the validation
period in Figure 4.3, when only LogReff values greater than 0.6 (i.e. values of significance)
are accepted, the R2 correlation value is no better than the Reff correlation value (R2 =
0.131).

[Time series, mm/day vs year (1955-2009): Comparison of the observed and simulated median of streamflow.]

[Time series, mm/day vs year (1955-2009): Comparison of the observed and simulated Q10 of streamflow.]

[Time series, mm/day vs year (1955-2009): Comparison of the observed and simulated Q90 of streamflow.]

Figure 4.5. Comparison of the annual observed and simulated median, and the Q10 and
Q90 flow exceedance percentiles of streamflows for the full period (01-01-1955 to 31-12-2009).

The median and Q10 exceedance percentiles are captured well in the calibration period, while
the Q90 flows are overestimated. For the validation period, however, the median and Q10 are
underestimated, while the Q90 captures the streamflow well. To understand what causes this,
a look at individual years is needed.
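For reference, the Q10 and Q90 flow exceedance percentiles used here are simply the flows exceeded 10% and 90% of the time, i.e. the 90th and 10th percentiles of the flow record. A sketch with synthetic flows (the gamma-distributed series is illustrative only, not Boyne data):

```python
import numpy as np

def exceedance_percentile(flows, p):
    """Flow exceeded p% of the time, i.e. the (100 - p)th percentile of the record."""
    return np.percentile(np.asarray(flows, float), 100 - p)

rng = np.random.default_rng(0)
flows = rng.gamma(shape=2.0, scale=0.8, size=365)  # synthetic daily flows (mm/day)

q10 = exceedance_percentile(flows, 10)  # high-flow index
q50 = exceedance_percentile(flows, 50)  # median flow
q90 = exceedance_percentile(flows, 90)  # low-flow index
print(q90, q50, q10)  # low flows are, by construction, smaller than high flows
```

Applying this per calendar year to both observed and simulated series yields the annual curves compared in Figure 4.5.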

[Time series, mm/day vs day of year: Comparison of the observations and the median, Q10 and Q90 exceedance percentiles for the validation year 1980.]

Figure 4.6. Comparison of the observed and simulated median, and the Q10 and Q90 flow
exceedance percentiles of streamflows for the validation year 1980.

The observations are underestimated during the winter months by the median, Q10 and Q90.
Hence, a correction for the winter streamflows could improve the correlation for the median,
Q10 and Q90 over the full period in Figure 4.5.

[Time series, mm/day vs month: Comparison of the observed and simulated seasonal cycle of streamflow for the period 1955-2009.]

Figure 4.7. Comparison of the observed and simulated mean seasonal cycle of streamflows for
the full period (01-01-1955 to 31-12-2009), where the dark line represents the observed and
the dashed line the simulated mean seasonal cycle.

Similar to Figure 4.6, the winter months are underestimated by the simulations. The mean of
the simulations corresponds almost perfectly during the summer months, while the winter
months are off by as much as 0.5 mm/day.

[Time series, mm/day vs year (1955-2009): Comparison of the observed and simulated mean streamflow.]

[Time series, mm/day vs year (1955-2009): Comparison of the observed and simulated maximum streamflow.]

[Time series, mm/day vs year (1955-2009): Comparison of the observed and simulated minimum streamflow.]

Figure 4.9. Comparison of the observed and simulated mean, maximum and minimum
streamflows for the period (01-01-1955 to 31-12-2009), where the dark line represents the
observed and the period of arterial drainage (1969 – 1986) is shaded in grey.

The model captures the observed flows quite well until they start to diverge from the 1980s
onwards. Both the mean and maximum flows are underestimated, while the minimum flow is
overestimated. This could be explained by the severe arterial drainage that the Boyne
catchment underwent during the period 1969 – 1986. The dredging of the river beds resulted
in a drop in the datum and an increase in the amount of water that the catchment can hold,
visible as a step-change during the mid-1970s in the mean and maximum flows. The river
then began to deposit its sediment load again, raising the datum level back up; the model,
however, assumes the datum did not rise, hence the underestimation of the streamflow. By
the mid-1990s the model starts to capture the observed streamflow once more. The Boyne
catchment then underwent further, less frequent and less intensive arterial drainage after the
1969 – 1986 period, which causes the model to underestimate the mean and maximum
streamflow again during the 2000s. The step-change in the mid-1970s for minimum flows is
not captured well in the annual streamflows (flows are too low to show major changes),
hence looking at individual years and months can make the change more apparent.
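One simple way to make such a step-change visible is to contrast the mean model residual (observed minus simulated) before and after a candidate change year. The sketch below uses synthetic residuals with an injected shift; a formal change-point test (e.g. Pettitt's) would be used in practice, and the change year 1976 is an assumption for illustration:

```python
import numpy as np

def step_change(years, residuals, change_year):
    """Mean model residual (obs - sim) before and after a candidate change point."""
    years = np.asarray(years)
    residuals = np.asarray(residuals, float)
    before = residuals[years < change_year].mean()
    after = residuals[years >= change_year].mean()
    return before, after, after - before

# Synthetic residuals with an upward shift from 1976 onwards,
# mimicking post-drainage underestimation of the observed flows
years = np.arange(1955, 2010)
rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 0.05, years.size) + np.where(years >= 1976, 0.3, 0.0)

before, after, shift = step_change(years, residuals, 1976)
print(round(before, 2), round(after, 2), round(shift, 2))
```

A consistently positive residual after the candidate year, as here, is the signature of the underestimation described above.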

[Time series, mm/day vs day of year: Comparison of observed and simulated mean streamflows for the validation year 1980.]
Figure 4.10. Comparison of the observed and simulated mean streamflows for the validation
year 1980, where the dark line represents the observed.

The individual year 1980 shows that the simulations do not always pick up the winter peaks,
which are often underestimated. Hence, a correction for the arterial drainage should be made
for the winter months, notably March, which is when the arterial drainage was carried out in
the Boyne catchment.

[Time series, mm/day vs year (1955-2009): Comparison of the observations to the average simulations for the month of March.]

[Time series, mm/day vs year (1955-2009): Comparison of the observations to the maximum simulations for the month of March.]

[Time series, mm/day vs year (1955-2009): Comparison of the observations to the minimum simulations for the month of March.]

Figure 4.11. Comparison of the observed and simulated mean, maximum and minimum
streamflows for the month of March for the period (01-01-1955 to 31-12-2009), where the
dark line represents the observed and the period of arterial drainage (1969 – 1986) is shaded
in grey.

Similar to the annual streamflow simulations, the March simulations capture the mean and
maximum flows well until the period of arterial drainage, which drops the datum level, as
can be seen in the step-change during the mid-1970s for the mean, maximum and minimum
streamflows.

Discussion and conclusion

Streamflow for the Boyne catchment was modelled using the HBV-Light model, with the
Nash-Sutcliffe efficiency (Reff) used to measure the performance of the model. The Reff,
LogReff and R2 scores do not correlate well between the calibration and validation periods
(R2 correlation values of 0.131, 0.131 and 0.257 respectively), hence more constrained limits
should be implemented to increase these scores. There is a high level of uncertainty within
the parameter sets used in this model. Although the GAP optimisation runs produced higher
objective function (Reff) scores, Uhlenbrook et al. (1999) suggest the use of 100 model
ensembles from the Monte Carlo runs rather than the single values produced by GAP runs,
because ensembles account for two common sources of uncertainty in any model: errors
introduced by the use of imperfect initial conditions, which are amplified by the chaotic
nature of the equations as they evolve in the dynamic system, and errors introduced by
imperfections in the model formulation, such as approximations in the mathematical methods
used to solve the equations.
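The Monte Carlo procedure referred to here amounts to drawing parameter sets uniformly from prior ranges, scoring each run with the objective function, and retaining the best 100 as the ensemble. A sketch of that loop (the parameter ranges and the stand-in run_model are illustrative assumptions, not the calibrated HBV-Light values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative prior ranges for a few HBV-Light parameters (values are assumptions)
ranges = {"FC": (50, 500), "BETA": (1, 6), "PERC": (0, 4), "K1": (0.01, 0.3)}

def sample_parameter_set():
    """Draw one parameter set uniformly from the prior ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def run_model(params):
    """Stand-in for an HBV-Light batch run; a real run would simulate
    streamflow and return the Reff score against observations."""
    return rng.uniform(-1, 1)

# Monte Carlo: draw many sets, keep the 100 best by objective function
runs = [(run_model(p), p) for p in (sample_parameter_set() for _ in range(5000))]
best_100 = sorted(runs, key=lambda r: r[0], reverse=True)[:100]
print(len(best_100), round(best_100[0][0], 3))
```

The spread of the retained ensemble, rather than a single GAP optimum, is what carries the parameter uncertainty into the simulated streamflows.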
The model is capable of reproducing the observed flow quite well during the calibration
period, i.e. the ensemble spread is quite low relative to the dynamic range of values and the
observations fall within the ensemble on almost all days of the year. However, the winter
peaks tend to be underestimated. Steele-Dunne et al. (2008) made similar observations for
the winter months on the Boyne catchment using the HBV-Light model. They also showed
that parameter uncertainty affects seasonal flows far less than annual flows, because seasonal
flows are integrated values rather than single events, i.e. the annual flows amalgamate the
well-modelled spring, summer and autumn months with the poorly simulated winter months.
Murphy et al. (2013) also concluded that arterial drainage in the Boyne catchment during the
period 1969 – 1986 is a leading contributor to the underestimation of the mean and maximum
flows. However, this may not be the full cause of the model underestimations, as Murphy et
al. (2013) suggest that other factors, such as negative winter NAOI anomalies, complement
the effects of the arterial drainage in the catchment. Rosenberg et al. (1997) stated that in
heavily developed catchments, such as the Boyne, it is difficult to discern hydrological
alterations from other environmental perturbations. Hence, a more thorough attribution of the
causes of the model trends is essential. Bhattarai and O'Connor (2004) suggested that the
effects of arterial drainage can be further amplified by agricultural land-use and management
change.

References

Bergström, S., (1976). Development and application of a conceptual runoff model for
Scandinavian catchments, SMHI Report RHO 7, Norrköping, 134 pp.

Bhattarai, K. P. and O'Connor, K. M., (2004). The effects over time of an arterial drainage
scheme on the rainfall–runoff transformation in the Brosna Catchment. Physics and Chemistry
of the Earth, 29, 787–794.
Murphy, C., Harrigan, S., Hall, J., Wilby, R.L., (2013). HydroDetect: the identification and
assessment of climate change indicators for an Irish reference network of river flow stations.
Climate Change Research Programme (CCRP) Report Series No. 27, Environmental
Protection Agency, Co. Wexford, 1–66.

Rosenberg D.M., Berkes F., Bodaly R.A., Hecky R.E., Kelly C.A., Rudd J.W.M., (1997).
Large–scale impacts of hydroelectric development. Environmental Reviews 5: 27–54.
Steele-Dunne, S., Lynch, P., McGrath, R., Semmler, T., Wang, S., Hanafin, J., Nolan, P.,
(2008). The impacts of climate change on hydrology in Ireland. Journal of Hydrology,
356(1-2), 28–45.
Uhlenbrook, S., Seibert, J., Leibundgut, Ch., Rodhe, A., (1999). Prediction uncertainty of
conceptual rainfall-runoff models caused by problems to identify model parameters and
structure. Hydrological Sciences – Journal des Sciences Hydrologiques 44(5), 779–798.

Glossary of Terms

Accuracy: The average degree of correspondence between individual pairs of model outputs
and observations. It is usually measured by the mean square error (MSE). The difference is
the error. The lower the errors, the greater the accuracy.

Analysis: Information at every model point at the present time, i.e. at the start of a run (0 hrs).
Black-Box Model: is a device, system or object which can be viewed in terms of its inputs
and outputs (or transfer characteristics), without any knowledge of its internal workings.
Calibration: is the setting or correcting of a measuring device or base level, usually by
adjusting it to match or conform to a dependably known and unvarying measure.
Conceptual Model: A model made of the composition of concepts, which are used to help
people know, understand, or simulate a subject the model represents. E.g. "bucket models",
where buckets (reservoirs of a specific size and shape) are used to represent catchments, i.e.
one bucket represents the soil store and another bucket represents the groundwater store.
Correlation: mutual relationship or connection between two or more variables.
Dotty-Plots: A scatter plot of parameter value against model output.
Efficiency of the model: How well the model simulates the observations, ranging over
(-∞, 1], where 1 is a perfect fit.
Empirical model: refers to any kind of (computer) modelling based on empirical
observations rather than on mathematically describable relationships of the system modelled.
Equifinality: Multiple ways of getting the same solution, i.e. different parameter values
achieving the same objective function value. This is due to different parts of the sample space
having scores that are equally good.
Globally identifiable: a parameter is globally identifiable (or non-identifiable) if this holds
independently of size or scale, while local identifiability refers to identifiability that depends
on size or scale.
Identifiable parameters: are those which affect the value of the data and can be estimated
with some degree of certainty. (Non-identifiable parameters are those which affect the value
of the data but which cannot be estimated accurately. Non-observable parameters are those
which have no effect on the data.)
Lumped Models: Models that treat the catchment as a single unit, e.g. assuming all the
land/soils are the same.
Objective Function: An equation to be optimized given certain constraints and with
optimisation parameters that need to be minimized or maximized (using nonlinear
programming techniques). E.g. R2 and Reff, (which are used to judge the accuracy of the
model i.e. the measure of success of the model).
Parameter: is a characteristic, feature, or measurable factor that can help in defining a
particular system. E.g. in the function f(x) = ax² + bx + c, the values a, b and c are
parameters that determine which particular quadratic function is being considered.

Parameter Identifiability: How many parameter combinations reach your objective function
value? If there are too many solutions, it may lead to the problem of equifinality and large
uncertainty, while well-defined behavioural parameters lead to more certainty and confidence
in the solution.
Parsimonious Model: is a model that accomplishes a desired level of explanation or
prediction with as few predictor variables as possible.
Physical Models: A complex model based on physical laws. (It can be reduced to a
conceptual model.) The conditions required are geometric similarity and physical similarity
between the model and the original: at similar moments and similar points in space, the
values of the variables that describe the full-scale phenomena must be proportional to the
values of the same quantities for the model.
Powell: Powell’s quadratic convergent method is used to fine-tune the parameter sets.
Qualitative: does the model output look right?

Quantitative: how accurate was the model output?

R2: is a statistical measure of how close the data are to the fitted regression line. It is also
known as the coefficient of determination.

Reff: Nash-Sutcliffe Model Efficiency coefficient. It is used as an objective function and is
identical to R2, where the range is -∞ to 1 and 1 is a perfect score.
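In symbols, the coefficient takes the standard Nash-Sutcliffe form, with Q_obs the observed flow, Q_sim the simulated flow, and the overbar denoting the mean over the evaluation period:

```latex
R_{\mathrm{eff}} = 1 - \frac{\sum_{t}\bigl(Q_{\mathrm{obs}}(t) - Q_{\mathrm{sim}}(t)\bigr)^{2}}{\sum_{t}\bigl(Q_{\mathrm{obs}}(t) - \overline{Q}_{\mathrm{obs}}\bigr)^{2}}
```

A score of 0 means the model is no better than predicting the mean observed flow; negative scores mean it is worse.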

Skill: the average accuracy of a forecast relative to the accuracy of a forecast produced by the
reference method.

Validation: is a way of checking whether the numerical results are acceptable as a description
of the data, i.e. is the methodology done right? It provides information on the accuracy (or
inaccuracy) of the logger and checks that an instrument meets the specifications for the
specific intended use.

Variance: a measure of the dispersion of the data set.

Verification: Model output is compared to what actually occurred (observations) or to a good
estimate of the true outcome, i.e. it is a process of assessing the quality of the model output
by checking against a reference to verify that an instrument meets its manufacturer's broad
and general specifications.

Warm-Up Period: The time that the simulation will run before starting to collect results.
This allows the queues (and other aspects in the simulation) to get into conditions that are
typical of normal running conditions in the system you are simulating.

Quality: correspondence between model output and observations i.e. check if the observed
conditions are modelled well according to some objective or subjective criteria.

Glossary of Routines and Parameters
Snow Routine:

Inputs: Precipitation and Temperature


Outputs: Snow pack and Snowmelt

Parameters:
TT = threshold temperature (°C)
CFMAX = degree-Δt factor (mm °C⁻¹ Δt⁻¹)
SFCF = snowfall correction factor (-)
CFR = refreezing coefficient (-)
CWH = water holding capacity (-)
Soil Routine:

Inputs: Potential evapotranspiration, Precipitation and Snowmelt


Outputs: Actual evapotranspiration, Soil moisture and Groundwater recharge

Parameters:
FC = maximum soil moisture storage (mm)
LP = soil moisture value above which AET reaches PET (mm)
BETA= parameter that determines relative contribution to runoff from rain or snowmelt (-)
Response Function:

Inputs: Groundwater recharge and Potential evapotranspiration


Outputs: Runoff and Groundwater level

Parameters:
PERC = threshold parameter (mm Δt⁻¹)
Alpha = non-linearity coefficient (-)
UZL = threshold parameter (mm)
K0 = storage (or recession) coefficient (Δt⁻¹)
K1 = storage (or recession) coefficient (Δt⁻¹)
K2 = storage (or recession) coefficient (Δt⁻¹)
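The outflow from the upper and lower groundwater boxes is commonly computed as the sum of three linear outlets, with the K0 outlet active only when the upper zone storage exceeds the UZL threshold. A sketch of that standard formulation (the exact variant configured in HBV-Light may differ; SUZ and SLZ denote the upper- and lower-zone storages in mm):

```python
def response_outflow(suz, slz, k0, k1, k2, uzl):
    """Outflow (mm/Δt) from a standard two-box HBV-type response function."""
    q0 = k0 * max(suz - uzl, 0.0)  # fast near-surface flow, only above threshold UZL
    q1 = k1 * suz                  # interflow from the upper zone
    q2 = k2 * slz                  # baseflow from the lower zone
    return q0 + q1 + q2

# Example: an upper zone above the UZL threshold activates the K0 outlet
q = response_outflow(suz=30.0, slz=50.0, k0=0.3, k1=0.1, k2=0.05, uzl=20.0)
# 0.3*10 + 0.1*30 + 0.05*50 ≈ 8.5 mm/Δt
```

Because K0 > K1 > K2 in a typical calibration, this structure produces the fast, medium and slow recession limbs seen in the simulated hydrographs.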
Routing Routine:

Inputs: Runoff
Outputs: Simulated Runoff

Parameters:
MAXBAS = Length of triangular weighting function (Δt)
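A discrete sketch of such a triangular weighting is below; HBV-Light integrates the triangle exactly over each time step, so these integer-step weights are an approximation for illustration:

```python
import numpy as np

def maxbas_weights(maxbas):
    """Discrete triangular weighting function of length MAXBAS; weights sum to 1."""
    i = np.arange(1, maxbas + 1)
    w = (maxbas / 2.0) - np.abs(i - 0.5 - maxbas / 2.0)  # triangle peaking mid-window
    return w / w.sum()

def route(runoff, maxbas):
    """Convolve generated runoff with the triangular weights to get simulated runoff."""
    return np.convolve(np.asarray(runoff, float), maxbas_weights(maxbas))[: len(runoff)]

w = maxbas_weights(5)
print(np.round(w, 3))  # symmetric weights, peaking in the middle of the window
```

Larger MAXBAS values spread each day's generated runoff over more time steps, smoothing and delaying the simulated hydrograph peaks.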
Other:
CET = correction factor [°C⁻¹]


