FEWSDOC Configuration Guide


1. Delft-FEWS Configuration Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1 01 Structure of a DELFT-FEWS Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 02 Data Handling in DELFT-FEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 03 System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.1 01 FEWS Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.2 02 Time Series Display Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.3.3 03 Display Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.3.4 04 Location Icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.3.5 05 Module Descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.3.6 06 Display Descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
1.3.7 07 Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
1.4 04 Regional Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.4.1 01 Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.4.2 01 - Related Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.4.3 02 LocationSets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
1.4.4 03 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
1.4.5 05 Branches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.4.6 06 Grids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
1.4.7 07 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
1.4.8 08 ValidationRulesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
1.4.9 09 Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
1.4.10 10 ThresholdValueSets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
1.4.11 11 ColdModuleInstanceStateGroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
1.4.12 12 ModuleInstanceDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
1.4.13 13 WorkflowDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
1.4.14 14 IdMapDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
1.4.15 15 FlagConversionsDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
1.4.16 16 UnitConversionsDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
1.4.17 17 CorrelationEventSetsDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
1.4.18 18 TravelTimesDescriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
1.4.19 19 TimeUnits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
1.4.20 20 Historical Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
1.4.21 21 Value Attribute Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
1.4.22 22 Locations and attributes defined in Shape-DBF files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
1.4.23 23 Qualifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
1.4.24 24 Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
1.4.25 25 ModifierTypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
1.4.26 26 TimeSteps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
1.5 05 Configuring the available DELFT-FEWS modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
1.5.1 01 Interpolation Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
1.5.2 02 Transformation Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
1.5.3 03 Import Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
1.5.3.1 Available data types . . . . . . . . . . . . . . . . . . . . 163
1.5.3.1.1 HCS . . . . . . . . . . . . . . . . . . . . 167
1.5.3.1.2 HymosAscii . . . . . . . . . . . . . . . . . . . . 182
1.5.3.1.3 LMW . . . . . . . . . . . . . . . . . . . . 183
1.5.3.1.4 MM3P . . . . . . . . . . . . . . . . . . . . 185
1.5.3.1.5 Pegelonline . . . . . . . . . . . . . . . . . . . . 186
1.5.3.1.6 WQCSV . . . . . . . . . . . . . . . . . . . . 187
1.5.3.1.7 ArcInfoAscii . . . . . . . . . . . . . . . . . . . . 191
1.5.3.1.8 ArcWat . . . . . . . . . . . . . . . . . . . . 192
1.5.3.1.9 BIL Import . . . . . . . . . . . . . . . . . . . . 194
1.5.3.1.10 BUFR . . . . . . . . . . . . . . . . . . . . 196
1.5.3.1.11 CSV . . . . . . . . . . . . . . . . . . . . 197
1.5.3.1.12 Database import . . . . . . . . . . . . . . . . . . . . 200
1.5.3.1.13 Delft-Fews Published Interface timeseries Format (PI) Import . . . . . . . . . . . . . . . . . . . . 200
1.5.3.1.14 DINO . . . . . . . . . . . . . . . . . . . . 206
1.5.3.1.15 DIVER MON . . . . . . . . . . . . . . . . . . . . 208
1.5.3.1.16 FewsDatabase Import . . . . . . . . . . . . . . . . . . . . 213
1.5.3.1.17 Gray Scale Image . . . . . . . . . . . . . . . . . . . . 215
1.5.3.1.18 hdf4 . . . . . . . . . . . . . . . . . . . . 216
1.5.3.1.19 HYMOS . . . . . . . . . . . . . . . . . . . . 216
1.5.3.1.20 KNMI CSV . . . . . . . . . . . . . . . . . . . . 223
1.5.3.1.21 KNMI EPS . . . . . . . . . . . . . . . . . . . . 225
1.5.3.1.22 KNMI HDF5 . . . . . . . . . . . . . . . . . . . . 228
1.5.3.1.23 KNMI IRIS . . . . . . . . . . . . . . . . . . . . 231
1.5.3.1.24 KNMI SYNOP . . . . . . . . . . . . . . . . . . . . 233
1.5.3.1.25 Landsat-HDF5 . . . . . . . . . . . . . . . . . . . . 236
1.5.3.1.26 LUBW . . . . . . . . . . . . . . . . . . . . 238
1.5.3.1.27 Matroos NetCDF . . . . . . . . . . . . . . . . . . . . 240
1.5.3.1.28 Msw . . . . . . . . . . . . . . . . . . . . 242
1.5.3.1.29 NETCDF-CF_PROFILE . . . . . . . . . . . . . . . . . . . . 245
1.5.3.1.30 NETCDF-CF_GRID . . . . . . . . . . . . . . . . . . . . 247
1.5.3.1.31 NETCDF-CF_TIMESERIES . . . . . . . . . . . . . . . . . . . . 249
1.5.3.1.32 NOOS . . . . . . . . . . . . . . . . . . . . 250
1.5.3.1.33 NTUQUARTER Import . . . . . . . . . . . . . . . . . . . . 252
1.5.3.1.34 NTURAIN Import . . . . . . . . . . . . . . . . . . . . 254
1.5.3.1.35 SSE . . . . . . . . . . . . . . . . . . . . 256
1.5.3.1.36 TMX . . . . . . . . . . . . . . . . . . . . 258
1.5.3.1.37 Wiski . . . . . . . . . . . . . . . . . . . . 264
1.5.3.1.38 WSCC csv . . . . . . . . . . . . . . . . . . . . 269
1.5.3.1.39 Singapore OMS Lake Diagnostic System files . . . . . . . . . . . . . . . . . . . . 272
1.5.3.1.40 EasyQ . . . . . . . . . . . . . . . . . . . . 273
1.5.3.1.41 McIdasArea . . . . . . . . . . . . . . . . . . . . 274
1.5.3.1.42 Keller IDC . . . . . . . . . . . . . . . . . . . . 275
1.5.3.1.43 Obserview . . . . . . . . . . . . . . . . . . . . 283
1.5.3.1.44 generalCsv . . . . . . . . . . . . . . . . . . . . 284
1.5.3.1.45 DINO Service . . . . . . . . . . . . . . . . . . . . 285
1.5.3.1.46 GermanSnow . . . . . . . . . . . . . . . . . . . . 288
1.5.3.1.47 Delft3D-Flow . . . . . . . . . . . . . . . . . . . . 289
1.5.3.1.48 CERF . . . . . . . . . . . . . . . . . . . . 291
1.5.3.1.49 SWE . . . . . . . . . . . . . . . . . . . . 292
1.5.3.1.50 NetcdfGridDataset . . . . . . . . . . . . . . . . . . . . 295
1.5.3.1.51 IP1 . . . . . . . . . . . . . . . . . . . . 296
1.5.3.1.52 IFKIS . . . . . . . . . . . . . . . . . . . . 298
1.5.3.1.53 IJGKlepstanden . . . . . . . . . . . . . . . . . . . . 300
1.5.3.1.54 Radolan . . . . . . . . . . . . . . . . . . . . 301
1.5.3.1.55 Bayern . . . . . . . . . . . . . . . . . . . . 303
1.5.3.2 Custom time series import formats using java . . . . . . . . . . . . . . . . . . . . 304
1.5.3.2.1 [Link] . . . . . . . . . . . . . . . . . . . . 306
1.5.3.2.2 [Link] . . . . . . . . . . . . . . . . . . . . 309
1.5.3.2.3 [Link] . . . . . . . . . . . . . . . . . . . . 312
1.5.3.3 Import data using OPeNDAP . . . . . . . . . . . . . . . . . . . . 315
1.5.3.4 Import Module configuration options . . . . . . . . . . . . . . . . . . . . 321
1.5.4 04 Export modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
1.5.4.1 EA XML Export Module . . . . . . . . . . . . . . . . . . . . 327
1.5.4.1.1 GRDC Export Format . . . . . . . . . . . . . . . . . . . . 330
1.5.4.2 Export module . . . . . . . . . . . . . . . . . . . . 331
1.5.4.3 Export module, available data types . . . . . . . . . . . . . . . . . . . . 336
1.5.4.3.1 BfG Export . . . . . . . . . . . . . . . . . . . . 337
1.5.4.3.2 CSV Export . . . . . . . . . . . . . . . . . . . . 337
1.5.4.3.3 DINO Export . . . . . . . . . . . . . . . . . . . . 338
1.5.4.3.4 Fliwas Export . . . . . . . . . . . . . . . . . . . . 341
1.5.4.3.5 GIN Export . . . . . . . . . . . . . . . . . . . . 342
1.5.4.3.6 GRDS Export . . . . . . . . . . . . . . . . . . . . 346
1.5.4.3.7 iBever Export . . . . . . . . . . . . . . . . . . . . 346
1.5.4.3.8 Menyanthes . . . . . . . . . . . . . . . . . . . . 349
1.5.4.3.9 NetCDF Alert Export . . . . . . . . . . . . . . . . . . . . 350
1.5.4.3.10 NETCDF-CF_GRID_MATROOS Export . . . . . . . . . . . . . . . . . . . . 350
1.5.4.3.11 NETCDF-CF_GRID Export . . . . . . . . . . . . . . . . . . . . 350
1.5.4.3.12 NETCDF-CF_PROFILE_MATROOS Export . . . . . . . . . . . . . . . . . . . . 351
1.5.4.3.13 NETCDF-CF_PROFILE Export . . . . . . . . . . . . . . . . . . . . 352
1.5.4.3.14 NETCDF-CF_TIMESERIES_MATROOS Export . . . . . . . . . . . . . . . . . . . . 354
1.5.4.3.15 NETCDF-CF_TIMESERIES Export . . . . . . . . . . . . . . . . . . . . 354
1.5.4.3.16 NetCDF MapD Export . . . . . . . . . . . . . . . . . . . . 356
1.5.4.3.17 PI Export . . . . . . . . . . . . . . . . . . . . 356
1.5.4.3.18 Rhine Alarm Model . . . . . . . . . . . . . . . . . . . . 357
1.5.4.3.19 SHEF Export . . . . . . . . . . . . . . . . . . . . 361
1.5.4.3.20 TSD Export . . . . . . . . . . . . . . . . . . . . 361
1.5.4.3.21 UM Aquo export . . . . . . . . . . . . . . . . . . . . 361
1.5.4.4 Rdbms Export . . . . . . . . . . . . . . . . . . . . 364
1.5.4.5 Report Export . . . . . . . . . . . . . . . . . . . . 368
1.5.5 05 General Adapter Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
1.5.6 06 Lookup Table Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
1.5.7 07 Correlation Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
1.5.8 08 Error Correction Module (ARMA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
1.5.8.1 AR Module Background information . . . . . . . . . . . . . . . . . . . . 408
1.5.9 09 Report Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
1.5.9.1 Improved report configuration . . . . . . . . . . . . . . . . . . . . 449
1.5.9.2 Tags in report template . . . . . . . . . . . . . . . . . . . . 450
1.5.9.3 Using Colours in Tables . . . . . . . . . . . . . . . . . . . . 452
1.5.10 10 Performance Indicator Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
1.5.11 11 Amalgamate Import Data Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
1.5.12 12 Archive Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
1.5.13 13 Rolling Barrel Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
1.5.14 14 Support Location Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
1.5.15 15 Scenario Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
1.5.16 16 Pcraster Transformation (pcrTransformation) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
1.5.16.1 List of pcraster functions . . . . . . . . . . . . . . . . . . . . 474
1.5.17 17 WorkflowLooprunner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
1.5.18 18 Mass-balances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
1.5.19 19 Rating curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
1.5.20 20 Transformation Module (Improved schema) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
1.5.20.1 Accumulation Transformations . . . . . . . . . . . . . . . . . . . . 487
1.5.20.1.1 AccumulationMeanInterval . . . . . . . . . . . . . . . . . . . . 487
1.5.20.1.2 AccumulationSum . . . . . . . . . . . . . . . . . . . . 488
1.5.20.1.3 AccumulationSumInterval . . . . . . . . . . . . . . . . . . . . 490
1.5.20.1.4 AccumulationSumOriginAtTimeZero . . . . . . . . . . . . . . . . . . . . 492
1.5.20.2 Adjust Transformations . . . . . . . . . . . . . . . . . . . . 495
1.5.20.2.1 AdjustQ . . . . . . . . . . . . . . . . . . . . 495
1.5.20.2.2 AdjustQMeanDailyDischarge . . . . . . . . . . . . . . . . . . . . 496
1.5.20.2.3 AdjustQUsingInstantaneousDischarge . . . . . . . . . . . . . . . . . . . . 496
1.5.20.2.4 AdjustStage . . . . . . . . . . . . . . . . . . . . 497
1.5.20.2.5 AdjustTide . . . . . . . . . . . . . . . . . . . . 497
1.5.20.3 Aggregation transformations . . . . . . . . . . . . . . . . . . . . 498
1.5.20.3.1 Aggregation Accumulative . . . . . . . . . . . . . . . . . . . . 499
1.5.20.3.2 Aggregation Instantaneous . . . . . . . . . . . . . . . . . . . . 501
1.5.20.3.3 Aggregation InstantaneousToMean . . . . . . . . . . . . . . . . . . . . 503
1.5.20.3.4 Aggregation MeanToMean . . . . . . . . . . . . . . . . . . . . 505
1.5.20.4 DisaggregationTransformations . . . . . . . . . . . . . . . . . . . . 507
1.5.20.4.1 Accumulative . . . . . . . . . . . . . . . . . . . . 508
1.5.20.4.2 Instantaneous . . . . . . . . . . . . . . . . . . . . 509
1.5.20.4.3 MeanToInstantaneous . . . . . . . . . . . . . . . . . . . . 511
1.5.20.4.4 meanToMean . . . . . . . . . . . . . . . . . . . . 513
1.5.20.4.5 weights . . . . . . . . . . . . . . . . . . . . 515
1.5.20.5 DischargeStage Transformations . . . . . . . . . . . . . . . . . . . . 517
1.5.20.5.1 DischargeStageMergedRatingCurves . . . . . . . . . . . . . . . . . . . . 517
1.5.20.5.2 DischargeStagePower . . . . . . . . . . . . . . . . . . . . 518
1.5.20.5.3 Table . . . . . . . . . . . . . . . . . . . . 521
1.5.20.6 Events Transformations . . . . . . . . . . . . . . . . . . . . 522
1.5.20.6.1 EventsDischargeVolume . . . . . . . . . . . . . . . . . . . . 523
1.5.20.6.2 EventsDuration . . . . . . . . . . . . . . . . . . . . 525
1.5.20.6.3 EventsMaximum . . . . . . . . . . . . . . . . . . . . 528
1.5.20.6.4 EventsMeanDischargeVolume . . . . . . . . . . . . . . . . . . . . 529
1.5.20.6.5 EventsNumberOfEvents . . . . . . . . . . . . . . . . . . . . 531
1.5.20.7 Filter Transformations . . . . . . . . . . . . . . . . . . . . 532
1.5.20.7.1 FilterLowPass . . . . . . . . . . . . . . . . . . . . 532
1.5.20.8 Interpolation Serial Transformations . . . . . . . . . . . . . . . . . . . . 535
1.5.20.8.1 Block . . . . . . . . . . . . . . . . . . . . 535
1.5.20.8.2 directionLinear . . . . . . . . . . . . . . . . . . . . 536
1.5.20.8.3 extrapolateExponential . . . . . . . . . . . . . . . . . . . . 537
1.5.20.8.4 Transformation - InterpolationSerial Linear . . . . . . . . . . . . . . . . . . . . 538
1.5.20.9 Interpolation Spatial Transformations . . . . . . . . . . . . . . . . . . . . 539
1.5.20.9.1 InterpolationBilinear . . . . . . . . . . . . . . . . . . . . 539
1.5.20.9.2 InterpolationSpatialAverage . . . . . . . . . . . . . . . . . . . . 539
1.5.20.9.3 InterpolationSpatialClosestDistance . . . . . . . . . . . . . . . . . . . . 540
1.5.20.9.4 InterpolationSpatialInverseDistance . . . . . . . . . . . . . . . . . . . . 542
1.5.20.9.5 InterpolationSpatialMax . . . . . . . . . . . . . . . . . . . . 544
1.5.20.9.6 InterpolationSpatialMin . . . . . . . . . . . . . . . . . . . . 545
1.5.20.9.7 InterpolationSpatialSum . . . . . . . . . . . . . . . . . . . . 545
1.5.20.9.8 InterpolationSpatialWeighted . . . . . . . . . . . . . . . . . . . . 547
1.5.20.10 Lookup transformations . . . . . . . . . . . . . . . . . . . . 548
1.5.20.10.1 Multidimensional . . . . . . . . . . . . . . . . . . . . 548
1.5.20.10.2 Simple . . . . . . . . . . . . . . . . . . . . 549
1.5.20.11 Merge Transformations . . . . . . . . . . . . . . . . . . . . 550
1.5.20.11.1 Simple Merge . . . . . . . . . . . . . . . . . . . . 550
1.5.20.12 Review transformations . . . . . . . . . . . . . . . . . . . . 551
1.5.20.12.1 Stage Review . . . . . . . . . . . . . . . . . . . . 551
1.5.20.12.2 TidalBalanceReview . . . . . . . . . . . . . . . . . . . . 552
1.5.20.13 StageDischarge transformations . . . . . . . . . . . . . . . . . . . . 553
1.5.20.13.1 StageDischargeMergedRatingCurves . . . . . . . . . . . . . . . . . . . . 553
1.5.20.13.2 StageDischargePower . . . . . . . . . . . . . . . . . . . . 554
1.5.20.13.3 StageDischarge table . . . . . . . . . . . . . . . . . . . . 557
1.5.20.14 Statistics Summary Transformations . . . . . . . . . . . . . . . . . . . . 558
1.5.20.15 Structure Transformations . . . . . . . . . . . . . . . . . . . . 559
1.5.20.15.1 crumpWeir . . . . . . . . . . . . . . . . . . . . 559
1.5.20.15.2 crumpWeirBackwater . . . . . . . . . . . . . . . . . . . . 560
1.5.20.15.3 flatVWeir . . . . . . . . . . . . . . . . . . . . 560
1.5.20.15.4 flatVWeirBackwater . . . . . . . . . . . . . . . . . . . . 561
1.5.20.15.5 StructurePumpFixedDischarge Transformation . . . . . . . . . . . . . . . . . . . . 562
1.5.20.15.6 StructurePumpHeadDischargeTable Transformation . . . . . . . . . . . . . . . . . . . . 562
1.5.20.15.7 StructurePumpSpeedDischargeTable Transformation . . . . . . . . . . . . . . . . . . . . 562
1.5.20.15.8 StructurePumpSpeedHeadDischargeTable Transformation . . . . . . . . . . . . . . . . . . . . 563
1.5.20.16 TimeShift . . . . . . . . . . . . . . . . . . . . 563
1.5.20.16.1 Constant . . . . . . . . . . . . . . . . . . . . 563
1.5.20.17 User Transformations . . . . . . . . . . . . . . . . . . . . 564
1.5.20.17.1 UserPeriodic Transformation . . . . . . . . . . . . . . . . . . . . 564
1.5.20.17.2 UserSimple Transformation . . . . . . . . . . . . . . . . . . . . 565
1.5.20.18 DayMonth Sample . . . . . . . . . . . . . . . . . . . . 568
1.5.20.19 PCA and Regression Transformation . . . . . . . . . . . . . . . . . . . . 569
1.5.20.20 Selection Transformations . . . . . . . . . . . . . . . . . . . . 574
1.5.20.20.1 Selection of independent lows . . . . . . . . . . . . . . . . . . . . 574
1.5.20.20.2 Selection of independent peaks . . . . . . . . . . . . . . . . . . . . 575
1.5.20.20.3 Selection of lows . . . . . . . . . . . . . . . . . . . . 577
1.5.20.20.4 Selection of maximum . . . . . . . . . . . . . . . . . . . . 578
1.5.20.20.5 Selection of minimum . . . . . . . . . . . . . . . . . . . . 579
1.5.20.20.6 Selection of peaks . . . . . . . . . . . . . . . . . . . . 581
1.5.21 21 Secondary Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
1.5.21.1 Checks for counting reliable, doubtful, unreliable and missing values . . . . . . . . . . . . . . . . . . . . 583
1.5.21.2 FlagsComparisonCheck . . . . . . . . . . . . . . . . . . . . 585
1.5.21.3 SeriesComparisonCheck . . . . . . . . . . . . . . . . . . . . 588
1.5.22 22 forecastLengthEstimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
1.5.23 23 Decision Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
1.5.23.1 Barriers . . . . . . . . . . . . . . . . . . . . 599
1.5.24 24. ImportAmalgamate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
1.6 06 Configuring WorkFlows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
1.7 07 Display Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
1.7.1 01 Grid Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
1.7.1.1 Coupling ArcSDE and WFS . . . . . . . . . . . . . . . . . . . . 623
1.7.2 02 Longitudinal Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
1.7.3 03 What-If Scenario Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
1.7.4 04 Lookup Table Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
1.7.5 05 Correlation Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
1.7.6 06 System Monitor Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
1.7.7 07 Skill Score Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
1.7.8 08 Time Series Modifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
1.7.9 09 State editor display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
1.7.10 10 Interactive forecast display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
1.7.11 11 Threshold Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
1.7.12 12 Task Run Dialog Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
1.7.13 13 Manual Forecast Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
1.7.13.1 Add Macro Button . . . . . . . . . . . . . . . . . . . . 667
1.7.14 14 ChartLayer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
1.7.15 15 Schematic Status Display (formerly Scada Display) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
1.7.16 16 Modifier display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
1.8 08 Mapping Id's flags and units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
1.8.1 01 ID Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
1.8.2 02 Unit Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
1.8.3 03 Flag Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
1.9 09 Module datasets and Module Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
1.9.1 01 Module Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
1.9.2 02 Module Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
1.10 10 Setting up an operational system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
1.10.1 01 Root Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
1.10.1.1 [Link] file . . . . . . . . . . . . . . . . . . . . 692
1.10.2 02 Launching FEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
1.10.3 03 Setting Up Scheduled Forecasts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
1.10.4 04 Setting Up Event-Action Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
1.10.5 05 Setting up sending emails on events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
1.10.6 06 Checklist for creating a live system from a stand alone system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
1.10.7 07 Setting up alerts for the Alarmmodule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
1.11 11 Setting up a forecasting system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
1.11.1 01 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
1.11.2 02 Designing the Forecasting System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
1.11.3 03 Creating a FEWS Application Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
1.11.4 04 Static Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
1.12 12 Configuration management Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
1.12.1 01 Managing Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
1.12.2 02 Validation of a Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
1.12.3 03 Analysis of a Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
1.12.4 04. Automatic Configuration Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
1.13 13 Additional Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
1.13.1 01 Flood Mapping Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
1.13.2 03 Automatic WorkflowRunner in SA mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
1.13.3 04 Bayesian Model Averaging (BMA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
1.13.3.1 BMA in FEWS . . . . . . . . . . . . . . . . . . . . 730
1.13.4 05 Historic Forecast Performance Tool (HFPT) Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
1.14 14 Tips and Tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
1.15 15 External Modules connected to Delft-FEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
1.15.1 Adapter Manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
1.15.1.1 HEC-HMS model adapter . . . . . . . . . . . . . . . . . . . . 747
1.15.1.2 HEC-RAS Model Adapter . . . . . . . . . . . . . . . . . . . . 747
1.15.1.3 SYNHP Adapter . . . . . . . . . . . . . . . . . . . . 761
1.15.2 Developing a FEWS (Compliant) Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
1.15.2.1 Updating PI State XML file within Pre and Post Model Adapters . . . . . . . . . . . . . . . . . . . . 767
1.15.3 External model specific files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
1.15.3.1 PDM State file . . . . . . . . . . . . . . . . . . . . 768
1.15.3.2 The ISIS .ini file . . . . . . . . . . . . . . . . . . . . 768
1.15.4 Delft3D-FEWS adapter configuration manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
1.15.4.1 1. General . . . . . . . . . . . . . . . . . . . . 772
1.15.4.2 2. Adapter configuration . . . . . . . . . . . . . . . . . . . . 773
1.15.4.2.1 01 Design philosophy of Delft3D model adapter . . . . . . . . . . . . . . . . . . . . 774
1.15.4.2.2 02 Adapter configuration - Delft3D model adapter in relation to FEWS . . . . . . . . . . . . . . . . . . . . 774
1.15.4.2.3 03 Adapter configuration - configuration workflow . . . . . . . . . . . . . . . . . . . . 775
1.15.4.2.4 04 Adapter configuration - XML configuration scheme . . . . . . . . . . . . . . . . . . . . 779
1.15.4.2.5 05 Adapter configuration - template files . . . . . . . . . . . . . . . . . . . . 780
1.15.4.2.6 06 Adapter configuration - naming conventions . . . . . . . . . . . . . . . . . . . . 784
1.15.4.2.7 07 Adapter configuration - state handling and communication files . . . . . . . . . . . . . . . . . . . . 784
1.15.4.3 3. Example configuration . . . . . . . . . . . . . . . . . . . . 784
1.15.4.4 4. Best practices . . . . . . . . . . . . . . . . . . . . 784
1.15.5 Models linked to Delft-Fews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
[Link] Modflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
[Link] PCOverslag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
[Link] RTC Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
1.16 17 Launcher Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
1.16.1 Launcher XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
1.16.2 Security XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
1.17 18 FEWS data exchange interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
1.17.1 Fews JDBC server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
1.17.2 Fews PI service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
Using the Fews PI service from C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
1.17.3 Fews Workflow Runner service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
1.17.4 JDBC vs. FewsPiService . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
1.18 19 Parallel running of ensemble loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
1.19 Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
1.19.1 A Colours Available in DELFT-FEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
1.19.2 B Enumerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
1.20 WhatIfScenarioEditor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
Delft-FEWS Configuration Guide

Introduction
The DELFT-FEWS configuration guide provides the advanced user of DELFT-FEWS with the information required to set-up and maintain a
configuration of DELFT-FEWS. The objective of the guide is to be used both as a reference manual during the development and maintenance of
an implementation of DELFT-FEWS, as well as to provide some of the background philosophy on how to go about setting up a forecasting
system. It is expected that the reader of this guide has a basic understanding of DELFT-FEWS and its structure.

To understand how to configure DELFT-FEWS, a good understanding of the structure of the configuration is required. The first part of this guide
therefore gives an introduction into the different parts of configuration. In the second part of the guide the concepts used in DELFT-FEWS for
handling data are explained.

Section 3 gives details on the configuration of system components such as display settings, while section 4 describes the various regional
configuration components which relate to a specific regional FEWS system (e.g. monitoring locations). Section 5 provides documentation on
the 'module instances' available in DELFT-FEWS, for example for interpolation of data or for running an external model such as ISIS (using
the General Adapter module), and describes how these can be configured to achieve the required functionality.

Section 6 explains how configured modules are linked into logical tasks through configuration of workflows. In section 7 the configuration of user
displays is discussed. Section 8 discusses mapping information from external data sources. Section 9 discusses handling of static module data.
In section 10 some elements of configuring DELFT-FEWS as an operational system are discussed.

In section 11 a brief introduction is given on how to set up a forecasting system. This is to give some idea of how to approach configuring a
complex system such as DELFT-FEWS. A brief guide to the use of the configuration management module to support configuration is given in
section 12. Section 13 details the additional functionality available through optional modules of DELFT-FEWS.

Section 14, finally, gives some tips and tricks on configuring DELFT-FEWS.

Contents
01 Structure of a DELFT-FEWS Configuration
02 Data Handling in DELFT-FEWS
03 System Configuration
04 Regional Configuration
05 Configuring the available DELFT-FEWS modules
06 Configuring WorkFlows
07 Display Configuration
08 Mapping Id's flags and units
09 Module datasets and Module Parameters
10 Setting up an operational system
11 Setting up a forecasting system
12 Configuration management Tool
13 Additional Modules
14 Tips and Tricks
15 External Modules connected to Delft-FEWS
17 Launcher Configuration
18 FEWS data exchange interfaces
19 Parallel running of ensemble loops
Appendices
WhatIfScenarioEditor

01 Structure of a DELFT-FEWS Configuration


Introduction
Elements of the configuration
Versions of configuration and XML file naming conventions
XML Schemas and schema validation

Introduction

The configuration of DELFT-FEWS is defined in a set of XML files. In this section the different parts of the configuration are introduced. An
understanding of these different parts of the configuration is required before attempting configuration of a DELFT-FEWS system.

When dealing with the configuration, it is important to note that the configuration may be retrieved from the local file system or from the local
database.

In the former case the configuration is a set of directories, each containing different parts of the configuration. These directories are
all contained under the Config directory.

In the latter case the configuration is contained entirely within the local database in a set of tables, each table containing different
parts of the configuration. Each of the tables in the local database reflects one of the sub-directories in the file system.

When initiating DELFT-FEWS, it will first look for configuration stored on the local file system. If this is found, then the system will expect to use all
configuration from the file system. If the Config directory is not available the system will expect to use all configuration from the database. If
neither is found then an appropriate error message is issued and the system will stop.

The configuration stored in either the Config directory or the database is configuration that is common to all versions of an implementation of
DELFT-FEWS for a particular forecasting system. In the live system situation the contents of the database will be synchronised between all
operator clients and forecasting shell servers in the system, and are therefore expected to be identical in all parts of the system.

Besides this configuration that is synchronised, there is also a small set of XML files referred to as the root configuration files. These may be
unique to each operator client and/or forecasting shell server. This root configuration is required to identify, for example, whether the particular
instance of DELFT-FEWS is operating in stand-alone mode or as an operator client, and, in the latter case, information such as the IP address
of the Master Controller the operator client should log on to. These root configuration files are always used from the file system. They have no
effect on the hydrological configuration and are normally not changed during configuration of the forecasting system side of DELFT-FEWS.

Elements of the configuration


The two tables below provide an overview of the configuration elements of DELFT-FEWS. In the first table the configuration contained both in the
database and on the file system is described. The second table describes the configuration that is only available on the file system.

Table 1 Overview of different configuration items contained either in the config directory or in the database

Configuration Item | Directory on File System | Table name in database | Single/Multiple

Definition of regional configuration, including all locations, parameters etc. | RegionConfigFiles | RegionConfigurations | Single
Definition of system configuration items, including the plug-ins available to the system, definitions, icons etc. | SystemConfigFiles | SystemConfigurations | Single
Definition of modules for handling data and running forecasting models | ModuleConfigFiles | ModuleInstanceConfigs | Multiple
Definition of workflows for running sequences of modules | WorkflowFiles | WorkflowFiles | Multiple
Cold states for modules. Zip file containing model specific data exported by GA, usually before running a model | ColdStateFiles | ColdStateFiles | Multiple
Definition of mapping of ID's and parameters between external sources (e.g. telemetry, modules) and ID's and parameters defined in the DELFT-FEWS configuration | IdMapFiles | IdMaps | Multiple
Definition of unit conversions between external sources (e.g. telemetry, modules) and units used in DELFT-FEWS | UnitConversionFiles | UnitConversions | Multiple
Definition of flag conversions between external sources (e.g. telemetry, modules) and flags used in DELFT-FEWS | FlagConversionFiles | FlagConversions | Multiple
Definition of layout of user displays, including What-if scenarios, Grid Display etc. | DisplayConfigFiles | DisplayConfigurations | Multiple
Definition of module parameters stored in DELFT-FEWS | ModuleParameters | ModuleParameters | Multiple
Zipped files containing datasets for modules used by the forecasting system | ModuleDataSetFiles | ModuleInstanceDatasets | Multiple
Definition of HTML template files used in creating HTML reports for use on the web server | ReportTemplateFiles | ReportTemplates | Multiple
Map layers (shape files) used in main map display and spatial interpolation | MapLayerFiles | MapLayerFiles | Single
Images used in reports etc. | ReportImageFiles | ReportImageFiles | Single
Icons used in main map display and button bar | IconFiles | IconFiles | Single

Table 2 Overview of different configuration items contained in the file system only

Configuration Item | Directory on File System

Root Configuration. Several XML files describing some of the settings specific to the Operator Client used (e.g. client configuration, IP addresses) | These files are contained in the root of the DELFT-FEWS configuration directory

Versions of configuration and XML file naming conventions


For each of the configurations managed by DELFT-FEWS in either the database or on the file system as described above, various versions of
configuration may exist.

Configurations that are active and used as a default can be identified both in the file system and in the database.

On the file system a naming convention is introduced to identify which of the possibly multiple versions is used as the default.

The naming convention for the default version:

<Name of XML configuration file>SPACE<Version number>SPACE<default>.xml

Other versions of the configuration will have a different version number; the <default> item is omitted.

Example:

The default version of the configuration settings for the FEWS Explorer could be:

Explorer 1.00 default.xml

A second version may exist. This should then not have the <default> item in the file name:

Explorer 1.01.xml

If the configuration does not include the "default" item it will not be used. This configuration may be reverted to by adding the "default" flag - and
removing it from the other file.

In the database the default version for each configuration item is identified in an associated table. For each configuration item a default table is
available. This is identified by a table with the same name, prefixed by "Default". For example for the SystemConfigurations a table with the name
DefaultSystemConfigurations identifies which of the available versions in the former table is to be used as the default.

XML Schemas and schema validation


Each configuration item contained in an XML file must be formatted as specified in an appropriate XML schema (XSD file). Validating against the
schemas is an important step in configuring DELFT-FEWS, as the primary validation makes sure the syntax of the configuration made is correct.

There are two types of configuration in DELFT-FEWS. In the first set, for each different schema type, only one default configuration file may be
used and the name of the configuration file is unique. For the second set of configuration, multiple configuration files may be available for a
specific schema. The names of these may be defined by the user. An XML file contained in the regional configuration element is then used to
register these XML files with a user specified name to the system, and to identify the type of configuration. This file is referred to as a descriptor file.

Table 1 identifies for which type of configuration a single file per type is allowed and for which multiple instances for each type of configuration
may exist.

02 Data Handling in DELFT-FEWS


Introduction
Types of time series
External and Simulated time series
Forecast and Historical time series
External Historical time series
External Forecasting time series
Simulated Historical time series
Simulated Forecasting time series
Time Series Sets
description
moduleInstanceId/moduleInstanceSetId
valueType

parameterId
locationId/locationSetId
timeSeriesType
timeStep
relativeViewPeriod / relativeForecastPeriod
externalForecastMaxAge
externalForecastTimeCardinalTimeStep
readWriteMode
synchLevel
expiryTime
delay
multiplier
divider
incrementer
ensembleId

Introduction
One of the most important properties of DELFT-FEWS as a forecasting system is its ability to efficiently deal with large volumes of dynamic data.
Dynamic data covers mainly time series data in various formats (scalar - 0D, vector - 1D, grid - 2D, and polygon data - 2D). Dynamic data also
includes the management of model states produced by the system.

A thorough understanding of how DELFT-FEWS handles dynamic data is fundamental in the correct configuration of an operational system. For
each of the different types of dynamic data specific optimisations have been introduced.

To allow handling of time series data, the concept of a "Time Series Set" is introduced. A Time Series Set is used to retrieve and submit data to
the database. In this chapter the concept of the time series set is explained.

Types of time series

External and Simulated time series

Time series are considered to be available from two sources. All time series sourced from external systems are considered as "External". All time
series produced by the forecasting system itself are considered as "Simulated".

Forecast and Historical time series

Time series are considered to be of two categories in relation to time. Historical time series are continuous time series that describe a parameter
at a location over a period of time. Forecast time series are different to historical time series in that for each location and parameter one forecast
is independent of another forecast. A forecast is characterised by its start time and the period it covers. Generally when a new forecast is
available for a given location and parameter it will supersede any previous forecast for that location parameter. Each forecast is therefore an
independent entity.

On the basis of this, four categories of time series are identified;

+ External Historical
+ External Forecasting
+ Simulated Historical
+ Simulated Forecasting

There are significant differences in how each of these time series are handled.

External Historical time series

In an online system DELFT-FEWS will incrementally import observed data as it becomes available from external systems. This data should be
imported as an External Historical time series. When data marked as external historical is presented to the system with exactly the same values
and covering the same period as data for that location/parameter already available in the database then it will be ignored. Only new data is
imported and stored. If data for a given period is already available but is changed (manual edit or update), then the new values will be added to
the database. For each item of data added to the database, a time stamp is included to specify when the data was made available to the system.

When data of the external historical type is requested from the database, the most recently available data over that whole period is returned. If the
data for that period was imported piecewise, then the individual pieces will be merged prior to the data being returned. An example is given in
Figure 1 where data is imported sequentially. Each data imported/edited is indicated using a different line style. At the request for the complete
series (a) the most recent data available over the complete period is merged and returned. The data imported at 12:00 partially overlaps that
imported at 10:00. As the 12:00 data is the most recent, it will persist in the complete series. A manual edit may be done (or interpolation) to fill
the gap in the data. This will be returned in a subsequent request for the complete series. Although a complete series is returned, the data is
stored as it is imported, including a time stamp indicating when the import happened. If at a later stage the data as it was available directly before the
manual edit is requested, then the data added by that edit will not be included in the complete series.

Figure 1 Schematic representation of data imported as external historical

External Forecasting time series

External forecasts are imported by DELFT-FEWS as these are made available by the external forecasting systems. Again each forecast is
imported and stored individually. External forecasts are referenced by the start time of that forecast. When retrieving an external forecast time
series from the database, the most recent available forecast, as indicated by the forecast start time will be returned. The most recently available
forecast is determined as the latest forecast with a start time earlier than or equal to the start of the forecast to be made using DELFT-FEWS (forecast
T0). It is thus not possible to select an older external forecast time series on request, as the latest available is always returned.

With possible exceptions for modules considering multiple forecasts (e.g. performance module), only one external forecast is returned. Different
external forecasts are not merged.

Simulated Historical time series

Simulated historical time series are similar to the external historical time series in that they are continuous in time. The difference is that the time
series are referenced through the forecast (model) run they have been produced by. As a consequence the time series can be retrieved either by
directly requesting it through opening the run and viewing, or if the run is approved.

Simulated historical time series are generally produced by model runs where a model initial state is used. Each time series has a history, i.e. the
state used as its initial condition. Each state again has a history, i.e. the model run that produced the state. This history is used by the database in
constructing a continuous time series.

Simulated Forecasting time series

Simulated forecast time series are again similar to external forecasting time series. Again the main difference is that they are referenced through
the forecast (model) run they have been produced by. As a consequence the time series can be retrieved either by directly requesting it through
opening the run and viewing, or if the run is approved. Simulated forecast time series are treated in the same way as the external forecast time
series in that the last approved forecast (referred to as the current forecast) is seen as a default. All other runs can be seen on request only. Note
that the last approved forecast which is shown by default may not be the last available forecast.

Figure 2 schematically shows how a sequence of runs producing simulated historical and simulated forecasting time series is stored. Each
simulated historical run uses the module state saved at the end of the previous run. It can be seen that these simulated historical traces are
treated as a continuous time series when requested later. For the forecasting time series, only the most recent (approved) time series is
displayed.

Figure 2 Schematic overview of handling simulated forecasting and simulated historical time series. Three subsequent forecasts are shown, and
the resulting complete time series returned when requested after 12:00. The historical time series is traced back using the state used to create the
link to a previous run. For the forecast time series the most recent forecast supersedes previous forecasts.

The time series type simulated historical should only be assigned to time series that have a relation to a previous time series
through a model state. In all other cases, the time series is independent, and should be allocated simulated forecasting as time
series type.

Time Series Sets


Any module in DELFT-FEWS that requires data from the database, or produces data that must be stored in the database, does so through the
use of a complex data type referred to as the Time Series Set. A time series set can be compared to a query that is run against the database. It
contains all the keys to uniquely identify the set of data to be retrieved.

Time series sets form a large part of the configuration. Most modules have a standard structure, where the configuration starts with a request of
specific set of data from the database using one or more input time series sets, a number of functional items which describe how the data is
transformed, and one or more output time series sets which are used to store the data in the database under a unique combination of keys.

Figure 3 shows the elements of the Time Series Set complex type. A number of these elements are compulsory (solid borders), while other
elements are optional (dashed borders). If any of the required elements is omitted then the primary validation of that configuration will fail.

Depending on the information required, each of the elements will be used differently. Some elements are simple flags, indicating specific
properties of the time series set if they are present. For others a string or value must be given in the configuration. In code this will mean that
value or string is assigned to a variable with the name of the element. Other elements may also contain properties. The example of the time series
set below illustrates the different types of element of the time series set.

Optional items may nevertheless be required to fulfil the requirements of the module using the time series set. This will be indicated in this manual
for those modules where appropriate.

Figure 3 Schema of the Time Series Set Complex type

description

This is an optional description for the time series set. It is only used as a caption in the configuration and is not stored with the time series.

moduleInstanceId/moduleInstanceSetId

The module instance Id is the ID of the module that has written the data in the time series set to the database. This ID is one of the primary keys
and is required to uniquely identify the data on retrieval.

In the time series set a single module instance Id may be referenced or multiple module instance Id's. The latter is done either by including a list
of module instance Id's or by referencing a module instance set Id. This again resolves to a list of module instance Id's as defined in the
ModuleInstanceSets configuration file.

One or more moduleInstanceId's may be defined, or a single moduleInstanceSetId. These cannot be mixed.

valueType

This specifies the dimension/data type of the time series. This element is an enumeration of the following types;

scalar

longitudinalprofile

grid

polygon

sample

parameterId

The parameterId describes the parameter of the data in the time series. This Id is a cross reference to the Parameters configuration file in the
regional configuration defining the parameters. The reference is not enforced through an enumeration in the XML schema. If a parameter not
included in the parameter definition is referred to, an error will be generated and an appropriate message returned.

locationId/locationSetId

The locationId is a reference to the location for which the data series is valid. Each individual data series may belong to one location only. In the
time series set a single location may be referenced or multiple locations may be referenced. The latter is done either by including a list of
locationId's or by referencing a locationSetId. This again resolves to a list of locationId's as defined in the LocationSets configuration file.

One or more locationId's may be defined, or a single locationSetId. These cannot be mixed.

timeSeriesType

This specifies the type of time series (see discussion above). This is an enumeration of;

external historical

external forecasting

simulated historical

simulated forecasting

timeStep

This is the time step of the time series. The time step can be either equidistant or non-equidistant. The time step is defined in the parameters of
the timeStep element;

Attributes;

unit (enumeration of: second, minute, hour, day, week, month, year, nonequidistant)

multiplier defines the number of units given above in a time step (not relevant for nonequidistant time steps).

divider same function as the multiplier, but defines fraction of units in time step.

timeZone defines the timeZone of the timeStep, this is only relevant for units of a day or larger.

For hourly timesteps this may also be relevant in the case of time zones with a half-hour offset. Untested at the moment.

New timestep options as of August 8 2007 (development version):


meteorological seasons:
<timeStep monthDays="--03-01 --06-01 --09-01 --12-01"/>

every day at 13:00:
<timeStep times="13:00"/>

every day at 13:00 and 20:00:
<timeStep times="13:00 20:00"/>

every 12th of the month:
<timeStep daysOfMonth="12"/>

every decade (i.e. the 1st, 11th and 21st of the month):
<timeStep daysOfMonth="1 11 21"/>

relativeViewPeriod / relativeForecastPeriod

The relative view period defines the span of time for which data is to be retrieved. This span of time is referenced to the start time of the forecast
run (T0) the time series set is used in. If the time series set is not used in a forecast run (e.g. in the displays), then the reference is to the
DELFT-FEWS system time.

Parameters

unit identifies the time unit with which the time span is defined (enumeration of second, minute, hour, day, week).

start identifies the start time of the time span with reference to the T0 (in multiples of the unit defined).

end identifies the end time of the time span with reference to the T0 (in multiples of the unit defined).

startOverrulable Boolean flag to indicate if the start time given may be overruled by a user selection.

endOverrulable Boolean flag to indicate if the end time given may be overruled by a user selection.

For equidistant time series, all values at the time interval within the span defined will be returned by the database following a request. If no values
are found, then missing values will be returned at the expected time step. For non-equidistant time series, all values found within the time span
are returned. If none are found, then no values are returned.

The relativeForecastPeriod is relative to the T0 of the current/selected forecast or to the external forecast time of the latest external
forecast. Use relativeViewPeriod instead for simulated series created by the current task run.

Of the parameters in the relative view period, only the end and unit parameters are compulsory. Generally, however, the start
time will also be required. It is omitted only if the start time is determined through selection of a module state.

If the start time is overruled through user selection, it may only be earlier than the start time defined. The same holds for the end
time, but then it may only be later. The start and end time thus define the minimum time span of data.

Figure 4 Schematic representation of the relative view period with reference to the T0. The start and end time defined may be overruled if the
appropriate parameters are set to true.
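
For illustration, a relative view period requesting at least four days of data before T0 and two days after it, where both the start and the end may be extended by the user, could look as follows (the attribute layout is the same as in the full time series set example at the end of this chapter):

<relativeViewPeriod unit="hour" start="-96" end="48" startOverrulable="true" endOverrulable="true"/>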

externalForecastMaxAge

When the externalForecastMaxAge is not configured there is no maximum age for a forecast series to be used, so the returned external forecast
can be very old when there is no recent forecast available. ALL external forecasts after the T0 are ALWAYS ignored. The age of an external
forecast is defined as the time span between the external forecast time and T0.

Attributes;

unit (enumeration of: second, minute, hour, day, week)

multiplier defines the number of units given above

divider same function as the multiplier, but defines fraction of units.
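
As a sketch, consistent with the attributes listed above, limiting the returned external forecast to one that is at most two days older than T0 could be configured as:

<externalForecastMaxAge unit="day" multiplier="2"/>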

externalForecastTimeCardinalTimeStep

When no external forecast exists in the data store younger than the specified age, a new external forecast is returned with a minimum age that
applies to the specified cardinal time step.

Attributes;

unit (enumeration of: second, minute, hour, day, week, nonequidistant)

multiplier defines the number of units given above (not relevant for nonequidistant time steps).

divider same function as the multiplier, but defines fraction of units.

timeZone defines the timeZone, this is only relevant for units of a day or larger.

readWriteMode

The readWriteMode definition is mainly used in the definition of filters to be applied in the time series display when used in edit mode. This
element is an enumeration of;

read only implies the data cannot be edited.

add originals implies the data is new and is added to the database.

editing only visible to current task runs implies any changes made remain invisible to other tasks (used in What-If scenarios)

editing visible to all future task runs implies any changes made will be visible to other tasks

read originals only implies all edited, corrected or interpolated data should be ignored.

The only enumeration that can be used in time series sets in FEWS modules is:

read complete forecast reads the complete forecast series from the database. If this enumeration element is used no relativeViewPeriod
has to be configured

It is a good convention to set this property to read only in all input blocks.

synchLevel

This is an integer value determining how the data is stored and synchronised through the distributed system. There is no enumeration as the
synchLevel is used in the configuration of synchronisation, where optimisations can be defined for each synchLevel. The convention used is
explained in the Live System configuration section.

expiryTime

This element allows the time series created to have a different expiry time to the default expiry time. This means it may be removed earlier, or
later, by the rolling barrel function. For temporary series the value may be set to a very brief period. For other time series (e.g. Astronomical input
series), the value should be set sufficiently high.

Attributes;

unit (enumeration of: second, minute, hour, day, week)

multiplier defines the number of units given above.

divider same function as the multiplier, but defines fraction of units.

delay

This element allows the time series retrieved to be lagged (positive or negative). The time stamps of the series will then be shifted by the period
specified on retrieval. This is used only when retrieving time series from the database, and not inversely when submitting time series to the
database.

Attributes;

unit (enumeration of: second, minute, hour, day, week)

multiplier defines the number of units given above.

divider same function as the multiplier, but defines fraction of units.

multiplier

This element allows the time series retrieved to be multiplied by the factor given. This is used only when retrieving time series from the database,
and not inversely when submitting time series to the database.

divider

This element allows the time series retrieved to be divided by the factor given. This is used only when retrieving time series from the database,
and not inversely when submitting time series to the database.

incrementer

This element allows the time series retrieved to be incremented by the factor given. This is used only when retrieving time series from the
database, and not inversely when submitting time series to the database.

ensembleId

A time series set may be defined to retrieve all members of an ensemble at once (for example in evaluation of ensemble statistics). This is done
by defining the optional ensembleId. The ensembleId should also be defined when writing new ensemble members (e.g. on importing ensembles
in the import module).

Example:

<timeSeriesSet>
<description>Example time series set</description>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs</parameterId> <!-- illustrative parameter Id -->
<locationId>H-2001</locationId> <!-- illustrative location Id -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24" endOverrulable="true"/>
<readWriteMode>read only</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="100"/>
</timeSeriesSet>

When dealing with ensembles, the ensembleId need only be defined if the workflow activity these are used in must retrieve the
complete ensemble, or if members are to be written under a new ensembleId. In other cases the ensembleId need only be
defined in the workflow definition (see Workflows chapter). For the TimeSeriesSets defined in modules there is then no
difference between running in ensemble mode and running normally.

03 System Configuration

Introduction
The system configuration items form a primary part of the configuration of DELFT-FEWS as a system. It includes definition of the functional
elements DELFT-FEWS has available (both GUI plug-ins and Module plug-ins). The layout of the main GUI (the FEWS Explorer) and the Time
Series display are also defined.

The system configuration items include;

Explorer configuration of the FEWS Explorer (the main GUI)


ModuleDescriptors Configurations of the plug-in modules available to DELFT-FEWS
DisplayDescriptors Configuration of the plug-in displays available to DELFT-FEWS
DisplayInstanceDescriptors Definition of the plug-in displays used.
TimeSeriesDisplayConfig Configuration of the time series display.
DisplayGroups Configuration of the shortcuts to display templates available in the time series display.
LocationIcons Definition of the icons used in the FEWS Explorer layout for locations.
Permissions Configuration of permissions that can be set for tasks and windows in DELFT-FEWS.

For each of the configuration items listed above only one configuration is active (or default) at any one time. Each item is defined in an XML file
with a unique name.

Many of the configuration items required will include references to strings. To avoid duplication, a tag can be defined in the
global.properties file in the root configuration and the tag name used in the XML file.
To define a tag, add it to the global.properties file.
To reference a tag include the string $TAG_NAME$, where TAG_NAME is the tag to be used.

Contents
01 FEWS Explorer
02 Time Series Display Configuration
03 Display Groups
04 Location Icons
05 Module Descriptors
06 Display Descriptors
07 Permissions

01 FEWS Explorer
What Explorer.xml

Config group SystemConfigFiles

Required yes

Description Defines the main display and configures system settings

schema location [Link]

Introduction
The layout of the FEWS Explorer is configured in an XML file in the SystemConfigurations section. When available on the file system, the name of
the XML file is for example:

Explorer 1.00 default.xml

Explorer Fixed file name for the explorer settings

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 5 shows the main elements of the Explorer configuration. These are divided into a number of sections, each of which will be discussed
individually (these are indicated with a + sign).

Figure 5 Elements in the Explorer configuration

System Information
This section contains some general information of the DELFT-FEWS configuration.

Figure 6 Elements in the System Information section of the Explorer configuration

description

An optional description of the configuration element. This is for reference purposes only and will not be used elsewhere.

systemCaption

The caption that will appear in the title bar of the FEWS Explorer window.

systemHelpFile

Reference to the file name and location of the help file

Panel sizes
This optional section allows the panel sizes to be pre-set by the configuration as a percentage of the window size.

Panel header labels


This optional section allows the configurator to specify headers above the left panels in the Explorer window. Not all headers need to be specified
simultaneously

Explorer Options
Deprecated section

Figure 7 Elements in the Explorer Options section of the Explorer configuration

The explorer options define whether a number of items are visible on the main display when started. Each of these may either have the value
"true" or "false". Figure 7 lists all the items; the names are self-explanatory.

Map
In this section the background maps can be defined. The configuration of this section is described in the Grid Display (definition of a background
map).

Zoom extents
The zoom extents section defines the pre-configured zoom levels that can be selected from the main display.

Figure 8 Elements in the Zoom Extents section of the Explorer configuration

zoomExtent

Main element of each zoomExtent defined. Note that multiple zoom extents may exist, with the elements below to be defined for each.

Attributes:

title name of the zoom extent in the list of extents.

left, right, top, bottom

Coordinates of the corners of the zoom extent (specified in the geoDatum selected below)

mnemonic

Optional definition of a letter in the title to use as shortcut.
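
A minimal sketch of a zoom extent, assuming all items above are written as attributes of the zoomExtent element; the coordinate values are placeholders to be given in the selected geoDatum:

<zoomExtent title="Upper catchment" left="135000" right="210000" top="480000" bottom="410000" mnemonic="U"/>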

Explorer Tasks
The explorer tasks define the tasks that can be carried out from the explorer. These tasks are for example the initiation of plug-ins such as the
time series display.

NOTE: These settings should be amended only by expert users.

Figure 9 Elements in the Explorer Tasks section of the Explorer configuration

explorerTask

Main element of each explorer Task. Note that multiple tasks may exist, with the elements below to be defined for each.

Attributes;

name name of the task

iconFile

Reference to an icon file to be used in the toolbar. If left empty the name will be used to identify the task in the toolbar

Mnemonic

Optional definition of a letter in the title to use as shortcut.

taskExe

Command line for executable to run on initiating the task (the task may be either a call to an executable or to a Java class)

taskClass

Java class to run on initiating the task (the task may be either a call to an executable or to a Java class)

arguments

Optional argument string to be passed to the task

workDir

Optional working directory to start the task in.

Description

Optional description of the task (for reference only)

toolbarTask

Boolean flag to indicate if the task is to appear as a part of the toolbar

menubarTask

Boolean flag to indicate if the task is to appear in the tools menu

allowMultipleInstances

Boolean flag to indicate if multiple instances of task can be initiated concurrently

Permission

Optional name of the permission that is needed to use this task
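
A sketch of a single explorer task that starts a display plug-in from both the toolbar and the Tools menu. The icon file and Java class used here are illustrative only; the actual class name should be taken from the display descriptors of the configuration at hand:

<explorerTask name="Time Series Display">
  <iconFile>timeseriesdisplay.gif</iconFile>
  <taskClass>nl.wldelft.fews.gui.plugin.timeseries.TimeSeriesDialog</taskClass>
  <toolbarTask>true</toolbarTask>
  <menubarTask>true</menubarTask>
  <allowMultipleInstances>false</allowMultipleInstances>
</explorerTask>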

statusBar
The status Bar settings define the format of the time string displayed in the status bar

Figure 10 Elements in the Status bar section of the Explorer configuration

dateTimeFormat

String defining the time format for the time displayed in the status bar, for example HH:mm:ss.

restSettings
This section includes additional settings for the FEWS Explorer.

Figure 11 Elements in the Rest Settings section of the Explorer configuration

defaultSelectFilterId

Defines the default filter to be selected on starting the fewsExplorer

geoDatum

Default definition of the geographic datum. This is an enumeration of geographic datums supported. As further geographic datums are supported,
the list will be expanded;

For the enumeration of geoDatums supported, see Appendix B

dateTimeFormat

Format definition for time strings in displays (e.g. yyyy-MM-dd HH:mm:ss is resolved to a string such as 2004-07-03 12:34:56)

cardinalTimeStep

Default cardinal time step for the system. The actual time will be rounded down to the closest cardinal time step to obtain the system time.

Attributes;

unit (enumeration of: second, minute, hour, day, week)


multiplier defines the number of units given above in a time step.
divider same function as the multiplier, but defines fraction of units in time step.

timeZone

Defines the default time zone for the system. The default time zone is used for all times in user displays, unless locally overruled. This includes
time series displays and the system time. The time zone used by the system should conform to a time zone that does not consider summer time.
If this optional entry is not included then the timeZone is considered to be UTC+0:00 (or GMT). The time zone can be defined in two ways:

timeZoneOffset: The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or
fraction of hours) offset. (e.g. +01:00)
timeZoneName: Enumeration of supported time zones. See appendix B for list of supported time zones.
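
As a sketch, a one hour cardinal time step combined with a fixed UTC+1 time zone could be configured as follows (it is assumed here that timeZoneOffset is nested inside the timeZone element):

<cardinalTimeStep unit="hour" multiplier="1"/>
<timeZone>
   <timeZoneOffset>+01:00</timeZoneOffset>
</timeZone>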

logPanelConfig
Configuration of the log panel at the bottom of the main display. This can be configured to show all messages (DEBUG level and up), or filtered from
a defined level. Two types of log message can be displayed; those generated by the DEBUG logger and those by the EVENT logger. In normal
use the latter is defined to show messages from a level of INFO or above. The former is not normally used except during configuration in the
stand-alone system, when additional information may be useful. Different settings are available for stand alone clients and for operator clients.

Figure 12 Elements in the Log Panel section of the Explorer configuration

clientFilter

Root element of a definition of filters (multiple entries may exist).

clientId

Definition of log filters for Operator client or for Stand alone system (both may be included).

logFilter

Root element for log filter definition

eventType

Filter applicable to the system logger or to the debug logger. Enumeration of "system" or "event".

Level

Level of log message below which messages are not displayed. Enumeration of DEBUG, INFO, WARN, ERROR, FATAL ERROR
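
A sketch of a log panel filter that shows EVENT logger messages of level INFO and above for a stand alone client; the exact spelling of the clientId values and of the level element should be checked against the schema:

<clientFilter>
   <clientId>Stand alone</clientId>
   <logFilter>
      <eventType>event</eventType>
      <level>INFO</level>
   </logFilter>
</clientFilter>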

rollingBarrelOptions
This allows you to set the rolling barrel options for the client. Available options for the type are:

not_automatic: The Rolling Barrel will only run if you launch it using the F12 menu
startup_only: The Rolling Barrel will only run when starting up the client
shutdown_only: The Rolling Barrel will only run at shutdown of the client
interval: The Rolling Barrel will run at the specified interval

Example:

<type>interval</type>
<interval unit="hour" multiplier="1"/>



parameterListConfig
This allows you to set the default sorting option for the parameters in the Explorer. Available options are:

default: Use the default sorting from the Parameters configuration file.
name: Sort by parameter name (ascending).

Example:

<sortOption>name</sortOption>



notification
The system can notify the completion of a manually dispatched task run when the notification property is enabled (i.e. enabled=TRUE).

Filemenu and interactiveExportFormats


From the FEWS Explorer File menu it is possible to export selected time series to a subset of the export file formats. To enable the Export
Timeseries file menu option, it should be enabled in the Explorer configuration file.

Example:

<exportTimeSeries visible="true"/>


By default no ID mapping is applied to the exported time series. Pre-defined ID mapping configuration files can be configured in the
interactiveExportFormats element. In the example below the export type iBever will always use the ID Mapping configuration file
IdExportKwaliteit. For each export type a default ID mapping file can be configured.

The next exportTypes are available:

csv
dutch-csv
gin-xml
Hymos 4.03
Hymos 4.5
iBever
Menyanthes
pi-xml
UM-Aquo

Some export formats (like UM Aquo) explicitly require an idMap before they are enabled in the file export menu.

Example:

<interactiveExportFormat>
<name>iBever Export</name>
<exportType>iBever</exportType>
<IdMapId>IdExportKwaliteit</IdMapId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.03</name>
<exportType>Hymos 4.03</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.50</name>
<exportType>Hymos 4.5</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>UM aquo</name>
<exportType>UM-Aquo</exportType>
<IdMapId>IdExportUMAQUO</IdMapId>
</interactiveExportFormat>


All geographic locations used in DELFT-FEWS are resolved to WGS 1984. If another coordinate system is to be used, then the
transformation between this and WGS 1984 will need to be added. There is a class definition for these transformations. Once
added, the enumeration used here can be extended.

Care needs to be taken when working with time zones. Mixing time zones can lead to great confusion. It is wise to define the
time zone as an offset with respect to UTC and use this throughout. In configuring import data, a local time zone can be used. It
is advisable to always set time zones when required.

02 Time Series Display Configuration


What TimeSeriesDisplayConfig.xml

Config group SystemConfigFiles

Required no

Description Configuration file for the time series display (line styles etc)

schema location [Link]

Introduction
description
General Display Configuration
convertDatum
maximumInterpolationGap
valueEditorPermission
lableEditorPermission
commentEditorPermission
Default view period
Time Markers Display Configuration
timeMarkerDisplayOptions
color
lineStyle
label

Parameters display configuration
Default Options
ParameterDisplayOptions
PreferredColor
lineStyle
markerStyle
markerSize
Precision
min
max
maxTimeSpan
Module instance mapping
moduleInstanceMapping
description
Statistical functionality
Calendar aggregation (with associated time step)
Relative aggregation (with associated time span)
Duration exceedence
Duration non-exceedence
Moving average (with associated time span)
Central moving average (with associated time span)
Accumulation interval (with associated time span or time step)
Accumulation aggregation (with associated time span or time step)
Frequency distribution (with associated samples)
Gaussian curve (with associated samples)
Show peaks above value
Show lows below value
Scatterplot
Boxplot
Schemas for the slider
movingAccumulationTimeSpan
timeStep
samples
Descriptive Function Group
Info functions (if this type of function is specified, the display provides a hint to select a column in the table in order to get more
descriptive information):
Time series information available
Descriptive statistical functions
Duration curve

Introduction
The layout of the time series display is configured in an XML file in the System Configuration folder. When available on the file system, the name
of the XML file is for example:

TimeSeriesDisplayConfig 1.00 default.xml

TimeSeriesDisplayConfig Fixed file name for the time series display settings

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 13 shows the main elements of the TimeSeriesDisplay configuration. These are divided into a number of sections, each of which will be
discussed individually (these are indicated with a + sign).

Figure 13 Elements in the TimeSeriesDisplay configuration

description

An optional description of the configuration. This is for reference purposes only and will not be used elsewhere.

General Display Configuration

This includes global setting for the time series display.

convertDatum

This optional Boolean setting can be used to start the time series display showing levels against the global reference level. The default is that
levels are displayed against the local reference level. The difference between local and global reference is defined in the locations definition (see
regional settings).

maximumInterpolationGap

Maximum gap size that can be filled by the linear or block filler in the data editor

valueEditorPermission

Optional permission which must be assigned to a user to edit values in the data editor

lableEditorPermission

Optional permission which must be assigned to a user to edit labels in the data editor

commentEditorPermission

Optional permission which must be assigned to a user to edit comments in the data editor

Default view period


The optional default view period can be used to define the time span of data displayed in the time series display (unless overruled by the user).

Parameters

unit identifies the time unit with which the time span is defined (enumeration of second, minute, hour, day, week).
start identifies the start time of the time span with reference to the T0 (in multiples of the unit defined).
end identifies the end time of the time span with reference to the T0 (in multiples of the unit defined).
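
A minimal sketch, assuming the element is named defaultViewPeriod and uses the same attribute layout as the relative view period of a time series set, showing a default window from ten days before to two days after T0:

<defaultViewPeriod unit="day" start="-10" end="2"/>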

Time Markers Display Configuration


Time series display markers are informative lines in the display. These may be defined to display vertical lines for time values of interest. The
configuration of horizontal threshold lines is also included in this definition. Markers can be defined for three time values as well as for the
thresholds;

for time series

systemTime
displayTime
timeZero
threshold
forecastConfidence1
forecastConfidence2
forecastConfidence3

Within longitudinal profile displays, the markers can be set to display the minimum or maximum values. A variety of river bed levels can be
included in a display if these are specified in the relevant regional configuration file.

for longitudinal profile:

leftBankLevel
leftFloodPlainLevel
leftMainChannelLevel
longitudinalProfileMaximum
longitudinalProfileMinimum
rightBankLevel
rightFloodPlainLevel
rightMainChannelLevel

To visualize model layer elevations when drawing a cross section in a spatial plot, one should use: bottomLayerLevel and
topLayerLevel. This applies only to parameters with unit in meters.

Figure 14 Elements in the TimeMarkersDisplay section of the TimeSeriesDisplay configuration

timeMarkerDisplayOptions

Root element of a definition time markers (multiple entries may exist).

Attributes;

marker enumeration of the possible markers (see list above).

color

Colour of the time series marker line (see enumeration of colours in appendix A).

lineStyle

Line style of time series marker line. Enumeration of "solid", "none", "bar", "dashdot", "dashed", "dotted", "solid;thick", "dashdot;thick",
"dashed;thick", "dotted;thick", "area", "constant".

The "constant" label should preferably only be applied if the time series holds only one value.

label

Define the name of time series marker line.
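
A sketch of a time marker that draws a red dashed line at the forecast start time; it is assumed here that color, lineStyle and label are child elements of timeMarkerDisplayOptions:

<timeMarkerDisplayOptions marker="timeZero">
   <color>red</color>
   <lineStyle>dashed</lineStyle>
   <label>T0</label>
</timeMarkerDisplayOptions>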

Parameters display configuration


In this section of the time series display configuration, default attributes of the lines plotted for parameters can be allocated. These defaults will be
used when plotting lines for these parameters in the time series displays and in the reports. Note that for the colour only a preferred colour can
be identified. When the line is plotted this colour will be used only if no other line in the graph already uses it; otherwise the next
colour in the colour palette will be used.

Figure showing the options in the ParameterDisplayConfig section of the TimeSeriesDisplay configuration

Default Options

The Default Options element (which has the same structure as the individual parameter display options) can be configured to set the default
appearance. This default is overruled by an individual parameterDisplayOptions configuration for a parameter.

Figure showing the options in the Default Options section of the TimeSeriesDisplay configuration

The sequence is identical to the parameterDisplayOptions Configuration, see below

ParameterDisplayOptions

Figure 15 Elements in the ParameterDisplayOptions section of the TimeSeriesDisplay configuration

PreferredColor

The preferred colour for the line and markers. This colour will only be used if it is not yet available in the current graph. If it is, then the next colour
in the template order will be selected.

lineStyle

Line style of time series line. Enumeration of "solid", "none", "bar", "dashdot", "dashed", "dotted".

markerStyle

Marker style for markers plotted on the vertices of the line. Enumeration of "none", "+", "x", "circle", "square", "rectangle", "diamond", "triangleup",
"triangledown".

markerSize

Size of the marker in points

Precision

Decimal precision with which values are given in the table.

min

Minimum of the y-scale to be used as a default for all displays of this parameter. The minimum is used, unless a value in the time series is lower
than this minimum, in which case that is used. The minimum defined here can also be overruled in individual template definition in the
DisplayGroups (see below).

max

Maximum of the y-scale to be used as a default for all displays of this parameter. This maximum is used, unless a value in the time series is
higher than this maximum, in which case that is used. The maximum defined here can also be overruled in individual template definition in the
DisplayGroups (see below).

maxTimeSpan

Optional field, only intended to be used for viewing non-equidistant series. Specifying this value will prevent the drawing of line segments in the
TimeSeriesDisplay between adjacent time/value points for which the time difference exceeds the specified maximum time span.
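
A sketch of the display options for a water level parameter. How the parameter itself is referenced (an id attribute is assumed here) and the exact element casing should be verified against the schema:

<parameterDisplayOptions id="H.obs">
   <preferredColor>blue</preferredColor>
   <lineStyle>solid</lineStyle>
   <markerStyle>none</markerStyle>
   <markerSize>4</markerSize>
   <precision>2</precision>
   <min>0</min>
   <max>10</max>
</parameterDisplayOptions>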

Module instance mapping


The module instance mapping allows user-defined strings to be defined which will replace the moduleInstanceId in the legends of the time series
display. This can help shorten legends, or add additional information.

Figure 16 Elements in the Module Instance Mapping section of the TimeSeriesDisplay configuration

moduleInstanceMapping

Root element for each mapping to be defined.

Attributes;

id ModuleInstanceId to be replaced

description

String to replace the module instance id in the legends
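
A sketch of a mapping that shortens the legend text for an import module instance; whether description is a second attribute or a child element should be checked against the schema (it is shown as a child element here):

<moduleInstanceMapping id="ImportTelemetry">
   <description>Telemetry</description>
</moduleInstanceMapping>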

Statistical functionality
Time series statistical functions are statistical functions that use one equidistant, scalar time series to produce another one using a statistical
function. When defined, a list of statistical functions will be displayed in the Time Series Display. If the user selects a function then for each

(equidistant, scalar) time series in the display another one is generated and displayed with the calculated results. Calculated time series are
displayed in the same graph as the time series they are based on.

Some statistical functions require an additional accumulation time span, time step or samples which the user can select using a slider. As soon as
the user selects a different value from the slider the calculation is launched with the new value.

The Statistical functions group defines dedicated graphing options shown in the combo box in the toolbar:

Calendar aggregation (with associated time step)

Creates an aggregation of a time series array according to the selected time step.

Attributes:

function: calendarAggregation
timeStep: Time steps that the user selects from using the slider.

Relative aggregation (with associated time span)

Creates an aggregation of a time series array. A relative time step is calculated from the selected time span and the start time of the period of the
time series array.

Attributes:

function: relativeAggregation
movingAccumulationTimeSpan: Time spans that the user selects from using the slider.

Duration exceedence

Sorts the values in a time series array and its flags in descending order.

Attributes:

function: durationExceedence
ignoreMissings: when true, missing values are ignored and each duration will be calculated from the available values within the current
time window.
When false, missing values are added to the end of the sorted series.

Duration non-exceedence

Sorts the values in a time series array and its flags in ascending order.

Attributes:

function: durationNonExceedence
ignoreMissings: when true, missing values are ignored and each duration will be calculated from the available values within the current
time window.
When false, missing values are added to the end of the sorted series.

Moving average (with associated time span)

A moving average calculates the mean value of all values within the selected time window.

Attributes:

function: movingAverage
ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
movingAccumulationTimeSpan: Time spans that the user selects from using the slider.

The moving average function only works for true equidistant data (i.e. no daysOfMonths etc.)
The difference between moving average and central moving average is that the central moving average uses values before and after the current
value to calculate the average. Moving average only uses values in the past.
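A minimal sketch of how a moving average function with its slider values might be declared is shown below. The enclosing statisticalFunction element name and the unit/multiplier notation for the time spans are assumptions based on the attribute descriptions above and on the timeStep notation used elsewhere in this guide; the exact element names should be checked against the TimeSeriesDisplayConfig schema.

<statisticalFunction function="movingAverage" ignoreMissings="true">
    <movingAccumulationTimeSpan unit="hour" multiplier="6"/>
    <movingAccumulationTimeSpan unit="hour" multiplier="24"/>
    <movingAccumulationTimeSpan unit="day" multiplier="7"/>
</statisticalFunction>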

Central moving average (with associated time span)

A central moving average calculates the mean value of the time window of which the current value is in the middle. It is the same as the moving
average, but shifted to the past for half the time window.

Attributes:

function: centralMovingAverage
ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
movingAccumulationTimeSpan: Time spans that the user selects from using the slider.

The central moving average function only works for true equidistant data (i.e. no daysOfMonths etc.)
The difference between moving average and central moving average is that the central moving average uses values before and after the current
value to calculate the average. Moving average only uses values in the past.

Accumulation interval (with associated time span or time step)

The accumulation interval function creates cumulative series for several intervals. For the first output time step in each interval the output value
equals the input value for that time step. For a given other output time step the output value is equal to the sum of the input value for that time
step and the previous output value. The intervals are defined by the selected time span or the selected time step. If a time span is selected, then
the function uses the time span as the size for the intervals and the first interval starts at the start of the period that is visible in the time series
display. If a time step is selected, then the function uses the selected time step to create intervals. Each period between two consecutive time
steps in the selected time step is an interval.

Attributes:

function: accumulationInterval
movingAccumulationTimeSpan: Time spans that the user can select using the slider.
or
timeStep: Time steps that the user can select using the slider.

Accumulation aggregation (with associated time span or time step)

The accumulation aggregation sums the values for all time steps within the selected time window range. The time window range is defined by the
associated time span or time step.

Attributes:

function: accumulationAggregation
movingAccumulationTimeSpan: Time spans that the user can select using the slider.
or

timeStep: Time steps that the user can select using the slider.

Frequency distribution (with associated samples)

The frequencies of the available values are counted and are plotted within a number of bins to create a frequency distribution. The number of bins
can be selected using the slider. The data range that is covered by the bins can be changed as follows. Clicking the "Set boundaries" button
brings up the "Set boundaries" dialog. In this dialog the lowest bin boundary and the highest bin boundary can be changed. The space between
these two boundaries is divided regularly into the selected number of bins. Initially the boundaries are in automatic mode. In automatic mode for
each time series the minimum data value and the maximum data value are used as boundaries. When the user selects new boundaries manually,
then the new boundaries will be used instead, i.e. manual mode. In manual mode the boundaries are fixed and the same boundaries are used for
all time series, until they are changed again. This makes comparisons between different time series possible. When the user clicks the
"Automatic" button, then the boundaries will be in automatic mode again.

In manual mode the selected boundaries are remembered. This means that when the user closes and re-opens the time series display or starts
working in another separate time series display, then in manual mode the previously selected boundaries will still be used for new frequency
distributions. The mode will also be the same for all time series displays.

Attributes:

function: frequencyDistribution
samples: The number of bins that the user can select using the slider.

Gaussian curve (with associated samples)

The mean value and standard deviation are calculated for the time series, from which the normal distribution function is calculated. The selected
number of samples determines into how many samples the normal distribution function is divided.

Attributes:

function: gaussianCurve
ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
samples: Definition of sample sizes that the user can select using the slider.

Note: The displayed diagram no longer has time on the x-axis and therefore uses a different panel for displaying the graph. The associated table
panel does not currently work for this type of graph and therefore the table toggle button will be disabled.

Show peaks above value

A scatterplot is made where the x-axis shows the duration of a 'peak' (=values within this peak-area are all above the given reference level), the
y-axis shows the normalized difference between the parameter value and the reference level. The reference level can be altered by entering a
value into the input field associated with this statistical function. After clicking 'Apply' the result time series array is returned.
If no reference level is entered, then the 'peak' areas are determined according to the minimum available value of the input time series array.

Attributes:

function: showPeaksAbove

Show lows below value

A scatterplot is made where the x-axis shows the duration of a 'low' (=values within this low-area are all beneath the given reference level), the
y-axis shows the normalized difference between the parameter value and the reference level. The reference level can be altered by entering a
value into the input field associated with this statistical function. After clicking 'Apply' the result time series array is returned.
If no reference level is entered, then the 'low' areas are determined according to the maximum available value of the input time series array.

Attributes:

function: showLowsBelow

Scatterplot

The data is displayed as a collection of points, each having the value of the timeseries determining the position on the horizontal axis and the
value of the other timeseries (one or more) determining the position on the vertical axis.
The timeseries used for the horizontal- and vertical axis can be changed by the user by using the 'Series selection' dialog, which is opened by
clicking on the 'Edit' button.

Attributes:

function: scatterPlot

Note: The displayed diagram no longer has time on the x-axis and therefore uses a different panel for displaying the graph. The associated table
panel does not currently work for this type of graph and therefore the table toggle button will be disabled.

Boxplot

The data is graphically displayed by a box-and-whisker diagram. The following five-number summary is displayed: the smallest observation
(sample minimum), lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation (sample maximum). An additional dot is plotted
to represent the mean of the data in addition to the median.

Attributes:

function: boxPlot

Note: The displayed diagram no longer has time on the x-axis and therefore uses a different panel for displaying the graph. The associated table
panel does not currently work for this type of graph and therefore the table toggle button will be disabled.

Schemas for the slider

movingAccumulationTimeSpan

Defines the time spans that the user can select using the slider for this function in the TimeSeriesDisplay. MovingAccumulationTimeSpan can only
be used for functions of the following type: relativeAggregation, movingAverage, centralMovingAverage, accumulationInterval,
accumulationAggregation.

timeStep

Defines the time steps that the user can select using the slider for this function in the TimeSeriesDisplay. TimeStep can only be used for functions
of the following type: calendarAggregation, accumulationInterval, accumulationAggregation.

samples

Defines the amounts of samples that the user can select using the slider for this function in the TimeSeriesDisplay. Samples can only be used for
functions of the following type: frequencyDistribution, gaussianCurve.

Descriptive Function Group
The descriptiveFunctionGroup defines the contents of the descriptive table. Several sub-tables can be configured (see sample).

Functions supported are:

Info functions (if this type of function is specified, the display provides a hint to select a column in the table in order to
get more descriptive information):

info: displays parameter (long name+id), location (long name+id)


infoLocationId
infoLocationName
infoModuleinstance
infoParameterId
infoParameterName

Time series information available

count: total number of populated records


missings: total number of missings
completed: number of records flagged as completed (gap filling)
corrected: number of records flagged as corrected
reliables: number of records flagged reliable
unreliables: number of records flagged unreliable
doubtfuls: number of records flagged doubtful

startTime
endTime

Descriptive statistical functions

The descriptive statistical functions are used to summarise a single time series. They can be defined to describe the distribution of the data
(e.g. mean, min, max) or the data itself (info, start_time). All descriptive statistical functions produce a single value for a time series.

The descriptive functions results are displayed in group boxes that are named according to the group names that have been defined in the
configuration file.

Attributes:
• function: Can be one of the functions below.
Information functions:

info: Name of the time series


start_time: Start time of the time series in the view period
end_time: End time of the time series in the view period
Grouping functions:
count: Number of time steps in the view period
missings: Number of missing values in the view period
doubtfuls: Number of doubtful values in the view period
reliables: Number of reliable values in the view period
unreliables: Number of unreliable values in the view period
Statistical functions:
mean: Mean value in the view period
min: Lowest value in the view period
max: Highest value in the view period
percentile: Percentile in the view period for a given percentage (requires one or more value elements to define percentages; see below)
standard_deviation: Standard deviation in the view period
sum: Sum of the values in the view period
• ignoreMissings: when true, missing values are ignored and each function will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
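As a sketch only, a descriptive function group combining some of the functions above might be configured as follows. The group and function element names and the use of a name attribute are assumptions; only the function, ignoreMissings and value items follow the descriptions above, and the percentile values are invented.

<descriptiveFunctionGroup name="Statistics">
    <function function="mean" ignoreMissings="true"/>
    <function function="max" ignoreMissings="true"/>
    <function function="percentile" ignoreMissings="true">
        <value>10</value>
        <value>90</value>
    </function>
</descriptiveFunctionGroup>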

Duration curve

A duration curve illustrates the relationship between parameter values and their duration. When selected, the current graphs are replaced with
duration curves.

Attributes:

function: always duration.


Moving average
A moving average calculates the mean value of the time window directly before the current value.

Attributes:

function: always moving_average.


ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.

For more details, a Word attachment is available.

03 Display Groups
What [Link]

Config group SystemConfigFiles

Required no

Description Defines pre-configured displays

schema location [Link]

Still to document TimeSpanComplexType

Introduction
displayGroups
plot
displayGroup
subplot
line
display
SubPlotArea

Introduction
A list of pre-configured displays can be configured in the Display groups. When available on the file system, the name of the XML file is for
example: DisplayGroups 1.00 [Link]

The pre-configured displays are organised in a tree view in the time series display (see example in Figure 1). Each pre-configured display is
identified by its name, and may include one or more subplots, each with one or more time series lines.

Figure 1 Example of time series display, showing two sub-plots and tree-view of pre-configured displays

Another option is to plot a longitudinal profile in the time series display (see figure 2). The main difference with the normal time series plot is that
the river chainage is plotted on the X-axis. With the control toolbar a specific time step can be selected.

Figure 2 Example of time series display, showing longitudinal profile.

The display groups are configured by first listing the names of the filters to be shown in the display (for example "Rain gauges", "Gauges" and
"Fractions" in figure 3 below) under the displayGroup descriptor. The names of the subplots can then be added (e.g. "MacRitchie" and
"Woodleigh" below). Each of the subplots is assigned a plotId which links to the definitions of the plots and the time series set to be used. For
example in the Fractions displayGroup a stackPlot is defined with a max and min (this file is attached as an example). Please note that the
colours, linestyle, precision etc are defined in the TimeSeriesDisplayConfig.
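To illustrate the structure described above, a minimal displayGroups file could be organised as sketched below. All ids, names, locations and parameters are invented for the example, and the exact attribute names (id, name, plotId) should be checked against the schema.

<displayGroups xmlns="...">
    <plot id="RainGaugePlot">
        <subplot>
            <line>
                <timeSeriesSet>
                    <moduleInstanceId>ImportTelemetry</moduleInstanceId>
                    <valueType>scalar</valueType>
                    <parameterId>P.obs</parameterId>
                    <locationId>RainGauge1</locationId>
                    <timeSeriesType>external historical</timeSeriesType>
                    <timeStep unit="hour" multiplier="1"/>
                    <readWriteMode>read only</readWriteMode>
                </timeSeriesSet>
            </line>
        </subplot>
    </plot>
    <displayGroup name="Rain gauges">
        <display name="RainGauge1">
            <relativeViewPeriod unit="day" start="-4" end="1"/>
            <plotId>RainGaugePlot</plotId>
        </display>
    </displayGroup>
</displayGroups>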

Figure 3 Example of a configured displayGroup file

Figure 4 Root element of the display groups definition

displayGroups

Root element for each displayGroup. A display group forms one of the main nodes in the tree view and may contain multiple displays. Multiple
display groups may be defined.

Attributes:

description: Optional description


plot: Plot complex type
displayGroup: displayGroup complex type

plot

Defines the plots relating to the displayGroup

Attributes:

description: Description of plot


legendFontSize: Font size of legend
axisTitleFontSize: Title font size
tickLableFontSize: Option to change tick label font size
subplot: Link to subplot complex type.

displayGroup

Defines the groups of plots to be viewed (i.e. the branches of the shortcuts in the display)

Attributes:

description: Optional description


display: Display complex type
displayGroup: displayGroup which links to the plot id

subplot

Root element for each subplot. Multiple sub-plots may be defined per display.

Attributes:

description
axisScaleUnit
lowerMarginPercentage:
upperMarginPercentage:
inverted
plotWeight
thresholdAxisScaling
forecastConfidenceTimeSpan: TimeSpanComplexType
line
area: SubPlotAreaComplexType. Displays the extent of multiple time series as a single area
color: Overrides colours specified in the timeSeriesDisplay configuration
lineStyle: Line style of time series marker line. Enumeration of "solid", "none", "bar", "dashdot", "dashed", "dotted".
timeSeriesSet

inverted

This tag can be used to invert the y-axis of a plot. Below is a screenshot of an inverted graph.

In the example the timeseries with parameter RAIM is inverted.

line

The tag line can be used to configure options like:

dual y-axis plots


discharge/stage-plots or stage/discharge plots

Dual y-axis plots

It is possible to display a set of time series with two different parameter types in one plot. One parameter will be displayed on the left axis, the
other will be displayed on the right axis.

Below is an example of a dual y-axis plot.

Below is an example of how to configure a time series which should be displayed on the right y-axis.

<plotWeight>2</plotWeight>
<line>
<color>blue</color>
<axis>right</axis>
<timeSeriesSet>
<moduleInstanceSetId>MAP</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>MAP</parameterId>
<locationId>LNDW1XU</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</line>
<line>
<color>dark orange</color>
<axis>right</axis>
<timeSeriesSet>
<moduleInstanceSetId>MAP</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>MAP</parameterId>
<locationId>LNDW1XM</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</line>

By setting the axis to right, this time series will be displayed on the right axis instead of the (default) left axis.

Stage/discharge plots

When a discharge is displayed, it is possible to show the stage on the right axis.

The right axis is then not a linear axis but the ticks on the right axis are calculated from the discharge ticks on the left axis.

It is also possible to display the stage and show the discharge on the right axis. The example below shows a display which plots several
discharge time series.

The left axis is a linear axis with ticks for the discharge. The right axis is a non-linear axis.

The ticks on the right axis are calculated from the value of the discharge on the left axis by using a rating curve.

Below is a configuration example:

<plotWeight>5</plotWeight>
<line>
<axis>left</axis>
<ratingAxis>
<parameterGroupId>Level</parameterGroupId>
<transformationType>dischargeStage</transformationType>
<ratingCurve>
<locationId>exampleId</locationId>
</ratingCurve>
</ratingAxis>
<timeSeriesSet>
<moduleInstanceId>STAGEQ_LEDC2_LEDC2R_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>QIN</parameterId>
<locationId>LEDC2R</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>read only</readWriteMode>
<ensembleId>QPF</ensembleId>
</timeSeriesSet>
</line>

display

Definition of a pre-configured display. Each display may contain multiple sub-plots. Multiple displays may be defined per display group.

Figure 4 Elements in the Display section of the DisplayGroups configuration

Attributes;

description: Optional description


relativeViewPeriod
nrOfRecentForecasts: Can be applied to show multiple recent forecasts (simulated forecast type)
ParentLocationId:
ThresholdLocationSetId:
plotId: Link to the plot id

SubPlotArea

Options available when creating an area type display

Attributes:

color: Overrides other predefined colours


opaquenessPercentage: Opaqueness expressed as a percentage
IncludesZeros: When true the zero is always included in the painted extent

Making stacked graphs

Since the 2007/02 release the functionality of the SubPlotArea complex type has been extended to include stack plots. The only
thing needed to implement this is to add a stackPlot="true" attribute to a subplot element. Attached to this page is an example of
this functionality.

If stackPlot is true, the time series of this subplot are plotted as stacked areas, except for the time series that are specified inside
the (optional) element <area>. Area series are always plotted as a so-called 'difference area'.

The stackPlot attribute is intended to overrule the default series paint (i.e. line or bar).

It is not possible to combine 'stacked areas' and lines/bars in one plot.
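A minimal sketch of a stacked subplot is given below; the module instance, location and parameter ids are invented and only the stackPlot attribute and the overall subplot/line structure follow the descriptions above.

<subplot stackPlot="true">
    <line>
        <color>green</color>
        <timeSeriesSet>
            <moduleInstanceId>ImportFractions</moduleInstanceId>
            <valueType>scalar</valueType>
            <parameterId>Fraction.Urban</parameterId>
            <locationId>Catchment1</locationId>
            <timeSeriesType>external historical</timeSeriesType>
            <timeStep unit="hour" multiplier="1"/>
            <readWriteMode>read only</readWriteMode>
        </timeSeriesSet>
    </line>
    <line>
        <color>brown</color>
        <timeSeriesSet>
            <moduleInstanceId>ImportFractions</moduleInstanceId>
            <valueType>scalar</valueType>
            <parameterId>Fraction.Rural</parameterId>
            <locationId>Catchment1</locationId>
            <timeSeriesType>external historical</timeSeriesType>
            <timeStep unit="hour" multiplier="1"/>
            <readWriteMode>read only</readWriteMode>
        </timeSeriesSet>
    </line>
</subplot>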

Display groups may be defined while DELFT-FEWS is running and reloaded by re-opening the time series dialogue. If a mistake
is made, then the shortcuts item to open the tree view will not appear and an appropriate message will be generated. After
resolving the mistake the item will again become available on re-loading the display.

04 Location Icons
What [Link]

Config group SystemConfigFiles

Required no

Description Defines which icon to show for a location

schema location [Link]

General
Configuration of location icons can be used to help identify the different types of locations on the map display. This is an optional configuration
item. If it is not available then the default location icon will be used for all locations. When available on the file system, the name of the XML file is
for example:

LocationIcons 1.00 [Link]

LocationIcons Fixed file name for the location icon configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 20 Elements in the LocationIcons configuration

rootDir

This is the directory where the icons referred to are stored. By convention this directory is the <REGION>\Icons directory. The directory can be
given relative to the <REGION> directory. If the convention is followed then only "Icons" needs to be entered.

locationIcon

Root element of a location icon definition. Multiple entries may be defined.

description

Description of the group of locations for which an icon is defined (for reference in the configuration only).

iconID

ID of the icon to be used in the display for this group of locations. This id is the same as the name of the icon file, without the ".gif" file extension.

locationId/locationSetId

The locationId is a reference to the location for which the icon is used. Either one or more locationIds may be defined, or a single locationSetId.
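Putting these elements together, a sketch of a LocationIcons file might look as follows; the icon file names, location ids and locationSet id are invented, and the exact element names (in particular the casing of iconID) should be checked against the schema.

<locationIcons xmlns="...">
    <rootDir>Icons</rootDir>
    <locationIcon>
        <description>Rain gauges</description>
        <iconID>raingauge</iconID>
        <locationSetId>RainGauges</locationSetId>
    </locationIcon>
    <locationIcon>
        <description>River gauges</description>
        <iconID>rivergauge</iconID>
        <locationId>Gauge1</locationId>
        <locationId>Gauge2</locationId>
    </locationIcon>
</locationIcons>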

05 Module Descriptors
What [Link]

Config group SystemConfigFiles

Required yes

Description Registers the available modules to the system

schema location [Link]

Only expert users should attempt to make changes in this configuration. Errors could affect the functionality of the complete
system.

The module descriptors file is used to register module plug-ins that can be used in workflows. The module descriptors define the name of the
module and the associated Java class to call. This class must implement the module plug-in interface for it to work within DELFT-FEWS. All
modules that are included in the distribution of DELFT-FEWS are registered in the Module Descriptors. When available on the file system, the
name of the XML file is for example:

ModuleDescriptors 1.00 [Link]

ModuleDescriptors Fixed file name for the module descriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 21 Elements in the ModuleDescriptors configuration

moduleDescriptor

Root element of the module descriptor configuration. One entry is required for each module defined.

Attributes;

Id: Id or Name of the module

description

Optional description of the module. This is used for reference only.

className

Java class called when running the module as referenced by its Id. NOTE; this class must implement the DELFT-FEWS module plug-in interface.
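As a sketch only, a single registration might look as follows; the module id and Java class name are purely illustrative placeholders, not an actual module shipped with DELFT-FEWS.

<moduleDescriptor id="MyCustomModule">
    <description>Example registration of a custom module plug-in</description>
    <className>com.example.fews.MyCustomModule</className>
</moduleDescriptor>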

06 Display Descriptors
What [Link]

Config group SystemConfigFiles

Required yes

Description Registers the available gui-display plugins to the system

schema location [Link]

Only expert users should attempt to make changes in this configuration. Errors could affect the functionality of the complete
system.

The display descriptors file is used to register display plug-ins that can be called from the DELFT-FEWS GUI. The display descriptors define the
name of the display and the associated Java class to call. This class must implement the display plug-in interface for it to work within
DELFT-FEWS. All displays that are included in the distribution of DELFT-FEWS are registered in the Display Descriptors. When available on the
file system, the name of the XML file is for example:

DisplayDescriptors 1.00 [Link]

DisplayDescriptors Fixed file name for the display descriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 22 Elements in the DisplayDescriptors configuration

displayDescriptor

Root element of the display descriptor configuration. One entry is required for each display defined.

Attributes;

Id: Id or Name of the display

description

Optional description of the display. This is used for reference only.

className

Java class called when running the display as referenced by its Id. NOTE; this class must implement the DELFT-FEWS display plug-in interface.
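Analogous to the module descriptors, a sketch of a single display registration is shown below; the display id and Java class name are illustrative placeholders only.

<displayDescriptor id="MyCustomDisplay">
    <description>Example registration of a custom display plug-in</description>
    <className>com.example.fews.MyCustomDisplay</className>
</displayDescriptor>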

07 Permissions

What [Link]

Config group SystemConfigFiles

Required no

Description Set permissions for user groups

schema location [Link]

What [Link]

Config group SystemConfigFiles

Required no

Description Define user groups

schema location [Link]

General
Permissions can be added to the FEWS configuration to allow users (user groups) to access Explorer tasks, Data Editor functions, Filters, etc.
Permissions can be optionally configured in the following configuration files:

[Link]
Restrict access to explorer tasks such as the Time Series Dialog or the Grid Display. The tasks will not be available in the menus
or toolbar for users which do not have the right permissions
[Link]
Control who can add and edit values in the data editor window
Control who can add and edit labels in the data editor window
Control who can add and edit comments in the data editor window
[Link]
Control who can create, edit, delete, persist and run scenarios in the scenario editor window
[Link]
Control which displays are visible in the spatial plot window for the current user
[Link]
Control which filters are visible in the FEWS explorer for the current user
[Link]
Control which shortcuts are visible in the Time Series Display for the current user
[Link]
Control which users can view, run and approve workflows in the Forecast Dialog and Manual Forecast Dialog.
Also control which users can delete forecasts and change expiry times of forecasts in the Forecast Dialog.
NOTE: Using permissions on workflows indirectly influences the behaviour of the scenario editor window. Scenarios based on
hidden or non-runnable workflows are not shown in the scenario editor.

Permissions are to be configured as follows

Configure optional permission names in any of the above described configuration files.
Create the permissions in the permissions configuration file (Permissions 1.00 [Link]) and configure usergroup names which should
have access to the permissions.
Create the usergroups in the usergroup configuration file (Usergroups 1.00 [Link]) and assign them user names.

Configure optional permission names

This can be achieved by adding the optional permission tag to the configuration and give it a self-describing name.

Create the permissions configuration file

When available on the file system, the name of the XML file is for example:

Permissions 1.00 [Link]

Permissions Fixed file name for the permissions configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 3 Elements in the Permissions configuration

Permissions 1.00 [Link]


<permissions xmlns=".....">
<permission id="AllowDataEditor">
<userGroup id="Hydroloog"/>
<userGroup id="Veldmedewerker"/>
</permission>
<permission id="AllowManualForecast">
<userGroup id="Hydroloog"/>
</permission>
<permission id="AllowLabelEditor">
<userGroup id="Hydroloog"/>
</permission>
<permission id="AllowCommentEditor">
<userGroup id="Hydroloog"/>
<userGroup id="Veldmedewerker"/>
</permission>
<permission id="AllowValueEditor">
<userGroup id="Hydroloog"/>
</permission>
</permissions>

Permission

Unique name of the permission

Usergroup

Id of each usergroup that is granted the given permission

Create the user groups

When available on the file system, the name of the XML file is for example:

Usergroups 1.00 [Link]

Usergroups Fixed file name for the user group configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 4 Elements in the Usergroups configuration

UserGroups 1.00 [Link]


<userGroups xmlns="....">
<userGroup id="Veldmedewerker">
<user id="Stephan Zuiderwijk"/>
<user id="Marc van Dijk"/>
</userGroup>
<userGroup id="Hydroloog">
<user id="Toon van Peel"/>
</userGroup>
</userGroups>

Usergroup

Base tag for a usergroup; configure one for each user group. Usergroups can contain other usergroups.

User

Name of the user that belongs to the usergroup. Users can be placed in multiple usergroups.

Example of permissions in the Explorer XML and TimeSeriesDisplayConfig XML file.

Explorer 1.00 [Link]


<iconFile>%FEWSDIR%/icons/[Link]</iconFile>
<mnemonic>E</mnemonic>
<arguments>table</arguments>
<taskClass>[Link]</taskClass>
<toolbarTask>false</toolbarTask>
<menubarTask>true</menubarTask>
<accelerator>ctrl E</accelerator>
<permission>AllowDataEditor</permission>

....

TimeSeriesDisplayConfig 1.00 [Link]
<convertDatum>true</convertDatum>
<valueEditorPermission>AllowValueEditor</valueEditorPermission>
<labelEditorPermission>AllowLabelEditor</labelEditorPermission>
<commentEditorPermission>AllowCommentEditor</commentEditorPermission>

....

04 Regional Configuration

Introduction
The regional configuration items form the basis of the region-specific configuration of DELFT-FEWS as a forecasting system for a particular river
system or administrative region. It includes definitions of the parameters, locations, units and flags used, which may vary per application of the
system.

The region configuration items include (items in bold are required for a minimal system):

Parameters definition of the parameters used for time series


Locations definition of all locations
LocationSets definition of sets of locations
Branches definition of branches (used for displaying longitudinal profiles)
Grids definition of grids used in the system.
TimeUnits Definition of time units supported by the system (used for mapping external time units to internal time units).
Filters Definition of filters in the main map display
ValidationRuleSets Definition of validation rule sets
Thresholds definition of threshold types
ThresholdValueSets definition of threshold values for all locations and data types.
ValueAttributeMaps attributes to be mapped from time series values for use in reports.
ColdModuleInstanceStateGroups Definition of groups of cold module states.
ModuleInstanceDescriptors Definition of instances of modules
WorkflowDescriptors Definition of workflows
IdMapDescriptors Definition of ID maps used for mapping external parameters and ID's to DELFT-FEWS locations and parameters.
FlagConversionDescriptors Definition of Flag conversions used for mapping external data quality flags to DELFT-FEWS data quality
flags.
UnitConversionsDescriptors Definition of unit conversions used for mapping external units to DELFT-FEWS units.
CorrelationEventSetsDescriptors Definition of sets of correlation events (used by correlation module only).
TravelTimesDescriptors Definition of sets of travel times for correlation events (used by correlation module only).
HistoricalEvents Definition of historical events to be plotted against real time forecast data for reference purposes.

From version 2008.03 onwards you can now choose to define Locations, LocationSets, IdMaps, DisplayGroups,
ThresholdValueSets and ValidationRuleSets in a dbf file. This can be a useful way of defining all your information in one place.
It also means you can efficiently configure Fews without XML Spy and reduces XML parsing time when starting Fews. See
chapter 22 for further details.

Many of the configuration items required will include references to strings. To avoid duplication, a tag can be defined in the
[Link] file in the root configuration and the tag name used in the XML file (see also System Configuration).

Contents
01 Locations
01 - Related Locations
02 LocationSets
03 Parameters
05 Branches
06 Grids
07 Filters
08 ValidationRulesets

09 Thresholds
10 ThresholdValueSets
11 ColdModuleInstanceStateGroups
12 ModuleInstanceDescriptors
13 WorkflowDescriptors
14 IdMapDescriptors
15 FlagConversionsDescriptors
16 UnitConversionsDescriptors
17 CorrelationEventSetsDescriptors
18 TravelTimesDescriptors
19 TimeUnits
20 Historical Events
21 Value Attribute Maps
22 Locations and attributes defined in Shape-DBF files
23 Qualifiers
24 Topology
25 ModifierTypes
26 TimeSteps

01 Locations
What [Link]

Required yes

Description Definitions of all locations

schema location [Link]

DELFT-FEWS is a location oriented system. All time series data must be referenced to a (geographic) location. This location must be identified by
its geographic coordinates within the coordinate system used. When available on the file system, the name of the XML file is for example:

Locations 1.00 [Link]

Locations Fixed file name for the locations configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted)

Figure 23 Elements in the Locations configuration

timeZone (available since build 25437)

Optional time zone for dates and times defined in the locations configuration file. If no time zone is defined, then dates and times are in GMT.

geoDatum

Definition of the geoDatum used in defining the locations. This may be different than the geoDatum used in the displays. For enumeration of
geoDatums, see Appendix B.

location

Root element for the definition of each individual location. Multiple entries may be defined.

Attributes;

id: id of the location. This must be unique


name: name of the location used in displays and reports

description

Optional description of the location. This description will appear as a tool-tip when hovering over the location in the map display.

shortName

Optional short name for the location. This string when available will replace the name in the time series display legend.

toolTip

optional element to customize the tooltip shown when hovering over a location in the main map display.

You can use \n or CDATA or HTML when you need multiple lines. Besides tags defined in the [Link] file you can use the following
tags:

%ID%, %NAME%, %DESCRIPTION%, %LAST_VALUE%, %LAST_VALUE_TIME%,


%FORECAST_START_TIME%, %MAXIMUM_VALUE%, %MAXIMUM_VALUE_TIME%

The tooltip supports HTML, including images and hyperlinks. The URL in the hyperlink can be an internet URL, an executable file, a document file, or a
[Link]. Use the CDATA xml tag to include HTML in an XML file. Check the available HTML functionalities here.

By default (if not defined) the following tool tip is used.

Name: %NAME%\n
Desc: %DESCRIPTION%\n
Last value \[%LAST_VALUE%\] Time \[%LAST_VALUE_TIME%\]\n
Forecast Start Time \[%FORECAST_START_TIME%\]\n
Maximum \[%MAXIMUM_VALUE%\] Time \[%MAXIMUM_VALUE_TIME%\]

A more advanced example, using HTML (use the <BR> tag to start a new line):

<toolTip><![CDATA[<html>
<table id="details">
<tr>
<td width="50" valign="top">ID</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">%ID%</td>
</tr>
<tr>
<td width="50" valign="top">Naam</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">%NAME%</td>
</tr>
<tr>
<td width="50" valign="top">Type</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">%DESCRIPTION%</td>
</tr>
<tr>
<td width="50" valign="top">Foto</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">
<a href="file:/$FOTOSDIR$/%ID%.jpg" >
<img src="file:/$FOTOSDIR$/thumbs/%ID%.jpg" border="0">
</a>
</td>
</tr>
<tr>
<td width="50" valign="top">Documentatie</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">
<a href="file:/$PDFDIR$/%ID%.pdf">%ID%.pdf</a>
</td>
</tr>
</table>
</html>
]]></toolTip>

parentLocationId

Optional Id of a location that functions as a parent. In the filters child locations (locations that refer to a parent) are normally invisible. However,
they are displayed in the graphs whenever a parent is selected.

visibilityPeriod (available since build 25437)

Optional. This is the period for which a location is visible in the user interface. The start and the end of the period are inclusive. If no
visibilityPeriod is defined for a location, then the location is visible for all times. Currently the visibility period is used in the map (explorer) window,
the time series display and the spatial display.

startDateTime: the date and time of the start of the visibility period. The start of the period is inclusive. If startDateTime is not defined,
then the location is visible for all times before endDateTime.
endDateTime: the date and time of the end of the visibility period. The end of the period is inclusive. If endDateTime is not defined, then
the location is visible for all times after startDateTime.

x

Geographic coordinate of the location (Easting)

y

Geographic coordinate of the location (Northing)

z

Optional elevation of the location above the global reference.

The elevation defined will be used for converting a parameter supporting local and/or global datum. By convention the data
stored in the DELFT-FEWS database is at the local datum. The elevation defined here is added when displaying/converting to a
global datum.

The value defined for the elevation should be the gauge zero for river gauges where an exact level is important.

When using transformations and the datum needs to be converted and also a multiplier, divider and/or incrementer are defined
in the time series set of the data, then the following equations are used.
When reading data from the database the calculation is:
value = (stored_value + z) * multiplier / divider + incrementer
When writing data to the database the multiplier, divider and incrementer of the time series set are not used, so the calculation
is:
stored_value = value - z

All time series data in DELFT-FEWS must be referenced to a location. This is the case for all data types (scalar, longitudinal,
grids & polygons).

For Grids and Longitudinal profiles, additional information may be required and defined in the grids and branches configurations
respectively. For scalar and polygon time series no additional information is required.

01 - Related Locations
Function: Functionality to define related locations and how to use them in your configuration

Where to Use? Locations, LocationSets, Filters, DisplayGroups, Transformations

Why to Use? To be able to simply relate series of several locations to each other, e.g. water levels to a weir, raingauge to a catchment etc.

Description: Based on the DBF shape file or [Link] you can easily manage the configuration.

Available since: Delft-FEWS 2011.01

Contents
Overview
How to be used
Examples
[Link]
[Link]
timeSeriesSets (in Filters, DisplayGroups and Transformations)
Transformation to compute flows at a weir

Overview

This functionality enables linking time series between locations, without copying any data. It can be done both in the user interface
(filters and displayGroups in the FEWS Explorer) and in the transformations.
Typical examples of this functionality are:

relate the nearest rain gauge time series to a catchment or fluvial gauge
relate water level gauges to structures: like upstream and downstream water levels to several gates.

If you relate for example a rain gauge to a list of fluvial gauges, it will look in the filters as if the location has rainfall time series as a parameter.
Once you select this parameter and location to make a graph, you will see that the rainfall time series is displayed at the original rain gauge.

How to be used

Some remarks:

in timeSeriesSets you should always be sure to use the locationRelationId only if the locationRelationId is defined for all locations in the
locationSet. It is not allowed to have an undefined or empty locationRelationId; in that case a configuration error will occur. This is easy to
ensure: add a relatedLocationExists constraint to the locationSet.
in transformations you can easily connect series from one location to another. If you have for example a weir with two gates, you can
define the upstream and downstream water level gauges as relatedLocations, but you refer to them through each gate.

Examples

[Link]

This example shows how to configure related locations in the [Link] configuration file. The upstream and downstream waterlevel gauges
are related to the two weir gate locations. Notice that the namespace relatedLocationId should be added to the XML definition.

<geoDatum>Rijks Driehoekstelsel</geoDatum>
<location id="weir_gate1">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
<relatedLocationId:H_US>weir_h_us</relatedLocationId:H_US>
<relatedLocationId:H_DS>weir_h_ds</relatedLocationId:H_DS>
</location>
<location id="weir_gate2">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
<relatedLocationId:H_US>weir_h_us</relatedLocationId:H_US>
<relatedLocationId:H_DS>weir_h_ds</relatedLocationId:H_DS>
</location>
<location id="weir_h_us">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
</location>
<location id="weir_h_ds">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
</location>
<location id="weir">
<x>0</x>
<y>0</y>
<relatedLocationId:METEO>meteo_station</relatedLocationId:METEO>
</location>
<location id="meteo_station">
<x>0</x>
<y>0</y>
</location>
....

[Link]

This example shows how to define related locations if you use a DBF file to define the locations.

<esriShapeFile>
<file>myLocDBF</file>
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<id>%ID%</id>
<name>%NAME%</name>
<description>%TYPE%</description>
<parentLocationId>%PARENT_ID%</parentLocationId>
<x>%X%</x>
<y>%Y%</y>
<relation id="METEO">
<relatedLocationId>%METEO%</relatedLocationId>
</relation>
<relation id="H_US">
<relatedLocationId>%H_US%</relatedLocationId>
</relation>
<relation id="H_DS">
<relatedLocationId>%H_DS%</relatedLocationId>
</relation>
<attribute id="regio">
<text>%REGIO%</text>
</attribute>
<attribute id="type">
<text>%TYPE%</text>
</attribute>
...
</esriShapeFile>
<constraints>
<relatedLocationExists locationrelationid="METEO"/>
<relatedLocationExists locationrelationid="H_US"/>
<relatedLocationExists locationrelationid="H_DS"/>
</constraints>


timeSeriesSets (in Filters, DisplayGroups and Transformations)

This example shows how you link to a related location in the timeSeriesSet. This can be done in the filters, displayGroups and Transformations
only.

<moduleInstanceId>ImportCAW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationRelationId>METEO</locationRelationId>
<locationSetId>my_locations</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>


Transformation to compute flows at a weir

This example shows a transformation (from the new TransformationModule) that computes the flows over the two weir gates, by using the
upstream and downstream water level gauges. By using the relatedLocations, this can be set up very easily in only one transformation.

<structure>
<generalWeirVariableHeight>
<headLevel>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationRelationId>H_US</locationRelationId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</headLevel>
<tailLevel>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationRelationId>H_DS</locationRelationId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</tailLevel>
<height>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</height>
<coefficientSet>
<width>1</width>
<freeFlowLimitCoefficient>1</freeFlowLimitCoefficient>
<freeDischargeCoefficient>1</freeDischargeCoefficient>
<drownedDischargeCoefficient>1</drownedDischargeCoefficient>
</coefficientSet>
<discharge>
<timeSeriesSet>
<moduleInstanceId>Flows</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</discharge>
</generalWeirVariableHeight>
</structure>


02 LocationSets

What [Link]

Required no

Description Definitions of groups of locations

schema location [Link]

Location sets may be used to define logical groups of locations. Often an action may need to be taken on a whole set of locations (e.g. validation).
By creating a LocationSet the action need only be defined once.

Any location may appear in more than one location set. Internally a location set is simply evaluated as a list of locations.

When available on the file system, the name of the XML file is for example:

LocationSets 1.00 [Link]

LocationSets Fixed file name for the locationSets configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 24 Elements in the LocationSets configuration

locationSet

Root element for the definition of a location set. Multiple entries may exist.

Attributes;

id: Id of the location set. This must be unique

description

Optional description of the location set. Used for reference purposes only.

locationId

Location ID configured to be a member of the locationSet. Multiple entries may exist.

locationSetId

LocationSet ID configured to be a member of the locationSet. Multiple entries may exist. This is useful to group locationSets together.

esriShapeFile

It is also possible to define locationSets with locations that are automatically generated (so NOT defined in the [Link]) from an ESRI Shape
file. See the next page for detailed information.
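A sketch of two locationSet definitions is given below; the ids are invented, and the location sets referenced by the second definition are assumed to be defined elsewhere. The second set illustrates grouping existing locationSets together.

<locationSet id="RainGauges">
    <description>All rain gauge locations (illustrative)</description>
    <locationId>RainGauge1</locationId>
    <locationId>RainGauge2</locationId>
</locationSet>
<locationSet id="AllGauges">
    <locationSetId>RainGauges</locationSetId>
    <locationSetId>RiverGauges</locationSetId>
</locationSet>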

03 Parameters
What [Link]

Required yes

Description Definitions of all parameters used in DELFT-FEWS

schema location [Link]

All time series data in DELFT-FEWS must be defined to be of one of the parameters supported. This configuration file defines the list of supported
parameters, including the unit of the parameter.

Parameters are organised into ParameterGroups. All parameters within a group should have the same properties and the same units. Only
parameters of the same group may be displayed in a single (sub) plot in the time series display, though this can be overruled if requested using a
display template.

When available on the file system, the name of the XML file is for example:

Parameters 1.00 [Link]

Parameters Fixed file name for the Parameters configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 25 Root element of the parameter definition

displayUnitConversionId

The unit conversions id to convert from the (internal) units to the display units. This id should be available in the UnitConversionsDescriptors. Only
required when a displayUnit is specified for a parameter group

configUnitConversionId

The unit conversions id to convert from the units specified in config files to the internal units for this parameter. This id should be available in the
UnitConversionsDescriptors. Only required when a user unit is specified for a thresholdValuesSet, validationRuleSet or ratingCurve

ratingCurveStageParameterId

This parameter is used to resolve the internal stage unit and display stage unit and the name (label) that is used for the rating curve stage
axis/column in the user interface

ratingCurveDischargeParameterId

This parameter is used to resolve the internal discharge unit and display discharge unit and the name (label) that is used for the rating curve
discharge axis/column in the user interface

parameterGroup

Root element of each definition of a parameter group. Multiple entries may exist.

Attributes;

id: Id of the parameter group. The ID must be unique.


name: optional name for the parameter group. Used for reference only.

Figure 26 Elements of the ParameterGroup configuration in the parameter definition

description

Optional description of the parameter group. Used for reference purposes only.

parameterType

Defines whether the parameters in the group are "instantaneous", "accumulative" or "mean" parameters.

dimension

unit

Unit of the parameters defined in the group. The unit may be selected from a list of units supported by DELFT-FEWS. These are generally SI
units. For an enumeration of supported units see Appendix B.

displayUnit

Specify when the unit seen by the user is not the same as the unit of the values internally stored in the data store. Also specify
displayUnitConversionsId above. In this unit conversions the conversion from (internal) unit to display unit should be available

usesDatum

Indicates if the parameters in the group are to be converted when toggling between local and global datum. Value is either true or false. If the
value is true, the elevation defined in the location is added to the time series in the database on conversion. See Locations

valueResolution

Default accuracy (smallest increment between two values) of the calculated or measured values for all parameters in this group. Value resolution
can also be specified for a single parameter (since 2011.01). By default the resolution is dynamic and the values are stored as a 32 bit floating
point with 6 significant digits. Floating points don't compress very well and are slow to decode. It is far more efficient to store a value as an integer
with a scale factor (= value resolution). When an 8, 16 or 24 bit integer is not big enough to achieve the value resolution, the default 32 bit floating
point is used as fall back. E.g. when the accuracy of the water level is half a centimetre, specify 0.005. When the accuracy of the discharge is 10
m3/s, specify 10.

parameter

Definition of each parameter in a parameter group. Multiple parameters may be defined per group.

Attributes;

id: Id of the parameter. The ID must be unique.


name: optional name for the parameter. Used for reference only.

shortName

Short name for the parameter. This name will be used in the time series display and reports.
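Combining the elements above, a sketch of a parameter group with two parameters is given below; the ids, unit and value resolution are invented for the example and the element order should be checked against the schema.

<parameterGroup id="Level">
    <description>Water levels (illustrative)</description>
    <parameterType>instantaneous</parameterType>
    <unit>m</unit>
    <usesDatum>true</usesDatum>
    <valueResolution>0.005</valueResolution>
    <parameter id="H.obs" name="Water level observed">
        <shortName>H.obs</shortName>
    </parameter>
    <parameter id="H.sim" name="Water level simulated">
        <shortName>H.sim</shortName>
    </parameter>
</parameterGroup>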

Configuring a hierarchical tree view for parameters in Fews Explorer

As of Fews 2010_01 there is a second root node parameters available in the [Link] schema. This new element facilitates configurations
of the parameters to be displayed in Fews Explorer in the form of a hierarchical tree. The parameters node embeds the parameterGroups element
described above. The element parameterRootNode is of type ParameterNodeComplexType and represents the top node of the hierarchical tree
structure that is to be displayed. Other parameterNodes can be nested within each instance of ParameterNodeComplexType. Each node has an
id field and can have a name and description and multiple parameterIds. The parameterIds from parameterGroups that are not included in the
hierarchical tree specified by parameterRootNode are added automatically at the root level.

id

This attribute is used as the identifier label which is displayed when ids are made visible within Fews Explorer.

name

This element is used as the name label which is displayed when names are made visible within Fews Explorer.

description

This element is used as the description label which is displayed when descriptions are made visible within Fews Explorer.

parameterId

This element must refer to the identifier of one of the parameters defined within the parameterGroups section.

NB. Each parameterId can only be used once and has to be defined within one of the parameterGroups.

parameterNode

This element is of type ParameterNodeComplexType and can be used similar to parameterRootNode.

Sample xml for hierarchical parameters

<parameterGroups>
<parameterGroup id="Discharge">
...as before ...
</parameterGroup>
</parameterGroups>
<parameterRootNode id="Parameters">
<parameterNode id="Discharge">
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
</parameterNode>
<parameterNode id="Water Level">
<name>Water level</name>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
...
</parameterNode>
</parameterRootNode>


Sample parameters tree view in Fews Explorer

In case there is no timeseriesset for the combination of parameter and location, the nodes will be grayed out:

05 Branches
What [Link]

Required no

Description Definitions of branches

schema location [Link]

DELFT-FEWS is a location oriented system. All time series data must be referenced to a (geographic) location. Scalar time series need no
additional information. For longitudinal time series data, each point in the vector must be referenced to a location in a branch structure. This
location must be identified by its coordinate within the branch (chainage), and may also be defined by its geographic coordinates within the
coordinate system used.

When available on the file system, the name of the XML file is for example:

Branches 1.00 [Link]

Branches Fixed file name for the Branches configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 27 Elements of the Branches configuration

geoDatum

Definition of the geoDatum used in defining the locations of the branch. This may be different than the geoDatum used in the displays. For
enumeration of geoDatums, see Appendix B.

branch

Root element of the definition of a branch. Multiple entries may exist to define multiple branches.

Attributes;

id: Id of the current branch. This ID must refer to a location ID in the Locations definition.

branchName

Name of the current branch. Used for reference purposes only

startChainage

Chainage of the start of the branch (only used in the longitudinal display)

endChainage

Chainage of the end of the branch (only used in the longitudinal display)

upNode, downNode

Optional item in branch to create branch linkage. This information is not used in DELFT-FEWS, but may be relevant to an external module when
exported through the published interface.

zone

Optional item in branch that allows definition of a zone – this is a part of the branch that may be indicated in the longitudinal display with the name
given (currently not used in DELFT-FEWS).

pt

Definition of the points belonging to the branch. At least two points must be defined per branch.

Attributes;

chainage; coordinate of point as measured along the branch (should be greater than or equal to the start chainage and less than the end
chainage).
label; label used to identify the point
x; optional geographic coordinate of the point (Easting)
y; optional geographic coordinate of the point (Northing)
z; optional elevation of the point. The elevation is an important attribute for plotting in the longitudinal profile display. This elevation is
taken as the bed level.
description; optional description string. When defined a vertical line will be drawn in the longitudinal display at the location of this point,
and the description given displayed.
thresholdValueSetId; optional reference to an ID of a threshold value set. When defined, the threshold values will be drawn as markers
in the longitudinal display at the location of this point.

comment

Comment item. Used for reference only.
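
Since no example is included in this extract, the fragment below is a minimal sketch of a branch definition based on the elements described above; the ids, chainages and coordinates are invented and the exact element order should be checked against the branches schema.

<geoDatum>Rijks Driehoekstelsel</geoDatum>
<branch id="Branch_Demo">
    <branchName>Demo branch</branchName>
    <startChainage>0</startChainage>
    <endChainage>5000</endChainage>
    <!-- at least two points per branch; chainage is measured along the branch -->
    <pt chainage="0" label="upstream" x="120000" y="480000" z="1.5"/>
    <pt chainage="2500" label="weir" x="121500" y="481000" z="1.1" description="Weir"/>
    <pt chainage="5000" label="downstream" x="123000" y="482000" z="0.8"/>
</branch>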

06 Grids
What [Link]

Required no

Description Grid definitions, either regular or irregular

schema location [Link]

DELFT-FEWS is a location oriented system. All time series data must be referenced to a (geographic) location. Scalar time series need no
additional information. For grid time series data, each point in the grid must be referenced to a location in a grid structure.

Grids may be regular or irregular. In regular grids each cell has the same width, height and area within the coordinate system it is specified in.

In irregular grids the grid has a fixed number of rows and columns, but the cell height and width is not equal in each row and column. For these
grids additional information is required on the location of each individual cell in the grid to allow display in the grids display as well as for use in the
spatial interpolation routine.

When available on the file system, the name of the XML file is for example:

Grids 1.00 [Link]

Grids Fixed file name for the Grids configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 28 Elements of the grids configuration

regular

Definition of a regular grid. One or more regular grids may be defined.

Attributes;

locationId; location Id of the grid. This locationId must be included in the locations definition.

irregular

Definition of an irregular grid. One or more irregular grids may be defined.

Attributes;

locationId; location Id of the grid. This locationId must be included in the locations definition.

Regular grids

Figure 29 Elements of the Regular Grid in the Grids configuration

description

Optional description of the grid. Used only for reference purposes

rows, columns

Number of rows and columns in the grid

geoDatum

Coordinate system the grid is defined in. This may be a different coordinate system to that used in the main map display. The coordinate system
may also differ per grid, as a grid may be regular in one coordinate system, but may be irregular in another. Defining the grid in the regular
coordinate system is easier.

firstCellCenter

Coordinates of the center of the first grid cell. The convention in DELFT-FEWS is that this is the center point of the top left cell in the grid
(Upper-Left).

firstCellCenter: x

Geographic coordinate of the first cell center point (Easting)

firstCellCenter: y

Geographic coordinate of the first cell center point (Northing)

firstCellCenter: z

Optional elevation of the first cell center point. If only this elevation is defined, then all cells in the grid are assumed to have the same
elevation.

xCellSize / columnWidth

Cell width of each column in the grid. The cell width is given in the unit of the coordinate system referred to in the geoDatum. Generally this is
metres, but in WGS 1984 this is decimal degrees.

The xCellSize-element is used when all cells are equal in width. Please use the columnWidth-element to define cells with variable columnWidth.

yCellSize / rowHeight

Cell height of each row in the grid. The cell height is given in the unit of the coordinate system referred to in the geoDatum. Generally this is
metres, but in WGS 1984 this is decimal degrees.

The yCellSize-element is used when all cells are equal in height. Please use the rowHeight-element to define cells with variable height.

z / zBottom / zTop

Optional definition of the elevation of each point in the grid. This definition is only necessary where a datum is required in for example
3-dimensional interpolation. This may be applied in for example interpolating temperature grids in high mountain areas. Alternative uses are the
display of elevation in a cross section of the Spatial Display. The bottom/top layer is only displayed if the parameter unit is meters and if the
corresponding displayOptions are configured in the TimeSeriesDisplayConfig file and if the layer contains non-NaN values. The useDatum
property is not used here.

Use 'z' for the average elevation, and zBottom and zTop in case a model layer needs to be defined

zMapLayerName / zBottomMapLayerName / zTopMapLayerName

Optional definition of the elevation by reference to an ASC or BIL file stored in the MapLayerFiles-directory. This definition is only necessary
where a datum is required in for example 3-dimensional interpolation. This may be applied in for example interpolating temperature grids in high
mountain areas. Alternative uses are the display of elevation in a cross section of the Spatial Display. The bottom/top layer is only displayed if the
parameter unit is meters and if the corresponding displayOptions are configured in the TimeSeriesDisplayConfig file and if the layer contains
non-NaN values. The useDatum property is not used here.

Use 'zMapLayerName' for the average elevation, and zBottomMapLayerName and zTopMapLayerName in case a model layer needs to be
defined.
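
As an illustration, a regular grid definition following the elements described above might look like the sketch below; the locationId, grid dimensions, coordinates and cell sizes are invented and the element layout should be verified against the grids schema.

<regular locationId="radar_grid">
    <description>Illustrative 100 x 150 radar grid</description>
    <rows>100</rows>
    <columns>150</columns>
    <geoDatum>WGS 1984</geoDatum>
    <!-- centre of the top-left (Upper-Left) cell -->
    <firstCellCenter>
        <x>4.05</x>
        <y>52.95</y>
    </firstCellCenter>
    <!-- cell sizes in the unit of the geoDatum (decimal degrees for WGS 1984) -->
    <xCellSize>0.1</xCellSize>
    <yCellSize>0.1</yCellSize>
</regular>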

Irregular grids

Figure 30 Elements of the irregular grid in the Grids configuration

description

Optional description of the grid. Used only for reference purposes

rows, columns

Number of rows and columns in the grid

geoDatum

Coordinate system the grid is defined in. This may be a different coordinate system to that used in the main map display. The coordinate system
may also differ per grid. A grid may be regular in one coordinate system, but may be irregular in another. Defining the grid in the regular
coordinate system is generally easier.

cellCentre

Definition of the cell centre points of all cells in the irregular grid. The number of cellCentre points defined must be the same as the number of
cells in the grid (rows x columns).

cellCentre: x, cellCentre: y

Geographic coordinates of the cell centre point (x: Easting, y: Northing)

cellCentre: z

Elevation of the cell centre point.
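
A corresponding illustrative sketch for an irregular grid is given below; all values are invented, and the number of cellCentre entries must equal rows x columns.

<irregular locationId="curvilinear_grid">
    <description>Illustrative 2 x 2 curvilinear grid</description>
    <rows>2</rows>
    <columns>2</columns>
    <geoDatum>WGS 1984</geoDatum>
    <!-- one cellCentre per cell, four in total for this 2 x 2 grid -->
    <cellCentre><x>4.00</x><y>52.00</y></cellCentre>
    <cellCentre><x>4.10</x><y>52.01</y></cellCentre>
    <cellCentre><x>4.01</x><y>52.10</y></cellCentre>
    <cellCentre><x>4.11</x><y>52.11</y></cellCentre>
</irregular>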

07 Filters
What [Link]

Required yes

Description Definition of filters in the main map display

schema location [Link]

Filters are used in DELFT-FEWS to define the locations that are displayed on the main map display, and that can be selected to display data.
Filters are defined to arrange locations, with associated parameters in logical groups. Each filter is defined as a collection of time series sets.

Filters may be defined as a nested structure, allowing for the definition of a hierarchical set of filters.

When available on the file system, the name of the XML file is for example: Filters 1.00 [Link]

Figure 31 Elements of the filters configuration

It is possible to explicitly define every (child) filter. This may result in too many repeating timeSeriesSet definitions. Therefore it is also possible
(since version 2009.02) to define groups of timeSeriesSets that can be used many times in the filter, optionally using constraints on the
location attributes. See also the next examples.

description

Optional description of the filter configuration. Used for reference purposes only.

defaultFilterId

Filter that is selected automatically on start up of FEWS. If not defined no filter will be selected.

filter

Definition of a filter. Multiple entries may exist where multiple filters are defined. Each filter may contain either a set of one or more time series set,
or a child filter. The child is a reference to another filter definition that again contains either a child filter or a list of time series sets. This structure
is used to construct a hierarchical tree view of filters.

Attributes:

id: Id of the filter. This ID is used in the tree view in the main display
name: optional name for the filter. For reference purposes only.
ValidationIconsVisible: This allows the user to make use of the additional validation icons available in the explorer (2009.01)

child

Reference to another filter. The child element refers to the ID of the other filter as a foreign key.

Attributes:

foreignKey: Reference to ID of another filter, that is displayed as child filter

timeSeriesSet

Definition of a time series set belonging to a filter. Multiple time series sets may be defined.

The time span defined in the time series sets in the filter has an important function. It determines the time span checked for
determining status icons (e.g. missing data) in the main map display.

Not all locations need to be included in a filter. Locations that are not defined will never be visible to the user.

The readWrite parameter defined in the time series set included in the filter will determine if the time series may be edited by the
user. If this parameter is set to read only then the time series will not support editing. This is the case for external time series
only. Simulated time series are read only by convention. Notice that editable timeSeriesSets should have synchLevel 5 to get
the edits saved into the central database.

Example (time series sets not expanded)


<filter id="Config Example">
<child foreignKey="Nested Filter"/>
</filter>
<filter id="Nested Filter">
<timeSeriesSet/>
</filter>
<filter id="Main Level Filter">
<timeSeriesSet/>
</filter>

Figure 32 Example of filter configuration, as defined in the example XML configuration above.

timeSeriesSetId & constraints


Instead of an extensive definition of all possible time series that usually will be repeated for different types of locations and various districts, it is
possible to define the timeSeriesSets once and to use them various times throughout all filters. In that case you refer to such a timeSeriesSet and
optionally add some constraints on the location attributes. Notice that location attributes can only be defined in the locationSets for locations that
are read from an ArcGIS shape DBF file. See the example below.

It is possible to define filters that may become empty: no locations comply with the constraints. These filters are not displayed. An advantage of
this approach is that all possible filters can be defined once and automatically become visible when a location complies with the constraints.
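
The sketch below illustrates the idea: a filter refers to a timeSeriesSet group by id and restricts it with a constraint on a location attribute. The timeSeriesSetId, the attribute id and the constraint element name (attributeTextEquals) are assumptions for illustration only and should be checked against the filters schema.

<filter id="North_Discharge" name="Northern district discharge">
    <timeSeriesSetId>ObservedDischarge</timeSeriesSetId>
    <!-- only locations whose DISTRICT attribute equals "North" end up in this filter -->
    <constraints>
        <attributeTextEquals id="DISTRICT" equals="North"/>
    </constraints>
</filter>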

08 ValidationRulesets
What [Link]

Required no

Description Definition of validation rule sets

schema location [Link]

Validation on extreme values


Validation on rate of change
Validation on series of same readings
Validation on Temporary Shifts
Examples of validation rules

Validation rules are defined in DELFT-FEWS to allow quality checking of all time series data (scalar time series only). Several validation criteria
may be defined per time series. All validation rules for all time series are defined in this configuration. For each time series to be checked, a set of
validation rules is defined. Defining validation rules to apply to a time series set using a locationSet rather than identifying series individually can
simplify the configuration greatly. Most validation rules may be defined either as a constant value, or as a value valid per calendar month.

When available on the file system, the name of the XML file for configuring the Validation Rule Sets is for example:

ValidationRuleSets 1.00 [Link]

ValidationRuleSets Fixed file name for the Validation rules configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 33 Elements of the ValidationRuleSets configuration.

validationRuleSet

Root element of the definition of a validation rule set. Multiple entries may exist.

Attributes;

validationRuleSetId: Optional reference ID for the validation rule set. Used only in messaging.
timeZone: Shift (in hours) of the time zone to be used in considering time values in validation.

unit

Specify when the unit given for the values is not the same as the (internally stored) unit of the parameter it applies to. When specified it is required
to also specify configUnitConversionsId in [Link]. In these unit conversions the conversion from the specified unit to the (internal) unit
should be available.

timeSeriesSet

Definition of the time series to apply validation rule to.

extremeValues

Validation rules defined to check for extreme values (hard and soft limits)

rateOfChange

Validation rules defined to check rate of change. Please note the units are per second, i.e. 2 m in 15 minutes is 0.00222 m/s.

sameReading

Validation rules defined to check for series of same readings.

temporaryShift

Validation rules defined to check for temporary shifts in time series.

extremeValuesFunctions, rateOfChangeFunctions, sameReadingFunctions, temporaryShiftFunctions

The function equivalents of these values relate to Shape-DBF file configuration; see the section Locations and attributes defined in Shape-DBF files.

Validation on extreme values

This group of validation rules checks that the values in the time series do not exceed minimum and maximum limits. These limits may be defined
as soft limits or as hard limits. Values exceeding soft limits will be marked as doubtful but retained. Values exceeding hard limits will be marked as
unreliable.

Figure 34 Elements of the Extreme values configuration of the ValidationRuleSets.

hardMax

Validation rule for checking hard maximum. Values exceeding this limit will be marked as unreliable.

Attributes;

constantValue: Value of hardMax limit, used irrespective of time of value.

hardMin

Validation rule for checking hard minimum. Values below this limit will be marked as unreliable.

Attributes;

constantValue: Value of hardMin limit, used irrespective of time of value.

softMax

Validation rule for checking soft maximum. Values exceeding this limit will be marked as doubtful.

Attributes;

constantValue: Value of softMax limit, used irrespective of time of value.

softMin

Validation rule for checking soft minimum. Values below this limit will be marked as doubtful.

Attributes;

constantValue: Value of softMin limit, used irrespective of time of value.

monthLimit

Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.
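
For illustration, an extreme values rule could be sketched as follows; the limits are invented and the timeSeriesSet content is not expanded.

<validationRuleSet validationRuleSetId="WaterLevelGauges">
    <timeSeriesSet>
        <!-- time series (set) the rules apply to, not expanded here -->
    </timeSeriesSet>
    <extremeValues>
        <hardMax constantValue="10.0"/>
        <softMax constantValue="8.5"/>
        <softMin constantValue="-0.5"/>
        <hardMin constantValue="-1.0"/>
    </extremeValues>
</validationRuleSet>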

Validation on rate of change

This group of validation rules checks that the values in the time series do not exceed maximum rates of change. When the rate of change limit is
exceeded, the values causing the limit to be exceeded will be marked as unreliable. Rate of change limits may be defined to be the same for the
rate of rise as for the rate of fall. These may also be defined to be different. The rates need to be specified per second, in the unit of the time
series they apply to. E.g. if you define a rate of change for a water level gauge with values in metres, the rate should be given in metres per second.

Figure 35 Elements of the rate of change configuration of the ValidationRuleSets.

rateofRiseFallDifferent

Root element used if the rate of rise limit is defined differently from the rate of fall limit.

rateOfRise

Validation rule defined for the rate of rise.

Attributes;

constantValue: Maximum rate of rise, used irrespective of date of the value. [unitofinput/s]

rateOfFall

Validation rule defined for the rate of fall.

Attributes;

constantValue: Maximum rate of fall, used irrespective of date of the value. [unitofinput/s]

monthLimit

Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.

Validation on series of same readings

Time series of data can be validated on series of same readings. Long runs of identical readings may be unlikely for field observations, and may indicate an instrumental
error. In some cases a small variability may still be observed, despite the instrumental error. The same readings check allows for defining a bandwidth
within which the value is considered to be the same.

Figure 36 Elements of the same reading configuration of the ValidationRuleSets.

sameReadingDeviation

Root element for definition of the bandwidth within which the value may vary while still being considered the same reading. The bandwidth is twice the deviation.

Attributes;

constantValue: Value for deviation, used irrespective of date of the value.

sameReadingPeriod

Root element for definition of the time span limit for which the value may remain the same and still be considered realistic. If the reading remains the same for a
longer period of time, ensuing values will be considered unreliable.

Attributes;

constantValue: Value for time span in seconds, used irrespective of date of the value.

monthLimit

Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.

Validation on Temporary Shifts

Time series of data can be validated on temporary shifts. These occur when instruments are reset, and can be identified by the values rapidly
falling to a constant value, remaining at that value for a short period of time and then returning to the original value range. A complex set of
validation criteria includes the rate of change as well as a maximum time the value remains the same.

Figure 37 Elements of the temporary shift configuration of the ValidationRuleSets.

rateOfTemporaryShift

Rate of change that must be exceeded both on change to shifted value and change back to original value range for validation rule to apply.

Attributes;

constantValue: Value for rate of change, used irrespective of date of the value.

temporaryShiftPeriod

Maximum time span constant shifted value is in time series for validation rule to apply.

Attributes;

constantValue: Value for time span in seconds, used irrespective of date of the value.

monthLimit

Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.

Examples of validation rules

Example for "Rate of Rise" and "Temporary Shift" validation rules

09 Thresholds
DELFT-FEWS supports checking of time series against thresholds. When thresholds are crossed, appropriate messages may be issued.
Definition of thresholds is in two parts. In the first part of the configuration, the types of threshold used are defined. In the second, the values for
threshold valid for a particular location and time series are defined. In this section the configuration for the definition of the thresholds is defined.
DELFT-FEWS supports different types of threshold events. These include crossing of level and rate thresholds. The occurrence of a peak is also
seen as a threshold event.

For each threshold defined, two additional items need to be configured. Internally DELFT-FEWS maintains threshold events as a non-equidistant
time series, where the crossings are identified by an integer. For each threshold two unique integer Id's need to be assigned. One ID is used to
identify the upcrossing of the threshold, the other Id is assigned to identify the downcrossing. The exception to this is the peak threshold where
only a single Id needs to be assigned to identify the occurrence of the peak event. Note: in the new thresholds configuration approach
(thresholdGroups) these ids are optional and will be generated when not specified in configuration.

Similar to the Id's used for upcrossings and downcrossing, a warning level integer can be assigned to threshold crossings. This warning level is
resolved to either an icon (for display in the main FEWS GUI), or a colour (for use in reports). Warning levels need not be unique. These levels
are used only for level thresholds.

Configuration

When available on the file system, the name of the XML file for configuring the types of thresholds is for example:

Thresholds 1.00 [Link]

Thresholds Fixed file name for the Thresholds configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

levelThreshold

Root element for definition of a level threshold. Multiple entries may exist.

Attributes;

- id: Unique Id of level threshold.

- name: Name for the level threshold.

rateThreshold

Root element for definition of a rate threshold. Multiple entries may exist.

Attributes;

- id: Unique Id of rate threshold.

- name: Name for the rate threshold.

maxThreshold

Root element for definition of a peak event threshold. Multiple entries may exist.

Attributes;

- id: Unique Id of max threshold.

- name: Name for the max threshold.

Figure 38 Elements of the Threshold configuration

upWarningLevel

Integer level used in determining icon (through ValueAttributesMap) on up-crossing of threshold (level thresholds only).

downWarningLevel

Integer level used in determining icon (through ValueAttributesMap) on down-crossing of threshold (level thresholds only).

upIntId

Unique integer level defined in threshold crossing time series (internal) on up-crossing of threshold.

downIntId

Unique integer level defined in threshold crossing time series (internal) on down-crossing of threshold.

intId

Unique integer level defined in threshold crossing time series (internal) on occurrence of peak event.
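
As an illustration, a level threshold and a peak event threshold could be defined as sketched below; the ids, warning levels and integer ids are invented, and whether these items appear as elements or attributes should be verified against the thresholds schema.

<levelThreshold id="FloodWarning" name="Flood warning level">
    <upWarningLevel>3</upWarningLevel>
    <downWarningLevel>1</downWarningLevel>
    <!-- unique integer ids written to the internal threshold crossing time series -->
    <upIntId>301</upIntId>
    <downIntId>302</downIntId>
</levelThreshold>
<maxThreshold id="PeakEvent" name="Peak event">
    <intId>401</intId>
</maxThreshold>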

Some notes that explain how Delft-FEWS handles thresholds

Each thresholdValue links a level (e.g. 3.28 meters) to a threshold (e.g. "top of dike"). Each threshold (e.g. "top of dike") links a crossing direction
(up or down) to a warning level (e.g. "Flood Alarm"). Each warning level corresponds to a unique integer that is called the severity of the warning
level. Also see the figure below.

Definitions

If a threshold only has an upWarningLevel or has upWarningLevelSeverity > downWarningLevelSeverity, then the threshold is called an
"upCrossing threshold". This means that the threshold activates its upWarningLevel when there are data values above it (e.g. flood
warning).
If a threshold only has a downWarningLevel or has downWarningLevelSeverity > upWarningLevelSeverity, then the threshold is called a
"downCrossing threshold". This means that the threshold activates a warning when there are data values below it (e.g. drought warning).
If a threshold has upWarningLevelSeverity = downWarningLevelSeverity, then the threshold is called both an "upCrossing threshold" and
a "downCrossing threshold". This means that the threshold activates its upWarningLevel if there is data above it and/or below it. It does
not make sense to have upWarningLevelSeverity = downWarningLevelSeverity, but this is possible in the old thresholds configuration
(not in the new improved thresholds configuration).

A thresholdValue with an upCrossing threshold has been crossed when there are data values above or equal to its value.
A thresholdValue with a downCrossing threshold has been crossed when there are data values below or equal to its value.
A thresholdValue with a threshold that is both upCrossing and downCrossing has been crossed when there are data values above, below
or equal to its value, i.e. always.

Determination of most severe activated warning level

The most severe activated warning level is used for the warning icons and colours in the user interface and in the reports. Delft-FEWS takes the
following steps to determine the most severe activated warning level for a given time series (the threshold log events are generated in a different
but similar way).

1. First Delft-FEWS finds the thresholdValueSet (V) that corresponds to the given time series. If there is no thresholdValueSet defined that
corresponds to the given time series, then no warning levels are activated, i.e. "All clear".
2. For the given time series only the data within a given time period is used. The TimeSeriesDialog and DataEditor use the period that is
currently visible in the chart. The explorer user interface uses the relativeViewPeriod defined for the timeSeriesSet in the Filters
configuration file. The ThresholdEventCrossingModule uses the relativeViewPeriod defined for the timeSeriesSet in the
ThresholdValueSets configuration file. The ThresholdOverviewDisplay uses the configured aggregationTimeStep or relativePeriod in the
ThresholdOverviewDisplay configuration file. Please note that in the ThresholdOverviewDisplay and in the Reports the data is read using
the timeSeriesSets configured in the inputVariables. Therefore the relativeViewPeriods defined for the timeSeriesSets of the
inputVariables must include the relativePeriod for which the most severe activated warning level has to be determined. Otherwise not all
of the required data is read.
3. If the given time series contains only missing values, then no warning levels are activated, i.e. "All clear".
4. For each data value separately, Delft-FEWS considers each levelThresholdValue in V and determines if it has been crossed for the given
data value (see above for definitions of crossed). Each levelThresholdValue that has been crossed, activates its corresponding warning
level. From all the warning levels that are activated for the given data value, the most severe warning level is chosen. This is repeated for
each data value within the given time period. From the resulting warning levels for the individual data values, the most severe warning
level is chosen.

10 ThresholdValueSets
What [Link]

Required no

Description definition of threshold values for all locations and data types

schema location [Link]

Complementary to the definition of the types of thresholds identified, the values of the thresholds are defined in the ThresholdValueSets
configuration. The configuration of this is similar to the validation rules. Several thresholds may be defined per time series. For each time series to
be tested, a set of thresholds is defined.

Thresholds may be defined to initiate an action by the Master Controller when applied in a live forecasting system. Actions are taken in response
to a log event code. To identify which threshold crossing for which locations will initiate an action (e.g. enhanced forecasting), an event code can
be defined in the ThresholdValueSet. When the threshold is crossed the event code is generated.

When available on the file system, the name of the XML file for configuring the ThresholdValueSets is for example:

ThresholdValueSets 1.00 [Link]

ThresholdValueSets Fixed file name for the ThresholdValueSets configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 39 Elements of the ThresholdValueSets configuration.

thresholdValueSet

Root element for defining a set of thresholds. For each time series or time series set for which a threshold event is to be tested, a new element is
required.

Attributes;

Id: Id of the thresholdValueSet defined.


Name: optional name, for reference purposes only

description

Optional description for the ThresholdValueSet. Used for reference purposes only

unit

Specify when the unit given for the values is not the same as the (internally stored) unit of the parameter it applies to. When specified it is required
to also specify configUnitConversionsId in [Link]. In those unit conversions the conversion from the specified unit to the (internal) unit
should be available

levelThresholdValue

Definition of values for level thresholds.

rateThresholdValue

Definition of values for rate thresholds.

maxThreshold

Definition of values for peak event thresholds.

forecastAvailableThresholdValue

If a threshold crossing event is measured for a given observed parameter, then the thresholdEventCrossing module logs whether or not there is a
forecast run available for the corresponding forecast parameter, within a given relative time period. This information is used in the
ThresholdSkillScoreDisplay.

timeSeriesSet

Definition of the time series set for which the thresholds are to be tested.

ratingCurve

Convert this level threshold value to a discharge threshold value using the rating curve defined here.

Defining level thresholds

Figure 40 Elements of the Level Threshold configuration of the ThresholdValueSets configuration

levelThresholdId

Id of the level threshold. This Id must refer to a threshold type defined in the Thresholds definition (see previous paragraph).

value

Value of the threshold.

valueFunction

Function alternatives may also be used instead of the value itself (see: Location and attributes defined in Shape-DBF files).

upActionLogEventTypeId

Event code to be generated on the up-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.

downActionLogEventTypeId

Event code to be generated on the down-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.
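
For illustration, a threshold value set that assigns a value to a level threshold could be sketched as follows; the ids, the value and the event code are invented and the timeSeriesSet content is not expanded.

<thresholdValueSet id="RiverLevelThresholds">
    <levelThresholdValue>
        <levelThresholdId>FloodWarning</levelThresholdId>
        <value>3.28</value>
        <upActionLogEventTypeId>Fluvial.Warning</upActionLogEventTypeId>
    </levelThresholdValue>
    <timeSeriesSet>
        <!-- time series set for which the thresholds are tested, not expanded here -->
    </timeSeriesSet>
</thresholdValueSet>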

Defining rate thresholds

Figure 41 Elements of the Rate Threshold configuration of the ThresholdValueSets configuration

rateThresholdId

Id of the rate threshold. This Id must refer to a threshold type defined in the Thresholds definition (see previous paragraph).

value

Value of the rate threshold that must be exceeded in timeSpan.

timeSpan

Time span to use to establish the rate.

rainRate

Boolean indicator to distinguish thresholds on rain rates, where the threshold is defined as the average rain rate over the timeSpan exceeding the
threshold value, from rates on for example a level, where the rate is determined as the value divided by the time span.

upActionLogEventTypeId

Event code to be generated on the up-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.

downActionLogEventTypeId

Event code to be generated on the down-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.

Defining peak event thresholds

Figure 42 Elements of the maxThreshold configuration of the ThresholdValueSets configuration

maxThresholdId

Id of the max threshold. This Id must refer to a threshold type defined in the Thresholds definition (see previous paragraph).

value

The value item is used here as a selection of peaks. The peak must exceed this value to be deemed significant (peaks over threshold).

timeSpan

The timeSpan is used to establish independence of peaks. Peaks within timeSpan of each other are considered as being of the same event, and a
message will only be issued for the highest.

actionLogEventTypeId

Event code to be generated on the threshold occurring. This event code can be used to initiate for example enhanced forecasting. The event code
need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for runs which
have an a-priori approved status. This is normally the scheduled forecast run.

11 ColdModuleInstanceStateGroups
What [Link]

Required no

Description Definition of groups of cold module states

schema location [Link]

Many forecasting models use an initial state as initial condition. When used in real time, DELFT-FEWS can be used to manage these states, such
that models are run from a warm state. Long run times in initialising models are thus avoided.

When no warm state is available a cold state will be used. Additionally the user may explicitly select the cold state to be used as model initial
condition.

A default initial condition must be available for models requiring state management. Additional groups of cold module states may also be defined.
These can be selected in for example scenario runs. While a default state is required for every model, additional states need only be defined
where available. When the state indicated is not found for a particular module, DELFT-FEWS will revert to the default state. Where it is found, it will be
used as selected.

When available on the file system, the name of the XML file for configuring the ColdModuleInstanceStateGroups is for example:

ColdModuleInstanceStateGroups 1.00 [Link]

ColdModuleInstanceStateGroups
Fixed file name for the ColdModuleInstanceStateGroups configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 43 Elements of the ColdModuleInstanceStateGroups configuration

defaultGroup

Definition of the default group of module states. This is a required item, and only a single definition is allowed.

Attributes;

id: Id of the state group (e.g. Default)

name: name of the state group.

additionalGroup

Definition of the additional group of module states. One or more items may exist.

Attributes;

id: id of the state group (e.g. Wet)


name: name of the state group.

description

Optional description of the state group. Used for reference purposes only.

The name of the ZIP file containing the state follows a strict convention. This name is constructed using the moduleId of the
module using this cold state and writing the warm state, appended by the Id of the state group.

e.g ISIS_Eden_Historical [Link]
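
An illustrative sketch of this configuration is given below; the group ids, names and descriptions are invented.

<defaultGroup id="Default" name="Default cold states">
    <description>Average initial conditions</description>
</defaultGroup>
<additionalGroup id="Wet" name="Wet cold states">
    <description>Initial conditions for wet catchment conditions</description>
</additionalGroup>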

12 ModuleInstanceDescriptors
What [Link]

Required yes

Description Definition of instances of modules

schema location [Link]

Each module configured in DELFT-FEWS must be registered in the ModuleInstanceDescriptors configuration. This is required to identify the
module to DELFT-FEWS (the name is free format), but is also required to define the type of module through reference to the moduleDescriptors
defined (see system configuration).

When available on the file system, the name of the XML file for configuring the ModuleInstanceDescriptors is for example:

ModuleInstanceDescriptors 1.00 [Link]

ModuleInstanceDescriptors Fixed file name for the ModuleInstanceDescriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 44 Root element of the ModuleInstanceDescriptors configuration

moduleInstanceDescriptor

Root element of the ModuleInstanceDescriptor element. For each module defined the element is repeated. Multiple instances may exist.

Attributes;

Id: Id of the Module Instance. This Id must be unique. Normally a string is used that gives some understanding of the role of the
module (e.g. SpatialInterpolationPrecipitation).

name: Optional name for the module. Used for reference purposes only.

moduleId

Reference to the ModuleDescriptors defined in the SystemConfiguration to identify the type of module.

description

Optional description. Used for reference purposes only.
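
An illustrative sketch of a single entry is given below; the ids are invented (the moduleId must refer to a module descriptor registered in the system configuration) and the exact spelling of the root element should be checked against the schema.

<moduleInstanceDescriptor id="SpatialInterpolationPrecipitation">
    <moduleId>Interpolation</moduleId>
    <description>Spatial interpolation of observed precipitation</description>
</moduleInstanceDescriptor>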

13 WorkflowDescriptors
What [Link]

Required yes

Description Definition of workflows

schema location [Link]

Each workflow configured in DELFT-FEWS must be registered in the WorkflowDescriptors configuration. This is required to identify the workflow
to DELFT-FEWS (the format of the name is free). The configuration also sets some properties of the workflow.

When available on the file system, the name of the XML file for configuring the WorkflowDescriptors is for example:

WorkflowDescriptors 1.00 [Link]

WorkflowDescriptors Fixed file name for the WorkflowDescriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 45 Elements of the workflowDescriptor configuration.

workflowDescriptor

Root element of the WorkflowDescriptor. New element is required for each workflow. Multiple instances may be defined.

Attributes;

Id: Id of the workflow. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (e.g.
ImportExternal).
name: Optional name for the module. Used for reference purposes only.
visible: Boolean toggle to indicate if workflow is visible for selection in the manual forecast display. Non-independent workflows (e.g.
sub-workflows) should not be marked visible so that these cannot be run independently. Default is true.

forecast: Boolean flag to indicate if the workflow is identified as a forecast. This should be the case for workflows with simulated time series
as a result. Import workflows of external data are not forecasts. Default is true.
allowApprove: Boolean flag to indicate if workflow may be approved a-priori through manual forecast display (stand-alone only). Default is
true.
autoApprove: Boolean flag to indicate workflow should automatically be approved a-priori (stand-alone only). Default is false.
autoSetSystemTime: Boolean flag to indicate workflow should automatically adjust the system time. When the workflow is completed and
is fully or partly successful, the system time will be set to the start time of the period written by this workflow.
If the start time is not a valid time in accordance with the cardinal timestep, the next valid time will be used.
Default flag value is false. Applicable only on stand-alone.
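
An illustrative sketch of two workflow descriptor entries is given below; the workflow ids are invented and the exact attribute names, in particular the capitalisation of forecast, should be checked against the schema.

<workflowDescriptor id="ImportExternal" name="Import external data" forecast="false" visible="true"/>
<workflowDescriptor id="Fluvial_Forecast" name="Fluvial forecast" forecast="true" autoApprove="false"/>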

14 IdMapDescriptors
What [Link]

Required no

Description Definition of ID maps used for mapping external parameters and ID's to DELFT-FEWS locations and parameters

schema location [Link]

Each IdMap to support mapping external to internal location and parameter Id's configured in DELFT-FEWS can be registered in the
IdMapDescriptors configuration.

From Delft-FEWS version 2008.03 it is no longer required to identify the IdMap to DELFT-FEWS. If this IdMapDescriptor file does NOT
exist or the individual reference to an IdMap is not present, the system will assume the corresponding IdMap file exists within the
IdMapFiles directory. This functionality is similar for UnitConversion(Descriptors), FlagConversion(Descriptors) and
TravelTimes(Descriptors).

When available on the file system or in the (central) database, the content should be in line with the available IdMap files to prevent errors. When
available, the name of the XML file for configuring the IdMapDescriptors is for example:

IdMapDescriptors 1.00 [Link]

IdMapDescriptors Fixed file name for the IdMapDescriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 46 Elements of the IdMapDescriptors configuration.

IdMapDescriptor

Root element of the IdMapDescriptor. New element is required for each IdMap. Multiple instances may be defined.

Attributes;

Id: Id of the idMap. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (e.g.
ImportRTS).
name: Optional name for the IdMap. Used for reference purposes only.

15 FlagConversionsDescriptors
What [Link]

Required no

Description Definition of Flag conversions used for mapping external data quality flags to DELFT-FEWS data quality flags

schema location [Link]

Each FlagConversion to support mapping external to internal data quality flags configured in DELFT-FEWS can be registered in the
FlagConversionDescriptors configuration.

From Delft-FEWS version 2008.03 it is no longer required to identify the FlagConversion to DELFT-FEWS. If this
FlagConversionDescriptors file does NOT exist or the individual reference to a FlagConversion is not present, the system will assume
the corresponding FlagConversion file exists within the FlagConversionsFiles directory. This functionality is similar for
IdMap(Descriptors), UnitConversion(Descriptors) and TravelTimes(Descriptors).

When available on the file system, the name of the XML file for configuring the FlagConversions is for example:

FlagConversions 1.00 [Link]

FlagConversions Fixed file name for the FlagConversions configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 47 Elements of the FlagConversions configuration.

FlagConversion

Root element of the FlagConversion. New element is required for each FlagConversion. Multiple instances may be defined.

Attributes;

Id: Id of the FlagConversion. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (
e.g. ImportRTS).
name: Optional name for the FlagConversion. Used for reference purposes only.

16 UnitConversionsDescriptors
What UnitConversionsDescriptors.xml

Required no

Description Definition of unit conversions used for mapping external units to DELFT-FEWS units

schema location [Link]

Each UnitConversion to support mapping external to internal units configured in DELFT-FEWS can be registered in the UnitConversionsDescriptors
configuration.

From Delft-FEWS version 2008.03 it is no longer required to identify the UnitConversion to DELFT-FEWS. If this
UnitConversionsDescriptors file does NOT exist or the individual reference to a UnitConversion is not present, the system will assume
the corresponding UnitConversion file exists within the UnitConversionsFiles directory. This functionality is similar for
IdMap(Descriptors), FlagConversion(Descriptors) and TravelTimes(Descriptors).

When available on the file system, the name of the XML file for configuring the UnitConversionDescriptors is for example:

UnitConversionDescriptors 1.00 [Link]

UnitConversionDescriptors Fixed file name for the UnitConversionDescriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 48 Elements of the UnitConversions configuration.

UnitConversionDescriptor

Root element of the UnitConversionDescriptor. New element is required for each UnitConversion identified. Multiple instances may be defined.

Attributes;

Id: Id of the UnitConversion. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (
e.g. ImportRTS).
name: Optional name for the UnitConversion. Used for reference purposes only.

17 CorrelationEventSetsDescriptors
What [Link]

Required no

Description Definition of sets of correlation events (used by correlation module only)

schema location [Link]

The correlation module in DELFT-FEWS allows forecasts for a downstream location to be established using a correlation of peak events for the
forecast site and one or more support sites. For each river multiple correlations between several sites on the river may be defined. Correlation
sets can be defined to allow ordering these into logical groups. This configuration file defines the groups for which correlation event data
will be later defined.

When available on the file system, the name of the XML file for configuring the CorrelationEventSetsDescriptors is for example:

CorrelationEventSetsDescriptors 1.00 [Link]

CorrelationEventSetsDescriptors
Fixed file name for the CorrelationEventSetsDescriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 49 Elements of the CorrelationEventSetsDescriptors configuration

CorrelationEventSetsDescriptor

Root element of the CorrelationEventSetsDescriptor. New element is required for each CorrelationEventSet identified. Multiple instances may be
defined.

Attributes;

Id: Id of the CorrelationEventSet. This Id must be unique. Normally a string is used that gives some understanding of the group created (
[Link]).
name: Optional name for the CorrelationEventSet. Used for reference purposes only.

18 TravelTimesDescriptors
What [Link]

Required no

Description Definition of sets of travel times for correlation events (used by correlation module only)

schema location [Link]

The correlation module in DELFT-FEWS allows forecasts for a downstream location to be established using a correlation of peak events for the
forecast site and one or more support sites. For each river multiple correlations between several sites on the river may be defined. Together with
the correlation establishing a forecast value, an estimate of travel time between the locations can be given. This is given either as a default travel
time, or it is established through regression of the events considered. An estimate of the travel time is also used to establish which events in the
upstream and downstream location are paired.

From Delft-FEWS version 2008.03 it is no longer required to identify the TravelTimes to DELFT-FEWS. If this TravelTimesDescriptor file
does NOT exist or the individual reference to an TravelTime is not present, the system will assume the corresponding TravelTimes file
exist within the TravelTimesFiles directory. This functionality is similar for UnitConversion(Descriptors), FlagConversion(Descriptors)
and IdMap(Descriptors).

Correlation sets can be defined to allow ordering these into logical groups. This configuration file is similar to the CorrelationEventSets and
defines the groups for which travel time data will be later defined.

When available on the file system, the name of the XML file for configuring the TravelTimesDescriptors is for example:

TravelTimesDescriptors 1.00 [Link]

TravelTimesDescriptors
Fixed file name for the TravelTimesDescriptors configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 50 Elements of the TravelTimesDescriptors configuration

TravelTimesDescriptors

Root element of the TravelTimesDescriptor. New element is required for each TravelTimes set identified. Multiple instances may be defined.

Attributes;

Id: Id of the TravelTimes set. This Id must be unique. Normally a string is used that gives some understanding of the group created (
[Link]).
name: Optional name for the TravelTimes Set. Used for reference purposes only.

19 TimeUnits
What [Link]

Required no

Description Definition of time units supported by the system (used for mapping external time units to internal time units)

schema location [Link]

External data sources to be imported in DELFT-FEWS may provide data at an equidistant time step. The time unit is often defined as a
string, and must be resolved on import to a time unit recognised by DELFT-FEWS. The mapping of time units is defined in the TimeUnits
configuration.

When available on the file system, the name of the XML file for configuring the TimeUnits is for example:

TimeUnits 1.00 [Link]

TimeUnits Fixed file name for the TimeUnits configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 51 Elements of the TimeUnits configuration

timeUnit

Root element for each external time unit identified. Multiple entries may exist.

Unit

String value for unit as identified in external data source

milliseconds

Equivalent of time unit in milliseconds (base unit in DELFT-FEWS). By convention 0 milliseconds is a non-equidistant time unit. -1 indicates that
the unit is not supported. This is the case for time units such as months, years etc.
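
For illustration, a mapping of two external time unit strings could be sketched as follows; the external unit strings are invented examples.

<timeUnit>
    <unit>hr</unit>
    <!-- one hour expressed in milliseconds -->
    <milliseconds>3600000</milliseconds>
</timeUnit>
<timeUnit>
    <unit>irregular</unit>
    <!-- 0 milliseconds denotes a non-equidistant time unit by convention -->
    <milliseconds>0</milliseconds>
</timeUnit>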

20 Historical Events
What [Link]

Required no

Description Definition of historical events to be plotted against real time forecast data for reference purposes

schema location [Link]

DELFT-FEWS allows a set of historical events to be defined that can be retrieved when looking at forecast data through the time series display.
These events can then be displayed in the same plot as the real-time data for reference purposes.

Historical events are configured as a time series referenced to a location/parameter. When that location/parameter is displayed in the time series
display, a drop down list of the events available for that specific combination is displayed. Selected events are displayed in the same sub-plot as
the real time data for that location parameter.

This configuration is optional. If not available no historical events will be displayed.

When available on the file system, the name of the XML file for configuring the HistoricalEvents is:

HistoricalEvents 1.00 [Link]

HistoricalEvents: Fixed file name for the HistoricalEvents configuration (this can now be split into multiple files with different postfixes, e.g.
HistoricalEvents_Northern.xml, HistoricalEvents_West.xml)
1.00: Version number
default: Flag to indicate the version is the default configuration (otherwise omitted).

Figure 52 Elements of the HistoricalEvents configuration.

historicalEvent

Root definition of an historical event. Multiple events may be defined.

Attributes:

locationId: Id of the location of the event (see definition of DELFT-FEWS locations).


parameterId: Id of the parameter in the event (see definition of DELFT-FEWS parameters).
name: name of the historical event. This name will be available as a list of historical events.

Alternatively locationId, parameterId and eventData can be left out and replaced with historicalEventSets.

eventData

Time series data for the event. This follows the same definition as the inputVariable detailed in the Transformation Module configuration. The typical
profile option is used for defining an historical event.

Attributes:

variableId: ID of the variable (group). Later used in referencing the variable.


variableType: Optional type definition of variable (defaults to "any")
convertDatum: Optional Boolean flag to indicate if datum is to be converted.

timeStep

Time step for typical profile if variable to be defined for the historical event.

Attributes:

unit (enumeration of: second, minute, hour, day, week, nonequidistant)


multiplier defines the number of units given above in a time step (not relevant for nonequidistant time steps)
divider same function as the multiplier, but defines fraction of units in time step.

relativeViewPeriod

Relative view period of the event. This is the time span of the event. The start and end information will be used when initially plotting the event to
determine its position against the display time at the time of display

data

Data entered to define the event. Data is entered using the dateTime attribute only, with the specific date and time values given for each data point.
Other attributes available for defining typical profiles are not used.

Attributes:

dateTime: Attribute value indicating the value entered is valid for a specific date and time combination. The string has the format
"<year>-<month>-<day>T<hour>:<minute>:<second>". For example the 31st of December 1984 is "1984-12-31T[Link]".

timeZone

Optional specification of the time zone for the data entered (see timeZone specification).

timeZone:timeZoneOffset

The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)

timeZone:timeZoneName

Enumeration of supported time zones. See appendix B for list of supported time zones.

Example of an Historic event without using historical event sets:

<!-- Example historic event -->


<historicalEvent name="04-07 January 1999" locationId="[Link].765512" parameterId="[Link]">
<eventData>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24"/>
<data dateTime="1999-01-04T[Link]" value="2.196"/>
<data dateTime="1999-01-04T[Link]" value="2.199"/>
<data dateTime="1999-01-04T[Link]" value="2.201"/>
<data dateTime="1999-01-04T[Link]" value="2.198"/>
<data dateTime="1999-01-04T[Link]" value="2.204"/>
<data dateTime="1999-01-04T[Link]" value="2.213"/>
<data dateTime="1999-01-04T[Link]" value="2.218"/>
<data dateTime="1999-01-04T[Link]" value="2.233"/>
<data dateTime="1999-01-04T[Link]" value="2.252"/>
...
...
...
<data dateTime="1999-01-07T[Link]" value="2.472" comment="Notified everybody to monitor this."/>
<data dateTime="1999-01-07T[Link]" value="2.462"/>
<data dateTime="1999-01-07T[Link]" value="2.453"/>
<data dateTime="1999-01-07T[Link]" value="2.444"/>
</eventData>
</historicalEvent>

Example of an Historic event when using historical event sets:

<!-- Example historic event sets -->
<historicalEventSet name="04-07 January 1999">
<historicalEvent locationId="[Link].765512" parameterId="[Link]" name="test">
<eventData>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24"/>
<data dateTime="1999-01-04T[Link]" value="2.196"/>
<data dateTime="1999-01-04T[Link]" value="2.199"/>
<data dateTime="1999-01-04T[Link]" value="2.201"/>
<data dateTime="1999-01-04T[Link]" value="2.198"/>
<data dateTime="1999-01-04T[Link]" value="2.204"/>
<data dateTime="1999-01-04T[Link]" value="2.213"/>
<data dateTime="1999-01-04T[Link]" value="2.218"/>
<data dateTime="1999-01-04T[Link]" value="2.233"/>
<data dateTime="1999-01-04T[Link]" value="2.252"/>
<data dateTime="1999-01-07T[Link]" value="2.472" comment="Notified everybody to monitor this."/>
<data dateTime="1999-01-07T[Link]" value="2.462"/>
<data dateTime="1999-01-07T[Link]" value="2.453"/>
<data dateTime="1999-01-07T[Link]" value="2.444"/>
</eventData>
</historicalEvent>
<historicalEvent locationId="[Link].765772" parameterId="[Link]" name="test2">
<eventData>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24"/>
<data dateTime="1999-01-04T[Link]" value="3.146"/>
<data dateTime="1999-01-07T[Link]" value="3.371" comment="Notified AK."/>
</eventData>
</historicalEvent>
</historicalEventSet>
</historicalEvents>

21 Value Attribute Maps


What [Link]

Required no

Description attributes to be mapped from time series values for use in report etc.

schema location [Link]

DELFT-FEWS allows attributes to be associated to values in a time series. This can be used to associate either a textual value or an icon for use
in displays or in reports. Typically the use of value attribute maps is important in forecasts derived through application of the lookup table
modules. Critical conditions are then defined which resolve a combination of inputs to a single "Lookup Index" output. This Lookup index is then
resolved either to a textual message, an icon or a colour using the value attribute maps. The same principle is used in allocating colours/icons to
thresholds, where the unique threshold index is used as an entry to the value attribute mapping.

When available on the file system, the name of the XML file for configuring ValueAttributeMaps is:

ValueAttributeMaps 1.00 [Link]

ValueAttributeMaps Fixed file name for the ValueAttributeMaps configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 53 Elements of the value attribute maps configuration

valueAttributeMap

Root element for the definition of a set of attribute values. The Id used to identify this set is later referenced in for example the report module
configuration to allow association of an attribute to a value. Multiple sets may be defined.

Attributes:

id: unique id of the set of value attributes

attributes

Root element for associating an attribute to a value. Each value may be attributed a definition (text), a colour and/or an icon.

Attributes:

value: value for which the attributes defined must be associated. Note that an exact match is required to allow the mapping to be valid.

description

Text to be attributed where this value is given in the input series. This text may be used in a report.

image

Path and filename of the icon to be attributed where this value is given in the input series. This icon may be used in for example displays as well
as in reports.

colour

Colour to be attributed where this value is given in the input series. This colour may be used, for example, for the background colouring of a table in a
report.
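
A minimal sketch of a value attribute map is shown below; the id, values, file names and colours are hypothetical, and the structure follows the
elements described above (one attributes element per value, each with an optional description, image and colour).

<valueAttributeMap id="FloodWarningLookup">
<attributes value="1">
<description>No flooding expected</description>
<image>warning_green.gif</image>
<colour>green</colour>
</attributes>
<attributes value="2">
<description>Flooding expected</description>
<image>warning_red.gif</image>
<colour>red</colour>
</attributes>
</valueAttributeMap>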

22 Locations and attributes defined in Shape-DBF files


Function: Functionality to define locations and to generate locationSets from a DBF file

Where to Use? Locations, LocationSets, IdMaps, DisplayGroups, ThresholdValueSets and ValidationRuleSets

Why to Use? To have only one file or a set of files where all region specific information is stored.

Description: Based on the DBF or shape file you can easily manage the configuration

Available since: DelftFEWS200803

Contents
Overview
Configuration
locationSets
locationIcons
locations
idMaps
displayGroups
Thresholds
ValidationRuleSets
CoefficientSetFunctions

Coefficients that depend on location and time
Coefficients with multiple values (tables)
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference

Overview

To be able to have only one file that manages all the regional information, Delft-FEWS offers the functionality to use a DBF or shape files that can
be linked to the configuration. Locations and locationSets can be automatically generated and useful information as idMaps, thresholdvalues or
validation values can be derived from these tables. It is also possible to link to one or more DBF files that contain time-dependent attributes. This
can be used to define time-dependent coefficients that can be used by the new transformation module.

Finally you have a configuration that has many links to the DBF / shape files, but that will be managed only in these files. The advantage is that
these files can simply be updated by automatic updating processes.

The functionality is based on the following principles:

you generate locations in a locationSet and define attributes to these locations that store additional information like idMapping or
validation limits.
locationSets can be generated from the DBF or from another locationSet by using conditions.
idMaps can be linked to the location text attributes.
all values in validationRuleSets can be linked to location number attributes
all threshold values can be linked to location number attributes
displayGroups can be generated automatically from locationSets
it works both for regular point locations and for grids
values in coefficientSetFunctions in a transformation config file can be linked to location number attributes

DBF file format


This functionality currently only works for DBF files in the dBASE III format. The DBF file should also use the character set
"Western Europe (ISO-8859-1)".

Configuration

locationSets

The most useful way is first to read all locations from the DBF into one locationSet, where all attributes are assigned.
See for example:

<esriShapeFile>
<file>gegevens</file>
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<id>%ID%</id>
<name>%NAAM%</name>
<description>%TYPE%</description>
<iconName>%ICONFILE%</iconName>
<parentLocationId>%PARENT%</parentLocationId>
<timeZoneOffset>+05:00</timeZoneOffset>
<dateTimePattern>yyyyMMdd HH:mm:ss</dateTimePattern>
<visibilityStartTime>%START%</visibilityStartTime>
<visibilityEndTime>%EIND%</visibilityEndTime>
<x>%X%</x>
<y>%Y%</y>
<z>0</z>
<attribute id="PARENT">
<text>%PARENT%</text>
</attribute>
<attribute id="TYPE">
<text>%TYPE%</text>
</attribute>
<attribute id="CITECTLOC">
<text>%CITECTLOC%</text>
</attribute>
<attribute id="IDMAP_Q">
<text>%DEBIET%</text>
</attribute>
<attribute id="HMIN_Q">
<number>%HMIN_Q%</number>
</attribute>
<attribute id="HMAX_Q">
<number>%HMAX_Q%</number>
</attribute>
<attribute id="ROC_Q">
<number>%ROC_Q%</number>
</attribute>
</esriShapeFile>


In the above example the visibilityStartTime and visibilityEndTime tags are used to define the columns in the DBF file that contain the start and
end dateTimes of the visibilityPeriod for each location. The (optional) visibilityPeriod is the period for which a location is visible in the user
interface. The start and the end of the period are inclusive. Currently the visibility period is used in the map (explorer) window, the time series
display and the spatial display. If startDateTime is not defined, then the location is visible for all times before endDateTime. If endDateTime is not
defined, then the location is visible for all times after startDateTime. If startDateTime and endDateTime are both not defined, then the location is
visible for all times. Furthermore the (optional) dateTimePattern tag is used to define the pattern for the dateTimes defined in the DBF file. If
dateTimePattern is not specified, then the default pattern "yyyyMMdd" is used, which is the internal format that a DBF file uses for columns of type
'D' (date columns). The (optional) timeZoneOffset is the offset of the times in the DBF file, relative to GMT. For example "+02:00" means
GMT+02:00. If no offset is specified, then time zone GMT is used by default.

Next you can derive the required locationSets from this dump by using constraints.
You can use constraints like:

attributeTextEquals
attributeTextContains
attributeTextStartsWith
idContains
attributeExists
etc (see schema or the schema diagram)

For example:

<locationSetId>gegevensdump</locationSetId>
<constraints>
<not>
<attributeTextEquals id="IDMAP_KLEP" equals=""/>
</not>
<attributeTextEquals id="TYPE" equals="Stuwen"/>
</constraints>


It is also possible in a locationSet to link to time-dependent attributes. Time-dependent attributes need to be defined in a separate DBF file. In the
locationSet use the attributeFile tag to make a reference to such a file. The following xml example has a reference to the file
[Link], which contains attributes that have different values for different periods in time, as well as different values for different
locations. In this case the startDateTime and endDateTime tags are used to define the columns in the DBF file that contain the start and end
dateTimes for each row. A given row in the DBF file contains values that are only valid between the time period for that row. This period is defined
by the optional startDateTime and endDateTime for that row. If a row has no startDateTime, then it is valid always before the endDateTime. If a
row has no endDateTime, then it is valid always after the startDateTime. If a row has no startDateTime and no endDateTime, then it is always
valid.

<esriShapeFile>
<file>PumpStations</file>
<geoDatum>WGS 1984</geoDatum>
<id>%ID%</id>
<name>%ID%</name>
<x>%X%</x>
<y>%Y%</y>
<z>0</z>
<attributeFile>
<dbfFile>PumpStationsAttributes</dbfFile>
<id>%ID%</id>
<timeZoneOffset>+05:00</timeZoneOffset>
<dateTimePattern>dd-MM-yyyy HH:mm</dateTimePattern>
<startDateTime>%START%</startDateTime>
<endDateTime>%EIND%</endDateTime>
<attribute id="speed">
<number>%FREQ%</number>
</attribute>
<attribute id="discharge">
<number>%POMPCAP%</number>
</attribute>
</attributeFile>
</esriShapeFile>


locationIcons

Since 2009.02 it is possible to define the location icon with a new option in the locationSets derived from Shape-DBF files. You can define the
location icon with the element iconName. The icon file should be defined with its complete file name, and this file should be available in the
Config\IconFiles directory. If you want to refer to Config\IconFiles\[Link], you should define the iconName as

[Link]
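
As a sketch (the file name pump.gif is hypothetical), the iconName element can either contain a fixed file name, or refer to a DBF column as in the
locationSet example above:

<iconName>pump.gif</iconName>

or

<iconName>%ICONFILE%</iconName>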

The old method of defining icons in the systemconfigfiles\[Link] is still available.

locations

The regional configuration file Locations is not needed any more, except for other locations that are not supplied in a DBF file.

idMaps

<locationIdPattern internalLocationSet="Pattern Stations" internalLocationPattern="H-*" externalLocationPattern="*"/>

or

<function externalLocationFunction="@CITECTLOC@" internalLocationSet="VV_Q.meting" internalParameter="[Link]" externalParameterFunction="@IDMAP_DEBIET@"/>

See the actual idMapping schema for all possible options.

Notice that you can use the location attributes as a function to map to the correct locations. You can create strings based on the attributes, like:

Uses the complete attribute value:

externalParameterFunction="@IDMAP_DEBIET@"

Uses two concatenated attribute values:

externalParameterFunction="@CITECTLOC@_@IDMAP_DEBIET@"

Uses an attribute value concatenated with a fixed string:

externalParameterFunction="@CITECTLOC@_DEBIET"

displayGroups

See all available options in the actual schema. The options that are useful in combination with the DBF configuration are explained here. Both options
automatically generate the list of the locations in the shortcut trees. The list of locations is ordered alphabetically.

singleLocationDisplays
Adds multiple displays at once to this display group. Every display will show only one location.
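
A minimal sketch is shown below; the element names are assumed by analogy with the singleParentLocationDisplays example further down, and the
plotId value is hypothetical.

<singleLocationDisplays>
<locationSetId>VV_Q.meting</locationSetId>
<plotId>waterlevels</plotId>
</singleLocationDisplays>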

singleParentLocationDisplays
Adds multiple displays at once to this display group. Every display will show only the child locations of one parent location, and the parent location
itself when specified in the time series sets.

<singleParentLocationDisplays>
<locationSetId>VV_P.[Link]</locationSetId>
<locationSetId>VV_P.meting</locationSetId>
<parentLocationSetId>VV_P.[Link]</parentLocationSetId>
<parentLocationSetId>VV_P.meting</parentLocationSetId>
<plotId>meteo</plotId>
</singleParentLocationDisplays>


Thresholds

You can now use ...Function alternatives for all the values, for example:

<levelThresholdId>LevelWarn</levelThresholdId>
<description>.....</description>
<valueFunction>@SOFT_MAX@</valueFunction>
<upActionLogEventTypeId>TE.571</upActionLogEventTypeId>


ValidationRuleSets

You can now use ...Function alternatives for all the values, like:

extremeValuesFunctions
sameReadingFunctions
etc...

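As an illustration only, the sketch below links the validation limits to the HMIN_Q and HMAX_Q number attributes defined in the locationSet example
earlier. The element names inside extremeValuesFunctions are assumptions and should be checked against the actual ValidationRuleSets schema.

<extremeValuesFunctions>
<hardMax>@HMAX_Q@</hardMax>
<hardMin>@HMIN_Q@</hardMin>
</extremeValuesFunctions>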

CoefficientSetFunctions

In the new transformation module it is possible to define transformations with embedded coefficientSetFunctions in a transformation config file.
For a given transformation, e.g. StructurePumpFixedDischarge, there is a choice between a coefficientSetFunctions object and a coefficientSet
object. The coefficientSetFunctions object is the same as its corresponding coefficientSet counterpart, only all elements with a value are replaced
by elements with a function. A function is a function expression that can refer to location attributes, e.g. "@discharge@ / 60". See the following
xml example.

<structure>
<pumpFixedDischarge>
...
<coefficientSetFunctions>
<discharge>@discharge@ / 1000</discharge>
</coefficientSetFunctions>
...
</pumpFixedDischarge>
</structure>


CoefficientSetFunctions are currently (as of build 30246) supported for the following transformations: userSimple, stageDischargePower,
dischargeStagePower, filterLowPass and all structure transformations. See the pages of those specific transformations for configuration
examples.

different types of attributes


When using coefficientSetFunctions, note that the elements can have different types, e.g. float, boolean, enumeration. For each
coefficientSetFunctions type see the schema definition of its corresponding coefficientSet counterpart for the types of the
different elements. The type of an attribute as defined in the locationSets configuration file must match the type of the element in
which the attribute is used.

For elements of type float (e.g. userSimple coefficient value) the attribute should be defined as a number attribute in
the locationSets configuration file as follows:

<number>%COEF_A%</number>


For elements of type string, boolean (e.g. structureCrumpWeir energyHeadCorrection) or enumeration (e.g.
stageDischarge type) the attribute should be defined as a text attribute in the locationSets configuration file as follows:

<text>%EHCORR%</text>


Coefficients that depend on location and time

A coefficientSetFunction can be very useful when using coefficients that depend on location and/or time. In that case the coefficientSetFunction
needs to be defined only once with a link to the correct attributes. The attributes are defined in a DBF file. Then a transformation run will use the
coefficientSetFunction to create coefficientSets for each location and time-period by taking the required values from the attributes from the DBF
file automatically.

time-dependent attributes
If several attributes are used in the same coefficientSetFunction, then it is still possible to have some of those attributes
time-independent and some time-dependent. However all the time-dependent attributes that are used in a given coefficientSet
should be defined with exactly the same time-periods in the DBF file.

Coefficients with multiple values (tables)

Some transformations require a table, e.g. a head-discharge table, in a coefficientSet. For the purpose of tables it is possible to define a given
attribute in a DBF file with multiple values. To do this make multiple rows with the same location and same period, only with different values for
the attributes. If a given attribute is used in a table in a coefficientSetFunctions object, then for each location and period the multiple values that
are defined for that location and period will be converted to a table during a transformation run. This only works for elements in a
coefficientSetFunctions object that are designated as table elements. An element in a coefficientSetFunctions object is designated as a table
element if, according to the schema, the element can occur only once in the coefficientSetFunctions object, but can occur multiple times in the
corresponding coefficientSet object. This is how a transformation run knows that it should search for multiple values for attributes to create a
table. This is the case e.g. for the headDischargeTableRecord element in the StructurePumpHeadDischargeTable transformation, which would be
used as in the following xml example. In this case the "head" and "discharge" attributes should have multiple values defined in the DBF file so that
a head-discharge table can be created.

tables
All attributes, e.g. "head" and "discharge", that are used in the same table element, e.g. headDischargeTableRecord, should
have the same number of values defined per location per period. It is still possible to have a different number of values for
different periods and different locations, as long as there are as many head values as discharge values per location per period.

<structure>
<pumpHeadDischargeTable>
...
<coefficientSetFunctions>
<headDischargeTableRecord head="@head@" discharge="@discharge@ * 1000"/>
</coefficientSetFunctions>
...
</pumpHeadDischargeTable>
</structure>


Sample input and output

Sample input and output. You can attach files if necessary

Error and warning messages

Description of errors and warnings that may be generated

Error: Error message

Action: Action to fix

Known issues

Describe all known issues and unexpected behaviour

Related modules and documentation

Links to related parts of the system

Technical reference

Entry in moduleDescriptors: Specification of: ENTRY and DESCRIPTION in the SystemConfigFiles\[Link]

<moduleDescriptor id="ENTRY">
<description>DESCRIPTION</description>
</moduleDescriptor>

23 Qualifiers
Function: Qualifiers to parameters

Where to Use? Time series

Why to Use? To reduce the number of parameters

Description: Gives a qualifier to a parameter, like "minimum" or "observed"

Available since: DelftFEWS200803

Contents
Overview
Configuration
Qualifier definition
Time Series

Overview

To be able to give additional information to a parameter without creating lots of extra parameters, we introduced the feature of qualifiers.
Qualifiers are used to define a unique time series, next to the locationId and parameterId. Examples are series where you want to derive the daily
minimum, maximum and mean values of an observed series of water levels. The original series is a regular series with parameterId "H" and no
qualifier, where the other series have the same parameterId "H", but qualifiers like "min", "mean" and "max".

Configuration

Qualifier definition

Qualifiers are defined in the regionConfigFiles directory. When available on the file system, the name of the XML file is for example:

Qualifiers 1.00 [Link]

An example looks like:

<qualifiers xmlns="[Link]" xmlns:xsi="[Link]" xsi:schemaLocation="[Link]">
<qualifier id="min" name="min">
<description>minimum</description>
</qualifier>
<qualifier id="max" name="max">
<description>maximum</description>
</qualifier>
</qualifiers>

Time Series

The qualifier is referenced in a time series set with the qualifierId element, in addition to the parameterId and the location.
See for example:

<moduleInstanceId>ImportCAW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<qualifierId>min</qualifierId>
<locationSetId>Boezem_Poldergemaal_H.meting</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-6000" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
<synchLevel>5</synchLevel>


24 Topology
Function: Configure topology of an IFD environment

Where to Use? Mandatory for an IFD installation

Why to Use? The [Link] is necessary to be able to use panels like the topology panel and the forecast panel

Description: The topology panel is used to define the topology of an IFD environment. The behaviour of the forecast panel, which is used to start
IFD runs, can also be configured here.

Available since: DelftFEWS201001

Contents
Overview
Configuration
Nodes definition
Configuration options which apply to all nodes
enableAutoRun
enableAutoSelectParameters
Configuration options which apply to individual nodes
Groupnodes
WorkflowId
StateSelection
LocalRun
Viewpermission
Leaf nodes
NextNodeId
PreviousNodeId
LocationId
FilterId
MapExtendId
Schema

Overview

The [Link] is a mandatory configuration file when you are setting up an IFD environment. This configuration file is used to configure the
topology of a region.

The topology is defined by individual nodes and their connectivity. The topology can be viewed in the topology panel, which shows a block
diagram of the topology, or in the forecast panel, which shows a tree view of the topology. The behaviour of the forecast panel can also be
configured in the topology file. For example, a workflow can be configured for a topology node. By default the workflow will run locally when the
node is selected in the forecast panel. This can be switched off by setting the option enableAutoRun to false.

The [Link] plays a central role in configuring an IFD environment since it is used to configure the forecast panel, which is the central panel
in an IFD environment.

Configuration

Nodes definition

The topology of a region is configured by defining the individual nodes of a region and grouping them. Below is an example from the topology of the
ABRFC region.

<nodes id="ABRFC">
<workflowId>ABRFC_Forecast</workflowId>
<nodes id="NMWTX" name="NMWTX">
<workflowId>NMWTX_Forecast</workflowId>
<node id="EGLN5" name="EAGLE NEST DAM">
<workflowId>EGLN5_Forecast</workflowId>
</node>
<node id="CMMN5" name="CIMARRON 4SW">
<previousNodeId>EGLN5</previousNodeId>
<workflowId>CMMN5_Forecast</workflowId>
</node>
</nodes>
</nodes>

In the example above we see that the region ABRFC has two leaf nodes, CMMN5 and EGLN5. They are grouped in the group NMWTX. The group
NMWTX is part of the top-level node ABRFC.

This simple example shows how a topology can be defined and how the nodes and group nodes can be grouped together. It is also possible to
configure the connectivity between nodes. This can be done by using the tag previousNodeId. In the example above we can see that EGLN5 is
upstream of node CMMN5. The connectivity between nodes is visualised in the topology panel.

Configuration options which apply to all nodes

The [Link] has two types of configuration options. The first group is applied to all nodes, the second group is applied to individual nodes. In
this part the first group of options will be explained.

The following global options are available:

enableAutoRun
enableAutoSelectParameters

These global options are configured at the top of the [Link] before the definition of the nodes.

enableAutoRun

This option is set to true by default. If a topology node is selected in the forecast panel, a workflow is configured for this node and the option is
enabled, then the associated workflow will automatically run locally. By setting this option to false this behaviour can be switched off.

enableAutoSelectParameters

This option is set to false by default. If a node is selected and a filter is configured for that node, then the filter will be selected automatically. If this
option is also enabled, then the parameters of that filter will also be selected automatically. Because the parameters are selected as well after
selecting the node, the plot display will automatically show the time series of the filter.
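
As a sketch (reusing the ABRFC example above; the exact placement of these elements should follow the schema), the global options appear at the
top of the file, before the nodes definition:

<enableAutoRun>false</enableAutoRun>
<enableAutoSelectParameters>true</enableAutoSelectParameters>
<nodes id="ABRFC">
...
</nodes>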

Configuration options which apply to individual nodes

The second group of configuration options is applied to individual nodes or to a group of nodes. These options are defined in the nodes to which
they should be applied.

Groupnodes

The following options are available for groupNodes:

workflowId
stateSelection
localRun

viewPermission

All options are optional

WorkflowId

The workflowId is optional for a node. If a workflow is configured, this workflow is automatically started after selection of the node if the option
enableAutoRun is set to true. The workflow can also be started from the forecast panel or the modifier panel.

StateSelection

The forecast panel also allows the forecaster to select a state. The default state selection can be configured with this option.

Possible options are:

coldState
warmState
noInitialState

LocalRun

This option can be used to configure whether the workflow of this node should be run locally or at the server. By default workflows of leaf nodes are
run locally and workflows of group nodes are run at the server. Local runs are considered to be temporary runs. The results of these runs are
deleted when FEWS is stopped.

Viewpermission

With this option an (optional) viewpermission can be configured. If a user is not allowed to view this node it will not be visible in the forecast panel.

Leaf nodes

The following options are available for leaf nodes:

nextNodeId
previousNodeId
locationId
filterId
mapExtendId
workflowId
initialState
localRun
viewPermission

NextNodeId

This option is used to configure the next node of a topology node, for the case that two topology nodes have configured the same node as their
previous node. The nextNodeId indicates which node is considered to be the next node when going downstream with the next segment button in the
topology panel.

PreviousNodeId

The connectivity between nodes is configured by using the previousNodeId-option.

LocationId

This option can be used to connect a location to a topology node. After selection of a node the configured locations are automatically selected in the
filters.

FilterId

If a filter is configured for a topology node it will automatically be selected after selection of the topology node.

MapExtendId

If a mapExtendId is configured the map will automatically zoom to the configured map extent after selection of the node.

The remaining options workflowId, initialState, localRun and viewPermission are described in the section Groupnodes.
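
A minimal sketch of a leaf node that combines several of these options is shown below; the ids and values are hypothetical and the element names
follow the option list above.

<node id="CMMN5" name="CIMARRON 4SW">
<previousNodeId>EGLN5</previousNodeId>
<workflowId>CMMN5_Forecast</workflowId>
<filterId>CMMN5</filterId>
<locationId>CMMN5</locationId>
<mapExtendId>CMMN5</mapExtendId>
<localRun>true</localRun>
</node>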

Schema

25 ModifierTypes
What: [Link]

Required: no

Description: Definition of modifiers in an IFD environment

Schema location: [Link]

Contents
Schema
Introduction
Time series modifiers
Single value modifiers
Constant value modifiers
Enumeration modifiers
Time series modifier
Mark unreliable modifier
Compound modifier
Missing value modifier
Switch option modifier
Option modifiers
Module parameter modifiers
Change weight times modifier
Blending steps modifier
Disable adjustment modifier
Sample years modifier
Module parameter modifier
Change ordinates modifier
Reverse order modifiers
Rating curve modifiers
Shift rating curve modifiers

Schema

Introduction
A forecaster can modify a forecast with so-called Modifiers. Within FEWS there are two
types of modifiers: time series modifiers and parameter modifiers. Parameter modifiers can
modify a parameter of a model or of a transformation.

Time series modifiers modify a time series. The original time series is stored in the database.
However as soon as this time series is retrieved from the database the modifier will be applied to it.
But it is important to note that the original time series is always available in the database.

The [Link] defines which modifiers are available within a FEWS configuration. The modifiers can be
created in the modifiers panel. TimeValue modifiers can also be created in the plot display by graphically
editing a time series or by changing values in the table.

Time series modifiers

Single value modifiers

A single value modifier is a modifier which modifies only one value at one time step in a time series.
The forecaster can define a single value modifier in the modifier panel by selecting a date and a value.
The combination of the selected time and value is the definition of the single value modifier.

An example of the use of a single value modifier is the WECHNG-modifier of the NWS. This modifier sets the
snow water-equivalent for the date specified. The single value modifier is applied to an empty time series which holds
after the application of the modifier only the value of the modifier. This time series is used as an input time series
to the model. The model adapter reads the time series and knows that it has to change the snow water-equivalent for the
mod date to the specified value.

Display

The display for the single value modifiers is shown below.

The user can enter a value in the text box or adjust it with the spinner box next to it.
The value can also be adjusted by the slider bar.
The date of the modifier can be selected in the area above the field in which the forecaster enters the value for the modifier.
The unit of the modifier is shown at the right side of the slider bar.

schema

timeSeries

The tag timeseries is used to define to which timeseries this modifier can be applied.

softLimits

The slider in the display is bounded to the soft limits defined. However they can be overruled by entering a higher or lower value in the text box.

hardLimits

The values entered in the text box or the slider are bounded by the hard limits defined.

defaultTime

The default time of the modifier. Currently two options are available: time zero and start run.

defaultValue

It is possible to assign a default value to a single value modifier.

The following options are available:

default value
derive a default value from a time series
derive default value from a statistical function

When a default value is configured the modifier will always default to that value.
In case the second option is chosen, it is possible to define a timeseries filter from which the default value should be derived.
The modifier will look for a value at the time for which the modifier is defined.
The last option allows the forecaster to configure a statistical function from which the value should be derived.
Currently only the principal component analysis functions support this option.

When the principal component analysis is run in the plot display by selecting the principal component analysis function, the output value of this
function will be the default value for the modifier.

Configuration example

<timeSeries>
<parameterId>WECHNG</parameterId>
</timeSeries>
<softLimits>
<maximumValue>5</maximumValue>
<minimumValue>0</minimumValue>
</softLimits>
<hardLimits>
<minimumValue>0</minimumValue>
</hardLimits>
<defaultTime>start run</defaultTime>
<defaultValue>0</defaultValue>


First the id and name of the modifier are declared. In this case this instance of the singleValueModifier will be identified by wechng.
The timeSeries-part identifies that this modifier can be applied to any time series which have the parameter WECHNG.
The modifier has soft limits configured. These limits are used to limit the slider bar in the display.
In this example the slider bar will start at 0 and end at 5. But these soft limits can be overruled by
manually typing a value lower than zero or higher than 5.
The hardLimits identify the upper and lower limit of the mod and they can not be overruled.
This means that for this mod only the maximum value of the soft limit of 5 can be overruled because there
is a minimum value configured in the hard limits of 0. A single value modifier is only applied at one time step.
By default the time step is set to the start of the run in this modifier. The default value is set to 0.

Constant value modifiers

Constant value modifiers are very similar to single value modifiers. But instead of modifying a single value at a
particular point in time, they modify a time series over a period of time with a fixed value.

An example of the use of the constant value modifier is the MFC-modifier. This modifier adjusts the melt factor of the
snow17-model over the specified period of time with the specified value. It is (just as the WECHNG-modifier) applied to an
empty time series and used as an input time series to the snow17-model.

Display

Below the display of a constant value modifier is shown, which is very similar to the display of the single value modifier.
Note however that this modifier has a start and an end time. The constant value of the modifier can be specified in the
text box or with the slider. The period can be defined by using the start and end date boxes.

Schema

Below the schema of the constant value modifier.

timeseries

The user can define a timeseries filter in this tag to define to which time series the modifier can be applied.

softLimits

The slider in the display is bounded to the soft limits defined. However they can be overruled by entering a higher or lower value in the text box.

hardLimits

The values entered in the text box or the slider are bounded by the hard limits defined.

defaultstarttime

The default start time of the modifier can be defined here.

Possible options are:

startrun
time zero

defaultendtime

The default end time of the modifier can be defined here.

Possible options are:

time zero
end run

defaultvalue

A default value can be defined here.

Configuration example

Below an example configuration of a constant value modifier is shown.

<timeSeries>
<parameterId>MFC</parameterId>
</timeSeries>
<softLimits>
<maximumValue>10</maximumValue>
<minimumValue>0</minimumValue>
</softLimits>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
<defaultValue>1</defaultValue>


The id of this instance of the constant value modifier is mfc and its name is MFC. It will only be applied to time series
which have the parameterId MFC, because a timeseries filter with parameterId MFC is defined.

No hard limits are defined but the soft limits are set to a range of 0-10. The slider will have a range of 0 to 10,
but this can be overruled by entering a higher value in the text box.

Enumeration modifiers

Enumeration modifiers are modifiers in which the user can select an option from a dropdown list. This modifier is applied to a period
of time.

An example of the use of the enumeration modifier is the rain snow modifier from the NWS. With this modifier the forecaster can determine
the precipitation type in the snow17-model. Only two options are available: rain and snow. If the forecaster chooses the option rain then
a value of 1 is set into the timeseries, if the option snow is chosen then the value 2 is set into the timeseries at the specified time.
The modifier is applied to an empty time series and used as an input to the model. The model knows that if value 1 is set into the
timeseries the user has chosen the option rain and that if the value is 2 the option snow was chosen.

Display

Below an example of the display for an enumeration modifier.

Schema

timeseries

The user can define a timeseries filter in this tag.

descriptionEnumeration

Define the text value in the display which is shown before the dropdown list

enumeration

Define the list of options available in the dropdownlist and its associated value which will be placed into
the time series.

defaultstarttime

The default start time of the modifier can be defined here.

Possible options are:

startrun
time zero

defaultendtime

The default end time of the modifier can be defined here.

Possible options are:

time zero
end run

Configuration example

Below is an example of the configuration of an enumeration modifier.

<timeSeries>
<parameterId>RAINSNOW</parameterId>
</timeSeries>
<descriptionEnumeration>choose precipitation:</descriptionEnumeration>
<enumeration>
<item value="1" text="rain"/>
<item value="2" text="snow"/>
</enumeration>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>


This modifier is applied to every time series which has parameter id RAINSNOW because a filter is defined with only the parameterId
RAINSNOW.

The text of the label in front of the dropdown list is configurable. The items in the dropdown list are also configurable. For this modifier the
forecaster can choose between the options rain and snow. If snow is selected a value of 2 is set into the time series for the selected period.
If rain is selected a value of 1 is written into the time series.
The numbers are treated as flags by the model to which the time series is passed.

Time series modifier

The time series modifier is a modifier which allows the forecaster to edit a timeseries by selecting points in a graph or by
changing values in a table. In most applications of this modifier the forecaster is directly editing a time series which is used
by the models. For example it might be used to directly edit the precipitation. This is contrary to how for example the single value modifier
WECHNG is used: that modifier edits an empty time series which is then read by the snow17-model, which modifies its state based on the input
from this modifier. A time series modifier always has a start and end date. Optionally (if configured) a valid time is available.

The forecaster can edit this time series by making changes in the table or in the graph. The changes in the graph are made by clicking in the graph.
When the user clicks from left to right the values between the points are interpolated.
When the user clicks from right to left only the newly added or changed points are adjusted but no interpolation will be done between
the last two points. When more than one time series is shown in the display it is possible to make a selection of which time series should be edited
when making changes by clicking in the graph.
The time series which should be changed can be selected by clicking on the legend of that time series in the graph.

Besides editing the time series by editing values in the graph or table, another operation type can be selected from the dropdown box.

Available options are:

add
subtract
multiply
divide
replace
missing
ignore time series
time series

When one of the options add, subtract, multiply, divide or replace is chosen, a text box in which a value can be entered appears next to the
operation type dropdown box.

The options add, subtract, multiply, divide and replace are self-explanatory. They add, subtract, multiply, divide or replace the timeseries with the
specified value over the specified period of time.

The option missing replaces the values in the time series with missing values over the specified period of time; the option ignore time series sets
the values over the specified period of time to unreliable.

The last option, time series, is the default option which will be selected after startup of this modifier; it allows the forecaster to freely edit
the timeseries.

An example of the use of the time series modifier is the ROCHNG-modifier which is used by NWS to modify the runoff time series.

Display

Below a screenshot of the timeseries modifier.

Schema

Below the schema of the timeseries modifier.

timeSeries

This tag can be used to identify to which timeseries this modifier can be applied.

resolveInWorkflow
In the tag timeSeries a filter is defined which defines which timeseries can be modified with this modifier. If the tag
resolveInWorkflow is set, the modifier can be applied to all timeseries in the current workflow to which the defined time
series filter applies. In an IFD environment the current workflow is the workflow which is associated with the selected topology
node.

resolveInPlots
This tag can only be used in IFD environments. If this tag is enabled, the timeseries filter is also applied to all timeseries
in the plots associated with the currently selected topology node.

editInPlots
It is possible to create a timeseries modifier in the plot displays. This can be done by selecting a timeseries by clicking on its legend.
After selection the timeseries can be modified by graphically editing it or by changing values in the table. This feature can
be disabled by setting this option to false.

createContinousModifiers
If a modifier is created in the graph, by default one modifier will be created. However, when the option createContinousModifiers is disabled, one
modifier will be created for every continuous range of modifications made. For example, if the forecaster changes a 6-hour timeseries at 00z and at
12z but not at 0600z, by default this will result in a single modifier, but when this option is disabled two modifiers will be
created, one for each continuous range of changes. In this case there is a change at 00z and one at 12z, therefore two modifiers will
be created.

Configuration example

Below an example of how a time series modifier should be configured.

<timeSeries>
<moduleInstanceSetId>SACSMA_Forecast</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>INFW</parameterId>
<locationSetId>Gages_Catchments</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<ensembleId>QPF</ensembleId>
</timeSeries>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
<resolveInWorkflow>true</resolveInWorkflow>
<resolveInPlots>false</resolveInPlots>


This modifier can be applied to the time series identified in tag timeseries.
The modifier will have a start time equal to the start of the run and will end at the end of the run.
The option resolveInWorkflow is set to true and the option resolveInPlots is set to false.
This means that the IFD will search for time series which might be modified in the workflow of the selected
node but it will not search in the time series which are displayed in the plots for this node.

Mark unreliable modifier

This modifier sets all the values in a time series to unreliable over a period so the data will not
be used in the models, but the original values will be displayed.
The display is very similar to the display used for the timeseries modifier; however, the dropdown box is disabled
and the option ignore timeseries is enabled.
The forecaster can only edit the start and end dates of the period in which the time series will be set to invalid.
In the Modifiers Display table the unreliable values in the modified time series are marked yellow.

An example of the use of this modifier is the modifier IGNORETS. This modifier is used by the NWS
to arrange that the model RESSNGL or the transformation AdjustQ ignores certain types of data.
By setting the correct filter in the configuration only certain input time series of RESSNGL or AdjustQ
can be ignored by using the modifier.

Display

Below an example of the display of the mark unreliable modifier.

Schema

timeSeries
This tag can be used to identify to which timeseries this modifier can be applied.

defaultstarttime

The default start time of the modifier can be defined here.

Possible options are:

startrun
time zero

defaultendtime

The default end time of the modifier can be defined here.

Possible options are:

time zero
end run

Configuration example

Below a configuration example of this type of modifier:

<timeSeries>
<moduleInstanceSetId>RESSNGL_Forecast</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>PELV</parameterId>
<locationSetId>Reservoirs</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
</timeSeries>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>


Compound modifier

The compound modifier can be used to modify a set of time series with slider bars. Each slider shows a reference
value in blue. If no modification is made the value of the slider will be equal to the reference value. If a modification
is made the slider will always be equal to the value of the modifier. To indicate that a modification was made the text box
will be made yellow.

An example of the use of the compound modifier is the sacco-modifier. This modifier is used to modify the state of the Sacramento-model.
Each slider represents a state parameter. The current value is shown in blue; the slider is equal to the current value of the model or, if the
state parameter is changed, it will be equal to the modification.

Display

Below an example of the display of this modifier.

Schema

slider
For each slider the time series which holds the reference values should be configured, as well as the time series which should contain the modified
value. Each slider also has a maximum value. This maximum is retrieved from the module parameter file of the model. The
tag maximumAllowedValueParameterId identifies which parameter should be used to identify the maximum.

current time series


This time series holds the current value of the model and will be used to determine the value of the blue reference value.

modified time series


If a parameter is changed the modifier will be applied to this time series

maximumAllowedValueParameterId
The maximum of the slider can be derived from the moduleparameterfile by identifying the parameterId which holds the value
of the maximum

hardLimits
It is also possible to define the minimum and maximum of the modifications by hard-coding them in the configuration.

defaultTime
The default date of the modifier. Possible options are start run and time zero.

Configuration example

Below an example of the configuration of a compound modifier. In this example only a part of the sacco configuration is shown: the
configuration of one of the five sliders.

<slider>
<currentTimeSeries>
<moduleInstanceSetId>SACSMA_Forecast</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>UZTWC</parameterId>
<locationSetId>Gages_Catchments</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
</currentTimeSeries>
<modifiedTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>UZTWC</parameterId>
<locationSetId>Gages_Catchments</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
</modifiedTimeSeries>
<maximumAllowedValueParameterId>UZTWM</maximumAllowedValueParameterId>
</slider>
<defaultTime>start run</defaultTime>


Missing value modifier

The missing value modifier can be used to set the values in a time series to missing over a period of time. The user can only define the period of
time over which this modifier is active.

The panel which is used for this modifier is very similar to the panel of the time series modifier.

The dropdown box which is used to select an operation type, however, is disabled and set to the type Missing.

An example of the use of this modifier is the SETMSNG-modifier, which is used by the NWS to set the values of certain time series
to missing.

Display

An example of the missing value modifier is shown below.

Schema

timeseries

This tag can be used to identify to which timeseries this modifier should be applied.

defaultStartTime
The default start time of the modifier. The available options are startrun and time zero.

offsetDefaultStartTime
The offset of the start time compared to the option defined in defaultStartTime. For example, when an offset of 1 day is configured
in this option and the defaultStartTime is set to time zero, the default start time of the modifier will be set to
time zero plus 1 day.

defaultEndTime
The default end time of the modifier. The available options are time zero and end run.

offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime.

expiryTime
This tag can be used to overrule the default expiry time.

resolveInWorkflow
In the tag timeSeries a filter is defined which defines which timeseries can be modified with this modifier. If the tag
resolveInWorkflow is set, the modifier can be applied to all timeseries in the current workflow to which the defined time
series filter applies. In an IFD environment the current workflow is the workflow which is associated with the selected topology
node.

resolveInPlots
This tag can only be used in IFD environments. If this tag is enabled, the timeseries filter is also applied to all timeseries
in the plots associated with the currently selected topology node.

Below an example of the configuration of a missing value modifier.

<timeSeries>
<parameterId>QIN</parameterId>
</timeSeries>
<defaultStartTime>start run</defaultStartTime>
<offsetDefaultStartTime unit="day" multiplier="1"/>
<defaultEndTime>time zero</defaultEndTime>
<offsetDefaultEndTime unit="day" multiplier="100"/>
<expiryTime unit="day" multiplier="100"/>
<resolveInWorkflow>false</resolveInWorkflow>
<resolveInPlots>true</resolveInPlots>


This missing value modifier can only be applied to time series which have the parameterId QIN because the tag timeSeries defines a
timeseries filter with parameterId QIN.

When a missing value modifier is created from the modifiers panel by default the start time of the modifier will be equal to the start of the run plus
1 day.

The end of the modifier will default to time zero plus 100 days.

The tag resolveInWorkflow is set to false and the resolveInPlots tag is set to true which means that the modifier can be applied to all time series in
the plots of the node, but will not be applied to the time series identified in the workflow of the node. In this example there are no time series
defined to which this modifier should be applied. This means it can be applied to all time series which are defined in the plots of a node.

Switch option modifier

This modifier allows the forecaster to choose one of the configured time series. If the chosen time series was defined as a timeValue timeseries
the forecaster will also have the option to enter a value. If the timeseries was defined as a boolean time series the forecaster
cannot enter a value and the textbox for the value will be grayed out.

An example of the use of this modifier is the SSARREG-modifier of the NWS. This modifier is used to set the regulation options for a basin.
By using the radio-button a regulation option can be selected. For most regulation options a value can be entered. However the option
FREEFLOW
can only be switched on.

Below an example of the display of a switch option modifier.

For each date at a model time step one of the options can be selected with the radio-button field. In the value field the forecaster can enter a value
or, if the option is only a switch-on option, the value field is blocked. The add-icon automatically adds a new entry to the table. By default the new
entry will have the date of the current row plus one model time step. The delete-icon deletes all the selected rows. The entries are always in
sequence.

schema

Configuration example

Below an example of a switch option modifier. For each regulation option available in the display a time series should be defined. The parameterid
of the configured time series will be used as the name of the regulation option in the column regulation.

<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETH</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETQ</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETS</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>

<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETDH</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETDQ</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<booleanTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>FREEFLOW</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</booleanTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETDS</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<startTime>start run</startTime>
<effectiveDate/>


Option modifiers

This modifier is very similar to the switch option modifier. However, this modifier doesn't allow the definition of an option per date.
It only allows the definition of one option, which will always be valid after creation of the modifier.

An example of the use of this modifier is the rainfall_switch of the seqwater-system. This option allows the forecaster to choose
a forecast type (user defined forecast, no rainfall forecast or use the rainfall forecast).
Secondly it is also possible to choose which rainfall observations to use in the forecast.

Display

Below an example of an option modifier. In this case the example shows the rainfall switch modifier.

Schema

Below the schema of this modifier.

timeValueTimeSeries

First the timeValueTimeSeries are defined. The parameterId of the defined timeseries will be used as an identifier
in the radio button which can be used to select an option. When an option is selected which is defined as a timeValue timeseries
the user can also define a value.

booleanTimeSeries

This option allows the user to define option types which can only be selected by the user, but doesn't offer the possibility to
enter an additional value.

expiryTime

This option can be used to define an expiry time for this modifier which overrules the default expiry time.

Configuration example

<booleanTimeSeries>
<parameterId>Grid</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>Stations</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>SeqGrid</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>SeqStations</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>Forecast ON</parameterId>
<qualifierId>forecast</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>Forecast OFF</parameterId>
<qualifierId>forecast</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>User ON</parameterId>
<qualifierId>forecast</qualifierId>
</booleanTimeSeries>
<expiryTime unit="day" multiplier="1000"/>


As an example the configuration of the option modifier is shown. This modifier has 7 boolean timeseries defined.
This means that 7 options will be available in the display. The options will be split into two groups based on the
qualifierId. In this example this means that we will have an observed group with Grid, Stations, SeqGrid, SeqStations and
a forecast group with Forecast ON, Forecast OFF and User ON.

Module parameter modifiers

Change weight times modifier

This modifier is used in combination with the transformation mergeWeighted. This transformation has two input time series
and creates an output timeseries by taking the weighted average of both input time series. To be able to use this modifier
the transformation must use a module parameter file to define its parameters.

Examples of the use of this modifier are BLENDTEMP and BLENDPREC. Both modifiers are used by the NWS.
BLENDTEMP is used to blend the observed and simulated temperature time series.
BLENDPREC is used to blend the observed and simulated precipitation time series.
Both blend operations are done as part of the preprocessing step prior to creating a forecast.

Display

Below an example of the display of a change weight times modifier.

The user can add rows with the add-button and delete rows by selecting a row and pressing the delete button.
The first column of the table is used to define the value of the offset from time zero and the second column defines the time unit.
The combination of columns 1 and 2 defines the total offset from time zero. The third column is calculated from the first two.

In the display example the first row indicates that at an offset of 1 day the weight of the first time series is 1. This means
that the weight of the second time series is 0. The output of the transformation mergeWeighted will be equal to the first timeseries
at 1 day after time zero. At 5 days after time zero the weight of the first timeseries will be 0 and the weight of the second time series
will therefore be 1. The output at this timestep will be equal to the second time series. Between both time steps the weight will
be determined by linear interpolation.

Schema

Configuration example

<changeWeightTimesModifier id="BLENDTEMP" name="BLENDTEMP"/>


To be able to use this modifier the only thing that the configuration has to do is to declare that this modifier can
be used, by defining it in the [Link] and assigning an id and a name to it.

It is possible to define more than one changeWeightTimesModifier. By disabling certain changeWeightTimesModifiers in a workflow it can
be arranged that only one type can be used in a workflow. This makes it useful to give meaningful names to the modifiers so that
the user can identify from the name what the modifier will be doing.
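
For example (a sketch using the two NWS modifiers mentioned above), both blend modifiers can be declared side by side:

<changeWeightTimesModifier id="BLENDTEMP" name="BLENDTEMP"/>
<changeWeightTimesModifier id="BLENDPREC" name="BLENDPREC"/>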

Blending steps modifier

The blending steps modifier is a modifier which can only be used in combination with the transformation AdjustQ.
Secondly, the adjustQ-transformation should use a module parameter file to define its parameters to be able to use this modifier.

This transformation uses observed discharges and simulated discharges to create an output timeseries.

One of the parameters of the adjustQ-transformation is the number of blending steps.
This parameter determines in how many steps the blend from the observed time series to the simulated time series is done.
The blending steps modifier is used to modify this parameter. The modifier doesn't have a start and/or end time and is always valid.
The last applied blending steps modifier is always used. Only one blending steps modifier can be defined in a FEWS configuration.

An example of the blending steps modifier is the CHGBLEND-modifier. This modifier is used by the NWS to modify the blending steps of the
adjustQ-operation.

Below is an example of a blending steps modifier. The forecaster can enter the value in the text box and/or change it with the up and down arrows
next to the text box.

Schema

Below is an example of the configuration of a blending steps modifier.


The only thing the configurator has to configure is the id of the modifier and its name.
By doing this the configuration declares that it is allowed to use the blending steps modifier.

<blendingStepsModifier id="chgblend" name="chgblend"/>


Disable adjustment modifier

The transformation adjustQ creates a discharge time series from observed discharge time series and simulated
discharge time series. When this modifier is applied the observed time series are ignored and the output will be equal to the
simulated discharge time series. This modifier can, like the blending steps modifier, only be used in combination with the
adjustQ-transformation.
Secondly, the adjustQ-transformation should use a module parameter file to define its parameters to be able to use this modifier.
The module parameter file should define the (optional) parameter disableAdjustment.

Below is an example of such a module parameter file.

<group id="default">
<parameter id="blendingSteps">
<intValue>1</intValue>
</parameter>
<parameter id="interpolationType">
<stringValue>difference</stringValue>
</parameter>
<parameter id="disableAdjustment">
<boolValue>false</boolValue>
</parameter>
</group>


A typical use of this modifier is the espadjq modifier, which is used by the NWS to disable the adjustQ-operation in the forecast.

Display

Below is an example of the display for this modifier. The forecaster cannot select a start and/or end date. This means that if this modifier
is active the adjustQ-operation is disabled.

Schema

Below is the schema of the disableAdjustmentModifier.

Configuration example

The configurator only has to configure the id and the name of the modifier. By doing this FEWS knows that it is allowed
to use this modifier at each adjustQ-operation which uses a module parameter file and has the tag disableAdjustment defined in its
module parameter file.

<blendingStepsModifier id="chgblend" name="CHGBLEND"/>


<disableAdjustmentModifier id="espadjq" name="ESPADJQ"/>


Sample years modifier

The transformation sample historic creates ensembles based on historic time series.
The sample years modifier can only be used in combination with this transformation.
To be able to use this modifier the transformation sample historic should use a module parameter file to define its configuration options.

An example of the use of this modifier is the HistoricWaterYears modifier, which is used by the NWS.
It is used by forecasters to overrule the default sample years in the transformation.

Display

Below is an example of the display of this modifier.

The forecaster can modify the default sample years by changing the start year and end year in the display.

Schema

Configuration example

Below is an example of the configuration of this modifier.

<sampleYearsModifier id="historicwateryears" name="historicwateryears"/>


Module parameter modifier

The module parameter modifier is a generic module parameter file editor which can be used to modify every module parameter file.

It is possible to limit the number of module parameter files which can be modified by applying a filter. It is also possible to have
more than one module parameter modifier in a FEWS region. By giving the modifiers a different name and a different filter it is possible to
define two modifiers which each modify a certain parameter of a certain module parameter file, as sketched below.
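A sketch of two such declarations is given below. The element name moduleParameterModifier and the second modifier (its id, name and LAG_K_PARAMETERS filter) are assumed here for illustration only; the CONSTANT_BASE_FLOW filter follows the example later in this section:

<moduleParameterModifier id="baseflow" name="BASEFLOW">
<filter>
<moduleParameterId>CONSTANT_BASE_FLOW</moduleParameterId>
</filter>
</moduleParameterModifier>
<moduleParameterModifier id="lagk" name="LAGK">
<filter>
<moduleParameterId>LAG_K_PARAMETERS</moduleParameterId>
</filter>
</moduleParameterModifier>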

An example of the use of this modifier is the BASEFLOW-modifier of the NWS. The modifier modifies the BASEFLOW-parameter of the UNITHG-model.

Display

Below is an example of the baseflow-modifier.

Schema

Below is the schema of the baseflow-modifier.

filter

Define a filter based on parameter ids. This filter will be used to determine which module parameter files can be edited with this modifier and which part of the module parameter file can be edited.

Configuration example

Below is an example of the configuration of a module parameter modifier.

The filter tag identifies which module parameter files can be modified. In the example below every module parameter file with the tag CONSTANT_BASE_FLOW can be modified.

The filter is also used to determine which part of the module parameter file can be modified. In the example below only the module parameters with id CONSTANT_BASE_FLOW are shown in the modifier's display and are editable.

<filter>
<moduleParameterId>CONSTANT_BASE_FLOW</moduleParameterId>
</filter>
<defaultValidTime/>


Change ordinates modifier

This modifier can be used to change the ordinates of the module parameter file of the unit hydrograph model.

The ordinates can be changed in the table or in the graph. When the user presses the apply button the ordinates are adjusted by applying a volume correction.

The volume correction ensures that the volume of the unit hydrograph after the modifier is applied is the same as the volume without the modifier applied.

Display

Below is an example of the display of this modifier.

Schema

defaultStartTime
The default start time of the modifier. The available options are start run and time zero.

defaultEndTime
The default end time of the modifier. The available options are time zero and end run.

offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime. For example, when the default end
time of the modifier is set to end run and an offset of 100 days is defined, then the default end time of the modifier
will be set to end run plus 100 days.
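In configuration terms this corresponds to an element such as the one below (this exact element also appears in the shift/multiply rating curve example later in this chapter):

<offsetDefaultEndTime unit="day" multiplier="100"/>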

defaultValidTime
If this option is configured then a valid time can be chosen for this modifier. The valid time always defaults to time zero.

Configuration example

Below is an example of the configuration of this modifier.

<changeOrdinatesModifier id="unithg" name="UNITHG">
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
<defaultValidTime/>
</changeOrdinatesModifier>


Reverse order modifiers

This modifier can be used to reverse the data hierarchy of the merge simple transformation. When this modifier is active the data hierarchy of the transformation is reversed.

An example of the use of this modifier is the switchts-modifier of the NWS. With this modifier the forecasters temporarily favor one time series above the other, because the time series which is normally used as the primary time series is considered to be less reliable.

Display

Below is an example of the display of a reverse order modifier. The display is empty; the forecaster can only set a start and end time for the modifier.

If configured, it is also possible to enter a valid time for this modifier.

Schema

Below is an example of the display of this modifier. The display is blank; the forecaster can only enter a period in which this modifier is active.

defaultStartTime
The default start time of the modifier. The available options are start run and time zero.

defaultEndTime
The default end time of the modifier. The available options are time zero and end run.

offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime.

defaultValidTime
If this option is configured then a valid time can be chosen for this modifier. The valid time always defaults to time zero.

Configuration example

Below is a configuration example.

<reverseOrderModifiers id="switchTs" name="SWITCHTS">
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>time zero</defaultEndTime>
</reverseOrderModifiers>


Rating curve modifiers

Shift rating curve modifiers

Rating curve modifiers are used to modify a rating curve.

The rating curve can be modified by shifting the whole rating curve by a constant value or by multiplying it with a factor.

The constant value or the multiplication factor is calculated by the following procedure (a worked example is given below):

the forecaster defines a stage/discharge pair,
from the given stage the discharge is calculated by using the rating curve,
the difference or factor between the given discharge and the calculated discharge is calculated.
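As an illustration with made-up numbers: if the forecaster enters a stage of 2.00 m with a discharge of 150 m3/s while the rating curve returns 120 m3/s for that stage, the constant shift becomes 150 - 120 = 30 m3/s and the multiplication factor becomes 150 / 120 = 1.25, depending on which type of modifier is selected.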

This type of modifier is in use by NCRFC, one of the RFCs of the NWS. They use this modifier to temporarily modify the rating curve; when new rating curves become available they are imported into their system.

Display

An example of the display of this modifier is shown below. The forecaster can define a stage/discharge pair by entering a pair in the text boxes. It is also possible to double click on a point in the graph to define a pair. From the defined stage/discharge pair the constant value or multiplication factor is automatically derived and displayed beside the given stage/discharge pair. The radio button at the top of the display can be used to switch between the two types of modifier (constant value or percentage).

Schema

defaultStartTime
The default start time of the modifier. The available options are start run and time zero.

offsetDefaultStartTime
The offset of the start time compared to the option defined in defaultStartTime. For example, when an offset of 1 day is configured
in this option and the defaultStartTime is set to time zero, then the default start time of the modifier will be set to
time zero plus 1 day.

defaultEndTime
The default end time of the modifier. The available options are time zero and end run.

offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime.

Configuration example

<shiftMultiplyRatingCurveModifier id="qpcshift" name="qpcshift">
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
<offsetDefaultEndTime unit="day" multiplier="100"/>
</shiftMultiplyRatingCurveModifier>


26 TimeSteps
Function: Configure predefined timesteps for a fews environment

Where to Use? To define verbose timesteps or to define yearly or monthly time steps

Why to Use? Yearly and monthly time steps can only be configured in the [Link]. For verbose timesteps it might be useful to define them once in the [Link] and refer to them from other configuration files.

Description: Definition of timesteps which can be referenced from other configuration files

Available since:

Contents

Overview
Configuration
Schema

Overview

The [Link] can be used to configure timesteps. This file is useful to define verbose timesteps once and to refer to these timestep definitions from other configuration files.

Configuration

When available on the file system, the name of the XML file is for example:

TimeSteps 1.00 [Link]

TimeSteps Fixed file name for the TimeSteps configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Schema

Below is the schema of the [Link] configuration file.

timeStep

Attributes;

- id: Unique id of the time step. This id should be used when referencing this definition from other configuration files.

The following link describes in detail how to configure a timestep.

[Link]
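As an illustration, a plain time step definition in this file could look like the sketch below; the id values are arbitrary and the unit and multiplier attributes are assumed to follow the timeStep convention used elsewhere in this guide:

<timeStep id="hour" unit="hour" multiplier="1"/>
<timeStep id="sixHours" unit="hour" multiplier="6"/>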

yearlyTimeStep

A timeStep that defines a pattern of dates in a year. This pattern will be repeated every year.

Each date in the year can have a different aggregation period (season).

The start of the aggregation period is exclusive and the end of the aggregation period is inclusive.

This yearlyTimeStep is meant for seasons, therefore limited to four dates.

If more than four dates in a year are required, then please use the monthDays option in the timeStep element instead of this yearlyTimeStep

Schema yearly time step

Below is the schema of the yearly time step.

Below is a configuration example.

<monthDay value="--01-02" start="--01-01" end="--01-02"/>
<monthDay value="--04-03" start="--04-02" end="--04-03"/>
<monthDay value="--07-04" start="--07-03" end="--07-04"/>
<monthDay value="--10-05" start="--10-04" end="--10-05"/>


To define a yearly time step an id should be configured. Secondly, the monthDays of the time step should be configured.

In this example the yearly time step has 4 monthDays. The start attribute defines the start of the aggregation period, the
end attribute defines the end of the aggregation period and the value attribute defines the value of the monthDay itself.
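A complete definition might then look like the sketch below; the wrapping yearlyTimeStep element with its id attribute is used as described above, with an illustrative id value:

<yearlyTimeStep id="seasons">
<monthDay value="--01-02" start="--01-01" end="--01-02"/>
<monthDay value="--04-03" start="--04-02" end="--04-03"/>
<monthDay value="--07-04" start="--07-03" end="--07-04"/>
<monthDay value="--10-05" start="--10-04" end="--10-05"/>
</yearlyTimeStep>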

Monthly time step

A timeStep that defines a pattern of days in a month. This pattern will be repeated every month. Each day in the month can have a different
aggregation period. The start of the aggregation period is exclusive and the end of the aggregation period is inclusive.

Schema monthly time step

Below is the schema of the monthly time step.

Below is a configuration example.

<day value="02" start="01" end="02"/>
<day value="11" start="10" end="11"/>


05 Configuring the available DELFT-FEWS modules

Introduction
All functionality used by DELFT-FEWS in processing dynamic data and running external forecasting modules is configured in a module instance.
These are then executed in a logical sequence as defined in a workflow.

A variety of modules is available in DELFT-FEWS to provide specific functionality. Examples include interpolation, running external modules,
data import etc. The modules available are defined in the ModuleDescriptors in the System configuration. This defines the Java classes used to
run each module, and assigns a recognizable name for that module (e.g. Transformation). These Java classes implement the workflow plug-in
interface to DELFT-FEWS. The list of available modules can be extended through adding classes implementing this plug-in interface.

To carry out a specific piece of data processing, an instance of a module is configured. This instance specifies the input data, the output data
and the required steps in processing the data. Each module instance is given a unique name with which it is identified in the module instance

section of the configuration. To link an instance of a module to the type of module available, the module instance is registered in the
ModuleInstanceDescriptors in the Regional Configuration section.

The list of module instances in DELFT-FEWS includes:

Interpolation Module
Transformation Module
Import Module
Export Module
General Adapter Module
Lookup Table Module
Correlation Module
Error Correction Module
Report Module
Report Export Module
Performance Indicator Module

Many of the configuration items required will include references to strings. To avoid duplication, a tag can be defined in the
[Link] file in the root configuration and the tag name used in the XML file (see also System Configuration).

Contents
01 Interpolation Module
02 Transformation Module
03 Import Module
04 Export modules
05 General Adapter Module
06 Lookup Table Module
07 Correlation Module
08 Error Correction Module (ARMA)
09 Report Module
10 Performance Indicator Module
11 Amalgamate Import Data Module
12 Archive Module
13 Rolling Barrel Module
14 Support Location Module
15 Scenario Module
16 Pcraster Transformation (pcrTransformation)
17 WorkflowLooprunner
18 Mass-balances
19 Rating curves
20 Transformation Module (Improved schema)
21 Secondary Validation
22 forecastLengthEstimator
23 Decision Module
24. ImportAmalgamate

01 Interpolation Module
What [Link]

Description Configuration for interpolation module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="Interpolation">


<description>General Interpolation Component</description>
<className>[Link]</className>
</moduleDescriptor>

Interpolation Module Configuration


The Interpolation module generates data at desired locations or at desired points in time by means of either a serial or spatial interpolation
technique. It is applied for the filling in of data gaps in measured on-line data, as well as to derive spatially distributed data for meteorological time
series, such as precipitation and temperature, based on information available at neighbouring locations.

Two methods of interpolation are available;

Serial interpolation

In serial interpolation mode, interpolation is done to fill any gaps in a time series. The interpolation module will only consider the time series itself
in filling these gaps. Interpolation methods that can be used are;

Filling of gaps with a default value
Filling of gaps by linear interpolation
Filling of gaps by block interpolation
Extrapolation of gaps at start or end of a time series

All these methods can be configured to only fill gaps that are not more than of a given duration. Essential to the understanding of the Interpolation
module is that the module does not have the capability to identify gaps due to potentially unreliable data in a time series. It will only provide an
alternative value for those data points of which the quality flag is set to Unreliable. The validation module can be configured to identify unreliable
data and set quality flags as appropriate.

Spatial Interpolation

In spatial interpolation mode, the interpolation can be either applied to fill gaps in time series, or to create a new time series for a location using
data from other (spatially distributed) locations. Spatial interpolation can also be applied for sampling scalar time series from grid time series, for
re-sampling grids, or for creating grids from time series data. Different methods of spatial interpolation are available;

spatial interpolation using Kriging
spatial interpolation using Inverse Distance Weighting
spatial interpolation using bi-linear interpolation
Averaging of grid cells within a sub-basin boundary
Averaging of grid cells within a sub-basin boundary using XML data
spatial interpolation using closest distance
input average times output Area
Renka-Cline Triangulation
Sum of grid cells within a sub-basin boundary.

When available as configuration on the file system, the name of the XML file for configuring an instance of the interpolation module called for
example InterpolateHBV_Forecast may be:

InterpolateHBV_Forecast 1.00 [Link]

InterpolateHBV_Forecast File name for the InterpolateHBV_ForecastData configuration.


1.00 Version number
default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 54 Root element of the interpolation module configuration

interpolationSet

Root element for the definition of an interpolation step. Multiple entries may exist.

Attributes;

interpolationId : Id of the interpolation defined. Used for reference purposes only. This Id will be included in log messages generated.

serialInterpolation

Root element for the definition of serial interpolation options.

spatialInterpolation

Root element for the definition of spatial interpolation options.

timeSeriesInputSet

Input time series set. Note that when the interpolation module is used to fill gaps in time series the input time series set is the same as the output
time series set. The time series sets may include either a single location or a locationSet. Note that the latter may not always be possible when
using the "default" interpolation option, as the default may be location specific.

outputSet

Output time series set. Note that when the interpolation module is used to fill gaps in time series the input time series set is the same as the
output time series set. Identification is only required when the series generation option is used in spatial interpolation. The locations defined in this
timeSeriesSet, and their geographical attributes, determine the locations of the series generated.

Serial interpolation

The serial interpolation option is used to define interpolation options for filling gaps in the time series identified. Multiple methods of interpolation
may be identified. These will be executed in order of definition for the same time series (e.g. first linear interpolation, then an extrapolation and
finally filling remaining gaps with default values).

Figure 55 Elements for defining serial interpolation options in the Interpolation module configuration.

serialInterpolationOption

Selection of type of serial interpolation. Enumeration of available options is;

linear ; for linear interpolation between available values
block ; for block interpolation (note: the last available value is then used until a new value is available).
extrapolate ; for extrapolation at start or end of series. Extrapolation uses the last or first value to fill the gaps at the end or start of the series.
default ; for replacing unreliable values with a default.

gapLength

Maximum gap length of unreliable data in seconds which will be filled using the interpolation option defined. If the gap is longer, then none of the
values will be replaced.

defaultValue

Default value to use to replace interpolation values.
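As an illustration of the elements above, a minimal gap-filling interpolation set might look like the sketch below. The interpolationId, the values and the exact nesting of the elements are indicative only; the time series sets are omitted (for gap filling the input and output time series set are the same):

<interpolationSet interpolationId="FillGapsWaterLevel">
<serialInterpolation>
<serialInterpolationOption>linear</serialInterpolationOption>
<gapLength>7200</gapLength>
</serialInterpolation>
</interpolationSet>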

Spatial interpolation

The spatial interpolation option is used to define interpolation options for filling gaps in the time series identified using available data from other
(spatially distributed) locations. This method can be used to either fill gaps, or to create a new time series.

Figure 56 Elements for defining spatial interpolation options in the Interpolation module configuration.

interpolationOption

Selection of type of spatial interpolation. Enumeration of available options is;

inversedistance ; for inverse distance weighted interpolation between available values at spatially distributed locations.
bilinear ; for bilinear interpolation between available values at spatially distributed locations.
kriging ; for interpolation using Kriging between available values at spatially distributed locations.
gridcellavaraging; for interpolation of time series based on averaging grid cells (used for example for establishing catchment averages
where the catchment size is much larger than the grid cell size).
Closest distance; for interpolation of time series based on the closest distance between available values at spatially distributed locations.
An extra option is to interpolate from a grid to a longitudinal profile.

interpolationType

Specify if spatial interpolation is used for filling gaps in series or for generating a new series. Note in the latter case the output variable will need to
be defined. This also defines if the output variable is a grid time series or a scalar time series. The available options are:

seriesfilling: for filling gaps in time series (scalar timeseries only).


seriesgeneration: for creating a new time series.

valueOption

Option to determine how input values are used. Enumeration of available options is;

normal ; for using values as is.
residual ; for applying linear regression first and applying spatial interpolation on residuals of values only.
splitwithelevation ; for applying different linear regression parameters above and below an elevation split (required only when elevation is considered).

variogram

Root element for the semi-variogram to be used when Kriging is applied.

variogram:type

Type of variogram to be used. Enumeration of available options is;

exponential ;
Gaussian ;
Linear ;
Spherical ;
power ;

The variogram relates the correlation coefficient to the distance between parameter pairs.

variogram:nugget

Nugget of the variogram

variogram:slope

Slope of the variogram. Used for linear variogram types.

variogram:sill

Sill of the variogram.

variogram:range

Range of the variogram.

numberOfStations

Number of stations to consider in spatial interpolation. Used in Inverse distance when taking a limited number of stations into account. The
nearest stations will be used in preference.

regressionElevation

Elevation level at which the regression split is applied.

minimumValue

Minimum value of the output data. For interpolation of rainfall data this should be set to zero. Numerically the interpolation may produce invalid
(negative) data.

distanceParameters

Distance parameters for computing actual distances between locations when projection is geographical (WGS1984). Four parameters are
required.

debug

Optional debug level. Spatial interpolation is implemented through a DLL. This can produce a log file, depending on level specified. A setting of 1
is the lowest level, a setting of 4 is highest (can produce very extensive log files).

coordinateFile

Coordinate file allocating grid cells to be considered per location. This coordinate file follows a specific format. Locations to be interpolated to are
indicated through their spatial location. After each location a list of grid cells (m,n coordinates) to be considered is included.

coordinateSystem

Indicates the coordinate system: if it is longitude-latitude this is defined as 1; if not, 0 is used and distances are calculated in metres.

inverseDistancePower

Power applied to the inverse distance interpolation.

02 Transformation Module
What [Link]

Description Configuration for the transformation module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="Transformation">


<description>General Transformation Component</description>
<className>[Link]</className>

</moduleDescriptor>

Transformation Module Configuration
The Transformation module is a general-purpose module that allows for generic transformation and manipulation of time series data. The module
may be configured to provide for simple arithmetic manipulation, time interval transformation, shifting the series in time etc, as well as for applying
specific hydro-meteorological transformation such as stage discharge relationships etc.

The Transformation module allows for the manipulation and transformation of one or more time series. The utility may be configured to provide
for;

Manipulation of one or more series using a standard library of arithmetic operators/functions (enumerated);
Addition, subtraction, division, multiplication
Power function, exponential function
Hydro-meteorological functions like:
Deriving discharges from stages
Compute potential evaporation
Calculating weighted catchment average rainfall
Shifting series in time
Time interval conversion:
Aggregation
Dis-aggregation
Converting non-equidistant to equidistant series
Creating astronomical tide series from harmonic components
Handling of typical profiles
Data hierarchy
Selection of (tidal) peaks
statistics

When available as configuration on the file system, the name of the XML file for configuring an instance of the transformation module called for
example TransformHBV_Inputs may be:

TransformHBV_Inputs 1.00 [Link]

TransformHBV_Inputs File name for the TransformHBV_Inputs configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 57 Root element of the Transformation module.

transformationSet

Root element for the definition of a transformation (processing an input to an output). Multiple entries may exist.

Attributes;

transformationId : Id of the transformation defined. Used for reference purposes only. This Id will be included in log messages generated.

Figure 58 Elements of the definition of an input variable.

inputVariable

Definition of the input variables to be used in transformation. This may either be a time series set, a typical profile or a set of (harmonic)
components. The InputVariable is assigned an ID. This ID is used later in the transformation functions as a reference to the data.

Attributes;

variableId : ID of the variable (group). Later used in referencing the variable.
variableType : Optional type definition of variable (defaults to "any")
convertDatum : Optional Boolean flag to indicate if datum is to be converted.

Available harmonic components are listed in the attached file.

timeSerieSet

Definition of an input variable as a time series set (see TimeSeriesSet definition).

timeStep

Time step for typical profile if variable to be defined as typical profile.

Attributes;

unit (enumeration of: second, minute, hour, day, week, nonequidistant)
multiplier defines the number of units given above in a time step (not relevant for nonequidistant time steps)
divider same function as the multiplier, but defines fraction of units in time step.

relativeViewPeriod

Relative view period of the typical profile to create. If this is defined and the time span indicated is longer than the typical profile data provided,
then the profile data will be repeated until the required time span is filled. If the optional element is not provided then the typical profile data will be
used only once.

data

Data entered to define the typical profile. Data can be entered in different ways. The typical profile can be defined as a series of values at the
requested time step, inserted at the start of the series, or it can be mapped to specific time values (e.g. setting a profile value to hold at 03:15 of
every day). Which of these is used depends on the attributes defined.

Attributes;

value : Required value for each step in the profile
monthDay : Attribute value indicating the value entered is valid for a month/day combination. The year value is added depending on the
year value in which it is used. The string has the format "--[month]-[day]". For example the 23rd of August is "--08-23".
dateTime : Attribute value indicating the value entered is valid for a specific date/time combination. The string has the format
"[year]-[month]-[day]T[hour]:[minute]:[second]", for example "1984-12-31T[Link]".
time : Attribute value indicating the value entered is valid for a specific time, irrespective of the date. The date value is added at run time.
The string has the format "[hour]:[minute]:[second]". For example "[Link]".
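For example, a typical profile could be entered with month/day values or with time values as in the sketch below (values illustrative):

<data monthDay="--01-01" value="0.2"/>
<data monthDay="--07-01" value="1.6"/>

or

<data time="03:15:00" value="1.2"/>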

timeZone

Optional specification of the time zone for the data entered (see timeZone specification).

timeZone:timeZoneOffset

The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)

timeZone:timeZoneName

Enumeration of supported time zones. See appendix B for list of supported time zones.

arithmeticFunction

Root element for defining a transformation as an arithmetic function (see next section for details).

hydroMeteoFunction

Root element for defining one of the available hydro-meteorological transformations.

ruleBasedTransformation

Root element for defining a rule based transformation (see next section for details on rules).
Attributes;

rule : definition of the transformation rule to apply. Enumeration of;


selectpeakvalues
selectlowvalues
selectpeakvalueswithincertaingap
selectlowvalueswithincertaingap
equitononequidistant
equitononequidistantforinstantaneousseries
equitononequidistantforaccumulativeseries
datahierarchy
typicalprofiletotimeseries
zerodegreealtitudelevel
datatotimeseries

aggregate

Root element for defining a time aggregation transformation (rules are discussed below)
Attributes;

rule : definition of aggregation approach. Enumeration of;


instantaneous
accumulative
mean
constant

disaggregate

Root element for defining a time dis-aggregation transformation (rules are discussed below)
Attributes;

rule: definition of disaggregation approach. Enumeration of;


instantaneous
accumulative
disaggregateusingweights
constant

nonequidistantToEquidistant

Root element for defining transformation of an non-equidistant time series to an equidistant time series. (rules are discussed below)
Attributes;

rule: definition of approach. Enumeration of;

zero
missing
linearinterpolated
equaltolast

Statistics

Root element for defining statistical transformations.

Season: the statistics transformation can also be carried out for a specific season which is defined by a start and end date. If multiple seasons
are specified, then the statistics transformation will be carried out separately for each specified season. A warning will be given when seasons
overlap in time.

startMonthDay: defines start time of season "--mm-dd"
endMonthDay: defines end time of season "--mm-dd"
timeZone

Function:

available functions:
max
min
sum
count
mean
median

standardDeviation
percentileExceedence
percentileNonExceedence
quartile
skewness
kurtosis
variance
rsquared
rootMeanSquareError
isBlockFunction: if true, the statistical parameters are calculated for each time window defined by the time step of the output time
series, e.g. time step year leads to yearly statistical parameters. If false and the output time series time step is set to nonequidistant, the
statistical parameters are calculated for the relative view period (one value for the whole period) or for the individual season if applied.
inputVariableId
outputVariableId
value: if function percentileExceedence or percentileNonExceedence is chosen, the desired percentile has to be defined, e.g. 75-th
percentile => value="75"
ignoreMissing: if true, all missings of the input time series are not taken into account in the statistical calculation.
seasonal: this option is only relevant when using seasons. If true (default), then one result value per season per year is returned. If false,
then for each season only one (combined) result value is returned. For example when seasonal is false, the month January is specified
as a season, the input time series contains data for a period of ten years and the function max is specified, then the result will be the
maximum of all values in January in all ten years. Note: if a specific season (e.g. January 2006) is not fully contained within the input time
series, then this specific season is not used in the calculations. For example if the month January is specified as a season and the input
time series contains only data from 15 January 2006 to 1 March 2008, then only January 2007 and January 2008 will be used in the
calculations. In this case January 2006 will not be used in the calculations.
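As an illustration, a statistics definition computing the maximum per winter season might look like the sketch below; the element nesting, the season wrapper and the ids are indicative only and the exact structure follows the schema:

<statistics>
<season>
<startMonthDay>--12-01</startMonthDay>
<endMonthDay>--02-28</endMonthDay>
</season>
<function>max</function>
<isBlockFunction>false</isBlockFunction>
<ignoreMissing>true</ignoreMissing>
<inputVariableId>QObserved</inputVariableId>
<outputVariableId>QWinterMax</outputVariableId>
</statistics>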

ArithmeticFunction & hydroMeteoFunction

Through definition of an arithmetic function, a user defined equation can be applied in transforming a set of input data to a set of output data. Any
number of inputs may be defined, and used in the user defined function. Each input variable is identified by its Id, as this is used configuring the
function. The function is written using general mathematical operators. A function parser is used in evaluating the functions (per time step) and
returning results. These are again assigned to variables which can be linked to output time series through the variableId.

Rather than use a userDefinedFunction, a special function can also be selected from a list of predefined hydroMeteoFunctions. When selected
this will pose requirements on other settings.

Transformations may be applied in segments, with different functions or different parameters used for each segment. A segment is defined as
being valid for a range of values, identified in one of the input variables (see example below).

Figure 59 Example of applying segments to a time series

Figure 60 Elements of the Arithmetic section of the transformation module configuration

segments

Root element for defining segments. When used this must include the input variable Id used to determine segments as an attribute.
Attributes;

limitVariableId : Id of input variable used to test against segment limits.

segment

Root element for definition of a segment. At least one segment must be included.

limitLower

Lower limit of the segment. Function defined will be applied at a given time step only if value at that time step in the variable defined as
limitVariable is above or equal to this value.

limitUpper

Upper limit of the segment. Function defined will be applied at a given time step only if value at that time step in the variable defined as
limitVariable is below this value (below or equal only for the highest segment).

functionType

Element used only when defining a predefined hydroMeteoFunction. Depending on selected function, specific requirements will hold for defining
input variables and parameters. If a special function is selected then the user defined function element is not defined; Enumeration of available
options is (the most important are discussed below);

simpleratingcurve ; for applying a simple power law rating curve.


weigthtedaverage : special function for calculating weighted average of inputs. When a value in one of the inputs is missing, the
remaining inputs will be used and the weights rescaled to unity.
penman: for calculating evaporation using Penman
penmannortheast: specific implementation of Penman formula
qhrelationtable : allows application of a rating curve using a table.
degreemanipulation
accumulation: this calculates a moving sum. For this the window needs to be configured. For a given output time the output value equals
the sum of the input values within the period (currentOutputTime - window, currentOutputTime). The start of the period is exclusive and
the end of the period is inclusive.

userDefinedFunction

Optional specification of a user defined function to be evaluated using the function parser. Only the function need be defined, without the equality
sign. The function is defined as a string and may contain Id's of inputSeries, names of variables and constants defined, and mathematical
operators
Operators offered

scalar series: +, -, /, *, ^, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh, log, ln, exp, sqrt, abs, pow, min, max,
minSkipMissings, maxSkipMissings, sumSkipMissings, average
operators for conversion of grid to scalar series: spatialMin, spatialMax, spatialSum, spatialSumSkipMissings, spatialAverage
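As an illustration, a user defined function combining two input variables with a coefficient might be written as in the sketch below; the variable ids, the coefficient value and the exact placement of the userDefinedFunction element within the segment are indicative only:

<segments limitVariableId="X1">
<segment>
<coefficient coefficientId="a" value="0.5"/>
<userDefinedFunction>a * (X1 + X2)</userDefinedFunction>
<outputVariableId>Y1</outputVariableId>
</segment>
</segments>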

constant

Allows definition of a constant to be used in the function.

coefficient

Optional element to allow coefficients for use in the function to be defined. These coefficients are allocated an Id for later use in the function
defined. For user defined functions specific coefficients need to be defined. Multiple entries may be defined.
Attributes;

coefficientId : Id of the coefficient defined. This can be used in the function.


coefficientType : identification of the coefficient. Applied in rule based configurations.
value: value of the coefficient

tableColumnData

Definition of a table to use for transforming input variable to output variables.


Attributes;

nrOfColumns: number of columns in table (should equal 2).


variableIdCol1 Input variable associated with first column
variableIdCol2 Output variable associated with second column

tableColumnData:data

Element containing data for each row in the table


Attributes;

col1: value for column 1


col2: value for column 2

outputVariable

Id of the output variable from the function. This may be saved to the database by associating the Id to an outputVariable.

flag

Optional element to force saving the result data for the segment with a given flag. This may be used for example to force data from a segment as
doubtful. Enumeration is either "unreliable" or "doubtful". if data is reliable the element should not be included.

Stage-Discharge and Discharge-Stage transformation

Stage discharge transformations can be defined using the simpleratingcurve option of the hydroMeteoFunctions. To apply this certain properties
must be defined in each segment.

For stage-discharge transformation the requirements are;

Coefficient values for coefficientId's "a", "b" and "c" must be defined.
Rating curve formula is Q = a * (H+b) ^c
Input variable Id must be "H"
Output variable Id must be "Q".
limitVariableId must be "H".

Example:
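A minimal sketch of such a configuration is given below; the coefficient values are illustrative and the exact placement of the functionType element relative to the segments follows the schema:

<hydroMeteoFunction>
<functionType>simpleratingcurve</functionType>
<segments limitVariableId="H">
<segment>
<coefficient coefficientId="a" value="15.3"/>
<coefficient coefficientId="b" value="0.2"/>
<coefficient coefficientId="c" value="1.8"/>
<outputVariableId>Q</outputVariableId>
</segment>
</segments>
</hydroMeteoFunction>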

For discharge-stage transformation the requirements are;

Coefficient values for coefficientId's "a", "b" and "c" must be defined.
Input variable Id must be "Q"
Output variable Id must be "H".
limitVariableId must be "Q".

Example:

Establishing catchment average precipitation

Catchment average rainfall can be determined by weighting input precipitation time series. The weightedavarege option of the
hydroMeteoFunctions can be applied to include the option of recalculation of weights if one of the input locations is missing. To apply this certain
properties must be defined in each segment.

For establishing catchment average precipitation the requirements are;

functionType must be set to weightedavarege


Weights are given as coefficient values with coefficientId's "a", "b" and "c" etc.

Additional coefficients may be defined to allow for altitude correction.

Example:
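A minimal sketch of such a configuration is given below; the input variable ids P1, P2 and P3 and the weight values are illustrative, and the exact placement of the functionType element follows the schema:

<hydroMeteoFunction>
<functionType>weightedavarege</functionType>
<segments limitVariableId="P1">
<segment>
<coefficient coefficientId="a" value="0.40"/>
<coefficient coefficientId="b" value="0.35"/>
<coefficient coefficientId="c" value="0.25"/>
<outputVariableId>PAverage</outputVariableId>
</segment>
</segments>
</hydroMeteoFunction>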

Aggregation, disaggregation and non-equidistant to equidistant

This set of transformations allows temporal aggregation and disaggregation of time series. The time steps defined in the input variable and the
output variable determine how the time steps are migrated. The configuration need only define the rule followed in aggregation/disaggregation.
Aggregation and disaggregation can only be used to transform between equidistant time steps. A nonequidistant series can be transformed to an
equidistant series using the appropriate element (see above).

Aggregation rules;

instantaneous: apply instantaneous resampling, i.e. the value at a cardinal time step in the output series is the same as in the input time series at that time step.
accumulative: value in output time series is the accumulated sum of values of time steps in input time series (use for example when aggregating rainfall in mm).
mean: value in output time series is the mean of values of time steps in input time series (use for example when aggregating rain rate in mm/hr).
constant

Disaggregation rules;

instantaneous: apply linear interpolation, i.e. the value at a cardinal time step in the output series is the same as in the input time series at that time step. Values in between are interpolated.
accumulative: value in output time series is derived as an equal fraction of the value in the input series. The fraction is determined using the ratio of time steps.
disaggregateusingweights: value in output time series is a weighted fraction of the input value. Weights are defined as coefficients. These are sub-elements of the disaggregation element. The number of coefficients defined should be equal to the disaggregation ratio (i.e. 24 when disaggregating from day to hour). The coefficient Id's should be numbered 1 to n.
constant: value in output time series at intermediate time steps is equal to the last available value in the input time series.

Rules for mapping non-equidistant time series to equidistant time series;

zero: value in output time series is zero if time values do not coincide
missing: value in output time series is missing if time values do not coincide
linearinterpolated: value in output time series is interpolated linearly between neighbouring values in input time series
equaltolast: value in output time series is equal to last available value in input time series.
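As an illustration, hourly rainfall could be aggregated to daily totals with the accumulative rule as in the sketch below; the surrounding input and output variable definitions (with hourly and daily time steps respectively) are omitted and the attribute usage follows the aggregate element described earlier:

<aggregate rule="accumulative"/>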

Rule based transformations

The set of rule based transformations is a library of specific data transformation functions. Configuration of the rule based transformation is the
same as in the Arithmetic transformation. However, each rule may have specific requirements on the elements that need to be defined. Many
parameters that affect the transformation will need to be defined as a coefficient, using the appropriate coefficientType definition.

The rule based transformations can be grouped into four main sections;

Selection of peak or low values from a time series.


Resampling a equidistant time series set using time values specified in a non-equidistant time series set.
Data hierarchy
Various transformations.

Selection of peak or low flow values

Selection of peaks and lows

Set of rules to allow selection of peaks and lows from an input time series.

Enumerations in the rule attribute of the ruleBasedTransformation element;

selectpeakvalues
selectlowvalues
selectpeakvalueswithincertaingap
selectlowvalueswithincertaingap
The first two enumerations will select all peaks or lows in the time series. The second two will select peaks only if there is a defined gap
in time between peaks. If not, they are considered to be dependent and only the highest peak of the dependent set will be returned.

Requirements for definitions of peak selections using gaps to define independence are;

A coefficientId "a" must be defined. The coefficientType must be set to "gaplengthinsec". The value attribute defines the length of the
minimum gap in seconds.
A coefficientId "b" must be defined with coefficientType "peaksbeforetimezero". The value attribute defines the maximum number of
peaks to consider before T0.
A coefficientId "c" must be defined with coefficientType "peaksaftertimezero". The value attribute defines the maximum number of peaks
to consider before T0.
A coefficientId "d" must be defined with coefficientType "totalnumberofpeaks". The value must be set to zero.

The following two coefficients are optional:

A coefficientId "e" with coefficientType "skipjustbeforetimezero" indicates how many peaks to skip just before T0.
A coefficientId "f" with coefficientType "skipjustaftertimezero" indicates how many peaks to skip just after T0.

They default to 0.

Example:

<ruleBasedTransformation rule="selectpeakvalueswithincertaingap">
<segments limitVariableId="X1">
<segment>
<coefficient coefficientId="a" coefficientType="gaplengthinsec" value="2700"/>
<coefficient coefficientId="b" coefficientType="peaksbeforetimezero" value="3"/>
<coefficient coefficientId="c" coefficientType="peaksaftertimezero" value="4"/>
<coefficient coefficientId="d" coefficientType="totalnumberofpeaks" value="0"/>
<coefficient coefficientId="e" coefficientType="skipjustbeforetimezero" value="2"/>
<coefficient coefficientId="f" coefficientType="skipjustaftertimezero" value="2"/>
<outputVariableId>Y1</outputVariableId>
</segment>
</segments>
</ruleBasedTransformation>

In this example:

The time between two local maxima (peaks) should be at least 2700 seconds or 45 minutes.
Only the last three peaks before T0 and the first four peaks after T0 are considered.
The last two peaks just before T0 are skipped, leaving only the third last one.
Similarly the first two peaks just after T0 are skipped, leaving the third and fourth ones.

Sampling values from equidistant time series

This section of the rule based transformation can be applied to sample items from an equidistant time series at the time values in a
non-equidistant time series. This may be required when applying transformations to a non-equidistant time series. The values to add will first need
to be resampled to the right time value. An example is when wind and wave information is required at the time of the tidal peaks for entry in a
lookup table.

Enumerations in the rule attribute of the ruleBasedTransformation element;

equitononequidistant
equitononequidistantforinstantaneousseries
equitononequidistantforaccumulativeseries
The first two elements are equivalent. The last will consider accumulations of the input variable up to the time value sampled.

Requirements for definitions of resampling equidistant time series are;

The limitVariableId attribute of the segments element must be the non-equidistant time series which determines the time values at which
the equidistant series is to be sampled.
The userDefinedFunction must contain the equidistant time series to be sampled

The outputVariableId must resolve to a non-equidistant time series.
Example:
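A minimal sketch of such a configuration is given below; the variable ids are illustrative (TidalPeaks is the non-equidistant series that determines the sampling times and WindSpeed the equidistant series to be sampled):

<ruleBasedTransformation rule="equitononequidistant">
<segments limitVariableId="TidalPeaks">
<segment>
<userDefinedFunction>WindSpeed</userDefinedFunction>
<outputVariableId>WindAtPeaks</outputVariableId>
</segment>
</segments>
</ruleBasedTransformation>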

Data Hierarchy

This is a simple method to merge overlapping equidistant time series in a single equidistant series. Gaps in foremost (first) series will be filled with
data of second series if a valid value is available at the current time step, otherwise the gap is filled with data from the third series and so on until
no more time series are available. Only missing data values and unreliable values are filled. Doubtful values remain in the result series as
doubtful.

Figure 61 Schematic example of merging series using data hierarchy.

In the example above Series 1 is the most important time series, Series 2 has a lower hierarchy and Series 3 has the lowest hierarchy. The resulting
time series has values from all 3 series as shown in the figure above.

Data hierarchy poses no specific requirements to variables defined. Only the Id of the output variable is of importance.

Creating time series from typical profiles

Typical profiles can be defined in the inputVariable as described above. To use a typical profile it must first be mapped to a dynamic time series.
This can then be retrieved in a later configuration of a module for use.

Enumerations in the rule attribute of the ruleBasedTransformation element;

typicalprofiletotimeseries:
datatotimeseries

The first type of mapping is used when the typical profile has a concept of date/time (e.g. must be mapped to specific dates or time values). The
second is used when only a series of data is given. The time series is then filled with the first data element given as the first time step of the
relative view period to be created.

Typical profile mapping poses no specific requirements to variables defined. Only the Id of the output variable is of importance.

outputVariable

Definition of the output variables to be written following transformation. See the inputVariable for the attributes and structure. The output variable
can only be a TimeSeriesSet (typical profiles are only used as inputs). The OutputVariable is assigned an ID. This ID must be defined as the
result of the transformation.

03 Import Module

Introduction
The import module allows data from external sources to be imported into DELFT-FEWS. Data may be provided to FEWS in a variety of formats.
The approach taken in the import module is that a class is defined for each of the file formats that can be imported.

Data is imported from specified directories. An attempt is made to import all files in the directories and subdirectories configured. If a file
conforms to the expected format then the data will be imported. If the file does not conform to the expected format, it will not be imported, but will
be moved to a configurable directory with failed import files.

Note that Delft-FEWS can only import the specific data formats that are listed here. Delft-FEWS assumes data types for a
configured import to remain the same over time as Delft-FEWS is usually part of an operational system. This means that it will
not have the flexibility in importing data that for example programs like Matlab and Excel have. Instead, for each new filetype a
dedicated import must be written. However, the list of supported filetypes is ever increasing and adding new imports is fairly
simple.

You can select the files to be imported via the directory and its subdirectories where the files live and by means of a file mask,
which is then used to match the file names against.

Two main groups of import can be defined;

Importing data in the XML format defined by the Environment Agency, UK.
Importing of various data formats (including ASCII formats, png files- e.g. meteosat images- grids and GRIB files).

On importing data, the approach to be used for converting flags, units, locations and parameters can be defined. These conversions are
identified by referring to the appropriate configuration files (see Regional Configuration). When data is imported to an equidistant time series, a
time tolerance may also be defined. If the time recorded is within this tolerance it will be snapped to the cardinal time step in the imported series.

When available as configuration on the file system, the name of the XML file for configuring an instance of the import module called for example
ImportRTS may be:

ImportRTS 1.00 [Link]

ImportRTS File name for the ImportRTS configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Import Module Configuration


Available data types
Custom time series import formats using java
Import data using OPeNDAP
Import Module configuration options

See also How to Import data

There are similarities between the import module and the General Adapter module as both allow import of data into
DELFT-FEWS from an external database. The philosophy of the two modules is, however, different. In the import module there
is no prior expectance on the availability of data to be imported. Data that is available is imported and the module will not fail if
insufficient data is available. In the General Adapter there are stricter controls on the availability of data, and errors may occur
if insufficient data is available.

Note that two main classes are defined for the import module. One for the specific EA XML time series import and one for the
general time series import (including GRIB imports).

These are defined in the moduleDescriptors in SystemConfiguration. The first is normally referred to as "EAImport", the
second as "TimeSeriesImport"

Available data types

Documented Imports

ArcInfoAscii — Imports time series data (grids) ArcInfoAscii files.


ArcWat
Bayern — Imports ASCII type time series data (level forecasts) from Bayern, location Raunheim am Main.
BIL Import
BUFR
CERF — Imports ASCII type GRID data in Comma Separated Value (CSV) format from CERF model.
CSV
Database import
Delft3D-Flow
Delft-Fews Published Interface timeseries Format (PI) Import
DINO
DINO Service
DIVER MON
EasyQ
FewsDatabase Import
generalCsv
GermanSnow — imports grid time series from the ASCII file produced by German SNOW model
Gray Scale Image
HCS — Imports ASCII type time series data from the Australian Bureau of Meteorology
hdf4 — Imports time series data stored in the HDF4 file format.
HYMOS
HymosAscii
IFKIS — Imports time series data from tabular ASCII formatted file
IJGKlepstanden — Import for IJsselmeer Klepstanden
IP1 — Imports ASCII type time series data from CSV formatted files.
Keller IDC — Imports time series data stored in Keller IDC files.
KNMI CSV
KNMI EPS
KNMI HDF5
KNMI IRIS
KNMI SYNOP
Landsat-HDF5
LMW — Imports time series directly from the Landelijk Meetnet Water database
LUBW — Imports ASCII type time series data from the Landes Umwelt Baden Wurtenberg Forecasting center in Germany
Matroos NetCDF — Imports NetCDF type gridded time series data from MATROOS Forecast databases
McIdasArea
MM3P
Msw
NETCDF-CF_GRID — Imports grid data from NetCDF-CF file formats
NETCDF-CF_PROFILE — Imports profile data from NetCDF-CF file formats
NETCDF-CF_TIMESERIES — Imports timeseries data from NetCDF-CF file formats
NetcdfGridDataset
NOOS — Imports NOOS type time series data from MATROOS Forecast databases
NTUQUARTER Import
NTURAIN Import
Obserview
Pegelonline — Imports time series provided by Pegelonline
Radolan — Imports radar data from DWD-Hydrometeorologie.
Singapore OMS Lake Diagnostic System files
SSE
SWE
TMX
Wiski
WQCSV — Imports csv type time series data from some Dutch Water Quality Laboratories
WSCC csv — Imports csv type time series data from the Woodleigh System Control Centre in Singapore

Available Imports

Please note that new types are added regularly. Most of the imports are custom made for specific file formats.

Type String Description Data Type

AHD scalar

ArcInfoAscii ArcInfo/Arcview Ascii grid format grid

ArcWat ArcWat DBF scalar

Bayern Level forecasts (ASCII) from Raunheim am Main scalar

BC2000 scalar

BFG scalar

BIL grid

BUFR Meteorological data grid

CERF Import CERF Ascii grid in CSV format grid

CSV Simple CSV format scalar

COSMO7_COR grid

database generic database import scalar/grid?

Delft3D-Flow Import D3D-Flow point-based results, stored in NEFIS structures scalar

DINO TNO DINO ASCII files scalar

DINO Service SOAP import for GrondWaterService of TNO scalar

DiverMon Diver MON ASCII files scalar

DSS scalar

DWD-LM grid

DWD-LM2 grid

DWD-GME grid

EA Environment Agency format scalar

EasyQ Easy Q format scalar

EKSW scalar

EKSW2005 scalar

EVN scalar

Era15 scalar

FOC SEPA format scalar

FewsDatabase scalar/grid

generalCsv configurable csv import scalar

GermanSnow grid

GHD scalar

GrayscaleImage grid

GRIB GRIB format used by meteorological institutes. grid


External parameter in IdMap should be the parameter number of the grib1-GDS section

(octet 9) for grib1 and the Parameter name (long name, replace ' ' by '_') for grib2

GRIB2 Newer version of the GRIB format used by meteorological institutes. grid
Imports grib, grib2 and netcdf
External parameter in IdMap should be the parameter number of the grib1-GDS section

(octet 9) for grib1 and the Parameter name (long name, replace ' ' by '_') for grib2

GRIBBASIC grid

GRIBCOSMO GRIB reader to handle ensembles where each member is in a different file. Do not use a filePattern identifier. grid

HCS Exchange format used by the Australian Bureau of Meteorology scalar

hdf4 (not yet available on Linux) grid

hdfSoilMoisture grid

Hims scalar

Hydris scalar

HymosAscii (OTT) scalar

HYMOS HYMOS Transfer database scalar

IFKIS scalar

IJGKlepstanden Import for IJsselmeer Klepstanden scalar

IP1 CSV type import for FEWS-Basque scalar

Keller Keller IDC scalar

KNMI scalar

KNMICSV KNMI CSV files, one column with data scalar

KNMIEPS KNMI Ensemble files scalar

KNMI-HDF5 KNMI radar files (not yet available on Linux) grid

KNMIIRIS KNMI IRIS files (daily values) scalar

KNMISYNOPS KNMI SYNOPS files (hourly values) scalar

Landsat-HDF5 Landsat data files grid

LMW Direct connection to "Landelijk Meetnet Water" scalar

LUBW scalar

Matroos NetCDF NetCDF files from MATROOS database grid

McIdasArea grid

MeteoFranceAscii scalar

MM3P Metasphere Import Files (CSV style files) scalar

Modis Images from the MODIS satellites grid

Mosaic scalar

Msw MSW (mfps) csv files, with observed levels and flows in Rijn and Maas scalar

Mosm scalar

NETCDF-CF_GRID import module to import grid data from NetCDF files grid

NETCDF-CF_PROFILE import module to import longitudinal profile data from NetCDF files profile

NETCDF-CF_TIMESERIES import module to import timeseries data from NetCDF files scalar

NetcdfGridDataset import module to import grid data from NetCDF, Grib1 and Grib2 files grid

Nimrod grid

NimrodMultipleDir grid

NOOS NOOS data stored in the Matroos database scalar

NTURAIN Import NTU datalogger csv like files scalar

NTUQUARTER Import NTU datalogger csv like files, multiple columns scalar

Pegelonline Pegelonline ASCII time series files scalar

PI Delft-Fews Published Interface scalar

PMDSynoptic scalar

PMDTelemetric scalar

Obserview Obserview ASCII format scalar

OTT HymosAscii scalar

Radolan Radar data from DWD-Hydrometeorologie grid

RijnlandRTD scalar

SSE Scottish & Southern Electric ASCII files scalar

SHD scalar

SHEF scalar

SingaporeLDS Singapore OMS Lake Diagnostic System files scalar

SMA scalar

SMAecmwf scalar

SwissRadar grid

Synop scalar

SWE Imports timeseries from a Sensor Web Enabled service scalar

Tmx TMX Access MDB database files scalar

TmxCSV TMX CSV export files scalar

TTRR scalar

UVF Universelles Variables Format scalar

WapdaTelemetric scalar

Wiski Wiski ZRXP format scalar

WSCCCsv csv type files. Location per column scalar

Wsd scalar

WQCSV Water Quality csv file sample

HCS

Contents

Contents
Overview
Functionality and limitations
Configuring the Import
Flag Conversion
Unit conversions
The file format
Performance
Java Source code

Overview

Imports time series data in a tabular ASCII format from the Australian Bureau of Meteorology (HCS). The files consist of a set of header lines and
then lines with a fixed number of fields. The fields are separated by a comma and the order is fixed. Multiple locations and parameters can be put
in a single file.

Functionality and limitations

The import can read scalar data in any timestep from the HCS files.
The header information in the HCS file is mostly ignored but the timezone information (line 7) is used.
The unit is set during import and can be used to convert data on import using the Unit Conversion functionality.
Comments from the HCS file are also imported.
The HCS data quality flags are set on importing (a flag conversion has to be set up to actually use them, see below).

Configuring the Import

The reader is named HCS which should be configured in the general section of the import. An example configuration is shown below:

<?xml version="1.0" encoding="UTF-8"?>


<!-- edited with XMLSpy v2007 sp2 ([Link] by WL | Delft Hydraulics (WL | Delft
Hydraulics) -->
<timeSeriesImportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<import>
<general>
<importType>HCS</importType>
<folder>$IMPORT_FOLDER$/hcs</folder>
<backupFolder>$BACKUP_FOLDER$/hcs</backupFolder>
<idMapId>IdImportHcs</idMapId>
<flagConversionsId>HcsFlagConversions</flagConversionsId>
<importTimeZone>
<timeZoneOffset>+10:00</timeZoneOffset>
</importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportHcs</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Gauges_P.obs</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportHcs</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Gauges_H.obs</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>

An example IdMapping file (that maps the location and parameter Ids) is shown below:

<?xml version="1.0" encoding="UTF-8"?>


<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link] [Link]
<parameter internal="[Link]" external="WL"/>
<location internal="44198-01-01" external="44198-01-01"/>
</idMap>

Flag Conversion

Flag conversions can be used to convert data quality flag from external sources to Delft-FEWS flags. The BOM HCS format defines the following
quality flags:

Quality Code  Description                                                              Example

1             As observed.
2             As observed with normalised time window.                                 e.g. 9AM or rounded to nearest hour.
3             Derived from other observation/s.
4             Interpolated from other observation event/s.
5             As observed and validated (Quality Controlled).
6             Void (Bad) observation.
7             Observation where canister reset or calibration change has occurred.

To use the flags in Delft-FEWS a flag conversion file has to be set up. A working example is attached to this page, and a minimal sketch is
shown after the table below. The table below summarizes the translation used:

HCS code | HCS Description | Delft-FEWS code | Delft-FEWS Name | Delft-FEWS Description
1 | As observed | 0 | ORIGINAL_RELIABLE | Observed value retrieved from external data source. Value is valid, marked as original reliable as validation is yet to be done.
2 | As observed with normalised time window | 0 | ORIGINAL_RELIABLE | Observed value retrieved from external data source. Value is valid, marked as original reliable as validation is yet to be done.
3 | Derived from other observations | 0 | CORRECTED_RELIABLE | The original value was removed and corrected. Correction may be through interpolation or manual editing.
4 | Interpolated from other observation events | 0 | CORRECTED_RELIABLE | The original value was removed and corrected. Correction may be through interpolation or manual editing.
5 | As observed and validated (quality controlled) | 0 | CORRECTED_RELIABLE | The original value was removed and corrected. Correction may be through interpolation or manual editing.
6 | Void (bad) observation | 6 | Missing_Unreliable | Observed value retrieved from external data source. Value is invalid due to validation limits set. Value is removed.
7 | Observation where canister reset or calibration change has occurred | 0 | ORIGINAL_DOUBTFUL | Observed value retrieved from external data source. Value is valid, but marked as suspect due to soft validation limits being exceeded.
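
As an illustration, a minimal sketch of what the HcsFlagConversions file referenced in the import configuration above could look like is shown below. The element names follow the general Delft-FEWS flag conversions layout and should be checked against the flagConversions schema and the attached working example; only the HCS quality codes and the Delft-FEWS flag names come from the table above.

<flagConversions version="1.1">
	<flagConversion>
		<inputFlag>
			<name>As observed</name>
			<value>1</value>
		</inputFlag>
		<outputFlag>
			<equalTo>ORIGINAL_RELIABLE</equalTo>
		</outputFlag>
	</flagConversion>
	<flagConversion>
		<inputFlag>
			<name>Observation where canister reset or calibration change has occurred</name>
			<value>7</value>
		</inputFlag>
		<outputFlag>
			<equalTo>ORIGINAL_DOUBTFUL</equalTo>
		</outputFlag>
	</flagConversion>
	<!-- one flagConversion entry per HCS quality code listed in the table above -->
</flagConversions>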

Unit conversions

On importing, the units from the HCS file are set. These can be used in the unit conversion functionality to convert the data on import.

The file format

The file format is described in the BOM document "External Agency Hydrological data Transfer - Client Requirements, Version 2.20".

An example of a file is shown below:

# HEADER: Agency Id: BoM


# HEADER: File Generation Date: 2008-08-01T[Link]z
# HEADER: File Format: BOM-HCS
# HEADER: File Format Version: 2.0
# HEADER: Generated by (system): TimeStudio
# HEADER: Number of Records: 11
# HEADER: Local ObsTime Offset: 0
# HEADER: Data Fields: IndexNo, SensorType, SensorDataType, SiteIdType, SiteId, ObservationTimestamp,
RealValue, Unit, SensorTypeParam1, SensorTypeParam2, Quality, Comment
1,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.150000,"metres","LGH",,1,""
2,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.200000,"metres","LGH",,1,""
3,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.150000,"metres","LGH",,1,""
4,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.200000,"metres","LGH",,1,""
5,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.150000,"metres","LGH",,1,""
6,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.200000,"metres","LGH",,1,""
7,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.150000,"metres","LGH",,1,""
8,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.200000,"metres","LGH",,1,""
9,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.150000,"metres","LGH",,1,""
10,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.200000,"metres","LGH",,1,""
11,"WL",1,"SLSR","44198-01-01","2008-02-01T[Link]z",1.150000,"metres","LGH",,1,""

Performance

On a 2.7 GHz dual-core laptop the import is able to import 188 MB of HCS files (1,899,984 lines) in 24 seconds, including basic validation of the data.

Java Source code

[Link]

[Link]

{
private static final Logger log = [Link]([Link]);
private TimeZone defaultTimeZone = null;
private LineReader reader = null;
private TimeSeriesContentHandler handler = null;

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler handler)
throws IOException {
[Link] = reader;
[Link] = handler;

[Link]("");
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
[Link](true);

parseHeader();

for (String[] buffer = new String[12]; [Link](',', '\"', buffer) != -1;) {


[Link](buffer[1]);
[Link](buffer[4]);
String timeText = buffer[5];
if ([Link](timeText) == null) continue;
TimeZone timeZone = defaultTimeZone;
if ([Link](timeText, "Z")) {
timeZone = [Link];
timeText = [Link](0, [Link]() - 1);
}
[Link](timeZone, "yyyy-MM-dd'T'HH:mm:ss", timeText);
[Link]('.', buffer[6]);
[Link](buffer[7]);
[Link](parseTimeStep(buffer, timeZone));
[Link](buffer[10]);
[Link](buffer[11]);
[Link](header);
[Link]();
}
}

private TimeStep parseTimeStep(String[] buffer, TimeZone timeZone) {


if ([Link](buffer[9]) == null) return null;
if ([Link](buffer[2], "1")) return null;
try {
return [Link]([Link](buffer[9])
* TimeUnit.SECOND_MILLIS, timeZone);
} catch (NumberFormatException e) {
[Link]("Can not parse time step " + buffer[9] + " at " +
[Link]());
return null;
}
}

private void parseHeader() throws IOException {


defaultTimeZone = [Link]();

[Link](200);
for (String line; (line = [Link]()) != null; [Link](200)) {
line = [Link]();
if ([Link](0) != '#') {
// this is not a header row, undo read line
[Link]();
break;
}

// Supported formats:
// # HEADER: Local ObsTime Offset: 1:30
// # HEADER: Local ObsTime Offset: +01:30
// # HEADER: Local ObsTime Offset: 9
// # HEADER: Local ObsTime Offset: -01:00
String[] elements = [Link](line, ':');
if ([Link] < 3) continue;
[Link](elements);
if (!elements[1].equalsIgnoreCase("Local ObsTime Offset")) continue;

String timeZone = [Link] >= 4 ? elements[2] + ':' + elements[3] : elements[2] + ':' + "00";
// (The rest of this method is garbled in the source. It normalises the offset text, parses it
// into defaultTimeZone and, when parsing fails with a ParseException, logs "Can not parse time
// zone ..." and falls back to GMT.)
}
}
}

[Link]

, VirtualInputDirConsumer {
private static final Logger log = [Link]([Link]);

private static final int BUFFER_SIZE = 2048;

@Override
public void setVirtualInputDir(VirtualInputDir virtualInputDir) {
[Link] = virtualInputDir;
}

private enum HeaderElement {


type(F.R), locationId(F.R),
parameterId(F.R), qualifierId(F.M), ensembleId, ensembleMemberIndex,
timeStep(F.R | F.A), startDate(F.R | F.A), endDate(F.R | F.A), forecastDate(F.A),
missVal, longName, stationName, units,
sourceOrganisation, sourceSystem, fileDescription,
creationDate, creationTime, region, thresholds;

interface F {
int A = 1 << 0; // attributes
int R = 1 << 1; // required;
int M = 1 << 2; // multiple;
}

private final int flags;

HeaderElement() {
[Link] = 0;
}

HeaderElement(int flags) {
[Link] = flags;
}

public boolean isRequired() {
return (flags & F.R) != 0;
}

public boolean hasAttributes() {


return (flags & F.A) != 0;
}

public boolean isMultipleAllowed() {


return (flags & F.M) != 0;
}
}

// fastDateFormat is used to keep track of last time zone and lenient


private FastDateFormat fastDateFormat = [Link]("yyyy-MM-dd", "HH:mm:ss",
[Link], [Link], null);

private boolean invalidHeaderTimeDetected = false;

private HeaderElement currentHeaderElement = null;

private static final HeaderElement[] HEADER_ELEMENTS = [Link]();

private PiTimeSeriesHeader header = new PiTimeSeriesHeader();


private List<String> qualfiers = new ArrayList<String>();
private long timeStepMillis = 0;
private TimeStep timeStep = null;
private long startTime = Long.MIN_VALUE;
private long endTime = Long.MIN_VALUE;
private float missingValue = [Link];
private String creationDateText = null;
private String creationTimeText = null;

private TimeSeriesContentHandler timeSeriesContentHandler = null;

/**
* For performance reasons the pi time series format allows the values to be stored in
* a separate bin file instead of embedded in the xml file.
* The bin file should have the same name as the xml file except that the extension equals bin.
* In this case all time series should be equidistant.
*/
private VirtualInputDir virtualInputDir = [Link];
private InputStream binaryInputStream = null;
private byte[] byteBuffer = null;
private float[] floatBuffer = null;
private int bufferPos = 0;
private int bufferCount = 0;

private XMLStreamReader reader = null;


private String virtualFileName = null;

private static boolean lenient = false;

/**
* For backwards compatibility. Earlier versions of the PiTimeSeriesParser were tolerant
* about the date/time format and case insensitive for header element names.
* This parser should not accept files that are not valid according to pi_timeseries.xsd.
* When old adapters are not working you can use UseLenientPiTimeSeriesParser temporarily until
* the adapter is fixed.
*
* @param lenient
*/

public static void setLenient(boolean lenient) {
[Link] = lenient;
}

public PiTimeSeriesParser() {
[Link](lenient);
}

@Override
public void parse(XMLStreamReader reader, String virtualFileName, TimeSeriesContentHandler
timeSeriesContentHandler) throws Exception {
[Link] = reader;
[Link] = virtualFileName;
[Link] = timeSeriesContentHandler;

String virtualBinFileName = [Link](virtualFileName, "bin");

// time zone can be overruled by one or more time zone elements in the pi file
[Link]([Link]());

if (![Link](virtualBinFileName)) {
parse();
return;
}

binaryInputStream = [Link](virtualBinFileName);
try {
if (byteBuffer == null) {
byteBuffer = new byte[BUFFER_SIZE * NumberType.FLOAT_SIZE];
floatBuffer = new float[BUFFER_SIZE];
}
parse();
boolean eof = bufferPos == bufferCount && [Link]() == -1;
if (!eof)
throw new IOException("More values available in bin file than expected based on
time step and start and end time\n" + [Link](virtualFileName, "bin"
));

} finally {
bufferPos = 0;
bufferCount = 0;
[Link]();
binaryInputStream = null;
}
}

private void parse() throws Exception {


[Link](XMLStreamConstants.START_DOCUMENT, null, null);
[Link]();
[Link](XMLStreamConstants.START_ELEMENT, null, "TimeSeries");
[Link]();

while ([Link]() != XMLStreamConstants.END_ELEMENT) {


parseTimeZone();
readTimeSeries();
}

[Link](XMLStreamConstants.END_ELEMENT, null, "TimeSeries");


[Link]();
[Link](XMLStreamConstants.END_DOCUMENT, null, null);
}

private void readTimeSeries() throws Exception {

[Link](XMLStreamConstants.START_ELEMENT, null, "series");
[Link]();
parseHeader();
if (binaryInputStream == null) {
while ([Link]() == XMLStreamConstants.START_ELEMENT &&
[Link]([Link](), "event")) {
parseEvent();
}

if ([Link]() == XMLStreamConstants.START_ELEMENT) {
// skip comment
[Link](XMLStreamConstants.START_ELEMENT, null, "comment");
[Link]();
[Link]();
}
} else {
readValuesFromBinFile();
}
[Link](XMLStreamConstants.END_ELEMENT, null, "series");
[Link]();
}

private void parseHeader() throws Exception {


[Link](XMLStreamConstants.START_ELEMENT, null, "header");
if ([Link]() > 0) {
throw new Exception("Attributes are not allowed for header element ");
}
[Link]();
initHeader();
do {
detectHeaderElement();
parseHeaderElement();
} while ([Link]() != XMLStreamConstants.END_ELEMENT);

if ([Link]() == Long.MIN_VALUE) [Link](startTime);


initiateTimeStep();
[Link](timeStep);
if (![Link]()) [Link]([Link](new
String[[Link]()]));
if (creationDateText != null) {
try {
long creationTime = [Link](creationDateText,
creationTimeText);
[Link](creationTime);
} catch (ParseException e) {
throw new Exception("Can not parse creation date/time " + creationDateText + ' ' +
creationTimeText);
}
}
[Link](header);
if (startTime != Long.MIN_VALUE && endTime != Long.MIN_VALUE) {
[Link](new Period(startTime, endTime));
}

[Link](XMLStreamConstants.END_ELEMENT, null, "header");


[Link]();
}

private void parseEvent() throws Exception {


assert binaryInputStream == null;
[Link](XMLStreamConstants.START_ELEMENT, null, "event");
String timeText = [Link](null, "time");
String dateText = [Link](null, "date");
String valueText = [Link](null, "value");

String flagText = [Link](null, "flag");
String commentText = [Link](null, "comment");

if (timeText == null)
throw new Exception("Attribute time is missing");

if (dateText == null)
throw new Exception("Attribute date is missing");

if (valueText == null)
throw new Exception("Attribute value is missing");

try {
[Link]([Link](dateText, timeText));
} catch (ParseException e) {
throw new Exception("Can not parse " + dateText + ' ' + timeText);
}

if (flagText == null) {
[Link](0);
} else {
try {
[Link]([Link](flagText));
} catch (NumberFormatException e) {
throw new Exception("Flag should be an integer " + flagText);
}
}
[Link](commentText);

try {
float value = [Link](valueText);
// we can not use the automatic missing value detection of the content handler
// because the missing value is different for each time series
if (value == missingValue) {
value = [Link];
} else {

[Link]([Link](valueText, '.'));
}
[Link](value);
[Link]();
} catch (NumberFormatException e) {
throw new Exception("Value should be a float " + valueText);
}
[Link]();
[Link](XMLStreamConstants.END_ELEMENT, null, "event");
[Link]();
}

private long parseTime() throws Exception {


String dateText = [Link](null, "date");
if (dateText == null) {
throw new Exception("Attribute " + currentHeaderElement +
"-date is missing");
}
String timeText = [Link](null, "time");
if (timeText == null) {
throw new Exception("Attribute " + currentHeaderElement +
"-time is missing");
}

long time;
try {
time = [Link](dateText, timeText);

} catch (ParseException e) {
throw new Exception("Not a valid data time for "
+ currentHeaderElement + ' ' + dateText + ' ' + timeText, e);
}

[Link]();
return time;
}

private long parseTimeStep() throws Exception {


String unit = [Link](null, "unit");
if (unit == null) {
throw new Exception("Attribute unit is missing in " + currentHeaderElement);
}

TimeUnit tu = [Link](unit);
if (tu != null) {
String multiplierText = [Link](null, "multiplier");
int multiplier;
if (multiplierText == null) {
multiplier = 1;
} else {
try {
multiplier = [Link](multiplierText);
} catch (NumberFormatException e) {
throw new Exception([Link](e), e);
}

if (multiplier == 0) {
throw new Exception("Multiplier is 0");
}
}

String dividerText = [Link](null, "divider");


int divider;
if (dividerText == null) {
divider = 1;
} else {
try {
divider = [Link](dividerText);
} catch (NumberFormatException e) {
throw new Exception([Link](e), e);
}

if (divider == 0) {
throw new Exception("dividplier is 0");
}
}
[Link]();
return [Link]() * multiplier / divider;
} else {
[Link]();
return 0;
}
}

private void initHeader() {


[Link]();
[Link](virtualFileName);
currentHeaderElement = null;
timeStep = null;
timeStepMillis = 0;
startTime = Long.MIN_VALUE;

endTime = Long.MIN_VALUE;
missingValue = [Link];
creationDateText = null;
creationTimeText = "[Link]";
[Link]();
}

private void readValuesFromBinFile() throws Exception {


TimeStep timeStep = [Link]();
if (![Link]()) {
throw new Exception("Only equidistant time step supported when pi events are stored in
bin file instead of xml");
}

boolean equidistantMillis = [Link]();


long stepMillis = equidistantMillis ? [Link]() : Long.MIN_VALUE;
try {
// (The remainder of this listing is garbled in the source. It contained the loop that reads one
// float value per time step between startTime and endTime from the bin file via fillBuffer(),
// together with the fillBuffer() and parseTimeZone() helpers and the parseHeaderElement()
// switch over the header elements declared above.)
}
}

private void parseThresholds() throws Exception {


ArrayList<String> ids = new ArrayList<String>();

178
ArrayList<String> names = new ArrayList<String>();
ArrayList<String> stringValues = new ArrayList<String>();
do {
if ([Link]() == XMLStreamConstants.START_ELEMENT) {
String id = [Link](null, "id");
String name = [Link](null, "name");
String stringValue = [Link](null, "value");
[Link](id);
[Link](name);
[Link](stringValue);
}
[Link]();
} while (![Link]().equals([Link]()));
float[] values = new float[[Link]()];
for (int i = 0; i < [Link]; i++) {
values[i] = [Link]([Link](i));
}
[Link]([Link](new String[[Link]()]), [Link](new
String[[Link]()]), values);
}

private static float parseMissingValue(String gotString) throws Exception {


try {
return [Link](gotString);
} catch (NumberFormatException e) {
throw new Exception([Link](e), e);
}
}

private static int parseEnsembleMemberIndex(String gotString) throws Exception {


int index = [Link](gotString);
if (index < 0) {
throw new Exception("Negative ensemble member index not allowed " + gotString);
}
return index;
}

private static ParameterType parseType(String gotString) throws Exception {


ParameterType type = [Link](gotString);
if (type == null) {
throw new Exception("Type in header should be instantaneous or accumulative and not "
+ gotString);
}
return type;
}

private void detectHeaderElement() throws Exception {


if ([Link]() != XMLStreamConstants.START_ELEMENT)
throw new Exception("header element expected");

String localName = [Link]();


HeaderElement element;
try {
element = [Link]([Link], localName);
assert element != null; // contract of valueOf
} catch (Exception e) {
throw new Exception("Unknown header element: " + localName);
}

if (currentHeaderElement == element && [Link]()) return;

if (currentHeaderElement != null && [Link]() < [Link]()) {


throw new Exception("Header elements in wrong order: " + localName);
}

if (currentHeaderElement == element) {
throw new Exception("Duplicate header element: " + localName);
}

if ([Link]() > 0 && ![Link]()) {


throw new Exception("Attributes are not allowed for header element " + localName);
}

int nextOrdinal = currentHeaderElement == null ? 0 : [Link]() + 1;

// order is correct and no duplicate so currentHeaderElement can not be the last header element
assert nextOrdinal < HEADER_ELEMENTS.length;
HeaderElement nextHeaderElement = HEADER_ELEMENTS[nextOrdinal];
if ([Link]() && nextHeaderElement != element) {
throw new Exception("Required header item missing: " + nextHeaderElement);
}

currentHeaderElement = element;
}

public TimeZone getTimeZone() {


return [Link]();
}
}
]]>

[Link]

, VirtualInputDirConsumer {
private static final Logger log = [Link]([Link]);

private DefaultTimeSeriesHeader timeSeriesHeader = new DefaultTimeSeriesHeader();


private VirtualInputDir virtualInputDir = null;
private PiMapStacksReader mapStackReader = null;
private LockableContentHandler contentHandler = null;
private String virtualFileName = null;

@Override
public void parse(XMLStreamReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = new PiMapStacksReader(reader, virtualFileName,
[Link](), [Link]());
try {
parse(contentHandler);
} finally {
[Link]();
}
}

private void parse(final TimeSeriesContentHandler contentHandler) throws Exception {


GeoDatum mapStackGeoDatum = [Link]();
final GeoDatum geoDatum = mapStackGeoDatum != null ? mapStackGeoDatum :
[Link]();

[Link] = new LockableContentHandler(contentHandler) {


@Override
public void setNewTimeSeriesHeader(TimeSeriesHeader header) {
// only called for bil files
[Link]([Link]()
+ 1);

[Link](timeSeriesHeader);
}

@Override
public GeoDatum getDefaultGeoDatum() {
return geoDatum;
}
};

[Link](timeSeriesHeader);
[Link](true);

while ([Link]()) {
[Link]([Link]());
[Link]([Link]());
[Link]([Link]());
[Link](-1);
[Link]([Link]());

long[] times = [Link]();

switch ([Link]()) {
case ASCII:
parseAscii(times);
break;

case PCRGRID:
parsePcRaster(times);
break;

case USGS:
parseUsgs(times);
break;
}
}
}

private void parseAscii(long[] times) throws Exception {


String fileNamePattern = [Link]();
[Link](timeSeriesHeader);
EsriAsciiGridParser ascParser = new EsriAsciiGridParser();
for (int i = 0; i < [Link]; i++) {
[Link](times[i]);
String fileName = [Link](fileNamePattern, i, [Link]);
if (![Link](fileName)) {
[Link](fileName + " referenced in " + virtualFileName + " is missing");
[Link](times[i]);
[Link](null);
[Link]();
continue;
}
LineReader ascReader = [Link](fileName);
try {
[Link](ascReader, fileName, contentHandler);
} finally {
[Link]();
}
}
}

private void parsePcRaster(long[] times) throws Exception {


String fileNamePattern = [Link]();

[Link](timeSeriesHeader);
PcRasterParser pcrParser = new PcRasterParser();
for (int i = 0; i < [Link]; i++) {
[Link](times[i]);
String fileName = [Link](fileNamePattern, i, [Link]);
if (![Link](fileName)) {
if ([Link]()) [Link](fileName + " is missing, assume missing value
grid");
[Link](times[i]);
[Link](null);
[Link]();
continue;
}
// todo handle virtual file
File file = new File(fileName);
if (![Link]()) {
File defaultDir = new File(virtualFileName).getParentFile();
file = new File(defaultDir, fileName);
}
[Link](file, [Link]);
}
}

private void parseUsgs(long[] times) throws Exception {


PiBilParser bilParser = new PiBilParser();
[Link](times);
[Link](virtualInputDir);
[Link]([Link]());
BufferedInputStream inputStream =
[Link]([Link]());
try {
[Link](inputStream, [Link](), [Link]);
} finally {
[Link]();
}
}

@Override
public void setVirtualInputDir(VirtualInputDir virtualInputDir) {
[Link] = virtualInputDir;
}
}
]]>

HymosAscii

Overview

Imports time series data from files in Hymos ASCII format with five header lines containing a description of the time series:

The first line contains the keyword "FIXE"
The second line contains the keyword "PAR"
The third line contains the number of time series in the file (= number of value columns)
The fourth line (if the third line is "1") and further lines (if the third line is "2" or more) start with a locationId and then a parameterId
All other lines contain the time (in yyyy mm dd HH MM format) as the first field and the values for each time step in the next fields,
separated by a tab.

Import type

The import type is HymosAscii. There is no particular file extension required.
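
As an illustration, a minimal import module instance for this format could look like the sketch below. Only the importType value HymosAscii is prescribed by this import; the folder, idMapId and time series set details (parameterId, locationId, time step) are illustrative placeholders that have to match your own configuration.

<import>
	<general>
		<importType>HymosAscii</importType>
		<folder>$IMPORT_FOLDER$/hymos</folder>
		<idMapId>IdImportHymosAscii</idMapId>
	</general>
	<timeSeriesSet>
		<moduleInstanceId>ImportHymosAscii</moduleInstanceId>
		<valueType>scalar</valueType>
		<parameterId>H.obs</parameterId>
		<locationId>NOOR</locationId>
		<timeSeriesType>external historical</timeSeriesType>
		<timeStep unit="minute" multiplier="15"/>
		<readWriteMode>add originals</readWriteMode>
	</timeSeriesSet>
</import>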

Example

Here is a simple example:

FIXE
PAR
1
NOOR HH4 418 8
2008 1 1 0 2 6048 1 1 114.906
2008 01 01 00 15 0.478
2008 01 01 00 30 0.478
2008 01 01 00 45 0.478
2008 01 01 01 00 0.478
2008 01 01 01 15 0.478
2008 01 01 01 30 0.478
2008 01 01 01 45 0.478

Details of the import format

The field separator is a tab.

LMW

Overview

This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Imports time series data directly from the Dutch Water Authorities' data centre (Landelijk Meetnet Water). The FEWS LMW import function uses
the SIP protocol to import the data directly from a remote database; an internet connection is therefore required. In the FEWS import
configuration file a username and password have to be entered; these can be requested from the LMW data centre directly, not through
Deltares. Without an appropriate username/password combination it is not possible to import data. Currently the import only works on Windows
computers, as there is no Linux library available to access the LMW database.

Configuration (Example)

A complete import module configuration consists of an ID Mapping file, a Flag Mapping file, a Unit Mapping file, and an Import Module Instance
file. See the attached example configuration files.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant 10-minute series for timezone GMT+1 for a
period of 24 hours before the current system time.

[Link]
<general>
<importType>LMW</importType>
<serverUrl>[Link]
<user>......</user>
<password>.......</password>
<relativeViewPeriod unit="hour" startOverrulable="true" endOverrulable="true" start="-24"
end="1"/>
<idMapId>IdImportLMW</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<flagConversionsId>ImportMSWFlagConversions</flagConversionsId>
<missingValue>-1000000000</missingValue>
<missingValue>99999</missingValue>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>LMW</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportLMW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>LMW_h</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
....
<externUnit unit="cm" parameterId="[Link]"/>
<externUnit unit="0.01 m3/s" parameterId="[Link]"/>

]]>

IDMapping

The ID Mapping configuration is very important because this is used by the import function to make requests to the LMW database. In this
example two locations have been included, Almen (externalLocation="ALME") and Lobith (externalLocation="LOBI"). Both these locations have
an external parameter "H10", imported in FEWS as "H.m" water levels. Both these series are observations and therefore need the qualifier "WN". A
complete list of the location and parameter IDs of all series can be requested from LMW.

[Link]
<map externalParameter="H10" internalParameter="H.m" internalLocation="H-RN-ALME"
externalParameterQualifier="WN" externalLocation="ALME"/>
<map externalParameter="H10" internalParameter="H.m" internalLocation="H-RN-0001"
externalParameterQualifier="WN" externalLocation="LOBI"/>
]]>

FlagConversion

The LMW database has a data quality flag for each value in the database; a complete list of quality flags can be requested from LMW.

UnitConversion

Important in the above configuration is that a unit conversion is forced to convert the water levels from cm to m NAP. In LMW all water levels are
stored in cm. A hedged sketch of such a conversion entry is shown below.
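
As an illustration, the ImportUnitConversions file referenced above could contain an entry along the lines of the sketch below for the cm to m conversion. The element names are an assumption based on the general Delft-FEWS unit conversions layout and should be verified against the unitConversions schema.

<unitConversion>
	<inputUnitType>cm</inputUnitType>
	<outputUnitType>m</outputUnitType>
	<multiplier>0.01</multiplier>
	<incrementer>0</incrementer>
</unitConversion>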

Some Issues

Most parameters will be observation data, but some, like the astronomic tide or forecast data, are predictions. If the data are not
observation data, you must add a qualifier to the parameter to indicate the SIP command to be used (see the sketch after this list).
For observation data the external qualifier is "WN"
For forecast data the external qualifier is "VW"
For astronomical data the external qualifier is "AS"
The data in the LMW database cover a period of one month only and the data are retrieved per day.
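
As a sketch, an ID mapping entry for a non-observation series would only differ from the observation entries shown earlier in its qualifier. The internal IDs and the external parameter below are placeholders; only the qualifier value "AS" (astronomical data) follows from the list above.

<map externalParameter="H10" internalParameter="H.astro" internalLocation="H-RN-ALME"
     externalParameterQualifier="AS" externalLocation="ALME"/>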

MM3P

Overview

MM3P files are essentially CSV files (comma-separated values) with the following characteristics:

The first line contains the identification of the columns (see the example below). This line is expected to be present, but it is simply
skipped by the reading routines.
The ID of the location appears in the first column. The ID of the parameter appears in the second column. These are taken to be the
external names for the location and parameter.
The time base column is used to optionally identify the time step of a series. This makes it possible to have both a 15-minute and an
hourly time series in one file.
The derivation is an enumeration of VAL (actual value) and AVG (average value during last time step). It is read as a qualifier and can be
used in the idMapping to store the series at the correct destination (see the ID mapping sketch after the configuration example below).
The date and time for each observed value appears in the fifth column in the format "YYYY-mm-DD HH:MM".
The value itself appears in the sixth column.
There may be more than one location and more than one parameter in the file - each combination will become a new time series.

Example file

Here is an example of such a file (note that a comma (,) is used as the separator exclusively and that the decimal separator is a period (.))

OS_NAME,PT_NAME,Timebase,Derivation,Timestamp,Value,Manual
TKK001,LEVEL,15M,VAL,2009-03-13 05:00,10.655,
TKK001,LEVEL,15M,VAL,2009-03-13 04:45,10.65063,
TKK001,LEVEL,15M,VAL,2009-03-13 04:30,10.64937,
TKK001,LEVEL,15M,VAL,2009-03-13 04:15,10.65563,
TKK001,LEVEL,15M,VAL,2009-03-13 04:00,10.6575,
TKK001,LEVEL,15M,VAL,2009-03-13 03:45,10.6525,
TKK001,LEVEL,15M,VAL,2009-03-13 03:30,10.65,
TKK001,LEVEL,15M,VAL,2009-03-13 03:15,10.65125,
TKK001,LEVEL,15M,VAL,2009-03-13 03:00,10.64937,
TKK001,LEVEL,15M,VAL,2009-03-13 02:45,10.65,
TKK001,LEVEL,15M,VAL,2009-03-13 02:30,10.65063,
TKK001,LEVEL,15M,VAL,2009-03-13 02:15,10.65625,
TKK001,LEVEL,15M,VAL,2009-03-13 02:00,10.65312,
TKK001,LEVEL,15M,VAL,2009-03-13 01:45,10.66,
TKK001,LEVEL,15M,VAL,2009-03-13 01:30,10.64937,
TKK001,LEVEL,15M,VAL,2009-03-13 01:15,10.65688
TKK001,LEVEL,15M,VAL,2009-03-13 01:00,10.65938
TKK001,LEVEL,15M,VAL,2009-03-13 00:45,10.65688

Configuration

Notice that the importType should be defined as MM3PCSV.

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemaLocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>MM3PCSV</importType>
<folder>$IMPORT_FOLDER_MM3P$</folder>
<maxAllowedFolderSizeMB>250</maxAllowedFolderSizeMB>
<failedFolder>$IMPORT_FAILED_FOLDER$/MM3P</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$/MM3P</backupFolder>
<idMapId>IdMM3P</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>MM3P</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMM3P</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>TKK001</locationId>
<locationId>TKK101</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
]]></import></timeSeriesImportRun>
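
The configuration above refers to the ID map IdMM3P, which is not shown on this page. As an illustration, a hedged sketch that picks up the derivation column as a qualifier is given below; the external IDs come from the example file above, while the internal parameter is a placeholder.

<idMap version="1.1">
	<map externalLocation="TKK001" externalParameter="LEVEL" externalParameterQualifier="VAL"
	     internalLocation="TKK001" internalParameter="H.obs"/>
</idMap>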

Pegelonline

Overview

This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Imports time series data that have been provided by Pegelonline ([Link]). The FEWS Pegelonline import function is a
straightforward ASCII file import function.

Configuration (Example)

A complete import module configuration consists of an ID Mapping file, a Unit Mapping file and an Import Module Instance file. See the attached
example configuration files.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant 15 minute time series for timezone GMT+1 hour.

[Link]
<general>
<importType>Pegelonline</importType>
<folder>$IMPORT_FOLDER_PEGELONLINE$</folder>
<idMapId>IdImportPegelOnline</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<missingValue>-777</missingValue>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>PegelOnline</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportPegelOnline</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationSetId>PegelOnline_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<externUnit unit="cm" parameterId="H.m"/>

]]>

IDMapping

The ID Mapping configuration file links the external Pegelonline IDs to the internal FEWS IDs. External Pegelonline IDs are for example "WT_O"
for water temperature and "W_O" for water level.

[Link]
<parameter internal="H.m" external="W_O"/>
<location internal="H-RN-0693" external="2390020"/>
<location internal="H-RN-0984" external="279100000100"/>
<location internal="H-RN-WURZ" external="2430060"/>
]]>

UnitConversion

Important in the above configuration is that a unit conversion is forced to convert the water levels from cm to metres. In Pegelonline all water levels
are stored in cm; the unit is, however, included in the Pegelonline data file header.

Example File

PegelOnline example file

WQCSV

Overview

This import is available in DELFT-FEWS versions after 2008.03

Imports time series data in CSV format, specially added for some of the Dutch water boards. This import format has some special features
compared to other time series import formats. Water quality is mostly analysed from a sample; therefore the sample ID is a required field in this
file. The data are separated by a ";" and contain 15 columns. Because the data files do not contain any information on the content of the
different columns, the layout and number of columns are fixed.

Column Content

1 Sample ID

2 Location ID

3 Location name (not used on importing)

4 X-coordinate (not used on importing)

5 Y-coordinate (not used on importing)

6 Date (format "dd-mm-yyyy")

7 Time (format "hh-mm-ss")

8 Parameter name (not used on importing)

9 Extra Parameter Info (not used on importing)

10 Label / Detection Flag (can be < or > )

11 Value

12 Extra Parameter Info (not used on importing)

13 Unit

14 Hoedanigheid (not used on importing)

15 Parameter ID

The location IDs, parameter IDs and units can be converted to DELFT-FEWS IDs and units using the different mapping tables; a hedged ID mapping sketch is shown below.
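
As an illustration, such an ID mapping file could look like the sketch below. The external IDs come from the example file further down this page; the internal IDs are placeholders that have to match your locations and parameters configuration.

<idMap version="1.1">
	<parameter internal="BZV1" external="BZV1"/>
	<parameter internal="NH4" external="NH4"/>
	<location internal="OGANS900" external="OGANS900"/>
</idMap>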

Configuring the Import

The reader is named WQCSV which should be configured in the general section of the import. An example import configuration is shown below:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemaLocation="
[Link]
[Link] xmlns="
[Link]
<import>
<general>
<importType>WQCSV</importType>
<folder>$IMPORT_FOLDER_WQCSV$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_WQCSV$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_WQCSV$</backupFolder>
<idMapId>IdImportWQCSV</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>WQCSV</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportWQCSV</moduleInstanceId>
<valueType>sample</valueType>
<parameterId>ZS</parameterId>
<locationSetId>WQLocaties</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportWQCSV</moduleInstanceId>
<valueType>sample</valueType>
<parameterId>BZV4</parameterId>
<locationSetId>WQLocaties</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportWQCSV</moduleInstanceId>
<valueType>sample</valueType>
<parameterId>BZV1</parameterId>
<locationSetId>WQLocaties</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

The file format

The file format is just plain ASCII with the columns separated by a semicolon ";".

Example:

2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Afvoer geschat
l/s;;;15;NVT;l/s;OW;afGeschat
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Afvoer
l/s;;;n.b.;NVT;l/s;OW;afL/s
2006010177;OGANS900;Gansbeek
Schelkenspoort;204.22;367.63;09-01-2006;[Link];Ammonium-N;NH4;;0,2;N;mg/l;OW;NH4
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];BZV met Atu/5
dagen;BZV5;<;1;O2;mg/l;OW;BZV1
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Cadmium
(Cd);Cd;;0,088;NVT;ug/l;OW;Cd2W
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Calcium
(Ca);Ca;;36;NVT;mg/l;OW;Ca2W
2006010177;OGANS900;Gansbeek
Schelkenspoort;204.22;367.63;09-01-2006;[Link];Chloride;Cl;;14;NVT;mg/l;OW;Cl
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Chroom
(Cr);Cr;;1,2;NVT;ug/l;OW;Cr2W

Note:

Make sure the date and time formats are correct (dd-mm-yyyy and hh-mm-ss).
Make sure each line has only 15 columns with 14 ";" characters separating the columns.
Only columns 1, 2, 6, 7, 10, 11, 13 and 15 are required, all other columns can be left empty.

Java source code

[Link]

[Link]

{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws IOException {
[Link]("");
[Link]("n.b.");
[Link]("nb");
[Link]("n,b,");
[Link]('?');

DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();

for (String[] buffer = new String[15]; [Link](';', buffer) != -1;) {


[Link](buffer[0]);
[Link](buffer[1]);
[Link]([Link](), "dd-MM-yyyy", buffer[5],
"HH:mm:ss", buffer[6]);
[Link](getOutOfDetectionRangeFlag(buffer[9]));
[Link](',', buffer[10]);
//header .setQualifierIds(buffer[11]); temporarily deleted
[Link](buffer[12]);
[Link](buffer[14]);
[Link](header);
[Link]();
}
}

private static OutOfDetectionRangeFlag getOutOfDetectionRangeFlag(String qualityflag) {


if ([Link]("<")) return OutOfDetectionRangeFlag.BELOW_DETECTION_RANGE;
if ([Link](">")) return OutOfDetectionRangeFlag.ABOVE_DETECTION_RANGE;
return OutOfDetectionRangeFlag.INSIDE_DETECTION_RANGE;
}
}
]]>

ArcInfoAscii

Overview

This import is available in DELFT-FEWS versions after 28-1-2007

Imports time series data (grids) from ArcInfoAscii files. The time and parameter information are encoded in the filename. Example:

Rain_20071010231500.asc (parameterId_YYYYMMDDHHMMSS.extension)

parameterId = Rain

year = 2007

month = 10

day = 10

hours = 23

min = 15

sec = 00

This import always uses the location ID ARC_INFO_LOC as the external location. The import can read from a zip stream. As such, you can put
multiple grids within a single .zip file, which is useful for large numbers of grids or very large grids.

Configuring the Import

The reader is named ArcInfoAsciiGrid which should be configured in the general section of the import:
<importType>ArcInfoAsciiGrid</importType>.
Example:

<?xml version="1.0" encoding="UTF-8"?>


<!--Sample XML file generated by XMLSPY v2004 rel. 3 U ([Link]
<timeSeriesImportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<import>
<general>
<importType>ArcInfoAsciiGrid</importType>
<folder>Import</folder>
<idMapId>IdImportSidbAsc</idMapId>
<geoDatum>Ordnance Survey Great Britain 1936</geoDatum>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportSidbAsc</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>ARC_INFO_LOC</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>

As the file format does not include geographical datum information, you must configure the geoDatum field in the general section.

The IdMapping file should always refer to the external location ARC_INFO_LOC.
Example:

<?xml version="1.0" encoding="UTF-8"?>
<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<parameter external="CEH" internal="[Link]"/>
<location external="ARC_INFO_LOC" internal="My_Location"/>
</idMap>

The file format

Introduction

ARC ASCIIGRID refers to a specific interchange format developed for ARC/INFO rasters in ASCII format. The format consists of a header that
specifies the geographic domain and resolution, followed by the actual grid cell values. Usually the file extension is .asc.

ncols 157
nrows 171
xllcorner -156.08749650000
yllcorner 18.870890200000
cellsize 0.00833300
0 0 1 1 1 2 3 3 5 6 8 9 12 14 18 21 25 30 35 41 47 53
59 66 73 79 86 92 97 102 106 109 112 113 113 113 111 109 106
103 98 94 89 83 78 72 67 61 56 51 46 41 37 32 29 25 22 19
etc...

Geographic header

Coordinates may be in decimal or integer format.


ncols xxxxx

ncols refers to the number of columns in the grid and xxxxx is the numerical value
nrows xxxxx

nrows refers to the number of rows in the grid and xxxxx is the numerical value
xllcorner xxxxx

xllcorner refers to the western edge of the grid and xxxxx is the numerical value
yllcorner xxxxx

yllcorner refers to the southern edge of the grid and xxxxx is the numerical value
cellsize xxxxx

cellsize refers to the resolution of the grid and xxxxx is the numerical value
nodata_value xxxxx

nodata_value refers to the value that represents missing data and xxxxx is the numerical value. The default is -9999.

ArcWat

Overview of ArcWat DBF import functionality

ArcWat provides DBF IV files that can be imported through the data import module of Delft-FEWS. The file is rather basic:
it contains two fixed columns, named "STAMP" and "WAARDE". The STAMP column contains the date-time and the "WAARDE" column contains
the data value.

The STAMP column should be defined as a numeric value (19.1) with the following format: yyyyMMddHHmmss.0
for example: 20080830080718.0, which is read as the date-time August 30th, 2008, [Link]

The WAARDE column should contain the values in a string, with a dot as decimal separator.

An error will be raised in case the STAMP or WAARDE column does not contain a value that meets these requirements.

The missing value is defined as "-999.99"

The importType should be "ARCWAT_DBF".

The location and parameter ID are derived from the file name. If the file name is "LOC_PAR_INFO.DBF", the base file name will be split on the
underscore character. The first token of the file name is used as the external location ID and the second as the external parameter ID.

The attached example file "Station1_Parameter1_20081004_1600.dbf" will be parsed into external location ID "Station1" and external parameter
ID "Parameter1".

The DBF files can not be supplied in a ZIP file.

Configuration

In the import moduleInstance the following definition should be used to import ArcWat DBF files (a fuller hedged sketch follows the snippet):

<general>
<importType>ARCWAT_DBF</importType>
<folder>....
.....
</general>
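
A fuller, hedged sketch of such an import module instance is shown below. Only the importType value ARCWAT_DBF is prescribed; the folder, idMapId and time series set details are illustrative placeholders (the location and parameter IDs follow the attached example file name).

<import>
	<general>
		<importType>ARCWAT_DBF</importType>
		<folder>$IMPORT_FOLDER$/arcwat</folder>
		<idMapId>IdImportArcWat</idMapId>
	</general>
	<timeSeriesSet>
		<moduleInstanceId>ImportArcWat</moduleInstanceId>
		<valueType>scalar</valueType>
		<parameterId>Parameter1</parameterId>
		<locationId>Station1</locationId>
		<timeSeriesType>external historical</timeSeriesType>
		<timeStep unit="nonequidistant"/>
		<readWriteMode>add originals</readWriteMode>
	</timeSeriesSet>
</import>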

BIL Import

Overview

TimeSeries reader for BIL grid files. The identifier for this reader is "BIL". For each BIL file to be imported two other files should also be present:

1. [Link] <- bil file with data


2. [Link] <- header file with geo-referencing and data description
3. [Link] <- timestep file

File format

In the above example the [Link] file contains the actual data, the [Link] file describes the bil file and the [Link] file contains the date/time
information for the bil file.

Contents of the hdr file:

;
; ArcView Image Information
;
NCOLS 534
NROWS 734
NBANDS 2
NBLOCKS 4
NBITS 16
LAYOUT BIL
BYTEORDER I
SKIPBYTES 0
MAPUNITS CELSIUS;METERS
ULXMAP 300000
ULYMAP 300000
XDIM 100
YDIM 100

The BIL import assigns a numerical ID (starting at 0) to each parameter in the BIL file. This information is needed to set up the idmapping table (see
below). The NRBLOCK parameter denotes the number of timesteps. As such, it is possible to have multiple timesteps in a single bil file. The
NRBANDS parameter denotes the number of parameters in the file. The MAPUNITS keywords are not used in the FEWS reader.

Contents of the tim file:

200001011200 60
200001011300 60
200001011400 60
200001011500 60

The first column in the tim file contains the date/time as YYYYMMDDHHMM; the second column contains the number of minutes in the timestep. The second column
is presently ignored and may be omitted.

To read 32bit float bil files you will need to specify:

NRBITS 0

In the future we plan to also support the PIXELTYPE keyword in the header which will alleviate the need for the hack described
here.

Configuration

An example import configuration to import a bil file is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<timeSeriesImportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<import>
<general>
<importType>BIL</importType>
<folder>d:/import/bil</folder>
<idMapId>bilMapId</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>GridImportBIL</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>BIL</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>GridImportBIL</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>BIL</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>

The idmap used in this example:

<?xml version="1.0" encoding="UTF-8"?>


<idMap xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link] version="1.1">
<!---->
<parameter internal="[Link]" external="1"/>
<parameter internal="[Link]" external="0"/>
<!---->
<location internal="BIL" external="BIL"/>
</idMap>

BUFR
BUFR data files are used extensively in the meteorological and oceanographic community to exchange
all manner of data, but most importantly satellite images. More information: [Link] and [Link]

Satellite images (meteorological data)

The BUFR files are assumed to contain the image for one parameter and one time. The name of the
parameter is encoded as the "X" and "Y" fields of the corresponding record, in particular: 1000X+Y.
The reason for this is that the names as reported by BUFR utilities are contained in external
configuration files. It is easier to use the X and Y fields (see the documentation for this type
of file) than to distill the names from the configuration files.

The name of the import type is "BUFR".

Timeseries data (oceanographic data)

BUFR files containing timeseries data can be read using the "WMOBUFR" import type. The import functions use the following conventions:

The name of the location is the string associated with the data item with "fxy" code 0/01/019. Usually this is the human readable name.
The parameter name is constructed from the "fxy" code as f-xx-yyy; for instance, "0-11-011" is the wind direction at 10 m above sea or
ground level. The reason for using this encoding is that it is contained in the file itself, whereas the description "Wind direction (10 m)" is
found in an external file. A hedged ID mapping sketch for this convention is shown after this list.
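
As an illustration, an ID mapping sketch for the WMOBUFR conventions described above is shown below. The external parameter "0-11-011" follows the f-xx-yyy convention from the list; the internal IDs and the external location string are placeholders.

<idMap version="1.1">
	<!-- external parameter follows the f-xx-yyy convention; external location is the
	     station name string associated with fxy code 0/01/019 -->
	<parameter internal="Wind.direction" external="0-11-011"/>
	<location internal="BUOY_01" external="Station name as reported in the BUFR file"/>
</idMap>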

Note:

Some BUFR files, one example being files produced by the Wavenet measurement system in the UK, contain extra information rendering them
unusable for the library that implements the WMOBUFR import type. Instead use the BUFR type. The files may contain only a single time, though
multiple parameters. (If the WMOBUFR library can not properly handle them, then parameters that you know to be present will be missing.)

The names of the parameters are slightly different in that case: they are formed as an integer number from the "fxy" code, so that fxy = 0 22 70
(significant wave height) becomes "22070" instead of "0-22-070".

Background: BUFR Tables

When using BUFR files, you should at least have a basic understanding of the philosophy of the file format. A BUFR file consists of one or more
messages, each containing data and a complete description of these data.

However, the description is encoded: each part is identified by the so-called fxy code, a code consisting of three numbers, f, x and y, that are used
to retrieve information from several tables. These tables (see the subdirectory "bufr" under the directory "bin" of the Delft-FEWS installation)
contain the descriptive strings:

The name of the institute that did the measurements
Description of the instruments or measurement methods
Description of the parameter that is stored and in what unit the data are expressed

The Delft-FEWS import module uses but a few pieces of the available information, notably the location ID, the parameter ID and the unit of the
values.

If you need to define the external ID for the parameters, then consult these tables, as they contain all the information you need.

CSV

Overview

Imports time series data from files in CSV format with three header lines containing a description of the time series:

The first line contains the location names, but the line is used only to determine the field separator and the decimal separator (see below)
The second line contains the keyword "Location Ids" as the first field and then the IDs for the locations for which the time series are given.
These IDs are the external IDs, found in an ID map.
The third line contains the IDs of the parameters.
All other lines contain the time (in yyyy-mm-dd HH:MM:SS format) as the first field and the values for each time series in the next fields.
Values between -1000.0 and -999.0 (inclusive) are regarded as missing values.

Furthermore, if you need to specify the unit in which the parameter value is expressed, you can do this by adding the unit in square brackets to
the ID of the parameter:

Rainfall [mm/day]

would thus mean the rainfall expressed in mm per day.
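
For instance, the three header lines of such a file could then look as follows (reusing the IDs of the example below; the unit is illustrative):

Location Names,Bewdley,Saxons Lode
Location Ids,EA_H-2001,EA_H-2032
Time,Rainfall [mm/day],Rainfall [mm/day]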

The CSV files can be supplied in a ZIP file.

Import type

The import type is CSV. There is no particular file extension required.
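
A minimal general section for an import run that reads these files might look like the sketch below (the folder variable and idMapId are illustrative):

<general>
    <importType>CSV</importType>
    <folder>$IMPORT_FOLDER_CSV$</folder>
    <idMapId>IdImportCSV</idMapId>
</general>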

Example

Here is a simple example:

Location Names,Bewdley,Saxons Lode
Location Ids,EA_H-2001,EA_H-2032
Time,Rainfall,Rainfall
2003-03-01 [Link],-999,-999
2003-03-01 [Link],1.000,1.000
2003-03-01 [Link],2.000,2.000
2003-03-01 [Link],3.000,3.000
2003-03-01 [Link],4.000,4.000
2003-03-01 [Link],-999,5.000
2003-03-01 [Link],6.000,6.000
2003-03-01 [Link],7.000,7.000
2003-03-01 [Link],8.000,8.000
2003-03-01 [Link],9.000,9.000
2003-03-01 [Link],10.000,10.000
2003-03-01 [Link],11.000,11.000
2003-03-01 [Link],12.000,12.000
2003-03-01 [Link],13.000,13.000
2003-03-01 [Link],14.000,14.986

Details of the import format

If the first line contains a comma, the field separator is taken to be a comma and the decimal separator a period (.); otherwise the field
separator is assumed to be a semicolon (;) and the decimal separator a comma. This way locale-specific CSV files are supported.

The field separator is either a comma or a semicolon. Tabs are not supported.
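
The same data could therefore also be supplied in a semicolon-separated variant with a comma as decimal separator, for example (illustrative values):

Location Names;Bewdley;Saxons Lode
Location Ids;EA_H-2001;EA_H-2032
Time;Rainfall;Rainfall
2003-03-01 01:00:00;1,000;1,000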

Java source code

[Link]

[Link]

* A detailed description can be found in JIRA issue FEWS-1995


*<pre>
*Example
*Location Names,Bewdley,Saxons Lode
*Location Ids,EA_H-2001,EA_H-2032
*Time,Rainfall,Rainfall
*2003-03-01 [Link],-999,-999
*2003-03-01 [Link],1.000,1.000
*2003-03-01 [Link],2.000,2.000
*2003-03-01 [Link],3.000,3.000
*2003-03-01 [Link],4.000,4.000
*2003-03-01 [Link],-999,5.000
*2003-03-01 [Link],6.000,6.000
*2003-03-01 [Link],7.000,7.000
*2003-03-01 [Link],8.000,8.000
*2003-03-01 [Link],9.000,9.000
*2003-03-01 [Link],10.000,10.000
*2003-03-01 [Link],11.000,11.000
*2003-03-01 [Link],12.000,12.000
*2003-03-01 [Link],13.000,13.000
*2003-03-01 [Link],14.000,14.986
*</pre>
*/
public class CsvTimeSeriesParser implements TextParser<TimeSeriesContentHandler> {
private static final char[] QUOTE_CHARACTERS = new char[]{'\'', '\"'};
private char decimalSeparator = '\0';
private int columnCount = 0;
private LineReader reader = null;
private TimeSeriesContentHandler contentHandler = null;
private char columnSeparatorChar = '\0';

@Override

public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws IOException {
[Link] = contentHandler;
[Link](-999.9999f, -999f);

[Link] = reader;
parseHeader();

for (String[] buffer = new String[columnCount]; [Link](columnSeparatorChar,


buffer) != -1;) {
[Link]([Link](), "yyyy-MM-dd
HH:mm:ss", buffer[0]);
for (int i = 1; i < columnCount; i++) {
[Link](i);
[Link](decimalSeparator, buffer[i]);
[Link]();
}
}
}

// The first few lines contain vital information about the file:
// - Whether the separator character is a , or a ;
// - The names of the parameters and locations

private void parseHeader() throws IOException {


String locationNamesLine = [Link]();
if ([Link](";") && ![Link](",")) {
columnSeparatorChar = ';';
decimalSeparator = ',';
} else {
columnSeparatorChar = ',';
decimalSeparator = '.';
}

String[] locationIdsLine = [Link](columnSeparatorChar);


String[] parameterIdsAndUnitsLine = [Link](columnSeparatorChar);

if ([Link] != [Link])
throw new IOException("Number of locations not the same as the number of parameters\n"
+ [Link]());

columnCount = [Link];

DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();


for (int i = 1; i < columnCount; i++) {
[Link](locationIdsLine[i]);
String parAndUnit = parameterIdsAndUnitsLine[i];
[Link](getName(parAndUnit));
[Link](getUnit(parAndUnit));
[Link](i, header);
}
}

private static String getName(String string) {


String res = [Link](string, '[');
if (res == null) res = string;
if (res == null) return null;
return [Link](res, QUOTE_CHARACTERS);
}

private static String getUnit(String string) {


String res = [Link](string, "[", "]");
if (res == null) return null;
return [Link](res, QUOTE_CHARACTERS);

}

}
]]>

Database import

This is a placeholder for more extensive documentation

Overview

TimeSeries reader for a database. The identifier for this reader is "database". This import allows an FSS or stand-alone system to import data from
a database. The import reads the database directly.

Configuration

This reader supports the tableMetadata element in the general section of the timeSeriesImportRun.

An example file is attached to this page.

Delft-Fews Published Interface timeseries Format (PI) Import

Overview

The Delft-Fews Published interface format (PI) consists of a number of xsd schemas defining a number of XML formats for the exchange of data.
The timeseries format deals with (scalar) timeseries data.

Time series data represent data collected over a given period of time at a specific location. Within the PI XML format timeseries files can contain
both equidistant time series and non-equidistant series. Multiple time series can be stored in a single file. All time and date information is given in
GMT unless otherwise stated. Each series has a small header followed by a number of events. An event is defined by a date/time, a value and an
(optional) quality flag. A missing value definition can be defined in the files. The default (and preferred) missing value definition is NaN.

As described in the timeseries XML schemas a single quality flag may be given. It is up to the data supplier to describe the meaning of the quality
flags used. Delft-Fews will map these to internal flags on import.

A detailed description of all PI formats can be found at


[Link] The latest versions of the schema files are always
available at the Deltares website at: [Link]

The file format

Please consult the full documentation ([Link]) and the xsd schema for more details.
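
A minimal import configuration that reads PI timeseries files could look like the sketch below (the folder variable and idMapId are illustrative); the example file that follows shows the corresponding PI XML content:

<general>
    <importType>PI</importType>
    <folder>$IMPORT_FOLDER_PI$</folder>
    <idMapId>IdImportPI</idMapId>
</general>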

<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>locA</locationId> <!-- Put the locationId here.
the locationId is defined by the data supplier. Delft-Fews
will map this to an internal location if needed.
-->
<parameterId>[Link]</parameterId> <!-- Put the parameterId here.
the parameterId is defined by the data supplier. Delft-Fews
will map this to an internal parameter if needed. -->
<timeStep unit="second" multiplier="3600"/>
<!-- start and end date/time are required! -->
<startDate date="2006-08-23" time="[Link]"/>
<endDate date="2006-08-24" time="[Link]"/>
<missVal>-8888.0</missVal>
<longName>Bobbio Trebbia</longName>
<units>m</units>
</header>
<event value="8.66" date="2006-08-23" time="[Link]"/>
<event value="9.66" date="2006-08-23" time="[Link]"/>
<event value="8.66" time="[Link]" flag="33" date="2006-08-23"/>
<event value="-8888.0" date="2006-08-23" time="[Link]"/>
<event value="8888.0" date="2006-08-23" time="[Link]"/>
<event value="8888.0" time="[Link]" flag="9" date="2006-08-23"/>
<event value="8888.0" time="[Link]" flag="99" date="2006-08-23"/>
<event value="-8888.0" time="[Link]" flag="33" date="2006-08-24"/>
</series>
<series>
<header>
<type>instantaneous</type>
<locationId>locB</locationId> <!-- Put the locationId here.
the locationId is defined by the data supplier. Delft-Fews
will map this to an internal location if needed.
-->
<parameterId>[Link]</parameterId> <!-- Put the parameterId here.
the parameterId is defined by the data supplier. Delft-Fews
will map this to an internal parameter if needed. -->
<timeStep unit="second" multiplier="3600"/>
<!-- start and end date/time are required! -->
<startDate date="2006-08-23" time="[Link]"/>
<endDate date="2006-08-24" time="[Link]"/>
<missVal>-999.0</missVal>
<longName>Fitz</longName>
<units>m</units>
</header>
<event value="2.66" date="2006-08-23" time="[Link]"/>
<event value="2.66" date="2006-08-23" time="[Link]"/>
<event value="2.66" date="2006-08-23" time="[Link]"/>
<event value="-2.0" date="2006-08-23" time="[Link]"/>
<event value="2.0" date="2006-08-23" time="[Link]"/>
<event value="2.0" date="2006-08-23" time="[Link]"/>
<event value="2.0" date="2006-08-23" time="[Link]"/>
<event value="-2.0" date="2006-08-24" time="[Link]"/>
</series>

]]>

A layout of the schema that defines this format is shown below:

Java source code

[Link]

[Link]

, VirtualOutputDirConsumer {
public enum EventDestination {XML_EMBEDDED, SEPARATE_BINARY_FILE, ONLY_HEADERS}

private EventDestination eventDestination = EventDestination.XML_EMBEDDED;


private PiVersion version = PiVersion.VERSION_1_2;

private final FastDateFormat dateFormat = new FastDateFormat("yyyy-MM-dd");


private final FastDateFormat timeFormat = new FastDateFormat("HH:mm:ss");
private TimeSeriesContent timeSeriesContent = null;
private LineWriter writer = null;
private ContentHandler xmlContentHandler = null;
private VirtualOutputDir virtualOutputDir = null;

private LittleEndianDataOutputStream binaryOutputSteam = null;

private final AttributesImpl attributesBuffer = new AttributesImpl();

public PiTimeSeriesSerializer() {
}

public EventDestination getEventDestination() {


return eventDestination;
}

public void setEventDestination(EventDestination eventDestination) {


[Link] = eventDestination;
}

public PiVersion getVersion() {


return version;
}

public void setVersion(PiVersion version) {


if (version == null)
throw new IllegalArgumentException("version == null");

[Link] = version;
}

@Override
public void setVirtualOutputDir(VirtualOutputDir virtualOutputDir) {
[Link] = virtualOutputDir;
}

@Override
public void serialize(TimeSeriesContent timeSeriesContent, LineWriter writer, String
virtualFileName) throws Exception {
[Link] = timeSeriesContent;
[Link] = writer;

[Link]([Link]());
[Link]([Link]());

if (eventDestination == EventDestination.SEPARATE_BINARY_FILE) {
if (virtualOutputDir == null)
throw new IllegalStateException("virtualOutputDir == null");
binaryOutputSteam = new
LittleEndianDataOutputStream([Link]([Link](virtualFileName
"bin")));
try {
serialize();
} finally {
[Link]();
}
return;
}

binaryOutputSteam = null;
serialize();
}

private void serialize() throws Exception {


XMLSerializer serializer = new XMLSerializer();
OutputFormat of = new OutputFormat("XML", "UTF-8", true);
[Link](of);
[Link](writer);
xmlContentHandler = [Link]();

[Link]();
[Link]();
addAttribute("xmlns", "[Link]
addAttribute("xmlns:xsi", "[Link]
addAttribute("xsi:schemaLocation", [Link]("pi_timeseries.xsd"));
addAttribute("version", [Link]());
[Link]("", "TimeSeries", "TimeSeries", attributesBuffer);
writeElement("timeZone", [Link]((double)
[Link]().getRawOffset() / (double) TimeUnit.HOUR_MILLIS));
for (int i = 0, n = [Link](); i < n; i++) {
[Link](i);
[Link](null, null, "series", null);
writeHeader();
writeEvents();
[Link](null, null, "series");
}
[Link](null, null, "TimeSeries");
[Link]();
}

private void writeEvents() throws Exception {


for (int i = 0, n = [Link](); i < n; i++) {
[Link](i);
if (![Link]()) continue;
writeEvent([Link]());
}
}

private void writeHeader() throws Exception {


TimeSeriesHeader header = [Link]();
PiTimeSeriesHeader piHeader = header instanceof PiTimeSeriesHeader ? (PiTimeSeriesHeader)
header : new PiTimeSeriesHeader();

[Link](null, null, "header", null);


writeElement("type", [Link]() == null ? "instantaneous" :
[Link]().getName());
writeElement("locationId", [Link]() == null ? "unknown" :
[Link]());
writeElement("parameterId", [Link]() == null ? "unknown" :
[Link]());
if ([Link]() >= PiVersion.VERSION_1_4.ordinal()) {
for (int i = 0, n = [Link](); i < n; i++) {
writeElement("qualifierId", [Link](i));
}
}
if ([Link]() >= PiVersion.VERSION_1_4.ordinal() && [Link]() !=
null && ![Link]().equals("main")) {
writeOptionalElement("ensembleId", [Link]());
writeOptionalElement("ensembleMemberIndex", [Link]());
}

writeTimeStep(header);
writePeriod();
if ([Link]() >= PiVersion.VERSION_1_5.ordinal()) writeTime("forecastDate",
[Link]());

writeElement("missVal", [Link]([Link]()));
writeOptionalElement("longName", [Link]());
writeOptionalElement("stationName", [Link]());
writeOptionalElement("units", [Link]());
writeOptionalElement("sourceOrganisation", [Link]());
writeOptionalElement("sourceSystem", [Link]());
writeOptionalElement("fileDescription", [Link]());

if ([Link]() != Long.MIN_VALUE) {
writeElement("creationDate", [Link]([Link]()));
writeElement("creationTime", [Link]([Link]()));
}

writeOptionalElement("region", [Link]());

[Link](null, null, "header");


}

private void writePeriod() throws SAXException {


TimeStep timeStep = [Link]().getTimeStep();
Period period = [Link]();
Period headerPeriod;
if (period == [Link]) {
// create a dummy period
long now = [Link]([Link]());
headerPeriod = new Period(now, now);
} else {
headerPeriod = period;
}

writeTime("startDate", [Link]());
writeTime("endDate", [Link]());
}

private void writeTime(String name, long time) throws SAXException {


if (time == Long.MIN_VALUE) return;
[Link]();
addAttribute("date", [Link](time));
addAttribute("time", [Link](time));
writeAttributes(name);
}

private void writeTimeStep(TimeSeriesHeader header) throws SAXException {


TimeStep timeStep = [Link]();
[Link]();

// todo add support for month time step


if ([Link]()) {
long seconds = [Link]() / 1000;
addAttribute("unit","second");
addAttribute("multiplier", [Link](seconds));
} else {
addAttribute("unit", "nonequidistant");
}

writeAttributes("timeStep");
}

private void writeEvent(long time) throws Exception {


if (eventDestination == EventDestination.ONLY_HEADERS) return;
if (eventDestination == EventDestination.SEPARATE_BINARY_FILE) {
[Link]([Link]());
return;
}

[Link]();

addAttribute("date", [Link](time));
addAttribute("time", [Link](time));
addAttribute("value", [Link]('.'));
addAttribute("flag", [Link]());
if ([Link]() >= PiVersion.VERSION_1_3.ordinal()) {

String comment = [Link]();
if (comment != null) addAttribute("comment", [Link]());
}
writeAttributes("event");
}

private void writeOptionalElement(String elementName, int index) throws SAXException {


if (index == -1) return;
writeElement(elementName, [Link](index));
}

private void writeOptionalElement(String elementName, String s) throws SAXException {


if (s == null) return;
if ([Link]().length() == 0) return;
writeElement(elementName, s);
}

private void writeElement(String name, String value) throws SAXException {


[Link](null, null, name, null);
[Link]([Link](), 0, [Link]());
[Link](null, null, name);
}

private void writeAttributes(String name) throws SAXException {


[Link]("", name, name, attributesBuffer);
[Link](null, null, name);
}

private void addAttribute(String name, String value) {


[Link]("", name, name, "CDATA", value);
}
}
]]>

DINO

Overview

Imports time series data from the TNO DINO files.

There are two types of DINO ASCII files:

GWS_PutXXXXXXX files
Put_XXXXXXX files

Where XXXXXXX is the location ID.

Status

Import is coded and tested.


Both file types can be imported.
Files can be ',' or ';' separated

Not yet supported:

The column "BIJZONDERHEID" is not yet imported or interpreted.

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.

ModuleConfigFiles/

The following example of an Import Module Instance will import the time series as non-equidistant series.

[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemaLocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>DINO</importType>
<folder>$IMPORT_FOLDER_DINO$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_DINO$</failedFolder>
<idMapId>IdImportDINO</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>DINO</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportDINO</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DINO_G.meting_nonequidistant</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles/

Defines mappings between DINO and FEWS parameters and locations.

[Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemaLocation="[Link]
[Link] version="1.1">
<!--DINO locations-->
<map externalParameter="STAND (MV)" internalParameter="[Link]" internalLocation="B45F0142"
externalParameterQualifier="1" externalLocation="B45F0142"/>
<map externalParameter="STAND (MV)" internalParameter="[Link]" internalLocation="B51F0423"
externalParameterQualifier="1" externalLocation="B51F0423"/>
</idMap>
]]>

Important in this configuration is the externalParameterQualifier, which is used to indicate the filter number.

Example File/

GWS_PutB45H0224.csv

java source code

[Link]

[Link]

{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
[Link](reader);

String[] columnNames = null;


String[] buffer = null;
char columnSeparator = '\0';
int headerIndex = -1;

for (String line; (line = [Link]()) != null;) {


if (columnSeparator == '\0') {
if ([Link](',') != -1) columnSeparator = ',';
if ([Link](';') != -1) columnSeparator = ';';
if (columnSeparator == '\0') continue;
}

if ([Link](line, columnSeparator).trim().equalsIgnoreCase("Locatie")) {
columnNames = [Link](line, columnSeparator);
[Link](columnNames);
buffer = new String[[Link]];
headerIndex++;
continue;
}

// skip meta data header


if (columnNames == null || headerIndex % 2 == 0) continue;

[Link](line, columnSeparator, buffer);


[Link](buffer[0]);
[Link](buffer[1]);
String time = buffer[2];
String pattern = [Link]('-') > -1 ? "dd-MM-yyyy" : "yy/MM/dd HH:mm:ss";
[Link]([Link](), pattern, time);

for (int i = 3; i < [Link]; i++) {


// Check for column with name BIJZONDERHEID, add as flag or comment (ToDo)
if (columnNames[i].length() == 0) continue;
if (columnNames[i].equalsIgnoreCase("BIJZONDERHEID")) {
continue;
}
[Link](columnNames[i]);
[Link](header);
[Link]('.', buffer[i]);
[Link]();
}
}
}
}
]]>

DIVER MON

Overview

Imports time series data from Diver loggers. The files have a sort of Windows ini file format with file extension (*.mon). The format of the MON ini
files is not well defined. Many programs interpret the structure differently and have various names for the ini file sections and parameters.

Sections: section declarations start with '[' and end with ']', e.g. '[Logger settings]' or '[Instrument info]'.
Parameters or items: the content of a section, with an '=' sign between the key and the value, e.g. "location = abc".

The Date format used is: "yyyy/MM/dd HH:mm:ss"

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT+1 with a time step of 1
hour. The MON files often do not save the data at rounded hourly times; therefore a tolerance has been added to map the imported data to the
correct hourly interval time series.

[Link]
<timeSeriesImportRun ......"="......"">
<import>
<general>
<importType>DIVERMON</importType>
<folder>$IMPORT_FOLDER_MON$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_MON$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_MON$</backupFolder>
<idMapId>IdImportMON</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>MON</dataFeedId>
</general>
<tolerance locationSetId="ImportMON_H.meting.cm_uur" timeUnit="minute" unitCount="30"
parameterId="[Link]"/>
<timeSeriesSet>
<moduleInstanceId>ImportMON</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>ImportMON_H.meting.cm_uur</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles

ID mapping defines mappings between Diver MON and FEWS parameters and locations. Remember that ID mapping is case sensitive.

sample of [Link]
<map externalparameter="niveau" internalparameter="[Link]" internallocation=
"Dorperdijk_beneden" externallocation="Dorperdijk beneden"/>

]]>

Example file

There is a wide range of MON file types, this is just one example.

sample of 13PB175646_03_16_0708_23_07.mon

Accepted Mon file Sections and Parameters

The Mon Import module in Delft-FEWS does not parse all data in the MON file. The important sections and parameters are the following:

[Logger settings] or [Instrument info]; Read location Id from this section
Location or Locatie ; location Id
[Channel 1] or [Kanaal 1] ; Not used
[Channel 2] or [Kanaal 2] ; Not used
[Series settings] or [Instrument info from data header] ; Not used
[Channel 1 from data header] or [Kanaal 1 from data header]; Read Parameter Id
Identification or Identificatie = Parameter Id
[Channel 2 from data header] or [Kanaal 2 from data header]; Read Parameter Id
Identification or Identificatie = Parameter Id
[Data]; Data values with the different channels in columns. The data values may have a "." or a "," as decimal separator; both options are
accepted by the import function.

When the MON file is not in the correct format a warning message is returned. Known problems are missing location IDs or parameter IDs in the
MON files.
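
As an illustration of these sections, a heavily simplified MON file skeleton could look like the sketch below (all values are hypothetical; the location and parameter names are taken from the idMap example above):

[Logger settings]
Location = Dorperdijk beneden
[Channel 1 from data header]
Identification = niveau
[Data]
2
2008/03/16 07:00:00    123.4
2008/03/16 08:00:00    125.1
END OF DATA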

Java source code

[Link]

[Link]

{
private static final Logger log = [Link]([Link]);

/* headers are written in dutch and english */


private enum DiverHeaderConstants {
DUTCH_HEADER_CONSTANTS("Instrument info",
"Instrument info from data header", "Kanaal", "Locatie", "Identificatie",
"Bereik", "Data"),
ENGLISH_HEADER_CONSTANTS("Instrument info",
"Instrument info from data header", "Channel", "Location", "Identification",
"Range", "Data"),
LOGGER_HEADER_CONSTANTS("Logger settings",
"Series settings", "Channel", "Location", "Identification", "Range", "Data"),
MIXED_HEADER_CONSTANTS("Instrument info",
"Instrument info from data header", "Kanaal", "Location", "Identification",
"Range", "Data"),
MIXED_HEADER_CONSTANTSNEW("Instrument info",
"Instrument info from data header", "Kanaal", "Location", "Identification",
"Range", "Data"),
INCOMPLETE_HEADER_CONSTANTS("Instrument info",
"Instrument info from data header", "Kanaal", "Locatie", "Identificatie",
"Bereik", "Data");

String loggerSettings;
String seriesSettings;
String channel;
String location;
String chnlId;
String chnlRange;
String data;

DiverHeaderConstants(String loggerSettings, String seriesSettings, String channel,


String location, String chnlId, String chnlRange, String data) {
[Link] = location;
[Link] = seriesSettings;
[Link] = chnlId;
[Link] = chnlRange;
[Link] = loggerSettings;
[Link] = channel;
[Link] = data;
}
}

private LineReader reader = null;


private String virtualFileName = null;
private TimeSeriesContentHandler contentHandler = null;

private int channelCount = -1;

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = reader;
[Link] = virtualFileName;
[Link] = contentHandler;
[Link](99999f);
[Link](true);
parseHeaders();
parseData();
}

/**
* Parses the datarows of the diver file. Data is written as a timestamp, followed by a
number of columns. Each
* column refers to a channel that is declared in the Series settings
*
*/
private void parseData() throws Exception {
int nrOfLines = [Link]([Link]().trim());

long lineCounter = 0;
char columnSeparator = '\0';
String[] buffer = new String[channelCount + 2];
for (String line; (line = [Link]()) != null;) {
if (columnSeparator == '\0') {
columnSeparator = [Link]('\t') == -1 ? ' ' : '\t';
}
if ([Link]("END OF DATA")) break;
[Link](line, columnSeparator, buffer);
[Link]([Link](), "yyyy/MM/dd", buffer[0],
"HH:mm:ss", buffer[1]);
for (int i = 0; i < channelCount; i++) {
String valueText = buffer[i + 2];
if (valueText == null) continue;
valueText = [Link](',', '.');
[Link](i);
[Link]('.', valueText);
[Link]();
}
lineCounter++;
}

if ([Link]())
[Link]("Number of expected values: " + nrOfLines + " Found values: " +
lineCounter);

/**
* Parses the available channels from the *.mon file. Multiple channels can exist in a file.
Typically the number
* of channels will be 2 but files with 1 channel do exist.
*/
private void parseHeaders() throws Exception {
IniFile iniFile = parseHeaderIniFile();
DiverHeaderConstants headerConstants = determineLanguage(iniFile);

String loc = [Link]([Link], [Link]);


if ([Link](loc) == null) {
throw new Exception("No location found");

}
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
String[] subjects = [Link]();

channelCount = 0;

for (int i = 1; ; i++) {


String subject = [Link]() + ' ' + i + " from data
header";
if ([Link](subjects, subject) == -1) {
if ([Link] == 0) {
throw new Exception("No channel identification found");
}
return;
}
// subject is found, get id and range
String id = [Link](subject, [Link]);

if ([Link]() == 0) {
[Link]("Identification for channel: '" + subject + "' is not set in File: " +
virtualFileName);
// set dummy parameter
id = "Not Defined";
}

[Link](id);
[Link](loc);
[Link](channelCount, header);
channelCount++;
}
}

private IniFile parseHeaderIniFile() throws Exception {


List<String> headerLines = new ArrayList<String>();
for (String line; (line = [Link]()) != null;) {
if ([Link]().equalsIgnoreCase("[Data]")) {
BufferedReader reader = new BufferedReader(new
StringReader([Link](headerLines, '\n')));
return new IniFile(reader, virtualFileName);
}
[Link](line);
}

throw new Exception("No data found");


}

private static DiverHeaderConstants determineLanguage(IniFile iniFile) throws Exception {


String[] subjects = [Link]();

if ([Link] > 0 && subjects[0].equalsIgnoreCase("Logger settings")) return


DiverHeaderConstants.LOGGER_HEADER_CONSTANTS;

String subject2 = [Link] < 2 ? "" : subjects[1];


if ([Link]("channel 1")) return
DiverHeaderConstants.ENGLISH_HEADER_CONSTANTS;
if ([Link]("kanaal 1")) return
DiverHeaderConstants.DUTCH_HEADER_CONSTANTS;
if ([Link]("channel 1 from data header")) return
DiverHeaderConstants.ENGLISH_HEADER_CONSTANTS;
if ([Link]("kanaal 1 from data header")) return
DiverHeaderConstants.DUTCH_HEADER_CONSTANTS;

if ([Link] > 5) {

if (subjects[4].equalsIgnoreCase("kanaal 1 from data header")) return
DiverHeaderConstants.MIXED_HEADER_CONSTANTS;
if (subjects[4].equalsIgnoreCase("kanaal1 from data header")) return
DiverHeaderConstants.MIXED_HEADER_CONSTANTSNEW;
}
if ([Link] > 3) {
if (subjects[2].equalsIgnoreCase("kanaal 1 from data header")) return
DiverHeaderConstants.INCOMPLETE_HEADER_CONSTANTS;
}

// the file format is corrupt


throw new Exception("File format not recognized as a valid diver (.mon) file, check the
headers");
}
}
]]>

FewsDatabase Import

Overview

TimeSeries reader for a FEWS Master Controller database. The identifier for this reader is "Fewsdatabase". This import allows an FSS or stand-alone
system to import data from another FEWS Master Controller database. The import reads the database directly and does NOT communicate
with the MC. The following limitations apply:

1. It can only handle external historical and external forecasting data.
2. It does not use the timestep as a key. As such the behavior is undefined if the remote database has the same series (location/parameter)
with different timesteps.
3. The relativeViewPeriod element in the general section is required, although the schema (see below) indicates it is optional.

Configuration

The schema extension of the import module is shown below:

An example configuration file is attached to this page. This file imports one timeseries from the eami00 database on the fewdbsvr04 server; the
figure below shows this file in XML-SPY grid format:

database types

Syntax for SQL Server:

[Link]
<jdbcConnectionString>jdbc:jtds:sqlserver://MYSERVER:1433;databaseName=MYNAME</
jdbcConnectionString>
<user>fews</user>
<password>123</password>
]]>

Syntax for Oracle:

[Link]
<jdbcConnectionString>jdbc:oracle:thin:@ fewsdbsvr[Link]mi00</jdbcConnectionString>
<user>fews</user>
<password>123</password>
]]>

Syntax for a localDatastore, which should be placed into an import directory. Notice that jdbcDriverClass etc. are not required, but the import folder is
now required. The system automatically detects the type of the local datastore (Access or Firebird):

<importType>Fewsdatabase</importType>
<folder>$IMPORT_FOLDER_FEWS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_FEWS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_FEWS$</backupFolder>
]]>

Gray Scale Image

Overview

Meteosat images are generally imported as images in <filename>.png format. The Meteosat images constitute a time series of PNG images that
are geo-referenced by means of a specific world file. Each image needs its own world file, which in the case of PNG carries the extension
<filename>.pgw.

Import of images in another format, such as JPEG, is also possible. The corresponding world file for a JPEG file has the extension <filename>.jgw.
The images are imported via a common time series import, for which a specific image parameter needs to be specified in a parameterGroup via
the parameter id image.
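
A world file is a plain text file with six lines: the x pixel size, two rotation terms, the (negative) y pixel size, and the x and y coordinates of the centre of the upper-left pixel. A hypothetical <filename>.pgw for a 1 km resolution image could look like:

1000.0
0.0
0.0
-1000.0
155000.0
463000.0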

Configuration (Example)

The regional parameters XML file must have a special parameter for the images that are imported.

[Link]
<parameterType>instantaneous</parameterType>
<unit>-</unit>
<valueResolution>8</valueResolution>
<parameter id="image">
<shortName>image</shortName>
</parameter>

]]>

The value resolution indicates the resolution of the values of the pixels (grey tones) in the Meteosat images. In this case 8 grey tones are
resampled into a single grey tone to reduce storage space. In the module for the time series import run for a Meteosat image the import is
then configured as follows:

[Link]
<general>
<importType>GrayScaleImage</importType>
<folder>$REGION_HOME$/Import/MeteoSat</folder>
<idMapId>IdImportMeteosat</idMapId>
</general>

<timeSeriesSet>
<moduleInstanceId>ImportMeteosat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>image</parameterId>
<locationId>meteosat</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>4</synchLevel>
<expiryTime unit="day" multiplier="750"/>
</timeSeriesSet>

]]>

The georeferenced image can then be displayed in the grid display.

hdf4

Overview

Imports time series data stored in the HDF4 file format.

Notice that the file name should contain the date and time for the data in the following format:

*_yyyymmddHHMM_*.*

that is:

The file name without the extension should contain the date and time between two underscores, for instance
AMSR_E_L2A_BrightnessTemperatures_P07_200604152307_D.hdf
The date and time are given as four digits for the year, two digits for respectively month, day, hour and minute (seconds are ignored; the
format may not currently contain a "T" to separate date and time or a "Z" to indicate the timezone).

All parameters in the file are assumed to be defined on one and the same grid, defined in the configuration.

Configuration (Example)

<timeSeriesSet>
<moduleInstanceId>ImportMODISHDF4</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>sRb5</parameterId>
<locationId>MODIS_GRID</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>7</synchLevel>
<expiryTime unit="day" multiplier="14"/>
</timeSeriesSet>
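
The timeSeriesSet above needs to be embedded in a timeSeriesImportRun with a general section. A minimal sketch is given below; note that the import type name and the folder variable are assumptions based on this section title, not taken from this page:

<general>
    <importType>hdf4</importType>
    <folder>$IMPORT_FOLDER_HDF4$</folder>
    <idMapId>IdImportHDF4</idMapId>
</general>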

HYMOS

Overview of HYMOS transfer database import functionality

HYMOS provides a database format to transfer time series. Two formats of HYMOS Transfer Databases are provided, related to the
HYMOS versions 4.03 and 4.50. The transfer database files are in MS Access format (*.mdb).

The transfer files can be imported through the data import module of Delft-FEWS.

The importType should be "HymosTransferDb".

Notice that FlagConversion should be applied to convert from the HYMOS flags to FEWS flags. See the attached conversion files.

The files can not be supplied in a ZIP file.

Configuration

In the import moduleInstance the following definition should be used to import HYMOS transfer databases:

<general>
<importType>HymosTransferDb</importType>
<folder>$IMPORT_FOLDER_HYMOS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_HYMOS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_HYMOS$</backupFolder>
<idMapId>IdImportHYMOS</idMapId>
<flagConversionsId>ImportHYMOSFlagConversions</flagConversionsId>
<importTimeZone>
.....
</general>

Export from FEWS to HYMOS

It is also possible to export time series from Delft-FEWS to HYMOS transfer databases. The export is available by default in the table of the Time
Series Display (Save as ... menu option). However, this option only uses the correct flag conversion if you have also defined the HYMOS export in
the file menu of the explorer.

You can activate this by adding the following definition to your explorer configuration:

<interactiveExportFormats>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.03</name>
<exportType>Hymos 4.03</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.50</name>
<exportType>Hymos 4.5</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
</interactiveExportFormats>

Note: all the timeseries within the "interactiveExportFormat" block should have the same time step, as otherwise Hymos will not accept the file.

Java source code

[Link]

[Link]

{
private static final Logger log = [Link]([Link]);

private Connection connection = null;


private TimeSeriesContentHandler handler = null;
private Map<String, String> unitMap = null;
private long timeZoneOffset = 0L;

private float[] values = FloatArrayUtils.EMPTY_ARRAY;


private byte[] byteBuffer = ByteArrayUtils.EMPTY_ARRAY;
private short[] shortBuffer = ShortArrayUtils.EMPTY_ARRAY;

private static class SeriesInfo {
int seriesNr = 0;
String locationID = null;
String dataType = null;
String unit = null;
String tableName = null;
TimeZone timeZone = null;
float missVal = -999F;
float traceVal = 0.0F;
boolean forecast = false;

@Override
public String toString() {
return seriesNr + ":" + dataType + ':' + locationID + ':' + tableName;
}
}

public HymosTransferDbTimeSeriesParser() {
}

@Override
public void parse(Connection connection, TimeSeriesContentHandler contentHandler) throws
Exception {
[Link] = connection;
[Link] = [Link]().getRawOffset();

unitMap = readUnitMap();

SeriesInfo[] seriesInfos = readSeriesInfos();


for (int i = 0; i < [Link]; i++) {
SeriesInfo seriesInfo = seriesInfos[i];
try {
read(seriesInfo);
} catch (Exception e) {
[Link]("Reading time serie failed " + seriesInfo, e);
}
}
}

@SuppressWarnings({"OverlyLongMethod"})
private SeriesInfo[] readSeriesInfos() throws SQLException {
ArrayList<SeriesInfo> list = new ArrayList<SeriesInfo>();
Statement statement = [Link]();
try {
ResultSet resultSet = [Link]("SELECT * FROM Series");
try {
int seriesNrColumnIndex = [Link]("ID");
int realStatColumnIndex;
try {
realStatColumnIndex = [Link]("REALSTAT");
} catch (SQLException e) {
realStatColumnIndex = [Link]("LocationId");
}
int dataTypeColumnIndex;
try {
dataTypeColumnIndex = [Link]("DATATYPE");
} catch (SQLException e) {
dataTypeColumnIndex = [Link]("ParameterId");
}
int tableNameColumnIndex = [Link]("TABLENAME");
int missValColumnIndex;
try {
missValColumnIndex = [Link]("MISSVAL");
} catch (SQLException e) {

missValColumnIndex = [Link]("MissingValue");
}

int traceValColumnIndex;
try {
traceValColumnIndex = [Link]("TRACEVAL");
} catch (SQLException e) {
traceValColumnIndex = -1;
}

int forecastColumnIndex;
try {
forecastColumnIndex = [Link]("FORECAST");
} catch (SQLException e) {
forecastColumnIndex = -1;
}

while ([Link]()) {
SeriesInfo seriesInfo = new SeriesInfo();
[Link] = [Link](seriesNrColumnIndex);
if ([Link]()) [Link]("Parse series info for " +
[Link]);
[Link]
= [Link](realStatColumnIndex).trim();

[Link]
= [Link](dataTypeColumnIndex).trim();

[Link] = [Link]([Link]);
// if ([Link] == null)
// [Link]([Link], "Can not find
datatype/parameter in table parameters " + [Link]);

[Link]
= [Link](tableNameColumnIndex).trim();

if ([Link] == null) {
throw new SQLException("Table name is not specified for series:" +
[Link]);
}

[Link] = [Link](missValColumnIndex);
if ([Link]()) [Link] = [Link];

if (traceValColumnIndex != -1) {
[Link] = [Link](traceValColumnIndex);
if ([Link]()) [Link] = 0;
}

if (forecastColumnIndex != -1) {
[Link] = [Link](forecastColumnIndex);
}

if ([Link]()) [Link]("series info parsed " + seriesInfo);

[Link](seriesInfo);

}
} finally {
[Link]();

}
} finally {
[Link]();
}
return [Link](new SeriesInfo[[Link]()]);
}

private Map<String, String> readUnitMap() throws SQLException {

Map<String, String> res = new HashMap<String, String>();


Statement statement = [Link]();
try {
ResultSet resultSet = [Link]("SELECT * FROM Parameter");
try {
int idColumnIndex = [Link]("ID");
int unitColumnIndex = [Link]("UNIT");

while ([Link]()) {
String parId = [Link](idColumnIndex);
String unit = [Link](unitColumnIndex);
[Link](parId, unit);
}
} finally {
[Link]();
}
} finally {
[Link]();
}

return res;
}

@SuppressWarnings({"OverlyLongMethod"})
private void read(SeriesInfo seriesInfo) throws IOException, SQLException {
[Link]("Start reading table " + [Link] + " for " + [Link] +
" and " + [Link]);

int rowCount = 0;
int missingValueCount = 0;
int traceValueCount = 0;
float minValue = Float.POSITIVE_INFINITY;
float maxValue = Float.NEGATIVE_INFINITY;
long minTime = Long.MAX_VALUE;
long maxTime = Long.MIN_VALUE;

boolean hasLabel = hasLabel(seriesInfo);


String labelSql = hasLabel ? ", LABEL" : "";
String sql;
if ([Link]) {
sql = "SELECT FORECASTDATE, VALUEDATE, FORECASTVALUE" + labelSql + " FROM [" +
[Link] + ']';
} else {
sql = "SELECT MEASDATE, MEASVALUE" + labelSql + " FROM [" + [Link] + ']'
;
}

int forecastDateColumn = [Link] ? 1 : -1;


int valueDateColumn = [Link] ? 2 : 1;
int valueColumn = [Link] ? 3 : 2;
int labelColumn = [Link] ? 4 : 3;

Statement statement = [Link]();
try {
ResultSet rows = [Link](sql);
int columnType = [Link]().getColumnType(valueColumn);
boolean binary = [Link](columnType);
try {
DefaultTimeSeriesHeader header = null;
while ([Link]()) {
rowCount++;
long time = [Link](valueDateColumn).getTime() - timeZoneOffset;
[Link](time);
if (time > maxTime) maxTime = time;
if (time < minTime) minTime = time;

long forecastTime = [Link]


? [Link](forecastDateColumn).getTime() - timeZoneOffset :
Long.MIN_VALUE;

if (header == null || [Link]() != forecastTime) {


header = new DefaultTimeSeriesHeader();
[Link]([Link]);
[Link]([Link]);
[Link]([Link]);
[Link](forecastTime);
[Link](header);
if ([Link]()) {
[Link]("Table skipped " + [Link] + " for " +
[Link] + " and " + [Link]);
return;
}
}

if ([Link]()) continue;

if (hasLabel)
[Link]([Link](labelColumn));

if (binary) {
byte[] bytes = [Link](valueColumn);
try {
InputStream inputStream;
try {
inputStream = new UnsyncBufferedInputStream(new GZIPInputStream(
new ByteArrayInputStream(bytes), 100000), 100000);
} catch (IOException e) {
inputStream = new ByteArrayInputStream(bytes);
}
boolean asciiGrid = isAsciiGrid(inputStream);
if (asciiGrid) {
AsciiGridReader reader = new AsciiGridReader(inputStream,
"hymostransferdb", [Link]());
try {
Geometry geometry = [Link]();
if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() *
NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}
[Link](values);
[Link](geometry);
[Link](values);
[Link]();

} finally {
[Link]();
}
} else {
MosaicGridFileReader reader = new
MosaicGridFileReader(inputStream);
try {
Geometry geometry = [Link]();
if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() *
NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}

[Link](values, byteBuffer, shortBuffer);


[Link](geometry);
[Link]([Link]());
[Link](values);
[Link]();
} finally {
[Link]();
}
}

} catch (Exception e) {

[Link]("HymosTransferDbPare: Can not read grid " + header);

}
} else {
float value = [Link](valueColumn);
if (value == [Link]) {
missingValueCount++;
value = [Link];
}

// everyvalue that is about -999 is also recognised as missing value


// hack for taiwan
if (value > -1000 && value <= -998.99) {
missingValueCount++;
value = [Link];
}
if (![Link](value) && value <= [Link]) {
traceValueCount++;
value = 0;
}
if (value < minValue) minValue = value;
if (value > maxValue) maxValue = value;
}

[Link](value);
[Link]();

rowCount++;

} finally {
[Link]();
}
} finally {
[Link]();
}

if (rowCount == 0) {
[Link]("No values found");

} else {
Period period = new Period(minTime, maxTime);
[Link]("Period: " + period);
[Link]("Row count: " + rowCount);
[Link]("Missing value count: " + missingValueCount);
[Link]("Trace value count: " + traceValueCount);
}
}

private static boolean isAsciiGrid(InputStream inputStream) {


[Link](100);
try {
try {
InputStreamReader reader = new InputStreamReader(inputStream);
char[] chars = new char[99];
[Link](chars);
String header = new String(chars);
return [Link](header, "cols");
} finally {
[Link]();
}
} catch (Exception e) {
return false;
}
}

private boolean hasLabel(SeriesInfo seriesInfo) throws SQLException {


Statement statement = [Link]();
try {
[Link](1);
ResultSet resultSet = [Link]("SELECT * FROM [" + [Link]
+ ']');
try {
return [Link]([Link](), "LABEL");
} finally {
[Link]();
}
} finally {
[Link]();
}
}

}
]]>

KNMI CSV

Overview

Imports time series data from the KNMI CSV files that are delivered to Dutch waterboards. The files contain both daily rainfall and evaporation.
The files have an extension of "*.dat".

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.

ModuleConfigFiles/

The following example of an Import Module Instance will import the time series as equidistant daily series for timezone GMT+1 hour. Notice that
FEWS should store the time at the end of the day. Therefore the import timezone should be -23:00 instead of +01:00.

[Link]
<timeSeriesImportRun ......"="......"">
<import>
<general>
<importType>KNMICSV</importType>
<folder>$IMPORT_FOLDER_KNMI$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI$</backupFolder>
<idMapId>IdImportKNMI</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>-23:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI_P.meting_dag</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="second" timezone="GMT+1" multiplier="86400"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI_E.ref.Makkink_dag</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="second" timezone="GMT+1" multiplier="86400"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles/

Defines mappings between KNMI and FEWS parameters and locations.

sample of [Link]
<map externalparameter="908" internalparameter="[Link]" internallocation="KNMIDN"
externallocation="908"/>
<map externalparameter="911" internalparameter="[Link]" internallocation="KNMIDT"
externallocation="911"/>
]]>

Important in this configuration is that the externalParameter and the externalLocation have the same identifier.

Example File/

ab0115a_aamaas.dat

Java source code

[Link]

[Link]

{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();

for (String[] buffer = new String[3]; [Link](',', buffer) != -1;) {


[Link](buffer[0]);
[Link](header);
[Link]([Link](), "yyyyMMdd", buffer[1]);
[Link]('.', buffer[2]);
[Link]();
}
}
}
]]>

KNMI EPS

Overview

Imports time series data with forecasts from the KNMI that are delivered to the Dutch waterboards. The files contain the following 52 forecasts:

deterministic forecast
control run
ensemble forecast of 50 members.

See the KNMI site for information on all possible EPS parameters and locations. Two forecasts are supported: a 10-day forecast and a 15-day
forecast. Notice that the 15-day forecast still contains only a 10-day deterministic forecast.

Notice that the rainfall forecast is supplied as an accumulative time series in 0.1 mm. All time series have a 6 hourly time step in GMT.

A complete forecast is supplied as a zip file that contains an individual file for each location. In each file the forecast timeseries for a list of
parameters are supplied.

Example: the file "ECME_VEN_2007102912.zip" contains the following list of files:

ECME_VEN_200710291200_NL001_LC
ECME_VEN_200710291200_NL002_LC
ECME_VEN_200710291200_NL004_LC
ECME_VEN_200710291200_NL009_LC
ECME_VEN_200710291200_NL011_LC
ECME_VEN_200710291200_NL012_LC
ECME_VEN_200710291200_NL015_LC
ECME_VEN_200710291200_NL018_LC
ECME_VEN_200710291200_NL020_LC

for the stations NL001, NL002 etc. at forecast time 2007-10-29 12:00.

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit
(for example from 0.1 mm per 6 hours to mm/hr) it is also required to configure a Unit Conversion file.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 6
hours.

[Link]
<timeSeriesImportRun ......"="......"">
<import>
<general>
<importType>KNMIEPS</importType>
<folder>$IMPORT_FOLDER_KNMI_EPS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_EPS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_EPS$</backupFolder>
<idMapId>IdImportEPS</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<importTimeZone>
<!--EPS is in GMT-->
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-EPS</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-EPS</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<ensembleId>EPS</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-EPS</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-EPS</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>

<!--to let the import module know that the KNMI rainfall is an accumulative timeseries
in 0.1 mm/6hr that should be disaggregated and converted to for example mm/hr-->
<externUnit unit="0.1 mm/6hr" cumulativeSum="true" parameterId="[Link]"/>
<externUnit unit="0.1 mm/6hr" cumulativeSum="true" parameterId="[Link]"/>
<externUnit unit="0.1 mm/6hr" cumulativeSum="true" parameterId="[Link]"/>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles

Defines mappings between KNMI and FEWS parameters and locations.

sample of [Link]
<parameter internal="[Link]" external="13011_deterministic"/>
<parameter internal="[Link]" external="13011_control"/>
<parameter internal="[Link]" external="13011_ensemble"/>
<location internal="KNMI_NL001" external="NL001"/>
<location internal="KNMI_NL002" external="NL002"/>
<location internal="KNMI_NL004" external="NL004"/>
<location internal="KNMI_NL009" external="NL009"/>
<location internal="KNMI_NL011" external="NL011"/>
<location internal="KNMI_NL012" external="NL012"/>
<location internal="KNMI_NL015" external="NL015"/>
<location internal="KNMI_NL018" external="NL018"/>
<location internal="KNMI_NL020" external="NL020"/>

]]>

Important in this configuration is that the externalParameter is manipulated to identify the deterministic, control and ensemble forecasts. The
import module automatically appends a suffix to the parameter ID found in the import files. If the import file contains a parameter "13011" for rainfall,
the import generates the following externalParameters: "13011_deterministic", "13011_control" and "13011_ensemble".

UnitConversionFile

Defines the conversion of the units that should be applied.

sample of [Link]
<unitConversions ...................>
<unitConversion>
<inputUnitType>0.1 mm/6hr</inputUnitType>
<outputUnitType>mm/hr</outputUnitType>
<multiplier>0.01666667</multiplier>
<incrementer>0</incrementer>
</unitConversion>
........
........
</unitConversions>
]]>

Example Files

Name Size Creator Creation Date Comment

ECME_VEN_2007102912.zip 417 kB Klaas-Jan van Heeringen 15-11-2007 08:28 Example of the 15 day forecast zip file

ImportKNMI 1.00 [Link] 2 kB Klaas-Jan van Heeringen 15-11-2007 08:30 Module Config file

IdImportEPS 1.00 [Link] 0.9 kB Klaas-Jan van Heeringen 15-11-2007 08:30 Id Map file

ImportKNMIUnits 1.00 [Link] 1 kB Klaas-Jan van Heeringen 15-11-2007 08:30 Unit Conversion File

Java source code

[Link]

[Link]

{
private TimeSeriesContentHandler contentHandler = null;
private LineReader reader = null;

private long forecastTime = Long.MIN_VALUE;


private String[] buffer = new String[1000];

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = reader;
[Link] = contentHandler;
[Link](99999F);

DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();


[Link](reader);

[Link]([Link]([Link](), ' '));

while ([Link](' ', buffer) != -1) {


String parameterId = buffer[0];
[Link]("yyyyMMddHH", buffer[1], [Link]());
forecastTime = [Link]();

[Link](parameterId + "_deterministic");
[Link](header);
parseValues();

[Link](parameterId + "_control");
[Link](header);
parseValues();

for (int i = 0; i < 50; i++) {


[Link](parameterId + "_ensemble");
[Link](i);
[Link](header);
parseValues();
}
}
}

private void parseValues() throws Exception {


int size = [Link](' ', buffer);
for (int i = 0; i < size; i++) {
[Link](forecastTime + i * 6 * TimeUnit.HOUR_MILLIS);
[Link]('.', buffer[i]);
[Link]();
}
}
}
]]>

KNMI HDF5

Overview

Imports time series data with radar rainfall information from the KNMI that is delivered to the Dutch waterboards. The files are in HDF5 file
format. See the KNMI site for information on the files.

Notice that the rainfall forecast is supplied as an accumulative rainfall sum in 0.01 mm over the last 3 hours where the time is in GMT. The files
are supplied once per hour.

KNMI supplies several radar images, like:

5 minute radar intensity (uncalibrated)
accumulated calibrated precipitation of last 3 hours, supplied every hour
accumulated calibrated daily precipitation, supplied every day

The accumulated precipitation files contain the rainfall depth (in millimeters). The 5-minute radar intensity files contain an 8-bit value (0-255) that
represents the rainfall depth. To convert from this bit value to a normal rainfall depth an additional conversion should be applied. The conversion
table is listed at the KNMI site. The conversion can be done by a Transformation that uses a log-function that fits the conversion table.

This description continues using the accumulated precipitation as example configuration.

This import uses a general C++ DLL for reading the HDF5 files. On some Windows systems the correct runtime components of
Visual C++ Libraries are not installed by default. A Microsoft Visual C++ 2008 SP1 Redistributable Package must be installed on
the computers to solve the problem. Problems have been found on Windows 2003 and Windows 2008 server computers.

On Linux, importing HDF5 files will fail if the operating system is too old. From available evidence,
the kernel must be at least version 2.6.18 (see the output of the "uname -a" command).

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit
(for example from 0.01 mm per 3 hours to mm/hr) it is also required to configure a Unit Conversion file.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 3
hours.

[Link]
<import>
<!--Radar-->
<general>
<importType>KNMI-HDF5</importType>
<folder>$IMPORT_FOLDER_KNMI_RADAR$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_RADAR$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_RADAR$</backupFolder>
<idMapId>IdImportRADAR</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<!--radar is in GMT-->
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-RADAR</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>KNMI-RADAR</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" timezone="GMT+1" multiplier="3"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
<!--to let the import module know that the KNMI rainfall is in 0.01 mm/3hr that should be
converted to for example mm/hr-->
<externUnit unit="0.01 mm/3hr" parameterId="[Link]"/>
</import>

]]>

IdMapFiles

Defines mappings between KNMI and FEWS parameters and locations.

sample of [Link]
<map externalparameter="image_data" internalparameter="[Link]" internallocation=
"KNMI-RADAR" externallocation="KNMI-RADAR"/>

]]>

UnitConversionFile

Defines the conversion of the units that should be applied.

sample of [Link]
<unitConversions ...................>
<unitConversion>
<inputUnitType>0.01 mm/3hr</inputUnitType>
<outputUnitType>mm/hr</outputUnitType>
<multiplier>0.00333333</multiplier>
<incrementer>0</incrementer>
</unitConversion>
........
........
</unitConversions>
]]>

Grid definition

Defines the radar grid. Unlike for GRIB files such as HIRLAM, this definition is not read from the file itself; therefore a
grid definition is required for the KNMI radar grid.

sample of [Link]
<grids ...>
<regular locationid="KNMI-RADAR">
<rows>765</rows>
<columns>700</columns>
<polarStereographic>
<originLatitude>90</originLatitude>
<originLongitude>0</originLongitude>
<trueScalingLatitude>60</trueScalingLatitude>
<equatorRadius>6378137</equatorRadius>
<poleRadius>6356752</poleRadius>
</polarStereographic>
<firstCellCenter>
<x>0</x>
<y>-3650500</y> <!-- = projectie_shift + 1/2 cellsize = 3650000 + 500 -->
<z>0</z>
</firstCellCenter>
<xCellSize>1000</xCellSize>
<yCellSize>1000</yCellSize>
</regular>
<!-- the old KNMI 2,5 km grid-->
<regular locationid="KNMI-RADAR2.5km">
<rows>256</rows>
<columns>256</columns>
<polarStereographic>
<originLatitude>90</originLatitude>
<originLongitude>0</originLongitude>
<trueScalingLatitude>60</trueScalingLatitude>
<equatorRadius>6378388</equatorRadius>
<poleRadius>6356912</poleRadius>
</polarStereographic>
<firstCellCenter>
<x>1250</x>
<y>-3728515</y> <!-- = projectie_shift + 1/2 cellsize = 3727265 + 1250 -->
<z>0</z>
</firstCellCenter>
<xCellSize>2500</xCellSize>
<yCellSize>2500</yCellSize>
</regular>
</grids>
]]>

Example Files

See attached files

KNMI IRIS

Overview

Imports time series data with observed daily rainfall from the KNMI that is delivered to the Dutch waterboards. The files are in CSV format with file
extension (*.dat) and contain records with the following definition:
<location ID>, <location name>, <X in km>, <Y in km>, <date in YYYYMMDD>, <value in 0.1 mm>. See the example file and the KNMI site.

Notice that the rainfall is measured at 08:00 UT (=GMT), but this time is not written in the file. Therefore the time will be read by the FEWS import
reader as 00:00 hours. The rainfall is supplied as an accumulative time series over the last 24 hours. This requires the time step in FEWS to be
configured as a daily time step (see the timeStep element in the example below).


More information on the KNMI rainfall data sets can be found in the following document Maand Neerslag Overzicht.

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit
(for example from 0.1 mm/day to mm/day) it is also required to configure a Unit Conversion file.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 24
hours.

[Link]
<timeSeriesImportRun ...>
<import>
<!--IRIS (24h)-->
<general>
<importType>KNMIIRIS</importType>
<folder>$IMPORT_FOLDER_KNMI_IRIS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_IRIS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_IRIS$</backupFolder>
<idMapId>IdImportIRIS</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<!--data is supplied at 08:00 GMT, but in the file this time is not mentioned, so read as
00:00 hrs.
so the time zone offset (to GMT) should be -8 hrs-->
<importTimeZone>
<timeZoneOffset>-08:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-IRIS</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-IRIS</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" timezone="GMT-8" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>

<!--to let the import module know that the KNMI rainfall is an accumulative timeseries in
0.1 mm/d
that should be converted to for example mm/d-->
<externUnit unit="0.1 mm/d" parameterid="[Link]"/>

</import>
</timeSeriesImportRun>
]]>

IdMapFiles

Defines mappings between KNMI and FEWS parameters and locations.

sample of [Link]
<map externalparameter="827" internalparameter="[Link]" internallocation="KNMI_827"
externallocation="827"/>
<map externalparameter="831" internalparameter="[Link]" internallocation="KNMI_831"
externallocation="831"/>
<map externalparameter="896" internalparameter="[Link]" internallocation="KNMI_896"
externallocation="896"/>
<map externalparameter="902" internalparameter="[Link]" internallocation="KNMI_902"
externallocation="902"/>
.....

]]>

Important in this configuration is that the externalParameter and the externalLocation have the same identifier.

UnitConversionFile

Defines the conversion of the units that should be applied.

sample of [Link]
<unitConversions ...>
<unitConversion>
<inputUnitType>0.1 mm/d</inputUnitType>
<outputUnitType>mm/d</outputUnitType>
<multiplier>0.1</multiplier>
<incrementer>0</incrementer>
</unitConversion> ........
........
</unitConversions>
]]>

Example file

An example of a csv-file from IRIS to be imported using the KNMI-IRIS import Module.

sample of irisgegevens_20071025.dat

Example Files

See attached files

Java source code

[Link]

[Link]

{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();

for (String[] buffer = new String[6]; [Link](',', buffer) != -1;) {


[Link](buffer[0]);
[Link](buffer[0]);
[Link](header);
[Link]([Link](), "yyyyMMdd", buffer[4]);
[Link]('.', buffer[5]);
[Link]();
}
}
}
]]>

KNMI SYNOP

Overview

Imports time series data with observed hourly values from the KNMI that is delivered to the Dutch waterboards. The files are in a kind of CSV
format with file extension (*.txt), where not a comma but a ";" is used as separator. See the KNMI site for the detailed contents and definition of the file.

Notice that the parameters are not listed in the file. The parameters are hard coded in the import routines as defined at the KNMI site. Notice also
that text fields like "cloudy" are not imported. Only parameters that contain values should be read, like rainfall (RhRhRh).

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit
(for example from 0.1 mm/hr to mm/hr) it is also required to configure a Unit Conversion file.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 1
hour.

[Link]
<timeSeriesImportRun ...>
<import>
<!--SYNOP (1h)-->
<general>
<importType>KNMISYNOPS</importType>
<folder>$IMPORT_FOLDER_KNMI_SYNOPS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_SYNOPS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_SYNOPS$</backupFolder>
<idMapId>IdImportSYNOPS</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-SYNOPS</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-SYNOPS</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<externUnit unit="0.1 mm/hr" parameterid="[Link]"/>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles

Defines mappings between KNMI and FEWS parameters and locations.

sample of [Link]
<map externalparameter="RhRhRh" internalparameter="[Link]" internallocation="KNMI_370"
externallocation="06370"/>
<map externalparameter="RhRhRh" internalparameter="[Link]" internallocation="KNMI_479"
externallocation="06479"/>

]]>

Important in this configuration is that the externalParameters are as defined at the KNMI site. They are not listed in the import files and are therefore
hard-coded in the import routines.

UnitConversionFile

Defines the conversion of the units that should be applied.

sample of [Link]
<unitConversions ...>
<unitConversion>
<inputUnitType>0.1 mm/hr</inputUnitType>
<outputUnitType>mm/hr</outputUnitType>
<multiplier>0.1</multiplier>
<incrementer>0</incrementer>
</unitConversion> ........
........
</unitConversions>
]]>

Example file

An example of a file to be imported using the KNMISYNOPS import module.

sample of 2007102503_decoded_synops_NL.txt

Example Files

See attached files

Java source code


[Link]

[Link]

{
private static final String[] COLUMNS = {"IX", null, "N", null, "ff", "fxfx", "TTT", "TnTnTn",
"TxTxTx", "TgTgTg",
"TwTwTw", "TdTdTd", "UU", "VVVV", "PPPP", "tr", "RRR", "RhRhRh", "Dr", "QQQ", "ddd"};

private static final int PARAM_IDX_OFFSET = 6;


private static final int DDD_COLUMN_INDEX = [Link](COLUMNS, "ddd");

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
String line;
// find first line
while ((line = [Link]()) != null && [Link]() < 2) {
// do nothing
}

[Link]("*");
String[] lineItems = new String[28];
// lines are spread actually over two rows
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
while (line != null && [Link]() > 1) {
line += [Link]();
int count = [Link](line, ';', '\"', lineItems);

if (count != 28)
throw new Exception("read data does not contain expected number of columns");

[Link]([Link](), "yyyyMMddHH", lineItems[0]);

for (int i = 0; i < [Link]; i++) {


if (COLUMNS[i] == null) continue;
String value = lineItems[i + PARAM_IDX_OFFSET].trim();
if ([Link]() == 0) continue;
if (i == DDD_COLUMN_INDEX && [Link]("990")) {
[Link]([Link]);
[Link]([Link]);
} else {
[Link]('.', value);

[Link](OutOfDetectionRangeFlag.INSIDE_DETECTION_RANGE);
}
[Link](lineItems[1]);
[Link](COLUMNS[i]);
[Link](header);
[Link]();
}
// read next line to start gathering new line info
line = [Link]();
}
}
}
]]>

Landsat-HDF5

Overview

Imports time series data from the Landsat satellite. The files are in HDF5 file format. See the NASA site for information on the files.

Each file contains a single image of one particular meteorological parameter. The following parameters are supported (note: these are the names
of the data items in the HDF5 files):

"LAI"
" ET" (note the three spaces in front!)
"ET"
"FVC"
"LST"
"SZA"
"SC" - snow cover

This import uses a general C++ DLL for reading the HDF5 files. On some Windows systems the correct runtime components of
Visual C++ Libraries are not installed by default. A Microsoft Visual C++ 2008 SP1 Redistributable Package must be installed on
the computers to solve the problem. Problems have been found on Windows 2003 and Windows 2008 server computers.

On Linux, importing HDF5 files will fail if the operating system is too old. From available evidence,
the kernel version must be at least 2.6.18 (see the output of the "uname -a" command).

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the imported data to a proper
unit it may also be required to configure a Unit Conversion file.

ModuleConfigFiles

The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 3
hours.

[Link]
<import>
<!--Meteo data-->
<general>
<importType>Landsat-HDF5</importType>
<folder>$IMPORT_FOLDER_LANDSAT$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_LANDSAT$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_LANDSAT$</backupFolder>
<idMapId>IdImportLandsat</idMapId>
<unitConversionsId>ImportLandsatUnits</unitConversionsId>
<!--radar is in GMT-->
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Landsat</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportLandsat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snow-cover</parameterId>
<locationId>Landsat-grid</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" timezone="GMT+1" multiplier="3"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
</import>

]]>

IdMapFiles

Defines mappings between Landsat and FEWS parameters and locations.

sample of [Link]
<map externalparameter="SC" internalparameter="Snowcover" internallocation="Landsat-grid"
externallocation="Landsat-grid"/>

]]>

Grid definition

Defines the Landsat grid. As not all of this information is present in the Landsat files, it needs to be defined in this
way.

sample of [Link]
<grids ...>
<regular locationid="GridName">
<rows>651</rows>
<columns>1701</columns>
<geostationarySatelliteView>
<centralMeridian>0.0</centralMeridian>
</geostationarySatelliteView>
<firstCellCenter>
<!-- First x must be -COFF*2**16/CFAC, first y must be (NR-LOFF)*2**16/LFAC as found
in the HDF5 files
Cell sizes must be 2**16/CFAC and 2**16/LFAC -->
<x>-1.4796</x>
<y>8.6854</y>
</firstCellCenter>
<xCellSize>0.00480387</xCellSize>
<yCellSize>0.00480387</yCellSize>
</regular></grids>
]]>

Description of the coordinate system:

The Landsat files contain a set of attributes, of which COFF, LOFF, CFAC and LFAC are the most important ones, as they can be used
to determine the coordinate system.
According to the document [Link] the image coordinates have to
be converted to intermediate coordinates x and y that in turn can be converted into longitude and latitude.
For the Delft-FEWS configuration we need the extremes for x and y (as the satellite image is a rectangle in these coordinates).
The firstCellCenter's x and y need to be computed as x = -COFF * 2^16 / CFAC and y = (NR - LOFF) * 2^16 / LFAC, as indicated in the comment
in the grid example above.
The cell sizes are to be determined as 2^16 / CFAC and 2^16 / LFAC.
The centralMeridian may be taken from the Landsat file, but care must be taken: because the cell centers are specified, a shift may be needed to get
the image positioned correctly.
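A minimal sketch of these computations is shown below. The attribute values are placeholders that would be read from the HDF5 file; the formulas simply restate the comment in the grid example above.

public class LandsatGridDefinitionSketch {
    public static void main(String[] args) {
        // Placeholder attribute values; read COFF, LOFF, CFAC, LFAC and the row count NR from the HDF5 file.
        double coff = 100.0, loff = 1800.0, cfac = 1.0e7, lfac = 1.0e7;
        int nr = 651;

        double scale = Math.pow(2, 16); // 2^16
        double xCellSize = scale / cfac;
        double yCellSize = scale / lfac;
        double firstX = -coff * scale / cfac;       // firstCellCenter x
        double firstY = (nr - loff) * scale / lfac; // firstCellCenter y

        System.out.printf("x=%f y=%f dx=%f dy=%f%n", firstX, firstY, xCellSize, yCellSize);
    }
}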

Example Files

See attached files

Note on possible problems

We have seen problems with this import on a number of systems, notably Windows Server 2008. It turned out that on those systems the
underlying runtime libraries need several extra DLLs that are not required on other systems (see for instance the page on known problems for the
HDF viewer). Installing the .NET 3.5 Service Pack 1 solves this problem ([Link]).

LUBW

Overview

Imports time series data in ASCII format from the Landes Umwelt Baden-Württemberg (LUBW) Forecasting Centre in Germany. The LUBW files contain a
single parameter for a single location. The parameter follows implicitly from the file extension; e.g. in the file [Link] the parameter is the
file extension QVHS (discharge). The first line in the file is a header with information on the location and the data period in the file.

Configuring the Import

The reader is named LUBW which should be configured in the general section of the import. An example import configuration is shown below:

<?xml version="1.0" encoding="UTF-8"?>


<timeSeriesImportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<import>
<general>
<importType>LUBW</importType>
<folder>$IMPORT_FOLDER_LUBW$</folder>
<idMapId>IdImportLUBW</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportLUBW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>LUBW_Rijn</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportLUBW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>LUBW_Rijn</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<externUnit parameterId="[Link]" unit="cm"/>
</import>
</timeSeriesImportRun>

An example IdMapping file (that maps the location and parameter Ids of the also attached example input file) is shown below:

<?xml version="1.0" encoding="UTF-8"?>


<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link] [Link]

<map internalParameter="[Link]" internalLocation="H-RN-0689" externalParameter="WVHS"


externalLocation="QMAXAU_ARIMA"/>
<map internalParameter="[Link]" internalLocation="H-RN-0689" externalParameter="QVHS"
externalLocation="QMAXAU_ARIMA"/>

</idMap>

The file format (Example: [Link])

K_MAXAU *03.01.2009 T0=0 DT=1 N=462 KM=999.99 MODUL(208) VT=15.01.09-05:00
736.00 736.00 736.00 728.00 725.00 721.00 721.00 721.00 721.00 721.00
725.00 732.00 732.00 736.00 743.00 747.00 755.00 751.00 755.00 758.00
762.00 762.00 762.00 766.00 762.00 762.00 762.00 758.00 758.00 755.00
755.00 751.00 751.00 747.00 743.00 740.00 732.00 728.00 725.00 717.00
717.00 717.00 713.00 710.00 710.00 710.00 710.00 706.00 703.00 699.00
............

NOTE :

1. All columns in the text file are separated by the space " " character
2. The parameter id used for mapping must always be configured in upper case in the ID mapping configuration file.
3. The header line in the example file contains the following information (a sketch of extracting these fields is shown below the list):
a. K_MAXAU is the location Id
b. *03.01.2009 is the date of the first data element in the file
c. N=462 is the number of data elements in the file
d. VT=15.01.09-05:00 is the external forecast time
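As a hedged illustration (not the actual FEWS LUBW reader), the sketch below extracts the documented header fields by splitting the example header line on whitespace:

public class LubwHeaderSketch {
    public static void main(String[] args) {
        String header = "K_MAXAU *03.01.2009 T0=0 DT=1 N=462 KM=999.99 MODUL(208) VT=15.01.09-05:00";
        String[] tokens = header.trim().split("\\s+");

        String locationId = tokens[0];                // a. location Id
        String startDate = null, count = null, forecastTime = null;
        for (String token : tokens) {
            if (token.startsWith("*")) startDate = token.substring(1);           // b. date of first data element
            else if (token.startsWith("N=")) count = token.substring(2);         // c. number of data elements
            else if (token.startsWith("VT=")) forecastTime = token.substring(3); // d. external forecast time
        }
        System.out.println(locationId + " " + startDate + " " + count + " " + forecastTime);
    }
}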

Matroos NetCDF

Overview

This import is available in DELFT-FEWS versions after 25-11-2008

Imports time series data in NetCDF format from MATROOS forecast databases. The import reader creates a URL for direct data retrieval
from Matroos. This NetCDF data import retrieves regular and/or irregular grids from the MATROOS database. There are also three types of
NOOS import functions in Delft-FEWS to import scalar time series from MATROOS; see the NOOS import function for that type.

The import function for direct retrieval of grid data (maps) is matroos_netcdfmapseries.

More information on the retrieval of time series from Matroos can be found on: [Link]

Configuring the Import

An example of the matroos_netcdfmapseries configuration will be given here. The reader is named matroos_netcdfmapseries which should be
configured in the general section of the import. The general section must also contain the server URL and a correct username and password if
you need to log-in. The relativeViewPeriod in the general section is used to select the period to retrieve data for.

An example import configuration is shown below:

ImportMatroosMap 1.00 [Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>matroos_netcdfmapseries</importType>
<serverUrl>[Link]
<user>XXXXX</user>
<password>XXXXx</password>
<relativeViewPeriod unit="hour" start="-1" end="12"/>
<idMapId>IdImportMatroosMap</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Matroos</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMatroos_Maps</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snelheid.u.F0</parameterId>
<locationId>hmcn_zeedelta</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="30"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportMatroos_Maps</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snelheid.v.F0</parameterId>
<locationId>hmcn_zeedelta</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="30"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

When a locationSet or multiple time series sets are configured in the import module instance, the Noos readers will construct URLs for each time
series and retrieve the data from the Matroos database sequentially. An improvement of the import readers could be to construct a more complex
URL and retrieve the data for multiple time series in one URL query.

The IdMapping configuration is very important because this maps the internal FEWS Id's to the Matroos Id's. In the IdMapping the following
FEWS and Matroos elements are mapped:

the FEWS externalLocation is used to map the Matroos source element


the FEWS externalParameter is used to map the Matroos color element

The FEWS external qualifiers can be used to add extra information to the URL. In the example below the following information is stored in the
external qualifiers. This information is used to resample the grid: MATROOS will interpolate the original grid to the grid definition you give in the
qualifiers.

the FEWS externalQualifier1 is used to map the Matroos coordsys element


the FEWS externalQualifier2 is used to map the Matroos xmin and xmax elements
the FEWS externalQualifier3 is used to map the Matroos ymin and ymax elements
the FEWS externalQualifier4 is used to map the Matroos xn and yn elements

An example of URL can be; [Link]


source=hmcn_zeedelta&color=velv&coordsys=RD&xmin=54000&xmax=67000
&ymin=442000&ymax=449000&xn=101&yn=101&from=200811250810&to=200811252110&timezone=gmt&format=nc
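For illustration only, the sketch below shows how such a query string can be composed from the id-mapped elements; the helper is hypothetical (it is not part of the FEWS code) and the parameter names simply follow the example URL above.

public class MatroosMapUrlSketch {
    public static void main(String[] args) {
        String serverUrl = "<serverUrl from the general section>"; // placeholder
        String source = "hmcn_zeedelta";               // mapped from the FEWS externalLocation
        String color = "velv";                         // mapped from the FEWS externalParameter
        String qualifier1 = "coordsys=RD";             // FEWS externalQualifier1
        String qualifier2 = "xmin=54000&xmax=67000";   // FEWS externalQualifier2
        String qualifier3 = "ymin=442000&ymax=449000"; // FEWS externalQualifier3
        String qualifier4 = "xn=101&yn=101";           // FEWS externalQualifier4

        String url = serverUrl + "?source=" + source + "&color=" + color
                + "&" + qualifier1 + "&" + qualifier2 + "&" + qualifier3 + "&" + qualifier4
                + "&from=200811250810&to=200811252110&timezone=gmt&format=nc";
        System.out.println(url);
    }
}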

An example IdMapping file for the matroos_netcdfmapseries reader is shown below:

IdImportMatroosMap 1.00 [Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<map internalparameter="Snelheid.u.F0" externalqualifier3="ymin=442000&ymax=449000"
externalqualifier4="xn=101&yn=101" externalqualifier1="coordsys=RD" externalqualifier2=
"xmin=54000&xmax=67000" externallocation="hmcn_zeedelta" externalparameter="velu"
internallocation="hmcn_zeedelta"/>
<map internalparameter="Snelheid.v.F0" externalqualifier3="ymin=442000&ymax=449000"
externalqualifier4="xn=101&yn=101" externalqualifier1="coordsys=RD" externalqualifier2=
"xmin=54000&xmax=67000" externallocation="hmcn_zeedelta" externalparameter="velv"
internallocation="hmcn_zeedelta"/>
</idMap>
]]>

When importing grids in the FEWS database it is required to configure the grid characteristics in the [Link] file. The grid characteristics must
be similar to the grids imported from MATROOS.

Grids 1.00 [Link]


<description>HMCN Zeedelta Model</description>
<rows>101</rows>
<columns>101</columns>
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<firstCellCenter>
<x>54000</x>
<y>449000</y>
</firstCellCenter>
<xCellSize>130</xCellSize>
<yCellSize>70</yCellSize>

]]>

NetCDF format

The NetCDF format used can be found on the MATROOS webpage and the FEWS-PI pages.

Msw

MSW import (MFPS)

Overview

Imports time series data from MSW CSV files that are delivered from MFPS. The files contain both observed levels and flows in the main Dutch
rivers. The files have an extension of "*.csv".

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and a Import Module Instance file. See the attached example configuration
files.

ModuleConfigFiles/

The following example of an Import Module Instance will import the time series as equidistant daily series for time zone GMT+1. Notice that
FEWS should store the time at the end of the day; therefore the import time zone offset should be -23:00 instead of +01:00 (see the worked example below).
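A worked interpretation of this -23:00 trick (an assumption about how the import time zone offset is applied, not a statement from this guide): the daily MSW record only carries a date and is therefore read as 00:00; interpreting that 00:00 as being 23 hours behind GMT places the value at 23:00 GMT, which is 00:00 of the next day in GMT+1, i.e. the end of the measured day.

import java.time.LocalDateTime;

public class MswOffsetSketch {
    public static void main(String[] args) {
        LocalDateTime fileTime = LocalDateTime.of(2008, 1, 15, 0, 0); // date-only record, read as 00:00
        // With an import time zone offset of -23:00 the file time is assumed to be 23 hours behind GMT,
        // so the GMT time is the file time plus 23 hours ...
        LocalDateTime gmt = fileTime.plusHours(23);  // 2008-01-15T23:00 GMT
        // ... which is 00:00 of the next day in GMT+1: the end of the measured day.
        LocalDateTime gmtPlus1 = gmt.plusHours(1);   // 2008-01-16T00:00 local (GMT+1)
        System.out.println(gmt + " GMT = " + gmtPlus1 + " GMT+1");
    }
}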

[Link]
<general>
<importType>MSW</importType>
<folder>$IMPORT_FOLDER_MSW$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_MSW$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_MSW$</backupFolder>
<idMapId>IdImportMSW</idMapId>
<unitConversionsId>ImportMSWUnits</unitConversionsId>
<flagConversionsId>ImportMSWFlagConversions</flagConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>MSW</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMSW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>MSW_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
....
<externUnit unit="cm" parameterid="[Link]"/>

]]>

UnitConversion

Important in this configuration is that a unit conversion should be configured to convert the water levels from cm to m NAP (i.e. a multiplier of 0.01).

Example File/

[Link]

Java source code

[Link]

[Link]

{
private static final Logger log = [Link]([Link]);

private LineReader reader = null;


private String virtualFileName = null;
private TimeSeriesContentHandler contentHandler = null;
private String fileUnit = null;
private DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = contentHandler;
[Link] = virtualFileName;

[Link]("-999");
[Link] = reader;

[Link]('#');

parseHeader();
if ([Link]()) return;
parseData();
}

/**
* Read exactly 6 header-lines and extract:
* from line 2: location Id
* from line 3: parameter Id
*/
private void parseHeader() throws Exception {
String[] headerLines = [Link](reader, 6);
if ([Link] < 6) {
throw new Exception("[Link]: Header of the file " + [Link] + " has
unknown format.");
}
[Link]([Link](headerLines[1], '='));
[Link]([Link](headerLines[2], '='));
[Link](header);
}

/**
* Reads the file and put read data to the TimeSeriesContentHandler
* @return true if at least 1 line is read, otherwise false
* @throws IOException if any unexpected error occur while reading the file
*/
private void parseData() throws Exception {

//Read first data line


String[] firstLine = [Link](';');
if (firstLine == null ){
throw new Exception("File contains no lines with data: "+[Link]);
}

//Get the unit from the first data line


if ([Link] < 4) {
throw new Exception("File contains no unit specification : " + [Link]);
}

[Link](firstLine);

[Link] = firstLine[3];

//Put unit to the header and ask if this header is wanted (i.e. are data from this file
wanted ?)
[Link]([Link]);
[Link](header);
if ([Link]()) return;

//Parse other data from this data line and put them to the timeseries handler
parseDataLine(firstLine);

//Read remaining lines, parse the data and put them to the timeseries handler
for (String[] line; (line = [Link](';')) != null;) {
[Link](line);
parseDataLine(line);
}
}

/**

* Parse from each line the following data:
* from column 1: date
* from column 2: time
* from column 4: unit
* from column 5: flag
* from column 6: value
*
* Unit must be the same in all records, i.e. equal to [Link] that is read from the
first data record.
*/
private void parseDataLine(String[] line) throws IOException {

//Check whether the line contains the obligatory 6 columns


if ([Link] != 6) {
[Link]("[Link]: Line contains less than 6 columns at line "+
[Link]());
return;
}

[Link]([Link](), "yyyy/MM/dd", line[0], "HH:mm", line[1]);
if ([Link]()) return;

//Check unit (only if the unit already read)


if (!line[3].equals([Link])) {
[Link]("[Link]: Line contains an unexpected unit at line " +
[Link]() +
", "+[Link]+ " wil be used.");
}

[Link](line[4]);
[Link]('.', line[5]);
[Link]();
}
}

]]>

NETCDF-CF_PROFILE

Overview

This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Imports profile time series data from NetCDF files which comply with the CF standard. More information about the CF standard can be found at:
[Link]

See also the following two other types of NetCDF-CF imports that are available:

Time series (NETCDF-CF_TIMESERIES)


Grids (NETCDF-CF_GRID)

In DELFT-FEWS versions 2011.02 and later this import type can also be used to import data using OPeNDAP, see Import data
using OPeNDAP.

Configuring the import

An example of the NETCDF-CF_PROFILE import will be given here.

ImportNetcdf_Profile 1.00 [Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>NETCDF-CF_PROFILE</importType>
<folder>$IMPORT_FOLDER$/NETCDF</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$</backupFolder>
<idMapId>IdImportNetCDF</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportNetcdf_Profile</moduleInstanceId>
<valueType>longitudinalprofile</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>SobekProfiles_WL</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

An example of the IdMapping used for the NETCDF-CF_PROFILE import will be given below.
Note that in the IdMapping of the parameters, the external name must match the variable names as used by the netcdf file exactly (case
sensitive). The locations that are mapped refer to branch id's which are defined in the [Link].

IdImportNetCDF 1.00 [Link]


<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">

<parameter internal="[Link]" external="waterlevel"/>

<location internal="Maastakken_NDB(Haringvliet)" external="Maastakken_NDB(Haringvliet)"/>


<location internal="Rijntakken_NDB_NWW" external="Rijntakken_NDB_NWW"/>
<location internal="Rijntakken_AmsterdamRijnkanaal" external="Rijntakken_AmsterdamRijnkanaal"
/>
<location internal="Rijntakken2_NDB2(NieuweWaterweg)" external=
"Rijntakken2_NDB2(NieuweWaterweg)"/>
<location internal="Rijntakken_IJssel" external="Rijntakken_IJssel"/>
<location internal="IJssel_IJsselmeer" external="IJssel_IJsselmeer"/>
<location internal="Markermeer_VeluweRandmeren" external="Markermeer_VeluweRandmeren"/>

</idMap>
]]>

An example of the branches file is shown below.

Branches 1.00 [Link]
<branches xmlns:xsi="[Link] xmlns="[Link]
" xsi:schemalocation="[Link]
[Link] version="1.1">
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<branch id="Maastakken_NDB(Haringvliet)">
<branchName>Maastakken_NDB(Haringvliet)</branchName>
<startChainage>1030</startChainage>
<endChainage>321624</endChainage>
<pt label="R_MS_001_1" chainage="1030" z="40.32" z_rb="51.34" y="308594.236" x=
"176029.1129"/>
<pt label="R_MS_001_2" chainage="2061" z="41.79" z_rb="50.92" y="309427.7428" x=
"176631.808"/>
...
<pt label="N_NDB_92" chainage="321624" z="-7.82" z_rb="2.79" y="436953" x="57935.1"/>
</branch>
...
<branch id="Markermeer_VeluweRandmeren">
...
</branch>
</branches>
]]>

The locationSetId used by the ImportNetcdf_Profile.xml must contain the branches defined in the above IdMapping.

LocationSets 1.00 [Link]


<locationSets xmlns:xsi="[Link] xmlns="
[Link] xsi:schemalocation="[Link]
[Link] version="1.1">
<locationSet id="SobekProfiles_WL" name="Sobek Profiles WL">
<locationId>Maastakken_NDB(Haringvliet)</locationId>
<locationId>Rijntakken_NDB_NWW</locationId>
<locationId>Rijntakken_AmsterdamRijnkanaal</locationId>
<locationId>Rijntakken2_NDB2(NieuweWaterweg)</locationId>
<locationId>Rijntakken_IJssel</locationId>
<locationId>IJssel_IJsselmeer</locationId>
<locationId>Markermeer_VeluweRandmeren</locationId>
</locationSet>
</locationSets>
]]>

NETCDF-CF_GRID

Overview

This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Imports grid time series data from NetCDF files which comply with the CF standard. More information about the CF standard can be found at:
[Link]

See also the following two other types of NetCDF-CF imports that are available:

Time series (NETCDF-CF_TIMESERIES)


Profiles (NETCDF-CF_PROFILE)

In DELFT-FEWS versions 2011.02 and later this import type can also be used to import data using OPeNDAP, see Import data
using OPeNDAP.

Import Configuration

An example of the NETCDF-CF_GRID import will be given here.

ImportNetcdf_Grid 1.00 [Link]


<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>NETCDF-CF_GRID</importType>
<folder>$IMPORT_FOLDER$/NETCDF</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$</backupFolder>
<idMapId>IdImportNetCDF</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportNetcdf_Grid</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snelheid.u.F0</parameterId>
<locationId>hmcn_zeedelta</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="30"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

Id Map Configuration

An example of the IdMapping used for the NETCDF-CF_GRID import is shown below.

IdImportNetCDF 1.00 [Link]


<map externalparameter="velocity" internalparameter="Snelheid.u.F0" internallocation=
"hmcn_zeedelta" externallocation="hmcn_zeedelta"/>

]]>

Grids Configuration

When importing grids in the FEWS database it may be required to configure the grid characteristics in the [Link] file. The grid characteristics
must be similar to the grid imported from the NetCDF file.

Grids 1.00 [Link]


<description>HMCN Zeedelta Model</description>
<rows>101</rows>
<columns>101</columns>
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<firstCellCenter>
<x>54000</x>
<y>449000</y>
</firstCellCenter>
<xCellSize>130</xCellSize>
<yCellSize>70</yCellSize>

]]>

Import of Waterwatch NetCDF data

For the import of Waterwatch NetCDF data a special NetCDF import type can be used "NETCDF-CF_GRID-NW". This import type has been
added in July 2011 to the FEWS 2010.01 and 2011.01 builds, and will be available in the 2011.02 build. Waterwatch NetCDF data for Dutch
waterboards requires the Transverse Mercator projection to be used. This regular grid projection has been added to the FEWS code in October
2011.

Grids 1.00 [Link]
<rows>1309</rows>
<columns>1049</columns>
<transverseMercator>
<originLatitude>0.0</originLatitude>
<originLongitude>3.0</originLongitude>
<scaleFactorAtOrigin>0.9995999932289124</scaleFactorAtOrigin>
</transverseMercator>
<gridCorners>
<geoDatum>WGS 1984</geoDatum>
<upperLeft>
<x>3.3474039424011828</x>
<y>53.58134813984449</y>
</upperLeft>
<lowerRight>
<x>7.0253359554942705</x>
<y>50.572267443880236</y>
</lowerRight>
</gridCorners>

]]>

NETCDF-CF_TIMESERIES

Overview

This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Imports scalar time series data from NetCDF files which comply with the CF standard. More information about the CF standard can be found at:
[Link]

See also the following two other types of NetCDF-CF imports that are available:

Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)

In DELFT-FEWS versions 2011.02 and later this import type can also be used to import data using OPeNDAP, see Import data
using OPeNDAP.

Configuring the import

An example of the NETCDF-CF_TIMESERIES import will be given here.

ImportNetcdf_Timeseries 1.00 [Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>NETCDF-CF_TIMESERIES</importType>
<folder>$IMPORT_FOLDER$/NETCDF</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$</backupFolder>
<idMapId>IdImportNetCDF</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportNetcdf_Timeseries</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DMFlowPoints</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

An example of the IdMapping used for the NETCDF-CF_TIMESERIES import will be given below.
In this example, the mapped locations correspond to the locations of the locationSet as defined above in the ImportNetcdf_Timeseries.xml.

Note that in the IdMapping of the parameters and locations, the external name must match the variable and location names as used by the netcdf
file exactly (case sensitive).

IdImportNetCDF 1.00 [Link]


<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="[Link]" external="afvoerdm_takken"/>
<!--DMFlowPoints-->
<location internal="DMTak_1001" external="1001"/>
<location internal="DMTak_1002" external="1002"/>
<location internal="DMTak_1003" external="1003"/>
<location internal="DMTak_1004" external="1004"/>
<location internal="DMTak_1006" external="1006"/>
...
<location internal="DMTak_6113" external="6113"/>
<location internal="DMTak_6114" external="6114"/>
<location internal="DMTak_6115" external="6115"/>
</idMap>
]]>

NOOS

Overview

This import is available in DELFT-FEWS versions after 28-10-2008

Imports time series data in ASCII format from MATROOS forecast databases. The import reader creates PHP URLs for direct data retrieval
from Matroos. There are three types of NOOS URLs supported by the NOOS import function:

1. get_series.php for direct retrieval of scalar time series.


2. get_maps1d_series.php for retrieval of scalar time series from the Matroos maps1d database.
3. get_map2series.php for retrieval of scalar time series from the Matroos map data, interpolated in space

More information on the retrieval of time series from Matroos can be found on: [Link]

For the three types of series retrieval URLs, three import readers have been made in FEWS:

1. noos_timeseries
2. noos_1dmapseries
3. noos_mapseries

Configuring the Import

An example of the noos_timeseries configuration will be given here. The reader is named noos_timeseries which should be configured in the
general section of the import. The general section must also contain the server URL and a correct username and password if you need to log-in.
The relativeViewPeriod in the general section is used to select the period to retrieve data for.
Special attention should be given to the timezone; FEWS retrieves all Noos data from the Matroos database in GMT.

An example import configuration is shown below:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="


[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>noos_timeseries</importType>
<serverUrl>[Link]
<user>XXX</user>
<password>YYYY</password>
<relativeViewPeriod unit="hour" startoverrulable="true" endoverrulable="true" start="-12"
end="36"/>
<idMapId>IdImportMatroos</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Matroos</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMatroos</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>hoekvanholland</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

When a locationSet or multiple time series sets are configured in the import module instance, the Noos readers will construct URLs for each time
series and retrieve the data from the Matroos database sequentially. An improvement of the import readers could be to construct a more complex
URL and retrieve the data for multiple time series in one URL query.

The IdMapping configuration is very important because this maps the internal FEWS Id's to the Matroos Id's. In the IdMapping the following
FEWS and Matroos elements are mapped:

the FEWS externalLocation is used to map the Matroos loc or node element
the FEWS externalParameter is used to map the Matroos unit element
the FEWS externalParameterQualifier is used to map the Matroos source element

In case the noos_mapseries reader is used, the FEWS externalLocation is used to map a few Matroos location elements, namely coordsys, x and
y.

An example IdMapping file for the noos_timeseries and noos_1dmapseries readers is shown below:

<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="waterlevel_astro" internalparameter="[Link]" internallocation=
"hoekvanholland" externalparameterqualifier="observed" externallocation="hoekvanholland"/>
</idMap>
]]>

An example IdMapping file for the noos_mapseries reader is shown below:

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="waterlevel_astro" internalparameter="[Link]" internallocation=
"hoekvanholland" externalparameterqualifier="observed" externallocation=
"&coordsys=RD&x=52670&y=449847"/>
</idMap>
]]>

The NOOS file format

#------------------------------------------------------
# Timeseries retrieved from the MATROOS maps1d database
# Created at Tue Oct 28 [Link] CET 2008
#------------------------------------------------------
# Location : MAMO001_0
# Position : (64040,444970)
# Source : sobek_hmr
# Unit : waterlevel
# Analyse time: 200709020100
# Timezone : MET
#------------------------------------------------------
200709010000 -0.387653201818466
200709010010 -0.395031750202179
200709010020 -0.407451331615448
200709010030 -0.414252400398254
200709010040 -0.425763547420502
200709010050 -0.43956795334816
200709010100 -0.309808939695358
200709010110 -0.297703713178635
200709010120 -0.289261430501938
200709010130 -0.256232291460037
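The format above is plain ASCII: lines starting with '#' are header/comment lines and each data line holds a timestamp (yyyyMMddHHmm, in the time zone stated in the header) followed by a value. A minimal parsing sketch (not the actual FEWS NOOS reader):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class NoosLineSketch {
    public static void main(String[] args) throws Exception {
        String[] lines = {
                "# Unit : waterlevel",
                "200709010000 -0.387653201818466",
                "200709010010 -0.395031750202179"};

        SimpleDateFormat format = new SimpleDateFormat("yyyyMMddHHmm");
        format.setTimeZone(TimeZone.getTimeZone("GMT+1")); // MET in the example header

        for (String line : lines) {
            if (line.startsWith("#")) continue; // skip header/comment lines
            String[] parts = line.trim().split("\\s+");
            Date time = format.parse(parts[0]);
            double value = Double.parseDouble(parts[1]);
            System.out.println(time + " -> " + value);
        }
    }
}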

NTUQUARTER Import

Overview

TimeSeries reader for NTUQUARTER Datalogger files. These contain observed telemetry data for several parameters from NTU (National
Technical University of Singapore). The identifier for this reader is "NTUQUARTER".

The timeSeries reader for NtuQuarter Datalogger files (NTUGauge) is used for the Singapore OMS. These files contain observed telemetry data for
several parameters sent by NTU (National Technical University of Singapore): Channel_Level, Velocity, Temperature,
Conductivity, pH, Turbidity, NTU DO, Battery and Flow. The locationID is encoded in the filename, e.g. MC02_Quarter.dat contains data for
locationId MC02.

Columns are:

Date/time
number
Level, m (by SW or SL) (parameter name: level)
Channel_Level, m (by US level sensor) (parameter name: channel_level)
Velocity, m/s (parameter name: velocity)
Temperature, oC (parameter name: temperature)
Conductivity, mS/cm (parameter name: conductivity)
pH (parameter name: ph)
Turbidity, ( parameter name: turbidity)
NTU DO, mg/L (parameter name: ntu_do)

Battery, V (parameter name: battery)
Flow, m3/s (parameter name: flow)

Configuration

The parameter name will be used to set the external parameterId to be used for the idmapping in the import.

Example:

"2007-05-08 [Link]",7892,0.809,0,-0.187,28.76,0.36,7.56,141.9,2.03,12.86272,-3.933358
"2007-05-08 [Link]",7893,0.849,0,-0.167,29.04,0.413,7.59,144.8,2.61,12.87867,-3.686358
"2007-05-08 [Link]",7894,0.89,0,-0.137,29.37,0.475,7.65,146,2.48,12.87363,-3.17018
"2007-05-08 [Link]",7895,0.929,0,-0.109,29.68,0.629,7.67,146.3,3.26,12.85852,-2.632786
"2007-05-08 [Link]",7896,0.966,0,-0.13,30.11,0.907,7.76,147.3,3.96,12.8686,-3.26508
"2007-05-08 [Link]",7897,1.003,0,-0.094,30.4,1.161,7.78,147.5,4.44,12.85601,-2.451332

The second column (record number) is not used in the import

Java source code

[Link]

[Link]

* These contain Channel_Level, Velocity, Temperature, Conductivity, pH, Turbidity, NTU DO,
Battery, Flow
* <p/>
* <p/>
* The locationID is encoded in the filename e.g: MC02_Quarter.dat contains data for
* locationId MC02
* <p/>
* <pre>
* Colums are:
* Date/time
* number
* Level, m (by SW or SL) (parameter name: level)
* Channel_Level, m (by US level sensor) (parameter name: channel_level)
* Velocity, m/s (parameter name: velocity)
* Temperature, oC (parameter name: temperature)
* Conductivity, mS/cm (parameter name: conductivity)
* pH (parameter name: ph)
* Turbidity, ( parameter name: turbidity)
* NTU DO, mg/L (parameter name: ntu_do)
* Battery, V (parameter name: battery)
* Flow, m3/s (parameter name: flow)
* <p/>
* Example:
* "2007-05-08 [Link]",7892,0.809,0,-0.187,28.76,0.36,7.56,141.9,2.03,12.86272,-3.933358
* "2007-05-08 [Link]",7893,0.849,0,-0.167,29.04,0.413,7.59,144.8,2.61,12.87867,-3.686358
* "2007-05-08 [Link]",7894,0.89,0,-0.137,29.37,0.475,7.65,146,2.48,12.87363,-3.17018
* "2007-05-08 [Link]",7895,0.929,0,-0.109,29.68,0.629,7.67,146.3,3.26,12.85852,-2.632786
* "2007-05-08 [Link]",7896,0.966,0,-0.13,30.11,0.907,7.76,147.3,3.96,12.8686,-3.26508
* "2007-05-08 [Link]",7897,1.003,0,-0.094,30.4,1.161,7.78,147.5,4.44,12.85601,-2.451332
* </pre>
* The second column (data number) is not used in the import
* <p/>
*/
public class NtuQuarterTimeSeriesParser implements TextParser<TimeSeriesContentHandler> {
private static final Logger log = [Link]([Link]);

private static final String[] PARAMETER_NAMES = new String[]{


"level", "channel_level", "velocity", "temperature", "conductivity", "ph",
"turbidity", "ntu_do", "battery", "flow"};

private String virtualFileName = null;

private DefaultTimeSeriesHeader timeSeriesHeader = new DefaultTimeSeriesHeader();

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;

[Link]('#');
parseLocationIdFromFileName();

for (int i = 0; i < PARAMETER_NAMES.length; i++) {


[Link](PARAMETER_NAMES[i]);
[Link](i, timeSeriesHeader);
}

for (String[] items = new String[PARAMETER_NAMES.length + 2]; [Link](',', items)


!= -1;) {
[Link]([Link](), "yyyy-MM-dd HH:mm:ss",
items[0]);
for (int i = 0; i < PARAMETER_NAMES.length; i++) {
[Link](i);
[Link]('.', items[i + 2]);
[Link]();
}
}
}

private void parseLocationIdFromFileName() throws Exception {

/* parse file name: this is the locationId and type concatenated,
e.g. MC02_Rain.dat; we only need the locationID */
String fileName = [Link](virtualFileName);

/* Check if the filename contains the file name separator */

// split on underscore - first item is the externalLocationId, second the parameter (Rain)
String[] fileNameParts = [Link](fileName, '_');

if ([Link] < 2)
throw new Exception("File with name <" + this.virtualFileName + "> cannot be parsed to find location Id");

[Link](fileNameParts[0]);

if (!fileNameParts[1].equals("Quarter")) {
[Link]("File <" + fileName + "> contains unexpected ending <" + fileNameParts[1] + ">. Expected Quarter");
}

if ([Link]())
[Link]("File <" + fileName + "> contains data for external locationId <" + fileNameParts[0] + "> and reader type <Quarter>");
}

}
]]>

NTURAIN Import

Overview

TimeSeries reader for NTURAIN Datalogger files. These contain observed telemetry data for Rain sent by NTU (National Technical University of
Singapore). The identifier for this reader is "NTURAIN".

Configuration

The locationID is encoded in the filename, e.g. MC02_Rain.dat contains data for locationId MC02. An example file plus configuration (IDmap and
import module configuration) is attached to this page. The external parameter is always "Rain".

Columns are:
Date/time number Rain(mm)

Example:

"2007-04-30 [Link]",5594,0

"2007-04-30 [Link]",5595,0

"2007-04-30 [Link]",5596,0

"2007-04-30 [Link]",5597,0

"2007-04-30 [Link]",5598,0

The second column (data number) is not used in the import

Java source code

[Link]

[Link]

* These contain observed telemetry data for Rain


* <p>
*
* The locationID is encoded in the filename e.g: MC02_Rain.dat contains data for
* locationId MC02
*
* </p><pre>
* Colums are:
* Date/time number p(mm)
*
* Example:
* "2007-04-30 [Link]",5594,0
* "2007-04-30 [Link]",5595,0
* "2007-04-30 [Link]",5596,0
* "2007-04-30 [Link]",5597,0
* "2007-04-30 [Link]",5598,0
* </pre>
* The second column (data number) is not used in the import
* <p>

*/
public class NtuRainTimeSeriesParser implements TextParser<TimeSeriesContentHandler> {
private static final Logger log = [Link]([Link]);

private String virtualFileName = null;


private TimeSeriesContentHandler contentHandler = null;

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = contentHandler;
[Link]('#');

parseParameterLocationIdFromFileName();

if ([Link]()) return;

for (String[] buffer = new String[3]; [Link](',', buffer) != -1;) {

[Link]([Link](), "yyyy-MM-dd HH:mm:ss",
buffer[0]);
[Link]('.', buffer[2]);
[Link]();
}
}

private void parseParameterLocationIdFromFileName() throws IOException {

/* parse file name: this is the locationId and type concatenated,
e.g. MC02_Rain.dat; we only need the locationID */
String fileName = [Link](virtualFileName);

/* Check if the filename contains the file name separator */

String[] fileNameParts = [Link](fileName, '_');

if ([Link] < 2)
throw new IOException("File with name <" + this.virtualFileName + "> cannot be parsed to find location Id");

DefaultTimeSeriesHeader timeSeriesHeader = new DefaultTimeSeriesHeader();

// split on underscore - first item is the externalLocationId, second the parameter (Rain)
[Link](fileNameParts[0]);
if (!fileNameParts[1].equals("Rain")) {
[Link]("File <" + fileName + "> contains data for external parameter <" + fileNameParts[1] + ">. Forcing to Rain");
}
// Set to "Rain" anyway after sending the warning
[Link]("Rain");
if ([Link]()) {
[Link]("File <" + fileName + "> contains data for external locationId <" + fileNameParts[0] + "> and parameter <'" + fileNameParts[1] + "'>");
}

[Link](timeSeriesHeader);
}

}
]]>

SSE

Overview

Imports time series data from Scottish & Southern Electric (SSE) ASCII files.

SSE file characteristics

The SSE file format is expected to contain 4 columns with information (location, value, date-time, unit)
Files must be ',' separated
All comment lines start with a '*'
The date format in the files must be of the form 'dd/MM/yyyy HH:mm:ss'
The data file has no parameter in the file; the unit is used as the external parameter Id

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. Unit conversion can also be included while
importing. A complete set of configuration files for importing SSE files with unit conversion is attached as SSE_Import.zip.

ModuleConfigFiles/

The following example of an Import Module Instance will import the time series as non-equidistant series.

ImportSSE 1.00 [Link]


<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>SSE</importType>
<folder>$IMPORT_SSE_FOLDER$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$BACKUP_SSE_FOLDER$</backupFolder>
<idMapId>IdImportSSE</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneName>GMT</timeZoneName>
</importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportSSE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>115304</locationId>
<locationId>336370</locationId>
<locationId>335609</locationId>
<locationId>335612</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="100"/>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles/

Defines mappings between SSE and FEWS parameters and locations.

[Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="115304"
externallocation="Level in Loch Benevean"/>
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="336370"
externallocation="Level in Loch Cluanie"/>
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="335609"
externallocation="River Moriston Level at Torgoyle Bridge"/>
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="335612"
externallocation="Level in Loch Glascarnoch"/>
</idMap>
]]>

Important in this configuration is that the external locations are location names.

Example File/

SSE_Test.txt

Java source code/

[Link]

[Link]

{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws IOException {
[Link]('*');

DefaultTimeSeriesHeader timeSeriesHeader = new DefaultTimeSeriesHeader();

for (String[] buffer = new String[4]; [Link](',', buffer) != -1;) {


[Link](buffer[0]);
[Link]('.', buffer[1]);
[Link]([Link](), "dd/MM/yyyy HH:mm:ss",
buffer[2]);
[Link](buffer[3]);
[Link](buffer[3]);
[Link](timeSeriesHeader);
[Link]();
}
}
}
]]>

TMX

Contents

Contents
Overview
Status
Configuration (Example)
ModuleConfigFiles/
IdMapFiles/
Import of TMX in CSV file format
Java source code

Overview

Imports time series data from a Microsoft Access database (.mdb) file.

There are two types of the import depending on type of measurement station:

analog - old data format


digital - new data format

The difference between the two formats is in how the time series are stored in the database.

For the analog data type, the .mdb file contains a set of tables where each separate time series is stored in a separate table.
In case of the digital data type, one table in the tmx mdb file can contain many time series.

What does the tmx abbreviation mean?

Note that there is no flag column when the data type is digital.

Status

Both analog and digital data format can be used.

Configuration (Example)

The configuration files below define import of 4 time series from the tmx .mdb file:

Data Format Parameter (tmx) Location (tmx) Parameter (fews) Location (fews)

analog Ai1 Loc063 P1.m tmx_location1

analog Ao1 Loc063 P2.m tmx_location2

digital 1 46 P3.m tmx_location3

digital 1 51 P3.m tmx_location4

Note
Tmx database (mdb) may contain data for both digital and analog types of data.
When data are in analog format, they are usually stored in tables whose names are defined using the location and parameter name, e.g.
Loc063Ao1, Loc063Ai1. In case of the digital format everything is stored in one table, e.g. ReportAo or ReportDi. When columns other than
the regular ActualValue column of a digital table must be imported, the externalParameterQualifier can be used to
indicate the correct table and column, e.g. ReportDi_Open to import the MinutesOpen column of the ReportDi table.

ModuleConfigFiles/

Time series which are listed in this file can be imported into fews.

Defines what time series can be imported from the TMX .mdb file and which tables contain their values.

[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] xmlns="
[Link]
<import>
<general>
<importType>Tmx</importType>
<folder
>../junit_test_output/nl/wldelft/fews/system/plugin/dataImport/TimeSeriesImportTestData/import/tmx</
folder>
<idMapId>tmxMapId</idMapId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
</general>

<!-- Analog, table = Loc063Ai1 -->


<timeSeriesSet>
<moduleInstanceId>ImportTmx</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P1.m</parameterId> <!-- parameter = Ai1 -->
<locationId>tmx_location1</locationId> <!-- location = Loc063 -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="11"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>

<!-- Analog, table = Loc063Ao1 -->


<timeSeriesSet>
<moduleInstanceId>ImportTmx</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P2.m</parameterId> <!-- parameter = Ao1 -->
<locationId>tmx_location1</locationId> <!-- location = Loc063 -->
<timeSeriesType>external historical</timeSeriesType>

<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="11"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>

<!-- Digital, table = ReportAo, defined in the mapping -->


<timeSeriesSet>
<moduleInstanceId>ImportTmx2</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P3.m</parameterId> <!-- Channel = 1 -->
<locationId>tmx_location2</locationId> <!-- LocCode = 46 -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="400" end="470"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>

<!-- Digital, table = ReportAo, defined in the mapping -->


<timeSeriesSet>
<moduleInstanceId>ImportTmx</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P3.m</parameterId> <!-- Channel = 1 -->
<locationId>tmx_location3</locationId> <!-- LocCode = 51 -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="400" end="470"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles/

Defines mappings between TMX and FEWS parameters and locations.

[Link]
<!-- analog -->
<map externalParameter="Ai1" internalParameter="P1.m" internalLocation="tmx_location1"
externalLocation="Loc063"/>
<map externalParameter="Ao1" internalParameter="P2.m" internalLocation="tmx_location1"
externalLocation="Loc063"/>

<!-- digital -->

<map externalParameter="1" internalParameter="P3.m" internalLocation="tmx_location2"
externalParameterQualifier="ReportAo" externalLocation="46"/>
<map externalParameter="1" internalParameter="P3.m" internalLocation="tmx_location3"
externalParameterQualifier="ReportAo" externalLocation="51"/>

]]>

Import of TMX in CSV file format

The import type is called TmxCsv.

1. Missing values

If a string cannot be parsed as a number it is treated as a missing value,

e.g.: "---", "???", ">>>", "<<<"

Example:

Datum;Tijd;Ai8;Ai1;Ai3;Ai2;Ai7;Ai9

;;Gem;Gem;Gem;Gem;Gem;Gem
25-05-2005;12:00;16.257;15.958;16.135;15.026;15.513
25-05-2005;13:00;16.257;15.958;16.135;15.026;15.507
25-05-2005;14:00;--;-;-;-;--
25-05-2005;15:00;16.257;15.958;16.135;15.026;15.494
...
09-01-2006;01:00;1.648;?;?;?;?;???
09-01-2006;02:00;0.399;12.743;12.606;12.333;?;?
...
27-08-2007;01:00;>>>;>>>;>>>;>>>;>>>;>>>
27-08-2007;02:00;<<<;<<<;<<<;<<<;<<<;<<<
...
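
A minimal sketch of the missing-value rule described above (this is not the actual FEWS parser; Float.NaN merely stands in for the FEWS missing value):

// Any token that cannot be parsed as a number is treated as missing.
private static float parseOrMissing(String token) {
    try {
        return Float.parseFloat(token.trim());
    } catch (NumberFormatException e) {
        // "---", "???", ">>>", "<<<" and similar tokens end up here
        return Float.NaN;
    }
}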

2. Date and time format

The date and time are assumed to always come in the "DD-MM-YYYY hh:mm" format. Since December 2008 the format
"DD-MM-YYYY hh:mm:ss" (with seconds) is also supported.

TimeZone can be used in the configuration file to set an offset.

3. 2nd line

The 2nd line, containing ";;Gem;Gem;Gem;Gem;Gem;Gem", is always skipped.

4. The reader also assumes that Date and Time always come as the 1st and 2nd columns.

5. File naming convention

TMX CSV files to be imported by FEWS should have a file name in the following format:

<location> anything [Link]

The location should be at the first position and should be the same as the location defined in the [Link]
The location must be separated from the rest of the file name, or from ".csv", by a space (" "), for example:

File Name Valid

location1 [Link]

[Link]

location1 .csv

location1_01012007.csv

[Link]
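
A minimal sketch of one plausible reading of this naming rule (locationFromFileName is a hypothetical helper, not part of FEWS):

// Location = token before the first space, or the whole base name for "<location>.csv".
private static String locationFromFileName(String fileName) {
    String base = fileName.endsWith(".csv")
            ? fileName.substring(0, fileName.length() - ".csv".length())
            : fileName;
    int space = base.indexOf(' ');
    return space >= 0 ? base.substring(0, space) : base;
}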

6. Two new warnings have been added to this import:

The system gives a warning when data in the database is overwritten by new data:

29.02.2008 [Link] WARN - [Link]: ModuleInstance=ImportTMX, locationId=0002_boven, parameterId=[Link],


timestep=10minutes, period=2006-01-01 [Link] - 2006-01-31 [Link], number of timesteps changed 744

The system gives a warning when multiple series are imported for one location-parameter combination:

29.02.2008 [Link] WARN - Multiple time series sets found for parameter-internal/external=[Link]/Ai2
location-internal/external=WAM0400_afwat_kan/WAM0400 ensemble member main$0

Java source code

[Link]

[Link]

* Column 1: Status (byte)


* Column 2: TimeStamp (date/time)
* Column 3: Value (single)
* <p/>

* Each table contains a unique parameter/location combination. In other words, in the Id Mapping
* the location id and parameter id are identical (see also WISKI import).
* <p/>
* When importing, consider only those tables included in the IdMapping used. There are
additional tables in the
* database. These should be ignored. If defined to be read, then an error can be generated
indicating the format
* of requested table to read is wrong.
* <p/>
* The Status column can be translated using the flag mapping functionality.
* <p/>
* Note: The time at midnight is sometimes offset by a few seconds. This may then not be
imported.
* The import module can apply the tolerance functionality to import this to the cardinal time
step.
* <p/>
* Example (columns truncated)
* <p/>
* Loc063Di18
* --------------------------------
* Status TimeStamp Value
* --------------------------------
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* <p/>
* <p/>
* Please see also [Link] for more documentation.
*/

public class TmxTimeSeriesParser implements DatabaseParser<TimeSeriesContentHandler>,


TimeSeriesHeadersConsumer {
private static final Logger log = [Link]([Link]);

private TimeSeriesHeader[] headers = null;


private TimeSeriesContentHandler contentHandler = null;
private Connection connection = null;

@SuppressWarnings({"AssignmentToCollectionOrArrayFieldFromParameter"})
@Override
public void setTimeSeriesHeaders(TimeSeriesHeader[] timeSeriesHeaders) {
[Link] = timeSeriesHeaders;
}

@Override
public void parse(Connection connection, TimeSeriesContentHandler contentHandler) throws
Exception {
[Link] = connection;
[Link] = contentHandler;

for (TimeSeriesHeader header : headers) {


try {
parse(header);
} catch (Exception e) {

String msg = "[Link]: time series parameter = " +
[Link]()
+ ", location: " + [Link]()
+ " can not be imported, table name = " + getTableName(header) + '\n' +
[Link]();

[Link](msg, [Link]() ? e : null);


}
}
}

private String getSql(TimeSeriesHeader header) {


String tableName = getTableName(header);
if (![Link](connection, tableName)) return null;

boolean digital = [Link]() > 0;


if (digital) return "SELECT ReportDate, " + getValueColumnName(header) + " FROM " +
tableName + " WHERE Channel=? AND LocCode=?";

return "SELECT TimeStamp, " + getValueColumnName(header) + ", Status FROM " + tableName;
}

private static String getTableName(TimeSeriesHeader header) {


if ([Link]() == 0) return [Link]() +
[Link]();
String qualifier = [Link](0);
if ([Link]("ReportDi_Open"))
return "ReportDi";
if ([Link]("ReportDi_Closed"))
return "ReportDi";
return qualifier;
}

private static String getValueColumnName(TimeSeriesHeader header) {


if ([Link]() == 0) return "Value";
String qualifier = [Link](0);
if ([Link]("ReportDi_Open")) return "MinutesOpen";
if ([Link]("ReportDi_Closed")) return "MinutesClosed";
return "ActualValue";
}

private void parse(TimeSeriesHeader header) throws SQLException, IOException {


String sql = getSql(header);
if (sql == null) return;

[Link](header);
assert ![Link]();

if ([Link]()) [Link]("Parse " + sql);

PreparedStatement statement = [Link](sql);


try {
boolean digital = [Link]() > 0;
if (digital) {
[Link](1, [Link]());
[Link](2, [Link]());
if ([Link]()) [Link]("par=" + [Link]() + " loc=" +
[Link]());
}
ResultSet resultSet = [Link]();
try {
parseResultSet(resultSet);
} finally {
[Link]();

}
} finally {
[Link]();
}
}

private void parseResultSet(ResultSet resultSet) throws SQLException, IOException {


long timeZoneOffset = [Link]().getRawOffset();
ResultSetMetaData metaData = [Link]();
int type = [Link](2);
boolean hasFlag = [Link]() == 3;
boolean bool = type == [Link] || type == [Link];
while ([Link]()) {
[Link]([Link](1).getTime() - timeZoneOffset);
if ([Link]()) continue;
[Link](hasFlag ? [Link](3) : 0);
if (bool) {
[Link]([Link](2) ? -1f : 0f);
} else {
[Link]([Link](2));
}
[Link]();
}
}
}
]]>

Wiski

Overview

The ZRXP format is an ASCII data exchange file format of Wiski. The file may contain one or more time series.
The time series are defined by a header line that starts with #REXCHANGE.
Directly after this keyword the ID of the time series is defined: #REXCHANGE013S050
After the keyword RINVAL the missing value is defined.
In a complete example of the header

#REXCHANGE013S050|*|RINVAL-777|*|

the next items will be read:

time series ID = 013S050


missing value = -777

A new time series starts simply with a new header. It also means that the previous series has ended.

The ZRXP files can be supplied in a ZIP file.

More recent versions of ZRXP files may contain a separate location and parameter instead of #REXCHANGE. The location is defined by the
keyword SANR and the parameter by CNAME.
The CUNIT keyword defines the unit and the RINVAL the missing value.

Configuration

In its basic form ZRXP does not identify a location and a parameter ID separately; it only has a time series ID. To import these series into FEWS you have to set both
the external location ID and the external parameter ID to this time series ID.

Example of the import module instance:

<import>
<general>
<importType>WISKI</importType>
<folder>$IMPORT_FOLDER$/zrxp</folder>
....
</general>

Example of the id mapping:

<?xml version="1.0" encoding="UTF-8"?>


<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link] xsi:schemaLocation="[Link]
[Link]
<map internalParameter="H" internalLocation="013" externalParameter="013S050"
externalLocation="013S050"/>
<map internalParameter="H" internalLocation="013" externalParameter="013S065"
externalLocation="013S065"/>
.....

Example file

#REXCHANGE013S050|*|RINVAL-777|*|
20081111000000 7.538
20081111002000 7.541
20081111004000 7.544
20081111010000 7.547
20081111012000 7.549
20081111014000 7.550
20081111020000 7.553
20081111022000 7.554
20081111024000 7.555

where

locationId = 013S050
parameterId = 013S050
unit = not defined
missing value = -777

or

#SANR7424|*|#SNAMEPiding|*|
#REXCHANGE7424_N_15|*|#CNAMEN|*|#CUNITmm|*|RINVAL-777|*|
20091211001500 0
20091211003000 0
20091211004500 0
20091211010000 0
20091211011500 0

where

locationId = 7424
parameterId = N
unit = mm
missing value = -777

or

#TSPATH/82/82_3/WATHTE/cmd.p|*|
#LAYOUT(timestamp,value,status,interpolation_type)|*|
#TZUTC+01|*|
#CUNITm|*|
20100127000000 -0.94 200 102
20100127001500 -0.93 200 102
20100127003000 -0.93 200 102

where

locationId = 82_3
parameterId = WATHTE
qualifierId = cmd.p (which will be translated to cmd_p)
timezone = GMT+1
unit = m

Note that the status and the interpolation_type are combined to form a flag which can be mapped in the flag mapping. This is done by multiplying
the status by 1000 and adding the interpolation_type (example below). The status should also be between 0 and 999.

20100227000709 3.0 200 103 => 200103


20100227000709 3.0 0 103 => 103
20100227000709 3.0 200 0 => 200000
20100227000709 3.0 200 => 200000
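
A sketch of this combination rule (the method name is illustrative, not the FEWS implementation):

// flag = status * 1000 + interpolation_type, with status expected in 0..999
private static int combineFlag(int status, int interpolationType) {
    return status * 1000 + interpolationType;
}

// combineFlag(200, 103) == 200103
// combineFlag(0, 103)   == 103
// combineFlag(200, 0)   == 200000   (also used when the interpolation_type is absent)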

Java source code

[Link]

[Link]

private static final Logger log = [Link]([Link]);

private LineReader reader = null;


private TimeSeriesContentHandler contentHandler = null;
private DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
private TimeZone headerTimeZone; //timeZone read from the file header
private String virtualFileName;

@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = contentHandler;
[Link](-777.0f);

[Link] = reader;
[Link]('?');
[Link](true);

[Link]();
[Link] = null;

[Link](500);
String[] buffer = new String[2];
for (String line; (line = [Link]()) != null; [Link](500)) {
line = [Link]();
if ([Link]("ENDOFFILE")) return;

if ([Link](0) == '#') {
[Link]();
parseHeader();
continue;
}

if ([Link]() == null && [Link]() == null)
throw new Exception("Not a valid wiski file, REXCHANGE, CNAME, SANR tags are all
missing in the file header");

if ([Link]()) continue;

[Link](line, ' ', buffer);


if ([Link] != null) {
[Link]([Link], "yyyyMMddHHmmss", buffer[0]);
} else {
[Link]([Link](), "yyyyMMddHHmmss",
buffer[0]);
}
[Link]('.', buffer[1]);
[Link]();
}
}

/**
* Read metadata from the #-records. Metadata block is followed by the timeseries-records
* but the timeseries-records may be also omitted. In this case the Metadata block MUST
start
* with a record that begins with ## !
* Empty records wil be ignored.
* <p/>
* The meaning of the keys is:
* TZ : time zone. TZ are UTC0 and UTC+/-x (e.g. UTC+1 or UTC-2).
* TSPATH : /site id/location id/parameter id/ts shortname
* example TSPATH/160/160_1/WATHTE/cmd.p
* only location id and parameter id is parsed and used
* SANR : location id. Used only if not specified with TSPATH
* CNAME: parameter id. Used only if not specified with TSPATH
* CUNIT: unit
* RINVAL: missing value
* REXCHANGE: location-parameter. Wil be used only if the metadata block does not contain
keys TSPATH, SANR or CNAME.
* The string specified by keyword REXCHANGE represents location Id and also parameter-id
(so locations Id and parameter Id equals)
*
* @throws IOException if the header format is incorrect
*/
private void parseHeader() throws IOException {
[Link]();
[Link] = null;

String tspathPar = null;


String tspathQual = null;
String tspathLoc = null;
String fallbackParLoc = null;

for (String line; (line = [Link]()) != null; [Link](500)) {


line = [Link]();
if ([Link](0) != '#') {
[Link]();
break;
}

String tzString = parseKeyValue("TZ", line);


if (tzString != null) {
[Link] = parseTimeZone(tzString, [Link], this
.[Link]().getID());
}

//Parse location id and parameter specified with keyword TSPATH

//format: TSPATH/<site id="id">/<station id="id">/<parameter
shortname="shortname">/<ts shortname="shortname">
//example: TSPATH/160/160_1/WATHTE/cmd.p (contains always all these 4 elements )
//<ts shortname="shortname"> is read as qualifier
String tspath = parseKeyValue("TSPATH", line);
if (tspath != null) {
String[] buffer = [Link](tspath, '/');
if ([Link] != 5 || buffer[2].length() < 1 || buffer[3].length() < 1) {
throw new IOException("Not a valid wiski file, TSPATH has a incorrect format:
" + tspath +
" expected: TSPATH/<site id="id">/<station id="id">/<parameter
shortname="shortname">/<ts shortname="shortname">");
}
tspathLoc = buffer[2];
tspathPar = buffer[3];
tspathQual = buffer[4].replace('.', '_'); // dots are not allowed in fews as
internal qualifiers, replace dots with underscores
}
String locationId = parseKeyValue("SANR", line);
if (locationId != null) [Link](locationId);
String parameterId = parseKeyValue("CNAME", line);
if (parameterId != null) [Link](parameterId);
String unit = parseKeyValue("CUNIT", line);
if (unit != null) [Link](unit);
String missingValue = parseKeyValue("RINVAL", line);
if (missingValue != null) [Link](missingValue);
String parLoc = parseKeyValue("REXCHANGE", line);
if (parLoc != null) fallbackParLoc = parLoc;

if (tspathPar != null && tspathLoc != null) {


//If par id, qualifier id and loc are specified with TSPATH, use them , even if the
keywords SANR and SNAME are also present in the file
[Link](tspathPar);
[Link](tspathQual);
[Link](tspathLoc);
} else if ([Link]() == null || [Link]() == null) {
[Link](fallbackParLoc);
[Link](fallbackParLoc);
}
[Link](header);
}

//Returns value or null if the key not found in the buffer


private static String parseKeyValue(String key, String buffer) {
int keyPos = [Link](key);
if (keyPos == -1) return null;
int endValuePos = [Link](";*;", keyPos + [Link]());
if (endValuePos == -1) endValuePos = [Link]("|*|", keyPos + [Link]());
if (endValuePos == -1) return null;
return [Link](keyPos + [Link](), endValuePos);
}

//Parse time zone. Note: UTC always expected , since no other code wil occur according to
the Wiski 7 format
//Allowed formats are: UTC0 and UTC+/-x (e.g. UTC+1 or UTC-2).
private static TimeZone parseTimeZone(String buffer, String fileName, String defaultTimeZone)
throws IOException {

if ([Link]("UTC") != 0 || [Link]() < 4) {


[Link](fileName + ": invalid timezone specified with TZ keyword - " + buffer + " ,
" + defaultTimeZone + " wil be used.");
return null;
}

String strOffset = [Link](3);
TimeZone timeZone;
try {
double offset = [Link](strOffset);
timeZone = [Link](offset);
} catch (NumberFormatException e) {
throw new IOException("Invalid timeZone specified with TZ keyword:" + buffer, e);
}
return timeZone;
}
}

]]>

WSCC csv

Overview

This import is available in DELFT-FEWS versions after 06-12-2007

Imports time series data in csv format from the Woodleigh System Control Centre in Singapore. The first line is a header for each column
indicating the location. The filename encodes the parameter; e.g. for the file Aname_RF.txt the parameter is RF (rainfall). If the column after a data
column has a header named "Qf" it is interpreted as a column holding the quality flags for that data column. The flags are converted to DELFT-FEWS
data flags using the flagConversions mapping.
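
A minimal sketch of these two conventions (the helper names are hypothetical and the exact FEWS behaviour may differ):

// Parameter id from a file name like "Aname_RF.txt" or "200703121500_RF": the part after the last underscore.
private static String parameterFromFileName(String fileName) {
    String base = fileName.endsWith(".txt")
            ? fileName.substring(0, fileName.length() - ".txt".length())
            : fileName;
    int underscore = base.lastIndexOf('_');
    return underscore >= 0 ? base.substring(underscore + 1) : base;
}

// A column is treated as the flag column of the preceding data column when its header is "Qf".
private static boolean isFlagColumn(String columnHeader) {
    return "Qf".equalsIgnoreCase(columnHeader.trim());
}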

Configuring the Import

The reader is named WSCCCsv which should be configured in the general section of the import. An example import configuration is shown
below:

<?xml version="1.0" encoding="UTF-8"?>


<timeSeriesImportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<import>
<general>
<importType>WSCCCsv</importType>
<folder>$IMPORT_FOLDER$/WSCC</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<idMapId>IdImportWSCC</idMapId>
<flagConversionsId>ImportFlagConversions</flagConversionsId>
<importTimeZone>
<timeZoneOffset>+08:00</timeZoneOffset>
</importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportWSCC</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>WSCC_Level</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="3650"/>
</timeSeriesSet>
</import>
</timeSeriesImportRun>

An example IdMapping file (that maps the first column of the also attached example input file) is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<parameter internal="[Link]" external="LEV"/>
<parameter internal="[Link]" external="FLW"/>
<parameter internal="[Link]" external="RF"/>
<parameter internal="[Link]" external="VOL"/>
<!-- volume -->
<location internal="LowerPierce" external="S23-USR-LEV-RES-1"/>
</idMap>

An example flag conversions file for the WSCC data is shown below:

<?xml version="1.0" encoding="UTF-8"?>


<!--WSCC flag conversion file for Import-->
<flagConversions xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<!-- F - Telemetry Failed
N - Telemetry Normal
M - ManuallySet ( value set by the opeartor)
B - Blocked ( alarm disabled by the operator )
C - Calculation Failure
-->
<flagConversion>
<inputFlag>
<name>Telemetry Failed</name>
<value>F</value>
</inputFlag>
<outputFlag>
<name>ORIGINAL_MISSING</name>
<value>9</value>
<description>Missing value in originally observed series.
Note this is a special form of both ORIGINAL/UNRELIABLE
and ORIGINAL/RELIABLE.</description>
</outputFlag>
</flagConversion>
<flagConversion>
<inputFlag>
<name>Telemetry Normal</name>
<value>N</value>
</inputFlag>
<outputFlag>
<name>ORIGINAL_RELIABLE</name>
<value>0</value>
<description>Observed value retrieved from external data source.
Value is valid, marked as original reliable as validation is yet to be done</description>
</outputFlag>
</flagConversion>
<flagConversion>
<inputFlag>
<name>ManuallySet ( value set by the opeartor)</name>
<value>M</value>
</inputFlag>
<outputFlag>
<name>CORRECTED_RELIABLE</name>
<value>1</value>
<description>The original value was removed and corrected.
Correction may be through byteerpolation or manual editing</description>
</outputFlag>
</flagConversion>
<flagConversion>
<inputFlag>

<name>Blocked ( alarm disabled by the operator )</name>
<value>B</value>
</inputFlag>
<outputFlag>
<name>ORIGINAL_UNRELIABLE</name>
<value>6</value>
<description>Observed value retrieved from external data source.
Value is invalid due to validation limits set. Value is removed.</description>
</outputFlag>
</flagConversion>
<flagConversion>
<inputFlag>
<name>Calculation Failure</name>
<value>C</value>
</inputFlag>
<outputFlag>
<name>ORIGINAL_DOUBTFUL</name>
<value>3</value>
<description>Observed value retrieved from external data source.
Value is valid, but marked as suspect due to soft validation limits being exceeded</description>
</outputFlag>
</flagConversion>
<defaultOuputFlag>
<name>ORIGINAL_RELIABLE</name>
<value>0</value>
<description>The data value is the original value retrieved from an
external source and it successfully passes all validation criteria set.</description>
</defaultOuputFlag>
<missingValueFlag>
<name>ORIGINAL_MISSING</name>
<value>9</value>
</missingValueFlag>

</flagConversions>

The file format

Date Time TN-1 QF-1 TN-2 QF-2 TN-3 QF-3 TN-N QF-N
DataType date time float Char(1) float Char(1) float Char(1) float Char(1)
DataFormat ddmmyyyy hh:mm [Link] [Link] [Link] [Link]

NOTE:

1. All columns in the text file are separated by the comma (,) character.

2. The value of each data column contains the rainfall value for that particular time interval.
This rainfall value is derived as the current (raw accumulated pulse) value minus the previous (raw accumulated pulse) value,
in other words the rainfall (in mm) within that time interval (i.e. 10 minutes).

3. Definition of the data quality flags (QF) as listed below:

F Telemetry Failed

N Telemetry Normal

M ManuallySet (value set by the operator)

B Blocked (alarm disabled by the operator)

C Calculation Failure

4. The name of the text file will be 'yyyymmddhhmm_PARAMETER',
e.g. 200703121500_RF.

5. TN stands for TagName (parameter name) as defined in the SCADA System. The user can also customise the TN, which may then differ
from the SCADA System.

Example:

Date,Time,S23-USR-LEV-RES-1,Qf,S48(51)-UPP-LEV-RES-1,Qf,S46(50)-LPP-LEV-RES-1,Qf,S46(24)-MCP-LEV-RES-1,Qf
06052007,02:10

Singapore OMS Lake Diagnostic System files

Singapore OMS Lake Diagnostic System file format (Since 2010_01)

Files in the Singapore OMS Lake Diagnostic System file format are expected to be text files.

Implementation details

First, the location is determined from the first two characters of the filename. The import module will scan the input for the following lines:

SCHEDULE A
SCHEDULE B
Data Section

The parameters are collected from lines containing a slash '/', where the actual parameterId is taken from the left part before the slash. The
parameters for SCHEDULE A are retrieved from the lines between SCHEDULE A and SCHEDULE B. The parameters for SCHEDULE B are
retrieved from the lines between SCHEDULE B and Data Section. The actual data with measurements is retrieved from the lines after the Data
Section tag. Each line with data that starts with 'A' or 'B' will also contain a number of space-delimited values. The first value is the Julian day
since 1-Jan-2000. The second value is the number of seconds in that day. The timestamp derived from this is supposed to be in the same
timezone as specified in the importTimeZone value of the import configuration. The other values are measurement data, for each parameter in the
corresponding schedule. The import will determine the timestep from the difference in time between the first two data rows for each schedule. The
other rows in a schedule must follow the same timestep for that schedule, otherwise the import run will produce error messages. Also, when no
data values are found for either schedule A or B, a warning message is produced.
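
A minimal sketch of how such a day/seconds pair could be turned into a timestamp (this assumes day 1 corresponds to 1 January 2000 in the configured import time zone; the actual FEWS implementation may differ):

import java.util.Calendar;
import java.util.TimeZone;

static long ldsTime(int julianDaySince2000, int secondsInDay, TimeZone importTimeZone) {
    Calendar cal = Calendar.getInstance(importTimeZone);
    cal.clear();
    cal.set(2000, Calendar.JANUARY, 1);                     // assumed reference date
    cal.add(Calendar.DAY_OF_MONTH, julianDaySince2000 - 1);
    cal.add(Calendar.SECOND, secondsInDay);
    return cal.getTimeInMillis();
}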

Sample Input

The following sample shows equidistant timeseries data for schedule A with a timestep of 30 seconds and equidistant timeseries data for
schedule B with a timestep of 15 minutes. The sample also shows the definition of two parameters TCHN 1 and TCHN 2 for schedule A and two
parameters CHAN 8 for schedule B. The current implementation rounds the actual timestamps to make them acceptable as equidistant data for

FEWS.

Sample configuration file for importing Singapore OMS Lake Diagnostic System files

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="


[Link]
[Link] xmlns="
[Link]
<import>
<general>
<importType>SingaporeLDS</importType>
<folder>$IMPORT_FOLDER$/Lds</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$/Lds</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$/Lds</backupFolder>
<idMapId>IdImportLds</idMapId>
<importTimeZone>
<timeZoneOffset>+08:00</timeZoneOffset>
</importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportLds</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>LDS_NTU-CWR</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="10"/>
</timeSeriesSet>
</import>

</timeSeriesImportRun>
]]>

Sample idMapping

When working with parameters at different depths the identifiers sometimes have to be mapped.
For instance, suppose the parameterId in the input file was TCHN 1 and the location derived from the filename was t4; this can easily be
mapped to the internal location MCH1 and internal parameter [Link] using the following id mapping.

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="TCHN 1" internalparameter="[Link]" internallocation="MCH1"
externallocation="t4"/>
...
</idMap>
]]>

EasyQ
Files in the EasyQ file format are expected to be text files. The recommended timezone to be configured is (GMT+1). The input files for the parser
are header files (.hdr) describing the data files and the data files themselves (.sen or .dat). The locationId is retrieved from the filename. If there is
a space in the filename, the locationId will be taken from the part before the first space.

Structure of the header file

The parser seeks the position in the header file where the variables are defined. This section always starts with 'Data file format' and a separator
line. The next line contains the reference to the data file. The parser will ignore the specified absolute file path and only scan whether the
extension .dat or .sen has been provided. The lines following the path describe the columns in the data file. The following sample illustrates a
piece of a typical header section. Each line contains a 1-based column index, followed by the header name, and finally the unit. The parser
expects fixed column-lengths.

Structure of the data files

The following snippet illustrates a sample data file. The first six columns are used to set the time step for the measurements.

McIdasArea
Introduction

The McIdas Area file format is available from the Space Science and Engineering Center. Basically, each McIdas Area file is a binary file containing
several header blocks followed by a data block. For MinoSil the values in the data block are 4-byte big-endian integers.

Julian day conversion for MinoSil

The data files for Minosil differ slightly from the 2006 specifications. Instead they use the function below for calculating the day. The first header
block is the directory block, which contains 64 integers. The following snippet illustrates how the datestamp is calculated from a pair of integers
starting at the specified offset.

CUMULATIVE_DAYS[monthIndex]) monthIndex++;
int month = (monthIndex + 1) % 12;
if (month == 0) month = 12;

int dayOfMonth = days;


if (month > 1) {
dayOfMonth -= CUMULATIVE_DAYS[monthIndex - 1];
}
String timeString = "" + getIntVariable(offset + 1);
int length = 6 - [Link]();
for (int i = 0; i < length; i++) {
timeString = "0" + timeString;
}
int hours = [Link]([Link](0, 2));
int minutes = [Link]([Link](2, 4));
int seconds = [Link]([Link](4, 6));
return [Link](years, month, dayOfMonth, hours, minutes, seconds);
}
]]>

Sample import configuration

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="none" internalparameter="[Link]" internallocation="Radar2000"
externallocation="none"/>
</idMap>
]]>

Sample import run configuration

<import>
<general>
<importType>McIDASArea</importType>
<folder>$IMPORT_FOLDER$/Radar</folder>

<idMapId>ImportRadar</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportRadar</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radar2000</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="60"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
<externUnit unit="0.001 mm" parameterid="[Link]"/>
</import>

]]>

Sample grids configuration

<regular locationid="Radar2000">
<rows>760</rows>
<columns>760</columns>
<lambertConformalConic>
<originLatitude>0</originLatitude>
<originLongitude>0</originLongitude>
<firstStandardParallelLatitude>33.5</firstStandardParallelLatitude>
<secondStandardParallelLatitude>46.5</secondStandardParallelLatitude>
</lambertConformalConic>
<firstCellCenter>
<x>-1012857.84</x>
<y>5541058.269</y>
</firstCellCenter>
<xCellSize>2000</xCellSize>
<yCellSize>2000</yCellSize>
</regular>
...
]]>

Keller IDC

Overview

Imports time series data from Keller IDC files.


The data from the different channels is read. Usually these are the pressures and temperatures at the two sensors and the pressure difference,
which is an indication of the water level fluctuations.

Next to the time series read from the channels, the following metadata is also read and stored:

InstallationDepth
HeightOfWellhead
Offset
WaterDensity
BatteryCapacity

Notice: to store the data from the channels you should use the channel number as external parameter. To store the meta data you should use the
above listed keys as external parameter (case sensitive!).

Configuration

keller
..
]]>

<parameter internal="[Link]" external="1"/>


<parameter internal="[Link]" external="2"/>
<parameter internal="[Link]" external="4"/>
<parameter internal="[Link]" external="5"/>
<parameter internal="Inhangdiepte" external="InstallationDepth"/>
<parameter internal="Bovenkant buis" external="HeightOfWellhead"/>
<parameter internal="Referentieniveau" external="Offset"/>
<parameter internal="Dichtheid" external="WaterDensity"/>
<parameter internal="Batterijspanning" external="BatteryCapacity"/>
]]>

Java source code

[Link]

[Link]

package [Link];

import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];

import [Link];
import [Link];

/**
* TimeSeries reader for Keller AG *.IDC files
*
*/
public class IdcTimeSeriesParser implements BinaryParser<TimeSeriesContentHandler> {

private static final Logger log = [Link]([Link]);

// Constants
private static final int DF_ID_FILE = 0;
private static final int DF_ID_DEVICE = 1;
private static final int DF_ID_DATA = 2;
private static final int DF_ID_UNITS = 3;
private static final int DF_ID_PROFILE = 4;
private static final int DF_ID_CONFIG = 5;
private static final int DF_ID_WL_CONVERTED = 6;
private static final int DF_ID_AIR_COMPENSATED = 7;
private static final int DF_ID_INFO = 8;

//
private LittleEndianDataInputStream is = null;
private TimeSeriesContentHandler contentHandler = null;
private int rawTimeZoneOffset;

/**
* Parse Keller AG *.idc file
*
* @param inputStream
* @param virtualFileName
* @param contentHandler
* @throws Exception
*/

@Override
public void parse(BufferedInputStream inputStream, String virtualFileName,
TimeSeriesContentHandler contentHandler) throws Exception {

[Link] = new LittleEndianDataInputStream(inputStream);


[Link] = contentHandler;
[Link] = [Link]().getRawOffset();

boolean abVersion0310 = false;


String locationId = "";

float[] userValArr = new float[12];

float installationDepth = 0f;


float heightOfWellhead = 0f;
float offset = 0f;
float waterDensity = 0f;
byte batteryCapacity = 0;

long startTime = Long.MIN_VALUE;

DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();

DefaultTimeSeriesHeader headerEq = new DefaultTimeSeriesHeader();

// Continue to read lines while


// there are still some left to read
while (true) {
[Link](1);
int nextByte = [Link]();
if (nextByte == -1) break; // EOF
[Link]();
// Read block ID
short blockId = readBlock();

switch (blockId) {
case DF_ID_FILE:
// File Identification
String version = readString();
if ([Link]()) [Link]("version = " + version);
break;

case DF_ID_DEVICE:

// Device properties
int lw = [Link]();
int w1 = lw / 65536;
int w2 = lw % 65536;

int klasse = w1 / 256;


int groep = w1 % 256;
int jaar = w2 / 256;
int week = w2 % 256;

if (jaar == 3) {
if (week >= 10) {
abVersion0310 = true;
}
}
if (jaar > 3) {
abVersion0310 = true;
}

int serialNumber = [Link]();

boolean configuredAsWaterlevel = [Link]();

locationId = readString();
String comment = readString();

if ([Link]()) {
[Link]("serial number = " + serialNumber);
[Link]("configured as waterlevel = " + configuredAsWaterlevel);
[Link]("comment = " + comment);
[Link]("location id = " + locationId);
}

break;

case DF_ID_DATA:

// Data records
int z = [Link]();
for (int i = 0; i < z; i++) {
// datum 8 bytes double
// channel 1 byte
// 3 bytes skip
// value 4 bytes float
// lw 4 bytes int
// 4 bytes skip
// The date is stored as the number of days since 30 Dec 1899. Quite why it is
not 31 Dec is not clear. 01 Jan 1900 has a days value of 2.
double doubleTime = [Link]();

long time = [Link](doubleTime);

// Keep the list of times for the installation depth series


if (startTime == Long.MIN_VALUE) startTime = time;

[Link](new Long(time));
byte channel = [Link]();

[Link](channel);

[Link](is, 3);
float singleValue = [Link]();
[Link](singleValue);
[Link]();
int longValue = [Link]();
// Skip 4 bytes
[Link](is, 4);
}
break;

case DF_ID_UNITS:

boolean retval = parseUnits(locationId, header);


if (!retval){
[Link]("Bestand bevat niet de juiste eenheden");
throw new Exception("Bestand bevat niet de juiste eenheden");
}
break;

case DF_ID_PROFILE:

// Read device profile


for (int i = 0; i < [Link]; i++) {
userValArr[i] = [Link]();
if ([Link]()) [Link]("userValArr " + userValArr[i]);
}

installationDepth = userValArr[2];
if ([Link]()) [Link]("installation depth " + installationDepth);

heightOfWellhead = userValArr[3];
if ([Link]()) [Link]("Height of wellhead above sea level " +
heightOfWellhead);

offset = userValArr[4];

if ([Link]()) [Link]("Offset " + offset);

waterDensity = userValArr[5];
if ([Link]()) [Link]("Water density " + waterDensity);

short availableChannels = [Link]();

if ((availableChannels & 2) == 2) {
float p1min = [Link]();
float p1max = [Link]();
if ([Link]()) [Link]("P1 min " + p1min);
if ([Link]()) [Link]("P1 max " + p1max);
}
if ((availableChannels & 4) == 4) {
float p2min = [Link]();
float p2max = [Link]();
if ([Link]()) [Link]("P2 min " + p2min);
if ([Link]()) [Link]("P2 max " + p2max);
}
if ((availableChannels & 8) == 8) {
float t1min = [Link]();
float t1max = [Link]();
if ([Link]()) [Link]("T1 min " + t1min);
if ([Link]()) [Link]("T1 max " + t1max);
}
if ((availableChannels & 16) == 16) {
float tob1min = [Link]();
float tob1max = [Link]();
if ([Link]()) [Link]("TOB1 min " + tob1min);
if ([Link]()) [Link]("TOB1 max " + tob1max);
}
if ((availableChannels & 32) == 32) {
float tob2min = [Link]();
float tob2max = [Link]();
if ([Link]()) [Link]("TOB2 min " + tob2min);
if ([Link]()) [Link]("TOB2 max " + tob2max);
}

break;

case DF_ID_CONFIG:

// Record configuration
int startDate = [Link]();
int stopDate = [Link]();
lw = [Link]();

int recordedChannels = lw / 65536;


int recordModus = lw % 65536;

float trigger1 = [Link]();


float trigger2 = [Link]();

if (abVersion0310) {
int recFixCounter = [Link]();
short recModCounter = [Link]();
} else {
lw = [Link]();
int recFixCounter = lw / 65536;
int tmp = lw % 65536;
short recModCounter = (short) tmp;
}

short sw = [Link]();

int tmp = sw / 256;


short recModChannel = (short) tmp;
tmp = sw % 256;
short recSaveCounter = (short) tmp;

short recFastModCounter = [Link]();
boolean recEndless = [Link]();
break;

case DF_ID_WL_CONVERTED:

// Waterlevel converted
boolean convertedIntoWaterlevel = [Link]();
break;

case DF_ID_AIR_COMPENSATED:

// Airpressure compensation
boolean airCompensated = [Link]();
break;

case DF_ID_INFO:

// Additional information
batteryCapacity = [Link]();
for (int i = 0; i < 10; i++) {
int reserve = [Link]();
}
// Read CRC16 sum of the whole file
short crc16 = [Link]();
break;
}
}

// Installation depth
[Link](locationId);
[Link](startTime);

[Link]("InstallationDepth");
[Link]("m");
[Link](headerEq);
[Link](installationDepth);
[Link]();

[Link]("HeightOfWellhead");
[Link]("m");
[Link](headerEq);
[Link](heightOfWellhead);
[Link]();

[Link]("Offset");
[Link]("m");
[Link](headerEq);
[Link](offset);
[Link]();

[Link]("WaterDensity");
[Link]("kg/m3");
[Link](headerEq);
[Link](waterDensity);
[Link]();

[Link]("BatteryCapacity");
[Link]("%");
[Link](headerEq);
[Link](batteryCapacity);
[Link]();
}

/**
* Read block identification
* @return block identification
* @throws IOException
*/

private short readBlock() throws IOException {
short block = [Link]();
short w1 = [Link]();
short w2 = [Link]();

int n = (65536 * w1) + w2;


return block;
}

/**
*
*
*
* @param locationId
* @param header
* @return
* @throws [Link]
*/
private boolean parseUnits(
String locationId,
DefaultTimeSeriesHeader header) throws IOException {

boolean retval = true;


short availableChannels = [Link]();
short amountOfUnits = [Link]();

for (int i = 0; i < amountOfUnits; i++) {

// Channel
byte channel = [Link]();

// Unit
byte[] bytes = new byte[7];
[Link](bytes, 0, 7);
String unit = new String(bytes, 1, bytes[0]);
if([Link]("m")){
retval = false;
}

[Link](unit);
if ([Link]("°C")) {
[Link]("deg C");
}

// Multiplier
float multiplier = [Link]();
// Offset
float offset = [Link]();

// Description
bytes = new byte[41];
[Link](bytes, 0, 41);
String description = new String(bytes, 1, bytes[0]);
[Link](is, 3);

if ([Link]()) {
[Link]("channel " + channel);
[Link]("multiplier " + multiplier);
[Link]("offset " + offset);
[Link]("unit " + unit);
[Link]("description " + description);
}

[Link](locationId);
[Link]([Link](channel));

[Link](channel, header);
}

return retval;
}

/**
* Read a string from file
* @return string from file
* @throws Exception
*/
private String readString() throws IOException {
String retval = "";
// read the length of the string
short length = [Link]();
if (length > 0) {
// Create the byte array to hold the data
byte[] bytes = new byte[length];

int nob = [Link](bytes, 0, length);


if (nob == length) {
retval = new String(bytes);
}
}
return retval;

}
}

Obserview

Overview

The Obserview format is an ASCII data file format exported from the Obserview system. Each file may contain only one time series.
The time series is identified by the file name; the ASCII file itself contains no information on the location or parameter.

Configuration

The Obserview file does not contain a location or a parameter ID. To import these series into FEWS you have to map the file name to both the
FEWS location ID and the parameter ID. The configuration of the import module instance is no different from any other import type. Because the
file does not contain any parameter unit information, the "external" unit can be specified in the import module instance.

Example of the import module instance:

<general>
<importType>OBSERVIEW</importType>
<folder>$IMPORT_FOLDER$/Obsv</folder>
....
</general>
<timeSeriesSet>
<moduleInstanceId>Import_Obsv</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>ObsLocSet</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
....
<externUnit unit="0.01 m3/s" parameterid="[Link]"/>
<externUnit unit="mm" parameterid="[Link]"/>

]]>

Example of the id mapping:

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="Trend [Link]" internalparameter="H" internallocation="Loc1"
externallocation="Trend [Link]"/>
<map externalparameter="Afvoer [Link]" internalparameter="Q" internallocation="Loc1"
externallocation="Afvoer [Link]"/>
.....
]]></idMap>

Example file

The ASCII file contains three columns divided by a ";".

Column 1: date (format DD-MM-YYYY)


Column 2: time (HH:mm:ss)
Column 3: value

27-11-2009;[Link];9119
27-11-2009;[Link];9118
27-11-2009;[Link];9118
27-11-2009;[Link];9125
27-11-2009;[Link];9123
27-11-2009;[Link];9120
27-11-2009;[Link];9124
27-11-2009;[Link];9120
27-11-2009;[Link];9123

generalCsv

Overview

Imports time series data from files in CSV format with one header line containing the column headers of the time series:

The first line contains the column names (fields) in the csv file; this line is used to determine the field separator and the
names of the data columns.
All other lines contain the date-time field and the values for each time series.
Values between -1000.0 and -999.0 (inclusive) are regarded as missing values.

The CSV files can be supplied in a ZIP file.

Import type

The import type is generalCSV. There is no particular file extension required.

Example

Here is a simple example:

Time,Waterstand,Pomp-1 Born
04-05-2011 03:24,0.000000,-0.450000
04-05-2011 03:44,0.000000,-0.450000
04-05-2011 03:54,0.000000,-0.440000

.....

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>generalCSV</importType>
<folder>$IMPORT_FOLDER$/OBS</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$/OBS</backupFolder>

<table>
<dateTimeColumn pattern="dd-MM-yyyy HH:mm" name="Time"/>
<valueColumn unit="m" locationid="Bosscheveld" parameterid="[Link]" name=
"Waterstand"/>
<valueColumn unit="min" locationid="Bosscheveld" parameterid="[Link]" name=
"Pomp-1 Born"/>
</table>
<idMapId>IdImportOBS</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
</general>
</import>
</timeSeriesImportRun>
]]>

Details of the import format

If the first line contains a comma, the field separator is taken to be a comma and the decimal separator a period (.); otherwise the field separator is
assumed to be a semicolon (;) and the decimal separator a comma. This way locale-specific CSV files are supported.

The field separator is either a comma or a semicolon. Tabs are not supported.
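
A minimal sketch of the separator rule and the missing-value range described above (not the actual FEWS parser; Float.NaN stands in for the missing value):

static char fieldSeparator(String headerLine) {
    return headerLine.indexOf(',') >= 0 ? ',' : ';';
}

static float parseValue(String token, char fieldSeparator) {
    // with ';' as field separator the decimal separator is assumed to be ','
    String normalized = fieldSeparator == ';' ? token.replace(',', '.') : token;
    float value = Float.parseFloat(normalized.trim());
    return (value >= -1000.0f && value <= -999.0f) ? Float.NaN : value;
}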

DINO Service

Overview

Imports time series data from the GrondWaterService hosted by TNO.

The GrondWaterService offers a wide variety of data; however, only imports for the following data types have been implemented:

Ground water levels


Ground water statistics

Ground water levels

The ground water levels request returns a list of time/value pairs of the measured ground water levels for a measuring station.

Ground water statistics

The ground water statistics request returns a list containing a number of times and for each time a set of statistical parameters that apply to the
ground water level. The following parameters are returned:

MAX_LEVEL
MIN_LEVEL
STD_LEVEL
MEAN_LEVEL
MEDIAN_LEVEL
P10_LEVEL
P25_LEVEL
P75_LEVEL
P90_LEVEL

Configuration (Example)

A complete import module configuration consists of an ID Mapping file and a Import Module Instance file.

ModuleConfigFiles/

The following example of an Import Module Instance will import the time series as non-equidistant series.

[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>dinoservice</importType>
<serverUrl>[Link]
<relativeViewPeriod unit="day" start="-365" end="0"/>
<idMapId>IdImportDino</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
<!-- <timeZoneOffset>+01:00</timeZoneOffset>-->
</importTimeZone>
<dataFeedId>tnonitg</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>G.m</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>

<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

IdMapFiles/

Defines mappings between DINO and FEWS parameters and locations.

[Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">

<parameter externalqualifier="SFL" internal="G.m" external="findMeetreeks"/>


<parameter externalqualifier1="MAX_LEVEL" internal="[Link]" externalqualifier="SFL"
external="findGrondwaterStatistiek"/>
<parameter externalqualifier1="MIN_LEVEL" internal="[Link]" externalqualifier="SFL"
external="findGrondwaterStatistiek"/>
<parameter externalqualifier1="MEAN_LEVEL" internal="[Link]" externalqualifier="SFL"
external="findGrondwaterStatistiek"/>

<!--data reeksen voor grondwatergegevens: level-->


<location internal="NL-B01D0040-001" external="B01D0040-001"/>
<location internal="NL-B03D0308-001" external="B03D0308-001"/>
<location internal="NL-B03G0080-001" external="B03G0080-001"/>
<location internal="NL-B03G0080-002" external="B03G0080-002"/>
<location internal="NL-B06F0094-001" external="B06F0094-001"/>
<location internal="NL-B06F0094-002" external="B06F0094-002"/>
<location internal="NL-B06F0094-003" external="B06F0094-003"/>
...
</idMap>
]]>

Important in this configuration is the externalQualifier, which is used to map the statistical parameters to FEWS parameters.

GrondWaterService WSDL

[Link]

Example SOAP request and response

Ground Water Levels - Request


<soapenv:Header/>
<soapenv:Body>
<ws:findMeetreeks>
<WELL_NITG_NR>B14G0057</WELL_NITG_NR>
<WELL_TUBE_NR>002</WELL_TUBE_NR>
<START_DATE>2007-01-01</START_DATE>
<END_DATE>2008-12-31</END_DATE>
<UNIT>SFL</UNIT>
</ws:findMeetreeks>
</soapenv:Body>

]]>

Ground Water Levels - Response
<S:Body>
<ns2:findMeetreeksResponse xmlns:ns2="[Link]
<GROUND_WATER_LEVELS>
<LEVELS>
<DATE>2007-01-01+01:00</DATE>
<LEVEL>187.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-02+01:00</DATE>
<LEVEL>190.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-03+01:00</DATE>
<LEVEL>193.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-04+01:00</DATE>
<LEVEL>188.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-05+01:00</DATE>
<LEVEL>190.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-05+01:00</DATE>
<LEVEL>189.0</LEVEL>
<REMARK/>
</LEVELS>
</GROUND_WATER_LEVELS>
</ns2:findMeetreeksResponse>
</S:Body>

]]>

Java source code

[Link]

GermanSnow

Overview

This import is available in DELFT-FEWS versions after 2010.02

GermanSnow imports grid time series data from the ASCII files produced by the German SNOW model.

The SNOW files store three data types:

simulated data for (usually) 30 hours before the forecast time,


forecasted data for (usually) 72 hours after the forecast time,
status of the snow cover

All these three data types can be imported with GermanSnow.

Structure of the SNOW file

The first line contains the forecast time in the format "YYYY MM DD HH". The time zone is always UTC.

The forecast time may be preceded by the keyword: " - Datum: YYYY MM DD HH".

The second record contains the parameter Id and the time offset in hours relative to the forecast time.
The next lines contain grid data. The start point of the grid in the SNOW file is the South-West corner.

The following snippet illustrates a sample data file :

2011 1 10 18
OBWN -29
2.0 2.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 2.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OBND -29
1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 1.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OBWN -28
0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OBND -28
......
......

The record with the forecast time may exceptionally be followed by a line according to this format:
" - Limits: [ X1- X2; Y2- Y2] - [ X1- X2; Y2- Y2]"
This information is associated with the corrections made by the SNOW model. The reader ignores this line.
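
A minimal sketch of how the forecast time line and a block offset combine into the time of a data block (the helper name is illustrative; the actual parser may differ):

import java.util.Calendar;
import java.util.TimeZone;

// Forecast time "YYYY MM DD HH" is in UTC; the offset of a block header such as "OBWN -29" is in hours.
static long blockTime(String forecastLine, int offsetHours) {
    String[] f = forecastLine.trim().split("\\s+");
    Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
    cal.clear();
    cal.set(Integer.parseInt(f[0]), Integer.parseInt(f[1]) - 1,
            Integer.parseInt(f[2]), Integer.parseInt(f[3]), 0, 0);
    cal.add(Calendar.HOUR_OF_DAY, offsetHours);
    return cal.getTimeInMillis();
}

// blockTime("2011 1 10 18", -29) corresponds to 2011-01-09 13:00 UTC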

Parser features

The parser must know the size of the grid to read from the SNOW files, so the user must configure the grid geometry in the [Link] file.
The parser also expects that all parameters have the same grid size.

Delft3D-Flow

Overview

Import Delft3D Flow model results that are stored in the NEFIS file format. There are 2 types of Delft3D Flow model results:

Point-based output
Grid-based output.

In both cases the data are stored in a NEFIS file, but the structure is different.

Input from NEFIS

Currently the Delft3DFlow import can only import point-based results, which are stored in a number of NEFIS structures (groups, cells and elements,
as they are defined in the NEFIS file format):

Group            Element   Type                        Unit               Example             Comment

his-const        ITDATE    INTEGER [2]                 YYYYMMDD, HHmmSS   20050101, 100101    start date and time
his-const        TUNIT     INTEGER                     sec                60                  time unit
his-const        NAMST     CHARACTER*20 [nLocations]                      "st1", "st2", ...   location names
his-info-series  ITHISC    INTEGER [nCells]                               0, 10, ...          time step number for each cell in a group
his-series       ZWL       REAL [nCells]               m                  1.1, 1.2, ...       water level values for each time step defined in the ITHISC
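
One plausible reading of how these elements combine into timestamps, which is an assumption rather than something stated in the table above, is that the time of cell i equals ITDATE plus ITHISC[i] times TUNIT seconds:

import java.util.Calendar;
import java.util.TimeZone;

// Assumption: cell time = ITDATE (yyyymmdd, hhmmss) + ITHISC * TUNIT seconds.
static long cellTime(int itDateYmd, int itDateHms, int tunitSeconds, int ithisc, TimeZone tz) {
    Calendar cal = Calendar.getInstance(tz);
    cal.clear();
    cal.set(itDateYmd / 10000, (itDateYmd / 100) % 100 - 1, itDateYmd % 100,
            itDateHms / 10000, (itDateHms / 100) % 100, itDateHms % 100);
    cal.add(Calendar.SECOND, ithisc * tunitSeconds);
    return cal.getTimeInMillis();
}

// cellTime(20050101, 100101, 60, 10, tz) gives 2005-01-01 10:11:01 in the given time zone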

Names and dimensions of the variables available in the NEFIS Delft3D Flow results file:

Variable   Name                                                            Group       Element    Comment
                                                                            dimension   dimension

ZWL        Water-level in station (zeta point)                             [5761]      [43]
ZVICWV     Vertical eddy viscosity-3D in station (zeta point)              [5761]      [ 1]
ZTUR       Turbulent quantity per layer in station (zeta point)            [5761]      [ 1]
ZTAUKS     Bottom stress U in station (zeta point)                         [5761]      [43]
ZTAUET     Bottom stress V in station (zeta point)                         [5761]      [43]
ZRICH      Richardson number in station (zeta point)                       [5761]      [ 1]
ZRHO       Density per layer in station (zeta point)                       [5761]      [ 1]
ZQYK       V-discharge per layer in station (zeta point)                   [5761]      [43, 1]
ZQXK       U-discharge per layer in station (zeta point)                   [5761]      [43, 1]
ZDICWW     Vertical eddy diffusivity-3D in station (zeta point)            [5761]      [43, 1]
ZCURW      W-velocity per layer in station (zeta point)                    [5761]      [ 1]
ZCURV      V-velocity per layer in station (zeta point)                    [5761]      [43, 1]
ZCURU      U-velocity per layer in station (zeta point)                    [5761]      [43, 1]
HYDPRES    Non-hydrostatic pressure at station (zeta point)                [5761]      [ 1]
GRO        Concentrations per layer in station (zeta point)                [5761]      [ 1]       concentration of what
FLTR       Total discharge through cross section (velocity points)         [5761]      [16]       can not be imported, requires velocity points
DTR        Dispersive transport through cross section (velocity points)    [5761]      [ 1]       strange that dimension differs from previous - 1 versus 16
CTR        Monumentary discharge through cross section (velocity points)   [5761]      [16]       can not be imported, requires velocity points
CTR        Advective transport through cross section (velocity points)     [5761]      [ 1]       can not be imported, requires velocity points

Configuration

To import location-based data from a Delft3D Flow NEFIS file, set up a TimeSeriesImport module configuration like:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>Delft3DFlow</importType>
<folder
>../junit_test_output/nl/wldelft/fews/system/plugin/dataImport/TimeSeriesImportTestData/import/delft3dflow</
folder>
<idMapId>delft3dflowMapId</idMapId>
<importTimeZone><timeZoneOffset>+01:00</timeZoneOffset></importTimeZone>
</general>

<timeSeriesSet>
<moduleInstanceId>ImportDelft3DFlow</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Z.m</parameterId>
<locationId>shepelevo</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="4"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

CERF
Overview

CERF is a regionalised conceptual rainfall-runoff model from the Center for Ecology and Hydrology and the Environment Agency (UK).

The CERF gridded rainfall data is a simple CSV format file in which the data in each record refers to a specific day and the records increment in
chronological order. The columns within the file contain the daily rainfall for each grid cell within the export. The columns are organised in
"strips" of data for 1 km cells with a common Northing and incrementing Eastings; within each strip the Eastings increment from left to right.

Data format

The first column of the file contains the date (format: <day>/<month>/<year>). The first record contains the column headers which for a grid
extract covering a box denoted by the lower left and upper right grid coordinates (nnnE, mmmN), (nnn+10E, mmm+5N) would consist of the
following

Date, nnnE-mmmN,..nnn+10E-mmmN,nnnE-mmm+1N....nnn+10E-mmm+1N...etc.

Grid references are quoted in units of 1000m and rainfall data are in units of mm/day.

The first element is always "Date" and can be ignored.

The next elements of the header are the coordinates of the grid cell centers, starting at the lower left corner.
Grid cells increment first for the first row from west to east, then for the second row from west to east, etc. The rows increment from south to
north. The last grid cell is the upper right corner of the grid. The grid is always rectangular with cells of 1 km x 1 km, without missing cells, so only
the coordinates of the first and last column header are needed for the grid definition (see the sketch after the example below).

A (truncated) example of a CERF Ascii Grid data file:

Date, 410E-505N , 411E-505N , 412E-505N , ... 454e-506n
01/12/1963, 1.53 , 1.51 , 1.46 , ... 0
02/12/1963, 0 , 0 , 0 , ... 0
03/12/1963, 0 , 0 , 0 , ... 0
04/12/1963, 0 , 0 , 0 , ... 0
05/12/1963, 0 , 0 , 0 , ... 0
06/12/1963, 0.17 , 0.15 , 0.12 , ... 0
07/12/1963, 0.77 , 0.74 , 0.69 , ... 1.32
08/12/1963, 0 , 0 , 0 , ... 0
09/12/1963, 0 , 0 , 0 , ... 0
.
.
.
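
Given the first and last data-column headers of the example above ("410E-505N" and "454e-506n"), a minimal sketch of deriving the grid size (the helper names are illustrative, not part of FEWS):

// columns = eastings, west to east; rows = northings, south to north
static int[] gridSize(String firstHeader, String lastHeader) {
    int[] first = parseCell(firstHeader);
    int[] last = parseCell(lastHeader);
    return new int[]{last[0] - first[0] + 1, last[1] - first[1] + 1};
}

static int[] parseCell(String header) {
    String[] parts = header.trim().toUpperCase().split("E-|N");
    return new int[]{Integer.parseInt(parts[0]), Integer.parseInt(parts[1])};
}

// gridSize("410E-505N", "454e-506n") gives 45 columns and 2 rows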

Configuration

To import a CERF Grid data file, configure a module like:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="


[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>CERF</importType>
<folder>$REGION_HOME$/Import/CERF</folder>
<idMapId>idImportCERF</idMapId>
<unitConversionsId>CERF_UnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>CERF_import</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>P</parameterId>
<locationSetId>CERF_all</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>

SWE

Overview

The Open Geospatial Consortium has developed the concept of Sensor Web Enablement to make sensor data available for the web. A number of
services have been specified including associated XML-formats. The most important XML-specification for FEWS seems to be the Observations
and Measurements model ([Link] ).

The following actions were taken:

Investigation of the XML model and interface to determine which XML files/objects should be supported to implement the Observations and
Measurements model
Implementation of a SweTimeSeriesParser to test whether import from a URL fits the current import strategy of FEWS.

XML model investigation


Object/Model mapping
The SWE model depends on other models such as the Geography Markup Language (GML), Geographic MetaData (GMD), OGC Web Services (OWS)
and the basic Open Geospatial Consortium (OGC) model itself. Supporting a generic object/model mapping for
SWE would involve hundreds of XML schema files. In FEWS, Castor is used to generate Java classes for object/model mapping. A test was performed to see
whether Castor could generate code for a single SWE file; [Link]. Castor did not succeed due to the embedded references to external
XML files.

O & M Interface investigation


During investigation of the XML model it was found that the model supports a range of requests and responses that essentially return the same data.
Although quite readable for humans, an automated process such as a time series parser has a hard time interpreting the responses and determining
which data it has received.

Before use!

During tests we managed to get servers to hang because the amount of XML data that had to be returned by the server caused OutOfMemory
exceptions. This occurred even for small queries (a short period of just a few days and only one location and parameter). To avoid these problems,
please ensure that you have tested your queries on the target service, and adjust your relative view period appropriately.

To avoid these memory exceptions, we have adjusted the import module so that it sends a request to the SWE service for every single location/parameter
combination in the IdMap file. A drawback of this approach is reduced import performance.

Configuration (Example)

In order to import data from an SWE-enabled service, the following items have to be configured:

The URL at which the SWE service is listening

The mapping between internal locations/parameters and the offerings, procedures and observed properties known to the SWE service

The URL can be configured in the moduleConfigFile for the desired SWE import. Use the serverUrl tag in the general section as described in the
following example:

[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>SWE</importType>
<serverUrl>[Link]
<relativeViewPeriod unit="week" start="-100" end="0"/>
<idMapId>IdImportKNMI_SWE</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>tnonitg</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>

The offerings, procedures and observedProperties are mapped in the idMap file for the SWE import. See the next example:

IdImportSwe
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link] [
[Link] version="1.1">
<parameter externalQualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:precipitationIntensity"/>
<parameter externalQualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:windSpeed"/>
<parameter externalQualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:airTemperature"/>
<parameter externalQualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:relativeHumidity"/>

<!-- locations: data for KNMI -->

<location internal="KNMI-6235" external="urn:ogc:object:sensor:KNMI:procedure06235"/>
<location internal="KNMI-6239" external="urn:ogc:object:sensor:KNMI:procedure06239"/>
<location internal="KNMI-6240" external="urn:ogc:object:sensor:KNMI:procedure06240"/>
<location internal="KNMI-6251" external="urn:ogc:object:sensor:KNMI:procedure06251"/>
<location internal="KNMI-6260" external="urn:ogc:object:sensor:KNMI:procedure06260"/>
<location internal="KNMI-6269" external="urn:ogc:object:sensor:KNMI:procedure06269"/>
<location internal="KNMI-6270" external="urn:ogc:object:sensor:KNMI:procedure06270"/>
<location internal="KNMI-6275" external="urn:ogc:object:sensor:KNMI:procedure06275"/>
<location internal="KNMI-6280" external="urn:ogc:object:sensor:KNMI:procedure06280"/>
<location internal="KNMI-6283" external="urn:ogc:object:sensor:KNMI:procedure06283"/>
<location internal="KNMI-6290" external="urn:ogc:object:sensor:KNMI:procedure06290"/>
<location internal="KNMI-6310" external="urn:ogc:object:sensor:KNMI:procedure06310"/>
<location internal="KNMI-6344" external="urn:ogc:object:sensor:KNMI:procedure06344"/>
<location internal="KNMI-6356" external="urn:ogc:object:sensor:KNMI:procedure06356"/>
<location internal="KNMI-6370" external="urn:ogc:object:sensor:KNMI:procedure06370"/>
<location internal="KNMI-6375" external="urn:ogc:object:sensor:KNMI:procedure06375"/>
<location internal="KNMI-6380" external="urn:ogc:object:sensor:KNMI:procedure06380"/>
<location internal="KNMI-6391" external="urn:ogc:object:sensor:KNMI:procedure06391"/>
</idMap>

Note the parameter mappings in the above example. Four different parameters are described:

The externalQualifier is required and describes which offering is used to get the data from the service.
The internal parameter name is mapped to the external 'observed property'.
The internal location is mapped to the external 'procedure'.
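As a rough illustration of how these three mapped items could be combined into one request per location/parameter combination, the sketch below composes an OGC SOS GetObservation request in key-value encoding. The server URL and the observed-property value are placeholders, and the requests actually sent by the FEWS SWE import may be structured differently:

// Illustrative sketch only: composes one GetObservation request per
// location/parameter combination. Key-value parameter names follow the OGC SOS
// convention; the requests actually sent by the FEWS SWE import may differ.
import java.net.URLEncoder;

public class SweRequestSketch {

    static String getObservationUrl(String serverUrl, String offering,
                                    String procedure, String observedProperty) throws Exception {
        return serverUrl + "?service=SOS&request=GetObservation"
                + "&offering=" + URLEncoder.encode(offering, "UTF-8")
                + "&procedure=" + URLEncoder.encode(procedure, "UTF-8")
                + "&observedProperty=" + URLEncoder.encode(observedProperty, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // offering = externalQualifier, procedure = external location id,
        // observedProperty = external parameter id from the id map above
        System.out.println(getObservationUrl(
                "http://example.org/sos",                    // placeholder server URL
                "weatherNL",
                "urn:ogc:object:sensor:KNMI:procedure06235",
                "airTemperature"));                          // placeholder observed property
    }
}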

NetcdfGridDataset

This import is available in DELFT-FEWS versions after 2010.02

Import NetcdfGridDataset uses the NetCDF library to read grid data in grib1, grib2 and NC formats.
The NetcdfGridDataset import supports:

importing ensembles,
importing grid data from separate layers (z-dimension),
importing grid data that are distributed over several files, e.g. all forecasts in one file, one file per forecast, et cetera.

Starting with DELFT-FEWS version 2011.02, NetcdfGridDataset replaces the grib1-specific import types that use the JGrib decoder.
NetcdfGridDataset is in use under several import types.
This is done for backward compatibility of existing configurations, and for logical naming of the import types from the customer's point of view.

The import types below are handled by NetcdfGridDataset:

Import Type          File Format   Remark
NetcdfGridDataset    NC            reads also grib1 and grib2, however with low performance
grib1                grib1
grib2                grib2
grib2dir             grib2         in use for backward compatibility of the configurations
grib_hirlam_flipped  grib2         in use for backward compatibility of the configurations
grib                 grib1         in use for backward compatibility of the configurations
gribBasic            grib1         in use for backward compatibility of the configurations
gribCosmo            grib1         in use for backward compatibility of the configurations

Note:
the old JGrib-based imports Grib, GribBasic and GribCosmo can still be used via the import types GribOld, GribBasicOld and GribCosmoOld.

IP1

Overview

This import is available in DELFT-FEWS versions after 2011.02

Imports ASCII type time series data in CSV formatted files. Used by FEWS-Basque

Structure of the IP1 file

The data file contains one row of data columns, separated by a comma (,).

The columns of interest are:

Column  Description                                   Id assigned by IP1 import
1       code location                                 -
2       date/time                                     -
10      air temperature [in 0.1 °C]                   temperature_air
18      rain [in 0.1 mm]                              precipitation
19      rain corrected [in 0.1 mm]                    precipitation_corr
21      water height (digital and priority) [in mm]   waterlevel_1
22      water height (analogue) [in mm]               waterlevel_2
34      discharge (flow meter) [in m3/s]              discharge

Date/time is formatted as "yyyyMMddHHmmss"

Location ID and date/time values are enclosed in double quotes ("). These are removed by the import.

The following illustrates the content of a sample data file:

"C0D2","20110912233000",0,0,0,0,0,0,0,159.1,96.7,0,0,0,0,0,0,0,0,0,190,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13.37

Configuration

To import IP1 data, configure a module like:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="


[Link] [Link]
xmlns="[Link]
<import>
<!--IP1-->
<general>
<importType>IP1</importType>

<folder>$IMPORT_FOLDER_IP1$</folder>
<failedFolder>$IMPORT_FOLDER_IP1$</failedFolder>
<backupFolder>$BACKUP_FOLDER_IP1$</backupFolder>
<idMapId>IdImportIP1</idMapId>
<unitConversionsId>ImportIP1Units</unitConversionsId>
<importTimeZone>
<!--EPS is in GMT-->
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>IP1-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>wlevel</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>T_air</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Precip</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>wlevel_ana</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Discharge</parameterId>

<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>

<externUnit unit="0.1 mm" cumulativesum="true" parameterid="Precip"/>


<externUnit unit="0.1 mm" cumulativesum="true" parameterid="T_air"/>
</import>
</timeSeriesImportRun>


idMapping

The parser assigns IDs for the parameters as indicated in the table above.
To map these to the FEWS parameters in use, an idMapping can be configured.
For example:

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="wlevel" external="waterlevel_1"/>
<parameter internal="wlevel_ana" external="waterlevel_2"/>
<parameter internal="T_air" external="temperature_air"/>
<parameter internal="Precip" external="precipitation"/>
<parameter internal="Discharge" external="discharge"/>
<location internal="KNMI_NL001" external="C0D2"/>
</idMap>

IFKIS

Overview

Imports time series data from a tabular ASCII formatted file.

Input file structure

The input file is of ASCII type. Data is structured in 14 columns, separated by whitespace.

The following snippet illustrates a sample data file:

810 2009 4 18 8 0.0000 4.5400 76.3414 122.3708 14.0900 1.5760 50.9380 0.0505 6.4438
810 2009 4 18 9 0.0000 5.2833 68.0622 221.7067 22.5300 2.0883 50.4450 0.0485 6.0487
810 2009 4 18 10 0.0000 6.1333 60.2494 248.2750 19.9600 2.5067 49.6933 0.0475 5.6804

The next table explains the columns of the input and states which parameter ID and unit are assigned by the time series parser during
import:

Column  Parameter description  Assigned parameter ID  Unit
1       Location ID            -                      -
2       Year                   -                      -
3       Month                  -                      -
4       Day                    -                      -
5       Hour                   -                      -
6       Precipitation          precipitation          mm
7       Air temperature        air_temperature        degC
8       Relative humidity      relative_humidity      %
9       Global radiation       global_radiation       W/m2
10      Sunshine duration      sunshine_duration      min
11      Wind speed             windspeed              m/s
12      Snow depth             snow_depth             cm
13      Discharge              discharge              m3/s
14      Vapour pressure        vapour_pressure        hPa

During import the location ID is set to the (numeric) value found in the first column of the row.

The time stamp is created from the next 4 columns (year, month, day, hour); minutes are set to zero, so values are hourly.
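The sketch below (illustrative only, not the actual FEWS IFKIS parser) shows how a row is split on whitespace and how the hourly time stamp is built from columns 2 to 5:

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

// Illustrative sketch only: splits one IFKIS row and builds the hourly time stamp.
public class IfkisRowSketch {

    public static void main(String[] args) {
        String row = "810 2009 4 18 8 0.0000 4.5400 76.3414 122.3708 14.0900 1.5760 50.9380 0.0505 6.4438";
        String[] cols = row.trim().split("\\s+");

        String locationId = cols[0];                               // column 1
        Calendar cal = new GregorianCalendar(TimeZone.getTimeZone("GMT+1")); // time zone from the import config
        cal.clear();
        cal.set(Integer.parseInt(cols[1]),                         // year
                Integer.parseInt(cols[2]) - 1,                     // month (Calendar months are 0-based)
                Integer.parseInt(cols[3]),                         // day
                Integer.parseInt(cols[4]), 0, 0);                  // hour, minutes forced to zero

        double precipitation = Double.parseDouble(cols[5]);        // column 6, mm
        double airTemperature = Double.parseDouble(cols[6]);       // column 7, degC

        System.out.println(locationId + " " + cal.getTime() + " P=" + precipitation + " T=" + airTemperature);
    }
}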

Configuration

To import, configure a module like this:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="


[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>IFKIS</importType>
<folder>$IMPORT_FOLDER_IFKIS$</folder>
<failedFolder>$IMPORT_FOLDER_IFKIS$</failedFolder>
<backupFolder>$IMPORT_FOLDER_IFKIS$</backupFolder>
<idMapId>IdImportIFKIS</idMapId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>IFKIS-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportIFKIS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P</parameterId>
<locationSetId>IFKIS-LOCS</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>

.
.
.

</import>
</timeSeriesImportRun>


idMapping

The parser assigns the numerical ID found on each row to the LocationID.
Parameters are named by the parser according to the table above.

To map these to the FEWS location and parameter IDs in use, an idMapping can be configured.
For example:

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="P" external="precipitation"/>
<location internal="The_FEWS_location1" external="810"/>
</idMap>

IJGKlepstanden

Overview

Imports IJsselmeer Klepstanden (valve positions) from an ASCII formatted file.

Input file structure

The input file is of ASCII type.

The following snippet illustrates a sample data file:

Stevinsluis Tue Jun 1 2010 [Link],Kolk1,Zuid,Stand:0


Stevinsluis Tue Jun 1 2010 [Link],Kolk2,Zuid,Stand:3
Stevinsluis Tue Jun 1 2010 [Link],Kolk3,Zuid,Stand:1
Stevinsluis Tue Jun 1 2010 [Link],Kolk1,Zuid,Stand:2

Each row in fact contains 6 columns; however, column separation is not consistent.
Separation characters in use are white space, comma and colon.
The data contained in each row is described below (using the first row from the snippet above as an example):

Column  Description           Value
1       Location              Stevinsluis
2       Time stamp            Tue Jun 1 2010 [Link]
3       External qualifier 1  Kolk1
4       External qualifier 2  Zuid
5       Parameter ID          Stand
6       Parameter value       0

The location, External qualifier 1 and External qualifier 2 are concatenated during the import to form the LocationID used to store the
time series.
For the first row in the data example above, the LocationID becomes Stevinsluis_Kolk1_Zuid.
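The sketch below (illustrative only, not the actual FEWS IJGKlepstanden parser) shows how one row is split using the different separation characters and how the LocationID is formed; the time stamp value is made up for the example:

// Illustrative sketch only: splits one IJGKlepstanden row and builds the LocationID.
public class KlepstandenRowSketch {

    public static void main(String[] args) {
        String row = "Stevinsluis Tue Jun 1 2010 12:00:00,Kolk1,Zuid,Stand:0"; // time stamp is illustrative
        String[] commaParts = row.split(",");

        String[] firstPart = commaParts[0].split("\\s+", 2);
        String location = firstPart[0];            // column 1
        String timeStamp = firstPart[1];           // column 2
        String qualifier1 = commaParts[1];         // column 3
        String qualifier2 = commaParts[2];         // column 4

        String[] paramAndValue = commaParts[3].split(":");
        String parameterId = paramAndValue[0];     // column 5, e.g. Stand
        double value = Double.parseDouble(paramAndValue[1]); // column 6

        String locationId = location + "_" + qualifier1 + "_" + qualifier2; // Stevinsluis_Kolk1_Zuid
        System.out.println(locationId + " " + timeStamp + " " + parameterId + "=" + value);
    }
}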

Configuration

In order to import IJGKlepstanden configure a module like this:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>IJGKlepstanden</importType>
<folder>$IMPORT_FOLDER_IJGKlepstanden$</folder>
<failedFolder>$IMPORT_FOLDER_IJGKlepstanden$</failedFolder>
<backupFolder>$IMPORT_FOLDER_IJGKlepstanden$</backupFolder>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>IJGKS-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportIJGKS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Klepstand</parameterId>
<locationSetId>IJGKS_Locs</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<ensembleId>IJGKS</ensembleId>
</timeSeriesSet>
.
.
.
</import>
</timeSeriesImportRun>

idMapping

Non-matching LocationIDs and ParameterIDs assigned during import can be mapped to the ones in use by the FEWS system by defining an ID
mapping.

For example:

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="Klepstand" external="Stand"/>
<location internal="STS_01_S" external="Stevinsluis_Kolk1_Zuid"/>
</idMap>

Radolan

Overview

Imports high-resolution precipitation analysis and forecast data from Radolan/Radvor-OP files from Deutscher Wetterdienst (DWD).
The data is grid data.

File structure

A range of data products from DWD radar observations, and/or derived from radar observations, are delivered in binary files. The header of the file
is ASCII text (hence human readable) and is used to determine for which parameter the file contains data.
The remainder of the file contains the data in binary format (a small sketch of reading the header follows the product list below).

This import can read data for the following data products; the product identifier is used as parameter ID during import:

Product identifiers: TZ, TH, RX, PC, PF, PN, PI, DX, DQ, DXQ
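As a rough illustration, the sketch below reads the ASCII header of a file and takes the leading letters as the product identifier. It is not the actual FEWS Radolan parser; in particular, the assumption that the header is terminated by an ETX character (0x03) and the sample header content are based on the public DWD format description and are only illustrative:

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: reads the ASCII header of a Radolan/Radvor-OP file
// and returns the leading letters as the product identifier.
public class RadolanHeaderSketch {

    static String readProductId(InputStream in) throws Exception {
        StringBuilder header = new StringBuilder();
        for (int b; (b = in.read()) != -1 && b != 0x03; ) { // assumed: header ends at ETX, rest is binary data
            header.append((char) b);
        }
        int i = 0;
        while (i < header.length() && Character.isLetter(header.charAt(i))) i++;
        return header.substring(0, i);                      // e.g. RX, PF, DXQ
    }

    public static void main(String[] args) throws Exception {
        byte[] sample = "RX010950100000912BY...\u0003".getBytes(StandardCharsets.ISO_8859_1); // header content illustrative
        System.out.println(readProductId(new ByteArrayInputStream(sample)));
    }
}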

Configuration

To import, configure a module like this:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="


[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>Radolan</importType>
<folder>$IMPORT_FOLDER_RADOLAN$</folder>
<failedFolder>$IMPORT_FOLDER_RADOLAN$</failedFolder>
<backupFolder>$IMPORT_FOLDER_RADOLAN$</backupFolder>
<idMapId>IdImportRadolan</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>RadolanRadar</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportRadolan</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>RadolanLocs</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="5"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>EPS</ensembleId>
</timeSeriesSet>

.
.
.

</import>
</timeSeriesImportRun>

idMapping

The parser assigns the product ID found in the header to the ParameterID.
The LocationID is taken from the header in the case of PF, DX, DQ and DXQ. In other cases the LocationID is not set.

To map these to the FEWS location and parameter IDs in use, an idMapping can be configured.
For example:

<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="[Link]" external="RX"/>
<location internal="Loc10410" external="10410"/>
</idMap>

Bayern

Overview

This import is available in DELFT-FEWS versions after 2011.01

Imports ASCII type time series data (level forecasts) from Bayern, location Raunheim am Main.

Structure of the Bayern file

Data is obtained through an HTTP request. Data obtained from the URL must be stored as an ASCII file in order for the parser to process it.
The data consists of three sections: a header, the time series data and a footer.

The header consists of three rows; the first and last contain only dash characters and are ignored by the parser.
The middle row contains the German keyword for location and the numerical ID of the location, separated by a | (pipe character).
The parser sets both the (external) LocationID and the ParameterID to this numerical ID value.

Between the header and the footer are the time series date/times and values.
Date/time and values are again separated by the | (pipe character).

Date/time is formatted as "dd.MM.yyyy HH:mm"

The following snippet illustrates a sample data file:

----------------------------
Messstelle | 24095302
----------------------------
21.07.2011 05:00 | 167
21.07.2011 06:00 | 165
21.07.2011 07:00 | 163
21.07.2011 08:00 | 160
.
.
.
23.07.2011 04:00 | 199
23.07.2011 05:00 | 199
----------------------------
Datenart: Wasserstand [cm]
Alle Vorhersagewerte ohne Gewähr.
Datenbankabfrage: 21.07.2011 09:33

The footer is completely ignored by the parser.
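The sketch below (illustrative only, not the actual FEWS Bayern parser) shows how the numerical ID is read from the header row and how the date/time and value rows are parsed, with the dashed separator lines and the footer being skipped:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Illustrative sketch only: parses the Bayern header and data rows.
public class BayernFileSketch {

    public static void main(String[] args) throws Exception {
        String[] lines = {
                "----------------------------",
                "Messstelle | 24095302",
                "----------------------------",
                "21.07.2011 05:00 | 167",
                "21.07.2011 06:00 | 165",
                "----------------------------",
                "Datenart: Wasserstand [cm]"
        };
        SimpleDateFormat format = new SimpleDateFormat("dd.MM.yyyy HH:mm");
        format.setTimeZone(TimeZone.getTimeZone("GMT")); // time zone from the import config

        String externalId = null;
        for (String line : lines) {
            if (line.startsWith("----")) continue;       // dashed separator lines are ignored
            String[] parts = line.split("\\|");
            if (parts.length != 2) break;                // footer reached, stop parsing
            if (externalId == null && line.startsWith("Messstelle")) {
                externalId = parts[1].trim();            // used as LocationID and ParameterID
                continue;
            }
            Date time = format.parse(parts[0].trim());
            double value = Double.parseDouble(parts[1].trim());
            System.out.println(externalId + " " + time + " " + value);
        }
    }
}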

Configuration

To import forecast data from Bayern, stored in an ASCII file, configure a module like:

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<!--Bayern-->
<general>
<importType>Bayern</importType>
<folder>$IMPORT_FOLDER_BAYERN$</folder>
<failedFolder>$IMPORT_FOLDER_BAYERN$</failedFolder>
<backupFolder>$BACKUP_FOLDER_BAYERN$</backupFolder>
<idMapId>IdImportBayern</idMapId>
<unitConversionsId>ImportBayernUnits</unitConversionsId>
<importTimeZone>
<!--EPS is in GMT-->
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Bayern-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportBayern</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>wlevel</parameterId>
<locationSetId>Bayern</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>Bayern</ensembleId>
</timeSeriesSet>
</import>
</timeSeriesImportRun>


idMapping

The parser assigns the numerical ID found in the header to both the LocationID and the ParameterID.
To map these to the FEWS location and parameter IDs in use, an idMapping can be configured.
For example:

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="[Link]" external="24095302"/>
<location internal="KNMI_NL001" external="24095302"/>
</idMap>

Custom time series import formats using Java


SINCE FEWS 2009.01

FEWS allows you to write your own time series import format in Java.

In the import module one can specify the fully qualified class name and the bin directory that contains a jar file with the compiled Java code and,
optionally, other third-party libraries. A jar file is just a zip file that contains your compiled Java class files.
e.g. classname = [Link]
and binDir = $REGION_HOME$/parsers/bin

The source code required for a simple format is only a few lines of code.
A parser tells the content handler everything it finds in a file, in the order in which it appears in the file. The content handler will map everything to the
right time series.
The content handler will do the id mapping, unit conversion, datum conversion, translation of text to decimal values, translation of text to date/times
with the specified time zone, conversion of missing values, trace values, and validation of the time series.

The content handler is also optimized for speed:

It can handle 4 million lines per minute (about 400 MB of text files).
It uses an optimized replacement for the Java SimpleDateFormat and DecimalFormat to achieve this performance.

The import module will also open and close the files for you.
The import files can reside on the file system, in a zip, tar or gz file, or on an FTP or SFTP server. The programmer will not notice the difference.
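A minimal sketch of such a parser is shown below. It assumes a simple text format with one value per line ("locationId;parameterId;yyyyMMdd HH:mm;value") and uses the parse method signature and the TimeSeriesContentHandler calls shown in the examples further down this page. The package names of the FEWS classes and the parser interface that the class must implement should be taken from your FEWS libraries and the interface documentation linked below; everything else is illustrative:

package org.example.fews.parsers; // illustrative; the fully qualified class name goes into the import configuration

import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import nl.wldelft.util.timeseries.DefaultTimeSeriesHeader;  // package assumed, check your FEWS libraries
import nl.wldelft.util.timeseries.TimeSeriesContentHandler; // package assumed, check your FEWS libraries

// Minimal illustrative parser for lines of the form "locationId;parameterId;yyyyMMdd HH:mm;value".
// In a real parser the class must also implement the FEWS parser interface; only the parse method is shown.
public class SimpleCsvTimeSeriesParser {

    public void parse(BufferedInputStream inputStream, String virtualFileName,
                      TimeSeriesContentHandler contentHandler) throws Exception {
        contentHandler.addMissingValue("-999"); // text values that should be treated as missing

        DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
        BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, "UTF-8"));
        for (String line; (line = reader.readLine()) != null; ) {
            if (line.trim().isEmpty()) continue;
            String[] cols = line.split(";");

            header.setLocationId(cols[0].trim());
            header.setParameterId(cols[1].trim());
            contentHandler.setTimeSeriesHeader(header); // the content handler performs the id mapping
            if (contentHandler.isCurrentTimeSeriesHeaderForAllTimesRejected()) continue;

            contentHandler.setTime(contentHandler.getDefaultTimeZone(), "yyyyMMdd HH:mm", cols[2]);
            contentHandler.setValue('.', cols[3]); // the content handler converts the text to a value
            contentHandler.applyCurrentFields();
        }
        // the import module opens and closes the stream, so the reader is not closed here
    }
}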

Time Series Content handler

The interface is described in: [Link]

Types of parsers

Text parsers

Most import files are text based.

Binary parser

Grid coverage files are often binary

File parsers

This kind of parser uses a third-party library that does not accept streams.

Database parsers

These parsers read from a database.


MS Access (mdb) or Firebird (fdb) files are automatically recognized
It is also possible to explicitly configure a connection to an external database in the import module

Server parsers

This kind of parser reads data from an (internet) URL.


A URL, username and password are provided to the parser.

Additional consumer interfaces

PeriodConsumer

Database and server parsers often need a period in their query to the database or server.
When this interface is implemented, the import module will provide an absolute period.

TimeSeriesHeadersConsumer

Database and server parsers often need the location and parameter ids in their queries.
When this interface is implemented, the import module will convert the FEWS headers with the specified id map and provide them to the parser. The
mapping is used in the opposite direction compared to the normal mapping. This can result in a different mapping when the id map is not one internal to
one external and vice versa.

VirtualDirConsumer

With a virtual dir the parser can open meta files, residing in the same directory as the imported file, that contain additional information required to
parse the imported file. For example, some grid coverage formats need an additional file for the geo-referencing.

Examples

TextParsers

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

[Link]

XML parsers

[Link]
[Link]

Binary parsers

[Link] (with header in a separate file)

[Link] (Little endian IEEE floats and integers)

[Link]
, VirtualInputDirConsumer, FileFilter {
private BufferedImage bufferedImage = null;
private Graphics2D graphics = null;
private int[] rgbs = null;
private float[] values = null;
private VirtualInputDir virtualInputDir = null;
private String virtualFileName = null;
private TimeSeriesContentHandler contentHandler = null;

@Override
public boolean accept(File pathname) {
return getWorldFileExt([Link](pathname)) != null;
}

@Override
public void setVirtualInputDir(VirtualInputDir virtualInputDir) {
[Link] = virtualInputDir;
}

@Override
public void parse(BufferedInputStream inputStream, String virtualFileName,
TimeSeriesContentHandler contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = contentHandler;
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
[Link]("image");
[Link]("image");
[Link](header);

if ([Link]()) return;

[Link](getTime(new File(virtualFileName).getName(),
[Link]()));
if ([Link]()) return;

loadImage(inputStream, virtualFileName);
Geometry geometry = loadGeometry();
[Link](geometry);
[Link](1.0f);
[Link](values);
[Link]();
}

private Geometry loadGeometry() throws Exception {


Geometry res = [Link]();
if (res == null) res = readWorldFile();

if ([Link]() != [Link]()) {
throw new Exception("Width of image file (" + [Link]()
+ ") differs from number of cols (" + [Link]() + ") in grid required
geometry or world file");
}

if ([Link]() != [Link]()) {
throw new Exception("Height of image file (" + [Link]()
+ ") differs from number of rows (" + [Link]() + ") in grid required
geometry or world file");
}
return res;
}

private void loadImage(InputStream inputStream, String fileName) throws Exception {


String ext = [Link](fileName);
String jaiFormatId = getJaiFormatId(ext);

if (jaiFormatId == null)
throw new Exception("Unsupported bitmap format " + inputStream);

ParameterBlock parameterBlock = new ParameterBlock();


SeekableStream seekableStream = new
ByteArraySeekableStream([Link](inputStream));
try {
[Link](seekableStream);
RenderedOp image = [Link](jaiFormatId, parameterBlock);
try {
checkBuffers([Link](), [Link]());
[Link](0, 0, [Link](), [Link]());
[Link]([Link](), 0, 0, null);
for (int i = 0; i < [Link]; i++) {
int rgb = rgbs[i];
int a = rgb >>> 24 & 0xff;
if (a == 0) {
values[i] = [Link];
continue;
}
int r = rgb >>> 16 & 0xff;
int g = rgb >>> 8 & 0xff;
int b = rgb >>> 0 & 0xff;

values[i] = (r + g + b) / 3;
}
} finally {
[Link]();

}
} finally {
[Link]();
}
}

private void checkBuffers(int width, int height) {


if (bufferedImage != null && [Link]() == width &&
[Link]() == height) return;

bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);


graphics = [Link]();
[Link](ColorUtils.TRANSPARANT_COLOR);
DataBufferInt dataBufferInt = (DataBufferInt) [Link]().getDataBuffer();
rgbs = [Link]();
values = new float[width * height];
}

public static String getWorldFileExt(String bitMapExt) {


if ([Link]("tif")) return "tfw";
if ([Link]("gif")) return "gfw";
if ([Link]("png")) return "pgw";
if ([Link]("bmp")) return "bpw";
if ([Link]("pcx")) return "pxw";
if ([Link]("jpg")) return "jgw";

return null;
}

private static String getJaiFormatId(String bitmapExt) {


if ([Link]("tif")) return "tiff";
if ([Link]("gif")) return "gif";
if ([Link]("bmp")) return "bmp";
if ([Link]("png")) return "png";
if ([Link]("jpg")) return "jpeg";

return null;
}

private RegularGridGeometry readWorldFile() throws Exception {


String worldFileExt = getWorldFileExt([Link](virtualFileName));
String worldFile = [Link](virtualFileName, worldFileExt);
if (![Link](worldFile))
throw new FileNotFoundException("Can not file world file " + worldFile);

String[] values = [Link]([Link](worldFile));


double cellWidth = [Link](values[0]);
double cellHeight = -[Link](values[3]);
double x = [Link](values[4]);
double y = [Link](values[5]);
GeoDatum geoDatum = [Link]();
GeoPoint bitMapUpperLeft = [Link](x, y, 0);

return [Link](geoDatum, bitMapUpperLeft, cellWidth, cellHeight,


[Link](), [Link]());
}

public static long getTime(String fileName, TimeZone timeZone) throws Exception {


Pattern p = [Link]("\\d{8}.\\d{4}");
Matcher matcher = [Link](fileName);
boolean successfull = [Link]();
if (!successfull)
throw new Exception("File name should contain date time yyyyMMdd_HHmm " + fileName);

String dateTimeText = [Link]();
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyyMMdd" + [Link](8) +
"HHmm");
[Link](timeZone);
try {
return [Link](dateTimeText).getTime();
} catch (ParseException e) {
throw new Exception("File name should contain valid date time yyyyMMdd_HHmm " +
fileName);
}
}
}

[Link]
{
private static final Logger log = [Link]([Link]);

private Geometry geometry = null;

private int nx1 = 0;


private int ny1 = 0;
private int nz1 = 0;
private String projection = null;
private int scale = 0;
private int trulat1 = 0;
private int trulat2 = 0;
private int trulon = 0;
private float nwLon = 0.0F;
private float nwLat = 0.0F;
private float xyScale = 0.0F;
private float dx1 = 0.0F;
private float dy2 = 0.0F;
private int iBbMode = 0;
private String varName = null;
private String varUnit = null;
private int varScale = 0;
private int imissing = 0;
private int nradars = 0;
private String[] radarNames = null;
private float dxyScale = 0.0F;

private int size = 0;

private long time = Long.MIN_VALUE;

private float[] values = FloatArrayUtils.EMPTY_ARRAY;


private byte[] byteBuffer = ByteArrayUtils.EMPTY_ARRAY;
private short[] shortBuffer = ShortArrayUtils.EMPTY_ARRAY;

private LittleEndianDataInputStream is = null;


private TimeSeriesContentHandler contentHandler = null;

@Override
public void parse(BufferedInputStream inputStream, String virtualFileName,
TimeSeriesContentHandler contentHandler) throws Exception {
[Link] = contentHandler;
[Link] = new LittleEndianDataInputStream(inputStream);
parseHeader();
if ([Link]()) return;

if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() * NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}

if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() * NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}

// read all the complete grid at once for optimal performance


[Link](byteBuffer);
[Link](byteBuffer, 0, size * NumberType.INT16_SIZE, shortBuffer, 0, size,
ByteOrder.LITTLE_ENDIAN);

if ([Link]() > 0) [Link]("Too many bytes available in " + virtualFileName);

for (int i = 0; i < [Link]; i++) {


short shortValue = shortBuffer[i];
values[i] = shortValue == imissing ? [Link] : shortValue / varScale;
}

// content handle expect the rows from top to bottom


[Link](values, [Link](), [Link]());
[Link](values);
[Link]();
}

@SuppressWarnings({"OverlyLongMethod"})
private void parseHeader() throws IOException {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();

int year = [Link]();


int month = [Link]();
int day = [Link]();
int hour = [Link]();
int minute = [Link]();
int second = [Link]();

Calendar gmtCalendar = new GregorianCalendar([Link]);


[Link](year, month - 1, day, hour, minute, second);
[Link]([Link]());

nx1 = [Link]();
ny1 = [Link]();
nz1 = [Link]();

projection = [Link](is, 4).trim();


scale = [Link]();
trulat1 = [Link]();
trulat2 = [Link]();
trulon = [Link]();
nwLon = (float) [Link]() / (float) scale;
nwLat = (float) [Link]() / (float) scale;
xyScale = [Link]();
int dx1int = [Link]();
int dy2int = [Link]();

dxyScale = (float) [Link]();

dx1 = (float) dx1int / dxyScale;
dy2 = (float) dy2int / dxyScale;
[Link](is, 4 * nz1);
iBbMode = [Link]();
[Link](is, 4 * 9);
[Link](is, 4);
varName = [Link](is, 20);

varUnit = [Link](is, 6);

varScale = [Link]();
imissing = [Link]();
nradars = [Link]();
if (nradars > 0) {
radarNames = new String[nradars];
for (int i = 0; i < nradars; i++) {
radarNames[i] = [Link](is, 4).trim();
}
}

size = nx1 * ny1 * nz1;

GeoPoint firstCellCenter = new Wgs1984Point(nwLat - dy2 / 2, nwLon + dx1 / 2);


geometry = [Link](GeoDatum.WGS_1984, firstCellCenter, dx1, dy2, ny1,
nx1);

[Link](geometry);
[Link](1.0f / varScale);
[Link](varName);
[Link](varUnit);
[Link](header);

debugHeader();
}

private void debugHeader() {


if (![Link]()) return;
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
[Link]([Link]);

List<String> header = new ArrayList<String>(10);


[Link]([Link](new Date(time)));
[Link]("nx1 = " + nx1);
[Link]("ny1 = " + ny1);
[Link]("nz1 = " + nz1);
[Link]("size = " + size);
[Link]("projection = " + projection);
[Link]("scale = " + scale);
[Link]("trulat1 = " + trulat1);
[Link]("trulat2 = " + trulat2);
[Link]("trulon = " + trulon);
[Link]("nw_lon = " + nwLon);
[Link]("nw_lat = " + nwLat);
[Link]("xy_scale = " + xyScale);
[Link]("dx1 = " + dx1);
[Link]("dy2 = " + dy2);
[Link]("i_bb_mode = " + iBbMode);
[Link]("varName = " + varName);
[Link]("varUnit = " + varUnit);
[Link]("var_scale = " + varScale);
[Link]("imissing = " + imissing);

[Link]("nradars = " + nradars);
if (radarNames != null) {
for (int i = 0; i < [Link]; i++) {
String radarName = radarNames[i];
[Link]("radarName = " + radarName);
}
}
[Link]("dxyScale = " + dxyScale);

[Link]([Link](header, '\n'));
}
}

[Link]
/**
* TimeSeriesContentHandler represents the classes
* to handle the time series data that are supplied by the time series parsers.
*/
public interface TimeSeriesContentHandler {

/**
* Defines time zone which should be used while importing time series.
* If time zone is defined in the file format - this TimeZone should not be used.
* @return
*/
TimeZone getDefaultTimeZone();

/**
* Adds a value that should be recognized as missing when calling {@link #setValue(float)},
{@link #setValue(char, String)} or {@link #setCoverageValues(float[])}
* {@link Float#NaN} is always recognized as missing
*/
void addMissingValue(float missingValue);

/**
* Adds an alphanumeric tag that should be recognized as missing when calling {@link
#setValue(char, String)}
* These alphanumeric missing values are not recognized when using {@link #setValue(float)} or
{@link #setCoverageValues(float[])}
* The missings added with {@link #addMissingValue(float)} and {@link
#addMissingValueRange(float, float)} are also recognized
* {@link Float#NaN} is always recognized as missing
*
*/
void addMissingValue(String missingValue);

/**
* Adds a range of values that should be recognized as missing when calling {@link
#setValue} or {@link #setCoverageValues(float[])}
* NaN, null and an empty string, string with only spaces are always recognized as missing
*/
void addMissingValueRange(float minMissingValue, float maxMissingValue);

/**
* Creating an alias allows high speed switching between different headers
* E.g. For files with multiple parameters per row, for every row multiple switches are
required between different headers
* This will not work properly without defining an alias for every column
* The alias is ultimately fast in the range from 0 to 1000
*
* @param alias, integer for ultimate speed, good practice is to use the parameter column
index.
* @param header

*/
void createTimeSeriesHeaderAlias(int alias, TimeSeriesHeader header);

/**
* Changes the header that will be used when calling {@link #applyCurrentFields()}
* A call to this method will not consume any significant time
* {@link #setTimeSeriesHeader(TimeSeriesHeader)} is relatively time consuming
* @see #createTimeSeriesHeaderAlias(int, TimeSeriesHeader)
* @param alias defined with {@link #createTimeSeriesHeaderAlias (int, TimeSeriesHeader)}
* @throws IllegalArgumentException when alias is not created before
*/
void setTimeSeriesHeader(int alias);

/**
* Same as {@link #setTimeSeriesHeader(int)} , but slightly SLOWER
* The second time this method is called for the SAME header,
* there is NO new time series created but the first one is re-selected
* This method is relatively time consuming.
* When parsing multiple parameters per row use {@link #setTimeSeriesHeader (int)}
*/
void setTimeSeriesHeader(TimeSeriesHeader header);

/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
* A NEW time series is created, with a new forecast time and new ensemble member index
* A warning is logged when this method is called twice for the same header (historical non-ensemble
time series)
*/
void setNewTimeSeriesHeader(TimeSeriesHeader header);

/**
* The parser should call this method when it starts parsing time/values and has any idea of
the period of the values that will come.
* This information is only used by this content handler for OPTIMISATION and
* is never required and never results in an error when the real period differs from the
estimated period
* @param period
*/
void setEstimatedPeriod(Period period);

/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
*
* @param time Represents the number of milliseconds since January 1, 1970, [Link] GMT
*/
void setTime(long time);

/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
*
* In addition to simple date format 24h will be recognized as 0h the next day
*
* @param timeZone. When not known use {@link #getDefaultTimeZone()}
* @param pattern see {@link SimpleDateFormat}, in addition for HH 24 will be recognized as
0:00 the next day
* @param dateTime leading and trailing spaces are ignored
*/
void setTime(TimeZone timeZone, String pattern, String dateTime);

/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
*
* In addition to simple date format 24h will be recognized as 0h the next day

*
* @param timeZone. When not known use {@link #getDefaultTimeZone()}
* @param datePattern see {@link SimpleDateFormat}
* @param date leading and trailing spaces are ignored
* @param timePattern see {@link SimpleDateFormat}, in addition for HH 24 will be
recognized as 0:00 the next day
* @param time leading and trailing spaces are ignored
*/
void setTime(TimeZone timeZone, String datePattern, String date, String timePattern, String
time);

/**
* Return false if any value for the selected time series with {@link #setTimeSeriesHeader}
is wanted
* When true parsing of ALL values for this time series can be skipped
*/
boolean isCurrentTimeSeriesHeaderForAllTimesRejected();

/**
* Return false if the value selected time {@link #setTimeSeriesHeader} and selected time
{@link #setTime(long)} is wanted
* When true parsing of the time and time series can be skipped
*/
boolean isCurrentTimeSeriesHeaderForCurrentTimeRejected();

/**
* Changes the flag that will be used for when calling {@link #applyCurrentFields()}
*/
void setFlag(int flag);

/**
* Changes the flag that will be used for when calling {@link #applyCurrentFields()}
*/
void setFlag(String flag);

/**
* Changes the sample id that will be used for when calling {@link #applyCurrentFields()}
*/
void setSampleId(String sampleId);

/**
* Changes the out of detection range that will be used when calling {@link
#applyCurrentFields()}
*/
void setOutOfDetectionRangeFlag(OutOfDetectionRangeFlag flag);

/**
* Changes the comment that will be used when calling {@link #applyCurrentFields()}
*/
void setComment(String comment);

/**
* When an overruling geometry is defined (in [Link]) this geometry will overrule the
geometry set with {@link #setGeometry(Geometry)}
* When there is an overruling geometry the content handler will log an error when the
number of rows and cols is not the same as set with {@link #setGeometry(Geometry)}
* @return
*/
Geometry getOverrulingGeometry();

/**
* Used by the parser to create a geometry when there is no geometry info available in the
file
*/

GeoDatum getDefaultGeoDatum();

/**
* Changes the geometry that will be used when calling {@link #applyCurrentFields()}
*
* When only the number of rows and cols are available use {@link
NonGeoReferencedGridGeometry#create(int, int)}
* @see #getDefaultGeoDatum()
*/
void setGeometry(Geometry geometry);

/**
* Changes the value resolution that will be used when calling {@link #applyCurrentFields()}
* Only to be used when the file format doesn't use IEEE floats to store the values
* e.g. when only integers are parsed the value resolution is 1.0
* e.g. when there are at maximum two decimals the value resolution is 0.01
* e.g. when the file format stores the values as integers and divides the integers by 5
afterwards the value resolution is 0.2
*
* @param valueResolution
*/
void setValueResolution(float valueResolution);

/**
* Changes the value that will be used when calling {@link #applyCurrentFields()}
*/
void setValue(float value);

/**
* Changes the value that will be used when calling {@link #applyCurrentFields()}
* When the value cannot be parsed an error will be logged, no exception is thrown
* Add missing value tags before calling this function {@link #addMissingValue(String)}
* e.g. addMissingValue('?')
*
* @param value leading and trailing spaces are ignored
*/
void setValue(char decimalSeparator, String value);

/**
* Puts the coverage values for the last set time and last set header with the last set
flag and last set geometry
* When {@link #getOverrulingGeometry()} returns not null there is no need to set the
geometry
* When the active geometry does not have the same number of rows and cols an error message
is logged.
* For performance reasons do not parse the values when {@link
#isCurrentTimeSeriesHeaderForCurrentTimeRejected()} returns true
* For performance reasons do not recreate the values array for every time step again
*/
void setCoverageValues(float[] values);

/**
* Saves the current fields for the current time series header and current time.
* The current fields are not cleared so it is only required
* to update the changed fields for the next call to {@link #applyCurrentFields()}
*/
void applyCurrentFields();
}
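As an illustration of the createTimeSeriesHeaderAlias / setTimeSeriesHeader(int) combination described above, the sketch below handles a row that contains one value per parameter column. The column layout and parameter ids are assumptions made for the example, and the file is assumed to contain data for a single location:

import nl.wldelft.util.timeseries.DefaultTimeSeriesHeader;  // package assumed, check your FEWS libraries
import nl.wldelft.util.timeseries.TimeSeriesContentHandler; // package assumed, check your FEWS libraries

// Illustrative sketch: header aliases for a file with several parameter columns per row.
public class MultiColumnRowSketch {

    private static final String[] PARAMETER_IDS = {"precipitation", "airTemperature", "windSpeed"};

    // called once per file, before the rows are parsed
    static void createAliases(String locationId, TimeSeriesContentHandler contentHandler) {
        for (int i = 0; i < PARAMETER_IDS.length; i++) {
            DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
            header.setLocationId(locationId);
            header.setParameterId(PARAMETER_IDS[i]);
            contentHandler.createTimeSeriesHeaderAlias(i, header); // alias = parameter column index
        }
    }

    // called for every row of the form "yyyyMMdd HH:mm;value1;value2;value3"
    static void parseRow(String line, TimeSeriesContentHandler contentHandler) {
        String[] cols = line.split(";");
        contentHandler.setTime(contentHandler.getDefaultTimeZone(), "yyyyMMdd HH:mm", cols[0]);
        for (int i = 0; i < PARAMETER_IDS.length; i++) {
            contentHandler.setTimeSeriesHeader(i); // fast switch between the aliased headers
            if (contentHandler.isCurrentTimeSeriesHeaderForCurrentTimeRejected()) continue;
            contentHandler.setValue('.', cols[i + 1]);
            contentHandler.applyCurrentFields();
        }
    }
}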

Import data using OPeNDAP


Function:         Import data from an OPeNDAP server directly into Delft-FEWS
Where to use?     This can be used for importing NetCDF data into the Delft-FEWS system.
Why to use?       The advantage of importing NetCDF data directly from an OPeNDAP server, as opposed to importing local NetCDF files, is that the files do not have to be stored locally. Furthermore, if only part of a file is needed, then only that part will be downloaded instead of the entire file. This can save a lot of network bandwidth (i.e. time) for large data files.
Preconditions:    The data to import needs to be available on an OPeNDAP server that is accessible by the Delft-FEWS system.
Outcome(s):       The imported data will be stored in the Delft-FEWS dataStore.
Available since:  Delft-FEWS version 2011.02

Contents

Overview
How to import data from an OPeNDAP server
Import configuration
Id map configuration
Import data from a single file
Import data from a catalog
Import data for a given variable
Import data for a given period of time
Import data for a given subgrid
Import data from a password protected server
Import data from a server that uses SSL
Known issues
Export of data
Related modules and documentation
Internal
External

Overview

OPeNDAP (Open-source Project for a Network Data Access Protocol) can be used to import NetCDF data from an OPeNDAP server directly into
Delft-FEWS. For more information on OPeNDAP see [Link] Currently only the import of NetCDF files from an OPeNDAP server is
supported. Three types of NetCDF data can be imported: grid time series, scalar time series and profile time series. For more information on
these specific import types see their individual pages: NETCDF-CF_GRID, NETCDF-CF_TIMESERIES and NETCDF-CF_PROFILE.

How to import data from an OPeNDAP server

Import configuration

Data can be imported into Delft-FEWS directly from an OPeNDAP server. This can be done using the Import Module. The following import types
currently support import using OPeNDAP:

import type           usage
NETCDF-CF_GRID        Use this for importing grid time series that are stored in NetCDF format
NETCDF-CF_TIMESERIES  Use this for importing scalar time series that are stored in NetCDF format
NETCDF-CF_PROFILE     Use this for importing profile time series that are stored in NetCDF format

To instruct the import to use OPeNDAP instead of importing local files, specify a server URL instead of a local import folder. Below is an example
import configuration with a serverUrl element.

<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>

Here the serverURL is the URL of a file on an OPeNDAP server. For details on specifying the URL see Import data from a single file or Import
data from a catalog below. The time series set(s) define what data should be imported into Delft-FEWS. Only data for the configured time series
sets is downloaded and imported, all other data in the import file(s) is ignored. For more details see Import Module configuration options.

Id map configuration

The import also needs an id map configuration file, that contains a mapping between the time series sets in the import configuration and the
variables in the file(s) to import. Below is an example id map configuration.

The external parameter id is case sensitive.

<idMap xmlns:xsi="[Link] xmlns="[Link]


xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter internal="[Link]" external="sst"/>
<location internal="gridLocation1" external="unknown"/>
</idMap>

Import data from a single file

To import data from a single file on an OPeNDAP server, the correct URL needs to be configured in the serverUrl element. To get the
correct URL for a single file:
1. Use a browser to browse to a data file on an OPeNDAP server, e.g.
[Link]
2. Copy the URL that is listed on the page after the keyword "Data URL:", e.g.
[Link]
3. Paste this URL in the serverUrl element in the import configuration file.

Import data from a catalog

Instead of specifying the URL of a single file on an OPeNDAP server, it is also possible to specify the URL of a catalog. The files on an
OPeNDAP server are usually grouped in folders and for each folder there is a catalog file available. The catalog usually contains a list of files and
subfolders, but can also refer to other catalog files. If the URL of a catalog file is specified for the import, then all files that are listed in the catalog
will be imported. Other catalogs that are listed in the specified catalog are also imported recursively.

A catalog file is usually called [Link]. The URL of a catalog file can be obtained in the following way.

For a THREDDS OPeNDAP server: First browse to a folder on the server. Then copy the current URL from the address line and replace ".html" at the
end of the URL by ".xml".

For a HYRAX OPeNDAP server: First browse to a folder on the server. Then click on the link "THREDDS Catalog XML" at the bottom of the page.
Then copy the current URL from the address line.

For example to import data from the folder [Link] use the catalog URL
[Link] in the import configuration. For example:

<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>


Import data for a given variable

An import file (local or on an OPeNDAP server) can contain multiple variables. For each time series set in the import configuration the import uses
the external parameter id from the id map configuration to search for the corresponding variable(s) in the file(s) to import. If a corresponding
variable is found, then the data from that variable is imported. Only data for the found variables is downloaded and imported, all other data in the
import file(s) is ignored.
For NetCDF files the external parameter id is by default matched to the names of the variables in the NetCDF file to find the required variable to
import. There also is an option to use the standard_name attribute or long_name attribute of a variable in the NetCDF file as external parameter
id. To use this option add the variable_identification_method property to the import configuration, just above the time series set(s). For
example:

<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<properties>
<string value="long_name" key="variable_identification_method"/>
</properties>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>


The variable_identification_method property can have the following values:

variable_identification_method   behaviour
standard_name                    All external parameter ids are matched to the standard_name attributes of the variables in the NetCDF file to find the required variable(s) to import.
long_name                        All external parameter ids are matched to the long_name attributes of the variables in the NetCDF file to find the required variable(s) to import.
variable_name                    All external parameter ids are matched to the names of the variables in the NetCDF file to find the required variable(s) to import.

If the variable_identification_method property is not present, then variable_name is used by default. The variable_identification_method
property currently only works for the import types NETCDF-CF_GRID, NETCDF-CF_TIMESERIES and NETCDF-CF_PROFILE.

Currently it is not possible to import data from the same variable in the import file to multiple time series sets in Delft-FEWS. If
required, this can be done using a separate import for each time series set.

Import data for a given period of time

To import only data for a given period of time, specify either a relative period or an absolute period in the general section of the import
configuration file. See relativeViewPeriod, startDateTime and endDateTime for more information. The import will first search the metadata of each
file that needs to be imported from the OPeNDAP server. Then for each file that contains data within the specified period, only the data within the
specified period will be imported. The start and end of the period are both inclusive.
This can be used to import only the relevant data if only data for a given period is needed, which can save a lot of time. However, for this to work
the import still needs to search through all the metadata of the file(s) to be imported. So for large catalogs that contain a lot of files, it can still take
a lot of time for the import to download all the required metadata from the OPeNDAP server.

Example: to import only data within the period from 2007-07-01 [Link] to 2008-01-01 [Link], add the following lines to the import
configuration:

<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>



Alternatively you can use the relativeViewPeriod element to set a period to import relative to T0. If you do this you can use the manual
forecast dialog to set the period to import data from, using the Cold/Warm state selection options.

Import data for a given subgrid

Importing data for a subgrid currently only works for regular grids.

This section only applies to the import of grid data. For data with a regular grid that is imported from a NetCDF file, it is in most cases not required
to have a grid definition in the [Link] configuration file, because for regular grids the import reads the grid definition from the NetCDF file and
stores it directly in the datastore of Delft-FEWS. If for the imported data there is no grid definition present in the [Link]
configuration file, then data for the entire grid is imported.
To import data for only part of the original grid, it is required to specify a grid definition in the [Link] configuration file. The grid definition defines
the part of the grid that needs to be imported; in other words, it defines a subgrid of the original grid. In this case only data for the
configured subgrid is downloaded and imported; the data for the rest of the original grid is ignored. The following restrictions apply:

The subgrid must be fully contained within the original grid.


The subgrid must have the same geodatum as the original grid.
The cellwidth of the subgrid must be the same as the cellwidth of the original grid within a margin of 10 percent.
The cellheight of the subgrid must be the same as the cellheight of the original grid within a margin of 10 percent.
All cell centers in the subgrid must coincide with cell centers in the original grid within a certain margin.

For example to import data for a sub grid from the URL [Link] use e.g. the following
grid definition in the [Link] file. In this example a subgrid of 5x5 cells is imported, where the cell center longitude coordinates range from 0 to 8
degrees and the cell center latitude coordinates range from 50 to 58 degrees.

<rows>5</rows>
<columns>5</columns>
<geoDatum>WGS 1984</geoDatum>
<firstCellCenter>
<x>0</x>
<y>58</y>
</firstCellCenter>
<xCellSize>2</xCellSize>
<yCellSize>2</yCellSize>


For more information about the configuration of grid definitions in Delft-FEWS see Grids.

Import data from a password protected server

For importing data from a password protected OPeNDAP server, it is required to configure a valid username and password for accessing the
server. This can be done by adding the user and password elements (see Import Module configuration options#user) to the import configuration,
just after the serverUrl element.

This currently only works for importing a single file; it does not work when using a catalog.

Example of an import configuration with user and password elements:

<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<user>kermit</user>
<password>gr33n</password>
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>


Import data from a server that uses SSL

For importing data from an OPeNDAP server that communicates using SSL, the certificate of the server has to be either validated by a known
certificate authority or present and trusted in the local certificate store. To add a certificate to the local Delft-FEWS certificate store, first export the
certificate file from the server using a browser, then import the certificate file into the certificate store using e.g. the following command on the
command line:

G:\java\jre6\bin\[Link] -keystore G:\FEWS\[Link] -import -alias aliasName -file fileName -trustcacerts

where fileName is the pathname of the certificate file, aliasName is the alias to use for the certificate, G:\java\jre6\bin\[Link] is the pathname
of the Java [Link] file (depends on your Java installation) and G:\FEWS\[Link] is the pathname of the keystore file in the
Delft-FEWS region home directory (depends on your Delft-FEWS installation). The keystore file in the Delft-FEWS region home directory is
automatically read each time when Delft-FEWS starts.

To export the certificate of a server using Firefox:


1. Browse to the server URL.
2. Left click on the certificate icon.
3. Choose More Information -> Show Certificate -> Details -> Export
4. Follow the on screen instructions.

To export the certificate of a server using Internet Explorer:


1. Browse to the server URL.
2. Left click on the lock icon.
3. Choose View Certificates -> Details -> Copy to File
4. Follow the on screen instructions.

Known issues

Export of data

It is not possible to export data directly using the OPeNDAP protocol, since the OPeNDAP protocol only supports reading data from the
server. If it is required to export data from Delft-FEWS and make it available on an OPeNDAP server, then this can be done in two steps:
1. setup a separate OPeNDAP server that points to a given storage location. For instance a THREDDS server, which is relatively
easy to install. The OPeNDAP server picks up any (NetCDF) files that are stored in the storage location and makes these
available for download using OPeNDAP.
2. export the data to a NetCDF file using a Delft-FEWS export run. Export of grid time series, scalar time series and profile time
series is supported (respectively export types NETCDF-CF_GRID, NETCDF-CF_TIMESERIES and NETCDF-CF_PROFILE).
Set the output folder for the export run to the given storage location. That way the exported data will automatically be picked up
by the OPeNDAP server; a minimal sketch of such an export configuration is given after this list.
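A minimal sketch of the general section for such an export run, assuming a hypothetical storage location /data/thredds that the OPeNDAP server scans (compare the full NETCDF-CF_GRID example later in this document):

<export>
	<general>
		<exportType>NETCDF-CF_GRID</exportType>
		<!-- /data/thredds is a hypothetical storage location scanned by the OPeNDAP server -->
		<folder>/data/thredds</folder>
		<exportFileName>
			<name>.nc</name>
			<prefix>
				<timeZeroFormattingString>yyyyMMddHHmm</timeZeroFormattingString>
			</prefix>
		</exportFileName>
	</general>
	...
</export>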

Related modules and documentation

Internal

Import Module
Import Module configuration options
Available data types
NETCDF-CF_GRID
NETCDF-CF_TIMESERIES
NETCDF-CF_PROFILE

External

[Link]
OPeNDAP
THREDDS

Import Module configuration options


What [Link]

Description Configuration for import module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="TimeSeriesImportRun">


<description>Import module to import timeseries from the various grid-formats ie GRIB format</description>
<className>[Link]</className>
</moduleDescriptor>

Time Series Import Module


import
general
description
importType
folder , server Url or jdbc connection
failedFolder
user
password
relativeViewPeriod
startDateTime
endDateTime
idMapId
unitConversionsId
flagConversionsId
missingValue
importTimeZone

importTimeZone:timeZoneOffset
importTimeZone:timeZoneName
gridStartPoint
dataFeedId
tolerance
startTimeShift
startTimeShift:locationId
startTimeShift:parameterId
properties
timeSeriesSet
externUnit
gridRecordTimeIgnore
Example: Import of Meteosat images as time-series
EA Import module

Time Series Import Module

The time series import class can be applied to import data from a variety of external formats. The formats are included in an enumeration of
supported import types. Each of these enumerations is used for a specifically formatted file.

Figure 62 Elements of the TimeSeriesImport configuration

import

Root element for the definition of an import run task. Each task defined will import data in a specified format from a specified directory. For
defining multiple formats, different import tasks reading from different directories must be defined.

general

Root element for general definitions used in the import runs.

description

Optional description for the import run. Used for reference purposes only.

importType

Specification of the format of the data to be imported. The enumeration of options includes;

MSW : Import of data provided by the MSW System (Rijkswaterstaat, the Netherlands).
KNMI : Import of synoptic data from KNMI (Dutch Meteorological Service).
WISKI : Import of time series data from the WISKI Database system (Kisters AG).
DWD-GME : Import of NWP data of the DWD Global Modell, (German Meteorological Service). This is a grid data format.
DWD-LM : Import of NWP data of the DWD Lokal Modell, (German Meteorological Service). This is a grid data format.
GRIB : Import of the GRIB data format. General format for exchange of meteorological data.
EVN: Import of data in the EVN format (Austrian Telemetry)
METEOSAT: Import of images from the Meteosat satellite

folder , server Url or jdbc connection

Location to import data from. This may be a UNC path (i.e. located on the network), an sftp or http URL, or a database connection.

JDBC example:

<general>
<importTypeStandard>database</importTypeStandard>
<jdbcDriverClass>[Link]</jdbcDriverClass>
<jdbcConnectionString>jdbc:mysql://[Link]/cwb_ac</jdbcConnectionString>
<user>sobek</user>
<password>Tohek>cwa</password>
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="1"
unit="day"/>
<table name="qpe_sums_obs">
<dateTimeColumn name="rehdate"/>
<valueColumn name="rad_gz" unit="mm/hr" locationId="Qesums" parameterId="[Link]"
parser="Mosaic"/>
</table>
<table name="qpe_sums_fo">
<forecastDateTimeColumn name="createdate"/>
<dateTimeColumn name="raddate"/>
<valueColumn name="rad_gz" unit="mm/hr" locationId="Qpesums" parameterId="[Link]"
parser="Mosaic"/>
</table>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>QPE_Sum</dataFeedId>
</general>

sftp example:

<general>
<importType>TypicalAsciiForecast</importType>
<folder>s[Link]
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="3"
unit="day"/>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Forecast</dataFeedId>
</general>

http example:

<general>
<importType>RemoteServer</importType>
<serverUrl>[Link]
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="0"
unit="day"/>
<idMapId>IdImportRO</idMapId>
<importTimeZone>
<timeZoneOffset>+10:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>RO</dataFeedId>
</general>

failedFolder

Folder to move badly formatted files to. This may be a UNC path (i.e. located on the network).

user

User name, required when importing from protected database connections or protected servers.

password

Password, required when importing from protected database connections or protected servers.

relativeViewPeriod

The relative period for which data should be imported. This period is relative to the time 0 of the run. When the start and end time are overrulable
the user can specify the download length with the cold state time and forecast length in the manual forecast dialog. It is also possible to import
data for an absolute period of time using the startDateTime and endDateTime elements.
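For example (the period lengths are only illustrative), a window from one day before to three days after T0 that the operator may overrule from the manual forecast dialog:

<!-- illustrative lengths; overrulable start/end allow the operator to adjust them per run -->
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="3" unit="day"/>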

startDateTime

Start date and time of the (absolute) period for which data should be imported. Start is inclusive. This dateTime is in the configured
importTimeZone. It is also possible to import data for a relative period of time using the relativeViewPeriod element.

endDateTime

End date and time of the (absolute) period for which data should be imported. End is inclusive. This dateTime is in the configured
importTimeZone. It is also possible to import data for a relative period of time using the relativeViewPeriod element.

idMapId

ID of the IdMap used to convert external parameterId's and locationId's to internal parameter and location Id's. Each of the formats specified will
have a unique method of identifying the id in the external format. See section on configuration for Mapping Id's units and flags.

unitConversionsId

ID of the UnitConversions used to convert external units to internal units. Each of the formats specified will have a unique method of identifying
the unit in the external format. See section on configuration for Mapping Id's units and flags.

flagConversionsId

ID of the FlagConversions used to convert external data quality flags to internal data quality flags. Each of the formats specified will have a unique
method of identifying the flag in the external format. See section on configuration for Mapping Id's units and flags.

missingValue

Optional specification of missing value identifier in external data format.

importTimeZone

Time zone the external data is provided in if this is not specified in the data format itself. This may be specified as a timeZoneOffset, or as a
specific timeZoneName.

importTimeZone:timeZoneOffset

The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset
(e.g. +01:00).

importTimeZone:timeZoneName

Enumeration of supported time zones. See appendix B for list of supported time zones.
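For example, to interpret all imported times as one hour ahead of UTC, or alternatively to refer to a time zone by name:

<importTimeZone>
	<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>

or

<importTimeZone>
	<timeZoneName>GMT</timeZoneName>
</importTimeZone>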

gridStartPoint

Identification of the cell considered as the first cell of the grid. This may be in the upper left corner or in the lower left corner. Enumeration of
options include;

NW : for upper left


SW : for lower left
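A minimal sketch of an entry, assuming the value is given as the element text:

<!-- assumption: the enumeration value is the element text -->
<gridStartPoint>NW</gridStartPoint>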

dataFeedId

Optional id for the data feed. If not provided then the folder name will be used. This is used in the SystemMonitorDisplay in the import status tab.

tolerance

Definition of the tolerance for importing time values to cardinal time steps in the series to be imported to. Tolerance is defined per
location/parameter combination. Multiple entries may exist.

Attributes;

locationId : Id of the location the tolerance is to be considered for.
parameterId : Id of the parameter the tolerance is to be considered for.
timeUnit : Specification of the time unit the tolerance is defined in (enumeration).
unitCount : integer number of units defined for the tolerance.
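A sketch of a single entry using the attributes listed above; the location id, parameter id and five-minute tolerance are purely illustrative:

<!-- illustrative ids and tolerance: accept values up to 5 minutes off a cardinal time step -->
<tolerance locationId="H-2001" parameterId="H.obs" timeUnit="minute" unitCount="5"/>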

startTimeShift

Specification of a shift to apply to the start time of a data series to be imported as external forecasting. This is required when the time value of the
first data point is not the same as the start time of the forecast. This may be the case with, for example, external precipitation values, where the first
value given is the accumulated precipitation for the first time step. The start time of the forecast is then one time unit earlier than the first data
point in the series. Multiple entries may exist.

startTimeShift:locationId

Id of the location to apply the startTimeShift to.

startTimeShift:parameterId

Id of the parameter to apply the startTimeShift to.
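A minimal sketch of one entry, assuming locationId and parameterId are child elements of startTimeShift; the ids are purely illustrative:

<startTimeShift>
	<!-- hypothetical ids; assumed child-element form -->
	<locationId>P-1001</locationId>
	<parameterId>P.fcst</parameterId>
</startTimeShift>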

properties

Available since Delft-FEWS version 2010.02. These properties are passed to the time series parser that is used for this import. Some (external
third party) parsers need these additional properties. See documentation of the (external third party) parser you are using.

timeSeriesSet

TimeSeriesSet to import the data to. Multiple time series sets may be defined, and each may include either a (list of) locationId's or a
locationSetId. Data imported is first read from the source data file in the format specified. An attempt is then made to map the locationId's and the
parameterId's as specified in the IdMap's to one of the locations/parameters defined in the import time series sets. If a valid match is found, then
the time values are mapped to those in the timeSeriesSet, taking into account the tolerance for time values. A new entry is made in the timeSeries
for each valid match made.

For non-equidistant time series the time values imported will be taken as is. For equidistant time series values are only returned on the cardinal
time steps. For cardinal time steps where no value is available, no data is returned.

externUnit

For some data formats an external unit is not defined in the file to be imported. This element allows the unit to be specified explicitly. This unit is
then used in possible unit conversions.

Attributes;

parameterId: Id of the parameter for which a unit is specified. This is the internal parameter Id.
unit: specification of unit. This unit must be available in the UnitConversions specified in the unitConversionsId element.
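A sketch of one entry using the two attributes listed above; the parameter id and unit are only illustrative:

<!-- illustrative internal parameter id and external unit -->
<externUnit parameterId="P.obs" unit="mm"/>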

gridRecordTimeIgnore

Boolean flag to specify if the start of forecast is read from the GRIB file or if it is inferred from the data imported. In some GRIB files a start of
forecast is specified, but the definition of this may differ from that used in DELFT-FEWS.
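A minimal sketch, assuming the flag is given as the element text:

<!-- assumption: boolean value as element text -->
<gridRecordTimeIgnore>true</gridRecordTimeIgnore>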

When importing grid data from file formats where the attributes of the grid are not specified in the file being imported (i.e. the file is
not self-describing), a definition of the grid should be included in the Grids configuration (see Regional Configuration).

It is also advisable to define the grid attributes for self-describing grids such as those imported from GRIB files. If no GRIB data
is available, then DELFT-FEWS will require a specification of the grid to allow a Missing values grid to be created.

Example: Import of Meteosat images as time-series

Meteosat images are generally imported as images in [filename].png format. The Meteosat images constitute a time series of PNG images that
are geo-referenced by means of a specific world file. Each image needs its own world file, which in the case of PNG carries the extension
[filename].pgw.
Import of images in another format, such as JPEG, is also possible. The corresponding world file for a JPEG file has the extension [filename].jgw.
The images are imported via a common time series import, for which a specific image parameter needs to be specified in a parameterGroup via
the parameter id image.

_<parameterGroup id="image">_
_<parameterType\]instantaneous\[/parameterType>_
_<unit>-</unit>_
_<valueResolution>8</valueResolution>_
_<parameter id="image">_
_<shortName>image</shortName>_
_</parameter>_
_</parameterGroup>_

The value resolution indicates the resolution of the values of the pixels (grey tones) in the Meteosat images. In this case 8 grey tones are
resampled into a single grey tone for storage space reduction. In the module for the time series import run for a Meteosat image the import is
then configured as follows:

<import>
<general>
<importType>GrayScaleImage</importType>
<folder>$REGIONHOME$/Import/MeteoSat</folder>
<idMapId>IdImportMeteosat</idMapId>
</general>

<timeSeriesSet>
<moduleInstanceId>ImportMeteosat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>image</parameterId>
<locationId>meteosat</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>4</synchLevel>
<expiryTime unit="day" multiplier="750"/>
</timeSeriesSet>
</import>

The georeferenced image can then be displayed in the grid display.

EA Import module

A specific import class is available for importing time series data from the XML format specified by the UK Environment Agency. The configuration
items required are a sub-set of those required in the more generic time series import format. This is due to much of the required information being
available in the XML file itself (i.e. the file is self-describing).

Figure 63 Elements of the EAImport configuration.

04 Export modules

Introduction
The export module allows (observed and forecast) data from DELFT-FEWS to be exported for use in external systems. On exporting data, the
approach to be used for converting flags, units, locations and parameters can be defined. These conversions are identified by referring to the
appropriate configuration files (see Regional Configuration).

Available export modules


EA XML Export Module — Exports data to EA TimeseriesExchangeFormat
Export module — Exports data to several file formats
Export module, available data types — Description and examples of file formats
Rdbms Export — Exports historical time series data to RDBMS
Report Export — Retrieves reports generated by forecasting runs

EA XML Export Module


What [Link]

Required no

Description Export data (timeseries) from Delft-Fews to EA XML Format

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="ExportRun">
<description>Export module to export EATimeseriesDataExchangeFormat compliant files</description>
<className>[Link]</className>
</moduleDescriptor>

EA Export Module Configuration

Files exported are written to the path specified. The filename is constructed as a time string (in milliseconds); an optional prefix can be applied to
the time stamp string.

When available as configuration on the file system, the name of the XML file for configuring an instance of the export module called for example
ExportForecast may be:

ExportForecast 1.00 [Link]

ExportForecast File name for the ExportForecast configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 64 Elements of the exportRun configuration.

folder

Folder to export data to. This may be a UNC path (i.e. located on the network).

idMapId

ID of the IdMap used to convert internal parameterId's and locationId's to external parameter and location Id's. See section on configuration for
Mapping Id's units and flags.

unitConversionsId

ID of the UnitConversions used to convert internal units to external units. See section on configuration for Mapping Id's units and flags.

flagConversionsId

ID of the FlagConversions used to convert internal data quality flags to external data quality flags. See section on configuration for Mapping Id's
units and flags.

exportFilePrefix

Optional string to use as prefix in the export file name.

exportMissingValue

Optional specification of missing value identifier in external data format.

temporaryFilePrefix

Optional prefix to the file name when writing the file. This can be used by systems reading the file to identify if the file is being written, thus
avoiding concurrent reading/writing of a file. If not defined the prefix "tmp" is used. On completion of the file, an atomic replace of the filename is
done.

exportTimeZone

Time zone the external data is exported to. This may be specified as a timeZoneOffset, or as a specific timeZoneName.

timeZoneOffset

The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset
(e.g. +01:00).

timeZoneName

Enumeration of supported time zones. See appendix B for list of supported time zones.

timeSeriesSet

TimeSeriesSets defining the data to be exported. Multiple time series sets may be defined, and each may include either a (list of) locationId's or a
locationSetId.

GRDC Export Format

Overview

The GrdcTimeSeriesSerializer class can export any number of timeSeriesSet's but the following restrictions apply due to the nature of the GRDC
Near Real-Time Data Format Version 3.0:

for each locationId it expects exactly one timeSeriesSet with parameterId='Water Level' and exactly one timeSeriesSet with
parameterId='Discharge'. When not configured properly, an exception will be thrown.
the GRDC format enforces a specific file naming convention. This should be configured properly. When this convention is violated, a
warning is given, but no exception is thrown.

The following flags are set statically, because they are not stored in FEWS:

'is value directly determined' is always false
aggregation interval and offset are always 0
'is ice cover' is always false
'is ice jam' is always false
'is weedage' is always false
'is influenced by backwater' is always false

Configuration (Example)

A complete export module configuration consists of an ID Mapping file and a Export Module Instance file.

ModuleConfigFiles

[Link]
<timeSeriesExportRun ...>
<export>
<general>
<exportType>grdc</exportType>
<folder>$EXPORT_EFAS_FOLDER$</folder>
<exportFileName>
<name>-[Link]</name>
<prefix>
<currentTimeFormattingString>'NL-1008-'yyyyMMddHHmmss</
currentTimeFormattingString>
</prefix>
</exportFileName>
<validate>false</validate>
<idMapId>IdExportEFAS</idMapId>
<exportTimeZone>
<timeZoneName>GMT</timeZoneName>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMSW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-MS-BORD</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-192" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</export>
</timeSeriesExportRun>

IdMapFiles

Defines mappings between FEWS parameters and locations and the expected GRDC locations and parameters.

[Link]
<idMap xmlns=".......">
<!---->
<parameter internal="Q.m" external="Discharge"/>
<parameter internal="H.m" external="Water Level"/>
<!---->
<location internal="H-MS-BORD" external="BORGHAREN"/>
<location internal="H-RN-0001" external="LOBITH"/>
</idMap>

Export module
What [Link]

Required no

Description Export data (timeseries) from Delft-Fews to several file formats

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="TimeSeriesExportRun">
<description>Export module to export timeseries to various formats</description>
<className>[Link]</className>
</moduleDescriptor>

Configuration
General
description
exportTypeStandard
exportType
folder
exportFileName
validate
idmapId
unitConversionsId
flagConversionsId
exportMissingValue/exportMissingValueString
omitMissingValues
exportTimeZone
convertDatum
metadata
timeseriesSet

Configuration

The export module can export timeseries for use in other systems. The configuration of the module is split into three sections:

General: Specify file name, data type etc...


metadata: Export specific settings
timeseriesSets: actual data to export

In the sections below the different elements of the configuration are described

General

description

An optional description

exportTypeStandard

This type specifies which serializer should be used to write the file. The type must be one from the enumeration. Presently (2007/02) only bfg and pi
are included in this list.

exportType

This type specifies which serializer should be used to write the file. It may be any string as long as this type is supported by the TimeSeriesExport
module. The list of supported types is given below.

folder

Folder (directory) in which to store the exported files.

exportFileName

This element describes how to construct the filename(s) of the exported file(s).

If only the name element is given, a fixed name is used for each export. The prefix and suffix elements describe how to create a filename prefix
and/or suffix. The temporaryPrefix is used to generate a prefix for the temporary file as it is being written. After that the file is renamed.
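For example (a minimal sketch; the fixed name part is only illustrative), a file name built from the time zero followed by a fixed suffix, as also used in the GIN and NetCDF examples further on:

<exportFileName>
	<!-- fixed name part is illustrative -->
	<name>.xml</name>
	<prefix>
		<timeZeroFormattingString>yyyyMMddHHmm</timeZeroFormattingString>
	</prefix>
</exportFileName>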

validate

Optional element. Only applicable if the data are exported to the xml-file. This option activates the validation of the exported file against a XML
schema.

idmapId

Id of IdMap to be used for parameterId and locationId mapping

unitConversionsId

Id of UnitConversions to be used for unit mapping

flagConversionsId

Id of flagConversions to be used for flag mapping

exportMissingValue/exportMissingValueString

Missing value definition for this time series. Either a string or a number. Defaults to NaN if not defined.

omitMissingValues

If set to true, records with missing values are not exported.

exportTimeZone

TimeZone in which to export the data. Can either be a string (timeZoneName) or an offset (timeZoneOffset).
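For example, to export all times in GMT (compare the GRDC example earlier in this section):

<exportTimeZone>
	<timeZoneName>GMT</timeZoneName>
</exportTimeZone>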

convertDatum

Convert datum to local datum during import. The conversion will be done for all parameters which use datum (as configured in [Link]).
The local datum is defined in the z element in the [Link] file.

metadata

TO BE COMPLETED

timeseriesSet

Define the timeSeriesSet(s) to be exported. Please note that not all exports support all time series types (e.g. CSV only supports the scalar type).

Export module, available data types

Available Exports

Please note that new types are added regularly. Most of the exports are custom made for specific file formats. The preferred
format for new scalar exports is the Delft-FEWS Published Interface format (PI).

Type String Description Data Type

BfG BfG ASCII format scalar

CSV Standard CSV format scalar

DINO Dino Tuf format scalar

FLIWAS Fliwas ASCII format profile

GIN Export Gemeinsame Informationsplattform Naturgefahren XML format scalar

GRDC Global Runoff Data Centre ASCII format scalar

iBever iBever CSV format scalar

Menyanthes Menyanthes file format scalar

NetCDF Alert NetCDF alert files for MapD project scalar

NetCDF MapD NetCDF files for MapD project scalar

NETCDF-CF_GRID_MATROOS NetCDF export of grid data for Matroos grid

NETCDF-CF_GRID NetCDF export of grid data grid

NETCDF-CF_PROFILE_MATROOS NetCDF export of profile data for Matroos profile

NETCDF-CF_PROFILE NetCDF export of profile data profile

NETCDF-CF_TIMESERIES_MATROOS NetCDF export of timeseries data for Matroos scalar

NETCDF-CF_TIMESERIES NetCDF export of timeseries data scalar

PI FEWS Published Interface Format scalar

RAM Rhine alarm model ASCII format scalar

SHEF SHEF ASCII format scalar

TSD TSD ASCII Format scalar

UM Aquo Uitwissel Model Aquo (Dutch exchange XML file format) scalar

BfG Export

Introduction

Export scalar timeseries to bfg type format (example config).

Example

No example present

CSV Export

Introduction

Export scalar timeseries to csv type format (example config). The resulting csv file has three header rows. The first row contains the location
name for each data column, the second row the location Id for each data column, the third row the parameter. Date/time is in yyyy-mm-dd
hh:mm:ss format.

Example

Location Name:,Bewdley,Saxons Lode


Location Id:,EA_H-2001,EA_H-2032
Time,Rainfall,Rainfall
2003-03-01 [Link],-999,-999
2003-03-01 [Link],1.000,1.000
2003-03-01 [Link],2.000,2.000
2003-03-01 [Link],3.000,3.000
2003-03-01 [Link],4.000,4.000
2003-03-01 [Link],-999,5.000
2003-03-01 [Link],6.000,6.000
2003-03-01 [Link],7.000,7.000
2003-03-01 [Link],8.000,8.000
2003-03-01 [Link],9.000,9.000
2003-03-01 [Link],10.000,10.000
2003-03-01 [Link],11.000,11.000
2003-03-01 [Link],12.000,12.000
2003-03-01 [Link],13.000,13.000
2003-03-01 [Link],14.000,14.000
2003-03-01 [Link],15.000,15.000
2003-03-01 [Link],16.000,16.000
2003-03-01 [Link],17.000,17.000
2003-03-01 [Link],18.000,18.000
2003-03-01 [Link],19.000,19.000
2003-03-01 [Link],20.000,20.000

Java source code

[Link]
[Link]

[Link]

{
private CsvTimeSeriesSerializer serializer = new CsvTimeSeriesSerializer();

@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
[Link](',');
[Link](';');
[Link](content, writer, virtualFileName);
}
}

[Link]

{
private char decimalSeparator = '.';
private char columnSeparator = ',';

@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
if ([Link]([Link]())) [Link](-999f);
[Link](true);

String[] locationHeader = new String[[Link]() + 1];


String[] parameterHeader = new String[[Link]() + 1];
locationHeader[0] = "";
parameterHeader[0] = "";

for (int i = 0, n = [Link](); i < n; i++) {


[Link](i);
TimeSeriesHeader timeSeriesHeader = [Link]();
locationHeader[i + 1] = [Link]();
parameterHeader[i + 1] = [Link]();
}

[Link](locationHeader, columnSeparator);
[Link](parameterHeader, columnSeparator);

String[] line = new String[[Link]() + 1];


for (int i = 0, n = [Link](); i < n; i++) {
[Link](i);
line[0] = [Link]([Link](), "yyyy-MM-dd HH:mm:ss");
for (int j = 0, m = [Link](); j < m; j++) {
[Link](j);
line[j + 1] = [Link](decimalSeparator);
}
[Link](line, columnSeparator);
}
}

public char getDecimalSeparator() {


return decimalSeparator;
}

public void setDecimalSeparator(char decimalSeparator) {


[Link] = decimalSeparator;
}

public char getColumnSeparator() {


return columnSeparator;
}

public void setColumnSeparator(char columnSeparator) {


[Link] = columnSeparator;
}
}

DINO Export

Introduction

Export scalar timeseries to DINO Tuf type format (example config).

The DINO-format is a text file with the extension .tuf.
It consists of a fixed block of text with information on the file.
The lines in the text block are marked with a #.
The following lines contain the information as specified below in 9 columns separated by a ,

Column 1: NITG code of the measuring point
Column 2: Sequence number of the measurement series or filter
Column 3: Observation date, format: yyyy/mm/dd
Column 4: Observation time, format: hh:mm:ss
Column 5: The reading in cm (i.e. not the calculated level)
Columns 6, 7 and 8 are not used, but the separator characters are still placed
Column 9: Remark

NOTES:

assumed that the file always contains just 1 parameter for one or more locations;
parameter id or name is not mentioned in the file;
only non-missing values are written;
Number of decimals is zero
second column should contain the external parameter qualifier (Which is 01 in this example)
missing values are indicated with an empty position (, ,)?

Example

#TNO_NITGEXCHANGE_FILE=
#VERSION= 1, 1, 0
#FILE_SOURCE=
#FILE_DATE=
#DATA_SET_NAME_IN= DINO
#DATA_SET_NAME_OUT=
#REMARK=
#OBJECT_MEASUREMENT_TYPE= GWL
#COLUMN= 9
#COLUMN_INFO= 1, OBJECT_ID
#COLUMN_INFO= 2, OBJECT_SUB_ID
#COLUMN_INFO= 3, DATE, YYYY/MM/DD
#COLUMN_INFO= 4, TIME, HH24:MI:SS
#COLUMN_INFO= 5, VALUE, CM, MP
#COLUMN_INFO= 6, REM
#COLUMN_INFO= 7, QLT
#COLUMN_INFO= 8, REL
#COLUMN_INFO= 9, NOTE
#COLUMN_SEPERATOR= ,
#DATA_INSERT_METHOD=
#DATA_UPDATE_METHOD=
#EOH=
B58G0294,01,2007/09/14,[Link],134,,,,
B58G0294,01,2007/10/01,[Link],137,,,,
B58G0294,01,2007/10/14,[Link],134,,,,
B58G0294,01,2007/10/29,[Link],131,,,,
B58G0294,01,2007/11/15,[Link],120,,,,
B58G0294,01,2007/11/30,[Link],102,,,,
B58G0294,01,2007/12/18,[Link],109,,,,
B58G0294,01,2008/01/14,[Link],106,,,,
B58G0294,01,2008/01/28,[Link],105,,,,
B58G0294,01,2008/02/15,[Link],105,,,,
B58G0294,01,2008/03/03,[Link],116,,,,
B58G0294,01,2008/03/14,[Link],109,,,,
B58G0294,01,2008/03/31,[Link],84,,,,
B58G0295,01,2007/09/14,[Link],93,,,,
B58G0295,01,2007/10/01,[Link],82,,,,
B58G0295,01,2007/10/14,[Link],98,,,,
B58G0295,01,2007/10/29,[Link],98,,,,
B58G0295,01,2007/11/15,[Link],87,,,,
B58G0295,01,2007/11/30,[Link],89,,,,
B58G0295,01,2007/12/18,[Link],77,,,,
B58G0295,01,2008/01/14,[Link],75,,,,
B58G0295,01,2008/01/28,[Link],73,,,,
B58G0295,01,2008/02/15,[Link],67,,,,
B58G0295,01,2008/03/03,[Link],70,,,,
B58G0295,01,2008/03/14,[Link],70,,,,
B58G0295,01,2008/03/31,[Link],58,,,,
B58D0446,01,2007/09/14,[Link],287,,,,
B58D0446,01,2007/10/01,[Link],292,,,,

B58D0446,01,2007/10/14,[Link],292,,,,
B58D0446,01,2007/10/29,[Link],293,,,,
B58D0446,01,2007/11/15,[Link],280,,,,
B58D0446,01,2007/11/30,[Link],288,,,,
B58D0446,01,2007/12/18,[Link],280,,,,
B58D0446,01,2008/01/14,[Link],278,,,,
B58D0446,01,2008/01/28,[Link],282,,,,
B58D0446,01,2008/02/15,[Link],271,,,,
B58D0446,01,2008/03/03,[Link],272,,,,
B58D0446,01,2008/03/14,[Link],278,,,,
B58D0446,01,2008/03/31,[Link],263,,,,
B58G0296,01,2007/09/14,[Link],83,,,,
B58G0296,01,2007/10/01,[Link],80,,,,
B58G0296,01,2007/10/14,[Link],85,,,,
B58G0296,01,2007/10/29,[Link],73,,,,
B58G0296,01,2007/11/15,[Link],69,,,,
B58G0296,01,2007/11/30,[Link],66,,,,
B58G0296,01,2007/12/18,[Link],80,,,,
B58G0296,01,2008/01/14,[Link],78,,,,
B58G0296,01,2008/01/28,[Link],78,,,,
B58G0296,01,2008/02/15,[Link],78,,,,
B58G0296,01,2008/03/03,[Link],79,,,,
B58G0296,01,2008/03/14,[Link],76,,,,
B58G0296,01,2008/03/31,[Link],63,,,,
B58G0297,01,2007/09/14,[Link],80,,,,
B58G0297,01,2007/10/01,[Link],73,,,,
B58G0297,01,2007/10/14,[Link],80,,,,
B58G0297,01,2007/10/29,[Link],70,,,,
B58G0297,01,2007/11/15,[Link],56,,,,
B58G0297,01,2007/11/30,[Link],68,,,,
B58G0297,01,2007/12/18,[Link],76,,,,
B58G0297,01,2008/01/14,[Link],76,,,,
B58G0297,01,2008/01/28,[Link],77,,,,
B58G0297,01,2008/02/15,[Link],63,,,,
B58G0297,01,2008/03/03,[Link],65,,,,
B58G0297,01,2008/03/14,[Link],62,,,,
B58G0297,01,2008/03/31,[Link],48,,,,
B58D1904,01,2007/09/14,[Link],102,,,,
B58D1904,01,2007/10/01,[Link],101,,,,
B58D1904,01,2007/10/14,[Link],100,,,,
B58D1904,01,2007/10/29,[Link],97,,,,
B58D1904,01,2007/11/15,[Link],88,,,,
B58D1904,01,2007/11/30,[Link],86,,,,
B58D1904,01,2007/12/18,[Link],55,,,,
B58D1904,01,2008/01/14,[Link],79,,,,
B58D1904,01,2008/01/28,[Link],79,,,,
B58D1904,01,2008/02/15,[Link],77,,,,
B58D1904,01,2008/03/03,[Link],86,,,,
B58D1904,01,2008/03/14,[Link],71,,,,
B58D1904,01,2008/03/31,[Link],51,,,,
B58G0298,01,2007/09/12,[Link],199,,,,
B58G0298,01,2007/10/01,[Link],195,,,,
B58G0298,01,2007/10/16,[Link],204,,,,
B58G0298,01,2007/10/30,[Link],190,,,,
B58G0298,01,2007/11/15,[Link],176,,,,
B58G0298,01,2007/11/28,[Link],183,,,,
B58G0298,01,2007/12/28,[Link],0,,,,
B58G0298,01,2007/12/17,[Link],177,,,,
B58G0298,01,2007/12/28,[Link],,,,,
B58G0298,01,2008/01/15,[Link],166,,,,
B58G0298,01,2008/01/29,[Link],169,,,,
B58G0298,01,2008/02/18,[Link],168,,,,
B58G0298,01,2008/02/28,[Link],174,,,,
B58G0298,01,2008/03/12,[Link],175,,,,
B58G0298,01,2008/03/31,[Link],170,,,,
B58G0299,01,2007/09/12,[Link],194,,,,
B58G0299,01,2007/10/01,[Link],193,,,,
B58G0299,01,2007/10/16,[Link],194,,,,
B58G0299,01,2007/10/30,[Link],189,,,,
B58G0299,01,2007/11/15,[Link],168,,,,
B58G0299,01,2007/11/28,[Link],177,,,,
B58G0299,01,2007/12/17,[Link],165,,,,
B58G0299,01,2007/12/28,[Link],,,,,
B58G0299,01,2008/01/15,[Link],166,,,,
B58G0299,01,2008/01/29,[Link],171,,,,"aflezing was 1,17"

Fliwas Export

Introduction

Export scalar timeseries to fliwas type format (example config).

Example

<?xml version="1.0" encoding="UTF-8"?>
<fliwas
xsi:schemaLocation="[Link]
version="1.0" xmlns="[Link]
xmlns:xsi="[Link]
<header gebied="fews" datum="2003-03-01" tijd="[Link]" volgnummer="1.0">
<riviertak naam="EA_H-2001">
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="2.11"/>
<waterstand km="200" stand="2.11"/>
<waterstand km="400" stand="2.11"/>
<waterstand km="600" stand="2.11"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="3.11"/>
<waterstand km="200" stand="3.11"/>
<waterstand km="400" stand="3.11"/>
<waterstand km="600" stand="3.11"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="4.11"/>
<waterstand km="200" stand="4.11"/>
<waterstand km="400" stand="4.11"/>
<waterstand km="600" stand="4.11"/>
</voorspelling>
<maximum>
<waterstand km="27" datum="2003-03-01" tijd="[Link]" stand="1.31"/>
<waterstand km="28" datum="2003-03-01" tijd="[Link]" stand="1.41"/>
</maximum>
</riviertak>
<riviertak naam="EA_H-2002">
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="3.51"/>
<waterstand km="100" stand="3.51"/>
<waterstand km="300" stand="3.51"/>
<waterstand km="500" stand="3.51"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="4.51"/>
<waterstand km="100" stand="4.51"/>
<waterstand km="300" stand="4.51"/>
<waterstand km="500" stand="4.51"/>
</voorspelling>
<maximum>
<waterstand km="29" datum="2003-03-01" tijd="[Link]" stand="1.71"/>
</maximum>
</riviertak>
<riviertak naam="EA_H-2032">
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="111" stand="1.91"/>
<waterstand km="222" stand="1.91"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="111" stand="2.91"/>
<waterstand km="222" stand="2.91"/>
</voorspelling>
</riviertak>
</header>
</fliwas>

GIN Export

Introduction

GIN stands for 'Gemeinsame Informationsplattform Naturgefahren' and will be the central information hub for natural hazard events in
Switzerland. The GIN export functionality is available from 2010_01 onwards and patches will be made available for versions 2009_01 and
2009_02.

GIN Export Configuration

The configuration of the GIN export module follows the normal pattern, except that two qualifiers are necessary to get the proper values for the
attributes 'datasourceProvider' and 'abbreviation'.

First put the qualifiers in Config/RegionConfigFiles/[Link]

<?xml version="1.0" encoding="UTF-8"?>


<qualifiers xmlns="[Link] xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<allowReferencingUndefinedQualifiers>false</allowReferencingUndefinedQualifiers>
...
<qualifier id="BAFU" name="BAFU"/>
<qualifier id="Abbreviation1" name="Abbreviation1"/>
<qualifier id="Abbreviation2" name="Abbreviation2"/>
...
</qualifiers>

For instance if your export configuration is in the file Config\ModuleConfigFiles\export\[Link],

<?xml version="1.0" encoding="UTF-8"?>


<timeSeriesExportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<export>
<general>
<exportType>GIN_Export</exportType>
<folder>$EXPORT_FOLDER_ROOT$/GIN</folder>
<exportFileName>
<name>[Link]</name>
<prefix>
<timeZeroFormattingString>[Link]</timeZeroFormattingString>
</prefix>
</exportFileName>
...
</general>
<timeSeriesSet>
...
<parameterId>par1</parameterId>
<qualifierId>BAFU</qualifierId>
<qualifierId>Abbreviation1</qualifierId>
<locationId>loc1</locationId>
...
</timeSeriesSet>
</export>
<export>
<general>
...
</general>
<timeSeriesSet>
...
<parameterId>par2</parameterId>
<qualifierId>BAFU</qualifierId>
<qualifierId>Abbreviation2</qualifierId>
<locationId>loc2</locationId>
...
</timeSeriesSet>
</export>
</timeSeriesExportRun>

The following code shows a sample output file:

<?xml version="1.0" encoding="UTF-8"?>
<Collection datasourceName="loc1" datasourceProvider="BAFU1"
xmlns:xsi="[Link]
<prediction abbreviation="Abbreviation1" datasourceName="loc1" datasourceProvider="BAFU">
<run>
<inittime>2003-11-05T[Link].000+0100</inittime><!-- init time hydrological model-->
<inittime2>2003-11-05T[Link].000+0100</inittime2><!-- init time meteorological model-->
</run>
<member>3</member>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.5161157</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.749531</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.35472012</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.91763437</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.29822087</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.038461924</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.94585866</par1>
</preddata>
</prediction>
</Collection>

Example configuration

<?xml version="1.0" encoding="UTF-8"?>
<timeSeriesExportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<export>
<general>
<exportType>GIN_Export</exportType>
<folder>$EXPORT_FOLDER_ROOT$/GIN</folder>
<exportFileName>
<name>.[Link]</name>
<prefix>
<timeZeroFormattingString>[Link]</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportGIN</idMapId>
<exportMissingValue>-999</exportMissingValue>
<omitMissingValues>true</omitMissingValues>
<exportTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ARMA_OberThur_COSMO7</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<qualifierId>BAFU</qualifierId>
<qualifierId>abbreviation1</qualifierId>
<locationId>H-2181</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" startOverrulable="false" end="72" endOverrulable="true"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</export>
<export>
<general>
<exportType>GIN_Export</exportType>
<folder>$EXPORT_FOLDER_ROOT$/GIN</folder>
<exportFileName>
<name>.[Link]</name>
<prefix>
<timeZeroFormattingString>[Link]</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportGIN</idMapId>
<exportMissingValue>-999</exportMissingValue>
<omitMissingValues>true</omitMissingValues>
<exportTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>EnsembleGIN</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<qualifierId>BAFU</qualifierId>
<qualifierId>abbreviation2</qualifierId>
<locationId>H-2181</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" startOverrulable="false" end="72" endOverrulable="true"/>
<readWriteMode>read only</readWriteMode>
<ensembleId>GIN</ensembleId>
</timeSeriesSet>
</export>
</timeSeriesExportRun>

GRDC Export

Introduction

Export scalar timeseries to GRDC type format (example config). GRDC-NRT-Format - for the exchange of near real-time hydrological data.

Example

# GRDC-NRT-Format - for the exchange of near real-time hydrological data


#
# National Station ID;Timestamp;Water level;Discharge;is missing value water level;is missing value
discharge;is directly determined water level?;is directly determined discharge?;is data reliable water
level?;is data reliable discharge?;aggregation interval water level&discharge;aggregation offset water
level&discharge;is ice cover?;is ice jam?;is weedage?;is influenced by backwater?
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2032;2003-03-01 [Link];-999;-999;1;1;0;0;0;0;0;;;;;
EA_H-2001;2003-03-01 [Link];0.35;1.35;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.35;2.35;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;3.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;0.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;0.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;0.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;0.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;0.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.34;0.34;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.33;0.33;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.32;0.32;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.32;0.32;0;0;0;0;1;1;0;;;;;
EA_H-2001;2003-03-01 [Link];0.32;0.32;0;0;0;0;1;1;0;;;;;

iBever Export

Introduction

The iBever file format is a special CSV ASCII format for water quality data that can be imported by iBever. In FEWS it can be used to export
sample time series.

Each timeseries value in the timeseries content is written as a separate row in the file.

Each row contains information on the location ID and name, parameter ID and name, date and time, value, unit, flag
Only non-missing values are printed.

More information on the iBever CSV format can be found on the iBever internet site:

[Link]

The Export module in FEWS exports the following columns

iBever tag FEWS Info

mpn_mpnomsch Location Name

mpn_mpnident Location ID

mwa_mwadtmb Measurement datum

mwa_mwatijdb Measurement Time

mwa_mwawrden Value

mep_domgwcod Unit

mco_domgwcod Compartment, at present only '10'

mrsinovs_domafkrt Detection Limit '<' or '>'

hoe_domgwcod Parameter qualifier

mps_domgwcod Parameter ID

Example configuration file (example config).

Example File

mpn_mpnomsch;mpn_mpnident;mwa_mwadtmb;mwa_mwatijdb;mwa_mwawrden;mep_domgwcod;mco_domgwcod;mrsinovs_domafkrt;hoe_domgwco
Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Tungelroysebeek trace 1;DTUNG001;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Tungelroysebeek trace 2;DTUNG002;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Tungelroysebeek trace 3;DTUNG003;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Tungelroysebeek trace 1;DTUNG001;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Tungelroysebeek trace 2;DTUNG002;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Tungelroysebeek trace 3;DTUNG003;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Tungelroysebeek trace 1;DTUNG001;2007-05-09;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Tungelroysebeek trace 2;DTUNG002;2007-05-09;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Tungelroysebeek trace 3;DTUNG003;2007-05-09;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];2.0;ug/kg;10;<;NVT;44DDTS

Java source code

[Link]

[Link]

* Each timeseries value in the timeseries content is written as a serperate row in the
file.<br/>
* Each row contains information on the location ID and name, parameter ID and name, date and
time, value, unit, flag
* Only non-missing values are printed.
* <p/>
* The following header information is present for every timeseries array:
* <li>iBever header line</li>
*/
public class IbeverTimeSeriesSerializer implements TextSerializer<TimeSeriesContent> {
private static final String[] HEADER_LINE = {
"mpn_mpnomsch", /* 1 locationName */

"mpn_mpnident", /* 2 locationID */
"mwa_mwadtmb", /* 3 beginDatum */
"mwa_mwatijdb", /* 4 beginTijd */
"mwa_mwawrden", /* 5 meetwaarde */
"mep_domgwcod", /* 6 eenheid */
"mrsinovs_domafkrt", /* 7 Detectiegrens*/
"mps_domgwcod", /* 8 Parameter code*/
"hoe_domgwcod", /* 9 Hoedanigheid code*/
"mco_domgwcod", /* 10 compartiment */
"wtt_cod", /* 11 watertype */
"mpn_mrfxcoor", /* 12 xcoord */
"mpn_mrfycoor"}; /* 13 ycoord */

@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
[Link](HEADER_LINE, ';');

String[] line = new String[13];


for (int i = 0, n = [Link](); i < n; i++) {
[Link](i);
TimeSeriesHeader header = [Link]();

for (int j = 0, m = [Link](); j < m; j++) {


[Link](j);
if (![Link]()) continue;

String[] locationIdParts = [Link]([Link](), ';'); // split locationId

line[0] = [Link]();
line[1] = locationIdParts[0];
line[2] = [Link]([Link](), "yyyy-MM-dd");
line[3] = [Link]([Link](), "HH:mm:ss");
line[4] = [Link]('.');
line[5] = [Link]();
line[6] = setOutOfDetectionRangeFlag([Link]());
line[7] = [Link]();

//Qualifiers can be entered or not. If not then enter defaults


line[8] = [Link]() == 0 ? "NVT" : [Link](0); //
Hoedanigheid code
//default altijd waarde 10, kan ook 80 zijn als het lucht betreft, of 40 voor
waterbodem
line[9] = [Link]() > 1 ? [Link](1) : "10" ; //
compartiment
line[10] = [Link] > 1 ? locationIdParts[1] : ""; // watertype
line[11] = [Link]([Link]().getX(0));
line[12] = [Link]([Link]().getY(0));

[Link](line, ';');
}
}
}

private static String setOutOfDetectionRangeFlag(OutOfDetectionRangeFlag setOutOfDetectionRangeFlag) {
	if (setOutOfDetectionRangeFlag == OutOfDetectionRangeFlag.BELOW_DETECTION_RANGE) return "<";
	if (setOutOfDetectionRangeFlag == OutOfDetectionRangeFlag.ABOVE_DETECTION_RANGE) return ">";
	return "";
}
}


Menyanthes

Introduction

The Menyanthes file format is a special CSV ASCII format that can be imported by Menyanthes. In FEWS it can be used to export sample time
series.

Each export file consists of 3 parts: first a header, followed by a description of the series, and last the time series.

Each timeseries value in the timeseries content is written as a separate row in the file.

Each row contains information on the location ID and Qualifier id, date and time, value, flag
Only non-missing values are printed.

More information on the Menyanthes format can be found on the Menyanthes internet site: [Link]

Description of file format

The Export module in FEWS exports the following header:

Menyanthes tag FEWS info

TITEL "FEWS Menyanthes Export"

GEBRUIKERSNAAM userName

PERIODE Content period start date

DATUM Content period end date

REFERENTIE NAP

The Export module in FEWS exports the following series description:

Menyanthes tag FEWS info

LOCATIE Location Id

FILTERNUMMER Qualifier id

EXTERNE AANDUIDING "n/a"

X COORDINAAT Geometry X

Y COORDINAAT Geometry Y

MAAIVELD Geometry Z

GESCHAT

MEETPUNT NAP

BOVENKANT FILTER

ONDERKANT FILTER

START DATUM Measurement datum start

EINDE DATUM Measurement datum end

The Export module in FEWS exports the following timeseries description:

Menyanthes tag FEWS info

LOCATIE Location Id

FILTERNUMMER Qualifier id

PEIL DATUM TIJD Measurement date time

STAND (NAP) Value

BIJZONDERHEID Flag

Example export file

TITEL: FEWS Menyanthes Export


GEBRUIKERSNAAM: ansink
PERIODE: 2009/02/20-2010/03/22
DATUM:2010/03/22 [Link]
REFERENTIE: NAP

LOCATIE,FILTERNUMMER,EXTERNE AANDUIDING,X COORDINAAT,Y COORDINAAT,MAAIVELD NAP,GESCHAT,MEETPUNT


NAP,BOVENKANT FILTER,ONDERKANT FILTER,START DATUM,EINDE DATUM
40145,1,n/a,156352.0,467817.0,2.27,,,,,2009/02/20,2010/03/22

LOCATIE,FILTERNUMMER,PEIL DATUM TIJD,STAND (NAP),BIJZONDERHEID


40145,1,2009/02/20 [Link],-1.36,8
40145,1,2009/02/20 [Link],-1.36,8
40145,1,2009/02/20 [Link],-1.36,8
40145,1,2009/02/20 [Link],-1.36,8

Source code

[Link]

NetCDF Alert Export

Introduction

To be completed

Example configuration file (example config).

Example

No example present

NETCDF-CF_GRID_MATROOS Export

Overview

This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Exports data to NetCDF files which comply to the CF standard.


More information about the cf standards can be found at: [Link]

There are six types of NetCDF-CF exports which can be defined:

Time series (NETCDF-CF_TIMESERIES)


Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)
Grids (NETCDF-CF_GRID_MATROOS)

Configuring the export

An example of the NETCDF-CF_GRID_MATROOS export can be found at NETCDF-CF_GRID. The only difference is that the exportType must
be changed to NETCDF-CF_GRID_MATROOS.

NETCDF-CF_GRID Export

Overview

This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Exports data to NetCDF files which comply to the CF standard.


More information about the cf standards can be found at: [Link]

There are six types of NetCDF-CF exports which can be defined:

Time series (NETCDF-CF_TIMESERIES)


Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)
Grids (NETCDF-CF_GRID_MATROOS)

Configuring the export

An example of the NETCDF-CF_GRID export will be given here.

ExportNetcdf_Grid 1.00 [Link]


<timeSeriesExportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<export>
<general>
<exportType>NETCDF-CF_GRID</exportType>
<folder>%REGION_HOME%/Export/netcdf/2D</folder>
<exportFileName>
<name>.nc</name>
<prefix>
<timeZeroFormattingString>yyyyMMddHHmm</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportNetCDF</idMapId>
<exportMissingValueString>-9999.0</exportMissingValueString>
</general>
<timeSeriesSet>
<moduleInstanceId>ExportNetcdf_Grid</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>NHI_H_L001</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-7" end="3"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</export>
</timeSeriesExportRun>

An example of the IdMapping used for the NETCDF-CF_GRID export is shown below.
If the parameter has an entry in the standard name CF table, you can enter it in the externalQualifier1 attribute of the parameter. The value of this
qualifier will be added as the standard_name attribute for this variable in the netcdf exported file.

IdExportNetCDF 1.00 [Link]


<parameter externalqualifier1="groundwater (not a standard name, just as example)" internal=
"[Link]" external="groundwater"/>
<location internal="NHI_H_L001" external="NHI_H_L001"/>


NETCDF-CF_PROFILE_MATROOS Export

Overview

This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Exports data to NetCDF files which comply to the CF standard.


More information about the cf standards can be found at: [Link]

There are six types of NetCDF-CF exports which can be defined:

Time series (NETCDF-CF_TIMESERIES)


Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)
Grids (NETCDF-CF_GRID_MATROOS)

Configuring the export

An example of the NETCDF-CF_PROFILE_MATROOS export can be found at NETCDF-CF_PROFILE. The only difference is that the exportType
must be changed to NETCDF-CF_PROFILE_MATROOS.

The structure of the netCDF file which is created is shown below.

200101010000_example.nc
int connections(noelements, nodesperelement) ;
connections:long_name = "Left and right node for each element" ;
connections:_FillValue = -999 ;
float waterlevel(time, nonodes) ;
waterlevel:long_name = "Waterlevel" ;
waterlevel:units = "m" ;
waterlevel:_FillValue = -9999.f ;

// global attributes:
:title = "Netcdf data" ;
:institution = "Deltares" ;
:source = "export NETCDF-CF_PROFILE_MATROOS from FEWS" ;
:history = "Created at Thu Oct 15 [Link] GMT 2009" ;
:references = "[Link] ;
:Conventions = "CF-1.4" ;
:coordinate_system = "RD" ;

NETCDF-CF_PROFILE Export

Overview

This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Exports data to NetCDF files which comply to the CF standard.


More information about the cf standards can be found at: [Link]

There are six types of NetCDF-CF exports which can be defined:

Time series (NETCDF-CF_TIMESERIES)


Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)

Grids (NETCDF-CF_GRID_MATROOS)

Configuring the export

An example of the NETCDF-CF_PROFILE export will be given here.

ExportNetcdf_Profile 1.00 [Link]


<timeSeriesExportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<export>
<general>
<exportType>NETCDF-CF_PROFILE</exportType>
<folder>%REGION_HOME%/Export/netcdf/1D</folder>
<exportFileName>
<name>.nc</name>
<prefix>
<timeZeroFormattingString>yyyyMMddHHmm</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportNetCDF</idMapId>
<exportMissingValueString>-9999.0</exportMissingValueString>
</general>
<timeSeriesSet>
<moduleInstanceId>ExportNetcdf_Profile</moduleInstanceId>
<valueType>longitudinalprofile</valueType>
<parameterId>[Link]</parameterId>
<locationId>Maastakken_NDB(Haringvliet)</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</export>
</timeSeriesExportRun>

An example of the IdMapping used for the NETCDF-CF_PROFILE export will be given below.
Note that in the IdMapping of the parameters, the external name must match the variable names as used by the netcdf file exactly (case
sensitive). The locations that are mapped refer to branch id's which are defined in the [Link].
If the parameter has an entry in the standard name CF table, you can enter it in the externalQualifier1 attribute of the parameter. The value of this
qualifier will be added as the standard_name attribute for this variable in the netcdf exported file.

IdExportNetCDF 1.00 [Link]


<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">

<parameter externalQualifier1="waterlevel (not a standard name, but for example)" internal="[Link]" external="waterlevel"/>

<location internal="Maastakken_NDB(Haringvliet)" external="Maastakken_NDB(Haringvliet)"/>

</idMap>

An example of the branches file is shown below.

Branches 1.00 [Link]
<branches xmlns:xsi="[Link] xmlns="[Link]
" xsi:schemalocation="[Link]
[Link] version="1.1">
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<branch id="Maastakken_NDB(Haringvliet)">
<branchName>Maastakken_NDB(Haringvliet)</branchName>
<startChainage>1030</startChainage>
<endChainage>321624</endChainage>
<pt label="R_MS_001_1" chainage="1030" z="40.32" z_rb="51.34" y="308594.236" x=
"176029.1129"/>
<pt label="R_MS_001_2" chainage="2061" z="41.79" z_rb="50.92" y="309427.7428" x=
"176631.808"/>
...
<pt label="N_NDB_92" chainage="321624" z="-7.82" z_rb="2.79" y="436953" x="57935.1"/>
</branch>
</branches>

NETCDF-CF_TIMESERIES_MATROOS Export

Overview

This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Exports data to NetCDF files which comply to the CF standard.


More information about the cf standards can be found at: [Link]

There are six types of NetCDF-CF exports which can be defined:

Time series (NETCDF-CF_TIMESERIES)


Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)
Grids (NETCDF-CF_GRID_MATROOS)

Configuring the export

An example of the NETCDF-CF_TIMESERIES_MATROOS export can be found at NETCDF-CF_TIMESERIES. The only difference is that the
exportType must be changed to NETCDF-CF_TIMESERIES_MATROOS.

The structure of the netCDF file which is created is shown below.

200601010000_example.nc

NETCDF-CF_TIMESERIES Export

Overview

This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)

Exports data to NetCDF files which comply to the CF standard.


More information about the cf standards can be found at: [Link]

There are six types of NetCDF-CF exports which can be defined:

Time series (NETCDF-CF_TIMESERIES)

Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)
Grids (NETCDF-CF_GRID_MATROOS)

Configuring the export

An example of the NETCDF-CF_TIMESERIES export will be given here.

ExportNetcdf_Timeseries 1.00 [Link]


<timeSeriesExportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link]
[Link] xmlns="
[Link]
<export>
<general>
<exportType>NETCDF-CF_TIMESERIES</exportType>
<folder>%REGION_HOME%/Export/netcdf/0D</folder>
<exportFileName>
<name>.nc</name>
<prefix>
<timeZeroFormattingString>yyyyMMddHHmm</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportNetCDF</idMapId>
<exportMissingValueString>-999</exportMissingValueString>
<exportTimeZone>
<timeZoneName>GMT</timeZoneName>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ExportNetcdf_Timeseries</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DMFlowPoints</locationSetId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-365" end="365"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</export>
</timeSeriesExportRun>

An example of the IdMapping used for the NETCDF-CF_TIMESERIES export will be given below. In this example, the mapped locations
correspond to the locations of the locationSet as defined above in the ExportNetcdf_Timeseries.xml.
If the parameter has an entry in the standard name CF table, you can enter it in the externalQualifier1 attribute of the parameter. The value of this
qualifier will be added as the standard_name attribute for this variable in the netcdf exported file.

IdExportNetCDF 1.00 [Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<parameter externalQualifier1="discharge (not standardname, just for test)" internal="[Link]" external="discharge"/>

<location internal="DMTak_1001" external="1001"/>


<location internal="DMTak_1002" external="1002"/>
<location internal="DMTak_1003" external="1003"/>
<location internal="DMTak_1004" external="1004"/>
...
<location internal="DMTak_6115" external="6115"/>
</idMap>

NetCDF MapD Export

Introduction

To be completed

Example configuration file (example config).

Example

No example present

PI Export

Introduction

Export scalar timeseries to PI type format (example config). This XML format is described in detail in the Delft-FEWS Published Interface
documentation.

Example

<?xml version="1.0" encoding="UTF-8"?>
<TimeSeries
xsi:schemaLocation="[Link]
[Link]
version="1.2" xmlns="[Link]
xmlns:xsi="[Link]
<timeZone>0.0</timeZone>
<series>
<header>
<type>accumulative</type>
<locationId>EA_H-2001</locationId>
<parameterId>Rainfall</parameterId>
<timeStep unit="second" multiplier="900"/>
<startDate date="2003-03-01" time="[Link]"/>
<endDate date="2003-03-01" time="[Link]"/>
<missVal>-999.0</missVal>
<stationName>Bewdley</stationName>
<units>m</units>
</header>
<event date="2003-03-01" time="[Link]" value="-999.0" flag="88"/>
<event date="2003-03-01" time="[Link]" value="0.0010" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0020" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0030" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0040" flag="44"/>
<event date="2003-03-01" time="[Link]" value="-999.0" flag="88"/>
<event date="2003-03-01" time="[Link]" value="0.0060" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0070" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0080" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.009000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.010000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.011000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.012" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.013" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.014" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.015000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.016" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.017" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.018000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.019000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.020000001" flag="44"/>
</series>
</TimeSeries>

Rhine Alarm Model

Introduction

The RAM export function exports time series in the Rhine Alarm Model format. The file is meant to contain daily time series of Level and Flow
series for the Rhine River. Make sure the exported timeSeriesSets all have the same length (relative view period); this period may contain
missing values.

The export function will first export all Level series, then all Flow series. Make sure the idMapping file converts all level series to a parameter ID
named "Level" and all flow series to a parameter ID named "Flow". Example of an ID mapping file: IdExportRAM.

ID mapping
<parameter internal="[Link]" external="Level"/>
<location internal="Andernach" external="ANDERNACH"/>
<location internal="Lobith" external="LOBITH"/>
...........

There is a standard footer at the end of the file. This footer is as follows:

[Stuwen]
StuwProgrammaS285 = -1

[Haringvlietsluizen]
SluisProgrammaLPH84 = -1

[Dispersie]
DispersieBerekend = -1
DispersieWaarde = 5

An example of the export module instance Export_RAM.

Example

[Water Levels]
Date=05.02.2008 00.00
Variable=WaterLevel
"Station","Level"

[Water Levels]
Date=06.02.2008 00.00
Variable=WaterLevel
"Station","Level"
"Maxau",4.0780106
"Speyer",2.9005127
"Worms",1.4478912
"Mainz",2.7116013
"Kaub",2.0306778
"Koblenz",2.4764786
"Andernach",3.2620697
"Bonn",3.4753723
"Keulen",3.6934166
"Dusseldorf",3.3034477
"Ruhrort",4.706806
"Wesel",4.499606
"Rees",4.093315
"Lobith",10.248271
"Driel boven",7.5582066
"Amerongen boven",5.8657036
"Hagestein boven",2.7667627
"H-RN-0908",3.9794693

[Water Levels]
Date=07.02.2008 00.00
Variable=WaterLevel
"Station","Level"
"Maxau",4.3678207
"Speyer",2.9759445
"Worms",1.5724945

......
......

[Flows]
Date=05.02.2008 00.00
Variable=Flow
"Station","Flow"
"Rheinfelden",624.347

[Flows]
Date=06.02.2008 00.00
Variable=Flow
"Station","Flow"
"Maxau",777.0
"Speyer",860.47656
"Worms",1008.7158

"Mainz",1295.96
"Kaub",1444.0793
"Koblenz",1528.0868
"Andernach",2208.8018
"Bonn",2224.2356
"Keulen",2332.453
"Dusseldorf",2385.7227
"Ruhrort",2511.8423
"Wesel",2610.2185
"Rees",2735.6772
"Lobith",2766.772
"Driel boven",473.542
"Amerongen boven",474.09454
"Hagestein boven",488.70618
"H-RN-0908",662.99994
"Rheinfelden",586.232

[Flows]
Date=07.02.2008 00.00
Variable=Flow
"Station","Flow"

........
........

[Stuwen]
StuwProgrammaS285 = -1

[Haringvlietsluizen]
SluisProgrammaLPH84 = -1

[Dispersie]
DispersieBerekend = -1

DispersieWaarde = 5

Java source code

[Link]

[Link]

private LineWriter writer = null;


private TimeSeriesContent content = null;

@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
[Link] = writer;
[Link] = content;

[Link]([Link]());

writeLevelEvents();
writeFlowEvents();
writeFooter();
}

private void writeLevelEvents() throws IOException {


//Loop over all timesteps and over all series, pick the level series

for (int i = 0, n = [Link](); i < n; i++) {


[Link](i);

//Write first line with parameter name


[Link]("[Water Levels]");
[Link]("Date=" + [Link]([Link](), "[Link]
[Link]"));
[Link]("Variable=WaterLevel");
[Link]("\"Station\"" + ',' + "\"Level\"");

//Get values of every timeseries for current time.


for (int j = 0, m = [Link](); j < m; j++) {
[Link](j);
TimeSeriesHeader header = [Link]();
if (![Link]().equals("Level")) continue;
if ([Link]()) continue;
[Link]('\"' + [Link]() + "\"," + [Link]('.'));
}
[Link]();
}
[Link]();
}

private void writeFlowEvents() throws IOException {


//Loop over all timesteps and over all series, pick the flow series

for (int i = 0, n = [Link](); i < n; i++) {


[Link](i);
//Write first line with parameter name
[Link]("[Flows]");
[Link]("Date=" + [Link]([Link](), "[Link]
[Link]"));
[Link]("Variable=Flow");

[Link]("\"Station\"" + ',' + "\"Flow\"");

//Get values of every timeseries for current time.


for (int j = 0, m = [Link](); j < m; j++) {
[Link](j);
TimeSeriesHeader header = [Link]();
if (![Link]().equals("Flow")) continue;
if ([Link]()) continue;
[Link]('\"' + [Link]() + "\"," + [Link]('.'));
}
[Link]();
}
[Link]();
}

private void writeFooter() throws IOException {


[Link]("[Stuwen]");
[Link]("StuwProgrammaS285 = -1");
[Link]();
[Link]("[Haringvlietsluizen]");
[Link]("SluisProgrammaLPH84 = -1");
[Link]();
[Link]("[Dispersie]");
[Link]("DispersieBerekend = -1");
[Link]("DispersieWaarde = 100");
}
}

SHEF Export

Introduction

Export scalar timeseries to SHEF format (example config).

Example

No example output at present.

TSD Export

Introduction

Export scalar timeseries to tsd type format (example config). This is a tab delimited file with two header rows. The first column contains the
date/time. The date format is yyyy-MM-dd HH:mm:ss. The first header line contains the parameter and the T0. The second header line contains the location above
each column. As such, only one parameter can be exported per file.

Example

No example output from an actual export is present.
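
A purely hypothetical sketch of the layout described above (columns are tab separated; the parameter, locations, date/times and values are invented for illustration only):

Waterlevel [m]	T0: 2009-01-01 12:00:00
date/time	Location_A	Location_B
2009-01-01 12:00:00	1.23	0.98
2009-01-01 13:00:00	1.31	1.02
2009-01-01 14:00:00	1.28	1.05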

UM Aquo export

Introduction

The UM Aquo file format is a special XML format to exchange all types of time series data, defined by the Dutch IDsW. In FEWS it can only be
used to export sample time series. Currently only the format version of 2009 is supported.

More information on the UM Aquo file format can be found on the IDsW internet site for UM Aquo:

[Link]

The export module in FEWS requires additional information that should be supplied to the export module by using an idMap. In this
idMap the following four external qualifiers should be defined:

1. externalQualifier1 : eenheid
2. externalQualifier2 : hoedanigheid
3. externalQualifier3 : compartiment
4. externalQualifier4 : <landcode>;<waterbeheerderCode>;<waterbeheerder>

All codes are listed in:

unit / eenheid
[Link]
hoedanigheid
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
compartiment
[Link]
[Link]
[Link]
[Link]
[Link]
landcode
always: NL (it is a Dutch standard only...)
waterbeheerderCode and waterbeheerder
[Link]

Example flagConversionFile

<flagConversions xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link] xmlns="
[Link]
<flagConversion>
<inputFlag> <value>0</value></inputFlag>
<outputFlag> <value>0</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>1</value></inputFlag>
<outputFlag> <value>0</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>2</value></inputFlag>
<outputFlag> <value>0</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>3</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>4</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>5</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>6</value></inputFlag>
<outputFlag> <value>99</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>7</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>8</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>9</value></inputFlag>
<outputFlag> <value>99</value></outputFlag>
</flagConversion>
<defaultOuputFlag><value>0</value></defaultOuputFlag>
<missingValueFlag><value>99</value></missingValueFlag>
</flagConversions>

Example idMap

Below, an example idMap file is listed that is used within the export of time series for the waterboard Vallei en Eem in the Netherlands. That is why
landcode=NL, waterbeheerderCode=10 and waterbeheerder=Waterschap Vallei en Eem.

<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<!-- external: UM Aquo parameter
internal: FEWS parameter
externalQualifier1 : eenheid
externalQualifier2 : hoedanigheid
externalQualifier3 : compartiment
externalQualifier4 : landcode;waterbeheerderCode;waterbeheerder
-->
<!-- Temperatuur parameters-->
<parameter externalqualifier3="LT;Lucht" externalqualifier4="NL;10;Waterschap Vallei en Eem"
externalqualifier1="oC;graad Celsius" internal="T_meting_lucht" externalqualifier2="NVT;Niet van
toepassing" external="T;Temperatuur"/>
<parameter externalqualifier3="OW;" externalqualifier4="NL;10;Waterschap Vallei en Eem"
externalqualifier1="oC;graad Celsius" internal="T_meting_oppwater" externalqualifier2="NVT;Niet
van toepassing" external="T;Temperatuur"/>
<parameter externalqualifier3="GW:Grondwater" externalqualifier4="NL;10;Waterschap Vallei en
Eem" externalqualifier1="oC;graad Celsius" internal="T_meting_grondwater" externalqualifier2=
"NVT;Niet van toepassing" external="T;Temperatuur"/>
<!-- Hoogte parameters-->
<parameter externalqualifier3="OW;Oppervlaktewater" externalqualifier4="NL;10;Waterschap
Vallei en Eem" externalqualifier1="m;meter" internal="WATHTE_meting" externalqualifier2=
"NAP;t.o.v. Normaal Amsterdams Peil" external="WATHTE;Waterhoogte"/>
<!-- Neerslag parameters-->
<parameter externalqualifier3="HW;Hemelwater" externalqualifier4="NL;10;Waterschap Vallei en
Eem" externalqualifier1="ml;milliliter" internal="NEERSG_meting" externalqualifier2="NVT;Niet van
toepassingl" external="NEERSG;Neerslag"/>
<!-- Debiet parameters-->
<parameter externalqualifier3="OW;Oppervlaktewater" externalqualifier4="NL;10;Waterschap
Vallei en Eem" externalqualifier1="m3/s;kubieke meter per seconde" internal="Q_meting"
externalqualifier2="NVT;Niet van toepassingl" external="Q;Debiet"/>
<parameter externalqualifier3="OW;Oppervlaktewater" externalqualifier4="NL;10;Waterschap
Vallei en Eem" externalqualifier1="m3/s;kubieke meter per seconde" internal="Q_berekend"
externalqualifier2="NVT;Niet van toepassingl" external="Q;Debiet"/>
<parameter externalqualifier3="OW;Oppervlaktewater" externalqualifier4="NL;10;Waterschap
Vallei en Eem" externalqualifier1="m3/s;kubieke meter per seconde" internal="Q_totaal"
externalqualifier2="NVT;Niet van toepassingl" external="Q;Debiet"/>
<!-- Druk parameters-->
<parameter externalqualifier3="LT;Lucht" externalqualifier4="NL;10;Waterschap Vallei en Eem"
externalqualifier1="B;Beaufort" internal="DRUK_meting_lucht" externalqualifier2="NVT;Niet van
toepassingl" external="DRUK;Druk"/>
</idMap>

Java source code

[Link]

Rdbms Export
What [Link]

Required no

Description Exports historical time series data to RDBMS

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="RdbmsExport">


<description>Exports historical time series data to RDBMS</description>
<className>[Link]</className>

</moduleDescriptor>

Configuration
General

jdbcDriverClass
jdbcConnectionString
user
password
exportTimeWindow
exportTimeZone
moduleInstanceID
filter
RDBMS DDL/object creation scripts
Required database size (disk space)
Additional remarks

Configuration

The RdbmsExport module exports historical time series data to tables in a RDBMS. These tables must exist prior to running the module. Notice
that in the current version no qualifiers are supported!

The configuration of the module is setup as:

In the sections below the different elements of the configuration are described

General

jdbcDriverClass

JDBC driver class to use for connection to RDBMS.
FEWS installation contains drivers for Oracle, PostgreSQL, Firebird

An Oracle example

<jdbcDriverClass>[Link]</jdbcDriverClass>

jdbcConnectionString

Connection string to use by JDBC driver to connect to RDBMS

An Oracle example:

<jdbcConnectionString>jdbc:oracle:thin:@localhost:1521:xe</jdbcConnectionString>

user

Username on the (target) RDBMS.

password

Password for user on the (target) RDBMS.


Encryption of the password in the FEWS configuration is not implemented yet.

exportTimeWindow

Defines the time window for which to export data from FEWS.

<exportTimeWindow unit="day" start="-10" end="0"/>

When setting up an export configuration, one must consider the following:

The RdbmsExport functionality is designed around an initial complete export (for the configured data sets) and periodic exports thereafter
of new and/or mutated data, to keep the FEWS datastore and the export database in sync.
Identifying new or mutated historical data can only be done within a period of 10 days from the System Time (ST); if this period of 10 days is
exceeded, a complete export is forced:

start < ST - 10 days => complete export
ST - 10 days < start <= ST => mutated and new data only

The exportTimeWindow will be applied if it lies within the aforementioned 10-day period. It then limits the amount of data exported to the
specified start and end.

exportTimeZone

The time zone in which to export the data from FEWS.

<exportTimeZone>+01:00</exportTimeZone>

moduleInstanceID

Optional list of Module Instance Id's for which to export time series data.

<moduleInstance moduleInstanceID="Statistiek_Percentielen_jaar" />
<moduleInstance moduleInstanceID="Statistiek_Percentielen_seizoen" />

filter

Optional list of Filter Id's for which to export time series data.

<filter filterID="TSI_productie" />
<filter filterID="TMX_ruw" />
<filter filterID="DINO_ruw" />

RDBMS DDL/object creation scripts

Tables for storage of data in the RDBMS must be present before the first execution of the RDBMS Export module.
Proper privileges must be assigned to the user account by the database administrator to insert/update data in these tables, as well as (execution)
rights on the sequences and triggers in use.

Data model:

DDL scripts:

Oracle DDL
PostgreSQL DDL
Ms SQL Server DDL

Notice that it may be required to increase some column sizes, like [Link] from the default 256 characters to e.g. 1024 characters to
be able to store the complete string. If the size in the database is not large enough, the export will stop with an error (data truncation error).

Required database size (disk space)

One record in the TIMESERIEDATA table requires about 300 bytes. This means that 1.000.000 records take about 286 MB (300 * 1e6 /
1024 / 1024).

Additional remarks

The value of the timestep attribute (a string indicating the time step) in the Timeserie table will depend on the locale/language settings of
the computer which is running the export module.

Report Export

Report Export Module Configuration

This Report Export module is one of the DELFT-FEWS export modules. This export module is responsible for retrieving reports generated by
forecasting runs from the database, and exporting these to the relevant directory structure on the web server. Reports can then be accessed from
there via the web interface to DELFT-FEWS. All reports are exported as-is by the report export module, i.e. the module is only responsible for
distributing reports that have already been created.

Access to these reports through the web server may be at different levels depending on the user in question. The report export module itself does
not explicitly consider these access rights, but exports the reports in such a structure to allow the static part of the web server to correctly
administer the access rights.

This means the export of reports is divided into three parts

Export of the current forecast information


Export of forecast information other than the current forecast.
Export of system status information

The web server will control access to all or some reports in these categories to appropriate users.

When available as configuration on the file system, the name of the XML file for configuring an instance of the report export module called, for
example, Report_Export may be:

Report_Export 1.00 [Link]

Report_Export File name for the Report_Export configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 133 Elements of the reportExport module configuration

reportExportRootDir

Root directory to which all reports are exported. This directory is typically the root directory of the web server.

currentForecastReports

Root element for definition of exporting reports for the current forecast

currentForecastSubDir

Root directory for exporting current forecasts to.

excludeModuleInstanceId

Optional list of reports generated by report module instances that should not be included in the export of current forecasts.

exportForecastReports

Root element for definition of exporting reports from recent forecasts made. Includes both the current forecast and a configurable number of
recently made forecasts.

numberForecastsToExport

Definition of number of recent forecasts to report.

NOTE: The number defined here should comply with the number of links to other forecasts in the index_template.html file. This file is located in
the reportExportRootDir directory.

exportForecastSubDir

Directory to use as root for exporting other forecasts to. For identification a sub-directory is created for each forecast exported. This sub-directory
is constructed using the id of the taskRun it was created by.

excludeModuleInstanceId

Optional list of reports generated by report module instances that should not be included in the export of other forecasts.

exportSystemStatusReports

Root directory for exporting system status reports to.

includeModuleInstanceId

List of reports, identified by the moduleInstanceId of the report module instances that created them, that should be included in the export of
system status reports.
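
Assembled from the elements described above, a sketch of a report export configuration could look like the following. It assumes a reportExport root element as suggested by the figure caption; the directory names and moduleInstanceIds are illustrative, and the exact nesting should be checked against the schema:

<reportExport>
	<reportExportRootDir>d:/webserver/reports</reportExportRootDir>
	<currentForecastReports>
		<currentForecastSubDir>currentForecast</currentForecastSubDir>
		<excludeModuleInstanceId>Report_SystemStatus</excludeModuleInstanceId>
	</currentForecastReports>
	<exportForecastReports>
		<numberForecastsToExport>5</numberForecastsToExport>
		<exportForecastSubDir>forecasts</exportForecastSubDir>
		<excludeModuleInstanceId>Report_SystemStatus</excludeModuleInstanceId>
	</exportForecastReports>
	<exportSystemStatusReports>
		<includeModuleInstanceId>Report_SystemStatus</includeModuleInstanceId>
	</exportSystemStatusReports>
</reportExport>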

05 General Adapter Module


What [Link]

Description Configuration for the general adapter module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="GeneralAdapter">


<description>General Adaptor to launch published interface compliant modules</description>

<className>[Link]</className>
</moduleDescriptor>

General Adapter Configuration


general
burnInProfile
activities
General settings
description
piVersion
rootDir
workDir
exportDir
exportDataSetDir
exportIdMap
exportUnitConversionsId
importDir
importIdMap
importUnitConversionsId
dumpFileDir
dumpDir
diagnosticFile
missVal
convertDatum

timeZone
timeZoneOffset
timeZoneName
time0Format
ensembleMemberCount
Burn-In Profile
length
timeSeries
Startup Activities
purgeActivity
filter
unzipActivity
zipActivity
Export Activities
exportStateActivity
description
moduleInstanceId
stateExportDir
stateConfigFile
stateLocations
stateSelection
loopTimeStep
writeIntermediateState
ExportTimeSeriesActivity
description
exportFile
exportBinFile
ignoreRunPeriod
includeThresholds
timeSerieSets
timeSerieSets: timeSerieSet
omitMissingValues
omitEmptyTimeSeries
forecastSelectionPeriod
ExportMapStacksActivity
description
exportFile
gridFile
locationId
gridName
gridFormat
timeSerieSet
exportProfilesActivity
ExportDataSetActvity
description
moduleInstanceId
ExportParameterActivity
description
moduleInstanceId
fileName
ExportTableActivity
description
exportFile
tableType
operation
parameters
locationId/locationSetId
exportNetcdfActivity
description
exportFile
timeSeriesSets
omitMissingValues
omitEmptyTimeSeries
ExportRunFileActivity
description
exportFile
properties
Execute Activities
executeActivity
description
command
arguments
environmentVariables
timeOut
overrulingDiagnosticFile

ignoreDiagnostics
GA Variables
Import Activities
description
importStateActivity
stateConfigFile
importTimeSeriesActivity
importMapStacksActivity
importPiNetcdfActivity
importProfilesActivity
Shutdown Activities

General Adapter Configuration


A key feature of DELFT-FEWS is its ability to run external modules to provide essential forecasting functionality. These modules may be
developed by Deltares as well as by other companies or institutions. The DELFT-FEWS system does not have any knowledge of the specific
implementation of these modules. It is rather the central philosophy to have an open system, that is able to treat external modules as plug-ins that
can be used if needed.

The General Adapter is the part of the DELFT-FEWS system that implements this feature. It is responsible for the data exchange with the
modules and for executing the modules and their adapters. The central philosophy of the General Adapter is that it knows as little as possible of
module specific details. Module specific intelligence is strictly separated from the DELFT-FEWS system. In this way an open system can be
guaranteed. Module specific intelligence required by the module to run is vested in the module adapters.

Communication between the General Adapter and a module is established through the published interface (PI). The PI is an XML based data
interchange format. The General Adapter is configured to provide the data required for a module to run in the PI format. A module adapter is then
used to translate the data from the PI to the module native format. Vice versa, results will first be exported to the PI format by a module adapter
before the General Adapter imports them back into DELFT-FEWS.

The General Adapter module can be configured to carry out a sequence of five types of tasks;

Startup Activities. These activities are run prior to a module run and any export or import of data. The activities defined are generally used to
remove files from previous runs that may interfere with the current run.
Export Activities. These activities define all items to be exported through the published interface XML formats to the external module,
prior to the module or the module adapters being initialised.
Execute Activities. The execute activities define the external executables or Java classes to be run. Tracking of diagnostics from these
external activities is included in this section.
Import Activities. These activities define all items to be imported following successful completion of the module run.
Shutdown Activities. These activities are run following completion of all other activities. The activities defined are generally used to remove
files no longer required.

Figure 65 Schematic interaction between the General Adapter and an external module

When available as configuration on the file system, the name of the XML file for configuring an instance of the general adapter module called for
example HBV_Maas_Forecast may be:

HBV_Maas_Forecast 1.00 [Link]

HBV_Maas_Forecast File name for the HBV_Maas_Forecast configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 66 Elements of the General Adapter configuration

general

Root element of general settings.

burnInProfile

Burn-in period and initial value for cold state starts.

activities

Root element for the activities to be defined. The activities are defined in a fixed order;

startUpActivities
exportActivities
executeActivities
importActivities
shutDownActivities

General settings

Figure 67 Elements of the general section of the general adapter configuration

description

Optional description of the configuration. Used for reference purposes only.

piVersion

Version of the PI specification that is supported by the pre and post adapter.

rootDir

Root directory for the external module. Other directories can be defined relative to this rootDir using predefined tags (see comment box below).

workDir

Working directory to be used by the external module. When started this directory will be the current directory.

exportDir

Directory to export data from DELFT-FEWS to the external module. All Published Interface files will be written to this directory (unless overruled in
naming the specific export files).

exportDataSetDir

Directory to export module datasets from DELFT-FEWS to the external module. A module dataset is a ZIP file, which will be unzipped using this
directory as the root directory. If the zip file contains full path information, this will be included as a tree of subdirectories under this directory.

exportIdMap

ID of the IdMap used to convert internal parameterId's and locationId's to external parameter and location Id's. See section on configuration for
Mapping Id's units and flags.

exportUnitConversionsId

Id of UnitConversions to be used for export unit mapping

importDir

Directory to import result data from the external module to DELFT-FEWS. All Published Interface files will be read from this directory (unless
overruled in naming the specific export files).

importIdMap

ID of the IdMap used to convert external parameterId's and locationId's to internal parameter and location Id's. This may be the same IdMap as
used for the export, but may also contain different mappings. See section on configuration for Mapping Id's units and flags.

importUnitConversionsId

Id of UnitConversions to be used for import unit mapping

dumpFileDir

Directory for writing dump files to. Dump Files are created when one of the execute activities fails. A dump file is a ZIP file which includes all the
dumpDir directories defined. The dump file is created immediately on failure, meaning that all data and files are available as they are at the time of
failure and can be used for analysis purposes. The ZIP file name is time stamped to indicate when it was created.

dumpDir

Directory to be included in the dump file. All contents of the directory will be zipped. Multiple dumpDir's may be defined.

NOTE: ensure that the dumpDir does not include the dumpFileDir. This creates a circular reference and may result in corrupted ZIP files.

diagnosticFile

File name and path of diagnostic files created in running modules. This file should be formatted using the Published Interface diagnostics file
specification.

missVal

Optional specification of missing value identifier to be used in PI-XML exported to modules and imported from modules.

NOTE: it is assumed that an external module uses the same missing value identification for both import and export data.

convertDatum

Optional Boolean flag to indicate level data is used and produced by the module at a global rather than a local datum. The convention in
DELFT-FEWS is that data is stored at a local datum. If set to true data in parameter groups supporting datum conversion will be converted on
export to the global datum by adding the z coordinate of the location. (see definition of parameters and locations in Regional Configuration).

timeZone

The time zone with reference to UTC (equivalent to GMT) for all time dependent data communicated with the module. If not defined, UTC+0
(GMT) will be used.

timeZoneOffset

The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)

timeZoneName

Enumeration of supported time zones. See appendix B for list of supported time zones.

time0Format

The date time format of the %TIME0% variable


yyyy Year
M Month in year
d Day in month
H Hour in day (0-23)
m Minute in hour
s Second in minute
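
For example, to have %TIME0% expand to something like 200901011230, the format could be specified as follows (the chosen pattern is illustrative):

<time0Format>yyyyMMddHHmm</time0Format>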

ensembleMemberCount

Defines if ensembles are read from or written to a number of sub directories.

Burn-In Profile

Burn-in profile for cold state starts. Used to replace first part of a timeseries.

For time series with matching parameter-location ids, the first value is replaced by the initialValue. The length element defines the length of the
time series' beginning that is to be replaced using linear interpolation.

length

Length of time series beginning that is to be replaced.

timeSeries

Initial value (which should match cold state), location and parameter should be specified.

Startup Activities

Figure 68 Elements of the startUpActivities section of the General Adapter configuration.

purgeActivity

Root element of a purge activity used to delete files from previous runs. Multiple purge activities may be defined.

filter

Filter specifying files to be removed. Wildcards may be used.

Deleting a whole directory can be achieved by defining the directory path in the filter without any file filter pattern,
e.g.: %ROOT_DIR%/exportDir/purgeDirectory

A directory can only be removed if it is a sub directory of the General Adapter root directory!

Example (note the use of tags to define the directory names):
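
A minimal sketch, assuming illustrative directory and file names:

<purgeActivity>
	<filter>%ROOT_DIR%/exportDir/*.xml</filter>
</purgeActivity>
<purgeActivity>
	<filter>%ROOT_DIR%/exportDir/purgeDirectory</filter>
</purgeActivity>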

unzipActivity

Root element of an unzip activity used to unpack a zip file and put the contained files in the directory of choice. Multiple unzip activities may be
defined.

Each activity has the following elements:

description - optional description of the activity (for documentation only)


sourceZipFile - the name of the zip file to be unzipped
destinationDir - the name of the directory where the files will be put

zipActivity

Root element of a zip activity used to pack all files and subdirectories of an indicated directory to a zip file for later use/inspection. Multiple zip
activities may be defined.

Each activity has the following elements:

description - optional description of the activity (for documentation only)
sourceDir - the name of the directory containing the files to be zipped
destinationZipFile - the name of the zip file to be created

Example:

<startupActivities>
<unzipActivity>
<sourceZipFile>extra_files.zip</sourceZipFile>
<destinationDir>%ROOT_DIR%/work</destinationDir>
</unzipActivity>
</startupActivities>
...
<shutdownActivities>
<zipActivity>
<sourceDir>%ROOT_DIR%/work</sourceDir>
<destinationZipFile>%ROOT_DIR%/inspection/[Link]</destinationZipFile>
</zipActivity>
</shutdownActivities>

Export Activities

Figure 69 Elements of the ExportActivity section

Export activities are defined to allow exporting various data objects from DELFT-FEWS to the external modules. The list of objects that can be
exported (see figure above) includes;

exportStateActivity to export module states


exportTimeSeriesActivity to export time series for scalar or polygon time series
exportMapStacksActivity to export time series for grid time series
exportProfilesActivity to export time series for longitudinal profile time series
exportDataSetActivity to export module datasets
exportParameterActivity to export module parameters
exportTableActivity to export table (e.g. rating table)
exportNetcdfActivity to export grid time series in Netcdf format
exportRunFileActivity to export a run file (The run file contains general information that is used by the pre and post adapter)

Note that for most types of exportActivity, multiple entries may exist.

exportStateActivity

Figure 70 Elements of the ExportStatesActivity section.

description

Optional description for the export states configuration. Used for reference purposes only.

moduleInstanceId

Id of the moduleInstance that has written the state to be exported. Generally this will be the same as the Id of the current instance of the General
Adapter. This can also be the ID of another instance of the General Adapter. The latter is the case when using a state in a forecast run that has
been written in an historical run.

stateExportDir

Directory to export the states to. This is the export location for the (binary) state files.

stateConfigFile

Name (and location) of the PI-XML file describing the states. If the directory location is not explicitly specified the file will be written in the exportDir
defined in the general section.

stateLocations

Root element for the description of the state. Both a read location and a write location will need to be defined. This allows the name of the
file(s)/directory to be different on read and write. Multiple locations may be defined, but these must all be of the same type.

Attributes type: indication of type of state to be imported. This may either be "directory" or "file". Note that multiple locations are supported
only if type is "file".
stateLocation - Root element of a state location
readLocation - Location where the external module will read the state. This is the location (and name of file/directory) where the General
Adapter writes the state.
writeLocation - Location where the external module is expected to write the state. This is the location (and name of file/directory) where
the General Adapter expects to read the state.

<stateLocations type="file">
<stateLocation>
<readLocation>[Link]</readLocation>
<writeLocation>[Link]</writeLocation>
</stateLocation>
</stateLocations>

stateSelection

Root element to specify how a state to be exported to the external module is to be selected. Two main groups are available, cold states and warm
states. Only one of these types can be specified. Note that if a warm state selection is specified and an appropriate warm state cannot be found, a
cold state will be exported by default.

coldState - Root element for defining the stateSelection method to always export a cold state.
groupId - Id of the group of cold states to be used. This must be a groupId as defined in the ColdModuleInstanceStateGroups
configuration (see Regional Configuration).
coldState:startDate - Definition of the start date of the external module run when using the cold state. This startDate is specified relative
to the start time of the forecast run. A positive startDate means it is before the start time of the forecast run.
warmState - Root element for defining the stateSelection method to search for the most suitable warm state.
stateSearchPeriod - Definition of the search period to be used in selecting a warm state. The database will return the most recent suitable
warm state found within this search period.
coldStateTime - Definition of the start time to use for a cold state if a suitable state is not found within the warm state search period.
insertColdState - When you set insertColdState to true, the defaultColdState is inserted into the WarmStates when no WarmState is
found inside the stateSearchPeriod. By default the cold state is not inserted as warm state

<stateSelection>
<warmState>
<stateSearchPeriod unit="hour" start="-48" end="0"/>
<coldStateTime unit="hour" value="-48"/>
<insertColdState>true</insertColdState>"
</warmState>
</stateSelection>

loopTimeStep

When specified, all activities are run in a loop to ensure that a state is produced on every cardinal time step between the time of the exported
state and T0. This has two advantages:

states are distributed over time equally and frequently. It is possible to start an update run from every point, even halfway through a cold update
run that spans several days.
restriction of memory consumption. You can run an update run over months without running out of RAM.

Do not specify a relative view period for all time series sets in the export activity.

writeIntermediateState

When specified, an extra state is written at the end of the state search period. Note that the run is then split in two. E.g. if the state search period is
-10 to -4 days, then there are two update runs: one from the time where a state was found to -4 and one from -4 to T0. A state is written at the
end of both runs (T0 and T0 - 4 days). You can additionally define a minimum run length. This is necessary for some runs that need a minimum
run length, e.g. for PT updating. The run is then only split in two if both runs can be run over the minimum run length. If not, there is only one run
and the state is written at the end of this run (T0); no intermediate state is written.

see example configuration below and this figure

<exportStateActivity>
<moduleInstanceId>HBV_AareBrugg_Hist</moduleInstanceId>
<stateExportDir>%ROOT_DIR%/FEWS/states</stateExportDir>
<stateConfigFile>%ROOT_DIR%/FEWS/states/[Link]</stateConfigFile>
<stateLocations type="file">
<stateLocation>
<readLocation>HBV_States.zip</readLocation>
<writeLocation>HBV_States.zip</writeLocation>
</stateLocation>
</stateLocations>
<stateSelection>
<warmState>
<stateSearchPeriod unit="hour" start="-240" end="-96"/>
</warmState>

</stateSelection>
<writeIntermediateState>true</writeIntermediateState>
<minimumRunLength unit="day" multiplier="4"/>
</exportStateActivity>

ExportTimeSeriesActivity

Figure 71 Elements of the exportTimeSeries section

description

Optional description of the timeSeries export configuration.

exportFile

Name (and location) of the PI-XML file with exported time series. If the directory location is not explicitly specified the file will be written in the
exportDir defined in the general section.

exportBinFile

When true the events in the PI time series file are written to a binary file instead of the xml file. The written xml file will only contain the time series
headers and optionally a time zone. The binary file has the same name as the xml file, only the extension is "bin" instead of "xml". During PI time
series import the bin file is automatically read when available. The byte order in the bin file is always Intel x86.

ignoreRunPeriod

When true the run period, written in the pi run file, will not be extended.

includeThresholds

When true, any thresholds for the exported time series will be written in the time series headers.

timeSerieSets

Root element for defining timeSerieSets to be exported.

timeSerieSets: timeSerieSet

TimeSeriesSets to be exported. These may contain either a (list of) locations or a locationSet. Multiple entries may be defined.

omitMissingValues

Are missing values to be written to the export file or should they be left out.

omitEmptyTimeSeries

When true, a series is not exported when the time series is empty (or when omitMissingValues = true, when the time series is empty after
removing the missing values.)

forecastSelectionPeriod

Can be used to select all approved forecasts with a forecast start time lying within this period
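
A sketch of an export time series activity assembled from the elements above, using the timeSerieSets element as named in this section; the file name, module instance, parameter and location ids and the time series set contents are illustrative and should be adapted to the actual configuration:

<exportTimeSeriesActivity>
	<exportFile>input_timeseries.xml</exportFile>
	<timeSerieSets>
		<timeSerieSet>
			<moduleInstanceId>HBV_Maas_Forecast</moduleInstanceId>
			<valueType>scalar</valueType>
			<parameterId>P.obs</parameterId>
			<locationSetId>Maas_Catchments</locationSetId>
			<timeSeriesType>external historical</timeSeriesType>
			<timeStep unit="hour" multiplier="1"/>
			<relativeViewPeriod unit="day" start="-5" end="0"/>
			<readWriteMode>read only</readWriteMode>
		</timeSerieSet>
	</timeSerieSets>
	<omitMissingValues>true</omitMissingValues>
</exportTimeSeriesActivity>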

ExportMapStacksActivity

Figure 72 Elements of the ExportMapStacksActivity.

description

Optional description of the timeSeries export configuration.

exportFile

Name (and location) of the PI-XML file describing the map stack of the exported grid time series. If the directory location is not explicitly specified
the file will be written in the exportDir defined in the general section.

gridFile

Root element for defining file name of grid to be exported

locationId

LocationId for grid to be exported.

gridName

Name of the files for the grid to be exported. For grid files where each time slice is stored in a different file, this name is the prefix for the full file
name. The final file name is created using an index of files exported (e.g. the file name for the 4th time step is grid00000.004).

gridFormat

Format of the exported grid. Enumeration of options include;

asc : for exporting to ARC-INFO ASCII grid format


pcrgrid : for exporting to PCRaster native grid file format
usgsdem : for exporting to USGS DEM format (BIL)

timeSerieSet

TimeSeriesSets to be exported. These should contain only one locationId. For exporting multiple grids, multiple exportMapStack activities should
be defined.
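
A sketch of a map stacks export activity exporting one grid to ARC-INFO ASCII format. The location id, grid name and time series set are illustrative, and the nesting of locationId, gridName and gridFormat under gridFile follows the element descriptions above; verify the exact structure against the schema:

<exportMapStacksActivity>
	<exportFile>mapstacks.xml</exportFile>
	<gridFile>
		<locationId>Maas_Grid</locationId>
		<gridName>precip</gridName>
		<gridFormat>asc</gridFormat>
	</gridFile>
	<timeSerieSet>
		<moduleInstanceId>HBV_Maas_Forecast</moduleInstanceId>
		<valueType>grid</valueType>
		<parameterId>P.obs</parameterId>
		<locationId>Maas_Grid</locationId>
		<timeSeriesType>external historical</timeSeriesType>
		<timeStep unit="hour" multiplier="1"/>
		<relativeViewPeriod unit="day" start="-5" end="0"/>
		<readWriteMode>read only</readWriteMode>
	</timeSerieSet>
</exportMapStacksActivity>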

exportProfilesActivity

Configuration of the exportProfiles activity is identical to the exportTimeSeries Activity.

ExportDataSetActvity

Figure 73 Elements of the exportDataSets section

description

Optional description of the module dataset export configuration.

moduleInstanceId

Optional reference to the moduleInstanceId of the moduleDataSet to be exported. If not defined the moduleInstanceId of the current module
instance is taken as a default (see section on Module Datasets and Parameters).

ExportParameterActivity

Figure 74 Elements of the exportParameter section

description

Optional description of the module parameter configuration.

moduleInstanceId

Optional reference to the moduleInstanceId of the moduleParameter to be exported. If not defined the moduleInstanceId of the current module
instance is taken as a default (see section on Module Datasets and Parameters)

fileName

Name (and location) of the PI-XML file with exported parameters. If the directory location is not explicitly specified the file will be written in the
exportDir defined in the general section.

ExportTableActivity

Figure 1 Elements of the ExportTableActivity configuration

description

Optional description of the ExportTableActivity configuration.

exportFile

File to which the table will be exported. This file is always placed in exportDir.

tableType

Type of table to be exported. Currently enumeration must be "ratingCurve"

operation

ID of the table to be exported.

parameters

Parameters for the convertEquation operation. Must include minimumLevel, maximumLevel and stepSize

locationId/locationSetId

location id to select the rating curve

exportNetcdfActivity

Figure 2 Elements of the ExportNetcdfActivity configuration

description

Optional description of the ExportNetcdfActivity configuration.

exportFile

File to which the data will be exported. This file is always placed in exportDir.

timeSeriesSets

TimeSeriesSet that defines what data is to be exported.

omitMissingValues

Are missing values to be written to the export file or should they be left out.

omitEmptyTimeSeries

The time series is not exported when the time series is empty, or when omitMissingValues = true and the time series is empty after removing the
missing values.
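
A sketch of a NetCDF export activity; the file name and the contents of the time series set are illustrative only:

<exportNetcdfActivity>
	<exportFile>input_grid.nc</exportFile>
	<timeSeriesSets>
		<timeSeriesSet>
			<moduleInstanceId>HBV_Maas_Forecast</moduleInstanceId>
			<valueType>grid</valueType>
			<parameterId>P.obs</parameterId>
			<locationId>Maas_Grid</locationId>
			<timeSeriesType>external historical</timeSeriesType>
			<timeStep unit="hour" multiplier="1"/>
			<relativeViewPeriod unit="day" start="-5" end="0"/>
			<readWriteMode>read only</readWriteMode>
		</timeSeriesSet>
	</timeSeriesSets>
	<omitMissingValues>true</omitMissingValues>
</exportNetcdfActivity>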

ExportRunFileActivity

Figure 3 Elements of the ExportRunFileActivity configuration

description

Optional description of the ExportRunFileActivity configuration.

exportFile

File to which the data will be exported. This file is always placed in exportDir.

properties

A kind of environment variables for the pre and post adapters. These properties are copied to the run file. This is also a convenient way to pass
global properties to a pre or post adapter. An adapter is not allowed to access the FEWS [Link] directly. Global properties (between $)
are replaced by their literal values before being copied to the run file. These extra options make an additional pre or post adapter configuration file
unnecessary.

Options:

string
int
float
bool
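
A sketch of a run file export activity that passes two properties to the adapter; the property names and values are illustrative, and the key/value attribute form shown is an assumption to be checked against the schema:

<exportRunFileActivity>
	<exportFile>run_info.xml</exportFile>
	<properties>
		<string key="modelVersion" value="1.2"/>
		<int key="warmUpDays" value="3"/>
	</properties>
</exportRunFileActivity>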

Execute Activities

Figure 75 Elements of the ExecuteActivity configuration

executeActivity

Root element for the definition of an execute activity. For each external executable or Java class to run, an executeActivity must be defined.
Multiple entries may exist.

description

Optional description for the activity. Used for reference purposes only.

command

Root element to define command to execute.

executable - File name and location of the executable to run if the command is an executable. The file name may include environment
variables, as well as tags defined in the general adapter or on the [Link].
className - Name of the Java class to run if the command is defined as a Java class. This class may be made available to DELFT-FEWS in a
separate JAR file in the \Bin directory.
binDir - Directory with jar files and optionally native dlls. When not specified the bin dir and classloader of FEWS is used. When specified,
the Java class is executed in a private class loader and will not use any jar in the FEWS bin dir. Only one class loader is created per binDir;
adapters should still not use static variables. All dependencies should also be in this configured bin dir.

arguments

Root element for defining arguments to be passed to the executable/Java class

argument - Definition of an argument to be passed to the executable/Java Class

environmentVariables

Root element for defining environment variable prior to running the executable/Java class

environmentVariable - Definition of an environment variable prior to running the executable/Java class


[Link] - Name of environment variable
[Link] - Value of environment variable

timeOut

Optional timeout to be used when running module (in milliseconds). If run time exceeds timeout it will be terminated and the run considered as
having failed.

overrulingDiagnosticFile

File containing diagnostic information about the activity. This file is always located in the importDir and overrules the global diagnostic file.

ignoreDiagnostics

For this activity no check should be done whether the diagnostics file is present or not.
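
A sketch of an execute activity that runs an external executable with one argument and one environment variable; the paths, the variable name and the timeout value are illustrative:

<executeActivity>
	<command>
		<executable>%ROOT_DIR%/bin/model.exe</executable>
	</command>
	<arguments>
		<argument>%TIME0%</argument>
	</arguments>
	<environmentVariables>
		<environmentVariable>
			<name>MODEL_HOME</name>
			<value>%ROOT_DIR%</value>
		</environmentVariable>
	</environmentVariables>
	<timeOut>600000</timeOut>
</executeActivity>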

GA Variables
Several variables are available to be used as arguments to an external program. These are:

variable description

#time_0# DEPRECATED, use %TIME0% instead

%TIME0% Time zero or forecast time.

%TASK_ID% current task id

%TIME_ZONE_OFFSET_SECONDS% the offset with GMT in seconds

%WORK_DIR% work directory of general adapter

%ENSEMBLE_MEMBER_INDEX% ensemble member index

Import Activities

Figure 76 Elements of the ImportActivities configuration

description

Optional description of import activity. Used for reference purposes only

importStateActivity

Root element for importing module states resulting from the run of the external modules. Multiple elements may be defined. If no state is to be
imported (for example in a forecast run as opposed to a state run), then the element should not be defined.

stateConfigFile - Fully qualifying name of the XML file containing the state import configuration
expiryTime - When the state is an intermediate result in a forecast run you can let the state expire. By default the expiry time is the same
as the module instance run.
synchLevel - Optional synch level for the state. Defaults to 0 if not specified (i.e. the same as data generated by the forecast run).

stateConfigFile

Name (and location) of the PI-XML file describing the states to be imported. If the directory location is not explicitly specified the file will be
expected to be read from the importDir defined in the general section. This file contains all necessary information to define state type and location.
The moduleInstanceId of the state imported is per definition the current module instance.

importTimeSeriesActivity

Root element for importing scalar and polygon time series resulting from the run of the external modules. Multiple elements may be defined.
importFile and timeSeriesSet should be defined.

importFile - PI-XML file describing the time series to be imported. The file contains all information on the type of data to be imported (scalar,
longitudinal, grid, polygon). For all data types except the grid the file also contains the time series data. If the directory location is not
explicitly specified the file will be expected to be read from the importDir defined in the general section.
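
A sketch of an import time series activity, following the importFile and timeSeriesSet elements named above; the file name and time series set contents are illustrative:

<importTimeSeriesActivity>
	<importFile>output_timeseries.xml</importFile>
	<timeSeriesSet>
		<moduleInstanceId>HBV_Maas_Forecast</moduleInstanceId>
		<valueType>scalar</valueType>
		<parameterId>Q.sim</parameterId>
		<locationSetId>Maas_Catchments</locationSetId>
		<timeSeriesType>simulated historical</timeSeriesType>
		<timeStep unit="hour" multiplier="1"/>
		<relativeViewPeriod unit="day" start="-5" end="2"/>
		<readWriteMode>add originals</readWriteMode>
	</timeSeriesSet>
</importTimeSeriesActivity>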

importMapStacksActivity

Root element for importing grid time series resulting from the run of the external modules. Multiple elements may be defined. importFile and
timeSeriesSet should be defined.

importPiNetcdfActivity

Root element for importing grid time series in Pi-Netcdf format. importFile and timeSeriesSet should be defined.

importProfilesActivity

Root element for importing longitudinal profile time series resulting from the run of the external modules. Multiple elements may be defined.
importFile and timeSeriesSet should be defined.

Shutdown Activities

Figure 77 Elements of the Shutdown Activities configuration


This activity is identical to the startUpActivities. The only difference is that these are carried out after the module run and the import of data. See
the definition of the startup activities for configuration.

06 Lookup Table Module


Lookup Table module configuration
The Lookup table module is used to derive a simple value based on combining input values of different time series in the forecast database.
These are then used to search in a multi-dimensional lookup table to derive the requested output. The module may also be employed to derive a
decision based on a hierarchic set of rules (critical conditions table).

The lookup table utility is predominantly applied as the forecasting tool for coastal forecasting. Typically values such as predicted surge, wind
force and direction, wave height, fluvial flow in an estuary are used to predict values at a number of points on the coast or in an estuary. These
values are generally defined as a Lookup Index. This can then be resolved to a text string such as "Flood Warning" or "Severe Flood Warning" for
use in for example reports using the ValueAttributeMaps (see Regional Configuration).

Three main types of lookup table may be defined;

simple table lookup. This is a two column (or row) table where the value at each time step in the input series is used to identify a relative
position in the first column (or row). The result value is found in the second column (or row) at the same relative position.
Multi-dimensional lookup. This is a lookup in a matrix. Two input series are required. One is used to find the relative row position in the
matrix at each time step, while the other is used to find the relative column position in the matrix. The output value is found through
resolving these relative positions in the matrix values using bi-linear interpolation.
Critical condition tables. These define a set of heuristic rules. Multiple inputs can be combined and an output is found through evaluating
the heuristic rules. A default output (also defined using rules) can be defined.

When available as configuration on the file system, the name of the XML file for configuring an instance of the lookup table module called, for
example, Coastal_Lookup_Forecast may be:

Coastal_Lookup_Forecast 1.00 [Link]

Coastal_Lookup_Forecast File name for the Coastal_Lookup_Forecast configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 78 Elements of the lookup table configuration.

LookupSet

Root element of the definition of a lookup table. Multiple entries may exist.

Attribute;

lookupSetId : Id of the lookup table. Used for reference purposes only (e.g. in log messages).

inputVariable

Definition of input variable to be used in the lookup table. For each entry in the lookup table an input variable will need to be identified.
The variableId is used to refer to the time series. See Transformation Module for definition of inputVariable configuration.

outputVariable

Definition of output variable as a result of the lookup table. A single timeSeriesSet for one location is defined as the output variable per lookup
table.

comment

Optional comment on lookup display configuration. Used for reference purposes only.

criticalConditionLookup

Root element for definition of a critical condition table. If no results can possibly be returned by any of the conditions specified, a
defaultValue should be defined as a set of rules.

Attributes;

Id Id of the criticalConditionTable. Used for reference purposes only

simpleTableLookup

Root element for definition of a simple table lookup. Multiple entries may exist.

Attributes;

lookUpVariableId Id of the input variable to be used in the table.


outputVariableId Id of the output variable.
rows number of rows in lookup table (if the lookup is defined per row then this is equal to 1).
cols number of columns in lookup table (if the lookup table is defined per column then this is equal to 1).
type Optional indication of type of value in lookup table (the same type will be returned). Enumeration of "float" or "int".

multiDimensionalLookup

Root element for definition of a multidimensional lookup table. Multiple entries may exist.

Attributes;

lookUpRowVariableId Id of the input variable to be used in the table for finding the relative row position.
lookUpColVariableId Id of the input variable to be used in the table for finding the relative column position.
outputVariableId Id of the output variable.
rows number of rows in lookup table (matrix).
cols number of columns in lookup table (matrix).
type Optional indication of type of value in lookup table. Enumeration of "float" or "int".

criticalConditionLookup

Figure 79 Elements of the criticalConditionLookup configuration

criticalCondition

Definition of a critical condition as a set of rules. Multiple entries may exist. When multiple entries do exist, these will be resolved
sequentially until a condition defined is met. The result is then written to the output time series. Each condition holds a set of rules. Each
rule is resolved to a Boolean true or false. Rules can be combined in ruleGroups using Boolean operators. If a "true" value is returned
through the combination of all rules and ruleGroups specified, then the conditions specified are met.

Attributes (only required attributes defined);

rule: string value for the result to be returned if the conditions specified are met (for reference purposes only).
ruleIndex: index value returned if the conditions specified are met. This is the value returned in the output time series. The value given is either a
numerical value enclosed in quotes (e.g. "4") or "Missing" to indicate a missing value should be returned.

ruleCriteria

Root element for definition of set of rules and ruleGroups. Multiple ruleCriteria can be defined. These are combined using the logical
operator defined.

ruleCriteriaLogicalOperator

Operator for combining ruleCriteria to a single Boolean value. Enumeration of "and" and "or".

rule

Definition of a rule to resolve to a Boolean value.

Attributes;

variable: id of Input variable to use in evaluating rule


operator: Definition of operator to be used in comparison. Enumeration of options include;
lt : less than
le : less than or equal to
eq : equal to
ge : greater than or equal to
gt : greater than
ne : not equal to

value: Value to compare input variable to using operator defined.
logical: optional definition of logical operator to combine a sequence of rules (for rules defined in a rule group only). Enumeration of "and"
and "or".

ruleGroup

Root element for defining a rule group. A rule group is a sequence of rules. Each rule is configured as defined above, and combined
using the logical operator given in the rule. The logical operator need not be included in the last rule defined.

Example:
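
A minimal sketch of a criticalCondition combining a single rule and a ruleGroup, assuming the element and attribute names described above; the variable ids, values and exact nesting are illustrative only:

<criticalCondition rule="Flood Warning" ruleIndex="2">
    <ruleCriteriaLogicalOperator>and</ruleCriteriaLogicalOperator>
    <ruleCriteria>
        <rule variable="SurgeForecast" operator="gt" value="1.5"/>
    </ruleCriteria>
    <ruleCriteria>
        <ruleGroup>
            <!-- rules within a group are combined using the logical attribute of each rule -->
            <rule variable="WindSpeed" operator="ge" value="15" logical="or"/>
            <rule variable="WaveHeight" operator="ge" value="2.0"/>
        </ruleGroup>
    </ruleCriteria>
</criticalCondition>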

defaultValue

The default value element is identical to the specification of a criticalConditon as described above.

SimpleTableLookup

Figure 80 Elements of the simpleTableLookup configuration

LookUpData

Row vector of data used to find relative position of input variable.

Attributes;

number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
space

data

Element containing data vector separated by separator character defined.

rowwise

Element to define the vector of data in which to look up values. The data value at the relative position is returned. Attributes are the same as for the LookUpData
element. Use this element if data is provided as a row.

columnwise

Element to define the vector of data in which to look up values. The data value at the relative position is returned. Use this element if data is provided as one
value per row (as a column).

Attributes;

number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
lineseparator

info

Element containing information on how values are determined in lookup vector using the relative position determined.

info:extrapolation

Definition of how to extrapolate when relative position is above last or below first value in vector. Enumeration includes;

none : no extrapolation, missing value is returned
minmax : limit values returned to minimum/maximum of vector
linear : linear extrapolation using last or first two values in vector

info:interpolation

Definition of how to interpolate between values in vector. Enumeration includes;

class : returns closest value in vector.


linear : linear interpolation

Example:
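
A minimal sketch of a simpleTableLookup, assuming the elements described above; the variable ids, numbers and exact element casing are illustrative only:

<simpleTableLookup lookUpVariableId="SurgeForecast" outputVariableId="LookupIndex" rows="1" cols="5" type="int">
    <lookUpData number="5" type="float" separator="space">
        <data>0.0 0.5 1.0 1.5 2.0</data>
    </lookUpData>
    <rowwise number="5" type="int" separator="space">
        <data>0 1 2 3 4</data>
    </rowwise>
    <info>
        <extrapolation>minmax</extrapolation>
        <interpolation>class</interpolation>
    </info>
</simpleTableLookup>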

MultiDimensionalLookup

Figure 81 Elements of the multiDimensionalLookup configuration

lookupColData

Row vector of data used to find relative position in matrix columns of input variable defined as lookUpColVariableId.

Attributes;

number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".

separator : optional indication of separator string used between values. Default is space. Enumeration of;
space

lookupRowData

Row vector of data used to find relative position in matrix rows of input variable defined as lookUpRowVariableId.

Attributes;

number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
space

rowwise

Element for defining the rows of the matrix as a vector of data on one line. For the definition see simpleTableLookup. The number of rowwise
elements provided must be equal to the number of rows defined in the multiDimensionalLookup element. Each rowwise vector must
contain as many values as defined in cols in the multiDimensionalLookup element.

colwise

Element for defining the columns of the matrix as vectors of data over multiple lines. For the definition see the simpleTableLookup element. The number
of colwise elements provided must be equal to the number of columns defined in the multiDimensionalLookup element. Each colwise
vector must contain as many values as defined in rows in the multiDimensionalLookup element.

Info

See definition in simpleTableLookup element

Example:
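
A minimal sketch of a multiDimensionalLookup with a 3x4 matrix, assuming the elements described above; the variable ids and numbers are illustrative only:

<multiDimensionalLookup lookUpRowVariableId="SurgeForecast" lookUpColVariableId="FluvialFlow"
        outputVariableId="LookupIndex" rows="3" cols="4" type="float">
    <lookupRowData number="3" type="float" separator="space">
        <data>0.0 1.0 2.0</data>
    </lookupRowData>
    <lookupColData number="4" type="float" separator="space">
        <data>0 100 200 300</data>
    </lookupColData>
    <!-- one rowwise element per row, each containing cols values -->
    <rowwise><data>0 1 2 3</data></rowwise>
    <rowwise><data>1 2 3 4</data></rowwise>
    <rowwise><data>2 3 4 5</data></rowwise>
    <info>
        <extrapolation>minmax</extrapolation>
        <interpolation>linear</interpolation>
    </info>
</multiDimensionalLookup>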

07 Correlation Module
Correlation Module Configuration
correlationSet
inputTimeSeriesSet
outputTimeSerieSet
correlation
forecastLocationId
equationType

eventSetsDescriptorId
travelTimesDescriptorId
eventSelectionType
Comment
selectionCriteria
period
startDate
endDate
startTime
endTime
selectInsidePeriod
thresholds
thresholdLimit
selectAboveLimit
tags
tag
include
CorrelationEventSets
comment
correlationEventSet
locationId
parameterId
event
TravelTimesSets
travelTime
downstreamLocation
upstreamLocation
travelTime
validPeriod

Correlation Module Configuration


The Correlation Module is used to estimate a level at a downstream location through correlation of historical levels and discharges at specific
upstream locations. This utility is provided through an interactive interface available on the operator client as well as for running automatically
using a fixed correlation in the Forecasting Shell Server. The module may employ data from historical events at different locations as available in
the central database.

The module can be used in two ways:

As an automatic forecasting module. The module is then run through a preconfigured workflow with all required inputs being retrieved
from those available in the database. Results are returned to the database. These results are then available for later viewing through for
example a suitably configured report. In this mode a module Instance of the correlation module is defined as described below.
In interactive mode. In this mode the module is used through the correlation display available on the operator client. In this mode no
results are returned to the database. A module instance does not need to be created in this mode, all required configuration settings are
selected through appropriate options in the dialogue.

The correlation module uses two associated configuration items to establish correlations: the CorrelationEventSets and the
TravelTimesSets.

When available as configuration on the file system, the name of the XML file for configuring an instance of the correlation module called for
example Correlation_Severn_Forecast may be:

Correlation_Severn_Forecast 1.00 [Link]

Correlation_Severn_Forecast File name for the Correlation_Severn_Forecast configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 82 Elements of the correlationSets configuration

correlationSet

Root element for defining a correlation to be applied.

inputTimeSeriesSet

Time series set to be used as input for the correlation. This time series set can be either a complete hydrograph (e.g. an equidistant time series) or
a non-equidistant series of peaks sampled using a transformation module defined previously in the workflow.

outputTimeSerieSet

Time series set to be used as output of the correlation. Values are returned for the same time steps as the input series.

correlation

Root element for the definition of a correlation

forecastLocationId

LocationId for the forecast location. This is the location to be defined in the output time series set.

equationType

Definition of the equation type to be applied in determining the correlation.

Attributes;

equationType . Selection of equation type. Enumeration of options include


polynomial

simple_linear

exponential_divide

exponential_multiply

power

logarithmic

hyperbolic

polynomialOrder : Integer value for value of polynomial order. Applies only if polynomial equation is selected.

eventSetsDescriptorId

Id of the event sets to be used. This id is defined in the CorrelationEventSetsDescriptors (see Regional Configuration). A suitable
CorrelationEventSets configuration must be available (see below).

travelTimesDescriptorId

Id of the event sets to be used. This id is defined in the CorrelationEventSetsDescriptors (see Regional Configuration). A suitable
CorrelationEventSets configuration must be available (see below).

eventSelectionType

Method to be used in matching events. Events at the support and forecast location can be paired either on the basis of common
EventId's, or on the basis of a selection on travel time, where events at the upstream and downstream location are paired if these are
found to belong to the same hydrological event as defined using the travel time criteria in the TravelTimesSets configuration. Enumeration
of options includes;

eventid
traveltime

Comment

Optional comment for correlation configuration. Used for reference purposes only.

selectionCriteria

Selection criteria used in defining events to be used in establishing correlations.

Figure 83 Elements of the selectionCriteria configuration

period

Root element for defining selection of events based on dates.

startDate

Start date of time span to be used in selection (yyyy-mm-dd).

endDate

End date of time span to be used in selection (yyyy-mm-dd).

startTime

Optional start time of time span to be used in selection (hh:mm:ss)

endTime

Optional end time of time span to be used in selection (hh:mm:ss)

selectInsidePeriod

Boolean to indicate if events are to be selected that fall in the time span defined, or that fall outside the time span defined.

thresholds

Root element for defining selection of events that fall above or below a threshold. The threshold is applied to events selected at the
forecast location.

thresholdLimit

Value of the threshold to use in selecting events.

selectAboveLimit

Boolean to indicate if events are to be selected that fall above the threshold if true. Events are selected below the threshold if false.

tags

Root element for defining selection of events on tags defined in the eventSets

tag

Definition of tag to use in event selection. Multiple tags may be defined.

include

Boolean to define if events with given tag are to be included or excluded in selection.
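
Putting the elements above together, a minimal sketch of a correlationSet; the timeSeriesSet contents are abbreviated and all ids, locations and values are illustrative only:

<correlationSet>
    <inputTimeSeriesSet>
        <!-- time series set of the upstream (support) location; contents abbreviated -->
    </inputTimeSeriesSet>
    <outputTimeSerieSet>
        <!-- time series set returned for the forecast location; contents abbreviated -->
    </outputTimeSerieSet>
    <correlation>
        <forecastLocationId>DownstreamGauge</forecastLocationId>
        <equationType equationType="simple_linear"/>
        <eventSetsDescriptorId>Severn_EventSets</eventSetsDescriptorId>
        <travelTimesDescriptorId>Severn_TravelTimes</travelTimesDescriptorId>
        <eventSelectionType>traveltime</eventSelectionType>
        <selectionCriteria>
            <thresholds>
                <thresholdLimit>2.5</thresholdLimit>
                <selectAboveLimit>true</selectAboveLimit>
            </thresholds>
        </selectionCriteria>
    </correlation>
</correlationSet>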

CorrelationEventSets

This configuration file is related to the Correlation module, and is used to define the events used in establishing a correlation. The
configuration file is in the CorrelationEventSets (table or directory). Each configuration defined is referenced using a
CorrelationEventSetsId as defined in the Regional Configuration.

Figure 84 Elements for defining correlationEventSets

comment

Optional comment for correlation event sets configuration. Used for reference purposes only.

correlationEventSet

Root element for defining set of events at a location to be used in establishing correlations.

locationId

LocationId for which events are defined

parameterId

ParameterId for which events are defined.

event

Definition of events at the specified location for the specified parameter.

Attributes;

eventId : String ID for the event (may be used in matching events)


value : value of the event
date : event date (yyyy-MM-dd).
time : event time (hh:mm:ss).
tag : optional tag for event (may be used in event selection)
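
A minimal sketch of a correlationEventSet; the location, parameter and event values are illustrative only:

<correlationEventSet>
    <locationId>DownstreamGauge</locationId>
    <parameterId>H.obs</parameterId>
    <event eventId="event_1998_10" value="5.16" date="1998-10-28" time="06:00:00" tag="autumn"/>
    <event eventId="event_2000_11" value="5.40" date="2000-11-01" time="18:00:00" tag="autumn"/>
</correlationEventSet>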

TravelTimesSets

This configuration file is related to the Correlation module, and is used to define the travel time between locations. These travel times may be
used in matching events. The configuration file is in the TravelTimesSets (table or directory). Each configuration defined is referenced using a
TravelTimesSetsId as defined in the Regional Configuration.

Figure 85 Elements of the TravelTimesSets configuration

travelTime

Root element for defining a set of travel times. Multiple entries may exist.

downstreamLocation

LocationId of the downstream (forecast) location

Attributes;

id : Id of the location
name : name of the location (for reference purposes only)

upstreamLocation

LocationId of the upstream (support) location. Multiple entries may exist.

Attributes;

id : Id of the location
name : name of the location (for reference purposes only)

travelTime

Definition of the travel time between the locations (average)

Attributes;

unit unit of time (enumeration of: second, minute, hour, day, week)
multiplier defines the number of units given above in a time step.
divider same function as the multiplier, but defines the fraction of units in a time step.

validPeriod

Window around travel time for determining validity of event to be matched.

unit unit of time (enumeration of: second, minute, hour, day, week)
start : start of validity period
end : end of validity period
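
A minimal sketch of a single travelTime entry; the ids, names and values are illustrative only, and the exact use of attributes versus child elements may differ in the actual schema:

<travelTime>
    <downstreamLocation id="DownstreamGauge" name="Downstream gauge"/>
    <upstreamLocation id="UpstreamGauge" name="Upstream gauge"/>
    <travelTime unit="hour" multiplier="12"/>
    <validPeriod unit="hour" start="-6" end="6"/>
</travelTime>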

08 Error Correction Module (ARMA)


What [Link]

Description Configuration of the ARMA module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="ErrorModel">


<description>Applies error correction module</description>
<className>[Link]</className>
</moduleDescriptor>

AR Module Background information

Error correction module configuration


errorModelSet
inputVariable
autoOrderMethod
orderSelection
order_ar
order_ma
parameters
subtractMean
boxcoxTransformation
lambda
ObservedTimeSeriesId
SimulatedTimeSeriesId
OutputTimeSeriesId
fixedOrderMethod
correctionModel
order_ar
order_ma
ObservedTimeSeriesId
SimulatedTimeSeriesId
OutputTimeSeriesId
interpolationOptions
interpolationType
gapLength
defaultValue
maxObserved
minObserved
maxResult
minResult
ignoreDoubtful
outputVariable

Error correction module configuration
The error modelling module is a generic forecasting module. The module is used to improve the reliability of forecasts by attempting to identify the
structure of the error a forecasting module makes during the modelling phase, where both the simulated and observed values are available, and
then applying this structure to the forecast values. This is under the assumption that the structure of the error remains unchanged. A description of
the background of this module can be found at AR Module Background information. In defining the error model three time series will need to be
defined;

Merged input time series of simulated model output for the historical period and of forecasted model output for the forecast period. The
time series in the historical period will be used for establishing error model through comparison with the observed time series. The error
forecast will be applied to the time series in the forecast period.
Input time series for the observed data.
Output time series for the updated simulated data for the historical period and the updated forecast data for the forecast period.

Two methods of establishing an error model are available. The first uses an AR (Auto Regressive) model only, but allows the order of the model
to be determined automatically. The second method uses an ARMA model, but the order of both the AR and the MA (Moving Average) model
must be defined. In both cases various transformations may be applied to normalise the residuals prior to establishing the error model.

When available as configuration on the file system, the name of the XML file for configuring an instance of the error module called for example
GreatCorby_ErrorModel_Forecast may be:

GreatCorby_ErrorModel_Forecast 1.00 [Link]

GreatCorby_ErrorModel_Forecast File name for the GreatCorby_ErrorModel_Forecast configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 86 Elements of the error module configuration.

errorModelSet

Root element for definition of an error model set.

inputVariable

Definition of input variable to be used in the error correction model. At least two entries are required in the error model, one for the observed time
series and one for the simulated time series. For each entry an input variable will need to be identified. The variableId is used to refer to the time
series. See Transformation Module for the definition of the inputVariable configuration.

autoOrderMethod

Root element for defining an error model using the AR structure.

Figure 87 Elements of the autoOrderMethod configuration.

orderSelection

Boolean to indicate if order of AR components should be established automatically or if the given order should be used.

order_ar

Order of the AR model. If orderSelection is true, then this value is the maximum order (may not exceed 50). In the literature a value of the
AR order up to 3 is mostly chosen; higher values are possible, but will have a smaller contribution to the overall result of the error correction.

order_ma

Not used in this method.

parameters

This optional setting can be used to exactly specify the values for all the parameters (multipliers, powers, dividers, etc.) used in the error correction
model. An example is shown below. Please note that you will need to establish these parameters first. One way to do this is to run a long historical
run with auto-parameters on. The log file will show the parameters determined by the model. These parameters can then be used to fix the parameters
for the forecast.

subtractMean

Boolean to indicate if mean of residuals should be subtracted prior to establishing error model.

boxcoxTransformation

Boolean to indicate if the residuals should be transformed using Box-Cox transformation prior to establishing error model.

lambda

Lambda parameter to use in Box-Cox transformation (note: value of 0 means the transformation is a natural logarithm). Values ranging from 0 to
0.5 are often used.

ObservedTimeSeriesId

Input time series set to be defined as the observed data to compare simulated model output to.

SimulatedTimeSeriesId

Input time series set to be defined as the simulated model output for both the historic and the forecast period. Multiple series will be combined into
a single series. Series with a higher index will be overlaid by series with a lower index.

OutputTimeSeriesId

Updated time series data generated by the error model. This series can contain data for the historic and the forecast period.
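
A minimal sketch of an errorModelSet using the autoOrderMethod; the variable ids are illustrative only and the timeSeriesSet contents are abbreviated:

<errorModelSet>
    <inputVariable>
        <variableId>observed</variableId>
        <!-- timeSeriesSet of the observed data; contents abbreviated -->
    </inputVariable>
    <inputVariable>
        <variableId>simulated</variableId>
        <!-- merged timeSeriesSet of simulated (historic) and forecast model output; contents abbreviated -->
    </inputVariable>
    <autoOrderMethod>
        <orderSelection>true</orderSelection>
        <order_ar>3</order_ar>
        <subtractMean>true</subtractMean>
        <boxcoxTransformation>false</boxcoxTransformation>
        <ObservedTimeSeriesId>observed</ObservedTimeSeriesId>
        <SimulatedTimeSeriesId>simulated</SimulatedTimeSeriesId>
        <OutputTimeSeriesId>updated</OutputTimeSeriesId>
    </autoOrderMethod>
    <outputVariable>
        <variableId>updated</variableId>
        <!-- timeSeriesSet of the updated series; contents abbreviated -->
    </outputVariable>
</errorModelSet>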

fixedOrderMethod

Root element for defining an error model using the ARMA structure.

Figure 88 Elements of the fixedOrderMethod configuration.

correctionModel

Structure of the error model to be used. The model selection includes the selection of initial transformations. Enumeration of options includes;

none
ARMA+ systematic
systematic
ARMA
ARMA+ log transformation
ARMA+ systematic+ log transformation

order_ar

Order of the AR part of the model. In the literature a value of the AR order up to 3 is mostly chosen; higher values are possible, but will have a
smaller contribution to the overall result of the error correction.

order_ma

Order of the MA part of the model. The order you specify determines the length of the period affected by the moving average function. The higher
the order, the longer the affected period. The moving average model is not operational yet.

ObservedTimeSeriesId

Input time series set to be defined as the observed data to compare simulated model output to.

SimulatedTimeSeriesId

Input time series set to be defined as the simulated model output for both the historic and the forecast period. Multiple series will be combined into
a single series. Series with a higher index will be overlaid by series with a lower index.

OutputTimeSeriesId

Updated time series data generated by the error model. This series can contain data for the historic and the forecast period.

interpolationOptions

Interpolation options for filling the missing values of the observed time series. This parameter is optional.

interpolationType

You can make a selection of the type of interpolation. The enumeration of available options is;

linear ; for linear interpolation between available values

block ; for block interpolation (note: the last available value is then used until a new value is available).
default ; for replacing unreliable values with a default value.

gapLength

Maximum allowed gap size that can be filled using interpolation.

defaultValue

Default value required for 'defaultvalue' interpolation option.

maxObserved

Maximum value to be used by the error module. Higher values will be converted to NaN and not used as input for error correction. This parameter
is optional.

minObserved

Minimum value to be used by the error module. Lower values will be converted to NaN and not used as input for error correction. This parameter
is optional.

maxResult

Maximum value to be generated by the error module. This setting can be used to specify an upper limit of the generated output timeseries. This
parameter is optional.

minResult

Minimum value to be generated by the error module. This setting can be used to specify a lower limit of the generated output timeseries. This
parameter is optional.

ignoreDoubtful

Boolean to indicate whether the error module should ignore doubtful input values. This parameter is optional.

outputVariable

Definition of output variable as a result of the error model. A single timeSeriesSet for one location is defined as the output variable of the error model.

AR Module Background information

Introduction
The quality of the flood forecasts will, in general, depend on the quality of the simulation model, the accuracy of the precipitation and boundary
forecasts, and the efficiency of the data assimilation procedure (Madsen, et al. 2000).
This document describes the AR error module that can be used for output correction.

Role in FEWS
The error modelling module is a generic forecasting module. The module is used to improve the reliability of forecasts by attempting to identify the
structure of the error a forecasting module makes during the modelling phase, where both the simulated and observed values are available, and
then applying this structure to the forecast values. This is under the assumption that the structure of the error remains unchanged.

Because of the structure of the forecasting system, where models are first run over a historic period and then over a forecast period, the error
modelling module runs in two phases: (i) during the historic period, where the structure of the error model is determined, and (ii) during the forecast
phase, where the error model is applied in correcting the forecast time series.

The module applies an AR model of the error. The order of the statistical model may either be selected by the user (through configuration), or
derived automatically. In this second mode the user must indicate the maximum order of each of these parts.

To stabilise the identification of the error model, transformations to the model residuals may be applied before identifying the model;

1. no transformation
2. transforming the series by subtracting the mean
3. Box-Cox transformation, in this case the user must also identify the lambda parameter to be used. A lambda of zero indicates a natural
logarithm transformation

Functionality described

This utility is applied to improve model time series predictions through combining modelled series and observed series. It uses as input an output
series from a forecasting module (typically discharge from a routing or rainfall-runoff module) and the observed series at the same location. An
updated series for the module output is again returned by the module. Updating is applied through application of an error model to the residuals
between module output and observed series. This error model is applied also to the forecast data from this module to allow correction of errors in
the forecast.

Data Requirements

Input time series data

To apply the error modelling module, time series data are required for both the simulated and historical period at a given location, as well as the
forecast time series at this location. Under normal configuration, these time series will be of the same parameter.

Time series Parameter (example) View period

Simulated values [Link] Historic period (e.g. -2000 hours to start of forecast)

Observed values [Link] Historic period (e.g. -2000 hours to start of forecast)

Forecast values [Link] Forecast period (e.g. start of forecast to +48-240 hours)*

* Note: The length of the forecast period may be zero. If this is the case, then the error modeling module will consider only the historic period.

Output time series data

The error modelling module returns two time series, an updated time series for the historic period, and an updated time series for the forecast
period. In principle the updated time series over the historic period is almost identical to the observed time series.

Time series Parameter (example) View period

Updated values (historic) [Link] Historic period (e.g. -2000 hours to start of forecast)

Updated values (forecast) [Link] Forecast period (e.g. start of forecast to +48-240 hours)*

Configuration data

The configuration of the error modelling module is used to determine its behaviour in establishing the statistical model of the error and how this is
applied to derive the updated series.

Configuration items

Order_AR: (maximum) order of the AR component;


Order_MA: 0;
Order_Sel: Option to determine if the orders are to be derived automatically (with the maxima as defined above) or as given;
Transform: Option to apply a transformation to the residuals. This may either be "none", "mean" or "boxcox";
Lambda: A required parameter for the "boxcox" transformation option.

Conditions and Assumptions

Below a short summary of the paper by Broersen (2002) can be found. The algorithms were extracted from ARMASA, a Matlab toolbox
(Broersen, Online), and are implemented in the Delft-FEWS AR module.

Time series definitions

Three types of time series models can be distinguished: autoregressive or AR, moving average or MA, and the combined ARMA type. An
ARMA(p,q) process can be written as (Priestley, 1981)

x(n) + a1 x(n-1) + ... + ap x(n-p) = e(n) + b1 e(n-1) + ... + bq e(n-q)

where e(n) is a purely random process, thus a sequence of independent identically distributed stochastic variables with zero mean and variance sigma_e^2.
This process is purely AR for q=0 and MA for p=0. Any stationary stochastic process can be written as a unique AR(infinity) or MA(infinity) process. The
roots of

A(z) = 1 + a1 z^-1 + ... + ap z^-p

are denoted as the poles of the ARMA(p,q) process, and the roots of

B(z) = 1 + b1 z^-1 + ... + bq z^-q

are the zeros. Processes and models are called stationary if all poles are strictly within the unit circle, and they are invertible if all zeros are within
the unit circle.

AR estimation

This model type is the backbone of time series analysis in practice. Burg's method, also denoted as maximum entropy, estimates the reflection
coefficients (Burg, 1967; Kay and Marple, 1981), thus making sure that the model will be stationary, with all roots of A(z) within the unit circle.
Asymptotic AR order selection criteria can give wrong orders if candidate orders are higher than 0.1N (N is the signal length). The finite sample
criterion CIC(p) is used for model selection (see Broersen, 2000). The model with the smallest value of CIC(p) is selected. CIC uses a
compromise between the finite sample estimator for the Kullbach-Leibler information (Broersen and Wensink, 1998) and the optimal asymptotic
penalty factor 3 (Broersen, 2000; Broersen and Wensink, 1996).

Box-Cox transformations

The Box-Cox transformation (Box and Cox, 1964) can be applied in the order selection and estimation of the coefficients. The object in doing so is
usually to make the residuals more homoskedastic and closer to a normal distribution:

T(y) = (y^lambda - 1) / lambda

for lambda not equal to zero; when lambda = 0, T(y) = log(y).

Application of the Module

The implemented algorithm computes AR(p) models with p=0,1,...,N/2 and selects the single best AR model with CIC. However, one can choose to
provide the order one wants to use. Usually the mean of the signal will be subtracted from the signal to obtain the model and coefficients, but this
option can be switched off. It is recommended to use the subtraction of the mean. Figure 1 shows an example of using the implemented error
module on the Moesel river basin at Cochem, Germany.

Figure 1. Application of AR module with subtraction of mean to the Moesel basin at Cochem, Germany. Blue is the measured discharge (Q), red
is the updated model update and forecast, green is the model simulation. The forecasts starts at t=401 hours.

Optionally, one can choose to use the Box-Cox transformation. In the update the algorithm will provide an updated model update. During the
forecast the selected model and coefficients are used for predicting the model error, which is added to the model forecast to obtain an updated
model forecast.

Figure 2. Application of AR module to the Moesel basin using Box Cox transformation and subtraction of mean. Blue is the measured discharge
(Q), red is the updated model update and forecast, green is the model simulation. Forecasts starts at t=401 hours.

Figure 2 shows the effect of additionally applying a Box Cox transformation (l=0.3). It gives slightly better predictions than without (Figure 1).


Figure 3. Application of AR module to the Moesel basin using subtraction of mean. Blue is the measured discharge (Q), red is the updated model
update and forecast, green is the model simulation. Forecasts starts at t=250 hours.


Figure 4. Application of AR module to the Moesel basin using subtraction of mean. Blue is the measured discharge (Q), red is the updated model
update and forecast, green is the model simulation. Forecasts starts at t=500 hours.

Figure 3 and 4 show two applications (forecast starts at t=250 hours and at t=500 hours) of the algorithm with subtraction of mean but without
Box-Cox transformation.

References
Box, G.E.P. and D.R. Cox, 1964. An analysis of transformations. J. Royal Statistical Soc. (series B), vol 26, pp 211-252.

Broersen, P.M.T.; Weerts, A.H. (2005). Automatic Error Correction of Rainfall-Runoff models in Flood Forecasting Systems. Instrumentation and
Measurement Technology Conference, 2005. IMTC 2005. Proceedings of the IEEE
Volume 2, Issue , 16-19 May 2005 Page(s): 963 - [Link]

Broersen, P.M.T., 2000. Finite sample criteria for Autoregressive order selection. IEEE Trans. Signal Processing, vol 48, pp 3550-3558.

Broersen, P.M.T., 2002. Automatic spectral analysis with time series models. IEEE Trans. Instrum. Meas., vol 51, pp 211-216.

Broersen, P.M.T. Matlab toolbox ARMASA (online) Available: [Link]

Broersen, P.M.T. and H.E. Wensink, 1996. On the penalty factor for autoregressive order selection in finite samples, vol 44, pp 748-752.

Broersen, P.M.T. and H.E. Wensink, 1998. Autoregressive model order selection by a finite sample estimator for the Kullbach-Leibler
discrepancy. IEEE Trans. Signal Processing, vol 46, pp 2058-2061.

Burg, J.P., 1967. Maximum entropy spectral analysis. Proc. 37th Meeting Soc. Exploration Geophys., Oklahoma City, OK, pp 1-6.

Kay, S.M. and S.L. Marple, 1981. Spectrum analysis-A modern perspective. Proc IEEE, vol 69, pp 1380-1419.

Madsen, H., M.B. Butts, S.T. Khu, S.Y. Liong, 2000. Data assimilation in rainfall-runoff forecasting. Hydroinformatics 2000, 4th Inter. Conference
on Hydroinformatics, Cedar Rapids, Iowa, USA, 23-27 July 2000, 9p.

Priestley, M.B., 1981. Spectral analysis and time series. New York: Academic.

09 Report Module
Report Module Configuration
Configuring formatting of reports
Configuring content of reports
Charts
Spatial plot snapshots
Summary
thresholdsCrossingsTable
thresholdCrossingCountsTable
flagCountsTable
flagSourceCountsTable
maximumStatusTable
mergedPrecipitationTable
SystemStatusTables
liveSystemStatus
exportStatus table
importStatus table
scheduledWorkflowStatus table
completedWorkflowStatus table
currentForecastStatus table
logMessageListing table
forecastHistory table

Improved report configuration


Tags in report template
Using Colours in Tables

CSS Table Styles

Report Module Configuration


The role of the report module is to generate reports of the forecast results in a form that can easily be displayed on a web server without using
DELFT-FEWS. Reports that are produced by DELFT-FEWS serve two purposes:

provision of detailed, specific information to e.g. forecasting duty officers, area officers
provision of general, overview reports

Reports are used to present the forecast result data in a fixed, user defined, format. The format of a report can be easily customized by a user
and stored in a template.

Some functions of DELFT-FEWS use the report component to present their results. For example, the critical condition lookup tables defined in
for example coastal forecasting only produce index time series. Without post-processing of these indices, a user can never see what is
happening in his coastal system. The report component will therefore also be used to interpret the indices and transform these into information
that a user can understand. The ValueAttributeMaps (see Regional Configuration) define how these indices are to be transformed to
understandable strings and/or icons.

Reports generated can be exported directly to the file system (for viewing through the web server) or they can be stored in the database. The
report export module may then be used to export selected reports to the web server when required.

The report template uses tags as placeholders to identify the location of objects in the report. In the following table the available tags are
described. Appendix D gives an overview of available tags. Appendix D also gives details on how to define the declarations to be used in the
reports, allowing the layout of tables etc. to be defined.

When available as configuration on the file system, the name of the XML file for configuring an instance of the report module called for
example Report_Coastal_Forecast may be:

Report_Coastal_Forecast 1.00 [Link]

Report_Coastal_Forecast File name for the Report_Coastal_Forecast configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 89 Elements of the Reports configuration

Configuring formatting of reports

declarations

Root element for declaring variables and formats to be used in making the report. Details on how these are to be configured is given in Appendix
D.

Figure 90 Elements of the declarations configuration

defineGlobal

The DefineGlobal element can be used to enter some information that is valid for all reports. Multiple DefineGlobal elements may be defined, as
long as the ID attribute is kept unique.

Related TAG: $DEFINITION(definitionId)$

<item>Format

Formatting instructions for a specific item in the report. See Appendix D for details.

templateDir

Root directory of the directory containing report templates.

reportRootsDir

Root directory to which all reports are to be exported. This directory is typically the root directory of the web server.

Note: A [Link] tag may be useful to define this directory

reportRootsSubDir

Root directory to which the current reports are to be exported. This directory is relative to the reportRootsDir.

sendToLocalFileSystem

Boolean option to determine if reports are to be written to the file system on creation. If set to false reports will be stored in the Reports table in the
database, pending export through the ReportExport module.

Configuring content of reports

report

Root element for defining the data to be included in the report.

Figure 91 Elements of the Report configuration

InputVariable

Input timeSeriesSets for the report. All timeSeriesSets that are used in the report must be defined here, both for charts and for tables. See
Transformation Module for details on defining an inputVariable element.

FileResource

Reference to an external file that must be copied with the report. The file will be copied to the same location as the report, with the correct
filename inserted at the place of the tag. This can be used for copying images.

Related TAG: $FILERESOURCE(resourceId)$

Chart

In the Chart element the variableId's to be used for one or more charts are defined. The Chart ID that is defined is referenced in the TAG.

Related TAG: $CHART(chartId)$

Summary

In the Summary element the variableId's are specified that are used to create the summary information. The OverlayFormat of the
SummaryFormat determines what is shown on the map.

Related TAG: $SUMMARY(summaryId)$

Table

In the Table element the variableId's are specified that are used to create a table. The TableFormat controls how the table is formatted, i.e. the
row and column information and how the data is displayed in the table.

Related TAG: $TABLE(tableId)$

Status

The Status element links a Status ID that is referenced in the STATUS TAG to a Status Format ID.

Related TAG: $STATUS(statusId)$

Template

Template file name to be used for the report.

OutputSubDir

Output Sub Directory for the report

OutputFileName

Report Filename.

DefineLocal

The DefineLocal element can be used to enter some information that is valid for a single report. Multiple DefineLocal elements may be defined, as
long as the ID attribute is kept unique.

A special case of the DefineLocal attribute is when the DefineLocalID is the same as a previously DefineGlobalID. In this case the DefineLocal
overrides the setting of the DefineGlobal. This is valid only for the report being configured, not for any other configured reports in the same
configuration file.

Related TAG: $DEFINITION(definitionId)$
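
A minimal sketch of a single report definition combining the elements above; the file names, ids and directory names are illustrative only:

<report>
    <inputVariable>
        <variableId>forecastLevel</variableId>
        <!-- timeSeriesSet definition; contents abbreviated -->
    </inputVariable>
    <defineLocal id="AREA_NAME">Northumbria</defineLocal>
    <template>area_status_template.html</template>
    <outputSubDir>northumbria</outputSubDir>
    <outputFileName>area_status.html</outputFileName>
</report>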

Charts

Charts can be used for visualising (more than one) timeseries by displaying them on an x and y axis using lines, dots etc. The charts which can be
added to html-reports look more or less the same as in the TimeSeries Display. Charts are created as (individual) *.png files.

Figure 92 Example of a chart (*.png file)

Template tag

In the Chart element the variableId's to be used for one or more charts are defined. The Chart ID that is defined is referenced in the TAG.

Related TAG: $CHART(chartId)$

Configuration aspects

Charts should be configured according to the following schema definition ([Link])

Figure 93 Chart definition according to the ChartComplexType ([Link]).

For adding a chart to a report the following aspects are important to configure:

Chart attributes;
Chart timeseries;

Chart attributes

To display the chart the following attributes need to be defined:

Id: unique identifier for this chart (unique in this report);


formatId: referring to the format of this chart (in declaration section of the report)
height: an integer value for the height of the chart in pixels;
width: an integer value for the width of the chart in pixels;

Chart timeseries

To display lines, dots etc. of a certain timeseries, the reference to this timeseries (variableId) should be mentioned.

Figure 94 chart declaration in the report section
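
A minimal sketch of such a chart declaration in the report section, assuming the attributes listed above and timeSeries elements referring to declared inputVariables; all ids and sizes are illustrative only:

<chart id="levelChart" formatId="defaultChartFormat" width="600" height="400">
    <timeSeries>observedLevel</timeSeries>
    <timeSeries>forecastLevel</timeSeries>
</chart>

The corresponding template would then contain the tag $CHART(levelChart)$.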

Spatial plot snapshots

Gridded time series can be visualized in a report by means of snapshots. The snapshot is an image depicting the time series spatially.

Configuration aspects

Spatial plot snapshots are configured according to the following schema definition ([Link])

snapshot

The snapshot is defined as a relative time interval from T0. Optionally a file name may be specified for the snapshot which is used to save the
snapshot on the file system. If omitted the file name is generated by the report module.

Summary

A summary is a (background) map which can be added to a report. On top of this map, icons can be displayed. The icons indicate the location or
the (warning) status of the monitoring or forecasting point. By adding specific html-functionality to the report template(s), maps can be used to
navigate through the reports as well. Clickable areas and locations can be distinguished here. The map itself (as a file) can be any image file. For
displaying maps in html-reports the following formats are advised: *.png, *.jpg or *.gif.

Template tag

In the Summary element the variableId's are specified that are used to create the summary information. The OverlayFormat of the
SummaryFormat determines what is shown on the map.

Related TAG: $SUMMARY(summaryId)$

Creating a summary

The map itself is an existing file and can be created in several ways. An image processing software package (like Paint Shop Pro) can create a
'screendump' from the map section of the FEWS Explorer. The FEWS Explorer itself has some hidden features which can be used as well. The
[F12] button can be used for:

Copy current map to a .png file ([F12]+ L);


Copy current map extent to the clipboard ([F12]+ K);

The *.png file is named "[Link]" and can be found in the /bin directory of your system. The map extent (rectangle containing real world
coordinates) can be pasted into any application by choosing Edit-Paste or [Ctrl]+ V. These four coordinates describing the extent of your map
picture in world coordinates are needed in the declarations section of the report ModuleConfigFile where you declare this summary.

Remark: Every time you use the above mentioned [F12] features, the png file in the /bin directory and the clipboard are overwritten! When making a
series of maps you should copy/rename your png file after using this option. You should also paste the map extent into a text editor or spreadsheet
directly before repeating the operations with another map extent in the FEWS Explorer.

Configuring a summary

Declaration section

In the declarations section of the report ModuleConfigFile, the summaryFormat needs to be declared. The following elements should be specified
(see figure).

Id: unique identifier (as reference to this summary)


Map
o Image
file: relative reference/path to the image file;
width: width of the image file in pixels;
height: height of the image file in pixels;
o x0: horizontal margin/shift of the map on the html page;
o y0: vertical margin shift of the map on the html page;
o leftLongitude: left side of the map in real world coordinates;
o rightLongitude: right side of the map in real world coordinates;
o bottomLatitude: bottom side of the map in real world coordinates;
o topLatitude: top side of the map in real world coordinates;
o mapFormat: details for positioning and behaviour of the map;
o overlayFormat: details for positioning and behaviour of icons on the map;

Figure 95 Summaryformat in the declarations section

Detailed explanation

File details like width and height can be retrieved using image processing software.
The x0 and y0 elements are margins/shifts of the position of the map compared to the left-upper point of the report (e.g. an A4-sheet).
This left-upper point is (0,0). The x0/y0 refer to the left-upper point of the image.
The mapFormat is used for positioning the map on the page (relative to all other content) and therefore it is placed in a so-called [DIV] tag.
This type of html tag puts everything which is configured within that tag in that position on the page. The following table explains the
references of the numbers in this format:

Position Type Variable

0 number Absolute x position of map image.

1 number Absolute y position of map image.

2 number Image width.

3 number Image height

4 number Image filename.

5 (optional) number Reference to a clickable map (#clickmap by default)

The overlayFormat is used for positioning location and/or warning icons on the map based on:
The map itself (defined in mapFormat);
X and Y coordinates of the required locations from the [Link];
Note that in case of numbers bigger than 999 the output may contain unwanted commas, e.g. width 1274 is printed in the
reports as 1,274. In order to avoid this, it is recommended to use the pattern

{..., number, #}

instead of

{..., number}

Position Type Variable

0 number Absolute x position of status icon.

1 number Absolute y position of status icon.

2 string Image filename.

3 string Status description.

4 string Location name.

5 (optional) string Html-link for opening an individual html page ([Link])

The mapFormat and overlayFormat elements are configured in html language in which some parts need to be dynamically filled in. The result will
be valid html. The map itself is placed as the bottom 'layer' using the <DIV> tag attributes (index="0") in the mapFormat. Objects defined in the
overlayFormat are given higher indices to place them 'on top of the map' and will always be visible (location icons, warning icons).

Use of optional settings in mapFormat and overlayFormat

In both formats, an optional item can be specified.

Clickable map

A clickable map is an image on a html page containing 'hot spots' where the cursor changes from the normal 'arrow' to a 'pointing finger' during a
mouse-over event. These hot spots can contain hyperlinks to other pages.

mapFormat
<div style="TOP:{1, number, #}px;
LEFT:{0, number, #}px;
position:absolute;Z-index:0">
<img border="0" src="{4}"
width="{2,number,#}" height="{3, number, #}"
usemap="{5}">
</div>

When adding the string "usemap="{5}" to the mapFormat (see above) the outcome in the html page will be (printed in bold).

The part describing the hot spots for this map is defined in the <map> tag. In the example below, three areas are 'clickable'. Every hot spot links
to another html page in the set of reports.

<img border="0" src="[Link]" width="503" height="595" usemap="#clickmap">

<!-- Here the clickable map starts: SHOULD BE IN THE TEMPLATE -->
<map name="clickmap">
<area alt="Northumbria Area" href="northumbria/area_status.html" shape="polygon" coords="138,
...,...,34">
<area alt="Dales Area" href="dales/area_status.html" shape="polygon" coords="266, ..., ..., 285">
<area alt="Ridings Area" href="ridings/area_status.html" shape="polygon" coords="359, ..., ..., 379">
</map>

To avoid hot spots on a map, do not include usemap="{5}" in the mapFormat.

Hyperlinks

Hyperlinks can be added to the overlayFormat. By using the following option, hyperlinks to individual reports will be added automatically. They will
have a fixed (non-configurable) name, "[Link]", and each is assumed to be located in a directory with the same name as the locationId,
relative to where this report with the map is located in the directory structure.

overlayFormat
<div style="TOP:{1,number,#}px;
LEFT:{0,number,#}px;
position:absolute;Z-index:1">
<a href="{5}">
<img border="0" src="{2}" title="{4}: {3}">
</a>
</div>\n

When adding href="{5}" to the overlayFormat (at that location), a hyperlink is added to the icon placed on the map. In html it will look like
this:

<!-- map -->


<div style="TOP:160px;LEFT:5px;position:absolute;Z-index:0">
<img border="0" src="[Link]" width="503" height="578" usemap="#clickmap">
</div>

<!-- icons -->


<div style="TOP:467px;LEFT:427px;position:absolute;Z-index:1">
<a href="BOYNTN1/[Link]"><img border="0" src="fluvial_site_data.gif"
title="Boyntn1: No threshold exceeded"></a>
</div>

Report section

A summary can be added to a report configuration as shown below.

Figure 96 Summary configuration in the reports section

The following elements need to be defined.

Id: identifier referring to the tag in the template. In this case the corresponding template will contain $SUMMARY(statusAreaMap)$

formatId: identifier referring to the unique identifier of this map in the declarations section.
timeSeries: reference to an inputVariable (declared in the report or in the declarations section).
mapId: reference to a valueAttributeMap;
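
A minimal sketch of the corresponding summary element in the report section; the formatId, timeSeries and mapId values are illustrative only:

<summary id="statusAreaMap" formatId="areaMapFormat">
    <timeSeries>coastalLookupIndex</timeSeries>
    <mapId>CoastalWarningStatus</mapId>
</summary>

The corresponding template then contains the tag $SUMMARY(statusAreaMap)$, as noted above.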

Tables

Tables are available in different types. The similarity between them is that they are referenced with the same template tag.

Template tag

In the Table element the variableId's are specified that are used to create a table. The TableFormat controls how the table is formatted, i.e. the
row and column information and how the data is displayed in the map.

Related TAG: $TABLE(tableId)$

The following table types are available:


1. table: original table;
2. htmlTable: new style table with same functionality as 'table';
3. thresholdsCrossingsTable: table containing numbers (count) of threshold crossings.
4. maximumStatusTable: table containing (coloured) indicating threshold crossings over time.
5. mergedPrecipitationTable: table containing (merged) precipitation figures in specific time intervals
6. systemStatusTable: table containing system (management) information;

Tables 2 to 6 have references to cascading style sheets. Below the different tables are explained.

htmlTable

The htmlTable is the successor of the table described earlier. The configuration of this htmlTable is easier and more readable.

Declarations section

In the declarations sections a format of a table needs to be defined. In the figure below, a format of a htmlTable is configured.

Figure 97 Declaration of a htmlTableFormat

The following elements need to be defined:

id: unique identifier (as reference to this table);


tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
column: indicates what (of one or more Timeseries) should be shown in the columns (options are: location, time, date, locationcritical,
parameter, parameters, choice);
row: indicates what (of one or more Timeseries) should be shown in the rows (options are: location, time, date, locationcritical, parameter,
parameters, choice);
relativeWholePeriod: definition of a moving period in time to be displayed (in the example above 24 hours before the day containing T0 of
the forecast and 24 hours after the day containing T0 of the forecast; in total 3 days);
topLeftText: definition of the text to be displayed in the upper left cell of the table;
cellFormat: format of the cell containing the values;
topFormat: format of the column headers;
leftFormat: format of the most left column;
missingValueText: definition of the missing value character. Choices are: " ", "#", "" or " -";
nullValueText: definition of the null value (NAN) indicator. Choices are: " ", "-", "-999", " no data";

Report Section

In the report itself the reference to a tableFormat and the declaration of the content should take place. The schema prescribes as follows:

Figure 98 Declaration and reference of a htmlTable in a report

The following elements need to be defined:

id: identifier to the template tag (in this case: $TABLE(table1)$);


formatId: reference to the format of this table (to one of the htmlTableFormats in the declarations section);
timeSeries: reference to an inputVariable (declared in the report or in the declarations section);
cellFormat: addition to tableStyleX class for adding specific styles to the content of the cell;

Remark: htmlTables can contain more than one timeseries. By adding different cellFormats to the series (see the picture above) different styles can
be attached for display in the table. In this way you can distinguish the two timeseries in the table.
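
A minimal sketch of a htmlTableFormat declaration and the htmlTable in the report section that references it; all ids, styles and period values are illustrative only, and the exact nesting (including whether cellFormat is an attribute or a child element) may differ in the actual schema:

<htmlTableFormat id="table1Format">
    <tableStyle>tableStyle1</tableStyle>
    <column>time</column>
    <row>location</row>
    <relativeWholePeriod unit="hour" start="-24" end="24"/>
    <topLeftText>Date/time</topLeftText>
    <missingValueText>-</missingValueText>
</htmlTableFormat>

<htmlTable id="table1" formatId="table1Format">
    <timeSeries cellFormat="observed">observedLevel</timeSeries>
    <timeSeries cellFormat="forecast">forecastLevel</timeSeries>
</htmlTable>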

Detailed explanation

The choice of adding a certain tableStyle to a table supplies you with the opportunity to influence the looks and style of the table, its borders,
background and content. By setting the tableStyle a number of style classes are generated in the html-pages. By declaring these classes in
a stylesheet (*.css) and ascribing a certain font family, font size, border color, background color etc. you are able to 'polish' the looks of your
reports. If a tableStyle class is not mentioned in a stylesheet, it is displayed using default settings.

The following classes are generated automatically and are added after the tableStyleX in which X stand for an integer in the range 1 to 10.

class description specific for

_beforeT0 date/time indication before time zero (T0) of the forecast time column (most left column)

_firstAfterT0 date/time indication of the first occurrence after time zero (T0) of the forecast time column (most left column)

_afterT0 date/time indication after time zero (T0) of the forecast time column (most left column)

_data default indication of data content of a cell data cells

_anyString user defined cellFormat data cells

_datamax addition to current style if value is maximum of that series (_data_datamax or _anyString_datamax) data cells

_leftcol default indication of a row header

_header default indication of a column header

_threshold_n indication of threshold level (n=0,1,2,...) threshold tables (colouring of backgrounds)

thresholdsCrossingsTable

A thresholdsCrossingsTable is a table in which the number of threshold crossings for each level is counted. The number given in the table corresponds with
the 'worst' case situation. When a timeseries crosses a number of thresholds in a forecast, only the 'worst' threshold crossing is counted. An
example of a thresholdsCrossingsTable is given below.

Figure 99 Example of a thresholdCrossingTable (NE Region)

Declarations section

In the declaration sections the layout of the table needs to be defined.

Figure 100 Declaration of a thresholdsCrossingsTable

The following elements need to be defined:

id: unique identifier (as reference to this table);


tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
thresholdsCrossingsCounterHeaderText: String to be displayed in the table header;
thresholdsCrossingsTableUpCrossingsHeaderText: no longer in use; can be left blank;
thresholdsCrossingsTableDownCrossingsHeaderText: no longer in use; can be left blank;
cellFormat: any string; "_thresholds" is recommended. This is the style class of the table header in combination with the typeId (the range always
starts with a 0);
topFormat: no longer in use; can be left blank;
colWidth: width of a column in pixels;
missingThresholdsText: Choice for missing threshold. Choices are: " ", "#", "" or " -";
noThresholdsCrossed: definition of the null value (NAN) indicator. Choices are: " ", "-", "-999", " no data";
countIndividualThresholds: true if individual threshold crossings must be counted; false (= default) if per location only the worst case must
be counted
thresholdTypes: array of thresholds, assuming a logical order
o typeId: textual indication of corresponding threshold level;
o name: indication of the text to appear in column headers;

Report section

A thresholdsCrossingsTable should be defined in the report as well, i.e. this table needs to be 'filled' with its corresponding timeseries.

Figure 101 Declaration of a thresholdsCrossingsTable in a report

The following elements need to be defined:

id: identifier to the template tag (in this case: $TABLE(table1)$);


formatId: reference to the format of this table (to one of the thresholdsCrossingsTable in the declarations section);
mergeLocations: boolean indicator. True means: treat all locations of mentioned timeseries together for combined assessment. False
means: extract individual timeseries so that every row indicates one location (timeseries);
Choice between
o table; --> this can be used to display a table 'in' another one.
o tableRow
formatId: reference to the thresholdsCrossingsTable format;
id: identifier (mainly for own use/comment);
timeSeries:
mapId: reference to a valueAttributeMap;
Text: reference to an inputVariable (declared in the report or in the declarations section);

Since this type of table can aggregate data (i.e. combine several timeseries), the mergeLocations option is available. Its effect is explained in
detail in the example later in this section; a configuration sketch is given below.
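As an illustration, a report-section entry combining these elements could look roughly as follows. The nesting and the identifiers crossingsFormat1, row_catchment1 and the mapId thresholdColours are assumptions made for this sketch; the timeseries name is taken from the example later in this section.

<table>
    <id>table1</id>
    <formatId>crossingsFormat1</formatId>
    <mergeLocations>true</mergeLocations>
    <tableRow>
        <id>row_catchment1</id>
        <formatId>crossingsFormat1</formatId>
        <timeSeries>
            <!-- reference to a valueAttributeMap (hypothetical id) -->
            <mapId>thresholdColours</mapId>
            <text>Catchment1_Waterlevel_For</text>
        </timeSeries>
    </tableRow>
</table>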

thresholdCrossingCountsTable

A thresholdCrossingCountsTable displays threshold crossing counts depending on which thresholds have been crossed within a given time
period. The thresholdCrossingCountsTable is a new version of the thresholdsCrossingsTable. A thresholdCrossingCountsTable has the same
layout as the thresholdCrossingCountsTab in the thresholdOverviewDisplay, for consistency.

Declarations section

In the declarations section the layout of the table needs to be defined in a thresholdCrossingCountsTableFormat; a configuration sketch is given after the list of options below.

The following options are available:

id: identifier of this table format.


thresholdGroupId: id of a thresholdGroup that is defined in the thresholds configuration file. This table displays all thresholds in the
specified thresholdGroup.
relativePeriod: Relative time period for this table. The time period is relative to timeZero. A relative period can be e.g. -3 to +3 hours or
e.g. +3 to +6 hours (relative to timeZero).
countAllActiveThresholds: If true, then this table counts all thresholds that are active (all thresholds that have been crossed). If false, then
for a given location this table only takes into account the active threshold with the most severe warning level. Default is false.
countWarningAreas: If true, then this table counts warning areas, as follows. It is possible to define warning areas for a
levelThresholdValue in the thresholdValueSets configuration file. If a crossed levelThresholdValue has warning areas defined, then the
number of warning areas is counted for that levelThresholdValue. If a crossed levelThresholdValue has no warning areas defined, then a
count of 1 is used for that levelThresholdValue. If this option is false, then for each crossed levelThresholdValue a count of 1 is used (i.e.
the warning areas are ignored). Default is true.
noThresholdsDefinedText: If specified, then this text is shown in cells that correspond to data for which no thresholds are defined. Default
is empty space.
noDataAvailableText: If specified, then this text is shown in cells for which no data is available. Default is "n/a".
crossingCountZeroText: If specified, then this text is shown in cells for which there are no threshold crossings. Default is "-".
tableHeaderText: String to be displayed in the table header. The relative period and thresholdGroup name will be appended to this string.
columnWidth: The width of the columns in the table.
tableStyle: The tableStyle to use for this table. The available tableStyles are defined in the report tables .css file.
cellFormat: The cellFormat to use for this table. The available cellFormats for the configured tableStyle are defined in the report tables
.css file.
scrollableTable: Use this to split the table into two parts, one for the header row(s) and one for the data rows. The data row part refers to
the tableStyle for this table with "_scrollable" appended. This can be used to make the data rows scrollable while the header row(s)
remain fixed. For this to work the referred style needs to be defined in the report tables .css file.
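A declaration sketch combining these options is given below. The element nesting, the attribute form of relativePeriod, the id crossingCountsFormat1 and the thresholdGroup floodWarningLevels are assumptions; actual values depend on the thresholds configuration and the report schema.

<thresholdCrossingCountsTableFormat>
    <id>crossingCountsFormat1</id>
    <!-- hypothetical thresholdGroup defined in the thresholds configuration file -->
    <thresholdGroupId>floodWarningLevels</thresholdGroupId>
    <!-- period relative to timeZero; the attribute form is an assumption -->
    <relativePeriod unit="hour" start="0" end="24"/>
    <countAllActiveThresholds>false</countAllActiveThresholds>
    <countWarningAreas>true</countWarningAreas>
    <noDataAvailableText>n/a</noDataAvailableText>
    <crossingCountZeroText>-</crossingCountZeroText>
    <tableHeaderText>Threshold crossings</tableHeaderText>
    <columnWidth>60</columnWidth>
    <tableStyle>tableStyle1</tableStyle>
</thresholdCrossingCountsTableFormat>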

Report section

In the reports section define a thresholdCrossingCountsTable.

The following options are available:

id: identifier for the template tag (in this case: $TABLE(table1)$);
formatId: reference to the format of this table (to one of the thresholdCrossingCountsTableFormats in the declarations section);
mergeLocations: boolean indicator. True means: treat all locations of mentioned timeseries together for combined assessment. False
means: extract individual timeseries so that every row indicates one location (timeseries);
Choice between
o timeSeries;
o table; --> this can be used to display a table within another one.
o tableRow

flagCountsTable

FlagCountsTable is available since Delft-FEWS release 2011.01. A FlagCountsTable displays flag counts depending on the flags of the values in
a time series within a given time period.

Example of a flagCountsTable

Declarations section

In the declaration section the layout of the table needs to be defined in a flagCountsTableFormat.

The following options are available:

id: identifier of this table format.


tableStyle: The tableStyle to use for this table. The available tableStyles are defined in the report tables .css file.
hyperlinkUrl: Optional URL. If specified, then the location name for each time series will be a hyperlink that refers to this URL. It is
possible to insert the following tags: !LOCATION_ID!, !LOCATION_NAME!, !PARAMETER_ID! and !PARAMETER_NAME!. The
!LOCATION_ID! tag will be replaced with the location id for the time series. The !LOCATION_NAME! tag will be replaced with the location
name for the time series. The !PARAMETER_ID! tag will be replaced with the parameter id for the time series. The
!PARAMETER_NAME! tag will be replaced with the parameter name for the time series. This can e.g. be used to link each row in this
FlagCountsTable to a page (or an anchor within a page) that contains a FlagSourceCountsTable with more detailed information about the
time series for that row.
scrollableTable: Use this to split the table into two parts, one for the header row(s) and one for the data rows. The data row part refers to
the tableStyle for this table with "_scrollable" appended. This can be used to make the data rows scrollable while the header row(s)
remain fixed. For this to work the referred style needs to be defined in the report tables .css file.

Configuration example:

<tableStyle>tableStyle1</tableStyle>
<hyperlinkUrl>[Link]#!LOCATION_NAME!_!PARAMETER_NAME!</hyperlinkUrl>


Report section

In the reports section define a flagCountsTable.

The following options are available:

id: Identifier for this FlagCountsTable that is used in the report template html file in the table tag (e.g: $TABLE(table1)$).
formatId: The id of the FlagCountsTableFormat to use for this FlagCountsTable.
inputVariableId: One or more ids of inputVariables that are defined at the start of this report. For each time series in the inputVariable(s),
there will be one row in the table with the location, parameter and flag counts for that time series. For a given time series this uses only
the data within the relativeViewPeriod that is defined for that time series in the timeSeriesSet. If a timeSeriesSet contains multiple time
series (e.g. a locationSet), then for each time series in the timeSeriesSet a separate row is created.

Configuration example:

<inputVariableId>Cowbeech</inputVariableId>
<inputVariableId>Romsey</inputVariableId>
<inputVariableId>CrosslandsDrive</inputVariableId>


flagSourceCountsTable

FlagSourceCountsTable is available since Delft-FEWS release 2011.01. A FlagSourceCountsTable displays counts of flag sources depending on
the flag sources of the values in a time series within a given time period. The flag source for a value contains the reason why that value got a
certain flag. For example if a value was rejected by a "hard max" validation rule, then it gets flag unreliable and flag source "hard max".

Example of a flagSourceCountsTable

Declarations section

In the declaration section the layout of the table needs to be defined in a flagSourceCountsTableFormat.

The following options are available:

id: identifier of this table format.


tableStyle: The tableStyle to use for this table. The available tableStyles are defined in the report tables .css file.
scrollableTable: Use this to split the table into two parts, one for the header row(s) and one for the data rows. The data row part refers to
the tableStyle for this table with "_scrollable" appended. This can be used to make the data rows scrollable while the header row(s)
remain fixed. For this to work the referred style needs to be defined in the report tables .css file.

Configuration example:

<tableStyle>tableStyle1</tableStyle>


Report section

In the reports section define a flagSourceCountsTable.

The following options are available:

id: Identifier for this FlagSourceCountsTable that is used in the report template html file in the table tag (e.g: $TABLE(table1)$).
formatId: The id of the FlagSourceCountsTableFormat to use for this FlagSourceCountsTable.
inputVariableId: The id of an inputVariable that is defined at the start of this report. The time series of this inputVariable is used for this
table. This table shows for each validation rule (hard max, hard min, rate of change, etc.) the number of values that were rejected
because of that validation rule. This uses only the data within the relativeViewPeriod that is defined for the time series in the
timeSeriesSet. If the timeSeriesSet contains multiple time series (e.g. a locationSet), then an error message is given.

Configuration example:

<inputVariableId>Cowbeech</inputVariableId>


maximumStatusTable

A maximumStatusTable indicates, by colouring, when certain threshold levels are crossed. In this type of table the rows should be defined
individually and can contain more than one series. The boolean value 'mergeLocations' plays an important role in combining the locations or treating
them individually.

Figure 102 Example of a maximumStatusTable (NE Region)

Declarations section

In the declarations section the layout of the table needs to be defined.

Figure 103 Declaration of maximumStatusTable

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
mainHeaderText: a textual string which is displayed in the table header;
timeStepsInTable: integer number of timesteps to be displayed in the table. This should be derived from the relativeViewPeriod of the
corresponding timeSeries;
timeStepsAggregation: integer number of timesteps to be aggregated (=taken together). Worst status is being displayed.
timeHeaderInterval: integer number for aggregating the headers of the cells or not. Number '1' means 'no aggregation' so every column
has got its own header.
timeHeaderDisplayMinutes: boolean value for having the minutes displayed;
colWidth: integer value for the width of the cells;
showAsSingleColumn: boolean value for displaying the timeseries into one column only (true). If set to 'true' the last value of the
timeseries is considered.

Reports Section

In the report itself the necessary timeseries need to be assigned to the table.

Figure 104 Declaration of a maximumStatusTable in a report

The following elements need to be defined:

id: identifier to the template tag (in this case: $TABLE(table1)$);


formatId: reference to the format of this table (to one of the maximumStatusTables in the declarations section);
mergeLocations: boolean indicator. True means: treat all locations of mentioned timeseries together for combined assessment. False
means: extract individual timeseries so that every row indicates one location (timeseries);
table: add individual maximumStatusTables here for a correct visualisation (see detailed explanation: 'two tables into one'). In most cases
2: first for observed values, second for forecast table.

For each table:

formatId: reference to the format of this table (to one of the maximumStatusTables in the declarations section);
id: identifier (used for comments only)
mergeLocations: boolean indicator. True means: treat all locations of mentioned timeseries together for combined assessment. False
means: extract individual timeseries so that every row indicates one location (timeseries);
tableRow: (1 or more)
o formatId: reference to the thresholdsCrossingsTable format;
o id: identifier (mainly for own use/comment);
o timeSeries: (1 or more)
mapId: reference to a valueAttributeMap;
Text: reference to an inputVariable (declared in the report or in the declarations section);

Detailed Explanation: Two tables into one

In fact, the maximumStatusTable is designed as visualised below. To create a nicely aligned table, the 'two timeseries tables' (the one with the
'observed' values and the one with the forecast series) are put in individual cells of the outer table. So the outer table only consists of two cells: the
left cell contains the observed table, the right cell contains the forecast table. The outer table itself needs to be declared as well! The report
declaration (see above) can be inspected to see this in practice.

Detailed Explanation: Wide variation of tables

The variation for displaying maximumStatus information is wide. The combination of relativeViewPeriod (length of the forecast series), timestep and
the desire to aggregate timesteps can all be implemented. The calculation should be consistent; if not, several messages will be shown.

Some examples

Source: 12 hrs of 15min data


Display: maximumStatus with 15 minute data, time header by the hour (with minutes)
Configuration

setting | value | explanation
timeStepsInTable | 48 | 12*4 = 48 timesteps
timeStepsAggregation | 1 | each column represents 1 timestep
timeHeaderInterval | 4 | 4 columns share one merged header (hour)
timeHeaderDisplayMinutes | true | minutes shown in the header

Result: table with 48 time columns with 12 aggregated headers visualising the hour with a minute indication (like in first figure of this section)

Source: 6 hrs of 15min data


Display: maximumStatus with hourly data (no minute indication)

Configuration

setting | value | explanation
timeStepsInTable | 24 | 6*4 = 24 timesteps
timeStepsAggregation | 4 | 4*15 min aggregated to 1 hour
timeHeaderInterval | 1 | each hour 'column' has its own header
timeHeaderDisplayMinutes | false | no minute indication in the header

Result: table with 6 (time) columns indicating the 'worst' status of that hour.
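Expressed as a declaration, the second example could be sketched as follows. The element nesting, the id maxStatusHourly, the mainHeaderText and the colWidth value are illustrative assumptions; the aggregation values are those of the example above.

<maximumStatusTable>
    <id>maxStatusHourly</id>
    <tableStyle>tableStyle1</tableStyle>
    <mainHeaderText>Status last 6 hours</mainHeaderText>
    <timeStepsInTable>24</timeStepsInTable>
    <timeStepsAggregation>4</timeStepsAggregation>
    <timeHeaderInterval>1</timeHeaderInterval>
    <timeHeaderDisplayMinutes>false</timeHeaderDisplayMinutes>
    <colWidth>30</colWidth>
    <showAsSingleColumn>false</showAsSingleColumn>
</maximumStatusTable>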

Example thresholdsCrossingsTable/maximumStatusTable

In a configuration the following timeSeries/InputVariables exist:

Name | Parameter | LocationSet | Locations
Catchment1_Waterlevel_Obs | [Link] | Catchment1_Locs | Loc1, Loc2, Loc3
Catchment1_Waterlevel_For | [Link] | Catchment1_Locs | Loc1, Loc2, Loc3
Catchment2_Waterlevel_Obs | [Link] | Catchment2_Locs | Loc4, Loc5, Loc6
Catchment2_Waterlevel_For | [Link] | Catchment2_Locs | Loc4, Loc5, Loc6

The geographical hierarchy is that Area 1 contains 2 Catchments (Catchment1 and Catchment2)

The Region overview should be configured such that all catchments belonging to that area are 'put' into one row which describes the status of that
area. This is valid for both the observed and the forecast timeseries. The 'mergeLocations' variable should be set to 'true' because all locations
should be merged (combined).

The Area overview should be configured in such a way that all catchments are in separate rows. This is valid for both the observed and the forecast
timeseries. The 'mergeLocations' variable should be set to 'true' because all locations should be merged (combined).

The Catchment overview forms the exception here. With mergeLocations set to 'false', the corresponding locationSet is extracted into the
individual locations, so every location has got its own row.

For the two tables for which this is valid, the last example does not add much value for a thresholdsCrossingsTable: each row
(which is equal to one location) will have a '1' in one of the cells. A maximumStatusTable supplies more value because it will indicate when this
(maximum) threshold will be reached.
See the (simplified) figures below.

mergedPrecipitationTable

A mergedPrecipitationTable contains both observed rainfall and forecast rainfall, preferably in [Link] timeseries. Data can be visualised
in configurable time intervals compared to T0 and will appear in separate columns. Additionally, a single column can be added to visualise any
parameter (e.g. CWI). An example can be found below (without the extra column). In the example below, actually two tables are plotted next to each
other. The left table (with names) contains the historical data. The one on the right-hand side contains the forecast timeseries and has no name
column. A table like this has two header rows to be defined by the user.

Figure 105 Example of a mergedPrecipitationTable (NE Region)

Declaration Section

In the declaration section the layout of the table needs to be defined.

Figure 106 mergedPrecipitationTable configuration in the declarations section

The following elements need to be defined.

id: unique identifier (as reference to this table);


tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
headerRow1Text: text to be displayed in table header (first line);
headerRow2Text: text to be displayed in table header (second line);
dataColumns: definition of individual column with a time interval for aggregating precipitation. Each column should contain:
columnHeader: text to be displayed in column header (first line);
unitHeader text to be displayed in column header (second line);
relativeViewPeriod: period to aggregate the data;
unit: unit to aggregate. Choices: day/hour/minute/second/week
start: start of interval (in time unit) compared to T0;
end: end of interval (in time unit) compared to T0;
nameColumnWidth: integer value of the width of the name column (left-most column, if present);
dataColumnWidth: integer value of the width of the data columns;
suppressNameColumn: boolean value for setting the name column visible or not. This column, when visible, is filled with the (full) location
name of the timeseries which is visualised in this table.
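A declaration sketch for such a table is given below. The dataColumn element name, the attribute form of relativeViewPeriod and all identifiers, headers and widths are assumptions made for illustration.

<mergedPrecipitationTable>
    <id>mergedPrecipitationTable1</id>
    <tableStyle>tableStyle1</tableStyle>
    <headerRow1Text>Rainfall</headerRow1Text>
    <headerRow2Text>accumulated per interval</headerRow2Text>
    <dataColumns>
        <dataColumn>
            <columnHeader>Last 24 hrs</columnHeader>
            <unitHeader>mm</unitHeader>
            <!-- aggregation interval relative to T0 -->
            <relativeViewPeriod unit="hour" start="-24" end="0"/>
        </dataColumn>
        <dataColumn>
            <columnHeader>Next 24 hrs</columnHeader>
            <unitHeader>mm</unitHeader>
            <relativeViewPeriod unit="hour" start="0" end="24"/>
        </dataColumn>
    </dataColumns>
    <nameColumnWidth>120</nameColumnWidth>
    <dataColumnWidth>60</dataColumnWidth>
    <suppressNameColumn>false</suppressNameColumn>
</mergedPrecipitationTable>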

Report section

In the report section the content (timeseries) is 'attached' to this table.

Figure 107 mergedPrecipitationTable in the report section.

The mergedPrecipitationTable in the report section is (very) easy to define. The rule of the 'outer table' applies here as well: to align the historical
and the forecast table nicely, the outer table contains both [Link] tables.

The following elements need to be defined:

id: identifier (reference to the template tag)


formatId: reference to the format of this table (to one of the mergedPrecipitationTables in the declarations section);
table:
id: identifier (used for comments only)
formatId: reference to the format of this table (to one of the mergedPrecipitationTables in the declarations section)
timeSeries: reference to an inputVariable.

SystemStatusTables

SystemStatusTables display information about the status and behaviour of the FEWS system itself (like in the System monitor).

SystemStatusTables come in different types:

liveSystemStatus: information about live system: MC and FSS('s);


exportStatus: information about exported files/reports;
importStatus: information about files imported;
scheduledWorkflowStatus: information about (next) scheduled workflows;
completedWorkflowStatus: information about number of workflows completed in last 24 hrs.
currentForecastStatus: information about which workflows are set to CURRENT;
logMessageListing: list of logmessages (based on a query);
forecastHistory: historic overview of forecasts.

In most tables it is possible to add 'benchmark' data to compare the actual and the desired/required situation. The configuration of such a table
requires the definition of this benchmark value. Such a table contains an 'Item', a 'Benchmark' and a 'Status' column.

Besides a 'benchmark' (something to compare the actual status with), additional fields (columns from the database) can be included in the table. A
specific boolean value (showOutputFieldsOnly) can be used to either include or exclude these benchmark columns. In most tables this boolean is
set to 'false' because most tables contain both status information as well as additional (meta)information. See the figure below.

Figure 108 A systemStatusTable is divided into a status part and an extraOutputFields part.

Each type will be briefly explained:

liveSystemStatus

A liveSystemStatusTable displays information about the status and behaviour of the live system components (MasterController and Forecasting
Shell Server(s)).

Figure 109 Example of a liveSystemStatus table (NE Region)

Declarations Section

Figure 110 Example of the configuration of a liveSystemStatus table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
showOutputFieldsOnly: boolean value for displaying the outputfields only.
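The same set of elements is used for all statusTableSubTypes. As a sketch, a liveSystemStatus declaration could look roughly as follows; the wrapper element name, the id and the header texts are illustrative assumptions.

<systemStatusTable>
    <id>liveSystemStatusFormat</id>
    <tableStyle>tableStyle1</tableStyle>
    <statusTableSubType>liveSystemStatus</statusTableSubType>
    <tableTitle>Live system status</tableTitle>
    <headerRows>1</headerRows>
    <itemHeader>Item</itemHeader>
    <benchmarkHeader>Benchmark</benchmarkHeader>
    <statusHeader>Status</statusHeader>
    <statusHeaderSplit>1</statusHeaderSplit>
    <showOutputFieldsOnly>false</showOutputFieldsOnly>
</systemStatusTable>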

Report Section

Figure 111 Example of the configuration of a liveSystemStatus table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(liveSystemStatusTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
mcStatusQuery:
itemTextStatus: if left blank, the MC name will be displayed;
benchmarkTextStatus: benchmark text;
fssStatusQuery:
itemTextStatus: if left blank, the FSS name will be displayed;
benchmarkTextStatus: benchmark text for status;
itemTextSize: text for the size query;
benchmarkTextStatus: benchmark text for size;
singleRecordQuery:
tableName: table name in the local datastore;
inputfield: reference to a column in the table (to query);
recordId: reference to a record in the table;
statusField: reference to a column in the table (to display);
itemText: text to be displayed (can be empty);
benchmarkText: benchmark text.

exportStatus table

An exportStatus table displays information about the status of a number of export features of the system, such as:

the last occurrence of an (export) workflow (by workflowStatusQuery);


the number of files present in an export directory (by logMessageParseQuery);
the transfer speed of exported files (by logMessageParseQuery);

Figure 112 Example of an exportStatus table (NE Region)

Declarations Section

Figure 113 Example of the configuration of an exportStatus table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
showOutputFieldsOnly: boolean value for displaying the outputfields only.

Report Section

Figure 114 Example of the configuration of an exportStatus table in the report section

This table is constructed using two types of queries:

workflowStatusQuery;
logMessageParseQuery

The following elements need to be defined:

id: identifier to the template tag ($TABLE(liveSystemStatusTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
workflowStatusQuery:
workflowId:
itemText: text in 'Item' column;
benchmarkTextStatus: text in the 'Benchmark' column;
selectCompletedWorkflowsOnly: boolean indicating completed or scheduled workflows;
statusField: field from the taskRunsCompletions table indicating whether a workflow has been completed or not;
logMessageParseQuery:
itemText: text in 'Item' column;
benchmarkTextStatus: text in the 'Benchmark' column;
logEntryEventCode: eventcode to filter on;
keyword: typical/unique keyword to parse for;
displayWord: integer value for the word to display (n-th word in this log message);

importStatus table

An importStatus table displays information about the datafeeds which have been imported, how many of them were read and how many failed to
be imported. The frequency of the files imported can be (visually) compared with a benchmark figure. See below for an example.

Figure 115 Example of an importStatus table (NE Region)

Declarations Section

Figure 116 Example of the configuration of an importStatus table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field definition specifically for import related topics. Recommended fields are:
Last file imported
Nr. of files read
Nr. of files failed
showOutputFieldsOnly: boolean value for displaying the outputfields only.

Remark: when defining extraOutputFieldHeaders it is important to maintain the same order in the declarations section (definition of
the fields) and in the report section (referencing the content), otherwise the header and the content will not correspond.

Report Section

Figure 117 Example of the configuration of an importStatus table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(importStatusTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
datafeedStatusQuery:
BenchmarkText:
id: textual string containing the name of the datafeed;
text: textual string indicating the benchmark for this datafeed;
statusField: textual reference to the main field in the database table ImportStatus, lastImportTime;
extraOutputField
Text: textual references to the fields in the database table ImportStatus to fill the defined ExtraOutputFields (Declaration section).
In this case it is recommended to add here:
lastFileImported
filesReadCount
filesFailedCount

scheduledWorkflowStatus table

A scheduledWorkflowStatus table displays the workflows which are scheduled together with their repeat time and next due time. The figure below
illustrates this.

Figure 118 Example of a scheduledworkflowStatus table (NE Region)

Declarations Section

Figure 119 Example of the configuration of a completedWorkflowStatus table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for scheduled workflow related topics. Recommended fields are:
Workflows
Description
MC Id
Repeat Time
Next Due Time
showOutputFieldsOnly: boolean value for displaying the outputfields only.

Since this table is not referring to a benchmark (it just reads the configuration), the value for showOutputFieldsOnly is set to true. Only these
fields are displayed.

Remark: one reference to an existing workflow is sufficient to extract all scheduled workflows from the database; that is why it
seems that only one table row is configured here. In fact, this table will be filled with ALL scheduled workflows when configured
as above.

Report Section

Figure 120 Example of the configuration of a scheduledWorkflowStatus table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(scheduledWorkflowTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
workflowStatusQuery
workflowId: textual reference (case-sensitive!) to an existing workflow in the configuration
itemText: text in the 'Item' Column (can be left blank in case of this statusTableSubType)
benchmarkText: text in the 'Benchmark' column (can be left blank in case of this statusTableSubType)
selectCompletedWorkflowsOnly: boolean variable: 'true' refers to query completed workflows (see next table subtype) and 'false'
refers to query scheduled workflows (this type)
statusField: textual reference to the field in the database table Tasks. In this case taskStatus. In case of scheduled workflows the
column 'taskRepeatTime' contains an integer value indicating that it is a repeating activity (workflow)
extraOutputField
Text: textual references to the fields in the database table ImportStatus to fill the defined ExtraOutputFields (Declaration
section). In this case it is recommended to add here:
workflowId
description
ownerMcId
taskRepeatTime
taskPendingSinceTime

completedWorkflowStatus table

A completedWorkflowStatus table contains an overview of all workflows carried out in the last 24 hours. An example is given below.

Figure 121 Example of a completedworkflowStatus table (NE Region)

Declarations Section

Figure 122 Example of the configuration of a completedWorkflowStatus table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
statusSubHeader: Additional Field(s) definition specifically for completed workflow related topics. Recommended fields are:
Nr. of Runs
Nr. Failed
showOutputFieldsOnly: boolean value for displaying the outputfields only.

Report Section

Figure 123 Example of the configuration of a completedWorkflowStatus table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(scheduledWorkflowTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
workflowStatusQuery
workflowId: textual reference (case-sensitive!) to an existing workflow in the configuration
itemText: text in the 'Item' Column (can be left blank in case of this statusTableSubType)
benchmarkText: text in the 'Benchmark' column (can be left blank in case of this statusTableSubType)
selectCompletedWorkflowsOnly: boolean variable: 'true' refers to query completed workflows (this type) and 'false' refers to
query scheduled workflows (see previous table subtype)
statusField: textual reference to the field in the database table TaskRuns. In this case taskRunStatus.

currentForecastStatus table

The currentForecastStatus table gives an overview of which workflows are set to CURRENT. The workflows mentioned in this table are the
same as those marked with a green icon in the System Monitor of the Operator Client. An example of this table is given below.

Figure 124 Example of a currentForecastStatus table (NE Region)

Declarations Section

Figure 125 Example of the configuration of a currentForecast table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for current forecast related topics. Recommended fields are:
T0
What-if Scenario
Description
FDO
showOutputFieldsOnly: boolean value for displaying the outputfields only.

Report Section

Figure 126 Example of the configuration of a currentForecastStatus table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(scheduledWorkflowTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
currentForecastQuery
workflowId: textual reference (case-sensitive!) to an existing workflow in the configuration
itemText: text in the 'Item' Column
benchmarkText: text in the 'Benchmark' column
statusField: textual reference: should be "dispatchTime"!
extraOutputField: textual references to specific TaskRun details (see figure above):
T0
whatIfId
description
FDO

logMessageListing table

A logMessageListing table contains logmessages which are available in the Log Browser tab in the System Monitor of the Operator Client. Log
messages of a specific type can be queried. By making use of a correct reference to the cascading style sheet this table can be set to 'scrollable'.
An example of such a table is given in the figure below.

Figure 127 Example of a logMessageListing table (NE Region)

Declarations Section

Figure 128 Example of the configuration of a logMessageListing table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for log message related topics. Recommended fields are:
Log Creation Time
Log Message
TaskrunId
showOutputFieldsOnly: boolean value for displaying the outputfields only. This tableType requires a 'true' here.

Report Section

Figure 129 Example of the configuration of a logMessageListing table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(scheduledWorkflowTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
logMessageQuery
logLevelFilter: textual (case-sensitive) reference to one of the log message levels. Choices are: INFO, WARN, ERROR, FATAL.
logEntryEventCode: textual (case-sensitive) reference to a specific type of log message. The eventCode is a 'filter' to retrieve
certain types or error messages. In this case the "[Link] " eventCode has been used.
statusField: textual reference to the correct field in the LogEntries table in the database (="logLevel");
extraOutputField: textual reference to other fields in the LogEntries table which fill the corresponding columns. In this case 3
additional columns need to be filled with information:

logCreationTime (creation time of message)
logMessage (content of the log message itself)
taskRunId (reference to the taskrun that threw this message)
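Combining these elements, a report-section sketch for a logMessageListing table could look as follows. The id, formatId, WARN filter and event code are hypothetical placeholders; the statusField and extraOutputField names are those listed above.

<systemStatusTable>
    <id>logMessageListingTable</id>
    <formatId>logMessageListingFormat</formatId>
    <logMessageQuery>
        <logLevelFilter>WARN</logLevelFilter>
        <!-- hypothetical event code used as filter -->
        <logEntryEventCode>Import.Info</logEntryEventCode>
        <statusField>logLevel</statusField>
        <extraOutputField>logCreationTime</extraOutputField>
        <extraOutputField>logMessage</extraOutputField>
        <extraOutputField>taskRunId</extraOutputField>
    </logMessageQuery>
</systemStatusTable>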

forecastHistory table

A forecastHistory table provides an overview of the most recent forecasts carried out. The number of forecasts to include is configurable. An
example of such a table is given below.

Figure 130 Example of a forecastHistory table (NE Region)

Declarations Section

Figure 131 Example of the configuration of a forecastHistory table in the declarations section

The following elements need to be defined:

tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);

statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for forecast history related topics. Recommended fields are:
Dispatch Time
Completion Time
T0
Workflow
What-if Scenario
Description
FDO
showOutputFieldsOnly: boolean value for displaying the outputfields only. This tableType requires a 'true' here.

Report Section

Figure 132 Example of the configuration of a forecastHistory table in the report section

The following elements need to be defined:

id: identifier to the template tag ($TABLE(scheduledWorkflowTable)$);


formatId: reference to the format of this type of systemStatus table (in the declarations section);
forecastHistoryQuery
nrOfForecasts: integer value referring to the number of most recent forecast to include in this table;
statusField: textual reference to the field in the TaskRuns table of the database
extraOutputFields: textual references to required fields
dispatchTime
completionTime
T0
workflowId
whatIfId
description
fdoName
mcId
fssId

To export a report, see the reportExport module instance.

Improved report configuration

Summary
The report configuration is quite extensive. To reduce configuration efforts it is now possible to use the inputVariables item in the <declarations>
section (at the top of the document) instead of mentioning this same inputVariable in all individual reports. The latter should be replaced by one
line containing a reference to the locationId. Example configurations of both options are given below.

Config options
Individual reports contain individual inputVariables

Declarations section contains (overall) inputVariables, individual reports contain locationIds

Tags in report template

Tags

The report template uses tags as placeholders to identify the location of objects in the report. In the following table the available tags are
described.

Tag Description

$CURRENTTIME(dateFormatId)$ The actual time the report was generated. Note that this is the Delft FEWS time, which
is not necessarily equal to the local time.

Arguments: 1
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed

$LOCATIONNAME(variableId)$ The location name associated with a time series.

Arguments: 1
- variableId: refers to the variableId assigned to the time series in the report
configuration.

$TIMEZERO(variableId; dateFormatId)$ The time zero of the forecast run in which the time series is created.

Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed

$MINVALUE(variableId; numberFormatId)$ The minimum value found in the time series

Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- numberFormatId: specified in the configuration file, sets the formatting for the values
to be displayed

$MAXVALUE(variableId; numberFormatId)$ The maximum value found in the time series

Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- numberFormatId: specified in the configuration file, sets the formatting for the values
to be displayed

$MINTIME(variableId; dateFormatId)$ The date/time of minimum value found in the time series

Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed

$MAXTIME(variableId; dateFormatId)$ The date/time of maximum value found in the time series

Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed

$MAXWARNINGLEVEL(variableId)$ returns the name of the highest warning level threshold that has been crossed

$EXTERNALFORECASTINGSTARTTIME(variableId)$ returns the start of the external forecast

$FORECASTNAME(variableId)$ The name or description of the forecast.

Arguments: 1
- variableId: refers to the variableId assigned to the time series in the report
configuration.

$DEFINITION(definitionId)$ The definition tag provides a means to enter some additional textual information into a
report. This information can be set for all reports at once, through the defineGlobal
element of the declarations section or for each report through the defineLocal element
in the reports section.

Arguments: 1
- definitionId: refers to ID provided in either the defineLocal or defineGlobal elements.
The defineLocalId takes preference over the defineGlobalId when both are the same.

$FILERESOURCE(resourceId)$ The fileresource tag provides a means to include an external file into the report. This
may be any file, as long as it is permissible in the report file. The inclusion is

Arguments: 1
- resourceId: Refers to the ID given to the fileResource element in the reports section.
The fileResource element specifies the location of the file to be included relative to the
region 'home' directory.

$TABLE(tableId)$ Inserts a table. The layout of the table is defined in the report configuration files.

Arguments: 1
- tableId: ID of the table definition

$CHART(chartId)$ Inserts a reference to the filename of the chart. The chart is created in the same
directory as the report file. The reference is inserted without any path prefixes. This
feature will only be useful in XML or HTML report files.

Arguments: 1
- chartId: ID of the chart definition

$SUMMARY(summaryId)$ Inserts a map with overlying text, symbols or values of configured timeseries. This is
a complex tag that requires substantial preparation in the configuration.

Arguments: 1
- summaryId: ID of the summary definition

$STATUS(statusId)$ Inserts a table created using a SQL query on the database. The table may be
additionally formatted.

IMPORTANT: the HTML table header is not created by this TAG. The TAG only
creates the records of the table. This has been made to enable the user to provide
more user friendly header info than the field names.

Arguments: 1
- statusId: ID of the status definition
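To illustrate how these tags are used, a fragment of a report template HTML file might look as follows. The variableId Loc1_For, the dateFormatId longdatetime, the numberFormatId twoDecimals and the table/chart ids are hypothetical and must match ids declared in the report configuration.

<p>Report generated at $CURRENTTIME(longdatetime)$</p>
<p>Forecast for $LOCATIONNAME(Loc1_For)$, T0 = $TIMEZERO(Loc1_For; longdatetime)$</p>
<p>Maximum value: $MAXVALUE(Loc1_For; twoDecimals)$ at $MAXTIME(Loc1_For; longdatetime)$</p>
$TABLE(table1)$
<img src="$CHART(chart1)$"/>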

Using Colours in Tables


A Report HTML table can contain colours; it can be useful to colour cells that cross a certain threshold. To achieve this, the configuration of the
following files must be correct:

ValueAttributesMap: This file must include a table with colours.

Report module
In the declarations section a table layout must be specified with a tableStyle.

In the reports section a report table must have a map ID linked to the timeseries in the table.

[Link] file: In this file the correct colours must be configured for the map ID's

10 Performance Indicator Module
Performance Indicator module
Assessing performance of modules
Assessing performance of forecast values- lead time accuracy
Assessing performance of forecast values- timing of thresholds
Configuration of performance module
performanceIndicatorSet
inputVariable
outputVariable
modulePerformanceIndicator
leadTimeAccuracyIndicator
thresholdTimingIndicator
additionalCriteria
description
leadTimes
leadTime
thresholdIds
thresholdId

Performance Indicator module


The performance indicator module is used as an analysis tool in DELFT-FEWS to establish an overview of how well the forecasting system is
performing in terms of accuracy of the individual forecasting module or in terms of the forecasting system as a whole. Performance can be
assessed in two ways:

Performance of the individual forecasting modules. This reflects how accurate a given forecasting module is, following the traditional
performance measures used widely in module calibration, for example root mean square error, Nash-Sutcliffe measure etc.
Performance of the forecasting system itself. This reflects the accuracy of the system in forecasting. Three types of measure are
proposed to this end, (i) lead time accuracy of forecast time series, (ii) accuracy of timing threshold event crossings and (iii) accuracy and
timing of peak predictions.

The first type of performance assessment can be used either in calibration of the system, or in the operational setting to determine performance of
modules and take actions such as the use of an alternative module due to poor performance.

The second type of measure can be assessed once observed data for which forecasts were made becomes available.

Assessing performance of modules

The first and most simple application of the performance indicator module is in the traditional module calibration. This is by comparing two time
series where one time series is the estimated series and the other is the reference time series. These time series are compared over a
configurable length. As with other time series this is referenced with respect to the forecast start time (T0).

The time series are compared using a number of performance indicators. In the definitions below, x_i denotes the estimated value, y_i the
reference value, N the number of data points and ȳ the mean of the reference values.

Bias (BIAS)

Mean absolute error (MAE)

Mean Square error (MSE)

Nash-Sutcliffe efficiency (NS)

Peak accuracy in Mean Square Error (MSE_PEAK)
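The formulas for these measures follow the standard definitions; they can be sketched as below, with p_k denoting the time of peak k (the exact forms used by the module are assumed to match these standard definitions):

BIAS = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)

MAE = \frac{1}{N}\sum_{i=1}^{N}\left|x_i - y_i\right|

MSE = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2

NS = 1 - \frac{\sum_{i=1}^{N}(x_i - y_i)^2}{\sum_{i=1}^{N}(y_i - \bar{y})^2}

MSE\_PEAK = \frac{1}{K}\sum_{k=1}^{K}\bigl(x_{p_k} - y_{p_k}\bigr)^2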

where K is the number of peaks identified.

To establish the peak accuracy, the peak must be identified; logic from the TransformationModule is to be used, although this needs extending to
make sure a peak is a peak. A peak needs to be independent, and it must be ensured that the peak given is not simply the maximum value in a
time window at the boundaries (see also the Threshold Event Crossing module). Note that the peak in the estimated series does not need to fall exactly
on the same time as the reference peak, but must be identified within a window (see peak independence window).

Procedure in peak comparison is

Find peaks in reference series


Find accompanying peaks in estimated series- if there is no identifiable peak, use value at time of peak in reference series
Determine performance

Volume error (PERC_VOLUME)

On establishing the performance, the indicator is returned as a time series (simulated historical). This time series is a non-equidistant time series,
labelled as a forecast historical with the time stamp set to T0.

Assessing performance of forecast values- lead time accuracy

Performance of forecast is assessed on the basis of lead time accuracy. This is done by comparing the forecast lead time value against the
observed value at the same time (received later!). For each lead time, this value is assessed over a given number of forecasts.

An option in the configuration of the module determines if the module identifies performance of approved forecasts only or of all forecasts.

Performance is assessed over all forecasts available for a given period of time, e.g. over a week or month (relative view period). Clearly, evaluation
cannot be done over forecasts beyond the length of the rolling barrel in the local data store.

Lead time accuracy is again evaluated using the MSE, MAE or BIAS, as sketched below.

Lead time accuracy in Mean Square Error LEAD_MSE

Lead time accuracy in bias LEAD_BIAS

Lead time accuracy in Mean absolute error LEAD_MAE
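These measures follow the same standard definitions, evaluated per lead time over the set of forecasts considered (a sketch; x_j(τ) and y_j(τ) denote the estimated and reference value at lead time τ in forecast j):

LEAD\_MSE(\tau) = \frac{1}{J}\sum_{j=1}^{J}\bigl(x_j(\tau) - y_j(\tau)\bigr)^2

LEAD\_BIAS(\tau) = \frac{1}{J}\sum_{j=1}^{J}\bigl(x_j(\tau) - y_j(\tau)\bigr)

LEAD\_MAE(\tau) = \frac{1}{J}\sum_{j=1}^{J}\left|x_j(\tau) - y_j(\tau)\right|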

where the lead time accuracy is evaluated at lead time τ, J is the number of forecasts considered, y_j(τ) is the reference value at that lead time in
forecast j and x_j(τ) is the estimated value at that lead time in forecast j.

There are two options in writing results:

1. The results of the evaluation are written as a time series (simulated forecasting), with as reference time the T0 of the evaluation run and a
time stamp for each lead time.
2. The results for each lead time are written as a different time series (simulated historical). This will allow assessment of lead time accuracy at
selected lead times to be compared against catchment conditions.

On selecting reference values, these may not yet be available. Should this be the case, the number of forecasts considered (J) is
reduced accordingly. If less than the configured number is considered, then a WARN message is given, indicating how many of the expected number were
actually used.

Assessing performance of forecast values- timing of thresholds

An important indicator of performance is the timing of predicted threshold event crossings. Again this is evaluated over a number of forecasts. To
evaluate this, the threshold crossings in the indicator and the reference series are considered. For each pair of matching thresholds (matched on
threshold ids) the time between the two is evaluated, and expressed either as a time bias (T_BIAS) or a time absolute error (T_MAE), as sketched
below. Times are evaluated in seconds.
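As a sketch, with M the number of matched threshold crossings (an assumption on notation):

T\_BIAS = \frac{1}{M}\sum_{m=1}^{M}\bigl(t^{est}_m - t^{ref}_m\bigr)

T\_MAE = \frac{1}{M}\sum_{m=1}^{M}\left|t^{est}_m - t^{ref}_m\right|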

where t^{ref}_m is the time of threshold crossing m in the reference series and t^{est}_m is the time of the corresponding threshold crossing in the estimated series.

The thresholds to consider are determined in the configuration by providing one or more thresholdIds.

The results of the evaluation are written as a time series (simulated historical), with as reference time the T0 of the evaluation run and a time
stamp for each evaluated threshold crossing.

Configuration of performance module

Figure 134 Elements of the performance module configuration

performanceIndicatorSet

Root element for configuration of a performance Module indicator. Multiple elements may be defined for each performance indicator to be
assessed.

Attributes;

performanceIndicatorId : Optional Id for the configuration. Used for reference purposes only

inputVariable

Definition of inputVariables (time series). Input variables are identified by their VariableId. See transformation module on definition of the
inputVariable element. An input variable will need to be defined for both simulated and for observed time series.

outputVariable

Definition of outputVariable time series of performance indicator values is to be written to. This will normally be a non-equidistant time series as it
is not a-priori certain when the performance indicator module is run.

modulePerformanceIndicator

Root element for configuration of performance indicator assessing module performance

Attributes;

indicatorType : selection of performance indicator. Enumeration of options includes:


bias
meanabsoluteerror
meansquareerror
nashsutcliffeefficiency
peakmeansquareerror
volumeerror
calculatedVariableId : VariableId to identify calculated time series
observedVariableId : VariableId to identify observed (reference) time series
outputVariableId : VariableId to write resulting Performance index time series to.
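A minimal sketch of a performanceIndicatorSet with a single modulePerformanceIndicator is given below. The attribute form and all ids are illustrative assumptions; the inputVariable and outputVariable definitions (see the transformation module) are omitted.

<performanceIndicatorSet performanceIndicatorId="NS_waterlevel">
    <!-- inputVariable and outputVariable definitions go here -->
    <modulePerformanceIndicator
        indicatorType="nashsutcliffeefficiency"
        calculatedVariableId="simulatedWaterLevel"
        observedVariableId="observedWaterLevel"
        outputVariableId="performanceNS"/>
</performanceIndicatorSet>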

leadTimeAccuracyIndicator

Root element for configuration of performance indicator assessing lead time accuracy

Attributes;

indicatorType : selection of performance indicator. Enumeration of options includes;


bias
meanabsoluteerror
meansquareerror
calculatedVariableId : VariableId to identify calculated time series
observedVariableId : VariableId to identify observed (reference) time series
outputVariableId : VariableId to write resulting Performance index time series to.

thresholdTimingIndicator

Root element for configuration of performance indicator assessing accuracy of threshold Timing

Attributes;

indicatorType : selection of performance indicator. Enumeration of options includes;


bias
meanabsoluteerror
meansquareerror
calculatedVariableId : VariableId to identify calculated time series
observedVariableId : VariableId to identify observed (reference) time series
outputVariableId : VariableId to write resulting Performance index time series to.

Figure 135 Elements of the ModulePerformance configuration

additionalCriteria

Additional criteria identified in establishing performance indicators. Application depends on the performance indicator selected.

Attributes;

Criteria : list of criteria that may be applied. Enumeration of options includes;

minnumberofforecasts
timewindowinseconds
thresholdvaluesetid
peakthresholdvalue
maximumgapbetweenpeaksinseconds
minimumrecessionbetweenpeaks

value: value of the criterion defined
violationOfCriteriaFlaggedAs: optional flag applied to the PerformanceIndicator output series if the identified criteria (e.g. minnumberofforecasts) do not hold. Enumeration of;

unreliable
doubtful

description

Description of criteria defined. For reference purposes only.

Figure 136 Elements of the leadTimeAccuracy configuration.

leadTimes

Root element for defining lead times.

leadTime

Lead time for which to assess lead time performance.

Attributes;

time : lead time in number of seconds;


outputVariableId: variableId to output lead time accuracy to. This is defined when a separate time series is used to keep track of performance at different lead times. It is not required when keeping track of performance in a single time series. (Note that in the former case a simulated historical time series can be used; in the latter this must be a simulated forecasting time series.)

Figure 137 Elements of the thresholdTimingAccuracy configuration.

thresholdIds

Root element for defining threshold crossings to be assessed.

thresholdId

Configuration of threshold crossing to be checked.

Attributes;

intId : Id of the threshold. See thresholds configuration in Regional Configuration.


outputVariableId: variableId to output threshold timing accuracy to. This is defined when a separate time series is used to keep track of performance for different thresholds. It is not required when keeping track of performance in a single time series. (Note that in the former case a simulated historical time series can be used; in the latter this must be a simulated forecasting time series.)
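To illustrate how the elements described above fit together, a rough sketch of a performance indicator configuration is given below. It is not taken from a working system: the root element, the exact shape of the inputVariable and outputVariable elements and all ids are assumptions based on the descriptions above and should be verified against the performance module schema. The leadTimeAccuracyIndicator and thresholdTimingIndicator elements follow the same pattern as modulePerformanceIndicator, with leadTimes/leadTime and thresholdIds/thresholdId child elements respectively.

<performanceIndicatorSet performanceIndicatorId="Performance_Lobith">
<inputVariable>
<variableId>QObserved</variableId>
<timeSeriesSet>
<moduleInstanceId>Import_Telemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.obs</parameterId>
<locationId>Lobith</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</inputVariable>
<inputVariable>
<variableId>QSimulated</variableId>
<timeSeriesSet>
<moduleInstanceId>HBV_Historical</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.sim</parameterId>
<locationId>Lobith</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<variableId>BiasLobith</variableId>
<timeSeriesSet>
<moduleInstanceId>PerformanceIndicators</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.bias</parameterId>
<locationId>Lobith</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
<modulePerformanceIndicator indicatorType="bias" calculatedVariableId="QSimulated" observedVariableId="QObserved" outputVariableId="BiasLobith">
<additionalCriteria criteria="minnumberofforecasts" value="10" violationOfCriteriaFlaggedAs="doubtful">
<description>Indicator only considered reliable when at least 10 forecasts are available</description>
</additionalCriteria>
</modulePerformanceIndicator>
</performanceIndicatorSet>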

11 Amalgamate Import Data Module


Amalgamate Import Data module
Import data is stored in DELFT-FEWS for a default length of time before it is removed from the database by the rolling barrel. To keep track of how long data is to be kept in the system, an expiry time is set on each piece of data imported. When the expiry time has passed the data is removed. To keep track of when data became available to the system, a creation time is also administered. This time is used when identifying what data was made available to DELFT-FEWS, which allows tracing the data available at the time of the forecast. A disadvantage of tracking when data was imported is that each piece of data is stored individually in a database record (as a BLOB). The large number of records with small BLOBs has implications for the size of the database.

If observed data is to be kept in the system longer than forecast data without having severe implications on the size of the database, the
amalgamate module can be configured to amalgamate multiple small lengths of data to a single BLOB in a single record. These can be stored
with a much longer expiry time than the default. On a scheduled system this module can be run on a daily basis, amalgamating for example
import data that is a number of weeks old and is about to expire into single blocks with a much later expiry time.

When available as configuration on the file system, the name of the XML file for configuring an instance of the amalgamate module called for
example Amalgamate_Import may be:

Amalgamate_Import 1.00 [Link]

Amalgamate_Import File name for the Amalgamate_Import configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 138 Elements of the Amalgamate Module configuration

task

Root element for definition of an amalgamate task. Multiple entries may exist.

maximumCombinedBlobLength

Definition of the maximum length of the amalgamated BLOB.

Attributes;

unit : unit of time (enumeration of: second, minute, hour, day, week)
multiplier : defines the number of units given above in a time step.
divider : same function as the multiplier, but defines a fraction of units in a time step.

timeSeriesSet

Time series set to amalgamate. The input and output time series sets are identical. Set a new expiry time in the time series set to ensure
it is kept in the database for the required period.
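By way of illustration only, an Amalgamate_Import configuration could be sketched roughly as follows. The root element name and the exact placement of the new expiry time are assumptions based on the element descriptions above, and the moduleInstanceId, parameterId and locationSetId are hypothetical.

<amalgamate>
<task>
<!-- amalgamate the small import records into blocks of at most one week per BLOB -->
<maximumCombinedBlobLength unit="week" multiplier="1"/>
<!-- input and output time series set are identical; the expiryTime keeps the
amalgamated data in the database much longer than the default -->
<timeSeriesSet>
<moduleInstanceId>Import_Telemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs</parameterId>
<locationSetId>TelemetryGauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="week" start="-6" end="-2"/>
<expiryTime unit="day" multiplier="730"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</task>
</amalgamate>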

12 Archive Module
Archive Forecast Module

Introduction

The standard version of Delft FEWS (FEWS) includes archiving functionality. DELFT-FEWS can create archives for selected forecasts,
thresholds, configurations and timeseries. Delft FEWS can also restore a forecast and its data from an archive. An archive of a forecast contains
all result data from the forecast run, but also includes all the data used by the forecast at the time of the run, including any initial module states.
The archive can be used for hindcasting and analysis, using the data that was available at the time of the forecast. Forecast archives, with all associated data, can be created manually or automatically. The archives are placed as zip files in a user-defined folder. Archives can be retrieved by importing the zip files.

Management of the folders with archives is the responsibility of the system manager. When not selectively managed the disk space required for
archiving will quickly increase depending on the data volumes used and produced by a given FEWS configuration.

As all functional tasks are run by DELFT-FEWS through a workflow, a moduleInstance is created to allow archives to be made. The module instances must be correctly registered in the moduleInstanceDescriptors (see Regional Configuration), and point to the relevant class in the moduleDescriptors configuration (see System Configuration).

Archive modules

FEWS contains 4 modules that can create or restore archives. These modules need to be registered in the ModuleDescriptors file located in the system configuration files.

ModuleDescriptors 1.00 [Link]
<moduleDescriptor id="ForecastArchiver">
<description>Forecast archiver</description>
<className>[Link]</className>
</moduleDescriptor>
<moduleDescriptor id="TimeSeriesArchiver">
<description>Time Series archiver</description>
<className>[Link]</className>
</moduleDescriptor>
<moduleDescriptor id="ThresholdEventsArchiver">
<description>Threshold Events archiver</description>
<className>[Link]</className>
</moduleDescriptor>
<moduleDescriptor id="ConfigurationArchiver">
<description>Configuration archiver</description>
<className>[Link]</className>
</moduleDescriptor>

Configuration of a ConfigurationArchiver Module Instance

The ConfigurationArchiver can be used to make archives of configuration changes in the FEWS database. In the example below, the
ConfigurationArchiver is scheduled to run once every 24 hours. The ConfigurationArchiver checks if there have been any configuration changes in
the last 24 hours. If so, it stores the configuration changes in a zip file and stores these in the folder configured in the [Link] file with the
Tag ARCHIVE_EXPORT_PATH.

Archive_Configuration 1.00 [Link]


<archiveRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<importDirectory>$ARCHIVE_IMPORT_PATH$</importDirectory>
<exportDirectory>$ARCHIVE_EXPORT_PATH$</exportDirectory>
<exportArchiveRun>
<archivePeriod unit="hour" start="-24" end="0"/>
<archiveType>ConfigurationArchive</archiveType>
</exportArchiveRun>
</archiveRun>

Configuration of a ThresholdEventsArchiver Module Instance

The ThresholdEventsArchiver can be used to make archives of threshold crossings that have been stored in the FEWS database. Threshold
events are created by the FEWS Threshold module, and stored in the FEWS ThresholdEvents table. The threshold events can be used by the
FEWS Performance module to analyse the performance of the Forecasting System.
An example of the ThresholdEventsArchiver is shown in the example below. As with the ConfigurationArchiver, the ThresholdEventsArchiver is
scheduled to run once every 24 hours. The ThresholdEventsArchiver checks if there have been any threshold events in the last 24 hours. If so, it
stores the threshold events in a zip file and stores these in the folder configured in the [Link] file with the Tag
ARCHIVE_EXPORT_PATH.

Archive_Thresholds 1.00 [Link]


<archiveRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<importDirectory>$ARCHIVE_IMPORT_PATH$</importDirectory>
<exportDirectory>$ARCHIVE_EXPORT_PATH$</exportDirectory>
<exportArchiveRun>
<archivePeriod unit="hour" start="-24" end="0"/>
<archiveType>ThresholdEventsArchive</archiveType>
</exportArchiveRun>
</archiveRun>

Configuration of a ForecastArchiver Module Instance

The ForecastArchiver can be used to make archives of forecasts that have been stored in the FEWS database. All forecasts that have been made, together with the data that has been used to make the forecasts, will be stored in the forecast archive. An example of the ForecastArchiver is shown below; the ForecastArchiver is scheduled to run once every 24 hours. The ForecastArchiver stores all forecasts in a zip file and places these in the folder configured in the [Link] file with the Tag ARCHIVE_EXPORT_PATH.

Archive_Forecast 1.00 [Link]


<archiveRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<importDirectory>$ARCHIVE_IMPORT_PATH$</importDirectory>
<exportDirectory>$ARCHIVE_EXPORT_PATH$</exportDirectory>
<exportArchiveRun>
<archivePeriod unit="hour" start="-24" end="0"/>
<exportToFile>true</exportToFile>
</exportArchiveRun>
</archiveRun>

Configuration of a TimeSeriesArchiver Module Instance

The TimeSeriesArchiver can be used to make archives of timeseries from selected module instances that will not be stored in a normal forecast archive. These time series can be performance indicators or imported data that is not used by any forecast run. All timeseries that have been stored in the database for the selected module instances will be stored in the archive. An example of the TimeSeriesArchiver is shown below; the TimeSeriesArchiver is scheduled to run once every 24 hours. The TimeSeriesArchiver stores all selected timeseries in a zip file and places these in the folder configured in the [Link] file with the Tag ARCHIVE_EXPORT_PATH.

Archive_TimeSeries 1.00 [Link]


<archiveRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<importDirectory>$ARCHIVE_IMPORT_PATH$</importDirectory>
<exportDirectory>$ARCHIVE_EXPORT_PATH$</exportDirectory>
<exportArchiveRun>
<archivePeriod unit="hour" start="-24" end="0"/>
<archiveType>TimeSeriesArchive</archiveType>
<includeModuleInstanceId>Import_Telemetry</includeModuleInstanceId>
<includeModuleInstanceId>Import_CWB_Database</includeModuleInstanceId>
<includeModuleInstanceId>Import_CWB_Grid</includeModuleInstanceId>
<includeModuleInstanceId>Import_Qpesums</includeModuleInstanceId>
<includeModuleInstanceId>Import_GFS</includeModuleInstanceId>
<includeModuleInstanceId>Import_TRMM</includeModuleInstanceId>
<includeModuleInstanceId>Import_WRF</includeModuleInstanceId>
</exportArchiveRun>
</archiveRun>

Workflow to create archives

The standard procedure is to run a scheduled archive workflow every 24 hours. The Archive workflow can also be started from the Manual
Forecast Display. An example of the Archive Workflow is shown below.

Archive_Scheduled 1.00 [Link]
<workflow xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_Thresholds</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_TimeSeries</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_Configuration</moduleInstanceId>
</activity>
</workflow>

Workflow to Import Archives

Archives can only be imported into a FEWS database on a Stand Alone System; it is not possible to import archives in a FEWS Operator Client. To import an archive, a workflow must be configured that runs the archive modules again, this time to import the archives from an import folder.

Import_Archive 1.00 [Link]


<workflow xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<!-- Run Import Archive Files on Stand Alone -->
<activity>
<moduleInstanceId>Import_Archive_Forecast</moduleInstanceId>
</activity>
<activity>
<moduleInstanceId>Import_Archive_Thresholds</moduleInstanceId>
</activity>
<activity>
<moduleInstanceId>Import_Archive_TimeSeries</moduleInstanceId>
</activity>
</workflow>

The module instances for importing forecasts, timeseries and threshold events are almost identical. Instead of configuring an exportArchiveRun, an importArchiveRun must be configured. The following example shows the configuration of an import archive module instance to import forecasts.

Import_Archive_Forecast 1.00 [Link]


<archiveRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<importDirectory>$ARCHIVE_IMPORT_PATH$</importDirectory>
<exportDirectory>$ARCHIVE_EXPORT_PATH$</exportDirectory>
<importArchiveRun>
<archiveType>ForecastArchive</archiveType>
</importArchiveRun>
</archiveRun>

Archive Display

It is possible to configure an Archive display that facilitates the retrieval of archives using a Stand Alone system. This Archive display can only be used when an Archive Server has been installed. For more information on installation of an Archive Server, please contact the FEWS Product Manager.

13 Rolling Barrel Module


Rolling Barrel Module
DELFT-FEWS is not intended as an archive for storing forecast related data indefinitely. To make sure that the database does not fill up, a rolling barrel of configurable length is applied to all data. All dynamic data is kept in the system for a configurable length of time. This is identified by setting an expiry time on each item of dynamic data saved in the database. DELFT-FEWS can run a module to remove data for which the expiry time has passed from the database. This module should be scheduled to run on a daily basis on the live system (usually at times when the system is not used extensively, e.g. around midnight). On operator clients the rolling barrel is run when the system identifies that enough data has expired to make the run worthwhile.

As all functional tasks are run by DELFT-FEWS through a workflow, a moduleInstance is created to allow the rolling barrel to be run. The module instance must be correctly registered in the moduleInstanceDescriptors (see Regional Configuration), and point to the relevant class in the moduleDescriptors configuration (see System Configuration). The module does not require any configuration; there is therefore no XML file available, nor need one be configured to run this module.
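Although no XML configuration file is needed for the module itself, it still has to be registered and called from a (scheduled) workflow. A minimal sketch is shown below; the descriptor id, the moduleInstanceId and the workflow name are illustrative assumptions, and the class name is left as a placeholder as elsewhere in this guide.

ModuleDescriptors 1.00 [Link]
<moduleDescriptor id="RollingBarrel">
<description>Rolling barrel</description>
<className>[Link]</className>
</moduleDescriptor>

RollingBarrel_Scheduled 1.00 [Link]
<workflow xmlns="[Link]" xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>RollingBarrel</moduleInstanceId>
</activity>
</workflow>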

14 Support Location Module


Support Locations Module Configuration
The support locations module from Delft FEWS is used for checking the completeness of time series sets. These time series sets are a combination of a location or location set, parameter, time step and time period. The time series are checked for missing data at the same time instance. If none of the series has a valid data value for a time instance, the support locations module will pop up a display.

In a stand alone forecasting system without backup profiles, the situation may occur that not all required forecasts or historical data are available. For such events the system automatically forces the user to create a user defined set of observations or forecasts. In such cases the system shows a table with the names of a set of base locations, also called Support Locations. These are locations that are tagged by the user as being locations for which the user always knows the observed or forecasted values. The user has to complete the table before the forecast can be executed any further. These values will be used as the base data on which the forecast is made.

When available as configuration on the file system, the name of the XML file for configuring an instance of the support locations module called for
example MeteoSupportLocations may be:

MeteoSupportLocations 1.00 [Link]

MeteoSupportLocations File name for the MeteoSupportLocations configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 139 Elements of the supportStations configuration.

supportStationsTimeSeriesSet

TimeSeriesSet defining the data to be used as support data. Only a single time series set may be defined; this time series set may include either a (list of) locationIds or a locationSetId.

dataTimeSeriesSets

TimeSeriesSets defining the data to be checked for missing data at the same time instance. Multiple time series sets may be defined, and each may include either a (list of) locationIds or a locationSetId.
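A minimal sketch of a support locations configuration, based on the element names in Figure 139, might look as follows. The root element name, the exact nesting of the time series set elements and all ids are assumptions for illustration only.

<supportStations>
<supportStationsTimeSeriesSet>
<moduleInstanceId>ImportMeteo</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P.obs</parameterId>
<locationSetId>SupportLocations_Meteo</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="-2" end="0"/>
<readWriteMode>read only</readWriteMode>
</supportStationsTimeSeriesSet>
<dataTimeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>ImportMeteo</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P.obs</parameterId>
<locationSetId>MeteoGauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="-2" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</dataTimeSeriesSets>
</supportStations>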

15 Scenario Module
Scenario Module Configuration
The scenarios module from Delft FEWS is used for generating scenario time series sets in a running forecast. These scenario time series sets are used to transform model input and output time series sets.

In a stand alone forecasting system a forecaster may want to run scenarios, or alternative forecasts, by using simple transformations on time series. The parameters used in the transformations are time series sets generated from coefficients entered in the scenarios display.

When available as configuration on the file system, the name of the XML file for configuring an instance of the scenarios module called for
example Makescenarios may be:

Makescenarios 1.00 [Link]

Makescenarios File name for the Makescenarios configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 140 Root element of the scenarios module.

Scenario

Root element for the definition of a scenario. Multiple entries may exist.

Attributes;

Id : id of the scenario defined. Used for reference purposes only. This Id will be included in log messages generated.
Name : name of the scenario defined. Used to show in the scenario display.

Figure 141 Elements of the scenarios configuration.

description

The description is only used to give some background information on the scenario.

scenarioVariable

Main element in the definition of a scenario. Multiple entries may exist.

variable

Definition of a variable in the scenario variable. Multiple entries may exist. The variable has a variableId as attribute and includes a time series set.

TimeSeriesSets

TimeSeriesSet defining the data to be generated from the variable values and transformation type.

transformationType

The transformationType is an enumeration of transformation functions that can be used in the scenario.

Equal
Linearwithstartvalueandendvalue
Linearwithstartvalueandincrement

defaultValue

The value used in the transformation function. For some of the transformation types multiple default value entries may exist.
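As an illustration, a scenarios configuration with a single scenario could be sketched as follows. The root element, the attribute and enumeration casing, and all ids are assumptions based on the element descriptions above.

<scenarios>
<scenario id="Rainfall_plus_10pct" name="Rainfall +10%">
<description>Multiply the forecast rainfall by a factor 1.1</description>
<scenarioVariable>
<variable variableId="P_factor">
<timeSeriesSet>
<moduleInstanceId>MakeScenarios</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P.factor</parameterId>
<locationId>ScenarioLocation</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="0" end="2"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformationType>equal</transformationType>
<defaultValue>1.1</defaultValue>
</scenarioVariable>
</scenario>
</scenarios>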

16 Pcraster Transformation (pcrTransformation)


What [Link]

Description Configuration of the pcraster transformation module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="PcrTransformation">


<description>PCr Transformation Component</description>
<className>[Link]</className>
</moduleDescriptor>

Introduction
Current status
Other documentation
Module Configuration
Defining Area Map
Examples
Defining internal, input and output variables
Examples
Defining the PCRaster Model
Example
Sample configuration to perform a typical PCRaster Transformation
Points precipitation to grid example

List of pcraster functions

Introduction
The pcrTransformation model allows a direct link between data in DELFT-FEWS and pcraster using the PCraster API based on in-memory
exchange of (XML) data. As such, Delft-Fews can use all available pcraster functions to process data. Pcraster documentation is available
elsewhere.

Current status
At this point a working version is available within Delft-Fews that can be used to perform all operations supported by PCRaster. This means that all time series data stored in Delft-Fews (grids and scalars) can be used as input to the module; all output is in grid format. If multiple timesteps are fed to the module at once, each timestep will be run separately, i.e. it is not possible to refer to data of a previous timestep within the module. A PCRaster model usually consists of an initial section (executed only once) and a dynamic section that is executed for each timestep. This version of the pcrTransformation only implements the initial section.

As of release 2008.3 the system includes support for dynamic scripts. Existing scripts will continue to work without modification (albeit significantly faster) and dynamic scripts are now supported.

Other documentation
Examples are available in the attached pdf document.

Module Configuration
The schema diagram is shown below; three main sections can be distinguished:

1. Definition of the Area Map
2. Definition of internal, input and output variables
3. Definition of the PCRaster Model itself

Multiple PCRaster models can be configured in a single file.

Defining Area Map

The diagram below shows the possible options when defining an area map. The area map can be defined using three methods:

1. Grid Definition (number of rows, columns etc.) (do not use if any of the other methods can be used)
2. Grid location Id (this will use a grid definition from the [Link] file in the RegionConfigFiles section).
3. TimeSeriesSet which defines a Grid TimeSeries (the grid definition is taken from the timeseries itself).

Examples

Here are a few examples, which show the different methods available when defining an Area Map.
1. The area map is defined as a (FEWS) grid location id which refers to the grid definition at the same location within the [Link] configuration file

<areaMap>
<locationId>H-2002</locationId>
</areaMap>

2. The area map is defined as a (FEWS Grid) TimeSeries Set. For details on how to define a TimeSeriesSet please refer to the FEWS Configuration Guide.

<areaMap>
<moduleInstanceId>ImportGrid</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>P.m</parameterId>
<locationId>MeteoGrid</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>read only</readWriteMode>
</areaMap>

3. The area map is given as a Grid Definition (not recommended). The grid definition contains information such as GeoDatum, coordinates of the upper left grid point, number of columns and rows, and cell width and height.

<areaMap>
<geoDatum>WGS 1984</geoDatum>
<upperLeftCorner>
<x>2</x>
<y>1</y>

<z>90</z>
</upperLeftCorner>
<rows>100</rows>
<columns>100</columns>
<cellwidth>0.1</cellwidth>
<cellheight>0.1</cellheight>
</areaMap>

Defining internal, input and output variables

The diagram below gives an overview of schema on how to define PCRaster Variables:

Here one can define:

1. Data Exchange Type - memory or file.

2. Internal PCRaster model variable, with a unique Id and data type. The data type (which corresponds exactly to the data types used in PCRaster) can be:

boolean
nominal
ordinal
scalar
ldd (not yet implemented?)
directional

3. Input PCRaster model variable. Used to get data from Delft-Fews and pass it on to the pcraster transformation module:

- Variable id (should match exactly the name defined in the PCRaster text model),
- Data type (similar to that used for internal variables),
- Scalar type of the data to be passed to PCRaster, if different from the normal data value. At present the following options are available:

timeInJulian
timeAsDayofYear
timeAsDayofMonth
timeAsHourofDay
timeAsDaysElapsedSince
timeAsHoursElapsedSince

- Reference date. Needed if the scalar type is defined as "timeAsDaysElapsedSince" or "timeAsHoursElapsedSince".
- Spatial type options are: spatial and non-spatial. Generally all grid input timeseries are treated as spatial data, while all scalar timeseries (or constant values) are treated as non-spatial. To treat a scalar timeseries value (single data value per time) or a constant value as spatial, one can set this option to "spatial". By doing so, the grid (as defined by the area map) will be filled with the (single) data value from the timeseries for the corresponding timestep. Hence for a given timestep, the input to the PCRaster model will be a grid with a constant value in all the grid cells.

However, there is an exception to the above-mentioned approach. If the input variable is a scalar timeseries at multiple locations (using a LocationSetId in the TimeSeriesSet definition) and the spatial type is set to spatial, then the following approach is used:

1. Create a grid (as defined by the area map),
2. Get the data value for the current time from the timeseries for the first location,
3. For this location, get the geo-reference coordinates (X, Y),
4. Get the corresponding grid cell within which the above location lies,
5. Put the data value as that grid cell value,
6. Get the data value for the current time from the timeseries for the next location,
7. Repeat steps 3-6 till all the data for the current time at all locations is read and the value is put in the appropriate grid cell.

The source of the input data is defined with one of the following elements:

- TimeSeriesSet, if the input data is a timeseries,
- Value, if the input data is a constant value, and
- External, if the data has to be read from an external PCRaster formatted file.

4. Output PCRaster model variable:

- Variable id (should match exactly the name defined in the PCRaster text model),
- Data type (similar to that for internal variables), and
- TimeSeriesSet; (at present) all output data should be grid timeseries.

Please note that the input variables should be regarded as read-only in the actual pcraster script. You should NOT try to modify them within the script. Make a copy in another variable (e.g. mycopy = theinputvar;) if this is needed.

Examples

Here are a few examples, showing different possibilities to define internal, input and output variables. Refer to the comments for details:

<definitions>
<!-- dataExchange options = Memory -->
<dataExchange>memory</dataExchange>
<!-- internalVariable name used within the PCRaster Test Model -->
<internalVariable variableId="blnmap" dataType="boolean"/>
<!-- internalVariable name used within the PCRaster Test Model
, now with the dataType as scalar -->
<internalVariable variableId="toSpatial" dataType="scalar"/>
<!-- InputVariable which refers to the external data file -->
<inputVariablevariableId="externalVar" dataType="scalar">
<external>d://[Link]</external>
</inputVariable>
<!-- Input Variable which refers to a TimeSeriesGrid Array, i.e. Grid as input -->
<inputVariable variableId="input" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>ModuleInstance</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- InputVariable which refers to the TimeSeries Float Array, i.e. scalar value per
time, non spatial in nature. In other words, a value per time distributed
constantly over the whole grid for calculation purposes -->
<inputVariable variableId="input" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId> ModuleInstance</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- InputVariable which refers to the Constant Value i.e, a constant scalar value
irrespective of time and non spatial in nature. In other words, a constant
value distributed constantly over the whole grid for calculation purpose -->
<inputVariablevariableId="constant" dataType="scalar">
<value>10</value>
</inputVariable>

<!-- InputVariable which refers to the TimeSeries Float Array for multiple location
(given by locationSetID) i.e, scalar value per time, spatial in nature (as
given by spatialType), and defined only at grid cells where contains the
location. In other words, the grid cell which contains the georeference
position of the location -->
<inputVariablevariableId="input" dataType="scalar"spatialType="spatial" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>ModuleInstance</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>TestLocLiesWithinGrid_H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- InputVariable which refers to the TimeSeries Float Array (NonSpatial) ,
however the time is passed to PCRaster (scalarType =
timeAsDaysElapsedSince). For the scalar Type defined as time "Elapsed
Since" the reference Date has to be defined -->

<inputVariablevariableId="input" dataType="scalar" scalarType="timeAsDaysElapsedSince" referenceDate="1984-10-30"


convertDatum="false">

<timeSeriesSet>
<moduleInstanceId>ModuleInstance</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>

</inputVariable>

<outputVariable variableId="transfmap" dataType="scalar" convertDatum="false">


<timeSeriesSet>
<moduleInstanceId>OutputModuleInstance</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2003</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>

Defining the PCRaster Model

In this section, one can provide the PCRaster model as simple ASCII text. The text model given here is in fact a valid PCRaster model and can be run directly in PCRaster, except that it does not contain any area map or variable definition part. All the variables used within this model should appear in the definition section as described above.

Example

Here is an example, showing how to configure a PCRaster Model.

<pcrModel id="String">
<text>
<!-- PCRaster accepts # as comment -->
# there is no dynamic section!
# initial
# result should be the grid within constant value of 1.8 and 0.8
# generate unique Id's
Unq = uniqueid(boolean(input));
transfmap = spreadzone(ordinal(cover(Unq,0)),0,1);
</text>
</pcrModel>

Please remember, the PCRaster model defined here is a full PCRaster model script written in the PCRaster Modelling Environment language. Take care that all the variable ids defined in the variable definition section match the variables used here in the model. In other words, the model defined here should be a PCRaster compatible script.

Sample configuration to perform a typical PCRaster Transformation


A working sample configuration for PcrTransformation is shown as below:

<?xml version="1.0" encoding="UTF-8"?>


<!-- Solar radiation module demonstration configuration -->
<pcrTransformationSets xmlns="[Link]" xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<logLevel>WARN</logLevel>
<pcrTransformationSet id="Potradiation">
<areaMap>
<locationId>Radiation</locationId>
</areaMap>
<definitions>
<dataExchange>memory</dataExchange>
<inputVariable variableId="Altitude" dataType="scalar" convertDatum="false" spatialType="spatial">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<inputVariable variableId="YearDay" dataType="scalar" convertDatum="false"
scalarType="timeAsDayofYear">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<inputVariable variableId="Hour" dataType="scalar" convertDatum="false"
scalarType="timeAsHourofDay">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>

<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- Total potential Solar radiation -->
<outputVariable variableId="SL" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
<!-- Diffuse radiation -->
<outputVariable variableId="SLDF" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
<!-- direct radiation -->
<outputVariable variableId="SLDR" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</definitions>
<pcrModel id="String">
<text><![CDATA[
#! --unittrue --degrees
# Test script to determine radiation over a grid.
#
# Inputs from Delft-Fews into this script
# - YearDay -> scalar with day since beginning of year
# - Hour of day -> Fractional hour of day (e.g. 12.5 = 12:30)
# Outputs to FEWS
# - SL -> Total Solar radiation
#
# This version determines Clear Sky radiation assuming a level surface using a uniform
# altitude. This level is configured in the script below.
Altitude=spatial(10);

Latitude = ycoordinate(boolean(Altitude));
Longitude = xcoordinate(boolean(Altitude));

Day =YearDay;
pi = 3.1416;
Sc = 1367.0; # Solar constant (Gates, 1980) [W/m2]

Trans = 0.6; # Transmissivity tau (Gates, 1980)

AtmPcor = ((288-0.0065*Altitude)/288)**5.256; # atm pressure corr [-]

# Solar geometry
# ----------------------------
# SolDec :declination sun per day between +23 and -23 [deg]
# HourAng :hour angle [-] of sun during day
# SolAlt :solar altitude [deg], height of sun above horizon
# SolDec = -23.4*cos(360*(Day+10)/365);
# Now added a new function that should work on all latitudes!
theta =(Day-1)*360/365; # day expressed in degrees

# Time change equal to 4 min per degree longitude
# Assume the time input to be GMT
HourS = Hour + (Longitude * 4/60);

SolDec = 180/pi * (0.006918 - 0.399912*cos(theta) + 0.070257*sin(theta) - 0.006758*cos(2*theta)
       + 0.000907*sin(2*theta) - 0.002697*cos(3*theta) + 0.001480*sin(3*theta));

HourAng = 15*(HourS-12.01);

SolAlt = scalar(asin(scalar(sin(Latitude)*sin(SolDec)+cos(Latitude)*
cos(SolDec)*cos(HourAng))));

# Solar azimuth
# ----------------------------
# SolAzi :angle solar beams to N-S axes earth [deg]
SolAzi = scalar(acos((sin(SolDec)*cos(Latitude)-cos(SolDec)*
sin(Latitude)*cos(HourAng))/cos(SolAlt)));
SolAzi = if(HourS le 12 then SolAzi else 360 - SolAzi);

Slope = spatial(0.0001);
Aspect = spatial(1);

# Surface azimuth
# ----------------------------
# cosIncident :cosine of angle of incident; angle solar beams to angle surface
cosIncident = sin(SolAlt)*cos(Slope)+cos(SolAlt)*sin(Slope)
*cos(SolAzi-Aspect);

# Radiation outer atmosphere
# ----------------------------
OpCorr = Trans**((sqrt(1229+(614*sin(SolAlt))**2)
-614*sin(SolAlt))*AtmPcor); # correction for air masses [-]
Sout = Sc*(1+0.034*cos(360*Day/365)); # radiation outer atmosphere [W/m2]
Snor = Sout*OpCorr; # rad on surface normal to the beam [W/m2]

# Radiation at DEM
# ----------------------------
# Sdir :direct sunlight on a horizontal surface [W/m2] if no shade
# Sdiff :diffuse light [W/m2] for shade and no shade
# Stot :total incomming light Sdir+Sdiff [W/m2] at Hour
# Radiation :avg of Stot(Hour) and Stot(Hour-HourStep)
# NOTE: PradM only valid for HourStep and DayStep = 1
Sdir = if(Snor*cosIncident<0,0.0,Snor*cosIncident);
Sdiff = if(Sout*(0.271-0.294*OpCorr)*sin(SolAlt)<0, 0.0,
Sout*(0.271-0.294*OpCorr)*sin(SolAlt));

# Fill in missing values with areaaverage
SLDR=cover((Sdir*1),(Altitude * 0) + areaaverage(Sdir*1,boolean(Altitude))); # hourly rad [W/m2]
SLDF=cover((Sdiff*1),(Altitude * 0) + areaaverage(Sdiff*1,boolean(Altitude))); # hourly rad [W/m2]

SL = SLDR + SLDF; # Total rad in [W/m2]

]]></text>
</pcrModel>

</pcrTransformationSet>
</pcrTransformationSets>

Points precipitation to grid example

<?xml version="1.0" encoding="UTF-8"?>


<pcrTransformationSets xmlns="[Link]" xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<logLevel>WARN</logLevel>
<pcrTransformationSet id="Thiessen">
<areaMap>
<locationId>FineGrid</locationId>
</areaMap>
<definitions>
<dataExchange>memory</dataExchange>
<inputVariable variableId="P" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>ImportPubRts</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>MetGauges_P.obs</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<relativeViewPeriod unit="hour" start="-96" startOverrulable="true" end="0"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable variableId="MeasMap" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>PrecipitationGaugeToGrid_Historical</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>FineGrid</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<relativeViewPeriod unit="hour" start="-96" end="0" startOverrulable="true"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</definitions>
<pcrModel id="String">
<text><![CDATA[#! --unittrue --degrees
dynamic
# Simple Thiessen polygons to get spatially averaged precipitation on a grid

# Create unique Ids for input stations
Unq = uniqueid(boolean(P));
# Now generate polygons and fill those
GaugeArea = spreadzone(ordinal(cover(Unq,0)),0,1);
MeasMap = areaaverage(P,GaugeArea);
]]></text>
</pcrModel>
</pcrTransformationSet>
</pcrTransformationSets>

List of pcraster functions


See also the following external links:

[Link]

[Link]

[Link]

+ -- Addition

- -- Subtraction

/ or div -- Division

* -- Multiplication ** -- nth power of a first expression, where n is the value of a second expression
abs -- Absolute value
accucapacityflux, accucapacitystate -- Transport of material downstream over a local drain direction network
accuflux -- Accumulated material flowing into downstream cell
accufractionflux, accufractionstate -- Fractional material transport downstream over local drain direction network
accuthresholdflux, accuthresholdstate -- Input of material downstream over a local drain direction network when transport threshold is exceeded
accutriggerflux, accutriggerstate -- Input of material downstream over a local drain direction network when transport trigger is exceeded
acos -- Inverse cosine
and -- Boolean-AND operation
areaarea -- The area of the area to which a cell belongs
areaaverage -- Average cell value of within an area
areadiversity -- Number of unique cell values within an area
areamajority -- Most often occurring cell value within an area
areamaximum -- Maximum cell value within an area
areaminimum -- Minimum cell value within an area
areanormal -- Value assigned to an area taken from a normal distribution
areatotal -- Sum of cell values within an area
areauniform -- Value assigned to an area taken from a uniform distribution
asin -- Inverse sine
aspect -- Aspects of a map using a digital elevation model
atan -- Inverse tangent
boolean -- Data conversion to the boolean data type
catchment -- Catchment(s) of one or more specified cells
catchmenttotal -- Total catchment for the entire upstream area
cellarea -- Area of one cell
celllength -- Horizontal and vertical length of a cell
clump -- Contiguous groups of cells with the same value ('clumps')
cos -- Cosine
cover -- Missing values substituted for values from one or more expression(s)
defined -- Boolean TRUE for non missing values and FALSE for missing values
directional -- Data conversion to the directional data type
downstream -- Cell gets value of the neighbouring downstream cell
downstreamdist -- Distance to the first cell downstream
eq or == -- Relational-equal-to operation on two expressions
exp -- Base e exponential
fac -- Faculty or factorial of a natural positive number
ge or >= -- Relational-greater-than-or-equal-to operation
gt or > -- Relational-greater-than operation
idiv -- Quotient of integer division of values on first expression by values on second expression
if then -- Boolean condition determining whether value of expression or missing value is assigned to result
if then else -- Boolean condition determining whether value of the first or second expression is assigned to result
kinematic -- Dynamic calculation of streamflow through a channel
ldd -- Data conversion from specific data types to local drain direction data type
lddcreate -- Local drain direction map with flow directions from each cell to its steepest downslope neighbour
lddcreatedem -- Modified digital elevation model
ldddist -- Friction-distance from the cell under consideration to downstream nearest TRUE cell
lddmask -- Local drain direction map cut into a (smaller) sound local drain direction map
lddrepair -- Reparation of unsound local drain direction map
le or <= -- Relational-less-than-or-equal-to operation
ln -- Natural logarithm (e)
log10 -- Log 10
lookup -- Compares cell value(s) of one or more expression(s) with the search key in a table
lt or < -- Relational-less-than operation
maparea -- Total map area
mapmaximum -- Maximum cell value
mapminimum -- Minimum cell value
mapnormal -- Cells get non spatial value taken from a normal distribution
maptotal -- Sum of all cell values
mapuniform -- Cells get non spatial value taken from an uniform distribution
max -- Maximum value of multiple expressions
min -- Minimum value of multiple expressions
mod -- Remainder of integer division of values on first expression by values on second expression
ne or != -- Relational-not-equal-to operation
nodirection -- Expression of directional data type
nominal -- Data conversion to the nominal data type
normal -- Boolean TRUE cell gets value taken from a normal distribution
not -- Boolean-NOT operation

or -- Boolean-OR operation
order -- Ordinal numbers to cells in ascending order
ordinal -- Data conversion to the ordinal data type
path -- Path over the local drain direction network downstream to its pit
pit -- Unique value for each pit cell
plancurv -- Planform curvature calculation using a DEM
pred -- Ordinal number of the next lower ordinal class
profcurv -- Profile curvature calculation using a DEM
rounddown -- Rounding down of cellvalues to whole numbers
roundoff -- Rounding off of cellvalues to whole numbers
roundup -- Rounding up of cellvalues to whole numbers
scalar -- Data conversion to the scalar data type
sin -- Sine
slope -- Slope of cells using a digital elevation model
slopelength -- Accumulative-friction-distance of the longest accumulative-friction-path upstream over the local drain direction network cells against
waterbasin divides
spread -- Total friction of the shortest accumulated friction path over a map with friction values from source cell to cell under consideration
spreadldd -- Total friction of the shortest accumulated friction downstream path over map with friction values from an source cell to cell under
consideration
spreadlddzone -- Shortest friction-distance path over map with friction from a source cell to cell under consideration, only paths in downstream
direction from the source cell are considered
spreadmax -- Total friction of the shortest accumulated friction path over a map with friction values from a source cell to cell under consideration
spreadzone -- Shortest friction-distance path over a map with friction from an identified source cell or cells to the cell under consideration
sqr -- Square
sqrt -- Square root
streamorder -- Stream order index of all cells on a local drain direction network
subcatchment -- (Sub-)Catchment(s) (watershed, basin) of each one or more specified cells
succ -- Ordinal number of the next higher ordinal class
tan -- Tangent
time -- Timestep
timeinput... -- Cell values per timestep read from a time series that is linked to a map with unique identifiers
timeinput -- Set of output maps per timestep with an extension that refers to the time at the timestep
timeoutput -- Expression value of an uniquely identified cell or cells written to a time series per timestep
timeslice -- Timeslice
uniform -- Boolean TRUE cell gets value from an uniform distribution
uniqueid -- Unique whole value for each Boolean TRUE cell
upstream -- Sum of the cell values of its first upstream cell(s)
view -- TRUE or FALSE value for visibility from viewpoint(s) defined by a digital elevation model
windowaverage -- Average of cell values within a specified square neighbourhood
windowdiversity -- Number of unique values within a specified square neighbourhood
windowhighpass -- Increases spatial frequency within a specified square neighbourhood
windowmajority -- Most occurring cell value within a specified square neighbourhood
windowmaximum -- Maximum cell value within a specified square neighbourhood
windowminimum -- Minimum value within a specified square neighbourhood
windowtotal -- Sum of values within a specified square neighbourhood
xcoordinate -- X-coordinate of each Boolean TRUE cell
xor -- Boolean-XOR operation
ycoordinate -- Y-coordinate of each Boolean TRUE cell

17 WorkflowLooprunner
What [Link]

Description Modify workflow run period based on a timeseries threshold

schema location [Link]

What is the WorkflowLooprunner?


Run SOBEK around the maximum value of a given period
Run SOBEK where the discharge exceeds a pre-defined threshold
Run Model for entire view period if pre-defined threshold is exceeded

Although this manual mentions the SOBEK model, this module can be used for any external module (e.g. also ISIS).

What is the WorkflowLooprunner?

To decrease the run time, the SOBEK model for the Rhine basin is only run for periods where the discharge at Lobith exceeds a given threshold. Likewise, the SOBEK model for the Meuse basin is only run for periods where the discharge at Borgharen exceeds a given threshold. To achieve this a so-called WorkflowLoopRunner was configured in FEWS. There are two options to select the periods for which the SOBEK model shall be run. The first option is to define a threshold for a given time series. If this threshold is exceeded the SOBEK model will be run. The second option is to define the length of a time interval, e.g. yearly. In each time interval SOBEK is run for a defined time window around the maximum value.

These options can be configured in the WorkflowLoopRunner "GRADE_SBKdag_Rijn_SelectedPeaks_Update.xml" for the Rhine basin and in the
file "GRADE_SBKdag_Maas_SelectedPeaks_Update.xml" for the Meuse basin. Currently the WorkflowLoopRunners are configured such that the
SOBEK model is run within a period of ten days before and two days after the maximum value of the discharge in Lobith respectively Borgharen
in a time interval of 40 years.

The next two sections explain how to configure the different options in FEWS.

Run SOBEK around the maximum value of a given period

In the Workflow LoopRunner the following things have to be defined:

trigger option
trigger time series
relative view period
step value option
step size
relative run window

First, choose the trigger option "Step Value Trigger" to run SOBEK for a period around a maximum in a pre-defined time interval. Then define the trigger time series to which the maximum refers and the relative view period for which the WorkflowLoopRunner shall be run. After that define the step value option; you can choose between maximum and minimum. The step size defines the time interval within the relative view period. For each time interval the maximum/minimum value of the trigger time series will be determined.
The relative run window defines the period over which the SOBEK model is run around the maximum/minimum discharge value.
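Purely as an illustration of how the items listed above relate to each other, a step value trigger configuration could look roughly like the sketch below. All element names and ids are assumptions; the actual names and structure must be taken from the workflowLoopRunner schema and from Figure 1.

<workflowLoopRunner>
<workflowId>SOBEK_Rijn_Update</workflowId>
<triggerTimeSeries>
<moduleInstanceId>HBV_Rijn_Update</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.sim</parameterId>
<locationId>Lobith</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-14610" end="0"/>
</triggerTimeSeries>
<stepValueTrigger>
<stepValueOption>maximum</stepValueOption>
<stepSize unit="day" multiplier="14610"/>
</stepValueTrigger>
<relativeRunWindow unit="day" start="-10" end="2"/>
</workflowLoopRunner>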

As an example see the schema of the WorkflowLoopRunner configuration file in Figure 1. This example shows how to run SOBEK for a period
around one maximum value in 40 years time.

Figure 1 WorkflowLoopRunner with trigger option "step value trigger"

In the example file the trigger time series is [Link], the discharge at Lobith calculated from HBV. The step value option is the maximum value of the
time series. According to the relative run window the SOBEK model is run from ten days before the maximum value until two days after the
maximum value.

In Figure 2 you can see how the relative run window is defined. On the left hand side of the figure the step size (time interval) is a third of the
relative view period. For each time interval the maximum value is defined. Around that maximum the relative run window for SOBEK is defined. If
the relative run windows from different time intervals overlap, SOBEK is run for the merged relative run window. On the right hand side of the
figure the step size equals the relative view period, thus resulting in one relative run window.

Figure 2: Definition of the relative run window for the trigger option "step value trigger", for a step size different from the relative view period (left) and a step size equal to the relative view period (right)

It is recommended to choose the time step size in such a way that the relative view period is a multiple of the time step size. If this is not the case, the last part of the relative view period is not taken into account; e.g. if the relative view period contains ten days and the time step size is three days, then the last day of the relative view period will be ignored in the WorkflowLoopRunner. Furthermore, choose an adequate time period before the maximum value, so that the model can simulate the peak sufficiently. Also keep in mind that the run time increases if the relative run window increases. Besides, one single SOBEK run must not be longer than 30 days, due to an internal setting in the SOBEK model.

Run SOBEK where the discharge exceeds a pre-defined threshold

In the Workflow LoopRunner the following things have to be defined:

• trigger option
• trigger time series
• relative view period
• value option
• value
• relative run window

Choose the trigger option "Value Trigger" to run SOBEK for a period where the trigger time series exceeds a given threshold. Then define the trigger time series and the relative view period for which the WorkflowLoopRunner shall be run. With the value option you can configure in which direction the trigger is activated; you can choose between "below" and "above" a given threshold. Define the threshold value in the field "value".

The relative run window defines the period over which the SOBEK model is run. The period contains the whole period where the trigger time series exceeds the defined threshold and optionally a period before and after that time. The additional time can be defined with the start and end time of the relative run window. The start time is relative to the time where the threshold is first exceeded. The end time is relative to the time where the trigger time series drops below the threshold value again. In Figure 3 you can see how the relative run window is defined. The green line demonstrates the part of the relative run window where the threshold is exceeded. The red parts of the relative run window represent the additional time which can optionally be added to increase the relative run window. If the relative run windows of two different peaks overlap, SOBEK is run for the merged relative run window.

Figure 3: Definition of the relative run window for the trigger option "value trigger"

When you define the threshold value you should take into account that the relative run window must not get too long. Otherwise the workflow
might fail, because of a time out error. Nevertheless SOBEK has to run for an adequate time to simulate the peaks sufficiently. Also keep in mind
that the run time increases if the relative run window increases. Besides, one single SOBEK run must not be longer than 30 days, due to an
internal setting in the SOBEK model.

As an example for configuring the WorkflowLoopRunner see Figure 4.

Figure 4: WorkflowLoopRunner with trigger option "value trigger"

In the example file the trigger time series is [Link], the discharge at Lobith calculated by HBV. The relative view period is 14610 days (40 years). As value option "above" was chosen. The threshold value is 4000. The relative run window includes the period where the discharge at Lobith exceeds the threshold value, as well as two days before and after that period.

Run Model for entire view period if pre-defined threshold is exceeded

Next to running the model for a relative run window based on threshold values found in the relative view period of the indicated timeSeriesSet, an option exists to run the model for the entire view period in this case. This implies that the relative run window is equal to the total relative view period, independent of the instant at which the threshold value is exceeded. This is illustrated in Figure 5.

Figure 5: Definition of the run window for the runPeriodOption alwaysFullPeriod="true".

This feature is enabled by setting the alwaysFullPeriod flag to true in the runPeriodOptions. This is illustrated in Figure 6.

Figure 6: Run model for entire view period by selecting alwaysFullPeriod="true".

18 Mass-balances
What [Link]

Required no

Description Determine mass balance for a specified polygon

schema location [Link]

Introduction
Horizontal flux
Vertical flux
Storage change
Remarks

Introduction
The mass-balances module determines the inflow, outflow and storage change within a given polygon from available flow fields. The various parts of the mass balance are computed separately and each results in a scalar timeseries:

To compute the horizontal inflow and outflow, you need to have timeseries of the flow in x- and y-direction defined on a rectangular grid.
To compute the vertical flow, you need to have timeseries of the flow coming in through the lower face of the grid cells and the flow
coming in through the upper face.
To compute the storage change, you need either the storage change per grid cell or the water table per grid cell. In the first case, the
computation consists of summing the values over all grid cells within the given polygon, in the second case, the change over time must
be computed as well.

The polygon for which the mass balance is determined is defined via the location set of the output timeseries: the locations defined in that set are
taken as the vertices of the polygon. Grid cells are considered to be inside the polygon if their centre is.

In the sections below the different elements of the configuration are described.

– TODO: screendumps of the schema parts –

Horizontal flux

The input for determining the horizontal flux consists of:

The flow velocity in the x-direction


The flow velocity in the y-direction

The timeseries must be defined on the same rectangular grid for the same times.

The output consists of a timeseries of the nett in- and outflow, where the flow rate through the side faces is computed as the flow velocity times
the length of the side times a thickness of 1 m. The result is a flow rate in m3/s (assuming the flow velocity is given in m/s and the grid size in m).
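
For example (illustrative numbers): a flow velocity of 0.2 m/s through a grid cell side of 100 m contributes 0.2 x 100 x 1 = 20 m3/s to the horizontal in- or outflow.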

Vertical flux

The input for determining the vertical flux consists of:

The flow velocity at the lower face of each grid cell


The flow velocity at the upper face of each grid cell

The timeseries must be defined on the same rectangular grid for the same times.

The output consists of a timeseries of the nett in- and outflow, where the flow rate through the faces is computed as the flow velocity times
the length and width of the grid cell. The result is a flow rate in m3/s (assuming the flow velocity is given in m/s and the grid size in m). Effects of
porosity are not taken into account.

Storage change

The input for determining the storage change consists of either:

The storage change rate per grid cell (that is, the change in the water table per time step)

or

The water table or water level per grid cell

The timeseries must be defined on the same rectangular grid, and in the latter case there must be at least two times.

The output consists of a timeseries of the nett change in storage, where the stored volume per grid cell is computed as the water table times the
length and width of the grid cell. The result is the change in the volume of water present within the area delimited by the polygon. Effects of
porosity are not taken into account.

Remarks
While the above description refers to volume or mass balances, the module is more generally applicable to any parameter that represents a mass
balance. For instance, if instead of flow velocities you specify the flux of nutrients (concentration times flow velocity), you can compute the nett
inflow/outflow of nutrients through the given polygon.

Porosity is not taken into account in the module, but you can correct for that via the transformation module.

19 Rating curves
What [Link]

Description Configuration for rating curves at specific locations

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="Transformation">


<description>General transformation Component</description>
<className>[Link]</className>
</moduleDescriptor>

Rating Curves Module Configuration


The rating curves module is a standard file (i.e. it has a fixed name) which contains all the rating curves used in your system. A rating curve is
referenced by the location id, i.e. each location has a specific rating curve for that point in the river. Furthermore, a rating curve can have one or
more periods for which it is valid. For each location there can be different rating curves that are valid at different times.

When the transformation module needs to use a rating curve for a given location at a given time, then it will search all rating curves for the given
location id. From all rating curves with the given location id, it will use the rating curve that is valid for the given time.

It is also possible to have different rating curves with the same location id, the same rating curve type and with overlapping valid periods, as long
as they have different rating curve ids. This makes it possible to have rating curves with valid periods that only have a start date (no end date),
which are valid until the next rating curve with the next start date becomes valid. In this case, if multiple rating curves are valid for a given time,
then the transformation module will use the rating curve that has the most recent start date in its valid period.
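
The sketch below illustrates this for two rating curves at the same location that differ only in their rating curve id and the start date of their valid period; the later curve supersedes the earlier one from its start date onwards. The ratingCurve, location and ratingCurveType elements follow the table example later in this section, but the validPeriod element and its children are shown here for illustration only and should be checked against the rating curves schema.

<ratingCurve ratingcurveid="X1123_2008">
<location>
<locationId>X1123</locationId>
</location>
<ratingCurveType>LevelToFlow</ratingCurveType>
<!-- illustrative: valid from 1 January 2008, no end date -->
<validPeriod>
<startDateTime date="2008-01-01"/>
</validPeriod>
...
</ratingCurve>
<ratingCurve ratingcurveid="X1123_2009">
<location>
<locationId>X1123</locationId>
</location>
<ratingCurveType>LevelToFlow</ratingCurveType>
<!-- illustrative: used instead of the 2008 curve from 1 January 2009 onwards -->
<validPeriod>
<startDateTime date="2009-01-01"/>
</validPeriod>
...
</ratingCurve>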

The ratings are either a "qhrelationtable" or a "simpleratingcurve", referred to by the hydroMeteoFunction in the transformation module. An example of the
reference from the transformation module is shown below:

<hydroMeteoFunction ratingcurvetype="LevelToFlow" outputvariableid="Flow" function=
"simpleratingcurve" useratingcurve="true"/>

Rating curve

Figure 1: Overview of rating curve config file

location for which the rating is valid should be the same as in the [Link]
ratingCurveType you can choose from either LevelToFlow or FlowToLevel
reversible if this option is set to "true" then both level to flow and flow to level calculations will be allowed
ValidPeriod allows you to enter a start and end date for which the rating is valid (e.g. a summer and winter rating). The dates and times can be
specified with or without a time zone. Use e.g. 2008-06-20T[Link]+05:00 for a time in time zone GMT+05:00. Use e.g. 2008-06-20T[Link]Z
for a time in GMT, where the Z means GMT. If a time is specified without a time zone, e.g. 2009-12-01T[Link], then the time is assumed to be
in local time. Note: 2008-06-20 [Link] in time zone GMT+5:00 is physically the same time as 2008-06-20 [Link] in GMT.
correction see below
ratingCurveTable see below
ratingCurveEquation see below

correction (not yet implemented)

Figure 2: correction complex type

The correction complex type allows the user to specify a correction technique for unsteady flow (Jones equation) or backwater (constant fall
method or normal fall method).

jonesEquation the user must specify the minimum h for which the method is valid (h_min) and the a, b and c parameters - see below

The Jones Equation is of the form:

where:
Qm = unsteady discharge
Qc = steady discharge
S0 = energy slope for steady flow
vw = wave velocity
dh/dt = rate of change of water level in time (m/day)

The adjustment factor 1/S0vw (day/m) varies with water level. This factor is fitted by a parabolic function of h, valid for h > hmin.
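
The equations themselves are not reproduced in this extract. For reference, the Jones correction is usually written as

Qm = Qc * sqrt(1 + (1/(S0*vw)) * dh/dt)

and the adjustment factor 1/(S0*vw) is approximated by the parabola a + b*h + c*h^2 for h > hmin, with a, b and c the configured parameters (these are the standard forms of the Jones formula; verify against the schema documentation).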

Twin gauge station fall-discharge methods

Stage-fall-discharge or twin gauge station fall-discharge methods are used to include backwater effects on stage-discharge ratings.

In these methods the fall F between the water level at the discharge measuring site and a downstream station is considered as an additional
parameter, to account for the effect of water surface slope on discharge. Both the constant fall method and normal fall method are based on the
following equation:

Where:
Qm = backwater affected discharge
Qr = reference discharge
Fm = measured fall
Fr = reference fall
p = power, with 0.4 < p < 0.6
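
The equation itself is not reproduced in this extract; in its usual form it reads

Qm = Qr * (Fm/Fr)^p

i.e. the backwater affected discharge equals the reference discharge scaled by the ratio of measured fall to reference fall raised to the power p (standard form; verify against the schema documentation).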

constant fall method

In this method the reference fall Fr is taken as a constant. A special case of the constant-fall method is the unit-fall method, where Fr = 1m is
applied. In the computational procedure a value for Fr is assumed. Then a rating curve is fitted to the values:

normal fall method

In this method the reference fall (Fr) is modelled as a function of the water level; Fr = f(h). This function is represented by a parabola:

In FEWS you should specify the a, b and c parameters and also a value of hmin, below which the backwater correction is not valid.
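
Assuming the usual parabolic form, this amounts to Fr = a + b*h + c*h^2 for h > hmin (reconstruction; the original equation is not reproduced in this extract).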

Rating Curve Table

ratingCurveTable this allows you to simply enter the pairs of q and h values.

<ratingCurve ratingcurveid="TheBigRiver">
<location>
<locationId>X1123</locationId>
</location>
<ratingCurveType>LevelToFlow</ratingCurveType>
<reversible>true</reversible>
<ratingCurveTable>
<ratingCurveTableRecord flow="0.100" level="0.054"/>
<ratingCurveTableRecord flow="0.500" level="0.155"/>
<ratingCurveTableRecord flow="1.000" level="0.244"/>
<ratingCurveTableRecord flow="1.479" level="0.317"/>
</ratingCurveTable>
</ratingCurve>

Rating Curve Equation

A rating curve equation can be defined per section of the rating curve using the lowerLevel and upperLevel tags. The equation can be in the form
of a power equation or a parabola. The form of the equation:

Power equation

Parabola

In FEWS you should configure the appropriate values for a, b and c.

Here is an example:

<location>
<locationId>234206</locationId>
</location>
<ratingCurveType>LevelToFlow</ratingCurveType>
<reversible>true</reversible>
<ratingCurveEquation>
<lowerLevel>0</lowerLevel>
<upperLevel>0.391</upperLevel>
<equation>Power</equation>
<a>11.9001</a>
<b>0</b>
<c>1.55067</c>
</ratingCurveEquation>
<ratingCurveEquation>
<lowerLevel>0.391</lowerLevel>
<upperLevel>0.807</upperLevel>
<equation>Power</equation>
<a>16.6258</a>
<b>-0.1</b>
<c>1.8564</c>
</ratingCurveEquation>

20 Transformation Module (Improved schema)
What [Link]

Description Configuration for the new version of the transformation module

schema location [Link]

Entry in ModuleDescriptors <moduleDescriptor id="TransformationModule">


<description>Transformation Module</description>
<className>[Link]</className>

</moduleDescriptor>

Contents
Transformation Module Configuration (New Version)
Configuration

Accumulation Transformations
Adjust Transformations
Aggregation transformations
DisaggregationTransformations
DischargeStage Transformations
Events Transformations
Filter Transformations
Interpolation Serial Transformations
Interpolation Spatial Transformations
Lookup transformations
Merge Transformations
Review transformations
StageDischarge transformations
Statistics Summary Transformations
Structure Transformations
TimeShift
User Transformations
DayMonth Sample
PCA and Regression Transformation
Selection Transformations

Transformation Module Configuration (New Version)


The Transformation module is a general-purpose module that allows for generic transformation and manipulation of time series data. The module
may be configured to provide for simple arithmetic manipulation, time interval transformation, shifting the series in time etc, as well as for applying
specific hydro-meteorological transformations such as stage discharge relationships etc.

An improved version of the FEWS Transformation Module is currently under construction. The new version is much easier to configure
than the old version. It uses a new schema for configuration, and several new transformations have been added.

Configuration

When available as configuration on the file system, the name of an XML file for configuring an instance of the transformation module called for
example TransformHBV_Inputs may be:

TransformHBV_Inputs 1.00 [Link].

TransformHBV_Inputs File name for the TransformHBV_Inputs configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

The configuration for the transformation module consists of two parts: transformation configuration files in the Config/ModuleConfigFiles directory
and coefficient set configuration files in the Config/CoefficientSetsFiles directory.

In a transformation configuration file one or more transformations can be configured. Some transformations require coefficient sets in which given
coefficients are defined. For a given transformation that requires a coefficient set there are different ways of defining the coefficient set in the
configuration. One way is to specify an embedded coefficient set in the transformation configuration itself. Another way is to put a reference in the
transformation configuration. This reference consists of the name of a separate coefficient set configuration file and the id of a coefficient set in
that file.

Both the transformations and coefficient sets can be configured to be time dependent. This can be used for instance to define a given coefficient
value to be 3 from 1 January 2008 to 1 January 2009, and to be 4 from 1 January 2009 onwards. This can be done by defining multiple
periodCoefficientSets, each one with a different period, as in the following xml example.

<periodCoefficientSet>
<period>
<startDateTime date="2008-01-01" time="[Link]"/>
<endDateTime date="2009-01-01" time="[Link]"/>
</period>
<structure>
<pumpFixedDischarge>
<discharge>3</discharge>
</pumpFixedDischarge>
</structure>
</periodCoefficientSet>

<periodCoefficientSet>
<period>
<validAfterDateTime date="2009-01-01"/>
</period>
<structure>
<pumpFixedDischarge>
<discharge>4</discharge>
</pumpFixedDischarge>
</structure>
</periodCoefficientSet>

If a date is specified without a time, then the time is assumed to be [Link], so <validAfterDateTime date="2009-01-01"/> is the same as
<validAfterDateTime date="2009-01-01" time="[Link]"/>. To specify dates and times in a particular time zone use the optional time zone
element at the beginning of a transformations or a coefficient sets configuration file, e.g. <timeZone>GMT+5:00</timeZone>. Then all dates and
times in that configuration file are in the defined time zone. If no time zone is defined, then dates and times are in GMT. Note: 2008-06-20
[Link] in time zone GMT+5:00 is physically the same time as 2008-06-20 [Link] in GMT.

If for a given transformation there are different coefficientSets configured for different periods in time, then the following rule is used. The start of a
period is always inclusive. The end of a period is exclusive if another period follows without a gap in between, otherwise the end of the period is
inclusive. If for example there are three periodCoefficientSets defined (A, B and C), each with a different period, as in the following xml example.
Then at 2002-01-01 [Link] periodCoefficientSet A is valid. At 2003-01-01 [Link] periodCoefficientSet B is valid since the start of the period is
inclusive. At 2004-01-01 [Link] periodCoefficientSet B is still valid, since there is a gap after 2004-01-01 [Link]. At 2011-01-01 [Link]
periodCoefficientSet C is valid, since no other periods follow (the period of C is the last period in time that is defined). This same rule applies to
time-dependent transformations.

<periodCoefficientSet>
<!-- periodCoefficientSet A -->
<period>
<startDateTime date="2002-01-01" time="[Link]"/>
<endDateTime date="2003-01-01" time="[Link]"/>
</period>
...
</periodCoefficientSet>
<periodCoefficientSet>
<!-- periodCoefficientSet B -->
<period>
<startDateTime date="2003-01-01" time="[Link]"/>
<endDateTime date="2004-01-01" time="[Link]"/>
</period>
...
</periodCoefficientSet>
<periodCoefficientSet>
<!-- periodCoefficientSet C -->
<period>
<startDateTime date="2010-01-01" time="[Link]"/>
<endDateTime date="2011-01-01" time="[Link]"/>
</period>
...
</periodCoefficientSet>
Accumulation Transformations
The following transformations can be used to calculate accumulative curves of time series.

AccumulationMeanInterval — MeanInterval: This transformation calculates the accumulative mean from the input time series within
several intervals.
AccumulationSum — Sum: Calculates the sum.
AccumulationSumInterval — SumInterval: This transformation creates cumulative curves from the input time series within several
intervals. The intervals are defined by the specified intervalTimeStep.
AccumulationSumOriginAtTimeZero — SumOriginAtTimeZero: Calculates the accumulated sum forwards and backwards in time (forecast and historical).

AccumulationMeanInterval

Information

Transformation: MeanInterval

Transformation Accumulation
Group:

Description: This transformation calculates the accumulative mean from the input time series within several intervals. The intervals are
defined by the specified intervalTimeStep. For a given interval the first output value equals the first input value within the
interval and the other output values are equal to the mean of the corresponding input value and all previous input values
within the interval. The startTime of an interval is exclusive and the endTime of an interval is inclusive. The output time
series must have the same timeStep as the input time series.

Hydrological Information

Purpose and use of Transformation: To create accumulative mean curves for several intervals.

Background and Exceptions: This transformation also works for grid input and output. It does not work for irregular time steps. If the transformation is from instantaneous/mean input parameter type to accumulated output parameter type, then the result is multiplied by the timestep in seconds, before the mean is calculated. In this case the input data is assumed to be in units/second.

Input

Input variable.

Options

intervalTimeStep This time step defines the intervals that are used for the accumulation. Each time in this time step is the boundary between two
intervals.

ignoreMissing Optional. If true, then missing values are ignored. If false, then output values will be set to missing values starting from the first
missing input value in an interval until the end of that interval. Default is true.

Output

Output variable with accumulative mean curves.

Configuration Example

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationMeanInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="3"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationMeanInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="3"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="accumulation mean interval">
<accumulation>
<meanInterval>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<intervalTimeStep times="08:00"/>
<ignoreMissing>true</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</meanInterval>
</accumulation>
</transformation>
</transformationModule>

AccumulationSum

Information

Transformation: Sum

Transformation Accumulation
Group:

Description: Calculates for each timestep in the output period, the accumulated sum of the input values. In case the input is
instantaneous or mean and the output is accumulation, then the sum is multiplied by the duration of the input time step in
seconds. Each output value is the sum of the corresponding input value and all previous input values.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report the accumulated sum of the discharge per month.

Background and Exceptions: In case the input is instantaneous or mean and the output is accumulation, the unit of the input must be unit/s. The input type (scalar or grid) must be the same as the output type and their timestep must be regular. In case the ignoreMissing value is set to false, once a missing value is encountered, the output will only contain missing values after that time step.

Input

inputVariable Regular scalar timeseries or regular grid.

Options

ignoreMissing Treat a missing value as 0 (default true).

CoefficientSets

No connection to CoefficientSets.

Output

outputVariable Accumulated sum of the input data.

Configuration Example

Configuration example for calculation of accumulation sum.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="30"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<!-- output variables -->
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="5" end="25"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<!-- transformations -->
<transformation id="accumulation sum">
<accumulation>
<sum>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<ignoreMissing>false</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sum>
</accumulation>
</transformation>
</transformationModule>


AccumulationSumInterval

Information

Transformation: SumInterval

Transformation Accumulation
Group:

Description: This transformation creates cumulative curves from the input time series within several intervals. The intervals are defined
by the specified intervalTimeStep. For a given interval the first output value equals the first input value within the interval
and the other output values are equal to the sum of the corresponding input value and all previous input values within the
interval. The startTime of an interval is exclusive and the endTime of an interval is inclusive. The output time series must
have the same timeStep as the input time series.

Hydrological Information

Purpose and use of Transformation: To create cumulative curves in several intervals.

Background and Exceptions: This transformation also works for grid input and output. It does not work for irregular time steps. If the transformation is from instantaneous/mean input parameter type to accumulated output parameter type, then the result is multiplied by the timestep in seconds. In this case the input data is assumed to be in units/second.

Input

Input variable.

Options

intervalTimeStep This time step defines the intervals that are used for the accumulation. Each time in this time step is the boundary between two
intervals.

ignoreMissing Optional. If true, then missing values are ignored and treated as 0. If false, then output values will be set to missing values starting
from the first missing input value in an interval until the end of that interval. Default is true.

Output

Output variable with cumulative curves.

Configuration Example

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="10"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="10"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="accumulation sum interval">
<accumulation>
<sumInterval>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<intervalTimeStep unit="hour" multiplier="1"/>
<ignoreMissing>false</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sumInterval>
</accumulation>
</transformation>
</transformationModule>

AccumulationSumOriginAtTimeZero

Information

Transformation: SumOriginAtTimeZero

Transformation Accumulation
Group:

Description: Calculates the accumulated sum of the input values for each timestep in the output period from timezero forwards and
backwards in time. In case the input is instantaneous or mean and the output is accumulation, then the sum is multiplied by
the duration of the input time step in seconds. The sum of the historical data is accumulated backwards in time starting at
T0. The sum of the forecast data is accumulated forwards in time starting at T0.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report the historical accumulated sum backwards in time and the forecast accumulated sum of the discharge per month.

Background and Exceptions: In case the input is instantaneous or mean and the output is accumulation, the unit of the input must be unit/s. The input type (scalar or grid) must be the same as the output type and their timestep must be regular. In case the ignoreMissing value is set to false, once a missing value is encountered, the output will only contain missing values after that time step.

Input

inputVariable Regular scalar timeseries or regular grid.

Options

ignoreMissing Treat a missing value as 0 (default true).

CoefficientSets

No connection to CoefficientSets.

Output

outputVariable Accumulated sum of the input data.

Configuration Example

Configuration example for calculation of Accumulation SumOriginAtTimeZero.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumOriginAtTimeZero</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-15" end="15"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<!-- output variables -->
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumOriginAtTimeZero</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-15" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<!-- transformations -->
<transformation id="accumulation sum origin at time zero">
<accumulation>
<sumOriginAtTimeZero>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<ignoreMissing>false</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sumOriginAtTimeZero>
</accumulation>
</transformation>
</transformationModule>

Adjust Transformations
AdjustQ
AdjustQUsingMeanDailyDischarge
AdjustQUsingInstantaneousDischarge
AdjustStage
AdjustTide

AdjustQ

AdjustQ
Input

observedInstantaneousDischarge
observedMeanDailyDischarge
simulatedDischarge

Coefficient set

blending steps
errorTolerance
maxNumberOfIterations
interpolationType

Output

adjustedSimulatedDischarge

Description

AdjustQ corrects the simulated discharges by using observed instantaneous discharges and observed mean daily discharges. The procedure is
a combination of the transformations AdjustQUsingInstantaneousDischarge and AdjustQUsingMeanDailyDischarge. First the simulated discharges are corrected by
using the instantaneous discharges. If not all of the mean daily discharges are within the error tolerance, the simulated discharges are also
corrected with the AdjustQUsingMeanDailyDischarge procedure. A detailed description of the configuration options in the coefficient set can be
found in the sections on AdjustQUsingInstantaneousDischarge and AdjustQUsingMeanDailyDischarge.
AdjustQMeanDailyDischarge

AdjustQMeanDailyDischarge

Input

observedMeanDailyDischarge
simulatedDischarge

Coefficient set

error tolerance
maxNumberOfIterations

Output

adjustedSimulatedDischarge

Description

This procedure corrects the simulated discharge with mean daily discharge values until the error is within the specified error tolerance. To correct
the simulated discharge, the mean daily discharge of the simulated values is calculated. The simulated values are then corrected by
applying the following formula.

Qi = Qi * QME/SQME (1)

where QME is the observed MDQ

SQME is the simulated MDQ

Qi is the instantaneous discharge

The correction procedure will continue until all the simulated discharges are within the error tolerance or until the maximum number of iterations is
reached. The maximum number of iterations is a configuration option.
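
For example (illustrative numbers): if the observed mean daily discharge QME is 100 m3/s and the simulated mean daily discharge SQME is 80 m3/s, each instantaneous value Qi of that day is multiplied by 100/80 = 1.25. The simulated mean daily discharge is then recalculated and the correction is repeated until all days are within the error tolerance or the maximum number of iterations is reached.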

AdjustQUsingInstantaneousDischarge

AdjustQUsingInstantaneousDischarge
Input

observedDischarge
simulatedInstantaneousDischarge

Coefficient set

blending steps
interpolation type

Output

adjustedForecastDischarge

Description

This procedure uses an observed instantaneous discharge to correct a simulated discharge. If there is an observed value for a certain time step
then that value will be used instead of the simulated value. If no observed data is available, the correction procedure will calculate a value
or in some cases use the simulated value.

The configuration has two configuration options which influence the behaviour of the correction procedure. The first one is blending steps. If
there is a gap in the observed data which is x time steps large and x < blending steps, then the output values for the gap will be determined using
an interpolation procedure. If x >= blending steps, then a blend procedure will be used to fill the gap.

Interpolation procedure

The second configuration option which influences the behaviour of the correction procedure is the interpolation type. Two options are
available: ratio and difference. To calculate the value of the adjusted time series a correction procedure is used for the simulated discharge. When
the ratio option is selected, the simulated values will be corrected by multiplying the simulated value with a correction factor based on the
ratios between the observed and simulated discharge at the start of the gap and at the end of the gap. The correction factor will be linearly
interpolated between the ratio at the beginning of the gap and the ratio at the end of the gap. When the difference option is selected, the
simulated value will be corrected by adding a correction value to it. This value will be based on the difference between the observed and simulated
discharge at the beginning of the gap and the difference between the simulated and observed value at the end of the gap. In some cases it is
possible that the program overrules the configured interpolation option: when the ratio between the ratios is larger than 2, or one of the ratios is
larger than 5, the program will switch to interpolating by difference even if ratio was configured.

Blend procedure

When the gap in the observed data is too large to fill with an interpolation procedure, the gap will be filled with the blend
procedure. This procedure is also used to provide a smooth transition between the observed data and the simulated data at T0. The difference
between the simulated discharge and the observed discharge at the beginning of a gap, the end of a gap or at the latest observed value will be
used to correct the simulated value. The following formula will be used to correct the simulated value.

Qadjusted = Qsim + (1 - i/N)*Difference


i = number of time steps between observed value and simulated value
N = blending steps
Difference = observed value - simulated value (at the last observed value)
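
For example (illustrative numbers): with N = 4 blending steps and a difference of 2 m3/s at the last observed value, the simulated values at the following time steps are raised by (1 - 1/4)*2 = 1.5, (1 - 2/4)*2 = 1.0 and (1 - 3/4)*2 = 0.5 m3/s, after which the correction has faded out completely.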

AdjustStage

AdjustStage
Input

forecastStage
averageBalanceFirstSegment
averageBalanceSecondSegment
averageBalanceThirdSegment
averageBalanceFourthSegment
startFirstStageRange
startSecondStageRange
startThirdStageRange
startFourthStageRange
endFourthStageRange

Coefficient set

no coefficient is needed for this transformation

Output

adjustedForecastStage

Description

The transformation AdjustStage uses the output of the transformation StageReview to adjust the simulated stage values. The transformation
StageReview has divided the simulated stage values into 4 equally sized segments and has calculated an average daily balance for each segment
and each day.

The AdjustStage procedure first determines the centre of each segment; secondly it determines which centres surround the simulated stage
value which has to be adjusted. For each centre the associated balance will be retrieved, and the balance for the simulated stage value will be
calculated using linear interpolation. The simulated stage value will be corrected by adding the calculated balance. When the simulated stage
value is lower than the centre of the first segment or higher than the centre of the fourth segment, the balance will not be calculated using
linear interpolation but will be set equal to the balance of the first segment or the balance of the fourth segment, respectively.

AdjustTide

AdjustTide
Input

observedTidalStage
forecastTidalStage
tideBalance

Output

adjustedTidalStage

Description

The AdjustTide operation corrects a simulated tide with an observed tidal time series and a set of balances. The balances are calculated by the
transformation tidalBalance. When observed data is available, the observed data will be used for the output time series. When no observed data
is available the balances and the simulated tidal time series are combined to create an adjusted tide. First the times of the peaks and valleys are
determined. They are already calculated by the tideBalance transformation and located at the times at which the tideBalance operation has
written the balances. The estimated peaks and valleys of the tideBalance operation and the peaks and valleys of the simulated tidal time series
are matched. The adjusted peaks and valleys are calculated by adjusting the simulated peaks and valleys with the matched balance.

Hadjusted = Hsimulated + balance

The values of the adjusted tidal time series between the peaks and valleys are calculated using a cosine interpolation:

Hadjusted = Hmin + |Hmax - Hmin|*(cos(phi)+1)/2

where the phase phi is scaled over the interval |tmax - tmin| between the adjacent extremes (the exact scaling expression is not reproduced in this extract).

Configuration example

Aggregation transformations

Available aggregation transformations

Aggregation Accumulative — aggregates data by summing the values


Aggregation Instantaneous — aggregates data by sampling values with the same date/time
Aggregation InstantaneousToMean — aggregates data by calculating the mean value
Aggregation MeanToMean — aggregates data by calculating the mean value

The graph below demonstrates the differences between the instantaneous, the instantaneousToMean and the MeanToMean methods when
aggregating data.

The blue line (first column in the table) shows the original 15 minute data. The red line (second column) is the result of the instantaneous
aggregation. The next two columns show the results of the meanToMean (light green line) and the instantaneousToMean (dark green line)
methods.

The default behaviour of all these aggregations is to save the result at the end of the time interval that is being investigated. This
explains the apparent shift in the hydrograph. Although this is very useful for many operational environments (it ensures you
have data NOW) it may not always be wanted. In that case it is easy to combine these aggregations with a delay.

Aggregation Accumulative

Accumulative
Input

inputVariable

Output

outputVariable

Description

This transformation performs an aggregation from an instantaneous time series to an aggregated time series. This procedure sums the values of
the input timeseries that are within the aggregation period. If no aggregation period is configured, then the aggregation period is equal to the
period between the current output time and the previous output time. Alternatively the aggregation period can be configured in the time series set
of the output variable. In that case the aggregation period is relative to the current output time and aggregation periods for different output times
are allowed to overlap. Using overlapping aggregation periods it is possible to use this transformation to calculate a moving sum. If one of the
input values is missing or unreliable the output is missing.

The table below shows an example of accumulating 6-hourly values to daily values using this transformation.

Original series Result

Date/Time Value Value

01-01-2007 00:00 1,00

01-01-2007 06:00 2,00

01-01-2007 12:00 3,00

01-01-2007 18:00 4,00

02-01-2007 00:00 5,00 14,00

02-01-2007 06:00 6,00

02-01-2007 12:00 NaN

02-01-2007 18:00 8,00

03-01-2007 00:00 9,00 NaN

03-01-2007 06:00 10,00

The figure below shows original 15 minute data and the aggregated hourly data using the accumulative function:

Configuration example

<transformation id="aggregation accumulative">
<aggregation>
<accumulative>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>accumulative</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</accumulative>
</aggregation>
</transformation>

Aggregation Instantaneous

Instantaneous
Input

InputVariable

Output

OutputVariable

Description

This transformation performs an aggregation from an instantaneous input time series to an instantaneous output time series. It sets the output
value to the exact same value in the input timeseries at time t, i.e. it simply samples points. As such, if an output time has no equivalent in the input
series, no value is given. The table below shows how 6-hourly values are converted to daily values using this method.

Original series Result

Date/Time Value Value

01-01-2007 00:00 1,00

01-01-2007 06:00 2,00

01-01-2007 12:00 3,00

01-01-2007 18:00 4,00

02-01-2007 00:00 5,00 5,00

02-01-2007 06:00 6,00

02-01-2007 12:00 NaN

02-01-2007 18:00 8,00

03-01-2007 00:00 9,00 9,00

03-01-2007 06:00 10,00

Configuration example

<transformation id="aggregation instantaneous">


<aggregation>
<instantaneous>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>instantaneous</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</instantaneous>
</aggregation>
</transformation>

Aggregation InstantaneousToMean

InstantaneousToMean
Input

inputVariable

Options

allowMissingValues
includeFirstValueOfAggregationPeriodInCalculation

Output

outputVariable

Description

This transformation calculates the mean value of instantaneous values over a certain period. If the option allowMissingValues is true (this is the
default behaviour), then a missing value is returned if one of the input values in the period is a missing value. If the option allowMissingValues is
false, then a mean value is calculated if there are 1 or more non-missing values in the aggregation period, i.e. missing values are ignored in this
case.

The transformation offers two different ways of calculating the mean value over a period. The default method (used by setting the
includeFirstValueOfAggregationPeriodInCalculation option to true - this is the default behaviour) calculates the mean of each pair of consecutive
input values in the aggregation period, averages these pairwise means, and stores the result at the output time. An alternative method (similar to
the MeanToMean aggregation, enabled by setting the includeFirstValueOfAggregationPeriodInCalculation option to false) calculates the mean of all
values that fit in the output interval, excluding the start time itself, and stores that at the output time.

In the four tables below examples of in and output using the different options are given.

Example for includeFirstValueOfAggregationPeriodInCalculation = true and allowMissingValues = false:

Date/Time Input Value Calculation Output Value

2007-01-01 00:00 1

2007-01-01 06:00 2

2007-01-01 12:00 3

2007-01-01 18:00 4

2007-01-02 00:00 5 (((1+2)/2) + ((2+3)/2) + ((3+4)/2) + ((4+5)/2))/4 3

2007-01-02 06:00 6

2007-01-02 12:00 7

2007-01-02 18:00 NaN

2007-01-03 00:00 9 (((5+6)/2)+((6+7)/2))/2 6

2007-01-03 06:00 10

Example for includeFirstValueOfAggregationPeriodInCalculation = true and allowMissingValues = true:

Date/Time Input Value Calculation Output Value

2007-01-01 00:00 1

2007-01-01 06:00 2

2007-01-01 12:00 3

2007-01-01 18:00 4

2007-01-02 00:00 5 (((1+2)/2) + ((2+3)/2) + ((3+4)/2) + ((4+5)/2))/4 3

2007-01-02 06:00 6

2007-01-02 12:00 7

2007-01-02 18:00 NaN

2007-01-03 00:00 9 - NaN

2007-01-03 06:00 10

Example for includeFirstValueOfAggregationPeriodInCalculation = false and allowMissingValues = false:

Date/Time Input Value Calculation Output Value

2007-01-01 00:00 1

2007-01-01 06:00 2

2007-01-01 12:00 3

2007-01-01 18:00 4

2007-01-02 00:00 5 (2 + 3 + 4 + 5)/4 3,50

2007-01-02 06:00 6

2007-01-02 12:00 7

2007-01-02 18:00 NaN

2007-01-03 00:00 9 (6 + 7 + 9)/3 7,33

2007-01-03 06:00 10

Example for includeFirstValueOfAggregationPeriodInCalculation = false and allowMissingValues = true:

Date/Time Input Value Calculation Output Value

2007-01-01 00:00 1

2007-01-01 06:00 2

2007-01-01 12:00 3

2007-01-01 18:00 4

2007-01-02 00:00 5 (2 + 3 + 4 + 5)/4 3,50

2007-01-02 06:00 6

2007-01-02 12:00 7

2007-01-02 18:00 NaN

2007-01-03 00:00 9 - NaN

2007-01-03 06:00 10

Configuration example

<transformation id="aggregation instantaneousToMean">


<aggregation>
<instantaneousToMean>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<allowMissingValues>true</allowMissingValues>

<includeFirstValueOfAggregationPeriodInCalculation>true</includeFirstValueOfAggregationPeriodInCalculation>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>instantaneousToMean</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</instantaneousToMean>
</aggregation>
</transformation>

Aggregation MeanToMean

MeanToMean
Input

InputVariable

Output

OutputVariable

Description

This transformation performs an aggregation from a mean input time series to a mean output time series. The average of the mean values
in the aggregation period (excluding the value at the start of the period) is the calculated mean value for the output time series.

Original series Result

Date/Time Value Value

01-01-2007 00:00 1,00

01-01-2007 06:00 2,00

01-01-2007 12:00 3,00

01-01-2007 18:00 4,00

02-01-2007 00:00 5,00 3,50

02-01-2007 06:00 6,00

02-01-2007 12:00 7,00

02-01-2007 18:00 8,00

03-01-2007 00:00 9,00 7,50

03-01-2007 06:00 10,00

This method will give the same results as the instantaneousToMean transformation with the
includeFirstValueOfAggregationPeriodInCalculation option set to false. However, it has no option to ignore missing values in the input series.

Configuration example

<transformation id="aggregation MeanToMean">
<aggregation>
<meanToMean>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>meanToMean</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</meanToMean>
</aggregation>
</transformation>

DisaggregationTransformations

Available disaggregation transformations

Accumulative — disaggregates data by dividing the values
Instantaneous — disaggregates data by sampling the values and optionally interpolating linearly
MeanToInstantaneous — disaggregates data
meanToMean — disaggregates data by sampling and repeating the input data
weights — disaggregates by setting a weight for each output point

The graph below gives an overview of the results of the different disaggregations available.

Accumulative

Accumulative

disaggregates data by dividing the values

Input

InputVariable

Output

OutputVariable

Description

This transformation performs a disaggregation on an accumulative input time series. It divides each value of the input time-series by the number of
output time-steps within the corresponding input interval and stores the resulting value at each of those steps.

The table below shows how daily values are disaggregated to 6-hourly values using this method.

Input Output

Date/Time Value Value

01-01-2007 06:00 1,25

01-01-2007 12:00 1,25

01-01-2007 18:00 1,25

02-01-2007 00:00 5,00 1,25

02-01-2007 06:00 1,75

02-01-2007 12:00 1,75

02-01-2007 18:00 1,75

03-01-2007 00:00 7,00 1,75

Configuration example

<transformation id="disaggregation accumulative">


<disaggregation>
<accumulative>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_accumulative</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</accumulative>
</disaggregation>
</transformation>

Instantaneous

Instantaneous

disaggregates data by sampling the values and optionally interpolating linearly

Input

inputVariable

Options

interpolate (true|false)

Output

outputVariable

Description

This transformation performs a disaggregation on an instantaneous input time series. The output values are copied from the input time series if a
matching time exists in the input series. If this is not the case, the output value is calculated by linear interpolation if the option interpolate is
enabled. If the option is disabled, the output value will be a missing value.

Linear interpolation is done using the following equation:

Y = Y_0 + (Time_t - Time_0) * (Y_0 - Y_1)/(Time_0 - Time_1)

in which:

Y is the interpolation result in the output series


Y_0 is the value of the first point before Y (in the input series)
Y_1 is the value of the first point after Y (in the input series)
Time_t is the date/time for Y (in the output series)
Time_0 is first time before Time_t (in the input series)
Time_1 is first time after Time_t (in the input series)

Input Output (interpolation) Output (no interpolation)

Date/Time Value Value Value

01-01-2007 00:00 10,00 10,00 10,00

01-01-2007 06:00 8,75 -

01-01-2007 12:00 7,50 -

01-01-2007 18:00 6,25 -

02-01-2007 00:00 5,00 5,00 5,00

02-01-2007 06:00 6,00 -

02-01-2007 12:00 7,00 -

02-01-2007 18:00 8,00 -

03-01-2007 00:00 9,00 9,00 9,00

Configuration example

<transformation id="disaggregation instantaneous">
<disaggregation>
<instantaneous>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_instantaneous</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</instantaneous>
</disaggregation>
</transformation>

MeanToInstantaneous

MeanToInstantaneous

disaggregates data

Input

inputVariable

Output

outputVariable

Description

This transformation takes a mean time series as input and transforms it into an instantaneous time series. Because it is not possible to
reconstruct exactly the instantaneous values that resulted in this mean time series, the transformation makes a best estimate of the
instantaneous time series.

The first step in this procedure is analysing the previous mean value, the current mean value and the next mean value.

No change

If there is no significant rise in these values, which is the case when the current mean value is equal to the previous mean value and to the next
mean value within an error tolerance of 0.1%, the current mean value is considered to be the best estimate for the instantaneous values.

If this is not the case, the procedure checks whether there is a continuous rise or fall or whether the mean values are in a peak or valley.

Rise or Fall

If the mean values are in a continuous rise (currentMeanValue >= previousMeanValue && currentMeanValue <= nextMeanValue) then the
estimation procedure is as follows.

First the instantaneous value at the end of the disaggregation period is estimated by

Fall: endValue = currentMeanValue - 0.75f * (currentMeanValue - nextMeanValue)

Rise: endValue = currentMeanValue + 0.25f * (nextMeanValue - currentMeanValue)

The values between the value at the end of the previous disaggregation period and the estimated end value are estimated by creating a small rise
or fall from the end value.
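
For example (illustrative numbers): with previousMeanValue = 2, currentMeanValue = 4 and nextMeanValue = 8 (a continuous rise), the estimated value at the end of the disaggregation period is endValue = 4 + 0.25 * (8 - 4) = 5; the intermediate values are then filled in as a small rise towards this end value.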

Peak or Valley

First the value of the peak or valley is estimated

peak = currentMeanValue + 0.25f * (difma + difmb) / 2

difma = |previousMeanValue - currentMeanValue|

difmb = |nextMeanValue - currentMeanValue|

Secondly the position of the peak is estimated. This is done by analysing the ratio between difma and difmb.

Once the value and the position of the peak have been estimated, the values between the last value of the previous period, the peak and the end
value are filled in by adding a small rise or fall to/from the peak.

After this procedure the estimated instantaneous values are corrected by using the AdjustQMeanDailyDischarge transformation (a volume
correction). This transformation ensures that the mean values of the estimated instantaneous time series are equal to the original mean
values.

Configuration example

<transformation id="disaggregation imeanToInstantaneous">
<disaggregation>
<meanToInstantaneous>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_meanToInstantaneous</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</meanToInstantaneous>
</disaggregation>
</transformation>

meanToMean

MeanToMean

disaggregates data by sampling and repeating the input data

Input

InputVariable

Output

OutputVariable

Description

This transformation performs a disaggregation from a mean time series to a mean time series.

Each output time series value within a given data time interval of the input time series is equal to the input time series value for that interval.

The table below shows a simple example of the procedure.

Time Input Output

12:00 x 1

00:00 1 1

12:00 x 2

00:00 2 2

Configuration example

<transformation id="disaggregation MeanToMean">
<disaggregation>
<meanToMean>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_meanToMean</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</meanToMean>
</disaggregation>
</transformation>

weights

Weights

disaggregate by setting a weight for each output point

Input

InputVariable

configuration

weights for each point

Output

OutputVariable

Description

This transformation performs a disaggregation from one time series to another in which each output point is multiplied by a specified weight.
There MUST be a weight for each output point or the disaggregation will fail. E.g. when converting 15 minute values to 5 minute values, three
weight elements must be specified.

Each output time series value within a given data time interval of the input time series is equal to the input time series value multiplied by the
weight specified for the output time.

The table below shows a simple example of the procedure.

Time    Input   Output   Weight
12:00   x       0.5      0.5
00:00   1       1        1
12:00   x       2        0.5
00:00   2       2        1.0

In this case two weight elements (0.5 and 1.0) have been specified. Note that the order in which the elements appear determines to which point
they are applied.

Configuration example

<disaggregation>
<weights>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<weight>0.9</weight>
<weight>1.1</weight>
<weight>0.9</weight>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_weights</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</weights>
</disaggregation>

DischargeStage Transformations
mergedRatingCurves
power
table
ratingCurve

DischargeStageMergedRatingCurves

Information

Transformation: MergedRatingCurves

Transformation Group: DischargeStage

Description: Merges two rating curves using a time dependent weight variable and uses the resulting rating curve to convert discharge
input values to stage output values. For each timeStep in the output time series, first the specified two rating curves are
merged using the value of the weight input time series at that timeStep. If weight is 1, then uses the first rating curve. If
weight is 0, then uses the second rating curve. If weight is between 0 and 1, then interpolates linearly between the first and
the second rating curve to get the merged rating curve. Then the merged rating curve is used to convert the discharge input
value for that timeStep to a stage output value. This can only use rating curves that are stored as time series in the
dataStore. This uses the inverse of the equation Q_output = weight*Q_ratingCurve1(H_input) + (1 -
weight)*Q_ratingCurve2(H_input)

Hydrological Information

Purpose and use of Transformation: This can be used e.g. for a river reach with a lot of vegetation in the summer resulting in a higher hydraulic
roughness. Then, you might want to have a rating curve for the winter period (a level of 1 m corresponds to 5 m3/s) and one for the summer (the
same water level represents only 3 m3/s due to the higher roughness). The weight value can be used for shifting in between: weight=0 for the
winter, weight=1 for the summer, and a weight value of 0.5 for a certain time in spring when vegetation is growing.

Background and Exceptions: The weight value must always be in the range 0 <= weight <= 1. If the ratingCurve(s) are not found, then a warning
message is logged and the output is set to missing values.

Input

discharge input variable with discharge (water flow) values.


weight input variable with weight values.

ratingCurve

References to two rating curves that are merged and used to convert discharge to stage values for this transformation. This can only use rating
curves that are stored as time series in the dataStore. To import ratingCurves into the dataStore use timeSeriesImport module with importType
pi_ratingcurves to import a file in the pi_ratingcurves.xsd format. The ratingCurves are referenced using their locationId and qualifierId. If no
locationId is specified, then the locationId of the stage input variable is used.

Output

stage output variable with stage (water level) values.

Configuration Example

<dischargeStage>
<mergedRatingCurves>
<discharge>
<variableId>input</variableId>
</discharge>
<weight>
<variableId>eta</variableId>
</weight>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>winterRatingCurve</qualifierId>
</ratingCurve>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>summerRatingCurve</qualifierId>
</ratingCurve>
<stage>
<variableId>output</variableId>
</stage>
</mergedRatingCurves>
</dischargeStage>


DischargeStagePower

Information

Transformation: Power

Transformation Group: DischargeStage

Description: Converts discharge (Q) to stage (H) for an open cross section. Uses a power-law equation defined by the coefficients a, b and c (see CoefficientSets below).

Hydrological Information

Purpose and use of Transformation: Used to convert discharge (water flow) to stage (water level) for an open cross section.

Background and Exceptions:

Input

discharge input variable with discharge (water flow) values.

CoefficientSets or CoefficientSetFunctions

The coefficient set should contain the a, b and c coefficients of the power equation and the type of calculations for which the coefficient set is valid.

When using coefficient set functions (available since build 30246), the a, b, c and type elements can contain tags between "@" signs (e.g.
"@NUMBER@") that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values.
These values can be different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more
information.

a

Coefficient a in the power equation.

b

Coefficient b in the power equation.

c

Coefficient c in the power equation.
type

Type of calculations for which the coefficient set is valid. Can be level_to_flow, flow_to_level or level_to_flow_and_flow_to_level.

Output

stage output variable with stage (water level) values.

Configuration Examples

<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="discharge stage power test">
<dischargeStage>
<power>
<discharge>
<variableId>input</variableId>
</discharge>
<coefficientSet>
<a>57.632</a>
<b>3.01</b>
<c>2.147</c>
<type>level_to_flow_and_flow_to_level</type>
</coefficientSet>
<stage>
<variableId>output</variableId>
</stage>
</power>
</dischargeStage>
</transformation>

The example below uses coefficientSetFunctions (available since build 30246). Here the elements 'a', 'b', 'c' and 'type' are defined in
coefficientSetFunctions, where @A@, @B@ and @C@ refer to location number attributes and @type@ refers to a location text attribute defined
in the locationSets configuration file.

<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="discharge stage power with coefficient set functions test">
<dischargeStage>
<power>
<discharge>
<variableId>input</variableId>
</discharge>
<coefficientSetFunctions>
<a>@A@</a>
<b>@B@</b>
<c>@C@</c>
<type>@type@</type>
</coefficientSetFunctions>
<stage>
<variableId>output</variableId>
</stage>
</power>
</dischargeStage>
</transformation>

Table

Table
Input

discharge

Coefficient set

type
authoriseExtrapolation
interpolationType
minimumStage
tableRecord

Output

stage

Description

This transformation will transform a discharge value to a stage value by doing a table lookup. The coefficient set used in this transformation has
a type option. The type indicates whether the lookup table can be used in a discharge to stage transformation, a stage to discharge transformation
or both. If a coefficient set that is defined as a level_to_flow type is used in this type of transformation, an error will be issued. The
authoriseExtrapolation option will enable/disable extrapolation. The interpolationType can be used to configure the type of interpolation used.

The available options are:

linear
logarithmic

When the option logarithmic is selected the calculation method is almost the same as the method used when the linear option is selected. The
only difference is that the calculation is done with the natural logarithm of the lookup value and with the natural logarithm of the table values.

The minimumStage option allows configurators to enter a minimum stage value. Stage values below this value are converted to the minimum value.

The tableRecord elements form the actual lookup table. Each tableRecord is a single entry in the lookup table with a stage and a discharge value.
Note that it is also possible to define an offset for each tableRecord. This offset will be applied as a positive offset to the stage value. An offset
applies to the tableRecord in which it is defined and to the records above this record, until a new offset is defined.

Configuration example
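
A minimal sketch of a possible configuration is given below. It assumes that the coefficientSet child elements follow the option names listed
above and that each tableRecord carries its stage and discharge values as attributes; the exact element and attribute names should be verified
against the transformation schema.

<dischargeStage>
<table>
<discharge>
<variableId>input</variableId>
</discharge>
<coefficientSet>
<!-- element names assumed from the option list above -->
<type>flow_to_level</type>
<authoriseExtrapolation>true</authoriseExtrapolation>
<interpolationType>linear</interpolationType>
<minimumStage>0.2</minimumStage>
<tableRecord stage="0.5" discharge="2.0"/>
<tableRecord stage="1.0" discharge="5.0"/>
<tableRecord stage="2.0" discharge="20.0"/>
</coefficientSet>
<stage>
<variableId>output</variableId>
</stage>
</table>
</dischargeStage>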

Events Transformations
The following transformations can be used to measure time series events. The measurements will be registered for each output period where the
event started.

EventsDischargeVolume — DischargeVolume: Calculates either the discharge volume of all events initiated in the output period or only
the discharge volume of the largest event initiated in the output period.
EventsDuration — Duration: Calculates either the net duration or the gross duration of the events initiated in the output period.
EventsMaximum — Maximum: Calculates the maximum input value of the events initiated in the output period.
EventsMeanDischargeVolume — MeanDischargeVolume: Calculates the mean discharge volume per event.
EventsNumberOfEvents — NumberOfEvents: Calculates the number of events initiated in the output period.

EventsDischargeVolume

Information

Transformation: DischargeVolume

Transformation Group: Events

Description: Calculates either the discharge volume of all events initiated in the output period or only the discharge volume of the largest
event initiated in the output period. An event in this transformation is defined as the largest possible series of subsequent
subevents where the duration of gaps (where there are no subevents) is shorter than the specified maxGapDuration
parameter. A subevent is defined as a measurement in time in the input where the value is larger than the specified
threshold parameter. The discharge volume of a subevent is calculated by multiplying the input value (m3/s) by the
duration of the input time step (s). The discharge volume for an event is the sum of the discharge volumes for its subevents,
and is registered only in the output period where the event initiated.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report discharge volume statistics on sewer spillage for each
month.

Background and Exceptions: The unit of the input must be m3/s. The output time step must be bigger than the input time step. All input values
must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to
ORIGINAL_DOUBTFUL.

Input

discharge Equidistant measurements in m3/s.

Options

eventSelection: Selects either the discharge volume of all events or only the discharge volume of the event with the largest discharge volume.
threshold: Only measurements with a value above this value are used. Default is 0.
maxGapDuration: When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.

CoefficientSets

No connection to CoefficientSets.

Output

volume Aggregated volume of the selected events in m3.

Configuration Example

Configuration example for discharge volume of events for each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">

<!-- input variables -->


<!-- output variables -->
<!-- transformations -->
<transformation id="events dischargeVolumeAllEvents">
<events>
<dischargeVolume>
<discharge>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeVolume_AllEvents</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</discharge>
<eventSelection>all_events</eventSelection>
<volume>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeVolume_AllEvents</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</volume>
</dischargeVolume>
</events>
</transformation>
</transformationModule>

Configuration example for discharge volume of the largest event of each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events dischargeVolume largest event">
<events>
<dischargeVolume>
<discharge>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeVolume_LargestEvent</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</discharge>
<eventSelection>largest_volume_event</eventSelection>
<volume>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeVolume_LargestEventTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</volume>
</dischargeVolume>
</events>
</transformation>
</transformationModule>

EventsDuration

Information

Transformation: Duration

Transformation Group: Events

Description: Calculates either the net duration or the gross duration of the events initiated in the output period. An event in this
transformation is defined as the largest possible series of subsequent subevents where the duration of gaps (where there
are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a measurement in
time in the input where the value is larger than the specified threshold parameter. The duration of a single subevent is
equal to the duration of the input time step. The duration of an event is the sum of the duration of its subevents plus the
duration of the gaps that do not exceed the maxGapDuration parameter, and is registered only in the output period where
the event initiated.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report on the duration of sewer spillage events for each
month.

Background and Exceptions: The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the
result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.

Input

input Equidistant measurements (for instance m3/s).

Options

threshold: Only measurements with a value above this value are used. Default is 0.
maxGapDuration: When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours. In order to calculate the net duration instead of the gross duration, this value has to be set to zero.
outputTimeUnit: Defines the time unit of the output (default is day).

CoefficientSets

No connection to CoefficientSets.

Output

output Duration of the selected events using the specified output unit.

Configuration Example

Configuration example for net duration of events for each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events net duration">
<events>
<duration>
<input>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_NetDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</input>

<maxGapDuration unit="day" multiplier="0"/>


<outputTimeUnit>hour</outputTimeUnit>
<output>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_NetDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</duration>
</events>
</transformation>
</transformationModule>

Configuration example for gross duration of events for each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events gross duration">
<events>
<duration>
<input>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_GrossDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</input>

<maxGapDuration unit="day"/>
<output>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_GrossDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</duration>
</events>
</transformation>
</transformationModule>

EventsMaximum

Information

Transformation: Maximum

Transformation Group: Events

Description: Calculates the maximum input value of the events initiated in the output period. An event in this transformation is defined as
the largest possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter
than the specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the
value is larger than the specified threshold parameter.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report the maximum value of the events for each month.

Background and Exceptions: The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the
result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.

Input

input Equidistant measurements.

Options

threshold: Only measurements with a value above this value are used. Default is 0.
maxGapDuration: When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.

CoefficientSets

No connection to CoefficientSets.

Output

output Maximum of the input values of the events that initiated in the output period.

Configuration Example

Configuration example for calculation of the maximum for events for each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events maximum">
<events>
<maximum>
<input>
<timeSeriesSet>
<moduleInstanceId>EventsMaximum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</input>

<output>
<timeSeriesSet>
<moduleInstanceId>EventsMaximum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</maximum>
</events>
</transformation>
</transformationModule>

EventsMeanDischargeVolume

Information

Transformation: MeanDischargeVolume

Transformation Group: Events

Description: Calculates the mean discharge volume per event for the events initiated in the output period. An
event in this transformation is defined as the largest possible series of subsequent subevents where the duration of gaps
(where there are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a
measurement in time in the input where the value is larger than the specified threshold parameter. The discharge volume
of a subevent is calculated by multiplying the input value (m3/s) by the duration of the input time step. The discharge
volume for an event is the sum of the discharge volumes for its subevents, and is registered only in the output period where
the event initiated.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report the mean discharge volume per event on sewer
spillage for each month.

Background and Exceptions: The unit of the input must be m3/s. The output time step must be bigger than the input time step. All input values
must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to
ORIGINAL_DOUBTFUL.

Input

discharge Equidistant measurements in m3/s.

Options

threshold: Only measurements with a value above this value are used. Default is 0.
maxGapDuration: When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.

CoefficientSets

No connection to CoefficientSets.

Output

meanVolume Mean volume per event of the selected events in m3.

Configuration Example

Configuration example for calculation of the mean discharge volume per event for each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events dischargeMeanVolume">
<events>
<dischargeMeanVolume>
<discharge>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeMeanVolume</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</discharge>
<meanVolume>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeMeanVolume</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</meanVolume>
</dischargeMeanVolume>
</events>
</transformation>
</transformationModule>

EventsNumberOfEvents

Information

Transformation: NumberOfEvents

Transformation Group: Events

Description: Calculates the number of events initiated in the output period. An event in this transformation is defined as the largest
possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter than the
specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the value is
larger than the specified threshold parameter.

Hydrological Information

Purpose and use of Transformation: This transformation can for instance be used to report the number of events initiated each month.

Background and Exceptions: The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the
result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.

Input

input Equidistant measurements.

Options

threshold: Only measurements with a value above this value are used. Default is 0.
maxGapDuration: When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.

CoefficientSets

No connection to CoefficientSets.

Output

output Number of events initiated in the output period.

Configuration Example

Configuration example for calculation of the number of events initiated each month.

<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events numberOfEvents">
<events>
<numberOfEvents>
<input>
<timeSeriesSet>
<moduleInstanceId>EventsNumberOfEvents</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</input>

<output>
<timeSeriesSet>
<moduleInstanceId>EventsNumberOfEvents</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</numberOfEvents>
</events>
</transformation>
</transformationModule>

Filter Transformations
LowPass

FilterLowPass

Information

Transformation: LowPass

Transformation Group: Filter

Description: Low pass filter for discrete time series. This transformation calculates the following difference equation.

y(t) = b0*x(t) + b1*x(t-1) + ... + bM*x(t-M) + a1*y(t-1) + ... + aN*y(t-N)

Here x is the input, y is the output, t denotes time, b0 to bM are the feedforward coefficients and a1 to aN are the feedback
coefficients. When this transformation runs, then it first retrieves the required previous output values from previous runs, if
available.

Hydrological Information

Purpose and use of Transformation: To smooth time series data.

Background and Exceptions: This transformation filters out high frequency fluctuations in time series data.

Input

Input variable x(t). For each calculation of y(t) the input values x(t) to x(t-M) are required. If one of these input values is missing, then the output
value y(t) will be a missing value.

CoefficientSets or CoefficientSetFunctions

The coefficientSet should contain the a and b coefficients for the filter (see the equation above). It is possible to choose the number of coefficients
to use. The first defined a coefficient is a1, the second defined a coefficient is a2 and so on. The last defined a coefficient is aN. The first defined
b coefficient is b0, the second defined b coefficient is b1 and so on. The last defined b coefficient is bM.

When using coefficient set functions (available since build 30246), the a and b coefficient elements can contain tags between "@" signs (e.g.
"@NUMBER@") that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values.
These values can be different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more
information.

One or more feedback coefficients (a1 to aN).

One or more feedforward coefficients (b0 to bM).

Output

Output variable y(t). For each calculation of y(t) the previous output values y(t-1) to y(t-N) are required. When this transformation runs, then it first
retrieves the required previous output values from previous runs, if available. If one of these previous output values is missing, then that output
value is ignored. Effectively this means that it behaves as if all previous missing output values would be 0.

Configuration Examples

<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="filter low pass">
<filter>
<lowPass>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<coefficientSet>
<a>0.4</a>
<a>0.3</a>
<b>0.2</b>
<b>0.1</b>
</coefficientSet>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</lowPass>
</filter>
</transformation>

The example below uses coefficientSetFunctions (available since build 30246). Here the coefficients are defined in coefficientSetFunctions, where
@a1@, @a2@, @b0@ and @b1@ refer to location number attributes that are defined in the locationSets configuration file.

<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>locationWithAttributes5</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>locationWithAttributes5</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="filter low pass with coefficient set functions test">
<filter>
<lowPass>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<coefficientSetFunctions>
<a>@a1@</a>
<a>@a2@</a>
<b>@b0@</b>
<b>@b1@</b>
</coefficientSetFunctions>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</lowPass>
</filter>
</transformation>

Interpolation Serial Transformations


block
default
directionLinear
extrapolateBase
extrapolateConstant
extrapolateExponential
linear

Block

Block
Input

inputVariable

Options

maxGapLength

Output

outputVariable

Description

This transformation fills the gaps in the time series with the last value in the time series before the start of the gap. If a maxGapLength is defined,
the gap will only be filled if the size of the gap is smaller than maxGapLength.

Configuration example
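
A minimal sketch is given below, following the structure of the linear interpolation example further on in this section; the element name block is
assumed from the transformation name and should be verified against the schema.

<interpolationSerial>
<block>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<!-- maxGapLength is optional; gaps larger than this number of time steps are not filled -->
<maxGapLength>5</maxGapLength>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</block>
</interpolationSerial>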

directionLinear

Information

Transformation: Direction Linear

Transformation Group: Interpolation Serial

Description: Fills gaps in a time series that contains direction data values (e.g. wind direction in degrees), using linear interpolation. The
direction values are interpolated over the smallest angle. For example halfway between directions 0 degrees and 350
degrees the interpolated value would be 355 degrees. For a gap between two directions that are exactly opposite (e.g. 90
and 270 degrees) the interpolated values will be equal to the last known direction.

Hydrological Information

Purpose and use of Transformation: Linear interpolation of direction data values (e.g. wind direction in degrees).

Background and Exceptions: The direction values are interpolated over the smallest angle. For example halfway between directions 0 degrees
and 350 degrees the interpolated value would be 355 degrees. For a gap between two directions that are exactly opposite (e.g. 90 and 270
degrees) the interpolated values will be equal to the last known direction, because if two directions are exactly opposite, then it is not possible
to choose which is the smallest angle to interpolate over. Direction values can also be designated to be "varying". Varying values are represented
in the Delft-FEWS graphical user interface with a "?" sign. This transformation handles varying values just like missing values, which means it
replaces varying values with an interpolated value.

Input

Time series with direction data values.

Options
directionRange

The range of the values in the input time series and output time series. For degrees this range could be e.g. 0 to 360 or e.g. -180 to 180. For
radians this range could be e.g. 0 to 2*PI. Input values outside the specified range will be handled like missing values, which means these will be
replaced with an interpolated value.

maxGapLength (optional)

Optional maximum length of gap in number of time steps. Gaps equal to or smaller than maxGapLength will be filled with interpolated values.
Gaps larger than maxGapLength will not be filled. If maxGapLength is not defined, then all gaps will be filled with interpolated values.

Output

Time series with direction data values.

Configuration Example

<interpolationSerial>
<directionLinear>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<directionRange>
<lowerLimit>0</lowerLimit>
<upperLimit>360</upperLimit>
</directionRange>
<maxGapLength>5</maxGapLength>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</directionLinear>
</interpolationSerial>


extrapolateExponential

Extrapolate exponential
Input

inputVariable

Options

extrapolateDirection
baseValue
recessionConstant
maxGapLength

Output

outputVariable

Description

This transformation will fill the gap at the end or start of a time series by using an exponential decay of the last value of the time series before the
gap. The option extrapolateDirection can be used to indicate whether the gap at the start of the time series, at the end of the time series, or both
must be filled. The transformation will extrapolate to the configured base value with the configured recession constant. The value at a certain time
which is n steps away from the start of the gap will be calculated with the following formula:

Y = (Ystartgap - baseValue) * recessionConstant^n + baseValue

If the gap in the time series is larger than the configured maxGapLength the gap will not be filled.

Configuration example
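
A minimal sketch is given below, assuming the option elements follow the names listed above (extrapolateDirection, baseValue,
recessionConstant, maxGapLength) and that extrapolateDirection accepts a value such as "end"; the allowed values should be verified against
the schema.

<interpolationSerial>
<extrapolateExponential>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<!-- option names assumed from the list above; allowed values should be checked against the schema -->
<extrapolateDirection>end</extrapolateDirection>
<baseValue>0</baseValue>
<recessionConstant>0.9</recessionConstant>
<maxGapLength>10</maxGapLength>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</extrapolateExponential>
</interpolationSerial>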

Transformation - InterpolationSerial Linear

Transformation - InterpolationSerial Linear

schema: [Link]

keywords: transformation, interpolation

Description and usage

This transformation function is used to fill inner gaps in a time series. The inner gaps are filled with linearly interpolated data values.

A gap is defined as a number of consecutive values that are unreliable or missing. An inner gap is defined as a gap for which there is at least one
reliable or doubtful value before the gap and at least one reliable or doubtful value after the gap. This function fills only inner gaps.

Each inner gap is filled using linear interpolation between the value just before the gap and the value just after the gap.

This function has an option to define the maximum length of the gaps that should be filled. Gaps that are equal to or smaller than the defined
maximum gap length will be filled with interpolated values. Gaps that are larger than the defined maximum gap length will not be filled.

Input/Output time series

In this function one input time series and one output time series must be identified.

inputVariable: A time series with input values. This will typically contain inner gaps.
outputVariable: A time series in which the output will be stored. The output series will contain all input values and the inner gaps will be
filled.

Configuration

A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.

inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.

outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.

maxGapLength
Optional element defining the maximum length of gaps that should be filled. The length is equal to the number of time steps. Gaps equal to or
smaller than maxGapLength will be filled with interpolated values. Gaps larger than maxGapLength will not be filled. If maxGapLength is not
defined, then all gaps will be filled with interpolated values.

Example

<interpolationSerial>
<linear>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<maxGapLength>5</maxGapLength>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</linear>
</interpolationSerial>


Common issues

None reported.

Related items

[Link]

Interpolation Spatial Transformations


average
bilinear
closestDistance
inputAverageTimesOutputArea
inverseDistance
kriging
max
min
sum
thiessenPolygon
triangulation
weighted

InterpolationBilinear

InterpolationSpatialAverage

Transformation - InterpolationSpatial average

schema: [Link]

keywords: transformation, spatial interpolation

Description and usage

This transformation function is used to calculate the average value of an input time series (grid or scalar) within the area of a polygon of the output
time series.

This transformation can handle three types of input:

scalar
regular grid
irregular grid

When the input is a scalar time series the average value for a certain polygon in the output will be calculated by finding the points in the input time
series which are within the area of the polygon and calculating the average value of these points. When the input is a time series with a grid (regular
or irregular) the transformation will determine which cells of the input time series have an overlap with the output polygon and the average value
of these cells will be calculated. The average value will be a weighted average. The weight of each input cell will be based on how much area of
the input cell covers a certain part of the output polygon.

The output time series can be an output time series with polygons or an irregular/regular grid. However a slow performance is to be expected
with large grids because this transformation is optimized for output time series based on polygons.

The configurator has the possibility to configure a minimum or a maximum value for the output of the transformation. If the output exceeds the
minimum or maximum value configured the output will be truncated to the minimum or maximum value configured.

Input/Output time series

In this function one input time series and one output time series must be identified.

inputVariable: A time series with input values. This can be a scalar time series or time series with a regular/irregular grid.
outputVariable: A time series in which the output will be stored. The output time series can be a time series with polygons or with a
regular grid.

Configuration

A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.

inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.

outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.

minimumValue

Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.

maximumValue

Optional element defining the maximum value of the output time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.

Example

<average>
<minimumValue>0</minimumValue>
<maximumValue>20000</maximumValue>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</average>

Common issues

None reported.

Related items

[Link]

InterpolationSpatialClosestDistance

Transformation - InterpolationSpatial Closest Distance

schema: [Link]

keywords: transformation, spatial interpolation, closest distance

Description and usage

This transformation function finds the closest location/grid cell in the input time series and uses the value of that location/grid cell for the output.

This transformation can handle four types of input:

scalar
regular grid
irregular grid
longitudinal profile

The output can be a:

scalar
regular grid
irregular grid
longitudinal profile

If the time series is not a scalar time series the centre of the grid cell will be used when trying to find the closest input location/grid cell.

When a longitudinal profile is used the profile is considered to be a scattered grid.

The configurator has the possibility to configure a minimum and maximum value for the output. If the output exceeds the minimum or maximum
value the output is truncated to that value.

It is also possible to limit the search radius in which the transformation searches for the closest input location/grid cell. This can be done by
setting the searchRadius in the configuration.

Input/Output time series

In this function one input time series and one output time series must be identified.

inputVariable: a time series with input values. This can be a scalar time series, longitudinal profile or a time series with a regular/irregular
grid.
outputVariable: a time series in which the output will be stored. This can be a scalar time series, longitudinal profile or a time series with a
regular/irregular grid.

Configuration

A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.

inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.

outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.

minimumValue

Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.

maximumValue

Optional element defining the maximum value of the output time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.

searchRadius

Optional element defining the maximum radius in which the transformation searches the closest location/grid cell.

Example

<closestDistance>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<minimumValue>0</minimumValue>
<maximumValue>1000</maximumValue>
<searchRadius>10000</searchRadius>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</closestDistance>

Common issues

None reported.

Related items

[Link]

InterpolationSpatialInverseDistance

Transformation - InterpolationSpatial Inverse distance

schema: [Link]

keywords: transformation, spatial interpolation, inverse distance

Description and usage

This transformation function calculates the output based on the weighted average of the closest input locations/grid cells. The weight of each input
location/grid cell will be calculated by the inverse distance of each location.

This transformation can handle four types of input:

scalar time series


regular grid
irregular grid
longitudinal profile

The output can be a:

scalar time series


regular grid
irregular grid
longitudinal profile

If the time series is not a scalar time series the centre of the grid cell will be used when trying to find the closest input location/grid cell.

When a longitudinal profile is used the profile is considered to be a scattered grid.

The configurator has the possibility to configure a minimum and maximum value for the output. If the output exceeds the minimum or maximum
value the output is truncated to that value.

It is also possible to limit the search radius in which the transformation searches for the closest input location/grid cell. This can be done by
setting the searchRadius in the configuration.

The weight of each input value in the output is computed from the inverse distance from the input location/grid cell to the output location/grid cell.
The power to which the distance is raised in this calculation can be configured. It is also possible to configure the maximum total number of input
values which are used to calculate the output. First the transformation will try to find the closest input locations/grid cells which should be used in
the calculation. If one or more of the input values of these time series are missing values, the transformation will not search for the next
closest locations/grid cells but will ignore these values in the calculation.

Input/Output time series

In this function one input time series and one output time series must be identified.

inputVariable: a time series with input values. This can be a scalar time series, longitudinal profile or a time series with a regular/irregular
grid.
outputVariable: a time series in which the output will be stored. This can be a scalar time series, longitudinal profile or a time series with a
regular/irregular grid.

Configuration

A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.

inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.

outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.

minimumValue

Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.

maximumValue

Optional element defining the maximum value of the output time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.

searchRadius

Required element defining the maximum radius in which the transformation searches the closest location/grid cell.

inverseDistancePower

Required element defining the power to which the distance is raised to calculate the inverse-distance weight factor of the input location/grid cell.

numberOfPoints

Required element defining the maximum number of points/grid cells which will be used to calculate the output.

Example

<inverseDistance>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<minimumValue>0</minimumValue>
<maximumValue>10000</maximumValue>
<searchRadius>100000</searchRadius>
<inverseDistancePower>2</inverseDistancePower>
<numberOfPoints>3</numberOfPoints>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</inverseDistance>

Common issues

None reported.

Related items

[Link]

InterpolationSpatialMax
Max: Calculates the maximum of the input values within the output polygons.

Information

Transformation: max

Transformation Group: InterpolationSpatial

Description: Calculates the maximum of the input values within each polygon specified in the output.

Hydrological Information

Purpose and use of Transformation: Can be used to compute the maximum radar cell value within a catchment (polygon).

Background and Exceptions:

Input

Input can be one grid or one or more scalars.

Output

Output can be one or more polygons.

Configuration Example

<interpolationSpatial>
<max>
<inputVariable>
<variableId>grid</variableId>
</inputVariable>
<outputVariable>
<variableId>polygon1</variableId>
</outputVariable>
</max>
</interpolationSpatial>


InterpolationSpatialMin
Min: Calculates the minimum of the input values within the output polygons.

Information

Transformation: min

Transformation Group: InterpolationSpatial

Description: Calculates the minimum of the input values within each polygon specified in the output.

Hydrological Information

Purpose and use of Transformation: Can be used to compute the minimum radar cell value within a catchment (polygon)

Background and Exceptions:

Input

Input can be one grid or one or more scalars.

Output

Output can be one or more polygons.

Configuration Example

<interpolationSpatial>
<min>
<inputVariable>
<variableId>grid</variableId>
</inputVariable>
<outputVariable>
<variableId>polygon1</variableId>
</outputVariable>
</min>
</interpolationSpatial>


InterpolationSpatialSum

Transformation - InterpolationSpatial sum

schema: [Link]

keywords: transformation, spatial interpolation, sum

Description and usage

This transformation function is used to calculate the sum of an input time series (grid or scalar) within the area of a polygon of the output time
series.

This transformation can handle three types of input:

scalar
regular grid
irregular grid

When the input is a scalar time series the sum for a certain polygon in the output will be calculated by finding the points in the input time series
which are within the area of the polygon and calculating the sum of the input values of these points. When the input is a time series with a grid
(regular or irregular) the transformation will determine which cells of the input time series have an overlap with the output polygon and the sum of
these cells will be calculated. If an input cell is only partly within the output polygon, its value will only be accounted for the part which covers
the output polygon.

The output time series can be an output time series with polygons or an irregular/regular grid. However a slow performance is to be expected
with large grids because this transformation is optimized for output time series based on polygons.

The configurator has the possibility to configure a minimum or a maximum value for the output of the transformation. If the output exceeds the
minimum or maximum value configured the output will be truncated to the minimum or maximum value configured.

Input/Output time series

In this function one input time series and one output time series must be identified.

inputVariable: A time series with input values. This can be a scalar time series or time series with a regular/irregular grid.
outputVariable: A time series in which the output will be stored. The output time series can be a time series with polygons or with a
regular grid.

Configuration

A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.

inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.

outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.

minimumValue

Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.

maximumValue

Optional element defining the maximum value of the output time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.

Example

<sum>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<minimumValue>0</minimumValue>
<maximumValue>10000</maximumValue>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sum>

Common issues

None reported.

Related items

[Link]

InterpolationSpatialWeighted

Information

Transformation: Weighted

Transformation Group: InterpolationSpatial

Description: For each time step this transformation calculates the weighted average of the input variables. The weights are re-scaled so
that the total weight becomes 1. If for a given time an input variable has a missing value, then for that time that input
variable is ignored and the weights of the other input variables are re-scaled so that the total weight becomes 1.

Hydrological Information

Purpose and use of Transformation: This transformation can for example be used to calculate the weighted average of the amount of rainfall of a
number of locations in a catchment.

Background and Exceptions:

Input

One or more weighted input variables. Each input variable has a weight.

Options
minInputValuesRequired (optional)

This is the minimum number of input variables that should have a non-missing value for the calculation. If for a given time the number of input
variables that have a non-missing value is less than this configured minimum, then for that time the output value will be a missing value. This can
be used for example to avoid getting output values of calculations for which very few input variables are available, because such calculations
would be inaccurate. If minInputValuesRequired is not specified, then it will be set to 1.
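For illustration, using the weights from the configuration example below (0.3, 0.2, 0.1 and 0.4) and minInputValuesRequired set to 2: if the input variable with weight 0.4 has a missing value at a given time, the remaining weights are re-scaled to 0.5, 0.333 and 0.167 so that they again sum to 1, and the weighted average is still calculated. If three of the four input variables are missing, only one non-missing value remains and the output for that time is a missing value.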

Output

Weighted average.

Configuration Example

<interpolationSpatial>
<weighted>
<weightedInputVariable>
<inputVariable>
<variableId>location1</variableId>
</inputVariable>
<weight>0.3</weight>
</weightedInputVariable>
<weightedInputVariable>
<inputVariable>
<variableId>location2</variableId>
</inputVariable>
<weight>0.2</weight>
</weightedInputVariable>
<weightedInputVariable>
<inputVariable>
<variableId>location3</variableId>
</inputVariable>
<weight>0.1</weight>
</weightedInputVariable>
<weightedInputVariable>
<inputVariable>
<variableId>location4</variableId>
</inputVariable>
<weight>0.4</weight>
</weightedInputVariable>
<minInputValuesRequired>2</minInputValuesRequired>
<outputVariable>
<variableId>average</variableId>
</outputVariable>
</weighted>
</interpolationSpatial>


Lookup transformations
conditional
multiDimensional
simple

Multidimensional

Multidimensional
Input

rowIndexLookupVariable
columnIndexLookupVariable

Coefficient set

interpolationType
extrapolationType
rowIndexLookupTable
columnIndexLookupTable
outputLookupTable

Output

output

Description

The output value will be determined by looking up an output value in the output lookup table. To calculate the output value, the position of the output value in the outputLookupTable must be calculated.

First the row position will be calculated. The input time series rowIndexLookupVariable provides the lookup value for the rowIndexLookupTable
which is a simple 1-dimensional lookup table which will provide the row position. In the same way the column position will be calculated. The
lookup value will be provided by the columnIndexLookupVariable time series and the column position will be calculated by doing a simple table
lookup in the columnIndexLookupTable with the lookup value.

When the row position and the column position of the output value in the outputLookupTable are determined, it is possible to calculate the output value. The output value is then determined by linear interpolation between the 4 surrounding nodes in the outputLookupTable.

Configuration Example
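The guide gives no example at this point, so the sketch below is an assumption: the lookup/multiDimensional wrapper names are taken from the section titles above, the variable ids are placeholders, and the record layout of the three lookup tables is not documented in this section and is therefore only indicated by a comment.

<lookup>
<multiDimensional>
<!-- wrapper element names are assumed, not taken from the schema -->
<rowIndexLookupVariable>
<variableId>rowInput</variableId>
</rowIndexLookupVariable>
<columnIndexLookupVariable>
<variableId>columnInput</variableId>
</columnIndexLookupVariable>
<coefficientSet>
<interpolationType>linear</interpolationType>
<extrapolationType>maxMin</extrapolationType>
<!-- rowIndexLookupTable, columnIndexLookupTable and outputLookupTable are configured here;
their record layout is not described in this section and is therefore omitted -->
</coefficientSet>
<output>
<variableId>output</variableId>
</output>
</multiDimensional>
</lookup>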

Simple

Simple
Input

input

Coefficient set

interpolationType
extrapolationType
lookupTable

Output

output

Description

The output will be calculated using a simple table lookup with the input value. The output value will be calculated by interpolation or extrapolation in the lookup table. The type of interpolation can be configured in the coefficient set with the interpolation type option. The options available are: linear and logarithmic. When the input value is outside the range of the lookup table, the behaviour of the transformation will be determined by the configured extrapolation type.

The options available are:

notAllowed,
maxMin,
linear

If the first option, notAllowed, is configured, an input value outside the range will return a missing value. The second option, maxMin, will return the minimum or the maximum value in the lookup table. The third option, linear, enables linear extrapolation outside the range of the table.
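For illustration (table values assumed): with a lookup table running from input 0.4 (output 0) to input 5.0 (output 420), extrapolation type maxMin returns 0 for an input of 0.2 and 420 for an input of 6.0, whereas notAllowed returns a missing value for both.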

Merge Transformations

Merge Transformations

forecasts
mean
simple
synoptic
toggle

Simple Merge

Simple
Input

inputVariable ( 1 or more input time series)

Options

fillGapConstant

Output

outputVariable

Description

This transformation will perform a simple merge operation that functions as a data hierarchy. The picture below shows the functionality of a merge operation. First the most important time series is evaluated; if a value exists at a time in this series then this value will be used. If this is not the case, the second time series (series 2 in the example) will be evaluated. This procedure will continue until a valid value is found in one of the time series or until all time series have been evaluated. If no valid value exists at time x in any of the input time series then a missing value will be returned, unless the user has specified a default value with the fillGapConstant option.

The hierarchy of the input will be determined by the order in which the input time series are listed in the configuration. In the configuration example inputA would be evaluated before inputB.

Configuration example
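No example is shown in this guide for the simple merge, so the sketch below is an assumption: the merge/simple wrapper and the placement of fillGapConstant follow the pattern of the other transformations in this guide, and the variable ids inputA, inputB and output are placeholders matching the description above.

<merge>
<simple>
<!-- element names and ordering are assumed, not taken from the schema -->
<inputVariable>
<variableId>inputA</variableId>
</inputVariable>
<inputVariable>
<variableId>inputB</variableId>
</inputVariable>
<fillGapConstant>0</fillGapConstant>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</simple>
</merge>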

Review transformations
StageReview
TidalBalanceReview

Stage Review

Stage Review
Input

observedStage
forecastStage

Options

maxDifference

Output

averageBalanceFirstSegment
averageBalanceSecondSegment
averageBalanceThirdSegment
averageBalanceFourthSegment
startFirstStageRange
startSecondStageRange
startThirdStageRange
startFourthStageRange
endFourthStageRange

Description

The stage review transformation will divide the forecastStage range into four equal segments. The lowest segment will start at the lowest forecastStage and the fourth segment will end at the highest forecastStage. The start is rounded down to whole metres and the end is rounded up to whole metres. After the start of the first segment and the end of the fourth segment are calculated, the ranges of the remaining segments are calculated. Secondly, for each day and for each segment an average daily balance is calculated.

Differences larger than maxDifference will not be used in the calculation.

Configuration example

TidalBalanceReview

TidalBalanceReview
Input

observedTidalStage
forecastTidalStage

Output

tideBalance

Description

The TidalBalanceReview transformation creates an output time series tideBalance which will be the input for the AdjustTide transformation. First the peaks and valleys in the observed time series and the forecast time series are matched. The difference between the observed stage and the simulated stage will be the balance associated with the peak or valley. This procedure applies to the part of the time series before T0. After T0 the times of the peaks and valleys will be determined by identifying the peaks and valleys in the simulated time series and correcting the time of each peak or valley with the lag of the observed tidal time series relative to the simulated time series. The balance will be calculated by multiplying the balance of the associated peak or valley of the previous cycle by 0.8.

Configuration example

StageDischarge transformations
mergedRatingCurves
power
table
ratingCurve

StageDischargeMergedRatingCurves

Information

Transformation: MergedRatingCurves

Transformation Group: StageDischarge

Description: Merges two rating curves using a time dependent weight variable and uses the resulting rating curve to convert stage input
values to discharge output values. For each timeStep in the output time series, first the specified two rating curves are
merged using the value of the weight input time series at that timeStep. If weight is 1, then uses the first rating curve. If
weight is 0, then uses the second rating curve. If weight is between 0 and 1, then interpolates linearly between the first and
the second rating curve to get the merged rating curve. Then the merged rating curve is used to convert the stage input
value for that timeStep to a discharge output value. This can only use rating curves that are stored as time series in the
dataStore. This uses the equation Q_output = weight*Q_ratingCurve1(H_input) + (1 - weight)*Q_ratingCurve2(H_input).

Hydrological Information

Purpose and use of Transformation: This can be used e.g. for a river reach with a lot of vegetation in the summer resulting in a higher hydraulic roughness. Then, you might want to handle a rating curve for the winter period (a level of 1 m corresponds to 5 m3/s) and one for the summer (the same water level represents only 3 m3/s due to the higher roughness). The weight value can be used for shifting in between: weight=0 for the winter, weight=1 for the summer, and a weight value of 0.5 for a certain time in spring when vegetation is growing.

Background and Exceptions: The weight value must always be in the range 0 <= weight <= 1. If the ratingCurve(s) are not found, then a warning message is logged and the output is set to missing values.

Input

stage input variable with stage (water level) values.
weight input variable with weight values.

ratingCurve

The transformation configuration references two rating curves that are merged and used to convert stage to discharge values for this transformation. This can only use rating curves that are stored as time series in the dataStore. To import ratingCurves into the dataStore, use the timeSeriesImport module with importType pi_ratingcurves to import a file in the pi_ratingcurves.xsd format. The ratingCurves are referenced using their locationId and qualifierId. If no locationId is specified, then the locationId of the stage input variable is used.

Output

discharge output variable with discharge (water flow) values.

Configuration Example

<stageDischarge>
<mergedRatingCurves>
<stage>
<variableId>input</variableId>
</stage>
<weight>
<variableId>eta</variableId>
</weight>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>winterRatingCurve</qualifierId>
</ratingCurve>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>summerRatingCurve</qualifierId>
</ratingCurve>
<discharge>
<variableId>output</variableId>
</discharge>
</mergedRatingCurves>
</stageDischarge>


StageDischargePower

Information

Transformation: Power

Transformation Group: StageDischarge

Description: Converts stage (H) to discharge (Q) for an open cross section using a power equation with coefficients a, b and c (see the coefficient set below).

Hydrological Information

Purpose and use of Transformation: Used to convert stage (water level) to discharge (water flow) for an open cross section.

Background and Exceptions:

Input

stage input variable with stage (water level) values.

CoefficientSets or CoefficientSetFunctions

The coefficient set should contain the a, b and c coefficients of the power equation and the type of calculations for which the coefficient set is valid.

When using coefficient set functions (available since build 30246), the a, b, c and type elements can contain tags between "@" signs (e.g.
"@NUMBER@") that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values.
These values can be different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more
information.

a

Coefficient a in the equation.

b

Coefficient b in the equation.

c

Coefficient c in the equation.
type

Type of calculations for which the coefficient set is valid. Can be level_to_flow, flow_to_level or level_to_flow_and_flow_to_level.

Output

discharge output variable with discharge (water flow) values.

Configuration Example

<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="stage discharge power test">
<stageDischarge>
<power>
<stage>
<variableId>input</variableId>
</stage>
<coefficientSet>
<a>57.632</a>
<b>3.01</b>
<c>2.147</c>
<type>level_to_flow_and_flow_to_level</type>
</coefficientSet>
<discharge>
<variableId>output</variableId>
</discharge>
</power>
</stageDischarge>
</transformation>

The example below uses coefficientSetFunctions (available since build 30246). Here the elements 'a', 'b', 'c' and 'type' are defined in
coefficientSetFunctions, where @A@, @B@ and @C@ refer to location number attributes and @type@ refers to a location text attribute defined
in the locationSets configuration file.

<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="stage discharge power with coefficient set functions test">
<stageDischarge>
<power>
<stage>
<variableId>input</variableId>
</stage>
<coefficientSetFunctions>
<a>@A@</a>
<b>@B@</b>
<c>@C@</c>
<type>@type@</type>
</coefficientSetFunctions>
<discharge>
<variableId>output</variableId>
</discharge>
</power>
</stageDischarge>
</transformation>

StageDischarge table

Table
Input

stage

Coefficient set

authoriseExtrapolation
interpolationType
minimumStage
tableRecord

Output

discharge

Description

This transformation will transform a stage value to a discharge value by doing a table lookup. The coefficient set used in this transformation has an option type. The type indicates whether the lookup table can be used in a discharge to stage transformation, a stage to discharge transformation or both. If a coefficient set which is defined as a flow_to_level type is used in this type of transformation, an error will be issued. The authoriseExtrapolation option will enable/disable extrapolation. The interpolationType option can be used to configure the type of interpolation used.

The available options are:

linear
logarithmic

When the option logarithmic is selected, the calculation method used is almost the same as the method used when the linear option is selected. The only difference is that the calculation is done with the natural logarithm of the lookup value and with the natural logarithm of the table values.

The minimumStage option allows configurators to enter a minimum stage value. Stage values lower than the minimum value will return the lowest discharge value in the lookup table.

The table record is the actual lookup table. Each tableRecord is a single entry in the lookup table with a stage and a discharge value. Note that it is also possible to define an offset for each tableRecord. This offset will be applied as a positive offset to the stage value. An offset applies to the tableRecord in which it is defined and the records above this record, until a new offset is defined.

Configuration example

<stageDischarge>
<table>
<stage>
<variableId>input</variableId>
</stage>
<coefficientSet>
<type>level_to_flow</type>
<authoriseExtrapolation>true</authoriseExtrapolation>
<interpolationType>linear</interpolationType>
<tableRecord discharge="0" stage="0.433"/>
<tableRecord discharge="0.0595" stage="0.457"/>
<tableRecord discharge="0.190" stage="0.488"/>
<tableRecord discharge="1.84" stage="0.610"/>
<tableRecord discharge="3.85" stage="0.686"/>
<tableRecord discharge="6.71" stage="0.762"/>
<tableRecord discharge="14.9" stage="0.914"/>
<tableRecord discharge="306" stage="4.88"/>
<tableRecord discharge="340" stage="4.95"/>
<tableRecord discharge="377" stage="5.03"/>
<tableRecord discharge="408" stage="5.09"/>
<tableRecord discharge="419" stage="5.11"/>
</coefficientSet>
<discharge>
<variableId>output</variableId>
</discharge>
</table>
</stageDischarge>


Statistics Summary Transformations


A StatisticsSummary transformation will compute the configured statistic function for the input values to get one result value (the summary). This
uses only the input values within the relativeViewPeriod that is defined in the timeSeriesSet of the output variable. The result output value is
stored at T0 (time zero) in the output timeSeries.
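For illustration: if the relativeViewPeriod of the output variable covers the 10 days before T0 and the configured function is max, the transformation writes a single value at T0, namely the largest input value found within that 10-day period.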

The available statistic functions are:

count
kurtosis
max
mean

median
min
percentileExceedence
percentileNonExceedence
quartile
rootMeanSquareError
rsquared
skewness
standardDeviation
sum
variance

Structure Transformations

Structure Transformations

The following structure transformations are available:

crumpWeir: Converts stage to discharge.


crumpWeirBackwater: Converts stage to discharge.
flatVWeir: Converts stage to discharge.
flatVWeirBackwater: Converts stage to discharge.
generalWeirFixedHeight: General weir function which has a fixed height. The height is configured in the coefficientSet.
generalWeirVariableHeight: General weir function which has a variable height. The height is one of the input variables.
pumpFixedDischarge: Calculates discharge of a pump, using a fixed discharge when the pump is on. The fixed discharge is equal to the
capacity of the pump and is defined in a coefficientSet.
pumpHeadDischargeTable: Calculates discharge of a pump. When the pump is on, then the discharge equals the capacity of the pump.
The capacity of the pump depends on the head. The discharges for different heads are defined in a table in a coefficientSet.
pumpSpeedDischargeTable: Calculates discharge of a speed-controlled pump with a fixed capacity. When the pump is on, then the
discharge of the pump depends only on the speed. The discharges for different speeds are defined in a table in a coefficientSet.
pumpSpeedHeadDischargeTable: Calculates discharge of a speed-controlled pump with a head-dependent capacity. When the pump is
on, then the discharge of the pump depends on both the speed and the head. The discharges for different speeds and heads are defined
in a table in a coefficientSet.

crumpWeir
A Crump weir is a standard design weir for moderate flow rates.
Note: When the downstream level of water in the river should be taken into account for backwater correction, use the crumpWeirBackwater
transformation instead of the crumpWeir transformation.

Input

1. headLevel: is the upstream level of water in the river measured from the top of the crest.
2. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.

Coefficient set

1. pUpValue: is the distance in metres from the bottom of the river, or crest, to the top of the crest.
2. width: is the width of the weir crest in metres.
3. sSlope: is the side slope of the weir. Crump weirs that have side slopes are uncommon, and if present only possible on one crest. The slope is the ratio of the horizontal distance to a vertical distance of one metre (1 m), expressed as a number.
4. dischargeCoefficient: weir discharge coefficient (default is equal to sqrt(g)).
5. energyHeadCorrection: if true energy head correction is taken into account (default is true).

Furthermore, one to three crests can be defined. The first crest is mandatory and only contains the relativeLevel attribute. The relativeLevels are
the required head level adjustments for each crest of the weir. Each relativeLevel is subtracted from the headLevel to give a true indication of
what the actual head level is over each crest. This is required as each crest at a particular weir may have different heights.
The second and third crests also must contain the width attribute. The user has the choice of entering one of the following:

1. singleCrest: to define only crest1


2. doubleCrest: to define crest1 and crest2
3. tripleCrest: to define crest1, crest2 and crest3

Output

1. discharge: discharge of the weir.

Description

Calculates discharge of a triangular profile or Crump weir. The flow calculations are done using measurements taken at the weir.
The following formula is used:

Parameters:

P = height of weir or height of vertex above bottom


b = width of weir opening or width of triangle
h1 = upstream water level
H1 = upstream energy level
h2 = downstream water level

Q = Cg . Cd . b . H1^(3/2)

Cg = sqrt(g)
Cd = 0.633

The modular limit h2/h1 = 0.75.

If this value is exceeded a missing value will be entered for the discharge. Further limiting conditions are:
P >= 0.06 m
b >= 0.30 m
h1/P <= 3
b/h1 >= 2
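For illustration of the formula above (values assumed): with P = 0.5 m, b = 5 m and H1 = h1 = 0.8 m (so that h1/P = 1.6 and b/h1 = 6.25 satisfy the limiting conditions), Q = sqrt(9.81) . 0.633 . 5 . 0.8^(3/2), which is approximately 7.1 m3/s.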

crumpWeirBackwater
A Crump weir backwater is a standard design weir for moderate flow rates.

Input

1. headLevel: is the upstream level of water in the river measured from the top of the crest.
2. tailLevel: is the downstream level of water in the river measured from the top of the crest and can be positive or negative.
3. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.

Coefficient set

1. pUpValue: is the distance in metres from the bottom of the river, or crest, to the top of the crest.
2. width: is the width of the weir crest in metres.
3. sSlope: is the side slope of the weir. Crump weirs that have side slopes are uncommon, and if present only possible on one crest. The slope is the ratio of the horizontal distance to a vertical distance of one metre (1 m), expressed as a number.
4. dischargeCoefficient: weir discharge coefficient (default is equal to sqrt(g)).
5. energyHeadCorrection: if true energy head correction is taken into account (default is true).

Furthermore, one to three crests can be defined. The first crest is mandatory and only contains the relativeLevel attribute. The relativeLevels are
the required head level adjustments for each crest of the weir. Each relativeLevel is subtracted from the headLevel to give a true indication of
what the actual head level is over each crest. This is required as each crest at a particular weir may have different heights.
The second and third crests also must contain the width attribute. The user has the choice of entering one of the following:

1. singleCrest: to define only crest1


2. doubleCrest: to define crest1 and crest2
3. tripleCrest: to define crest1, crest2 and crest3

Output

1. discharge: discharge of the weir.

Description

Calculates discharge of a triangular profile or Crump weir with backwater correction. The flow calculations are done using measurements taken at
the weir.

flatVWeir
Flat V weirs are used to calculate the flow of a river or stream. Predominantly, Flat V weirs are used where the flow rates are low and river
sections quite narrow.
Note: When the downstream level of water in the river should be taken into account for backwater correction, use the flatVWeirBackwater
transformation instead of the flatVWeir transformation.

Input

1. headLevel: is the upstream level of water in the river measured from the top of the crest at the bottom of the V.
2. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.

Coefficient set

1. pUpValue: is the distance in metres from the bottom of the river to the top of the crest.
2. width: is the width of the weir crest in metres. Note that there can only be one crest at a Flat V weir.
3. cSlope: is the slope of the "V" at the crest. The slope is the ratio of the horizontal distance to a vertical distance of one metre (1 m), expressed as a number.
4. sSlope: is the side slope of the weir. Most Flat V weirs do not have a side slope. The slope is the ratio of the horizontal distance to a vertical distance of one metre (1 m), expressed as a number. There are two different calculation methods for Flat V weirs, identified by either having the s-slope equal to 9999 or an actual value.

Output

1. discharge: discharge of the weir.

Description

Calculates discharge of a flat v weir. The flow calculations are done using measurements taken at the weir.
The following formula is used:

Parameters:

hr = reference level
P = height of weir or height of vertex above bottom
L = length of weir
b = width of weir opening or width of triangle
h1 = upstream water level
H1 = upstream energy level
H2 = downstream energy level
htr = height of triangle
B = width of channel
ha = height of opening

For h1 <= htr :

Q = Cg . Cd . m . H1^(5/2)

Cg = 4/5 sqrt(g)
Cd = 0.615 for m <= 15
Cd = 0.620 for 15 < m < 30
Cd = 0.625 for m >= 30
m = b/2htr

For h1 > htr :

Q = Cg . Cd . m . (H1^(5/2) - (h1 - htr)^(5/2))

Cd = 0.620 for m <= 15


Cd = 0.625 for 15 < m < 30
Cd = 0.630 for m >= 30

The modular limit is:

for h1 <= htr h2/h1 = 0.70


for h1 > htr h2/h1 = 0.75

If the modular limit is exceeded a missing value will be entered for the discharge.
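For illustration of the formulas above (values assumed): with b = 20 m and htr = 0.5 m, m = b/2htr = 20, so for h1 <= htr the coefficient Cd = 0.620; for H1 = h1 = 0.4 m this gives Q = 4/5 . sqrt(9.81) . 0.620 . 20 . 0.4^(5/2), which is approximately 3.1 m3/s.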

flatVWeirBackwater
Flat V weirs are used to calculate the flow of a river or stream. Predominantly, Flat V weirs are used where the flow rates are low and river
sections quite narrow.

Input

1. headLevel: is the upstream level of water in the river measured from the top of the crest at the bottom of the V.
2. tailLevel: is the downstream level of water in the river measured from the top of the crest at the bottom of the V and can be positive or
negative.
3. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.

Coefficient set

1. pUpValue: is the distance in metres from the bottom of the river to the top of the crest.
2. width: is the width of the weir crest in metres. Note that there can only be one crest at a Flat V weir.
3. cSlope: is the slope of the "V" at the crest. The slope is the ratio of the horizontal distance to a vertical distance of one metre (1 m), expressed as a number.
4. sSlope: is the side slope of the weir. Most Flat V weirs do not have a side slope. The slope is the ratio of the horizontal distance to a vertical distance of one metre (1 m), expressed as a number. There are two different calculation methods for Flat V weirs, identified by either having the s-slope equal to 9999 or an actual value.

Output

1. discharge: discharge of the weir.

Description

Calculates discharge of a flat v weir with backwater correction. The flow calculations are done using measurements taken at the weir.

StructurePumpFixedDischarge Transformation

PumpFixedDischarge
Input

1. status: pump status (on = 1 or off = 0). Can be equidistant or non-equidistant.

Coefficient set

Contains the fixed discharge of the pump.

Output

1. discharge: discharge of the pump.

Description

Calculates discharge of a pump, using a fixed discharge when the pump is on. The fixed discharge is equal to the capacity of the pump and is
defined in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present in the status input
series. At a given time t1 the calculation uses the most recent status input value before t1 to determine if the pump is on. If the pump is off, then
the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals fixedDischarge*(t1 - t0). t0 is the most recent
input time before t1 and fixedDischarge is defined in the coefficientSet. Finally the intermediate discharge is aggregated to the times in the
equidistant output time series.

StructurePumpHeadDischargeTable Transformation

PumpHeadDischargeTable
Input

1. status: pump status (on = 1 or off = 0). Can be equidistant or non-equidistant.


2. head: difference between output and input water level for the pump. Head = downstreamWaterLevel - upstreamWaterLevel. Can be
equidistant or non-equidistant.

Coefficient set

Contains a table with one or more table records. Each record lists the discharge of the pump for a given head. Heads need to be in ascending
order. For head values between records linear interpolation will be applied to get the discharge. For head values outside the table range a
warning will be logged and the discharge will be equal to the first (or last) discharge defined in the table.
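For illustration (table values assumed): with records (head 1.0 m, discharge 5 m3/s) and (head 2.0 m, discharge 3 m3/s), a head of 1.5 m gives a discharge of 4 m3/s by linear interpolation, while a head of 2.5 m lies outside the table, so a warning is logged and the discharge of the last record, 3 m3/s, is used.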

Output

1. discharge: discharge of the pump.

Description

Calculates discharge of a pump. When the pump is on, then the discharge equals the capacity of the pump. The capacity of the pump depends on
the head. The discharges for different heads are defined in a table in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present either in the status
input series or in the head input series or in both input series. At a given time t1 the calculation uses the most recent status input value before t1
to determine if the pump is on and the most recent head input value before t1 to lookup the discharge (= previousDischarge) in the head
discharge table. If the pump is off, then the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals
previousDischarge*(t1 - t0). t0 is the most recent input time before t1 (either status or head input time, whichever changed most recently). Finally
the intermediate discharge is aggregated to the times in the equidistant output time series.

StructurePumpSpeedDischargeTable Transformation

PumpSpeedDischargeTable

Input

1. status: pump status (on = 1 or off = 0). Can be equidistant or non-equidistant.


2. speed: a speed value is either the pump speed (Hz) or a percentage or fraction of the capacity of the pump. Can be equidistant or
non-equidistant.

Coefficient set

Contains a table with one or more table records. Each record lists the discharge of the pump for a given speed. Speeds need to be in ascending
order. For speed values between records linear interpolation will be applied to get the discharge. For speed values outside the table range a
warning will be logged and the discharge will be equal to the first (or last) discharge defined in the table.

Output

1. discharge: discharge of the pump.

Description

Calculates discharge of a speed-controlled pump with a fixed capacity. When the pump is on, then the discharge of the pump depends only on the
speed. The discharges for different speeds are defined in a table in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present either in the status
input series or in the speed input series or in both input series. At a given time t1 the calculation uses the most recent status input value before t1
to determine if the pump is on and the most recent speed input value before t1 to lookup the discharge (= previousDischarge) in the speed
discharge table. If the pump is off, then the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals
previousDischarge*(t1 - t0). t0 is the most recent input time before t1 (either status or speed input time, whichever changed most recently). Finally
the intermediate discharge is aggregated to the times in the equidistant output time series.

StructurePumpSpeedHeadDischargeTable Transformation

PumpSpeedHeadDischargeTable
Input

1. status: pump status (on = 1 or off = 0). Can be equidistant or non-equidistant.


2. speed: a speed value is either the pump speed (Hz) or a percentage or fraction of the capacity of the pump. Can be equidistant or
non-equidistant.
3. head: difference between output and input water level for the pump. Head = downstreamWaterLevel - upstreamWaterLevel. Can be
equidistant or non-equidistant.

Coefficient set

Contains a table with one or more table records. Each record contains the discharge of the pump for a particular speed value and a particular
head value. The records need to be sorted on speed. The speed values need to be in ascending order and for each speed value the
corresponding head values need to be in ascending order. For speed or head values between the listed values linear interpolation will be applied
to get the discharge. For speed or head values outside the range of listed values a warning will be logged and the first (or last) defined values will
be used to get the discharge.
For given head and speed input values, the calculation will lookup a discharge value as follows. For each listed speed value the corresponding
head and discharge values are used to create a head discharge table. Then for each listed speed value the corresponding head discharge table is
used to lookup the discharge value corresponding to that listed speed value and the head input value. This way a temporary speed discharge
table is created. Then the speed input value is looked up in the temporary speed discharge table to get the final discharge value.
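For illustration (table values assumed): with records (speed 25, head 1.0, discharge 4), (speed 25, head 2.0, discharge 3), (speed 50, head 1.0, discharge 8) and (speed 50, head 2.0, discharge 6), input values speed 37.5 and head 1.5 first give the per-speed discharges 3.5 (at speed 25) and 7.0 (at speed 50); interpolating these over speed then gives a final discharge of 5.25.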

Output

1. discharge: discharge of the pump.

Description

Calculates discharge of a speed-controlled pump with a head-dependent capacity. When the pump is on, then the discharge of the pump depends
on both the speed and the head. The discharges for different speeds and heads are defined in a table in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present in one or more of the
different input series. At a given time t1 the calculation uses the most recent status input value before t1 to determine if the pump is on and the
most recent speed input value before t1 and the most recent head input value before t1 to lookup the discharge (= previousDischarge) in the
coefficient set tables. If the pump is off, then the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals
previousDischarge*(t1 - t0). t0 is the most recent input time before t1 (either status, speed or head input time, whichever changed most recently).
Finally the intermediate discharge is aggregated to the times in the equidistant output time series.

TimeShift
constant
length
variable

Constant

TimeShift Constant
Input

inputVariable, the time series which has to shift a certain number of time steps

Options

numberOfTimeSteps

Output

outputVariable, the shifted time series.

Description

The option numberOfTimeSteps defines the number of time steps over which the transformation shifts the time series. A positive value will shift the time series to the future. In the configuration example below, the output time series shiftedInput is shifted backwards by 1 time step relative to the input time series input.

Configuration example
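No example is shown here, so the following is a sketch only: the timeShift/constant wrapper names are assumed from the section titles above, while inputVariable, numberOfTimeSteps and outputVariable follow the option names in this description; the variable ids input and shiftedInput match the text above.

<timeShift>
<constant>
<!-- wrapper element names are assumed, not taken from the schema -->
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<numberOfTimeSteps>-1</numberOfTimeSteps>
<outputVariable>
<variableId>shiftedInput</variableId>
</outputVariable>
</constant>
</timeShift>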

User Transformations

User Transformations

userSimple
userPeriodic

UserPeriodic Transformation

UserPeriodic
Input

It is possible to define embedded variables in this transformation. In the expression both embedded variables and variables defined at the start of
the transformations configuration file can be used. If an embedded variable and a variable defined at the start of the transformations configuration
file have the same variableId, then the embedded variable will be used.

Expression

For instance "X1 + X2 * 3". In the expression, input variables or coefficients can be referenced using their id, e.g. "X1 + a" where "X1" is the variableId of a
variable defined elsewhere and "a" is the id of a coefficient defined in a coefficientSet. A variableId or coefficientId should not start with a
numerical character and should not contain operators. The following operators can be used in the expression: +, -, /, *, ^, sin, cos, tan, asin, acos,
atan, sinh, cosh, tanh, asinh, acosh, atanh, log, ln, exp, sqrt, abs, pow. "pi" in lowercase letters is recognised as a standard constant. This means
that the user cannot use variables or coefficients with id "pi".

Coefficient set

Should contain the coefficients that are used in the free format expression. Define the ids and values of the coefficients here, then refer to the ids of these coefficients in the expression. Make sure that for all the coefficient ids in the free format expression the values are defined here.

Periodic Output Range

Output values will be shifted periodically to within this range, e.g. [0, 360]. The lower and upper limits are inclusive.

Output

1. output: result of the evaluated expression, shifted periodically to within the given output range.

Description

Function specified by a custom free format expression and coefficients. Any number of input variables and coefficients can be used in the free
format expression. The expression may contain general mathematical operators. A function parser is used to evaluate the expression. For each
time step in the output time series the expression is evaluated. Each result is shifted periodically to within the given output range and written to the
output time series.
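Configuration example

No example is given in this guide for userPeriodic, so the sketch below is an assumption: the user/periodic wrapper follows the userSimple pattern shown further down, and the element names for the periodic output range (periodicOutputRange, lowerLimit, upperLimit) are hypothetical placeholders for the [0, 360] range mentioned above.

<user>
<periodic>
<expression>X1 + a</expression>
<coefficientSet>
<coefficient id="a" value="180"/>
</coefficientSet>
<!-- periodicOutputRange and its child elements are hypothetical names, not taken from the schema -->
<periodicOutputRange>
<lowerLimit>0</lowerLimit>
<upperLimit>360</upperLimit>
</periodicOutputRange>
<outputVariable>
<variableId>Y1</variableId>
</outputVariable>
</periodic>
</user>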

UserSimple Transformation

UserSimple
Input

It is possible to define embedded variables in this transformation. In the expression both embedded variables and variables defined at the start of
the transformations configuration file can be used. If an embedded variable and a variable defined at the start of the transformations configuration
file have the same variableId, then the embedded variable will be used.

Expression

For instance "X1 + X2 * 3" (without the quotes). In the expression input variables or coefficients can be referenced using their id, e.g. "X1 + a"
where "X1" is the variableId of a variable defined elsewhere and "a" is the id of a coefficient defined in a coefficientSet. A variableId or
coefficientId should not start with a numerical character and should not contain operators. The following operators can be used in the expression:
+, -, /, *, ^, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh, log, ln, exp, sqrt, abs, pow. "pi" in lowercase letters is recognised
as a standard constant. This means that the user cannot use variables or coefficients with id "pi".

Furthermore it is possible to use "if statements" in the expression. This can e.g. be used to get one output value if X is greater than 3 and get
another output value if X is equal to or less than 3. For instance, in the expression if(X > 3, 10.5, -2) + 5*a (used in the configuration example below), the if statement will be replaced with 10.5 or -2, depending on the value of the variable 'X'. In this case if X is greater than 3, then the if statement
is replaced with 10.5. If X is equal to or less than 3, then the if statement is replaced with -2. The following symbols can be used in an if statement:

> greater than
>= greater than or equal to
< less than
<= less than or equal to

If statements can also be nested, e.g. if(X > 3, if(X > 6, 20, 10.5), -2).

Coefficient set or coefficient set functions

Should contain the coefficients that are used in the free format expression. Define the ids and values of the coefficients here, then refer to the ids
of these coefficients in the expression. Make sure that for all the coefficient ids in the free format expression the values are defined here.

When using coefficient set functions (available since build 30246), the value elements can contain tags between "@" signs (e.g. "@NUMBER@")
that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values. These values can be
different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more information.

Output

1. output: result of the evaluated expression.

Description

Function specified by a custom free format expression and coefficients. Any number of input variables and coefficients can be used in the free
format expression. The expression may contain general mathematical operators. A function parser is used to evaluate the expression. For each
time step in the output time series the expression is evaluated and the result is written to the output time series.

Configuration examples

In the example below 'X1' is a reference to a variable and 'a' and 'b' are references to coefficients.

<variable>
<variableId>X1</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleTest2</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="364"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>Y1</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleTest2</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="364"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="user simple test 2">
<user>
<simple>
<expression>(a + b)*X1 - 3</expression>
<coefficientSet>
<coefficient id="a" value="1.34"/>
<coefficient id="b" value="2.5"/>
</coefficientSet>
<outputVariable>
<variableId>Y1</variableId>
</outputVariable>
</simple>
</user>
</transformation>

The example below uses an if statement. Here 'X' is a reference to a variable and 'a' is a reference to a coefficient.

<variable>
<variableId>X</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithIfElseStatementTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>Y</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithIfElseStatementTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="user simple with if else statement test">
<user>
<simple>
<!-- if X > 3, then the expression part if(X > 3, 10.5, -2) is replaced with 10.5
-->
<!-- if X <= 3, then the expression part if(X > 3, 10.5, -2) is replaced with -2 -->
<expression>if(X > 3, 10.5, -2) + 5*a</expression>
<coefficientSet>
<coefficient id="a" value="1.5"/>
</coefficientSet>
<outputVariable>
<variableId>Y</variableId>
</outputVariable>
</simple>
</user>
</transformation>

The example below uses coefficientSetFunctions (available since build 30246). Here 'X' is a reference to a variable and 'a', 'b' and 'c' are
references to coefficients. Here the coefficients are defined in coefficientSetFunctions, where @coef_a@, @coef_b@ and @coef_c@ refer to
location number attributes that are defined in the locationSets configuration file.

<variable>
<variableId>X</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>locationWithAttributes4</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>Y</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>locationWithAttributes4</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="user simple with coefficient set functions test">
<user>
<simple>
<expression>a*(X^2) + b*X + c</expression>
<coefficientSetFunctions>
<coefficient id="a" value="@coef_a@"/>
<coefficient id="b" value="@coef_b@"/>
<coefficient id="c" value="@coef_c@"/>
</coefficientSetFunctions>
<outputVariable>
<variableId>Y</variableId>
</outputVariable>
</simple>
</user>
</transformation>

DayMonth Sample

Description and usage

Samples a multi-year time series to produce only a single data point per year, using the day and month of T0 to determine the sampling moment.

Example

Input series 1:
multi-year (1975-1977) timeseries with daily time step, values available at 00:00h
Input series 2:
multi-year (1975-1977) timeseries with monthly time step, values available at 1st of the Month, 00:00h
Input series 3:
multi-year (1975-1977) timeseries with daily time step, values available at 12:00h

T0=01-01-2010 00:00h
Output series 1:
non-equidistant timeseries with values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h
Output series 2:
non-equidistant timeseries with values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h
Output series 3:
non-equidistant timeseries with missing values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h

T0=01-01-2010 12:00h
Output series 1:
non-equidistant time series with missing values at 01-01-1975, 01-01-1976, 01-01-1977 all 12:00h
Output series 2:
non-equidistant time series with missing values at 01-01-1975, 01-01-1976, 01-01-1977 all 12:00h
Output series 3:
non-equidistant time series with values at 01-01-1975, 01-01-1976, 01-01-1977 all 12:00h

T0=03-02-2010 00:00h
Output series 1:
non-equidistant time series with values at 03-02-1975, 03-02-1976, 03-02-1977 all 00:00h
Output series 2:
non-equidistant time series with missing values at 03-02-1975, 03-02-1976, 03-02-1977 all 00:00h
Output series 3:
non-equidistant time series with values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h

Input/Output timeseries


Input data are a multi-year time series.
Output is a timeseries having values at the dayMonth corresponding to T0. The output has to be a non-equidistant time series to accommodate shifting sampling times while T0 is moving over time.

The output timeseries will hold missing values if the input timeseries has missing values at the exact same dayMonth 00:00h.

Configuration

This function can be used in the transformation module as well as in the TimeseriesDisplay.

Configuration in the transformation module

inputVariable
required element defining the identifier of the input time series with multi-year data. This ID must reference a valid input time series

outputVariable
required element defining the identifier of the output time series with data sampled by the dayMonth of T0. This ID must reference a valid
non-equidistant output time series

Example

<sample>
<dayMonthSample>
<inputVariable>
<variableId>ne</variableId>
</inputVariable>
<outputVariable>
<variableId>DM_ne</variableId>
</outputVariable>
</dayMonthSample>
</sample>

Configuration as a dropdown statistics box in the timeseriesdisplay

Example

<dayMonthSampleFunction/>

Remark:

The dayMonthSample sample function was produced for use with the PCA and Regression Transformation to conduct multi-year regression
analysis.

PCA and Regression Transformation


Description and usage

Principal components analysis (PCA) is used when a data set contains many time series (dimensions), and the dimensions need to be reduced,
while retaining the most significant relationships between the time series. Reducing the number of dimensions reduces the data set size and
removes unrelated variability.

The PCA regression transformation produces a linear regression equation and a root mean square error (RMSE). The PCA linear regression
equation produces an estimate of a parameter, given some combination of the input time series. The RMSE is a measure of dispersion around
the regression line.

The PCA regression transformation was developed to update basin snow models when they drift away from realistic output. Snow updating uses
historic and current snow water equivalence (SWE). Historic and current data come from monitoring stations within or near the basin, and from
simulations of SWE in the basins. Historic observed data are used for PCA for a basin, and can potentially include time series from many
monitoring stations. Current data are current daily SWE values, and are also either simulated (modelled basin) or observed (from monitoring
stations within or near the basin). PCA finds the strongest underlying relationships between the historic observed station time series, and
produces a linear equation. Current SWE values can then be input into the equation, and a PCA estimate of current basin SWE is produced.

Input/output timeseries

In this function four nonequidistant input time series must be identified:


1. historicalObserved
2. historicalSimulated
3. currentObserved
4. currentSimulated
In the snow updating use of the PCA regression transformation, these time series are subsamples of a daily time series to produce one data point
per month. See the dayMonth sample Wiki entry for more details.

In this function two output time series must be identified:


1. A time series with the PCA-estimated parameter value calculated by the algorithm
2. A time series with the associated RMSE calculated by the algorithm

Each time series is assigned a variable ID which is used in the actual expression.

PCA and regression transformation

a) Handling of time series gaps and irregular lengths


In order to obtain the longest possible common period of record among the input time series, the gap filling behavior has been changed from the default FEWS behavior.

On the left hand side of Table 1, a dataset is shown that consists of one basin and three stations with varying start and end times. Station 3 has a
gap.

In the middle of the table, the resulting time series lengths are highlighted in color for various combinations of station and basin pairings.

On the right hand side of the table (light blue highlighting), the default FEWS pairing behavior is shown.

The BPA FEWS gap handling technique uses all of the available data, resulting in a longer dataset (gray highlighting).

Table 1: A graphical demonstration of the BPA FEWS gap handling technique

b) PCA transformation
i) Data Preprocessing
Before PCA calculations take place, the data set may need to be normalized and standardized (e.g. where two datasets have very different means or standard deviations, or are not normally distributed). However, no one type of preprocessing is appropriate for all time series. Therefore, to automatically assess which preprocessing type produces the 'best' results, the FEWS PCA algorithm performs a variety of preprocessing techniques. The user is presented with the result with the lowest RMSE.

Preprocessing techniques include the following attempts to normalize the dataset:


Square-root
Cube-root
Log10
No pre-processing

Preprocessing can also standardize the dataset by subtracting the time series mean and dividing by the standard deviation.

Therefore there are eight possible preprocessing types for PCA: square-root and standardizing, square-root and not standardizing, cube-root and
standardizing, cube-root and not standardizing...

ii) Derivation of the PCA equation


Details below describe how a linear equation is produced from an eigenvector in FEWS.
Where a PCA equation is constructed from two historical SWE time series, and one historical modeled time series, the eigenvector matrix is:

where:

'm', 'x', and 'y' are eigenvalues from basin 'm', and stations 'x' and 'y' respectively.
'z' is the PCA derived equation
'a' and 'b' are coefficients
'm' is the dimension of the matrix
'c' is a constant offset, derived by determining the mean of the historical SWE time series

c) Linear regression and multiple linear regression:


In addition to PCA analysis, the snow analysis module also attempts to minimize the RMSE by modeling using linear or multiple linear regression.

i) Data Preprocessing
Regression preprocessing and iteration procedures are identical to PCA (square-root, cube-root, log10, or no preprocessing). Data can also be
either normalized or not. Therefore, there are eight regression preprocessing types: square-root and standardizing, square-root and not
standardizing, cube-root and standardizing, cube-root and not standardizing...

ii) Derivation of the regression equation

The user is informed in the FEWS statistics window if regression produces the lowest RMSE, and has been chosen.

Configuration

Config example: PCA_RMSE_SnowAnalysis.xml

<?xml version="1.0" encoding="UTF-8"?>


<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">

<!--declare your input time series-->


<variable>
<variableId>Obs_hist</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSNWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SNWE</parameterId>
<locationId>2A16</locationId>
<locationId>2A18</locationId>
<locationId>2A21</locationId>
<locationId>2A22</locationId>
<locationId>2A23</locationId>
<locationId>2A25</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="hour" startOverrulable="true" start="-240" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Sim_hist</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="hour" startOverrulable="true" start="-240" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Obs_current</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSNWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SNWE</parameterId>
<locationId>2A16</locationId>
<locationId>2A18</locationId>
<locationId>2A21</locationId>
<locationId>2A22</locationId>
<locationId>2A23</locationId>
<locationId>2A25</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Current_sim</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>

<!--declare your output time series-->


<variable>
<variableId>PCA_swe</variableId>
<timeSeriesSet>
<moduleInstanceId>PCA_MCDQ2IL_RMSE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<qualifierId>pca</qualifierId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>add originals</readWriteMode>
<ensembleId>main</ensembleId>
</timeSeriesSet>
</variable>
<variable>
<variableId>PCA_rmse</variableId>
<timeSeriesSet>
<moduleInstanceId>PCA_MCDQ2IL_RMSE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<qualifierId>rmse</qualifierId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>add originals</readWriteMode>
<ensembleId>main</ensembleId>
</timeSeriesSet>
</variable>
<!--perform the PCA calculation-->
<transformation id="PCA">

<regression>
<principalComponentAnalysis>
<historicalObserved>
<variableId>Obs_hist</variableId>
</historicalObserved>
<historicalSimulated>
<variableId>Sim_hist</variableId>
</historicalSimulated>
<currentObserved>
<variableId>Obs_current</variableId>
</currentObserved>
<currentSimulated>
<variableId>Current_sim</variableId>
</currentSimulated>
<enableCombinationAnalysis>true</enableCombinationAnalysis>
<estimatedCurrentSimulated>
<variableId>PCA_swe</variableId>
</estimatedCurrentSimulated>
<errorStatistics>
<variableId>PCA_rmse</variableId>
</errorStatistics>
</principalComponentAnalysis>
</regression>
</transformation>

</transformationModule>

Config example: [Link]


The following sample describes how the PCA snow updating display is configured.
Note the use of the dayMonthSample function (DayMonth Sample).

<dayMonthSampleFunction/>

<statisticalFunctions>
<statisticalFunction function="principalcomponentanalysisrme">
<observedParameterId>SNWE</observedParameterId>
<simulatedParameterId>SWE</simulatedParameterId>
</statisticalFunction>
</statisticalFunctions>]]>

Produces the following display:

Figure 1. The snow updating GUI

Selection Transformations
Selection of lows
Selection of peaks
Selection of independent lows
Selection of independent peaks
Selection of maximum
Selection of minimum

Selection of independent lows


Description

Set of rules to allow selection of lows from an input time series.


This transformation will select only lows that are separated in time by at least the defined gap, so that the selected lows are independent of each other.

Input

Timeseries

Options

Requirements for definitions of low selections using gaps to define independence are:

An attribute "gapLengthInsec" must be defined. The value attribute defines the length of the minimum gap in seconds.

There are two choices for refining the selection:

An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of lows to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of lows to consider after T0.
An attribute "skipJustBeforeT0" indicates how many lows to lows just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many lows to lows just after T0. (optional)

or

An attribute "totalNumber" must be defined. The value attribute defines the maximum number of lows to consider.

Output

Timeseries containing the selection of lows

Configuration example

SelectionIndependentLowsFunctionTest 1.00 [Link]


<selection>
<independentLows>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionIndependentLowsFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<gapLengthInSec>2700</gapLengthInSec>
<totalNumberBeforeT0>3</totalNumberBeforeT0>
<totalNumberAfterT0>4</totalNumberAfterT0>
<skipJustBeforeT0>2</skipJustBeforeT0>
<skipJustAfterT0>2</skipJustAfterT0>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionIndependentLowsFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</independentLows>
</selection>

]]>

In this example:

The time between two local minima (lows) should be at least 2700 seconds or 45 minutes.
Only the last three lows before T0 and the first four lows after T0 are considered.
The first two lows of the last three lows just before T0 are skipped, leaving only the third last one.
Similarly the first two lows just after T0 are skipped, leaving the third and fourth ones.
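
The example above uses the before/after T0 refinement. As a hedged sketch (not taken from a shipped configuration; the values shown are hypothetical), the alternative refinement with the single totalNumber element could look as follows:

<selection>
<independentLows>
<inputVariable>
<!-- input timeSeriesSet as in the example above -->
</inputVariable>
<gapLengthInSec>2700</gapLengthInSec>
<totalNumber>5</totalNumber>
<outputVariable>
<!-- output timeSeriesSet as in the example above -->
</outputVariable>
</independentLows>
</selection>

With this variant only the total number of lows (here at most five) is limited, rather than the numbers before and after T0 separately.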

Selection of independent peaks


Description

Set of rules to allow selection of peaks from an input time series.

This transformation will select only peaks that are separated in time by at least the defined gap, so that the selected peaks are independent of each other.

Input

Timeseries

Options

Requirements for definitions of peak selections using gaps to define independence are:

An attribute "gapLengthInsec" must be defined. The value attribute defines the length of the minimum gap in seconds.

There are two choices for refining the selection:

An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of peaks to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of peaks to consider after T0.
An attribute "skipJustBeforeT0" indicates how many peaks to skip just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many peaks to skip just after T0. (optional)

or

An attribute "totalNumber" must be defined. The value attribute defines the maximum number of peaks to consider.

Output

Timeseries containing the selection of peaks

Configuration example

SelectionIndependentPeaksFunctionTest 1.00 [Link]


<selection>
<independentPeaks>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionIndependentPeaksFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<gapLengthInSec>2700</gapLengthInSec>
<totalNumberBeforeT0>3</totalNumberBeforeT0>
<totalNumberAfterT0>4</totalNumberAfterT0>
<skipJustBeforeT0>2</skipJustBeforeT0>
<skipJustAfterT0>2</skipJustAfterT0>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionIndependentPeaksFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</independentPeaks>
</selection>

]]>

In this example:

The time between two local maxima (peaks) should be at least 2700 seconds or 45 minutes.

Only the last three peaks before T0 and the first four peaks after T0 are considered.
The first two peaks of the last three peaks just before T0 are skipped, leaving only the third last one.
Similarly the first two peaks just after T0 are skipped, leaving the third and fourth ones.

Selection of lows
Description

Set of rules to allow selection of lows from an input time series.

Input

Timeseries

Options

In the configuration of low selections there are two choices for refining the selection:

An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of lows to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of lows to consider after T0.
An attribute "skipJustBeforeT0" indicates how many lows to skip just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many lows to skip just after T0. (optional)

or

An attribute "totalNumber" must be defined. The value attribute defines the maximum number of lows to consider.

Output

Timeseries containing the selection of lows

Configuration example

SelectionLowsFunctionTest 1.00 [Link]


<selection>
<lows>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionLowsFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<totalNumberBeforeT0>3</totalNumberBeforeT0>
<totalNumberAfterT0>4</totalNumberAfterT0>
<skipJustBeforeT0>2</skipJustBeforeT0>
<skipJustAfterT0>2</skipJustAfterT0>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionLowsFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</lows>
</selection>

]]>

In this example:

Only the last three lows before T0 and the first four lows after T0 are considered.
The first two lows of the last three lows just before T0 are skipped, leaving only the third last one.
Similarly the first two lows just after T0 are skipped, leaving the third and fourth ones.

Selection of maximum
Description

Set of rules to allow selection of maximum values from an input time series.

Input

Timeseries

Options

An optional attribute "selectNumberOfHighestMax" may be defined. The value attribute defines the number of highest maximum values
which will be written to the output timeseries.
The periodTransformation may be applied to this transformation (see Configuration example 2 below).

Output

Timeseries containing the selection of maximum values.

Configuration example 1

SelectionMaximumFunctionTest 1.00 [Link]


<selection>
<maximum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionMaximumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</maximum>
</selection>

]]>

In this example:

The highest maximum value of the input series is returned by the output time series.

Configuration example 2

SelectionPeriodMaximumFunctionTest 1.00 [Link]
<periodTransformation>
<period>
<season>
<startMonthDay>--04-01</startMonthDay>
<endMonthDay>--03-31</endMonthDay>
</season>
</period>
<selection>
<maximum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<selectNumberOfHighestMax>3</selectNumberOfHighestMax>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionPeriodMaximumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H.max</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</outputVariable>
</maximum>
</selection>
</periodTransformation>

]]>

In this example:

For each hydrologic year, the three highest maximum values are returned by the output time series.

Selection of minimum
Description

Set of rules to allow selection of minimum values from an input time series.

Input

Timeseries

Options

An optional attribute "selectNumberOfLowestMin" may be defined. The value attribute defines the number of lowest minimum values
which will be written to the output timeseries.
The periodTransformation may be applied to this transformation (see Configuration example 2 below).

Output

Timeseries containing the selection of minimum values.

Configuration example 1

SelectionMinimumFunctionTest 1.00 [Link]
<selection>
<minimum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionMinimumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionMinimumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</minimum>
</selection>

]]>

In this example:

The lowest minimum value is returned by the output time series.

Configuration example 2

SelectionPeriodMinimumFunctionTest 1.00 [Link]
<periodTransformation>
<period>
<season>
<startMonthDay>--04-01</startMonthDay>
<endMonthDay>--03-31</endMonthDay>
</season>
</period>
<selection>
<minimum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<selectNumberOfLowestMin>3</selectNumberOfLowestMin>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionPeriodMinimumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H.max</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</outputVariable>
</minimum>
</selection>
</periodTransformation>

]]>

In this example:

For each hydrologic year, the three lowest minimum values are returned by the output time series.

Selection of peaks
Description

Set of rules to allow selection of peaks from an input time series.

Input

Timeseries

Options

In the configuration of peak selections there are two choices for refining the selection:

An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of peaks to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of peaks to consider after T0.
An attribute "skipJustBeforeT0" indicates how many peaks to skip just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many peaks to skip just after T0. (optional)

or

An attribute "totalNumber" must be defined. The value attribute defines the maximum number of peaks to consider.

Output

Timeseries containing the selection of peaks

Configuration example

SelectionPeaksFunctionTest 1.00 [Link]


<selection>
<peaks>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionPeaksFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<totalNumberBeforeT0>3</totalNumberBeforeT0>
<totalNumberAfterT0>4</totalNumberAfterT0>
<skipJustBeforeT0>2</skipJustBeforeT0>
<skipJustAfterT0>2</skipJustAfterT0>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionPeaksFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</peaks>
</selection>

]]>

In this example:

Only the last three peaks before T0 and the first four peaks after T0 are considered.
The first two peaks of the last three peaks just before T0 are skipped, leaving only the third last one.
Similarly the first two peaks just after T0 are skipped, leaving the third and fourth ones.

21 Secondary Validation
What [Link]

Description Configuration for the Secondary Validation module

schema location [Link]

Entry in ModuleDescriptors
<description>SecondaryValidation</description>
<className>[Link]</className>
]]>

SecondaryValidation (since 2010_01)

The SecondaryValidation module can be used to perform certain checks on time series data and generate log messages when the specified
criteria are met.

Checks for counting reliable, doubtful, unreliable and missing values


FlagsComparisonCheck
SeriesComparisonCheck

Configuration

An XML file for configuring an instance of the SecondaryValidation module called for example CheckImportedData would be the following:

CheckImportedData 1.00 [Link]

CheckImportedData File name for the CheckImportedData configuration.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

A SecondaryValidation configuration file is typically located in the ModuleConfigFiles folder and can be used to configure one or more checks.
The configured checks will be processed one by one in the specified order. The checks can generate log messages, which can trigger actions in
the master controller, such as sending warning e-mails. A special type of check is available for automatically modifying flags to 'doubtful' or
'unreliable' per time step when a condition on multiple time series becomes true.

Checks for counting reliable, doubtful, unreliable and missing values

These checks are intended for generating log events when a specific constraint is violated. The time series configured in these checks will be
processed one by one. If a time series does not pass the check, then the configured log message is logged with the specified event code and
level. The log event code can be used to trigger a certain action in the master controller, e.g. sending warning emails.

The following different types of checks are available:

minNumberOfValuesCheck: Logs a message when there are not enough values within a configured period.
minNonMissingValuesCheck: Logs a message when there are not enough non-missing values within a configured period. A
non-missing value is a value that is reliable, doubtful or unreliable.
minReliableOrDoubtfulValuesCheck: Logs a message when there are not enough values that are reliable or doubtful within a
configured period.
minReliableValuesCheck: Logs a message when there are not enough reliable values within a configured period.

Check for setting flags per time step using an expression

The seriesComparisonCheck check is available for testing constraints between multiple time series or parameters per time step.
This check verifies constraints between multiple time series sets or multiple parameters and automatically modifies the flags per time step when
the required input data is available (reliable or doubtful) and the specified expression fails.

Check for setting flags per time step using other timeseries

The flagsComparisonCheck check is available for comparing and setting flags for multiple time series or parameters per time step.
This check determines for each timestep the most unreliable input flag within the input flags, and if it is more unreliable than the output flag it
updates the output flag.

Variable Definitions

The configuration contains variable definitions for one or more time series that can be used as input for checks. Each variable definition contains a
variableId and a timeSeriesSet. The variableId can be used to reference the time series in a check. Alternatively, depending on which check it is,
either variable definitions or variables can be embedded in the checks.
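
A minimal sketch of such a variable definition (module instance, parameter and location ids are hypothetical), following the structure used in the configuration examples further below:

<variableDefinition>
<variableId>input1</variableId>
<timeSeriesSet>
<moduleInstanceId>ImportObserved</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>

A check can then refer to this time series with <variableId>input1</variableId>.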

Checks for counting reliable, doubtful, unreliable and missing values


Contents of checks for counting reliable, doubtful, unreliable and missing values

The minNumberOfValuesCheck, minNonMissingValuesCheck, minReliableOrDoubtfulValuesCheck and minReliableValuesCheck all consist of


the following elements:

id: Identifier of the check. This is only used in log messages and exception messages.
variable: One or more time series that need to be checked. This can be either an embedded timeSeriesSet or a reference to a
variableDefinition defined at the start of the configuration file. If this contains multiple time series (e.g. for multiple locations), then each
time series is checked individually.

checkRelativePeriod: The check will only consider data in this time period. This time period is relative to the timeZero of the taskrun in
which the module instance runs. The start and end of the period are included. This period overrules any relativeViewPeriods specified in
the timeSeriesSets of the time series.
minNumberOfValues: The minimum required number of values in the time series to pass the check.
logLevel: Log level for the log message that is logged if a time series does not pass the check. Can be DEBUG, INFO, WARN, ERROR
or FATAL. If level is error or fatal, then the module will stop running after logging the first log message.
logEventCode: Event code for the log message that is logged if a time series does not pass the check. This event code has to contain a
dot, e.g. "[Link]", because the log message is only visible to the master controller if the event code contains a dot.
logMessage: Log message that is logged if a time series does not pass the check. It is possible to use the following tags in the
logMessage: %HEADER% and %LOCATION_NAME%. The %HEADER% tag will be replaced with the header of the time series. The
%LOCATION_NAME% tag will be replaced with the name of the location of the time series.

Tag Replacement

%HEADER% The name of the time series.

%LOCATION_NAME% The location name of the time series.

Configuration example for checks on amounts of reliable, doubtful, unreliable and missing values

<secondaryValidation xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<variableDefinition>
<variableId>input1</variableId>
<timeSeriesSet>
<moduleInstanceId>MinReliableValuesCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<!-- any relativeViewPeriod here will always be overruled by checkRelativePeriod in
each check -->
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<variableDefinition>
<variableId>input2</variableId>
<timeSeriesSet>
<moduleInstanceId>MinReliableValuesCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>location2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<!-- any relativeViewPeriod here will always be overruled by checkRelativePeriod in
each check -->
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<minNonMissingValuesCheck id="MinNonMissingValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>18</minNumberOfValues>
<logLevel>INFO</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minNonMissingValuesCheck>

<minNumberOfValuesCheck id="MinNumberOfValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>24</minNumberOfValues>
<logLevel>DEBUG</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minNumberOfValuesCheck>

<minReliableOrDoubtfulValuesCheck id="MinReliableOrDoubtfulValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>12</minNumberOfValues>
<logLevel>WARN</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minReliableOrDoubtfulValuesCheck>

<minReliableValuesCheck id="MinReliableValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>6</minNumberOfValues>
<logLevel>WARN</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minReliableValuesCheck>
</secondaryValidation>
]]>

FlagsComparisonCheck
Contents of check for flagsComparisonCheck

id: identifier of the check.


variableDefinition: embedded variable definition (see above).
inputVariableId: One or more identifiers for variables of which the flags have to be used.
outputVariableId: One or more identifiers for variables for which the flags have to be modified.
logLevel: Log level for the log message that is logged if a time series does not pass the check. Can be DEBUG, INFO, WARN, ERROR
or FATAL. If the level is ERROR or FATAL, the module will stop running after logging the first log message. In practice, FATAL should not be used.
logEventCode: Event code for the log message that is logged if a time series does not pass the check. This event code has to contain a
dot, e.g. "[Link]", because the log message is only visible to the master controller if the event code contains a dot.
logMessage: Log message that is logged if a time series does not pass the check. Some more options are available than in the other
checks:

Tag Replacement

%AMOUNT_CHANGED_FLAGS% The number of flags that has been altered.

%CHECK_ID% The id of the check that caused the flags to be altered.

%EXPRESSION% The expression that caused the flags to be altered.

%HEADER% The header names of the timeseries for which the flags were altered.

%LOCATION_ID% The locationId where the alterations took place.

%LOCATION_NAME% The name of the locations where the alterations took place.

%OUTPUT_FLAG% The flag that has been set.

%PARAMETER_ID% The parameterId where the alterations took place.

%PARAMETER_NAME% The name of the parameter where the alterations took place.

%PERIOD% The period in which flags were changed.

It is not possible to compare two different location sets both containing more than one location id, but the following comparisons can be
configured:

one location with a scalar


all the locations in a location set with a scalar
two different locations
one location with all the locations in a location set
two similar locationSets, containing exactly the same location ids

Rules for updating the flags

For each timestep, the most unreliable flag in the inputVariables is determined, i.e. unreliable > doubtful > reliable.
If the most unreliable flag in the inputVariables is unreliable, and the corresponding flag in the outputVariable is reliable or doubtful, it is made
unreliable as well.
If the most unreliable flag in the inputVariables is doubtful, and the corresponding flag in the outputVariable is reliable, it is made doubtful as well.
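
A small worked example (hypothetical flags at two time steps) may help to illustrate these rules:

time step 1: input flags = reliable, doubtful, unreliable -> most unreliable input flag = unreliable; output flag was doubtful -> set to unreliable
time step 2: input flags = reliable, doubtful -> most unreliable input flag = doubtful; output flag was reliable -> set to doubtful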

Configuration examples for flagsComparisonCheck

A configuration example for the flagsComparisonCheck is given below:

<secondaryValidation xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">

<!-- comparison of variables with similar location sets, different parameters, does
comparison per location -->
<flagsComparisonCheck id="FlagsComparisonCheck_similarLocationSet">
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation12_H_obs_init1</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location12</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation13_H_obs_init2</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location13</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>

</variableDefinition>
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation12_H_obs2_init3</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationId>location12</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation13_H_obs2_init4</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationId>location13</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<variableDefinition>
<variableId>similarLocationSetTest1_H_obs_initSet</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationSetId>locations5</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<variableDefinition>
<variableId>similarLocationSetTest2_H_obs_initSet</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationSetId>locations6</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<inputVariableId>similarLocationSetTest1_H_obs_initSet</inputVariableId>
<inputVariableId>similarLocationSetTest2_H_obs_initSet</inputVariableId>
<outputVariableId>similarLocationSetTest1_H_obs_initSet</outputVariableId>
<outputVariableId>similarLocationSetTest2_H_obs_initSet</outputVariableId>
<logLevel>INFO</logLevel>
<logEventCode>SecondaryValidation.similarLocationSetTest2_H_obs_initSet</logEventCode>
<logMessage>%AMOUNT_CHANGED_FLAGS% flags set to %OUTPUT_FLAG% by [%CHECK_ID%,
%EXPRESSION%], header=%HEADER%, location(s)=%LOCATION_NAME%</logMessage>
</flagsComparisonCheck>

</secondaryValidation>

]]>

SeriesComparisonCheck
Contents of check for seriesComparisonCheck

id: identifier of the check.


variableDefinition: embedded variable definition (see above).
expression: A comparison between one or more variableIds (see examples below).
validatingVariableId: One or more identifiers for variables for which the flags have to be modified.
outputFlag: New flag value for time steps for which there is valid data and the expression fails. Either doubtful or unreliable.
logLevel: Log level for the log message that is logged if a time series does not pass the check. Can be DEBUG, INFO, WARN, ERROR
or FATAL. If the level is ERROR or FATAL, the module will stop running after logging the first log message. In practice, FATAL should not be used.
logEventCode: Event code for the log message that is logged if a time series does not pass the check. This event code has to contain a
dot, e.g. "[Link]", because the log message is only visible to the master controller if the event code contains a dot.
logMessage: Log message that is logged if a time series does not pass the check. Some more options are available than in the other
checks:

Tag Replacement

%AMOUNT_CHANGED_FLAGS% The number of flags that has been altered.

%CHECK_ID% The id of the check that caused the flags to be altered.

%EXPRESSION% The expression that caused the flags to be altered.

%HEADER% The header names of the timeseries for which the flags were altered.

%LOCATION_ID% The locationId where the alterations took place.

%LOCATION_NAME% The name of the locations where the alterations took place.

%OUTPUT_FLAG% The flag that has been set.

%PARAMETER_ID% The parameterId where the alterations took place.

%PARAMETER_NAME% The name of the parameter where the alterations took place.

%PERIOD% The period in which flags were changed.

It is not possible to compare two different location sets both containing more than one location id, but the following comparisons can be
configured:

one location with a scalar


all the locations in a location set with a scalar
two different locations
one location with all the locations in a location set
two similar locationSets, containing exactly the same location ids

Configuration examples for seriesComparisonCheck

The expression is always a comparison. Within the XML the comparison operator is one of (.ne., .eq., .gt., .ge., .lt., .le.). Each variable has to be a
single word without spaces. Mathematical symbols or functions like e, pi or cos cannot be used as variableIds, because they will be interpreted
mathematically. Note that if one of the variables of the expression contains a missing value for a time step, the expression fails and no flags
will be altered for that time step. Manually edited flags are also left untouched.

Some mathematical functions worth mentioning are the following (these must be in lowercase):

Function Description

avg(x1, x2, x3, ...) Average

min(x1, x2, x3, ...) Minimum

max(x1, x2, x3, ...) Maximum

abs( x ) Absolute value

round( x ) Rounded value

floor( x ) Floor

ceil( x ) ceiling

sin, cos, tan Trigonometric

mod( x , y) x % y Modulus

sqrt( x ) SquareRoot

sum( x, y,...) Sum of multiple variables
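
For illustration only, expressions combining these functions could look like the following; the variable names are hypothetical and must match variableIds defined in the check:

<expression>avg(location1, location2, location3) .gt. 10</expression>
<expression>abs(upstream - downstream) .le. max(2, sqrt(upstream))</expression>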

A simple configuration example for the seriesComparisonCheck is given below. It makes the workflow check the values that are reliable or
doubtful, and mark them as unreliable if they are smaller than thirteen:

<secondaryValidation xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<!-- comparison between location variable and scalar, set to unreliable -->
<seriesComparisonCheck id="checkWithScalar">
<variableDefinition>
<variableId>H_obs_location1</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheck</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<expression>H_obs_location1 .ge. 13</expression>
<validatingVariableId>H_obs_location1</validatingVariableId>
<outputFlag>unreliable</outputFlag>
<logLevel>INFO</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>%AMOUNT_CHANGED_FLAGS% flags set to %OUTPUT_FLAG% by [%CHECK_ID%,
%EXPRESSION%].</logMessage>
</seriesComparisonCheck>
</secondaryValidation>
]]>

A more complex sample does a comparison for different parameters in similar location sets. It will mark values that were reliable or doubtful as
unreliable, in this case first for location1 and then for location2, when the difference between them is bigger than three:

<secondaryValidation xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">

<!-- comparison of variables with similar location sets, different parameters, does comparison
per location -->
<seriesComparisonCheck id="similarLocationSetSeriesComparisonCheck">
<!-- referred to by locationset1 and locationset2-->
<variableDefinition>
<variableId>H_obs1_location1</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>

</timeSeriesSet>
</variableDefinition>

<!-- referred to by locationset1 and locationset2-->


<variableDefinition>
<variableId>H_obs1_location2</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset1 and locationset2-->
<variableDefinition>
<variableId>H_obs2_location1</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset1 and locationset2-->
<variableDefinition>
<variableId>H_obs2_location2</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationId>location2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<variableDefinition>
<variableId>locationSet1</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>locationset1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<variableDefinition>
<variableId>locationSet2</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>

<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>locationset2</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>

<expression>abs(locationSet1 - locationSet2) .gt. 3</expression>


<validatingVariableId>locationSet1</validatingVariableId>
<validatingVariableId>locationSet2</validatingVariableId>
<outputFlag>unreliable</outputFlag>
<logLevel>INFO</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>%AMOUNT_CHANGED_FLAGS% flags set to %OUTPUT_FLAG% by %CHECK_ID%.</logMessage>
</seriesComparisonCheck>
</secondaryValidation>
]]>

Sample screenshot

The sample screenshot below demonstrates the use of the seriesComparisonCheck. Here it has been used to set flags to unreliable for
time steps where the water level measurements upstream are below the measurements downstream. The different output flags are
displayed using different colors at the bottom of the screenshot: the flags of the values above the yellow part have been set to
unreliable, whereas the flags of the values above the purple line have remained the same.

22 forecastLengthEstimator
Function: Sets the forecast length

Module Name: ForecastLengthEstimator

Where to Use? In a workflow

Why to Use? To set the length of a forecast based on (external) timeseries

Description: The forecastLengthEstimator is a module that can be used at the start of a workflow to set the length of the operations in
the other modules in that workflow.

Preconditions: the endoverrulable attribute in the relative view period in the time series sets must be set to true in all modules you want to
apply the forecast length to

Outcome(s):

Screendump(s): link to attached screendump(s) for displays only

Remark(s):

Available since: DelftFEWS200803

Contents

Contents
Overview
Configuration
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference

Overview

The forecastLengthEstimator is a module that can be used at the start of a workflow to set the length of the operations in the other modules in that
workflow. As most models cannot handle gaps in the input data, this option can be useful if you want to run a hydrological model only with the
data available and thus avoid e.g. extrapolating the meteorological forecast data.

Configuration

A configuration example of the forecast length estimator is given below:

<forecastLengthEstimator xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<externalForecastTimeSeries>
<moduleInstanceId>ImportCOSMO2</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>COSMO2</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" end="30"/>
<readWriteMode>read only</readWriteMode>
</externalForecastTimeSeries>
<minForecastLength unit="hour" multiplier="3"/>
<maxForecastLength unit="hour" multiplier="30"/>
</forecastLengthEstimator>
]]>

The forecast length is defined by the length of the external forecast time series (in this example ImportCOSMO2). You can define a minimum
and/or maximum forecast length (minForecastLength / maxForecastLength). If the actual forecast length of the external forecast is shorter
than the minimum forecast length, the forecast length is set to this minimum length (in this example 3 hours). If the actual forecast length is longer
than the maximum forecast length, the forecast length is set to this maximum length (in this example 30 hours).

The logging provides information on which forecast length was used in the run; see the example below:

[] INFO - [Link] - [Link]: Workflow 'HBV_FlowForecast_ECMWF'


[] INFO - [Link] - Started Activity ForecastLength_ECMWF
[] INFO - [Link] - [Link]: Established Forecast Length as 234 hours

Note
The endoverrulable attribute in the relative view period in time series sets must be set to true in all subsequent modules in which
you want to use the actual forecast length.
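
As a hedged sketch (module, parameter and location ids are hypothetical), a relative view period in one of these subsequent modules could be configured as follows, so that its end can be overruled by the established forecast length:

<timeSeriesSet>
<moduleInstanceId>HBV_FlowForecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.sim</parameterId>
<locationId>location1</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" end="48" endOverrulable="true"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>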

Sample input and output

Sample input and output

Error and warning messages

Description of errors and warnings that may be generated

Error: Error message

Action: Action to fix

Known issues

Describe all known issues and unexpected behaviour

Related modules and documentation

Links to related parts of the system

Technical reference

Entry in moduleDescriptors: [Link]

Link to schema: [Link]

23 Decision Module
*** !!! This page is under construction / valid from Delft-FEWS version 2011.01 !!! ***

Decision Module

What [Link]

Description Configuration for the Decision module

schema location [Link]

Entry in ModuleDescriptors
<description>DecisionModule</description>
<className>[Link]</className>
]]>

Please note that at the moment the Decision Module is only available in the development build (2011.01).

Contents

Decision Module
Contents
Overview
Some important prerequisites
Barrier states should be defined
Input from present and future model state
Scalable
Two types of criteria
The decision evaluation process
Configuration
Variables definition
variableId
timeSeriesSet
Schema definition
Rules definition

Schema definition
variables
criticalConditions
transitionRules
DecisionTrees definition
Decision definition
Schema definition
evaluationType
stateDefinitionId
inputState
conditionRules
transitionRules
outputState
DecisionEvaluation
evaluationType

Overview

The Decision Module in Delft-FEWS is used to implement decision logic and evaluation for barriers. With this module we can iteratively evaluate
configured decision rules. The configuration file of the Decision Module contains the definition of one or more Decision Trees. These Decision
Trees defined in the Decision Module are associated with a Barrier definition, which is defined in the Barriers configuration file.

Some important prerequisites

Barrier states should be defined

The decision logic (criteria) is linked to the barrier state. Barrier states could be for example "the barrier is open", "the barrier is closed", "the
barrier is halfway closed", "at a stage where some additional criteria need to be evaluated", etc. For each of these states, separate decision logic
may be relevant and should be evaluated.

Input from present and future model state

While evaluating the decision logic, relevant input may consist of information from both the present and future model state.

Some examples to illustrate the above:

When the barrier is open, and when the forecast results show that water levels at location A will exceed 3 meters, the barrier should start
closing when the water level at the barrier passes the 2 meter mark.
When the barrier is closed, the barrier should be opened when the local water level gradient is negative.

Scalable

Furthermore, the decision logic should be "scalable" (i.e. it should be easy to add additional rules). If we continue with the above example, a
decision rule could also be

When the barrier is open, and when the forecast results show that water levels at location A will exceed 3 meters, the barrier should
close. If the river discharge at location C is below 6000 m3/s, the barrier should start closing when the local water level passes the 2
meter mark. If the river discharge at location C exceeds the 6000 m3/s, the barrier should start closing at slack tide. The barrier is only
allowed to close when the shipping through the location B has been blocked in advance.

Two types of criteria

With regard to decision logic, we can differentiate between two different types of criteria in the above example:

1. Criteria which indicate that a barrier state change should occur (for example, a state change from "the barrier is open" to "the barrier is
closed").
For example, the forecast water level at location A exceeds 3 meters.
2. Criteria which indicate when a barrier state change should occur.
For example, the local water level passes the 2 meter mark. (Note that this criterion is conditional on the criterion in example 1.)

As the decision logic takes into account both "future" information (if there is a high water event in our forecast horizon, do ...) and information
not included in the model state (in the above example, both the discharge at location C and the "shipping state" are not included in the state of
the running model), we cannot evaluate the decision logic based on triggers in this model. As such, we want to do this in an external module
(i.e. FEWS in this case).

When evaluating the decision logic, it is relevant to take into account the fact that the model state will change after the barrier state has been
updated (changed). To evaluate the decision logic for subsequent steps in the process, this implies that we will need to re-run the model to take
this state change into account. Also, if there are multiple barriers in our area of interest, this implies that if the state of one of these barriers
changes, we need to update our model simulation before we can assess the decision logic for the other barrier(s).

The decision evaluation process

The entire process can be summarized as follows:

1. Run a baseline simulation (forecast) taking the actual state of the barriers as a starting point.
2. Evaluate the decision logic based on the baseline simulation.
3. If relevant criteria are met, the barrier state changes. Here, we distinguish between criteria which indicate if a state change is required
and criteria which indicate when a state change is required.
4. Run a new simulation taking the barrier state change into account.
5. Evaluate the decision logic of this simulation. (Note that the decision logic will only be evaluated over the period following the latest
barrier state change.)
6. Loop this process starting from step 3 until no relevant criteria are met and therefore no state change is required.

While the decision logic is model independent, the model should be fed with the appropriate timeseries representing the appropriate barrier states
(for example timeseries of crest height, crest level and gate height for various barrier elements).

The configuration files are based around timeseries representing the barrier state, which are used and updated in the evaluation process. Each
value in this timeseries represents a model state. For example, if the value of this series is 0, this indicates the barrier is open. If the value of this
series is 1, this indicates the barrier is closed, etc. This section only describes the configuration of the Decision module file. For an explanation of
Barriers configuration file, go to the Barriers section.

Configuration

Variables definition

All variables within the Decision Module are time series in the form of Time Series Sets. Within the Decision Module various items make use of a
variableId. This variableId is used in the actual section as an alias to that time series.
The identifier assigned to a time series should contain alphanumeric characters only. It should not start with a numerical character.

variableId

required element used to identify the time series in the decisionModule block or in the rules block.

timeSeriesSet

required element used to identify the timeSeriesSet.

Example variable definition
<variable>
<variableId>[Link]</variableId>
<timeSeriesSet>
<moduleInstanceId>RMM_Structures_Forecast_Processing_Extrapolate</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>SVKW</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<relativeViewPeriod unit="day" endoverrulable="true" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>[Link]</variableId>
<timeSeriesSet>
<moduleInstanceId>RMM_DecisionTree</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>SVKW</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<relativeViewPeriod unit="day" endoverrulable="true" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
]]>

Schema definition

Rules definition

The Rules section defines the rules which will be used by the decision entries of each decisionTree.

Schema definition

variables

This section contain a set of variables defined only for the rule definitions. For a description of the variables see the section above.

criticalConditions

The criticalConditions determine if further activity should be realized. The evaluation period for the criticalConditions is determined by the definition
in the decisionModule (evaluation = lastKnownState).

The definition of the criticalConditions reuses the criticalConditionLookUp of the Lookup Table Module.

transitionRules

If certain criticalConditions are met, at a certain moment the state transition should be activated. The moment at which this occurs will be
determined by the transitionRules.

DecisionTrees definition

For each barrier a decisionTree is configured. Each decision element in this decisionTree should be unique (i.e. at each moment in time only one
element can be valid).

Example decisionTree
<barrierId>scheepvaart</barrierId>
<decision id="Stremming (peilsluiting)">
<evaluationType>lastKnownState</evaluationType>
<stateDefinitionId>scheepvaart</stateDefinitionId>
<inputState>
<stateValueId>geen stremming</stateValueId>
</inputState>
<conditionRules>
<allValid>
<anyValid>
<isTrue>[Link].d</isTrue>
<isTrue>[Link].d</isTrue>
</anyValid>
<isFalse>[Link]</isFalse>
</allValid>
</conditionRules>
<transitionRules>
<anyValid>
<isTrue>[Link]</isTrue>
<isTrue>[Link]</isTrue>
</anyValid>
</transitionRules>
<outputState>
<stateValueId>gestremd</stateValueId>
</outputState>
</decision>
<decision id="Stremming (kenteringsluiting)">
...
</decision>
<decision id="Vrijgeven scheepvaart">
...
</decision>

]]>

Decision definition

Schema definition

evaluationType

stateDefinitionId

inputState

conditionRules

transitionRules

outputState

DecisionEvaluation

The last section of this file deals with the evaluation of the decision logic. If we are in the first step of the iterative loop, the last known system
(barrier) state is used as initial value. The section within the 'initialConditionalWorkflow' tag is run, where the timeSeriesSets associated with the
variableId's are used for initial values. From this workflow the (external) model is called (for example, a workflow through which Sobek is run from
the FEWS General Adapter).

Example decisionEvaluation
<initialConditionalWorkflow>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<workflowId>Sobek_DSS_Forecast</workflowId>
</initialConditionalWorkflow>
<conditionalWorkflow>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<stateChanges>
<evaluationType>FirstInTime</evaluationType>
<stateChange>
<decisionTreeId>SVKH</decisionTreeId>
</stateChange>
<stateChange>
<decisionTreeId>SVKW</decisionTreeId>
</stateChange>
<stateChange>
<decisionTreeId>scheepvaart</decisionTreeId>
</stateChange>
</stateChanges>
<workflowId>Sobek_DSS_Forecast</workflowId>
</conditionalWorkflow>

]]>

After running the model for the first time, we need to evaluate the decision logic prior to a (possible) second run. If a state change occurs in the
decisionTree, we need to run the model again taking this state change into account. From the second iteration and onwards, the section within the
'conditionalWorkflow' tag will be run. Note that we need to evaluate three state changes in this case (SVKH = the position of the Hartelkering,
SVKW = the position of the Maeslantkering and scheepvaart = the "shipping state"), each of which has a separate decisionTree definition.

evaluationType

If one state variable changes value, this will change the overall system state, and hence may affect the evaluation of the other state variables.
There are two options which can be defined:

All: all state variable changes will be taken into account in the next iteration.
FirstInTime: if there is a state change in more than one state variable, only the state change which occurs first in time should be taken
into account in the next iteration. After a new iteration with the model, the other state values will be re-evaluated in this case.

Barriers
*** !!! This page is under construction / valid from Delft-FEWS version 2011.01 !!! ***

Please note that at the moment the Decision Module is only available in the development build (2011.01).

Contents

Contents
Overview
Configuration

Overview

The Barriers configuration file is used to define the specifics of one or more barriers used in FEWS.
For the use of the Barriers file in combination with the Decision module, we refer to the Decision module section for further details about the
functionality of this module.

In the Barriers file the characteristics of the barrier are defined. First and foremost these are the barrier states.
For example, the state of a certain barrier, which is linked to a particular variable and timeSeriesSet, can be:

"open" (value 0)
"sedimentstop" (value 1)
"closed" (value 2)
"drain" (value 3)
"[Link]" (value 4)

Furthermore, model-dependent characteristics like crest level, crest height and gate level are linked to these barrier states. As such, once the
barrier state is known (i.e. we have a timeSeriesSet with values of 0, 1, 2, 3 and 4 in the above case), we can apply a mapping from these state
values to the appropriate input required by the running model. Note that a change from one barrier state to the other will not be instantaneous, but
should take some "rate of change" into account. The appropriate "rate of change" information from one state to the other is also included in this
file, by defining the "speed".
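Since the Barriers schema is still being finalised, the sketch below is purely hypothetical and only illustrates the idea: state values are mapped to model-dependent characteristics, and a "speed" defines the rate of change between states. None of the element names shown are taken from the actual schema.

<!-- hypothetical sketch only; element names do not reflect the final Barriers schema -->
<barrier id="Maeslantkering">
<state value="0" name="open"/>
<state value="2" name="closed"/>
<gateLevel stateValue="0">5.0</gateLevel>
<gateLevel stateValue="2">-1.0</gateLevel>
<speed unit="m/min">0.1</speed>
</barrier>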

Configuration

work in progress

24. ImportAmalgamate
What ImportAmalgamate

Required no

Description Amalgamates external historical data

schema location [Link]

Description
Configuration
workflowId
importRunMinimalAge
Example

Description
Workflows may produce inefficient blobs that only span a few time steps. These blobs will be amalgamated. After the amalgamate is finished, the
original import runs, with all their imported external data, are scheduled for deletion. Large grids, external forecasts and samples should be imported in
a separate workflow that is not handled by this module.

Configuration

workflowId

One or more workflow ids that import external historical time series over a short time span (scheduled frequently).

importRunMinimalAge

Import runs younger than the specified age are skipped. After the amalgamate has run it is no longer possible to create an archive with the
exact original data available during the run.

Example

[Link]
<className>[Link]</className>

[Link]
<moduleId>ImportAmalgamate</moduleId>

[Link]
<importAmalgamate xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<workflowId>Import_Data</workflowId>
<workflowId>Procesoverzicht_Update</workflowId>
<importRunMinimalAge unit="hour"/>
</importAmalgamate>

06 Configuring WorkFlows
What A [Link]

Required no

Description Definition of sequence of moduleInstances (or workflows) in logical order

schema location [Link]

Introduction
Workflows are used in DELFT-FEWS to define logical sequences of running forecast modules. The workflow itself simply defines the sequence
with which the configured modules are to be run. There is no inherent information nor constraints within the workflow on the role the module has
in delivering the forecasting requirement.

Workflows may contain calls to configured module instances, but may also contain calls to other workflows. In the workflowDescriptors
configuration described in the Regional Configuration section, the properties of the workflows are defined.

All workflows are defined in the Workflows section of the configuration; when working from a filesystem this is the WorkflowFiles directory. Each
workflow will have the same structure and must adhere to the same XML schema definition. Workflows are identified by their names, which are
registered to the system through the workflowDescriptors configuration in the Regional Configuration section.

Workflows
Workflows defined may either be available from the Workflows table – when the configuration is loaded into the database – or available in the
WorkflowFiles directory when the configuration is available on the file system.

When available on the file system, the name of the XML file for configuring a workflow called for example ImportExternal may be:

ImportExternal 1.00 [Link]

ImportExternal Chosen name for this workflow.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Each processing step in the workflow is referred to as an activity. In defining workflows, several levels of complexity in defining these activities are
available;

Simple activities
Activities with a fallback activity;
Activities to be run as an ensemble.

Each activity can either be a single moduleInstance or another workflow.

Figure 142 Elements of the Workflow configuration.

activity

Root element for the definition of a workflow activity. Multiple entries can be defined.

runIndependent

Boolean flag to indicate if the activity is considered to be independent of other activities in the workflow. If the flag is set to "false" (default) then
the failure of this activity will cause the complete workflow to be considered as having failed. No further activities will be carried out. If the flag is
set to "true", then failure of an activity will not cause the workflow to fail. The next activity in the workflow will also be attempted. An indication is
given to the user in the Forecast Management display if one or more workflow activities have failed.

moduleInstanceId

The ID of the moduleInstance to run as the workflow activity. This module instance must be defined in the moduleInstanceDescriptors (see
Regional Configuration) and a suitable module configuration must be available (see Module configurations).

workflowId

The ID of the workflow to run as the workflow activity. This workflow must be defined in the workflowDescriptors (see Regional Configuration) and
a suitable workflow configuration must be available (see Workflows).

fallbackActivity

A fallback activity may be defined to run if the activity under which it is defined fails. This can be used to run a simple module if the more complex
method used in preference fails, and ensures continuity of the forecasting process. The definition of the fallbackActivity is the same as the
definition of an activity.
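A sketch of an activity with a fallback activity is given below; the module instance ids are hypothetical, and the fallback is defined with the same elements as a normal activity.

<activity>
<runIndependent>false</runIndependent>
<moduleInstanceId>HydrodynamicModel_Forecast</moduleInstanceId>
<fallbackActivity>
<runIndependent>false</runIndependent>
<moduleInstanceId>SimpleRouting_Forecast</moduleInstanceId>
</fallbackActivity>
</activity>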

ensemble

This element is defined to indicate the activity is to run as an ensemble.

ensemble:ensembleId

Id of the ensemble to apply in retrieving data from the database. For all time series sets used as input for the activities running as an ensemble, a
request for time series with this Id defined will be made. Ensemble id's in a sub workflow will override this ensembleId. A sub workflow without an
ensembleId will make use of this ensembleId

ensemble:runInLoop

Boolean flag to indicate if the activity is to be run as many times as there are members in the ensemble, or if it is to be run only once, but will use
all members of the ensemble in that single run. If the value is set to "true", then when running the workflow DELFT-FEWS will first establish how
many members there are in the ensemble, and then run the activity for each member. If the value is set to "false" then the activity will be run only
once. On requesting a time series set within the modules to be run, the database will return all members of that ensemble.

ensembleMemberIndex

Optional field to only run one particular ensemble member. If this field is used only the specified ensemble member will be run.

ensembleMemberIndexRange

Optional field to run a particular range of ensemble members. Processing starts at member start and ends at member end. If end is not specified
the processing will start at start and end at the last member of the ensemble.
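A sketch of an ensemble activity that is run for a limited range of members only is shown below; the ids and the range are illustrative, and the position of the range element inside the ensemble element is an assumption to be checked against the schema.

<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
<ensembleMemberIndexRange start="0" end="9"/>
</ensemble>
</activity>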

Activities in a workflow or nested workflows may be called only once. This is to avoid mistakes through multiple calls to the
same activity in different locations thus creating ambiguous results, or circular references in defining fallback activities.

When running activities as an ensemble that request time series sets from the database that are not a part of that ensemble, the
default ensembleId should be added to the TimeSeriesSets definition. The default ensemble Id is "main".

All time series sets written when running in ensemble mode will have the ensembleId as specified in the workflow ensembleId
element, unless overruled by a local ensembleId defined in the timeSeriesSet on writing.

Examples
The workflow below runs seven moduleInstances. If the first moduleInstance fails in this example all other processing is stopped. If any of the
other activities fail the processing will continue.

<?xml version="1.0" encoding="UTF-8"?>


<workflow version="1.1" xmlns="[Link]" xmlns:xsi="[Link]" xsi:schemaLocation="[Link] E:\schemas\[Link]">
<activity>
<runIndependent>false</runIndependent>
<moduleInstanceId>Astronomical</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>BackupPrecipitation_Forecast</moduleInstanceId>
</activity>

<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>PrecipitationGaugeToGrid_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Spatial_Interpolation_Precipitation_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>MergePrecipitation_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>GridToCatchments_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Singapore_Sobek_Forecast</moduleInstanceId>
</activity>
</workflow>

The example below is more complex and includes several modules that are run in ensemble mode.

<?xml version="1.0" encoding="UTF-8"?>


<workflow xmlns="[Link]" xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<!--Run Rhein Interpolation -->
<activity>
<runIndependent>true</runIndependent>
<workflowId>Rhein_Interpolate</workflowId>
</activity>
<!--Spatial interpolation from grid to HBV-centroids-->
<activity>

<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_SpatialInterpolationCOSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Aggregate forecast data for display -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_AggregateForecast_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Disaggregate timeseries at HBV-centroids -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_DisaggregateSeriesCOSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Merge timeseries from historical run and forecast run -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_Merge_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Aggregate inputs for display -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_AggregateInputs_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Interpolate timeseries from historical run and forecast run -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_Interpolate_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Run HBV-model for forecast period-->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Run ErrorModule for forecast period-->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_AR_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>

<!--Calculate Statistics-->
<activity>
<runIndependent>true</runIndependent>
<workflowId>Statistics_COSMO-LEPS</workflowId>
</activity>
<!--Export forecast data to wavos format -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_ExportForecast_COSMO-LEPS</moduleInstanceId>

</activity>
</workflow>

07 Display Configuration

Introduction
DELFT-FEWS supports several plug-in displays that can optionally be included in the configuration for a particular forecasting system. These
displays implement the DELFT-FEWS display plug-in interface, and while the list included here is a standard feature of DELFT-FEWS, specific
plug-in displays may be included as well. Multiple instances of each plug-in display can be applied, each with a unique name as registered in the
DisplayInstanceDescriptors (see System configuration). Each plug-in display used must be of a supported type as registered in the
DisplayDescriptors (see System configuration). The display may be initiated from the FEWS Explorer by defining a call to the display in the toolbar or
in the tools menu (see configuration of the FEWS Explorer).

Plug-in displays distributed as a standard feature of DELFT-FEWS are:

Grid display
Longitudinal Display
What-If scenario display
Lookup Table display
Correlation display

The main map display and the time series display are not considered optional and therefore form a part of the System configuration.

Contents
01 Grid Display
02 Longitudinal Display
03 What-If Scenario Display
04 Lookup Table Display
05 Correlation Display
06 System Monitor Display
07 Skill Score Display
08 Time Series Modifiers
09 State editor display
10 Interactive forecast display
11 Threshold Display
12 Task Run Dialog Display
13 Manual Forecast Display — Configure the Manual Forecast Display:
14 ChartLayer
15 Schematic Status Display (formerly Scada Display)
16 Modifier display

01 Grid Display

Grid display
The grid display is used in DELFT-FEWS for viewing grid time series. These grid time series can be dynamically animated against a map
background (comparable to the main map display).

The Id of the grid display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the GridDisplay with an Id of FloodMapDisplay is for example:

FloodMapDisplay 1.00 [Link]

FloodMapDisplay File name for the Flood Map display configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 143 Example of a configuration of the Grid Display

Besides plotting a grid in the grid display, it is also possible to plot scalar data (Figure 10) and longitudinal profiles (Figure 9).

Figure 010 Example of a configuration of the scalar data in Grid Display

Figure 009 Example of a configuration of the longprofile in Grid Display

Figure 144 Root elements of the gridDisplay configuration

title

Name of the Grid Display. When opened this will appear in the title bar of the window.

gridPlotGroup

Definition of a group in the grid display. Each group may have its own set of maps and time series to display. Defining groups creates a tree view
in the left of the display (see example above). Multiple instances may exist.

Attributes;

id : Id of the display group- this is used in the tree view.


name : Optional name of the display group- used for reference purposes only

description

Optional description of the display group/grid plot. Used for reference purposes only

highlight

Optional property to highlight the Group name in bold in the selection filter.

gridPlot

Definition of a grid plot within the display group. Each grid plot forms a node in the tree view. When a gridPlot is selected, the appropriate maps
will be displayed and the time series data retrieved from the database.

Attributes;

id : Id of the grid plot- this is used in the tree view.


name : Optional name of the grid plot- used for reference purposes only

timeSeriesSet

Definition of the time series set to be displayed in the selected grid plot. This can refer to one location with valuetype grid or longitudinal profile, or
it can refer to a locationSet of scalars. Contourlines can only be displayed in combination with a regular grid.

classBreaks

Definition of colours to use in displaying the dynamic grid. These are also shown in the legend on the left of the grid display (see example above).

geoMap

Definition of the maps used as a background to the dynamic grid displayed. The layout and zoom extent are also defined in this element.

Definition of class breaks

Figure 145 Elements of the configuration of class breaks

description

Optional description of the configuration. Used for reference purposes only.

missingValueColor

Not implemented yet.

missingValueOpaqueness

Not implemented yet.

unitVisible (available since build 18734)

When this is true the display unit for the class break values will be displayed in the legend. Default is false. The display unit can be configured in
parameter group.

rescaleAroundOrdinalValue (available since build 17892)

Definition of the optional ordinal value that will always keep the same colour when the class break colours are rescaled in the grid display. After
rescaling, the highest lowerValue will be changed to the maximum grid value visible in the current zoom extent and the lowest lowerValue will be
changed to the minimum grid value visible in the current zoom extent. The lowerValues and the colours in between will be rearranged to fit
between the minimum and maximum. Thus the colours for given values change.
If no ordinal value is specified, then the colours are just rearranged. However, if e.g. ordinal value = 0 is specified and 0 values have a white
colour, then the rescaling will take this into account so that 0 values always stay coloured white. This can be used for instance when displaying
temperatures, where red colours are used for positive values and blue colours are used for negative values and zero values are coloured white.
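A sketch of a class breaks definition for such a temperature grid is given below, assuming the element simply carries the ordinal value and appears inside the classBreaks element; the colours and values are illustrative.

<classBreaks>
<rescaleAroundOrdinalValue>0</rescaleAroundOrdinalValue>
<lowerColor>blue</lowerColor>
<upperColor>red</upperColor>
<lowerValue>-20</lowerValue>
<lowerValue>-10</lowerValue>
<lowerValue>0</lowerValue>
<lowerValue>10</lowerValue>
<lowerValue>20</lowerValue>
</classBreaks>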

lowerColor

Colour definition for the colour in the legend associated with the lowest value in the range.

upperColor

Colour definition for the colour in the legend associated with the highest value in the range.

lowerOpaquenessPercentage

Optional definition of the opaqueness of the colour in the legend associated with the lowest value in the range.

upperOpaquenessPercentage

Optional definition of the opaqueness of the colour in the legend associated with the highest value in the range.

lowerSymbolSize

Optional definition of the size of symbols associated with the lowest value in the range.

upperSymbolSize

Optional definition of the size of symbols associated with the highest value in the range.

lowerValue

Definition of the value at which the colour in the grid displayed changes. The legend will be a gradual change in colours from the lowest colour to
the highest colour, with the number of increments determined by the number of lowerValue items entered. Multiple entries may exist.

color

Deprecated, use break.

break

The options described above can be used for definitions of lowerValues that have colors that change gradually between a lowerColor and
upperColor. The break option can be used instead for specifying a discrete lowerValue with an absolute color, symbolSize and
opaquenessPercentage. Multiple entries may exist.
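A sketch of a class breaks definition using discrete break entries is shown below; the values and colours are illustrative and the break attributes (lowerValue, color) are assumptions to be checked against the schema.

<classBreaks>
<break lowerValue="0" color="green"/>
<break lowerValue="2" color="yellow"/>
<break lowerValue="5" color="orange"/>
<break lowerValue="10" color="red"/>
</classBreaks>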

Definition of background maps

The background maps to be displayed are defined in the geoMap element. This is an XML implementation of the OpenMap configuration
described also in Appendix C for configuring the main map display (in time this will also be done using the geoMap element).

Figure 146 Elements of the geoMap configuration
The more advanced options are described below. Rather straightforward options like northArrowVisible are self-explanatory.

description

Optional description of the configuration. Used for reference purposes only.

extents

Root element for the definition of a zoom extent. The extents defined will appear in a drop down list in the grid display.

geoDatum

Coordinate system the extents are defined in. Enumeration of available coordinate systems is available in Appendix B.

defaultExtent

Definition of the default zoom extent.

Attributes;

name : name of the default zoom extent (displayed in the drop-down list)

extraExtent

Definition of the additional zoom extents. Multiple entries may exist.

Attributes;

name : name of the zoom extent (displayed in the drop-down list)

left, right, top, bottom

Coordinates of the zoom extent. Note that in displaying the maps for the extent defined, the map display will be scaled to fit the extent in the
current display window.
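A sketch of an extents definition is given below, assuming geoDatum, defaultExtent and extraExtent are children of the extents element as described above; the names and coordinates are hypothetical.

<extents>
<geoDatum>WGS 1984</geoDatum>
<defaultExtent name="Whole region">
<left>4.0</left>
<right>7.5</right>
<top>53.5</top>
<bottom>50.5</bottom>
</defaultExtent>
<extraExtent name="Lower reach">
<left>4.0</left>
<right>5.0</right>
<top>52.2</top>
<bottom>51.6</bottom>
</extraExtent>
</extents>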

wfsConnection

Notice that you need to specify a mapLayersCacheDir in the [Link], like mapLayersCacheDir=%REGION_HOME%/MapCache

More info on connection to ArcSDE and WFS can be found here.

arcSdeConnection

Notice that you need to specify a mapLayersCacheDir in the [Link], like mapLayersCacheDir=%REGION_HOME%/MapCache

More info on connection to ArcSDE and WFS can be found here.

serverShapeLayer

To make use of a WFS or ArcSDE connection you have to use the option for serverShapeLayer.

openStreetMapLayer

To make use of a server that uses the OpenStreetMap protocol, use the openStreetMapLayer option.

Demo Open Street Map


<openStreetMapLayer>
<url>[Link]</url>
<cacheDir>$REGION_HOME$/OsmTiles</cacheDir>
</openStreetMapLayer>

For testing purposes you can use "[Link]

wmsLayer

To make use of a WMS server you have to use the option for wmsLayer.

url : Base url for the WMS server. This is everything before the text "VERSION=" in the url. Use &amp; to include a &.
layer name : Layer name to display. It is the part after the text "LAYERS=" up to the next & or ; in the url. To find the layer names enter the
url that ends with "request=GetCapabilities" in a browser.

Demo Aerial Photos Netherlands


<wmsLayer>
<url>[Link]</url>
<wmsLayerName>lufo2005-1m</wmsLayerName>
<cacheDir>$REGION_HOME$/wms_areal_cache</cacheDir>
</wmsLayer>

Demo with clouds Europe


<wmsLayer>
<url>[Link]</url>
<wmsLayerName>IR108</wmsLayerName>
<cacheDir>$REGION_HOME$/wms_meteosat_cache</cacheDir>
</wmsLayer>

Demo HIRLAM temperature Europe

<wmsLayer>
<url>[Link]</url>
<wmsLayerName>2011-05-26T[Link]Z/HIRLAM-temp/HIRLAM-temp-2m</wmsLayerName>
<cacheDir>$REGION_HOME$/wms_hirlam_cache</cacheDir>
</wmsLayer>

esriShapeLayer

In this section the location of the background shape file can be defined.

id : Id of the background map


description : optional description of the background map
file: path to the shape file
visible: whether the layer is visible (true or false)
tooltip : information that is displayed when the user is moving the mouse cursor over a shape. To see this information turn on the
'Information' button.
lineColor : color of the line
fillColor : color of the area
opaquenessPercentage: percentage of opaqueness.
lineWidth : width of the line
pointSize or pointIconId: allow size adjustment of points in an ESRI shape layer, or display an icon (as defined in
[Link]) at the points in the ESRI shape layer

An example of the various options, which can be freely mixed, is shown in the picture below.
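A minimal sketch of an esriShapeLayer definition combining a few of these options is given below; the file name, colours and id are hypothetical, and the id is assumed to be an attribute while the remaining options are child elements.

<esriShapeLayer id="rivers">
<file>rivers</file>
<visible>true</visible>
<lineColor>blue</lineColor>
<lineWidth>2</lineWidth>
<opaquenessPercentage>70</opaquenessPercentage>
</esriShapeLayer>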

layer

Definition of a GIS layer to be displayed.

Attributes;

id : required id of the map layer- must be unique for the current geoMap element.
name : optional name of the map layer defined

description

Optional description of the map layer. Used for reference purposes only.

className

Name of the class used in displaying the map layer. A different class is required for different types of GIS data.

NOTE: Defining a class name allows advanced users to add additional display functionality to the OpenMap utility and have this used in map
displays in DELFT-FEWS. See the OpenMap documentation for details on how to add additional display classes.

visible

Boolean flag indicating if layer is visible by default.

properties

Definition of properties associated with the map layer to be displayed. Properties that need to be defined depend on the class used. At least one
property must be defined. This may be a dummy property. Multiple entries may exist.

string

Definition of a string property. An example is the definition of the geoDatum for displaying shape files using the geoDatumDisplay class.

key

Key to identify the property

Value

Value of the property defined.

Note: when displaying a shape file layer that does not use WGS 1984 as the coordinate system, a property must be defined that defines the geo
datum. To do this set the key value as "geoDatum" and define the coordinate system using the enumeration in Appendix B.
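A minimal sketch of a layer definition for a shape file that does not use WGS 1984 is shown below; the class name is a placeholder, the geoDatum value is an example that must come from the enumeration in Appendix B, and the key/value attribute notation of the string property is an assumption.

<layer id="catchments">
<className>some.package.ShapeLayerClass</className>
<visible>true</visible>
<properties>
<string key="geoDatum" value="Rijks Driehoekstelsel"/>
</properties>
</layer>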

Configuration (Example)
The following example shows how to configure a Meteosat image as grayScaleImage in the Grid display.

Extract of [Link]
<gridPlot id="MeteoSat">
<timeSeriesSet>
<moduleInstanceId>ImportMeteosat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>image</parameterId>
<locationId>meteosat</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-12" end="36"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<classBreaks>
<lowerColor>black</lowerColor>
<upperColor>white</upperColor>
<lowerValue>0</lowerValue>
<lowerValue>8</lowerValue>
<lowerValue>16</lowerValue>
<lowerValue>24</lowerValue>
<lowerValue>32</lowerValue>
<lowerValue>40</lowerValue>
<lowerValue>48</lowerValue>
<lowerValue>56</lowerValue>
<lowerValue>64</lowerValue>
<lowerValue>72</lowerValue>
<lowerValue>80</lowerValue>
<lowerValue>88</lowerValue>
<lowerValue>96</lowerValue>
<lowerValue>104</lowerValue>
<lowerValue>112</lowerValue>
<lowerValue>120</lowerValue>
<lowerValue>128</lowerValue>
<lowerValue>136</lowerValue>
<lowerValue>144</lowerValue>
<lowerValue>152</lowerValue>
<lowerValue>160</lowerValue>
<lowerValue>168</lowerValue>
<lowerValue>176</lowerValue>
<lowerValue>184</lowerValue>
<lowerValue>192</lowerValue>
<lowerValue>200</lowerValue>
<lowerValue>208</lowerValue>
<lowerValue>216</lowerValue>
<lowerValue>224</lowerValue>
<lowerValue>232</lowerValue>
<lowerColor>orange</lowerColor>
<upperColor>red</upperColor>
<lowerValue>240</lowerValue>
<lowerValue>248</lowerValue>
<lowerValue>255</lowerValue>
</classBreaks>
</gridPlot>


Coupling ArcSDE and WFS

Introduction

The ArcSDE connection in FEWS relies on the open source GeoTools 2.5.5 library to connect to an ESRI ArcSDE or WFS server.

[Link]

[Link]

There is an open source GIS system, named uDig ([Link]), that is based on GeoTools. With this application GeoTools can be
used outside FEWS and the ArcSDE connection can be tested. uDig provides a wizard to set up the connection.

FEWS is not using GeoTools for drawing maps for the simple reason that GeoTools didn't exist at the time FEWS development started. It is not
possible to have GeoTools map layers mixed with FEWS map layers because they use different projection techniques.

FEWS uses its own implementations to render shapes and grids to achieve the performance of the maps and grids we see today. FEWS also
provides a shape highlighting feature that is not easy to migrate to GeoTools.

The implementation of the ArcSDE connection in GeoTools 2.5.5 is too slow to use without caching and background loading. This caching is not
provided by GeoTools 2.5.5. A year later GeoTools 2.7.0 (September 2010) was released, which mentions strongly improved ArcSDE performance
and added support for caching.

FEWS Shape Tile Cache

A (remote) layer is virtually divided into a maximum of 64 tiles, with a minimum of 200 shapes on average per tile. These tiles are downloaded on
demand as soon as the user pans/zooms to the area of interest. To reduce the load on the remote server and network, and to improve
performance, the downloaded tiles are cached on a file system (directory). When a repaint is requested for a tile and the tile is available in the
cache, the tile from the cache will be used to perform the repaint.

Updating cached tiles

All updates done to a remote layer are visible to the FEWS users the next day. GMT 0:00 is seen as the start of a new day. All tiles not
downloaded today are invalidated. When a repaint of an outdated tile is requested, this tile is re-downloaded in the background. While a tile is
being downloaded, the text CACHING appears in the lower left of the map and the outdated tile is still displayed.

Sharing caches

To reduce the load on the ArcSDE server even more, multiple FEWS OC/SA systems can share the same tile cache. When multiple users are
interested in the same area of a remote layer, the area is downloaded from the server once and not for every user separately.
When a tile is repainted and was downloaded earlier today, the tile will not be re-downloaded but the tile from the cache will be used.

Cache Shape files on a file system instead of ArcSDE/WFS remote layers.

A tile cache can also be used when a layer is connected to a shape file located on a (network) file system. A shape tile cache for a shape file can
strongly reduce the memory usage of FEWS. In the FEWS about box the amount of memory used by the shape layers is displayed. When the
shape memory usage is more than 50MB it can be worthwhile to set up a map cache. Tiles are removed from memory when they are no longer
visible. Both the shape file and the tile cache itself can be located on a network drive and shared by multiple users at the same time. Using a tile
cache can also reduce the network load when a shape file is located on a network drive. A tile is not recreated as long as the timestamp of a shape
file is not modified. It is important that detailed layers become visible at an appropriate zoom level so that not too many tiles are visible at the same
time.

Configuration

In a map in the explorer or spatial display configuration it is possible to define one or more connections.

<server>iris2</server>
<port>5154</port>
<database>irist</database>
<user>me</user>
<password>123</password>


<url>[Link]


A layer can then be added to the map this way:

<connectionId>sde</connectionId>
<layerId>INTWIS2.ODS_GEO_KADASTRALE_PERCELEN</layerId>
<visible>true</visible>
<lineColor>black</lineColor>
<fillColor>white</fillColor>


To see which layer ids are available you can download the earlier mentioned open source GIS uDig [Link]

A global property "mapLayersCacheDir" is required to set up the map cache.

mapLayersCacheDir=%REGION_HOME%/mapLayersCacheDir

The cache may be shared by multiple users simultaneously and may be located on a network drive.

02 Longitudinal Display
Longitudinal display

Since release 2007.02, the Time Series/Data display is able to handle longitudinal profiles in both graphs and tables. It has therefore
overtaken the functionality of the Longitudinal Profile Display discussed below.

The longitudinal display is used in DELFT-FEWS for viewing longitudinal (vector) time series. These time series can be dynamically animated.

The Id of the longitudinal display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the LongitudinalDisplay with an Id of e.g. LongitudinalDisplay is for example:

LongitudinalDisplay 1.00 [Link]

LongitudinalDisplay File name for the LongitudinalDisplay configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 147 Example of a configuration of the Longitudinal Display

Figure 148 Root elements of the LongitudinalDisplay configuration

displayGroup

Root element for each displayGroup. A display group forms one of the main nodes in the tree view and may contain multiple displays. Multiple
display groups may be defined.

Parameter;

name : name of the display group (used in the tree view)

description

Optional description of the configuration. Used for reference purposes only.

display

Root element for the configuration of a display within a group. Multiple displays may be defined in each group.

Attributes;

name : name of the display (used in the tree view)


showMinSeries : Boolean option to indicate if minimum of time series (in current relative view period) is to be plotted in display for
reference.
showMaxSeries : Boolean option to indicate if maximum of time series (in current relative view period) is to be plotted in display for
reference.

Figure 149 Elements of the display element of the LongitudinalProfile configuration

description

Optional description of the display. Used for reference purposes only.

timeSeriesSet

Time series set to be displayed. This should be a longitudinal time series. The location Id of the time series set must refer to a Branch definition to
allow an x-axis to be defined (see Regional configuration).

xaxis

Unit to display the xaxis in. Enumeration of "m" and "km".

thresholds

Thresholds may be plotted in the display for identified branch points. If this item is included, then a list of thresholds to be displayed can be
configured.

threshold

Identifier for the threshold to be plotted. Attribute is the threshold id, which is a reference to a thresholdValueSet defined in the ThresholdValueSets
(see Regional configuration).

branchLabel

Label in the branch (See branch definition in Region configuration) where the thresholds are to be plotted.
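A sketch of a display group with one longitudinal display is shown below; the ids and values are hypothetical, the time series set follows the structure of the grid display example above, and the nesting of the thresholds block is an assumption.

<displayGroup name="River profiles">
<display name="Water level profile" showMinSeries="true" showMaxSeries="true">
<timeSeriesSet>
<moduleInstanceId>HydrodynamicModel_Forecast</moduleInstanceId>
<valueType>longitudinal profile</valueType>
<parameterId>H.simulated</parameterId>
<locationId>MainRiverBranch</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-24" end="48"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<xaxis>km</xaxis>
<thresholds>
<threshold id="floodWarning">
<branchLabel>GaugeA</branchLabel>
</threshold>
</thresholds>
</display>
</displayGroup>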

03 What-If Scenario Display


What-If scenario display
What-If scenarios can be applied in DELFT-FEWS to explore the influence of uncertainties in input data, module structure and module
parameters. When running a forecast, a what-if scenario may be selected from a list of available scenarios. A display plug-in is available to define
what-if scenarios.

The configuration of the display defines only the time series that what-if scenarios may be applied to. The layout of the display cannot be configured.

The Id of the what-if scenario display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file
for configuring the display with an Id of e.g. WhatIFScenarioFilters is for example:

WhatIFScenarioFilters 1.00 [Link]

WhatIFScenarioFilters File name for the WhatIFScenarioFilters configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 150 Elements of the What-if scenario display configuration

description

Optional description of the configuration. Used for reference purposes only.

variableSets

Definition of variables (time series) that what-if scenarios may be applied to.

Attributes;

variableId : ID of the variable (group). Appears in the list of variables to be selected when defining a what-if scenario in the display.
variableType : Optional type definition of the variable (defaults to "any")
convertDatum : Optional Boolean flag to indicate if datum is to be converted for the what-if scenario defined. This may be required when
defining a typical profile what-if scenario.

configFiles

Template for defining a what-if scenario applied to module parameters and module datasets. These templates are used when creating the
what-if scenario. Should be defined using optional/required elements if what-if scenarios for module parameters and module datasets are
to be supported.

Figure 151 Elements of the variable element definition in the What-if display configuration.

timeSerieSet

Time series set the what-if scenario is to be defined for. The relative view period over which the what-if scenario applies changes to the data is
defined in the TimeSeriesSet.

04 Lookup Table Display


Lookup table display
The lookup table display is a display plug-in that extends the What-if scenario functionality. The display allows the user to interactively explore the
impact of uncertainties in input data on the results of forecasts based on a lookup table. Interactive usage means the runs of the lookup table
module will be on the local machine (i.e. the Operator Client in the live system).

For each lookup table to be used interactively, the input series for which scenarios may be interactively applied must be configured. The workflow to
run must also be configured. Note that the report generated by the lookup table module as a result of the run is also viewed locally. As a
consequence the workflow run is normally a copy of the normal lookup table run, with the exception that the report module is configured to send
the reports to a local directory rather than to the web server (via the Report_Export module).

The Id of the lookup table display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. LookupDisplay is for example:

LookupDisplay 1.00 [Link]

LookupDisplay File name for the LookupDisplay configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 152 Elements of the Lookup Table display configuration

general

Root element for the general settings section

reportDirectory

Root directory on the local OC to which reports generated by the lookup table display are exported and from which they are viewed.

externalBrowser

Address of the external browser used to display reports generated by the interactive lookup table display.

lookupTableDisplayDescriptor

Root element for the description of a lookup table. Each entry defined will be available in a drop down list in the lookup table display, allowing
lookup tables to be run individually. Multiple entries may exist.

Attributes;
descriptorId : Id of the lookup table to be run from the lookup table display. This Id is used to populate the drop down list to select a lookup table
to run from.

description

Optional description of the configuration. Used for reference purposes only.

workflowDescriptorId

Id of the workflow to run to explore the impacts of uncertainties in the input variables. This workflow must be defined in the workflows
configuration (see Regional Configuration) and an appropriately configured workflow must be available (see Workflows).

prefixWhatIfScenarioDescriptorId

Optional prefix for the What-If scenario defined. Item is redundant and need not be defined.

inputVariable

Definition of a time series set to be used as an input variable. This variable may be amended interactively by the user.

Attributes;

variableId : ID of the variable (group). Appears in list of variables to be selected in the lookup table display.
variableType : Optional type definition of variable (defaults to "any")
convertDatum : Optional Boolean flag to indicate if datum is to be converted for the what-if scenario defined.

outputVariable

Definition of the output variable for the lookup table display. This variable is currently not used, but is already available for a possible future
extension of the lookup table display.

timeSeriesSet

Definition of the time series set to be used for the input/output variable.

subDir

Optional definition of the sub directory the report produced by the lookup table is read from (may also be defined in the reportFileName item).

reportFileName

HTML file name of the report to be displayed as a result of the run made by the lookup table display. This file name (and sub directory) must be the
same as the report created in the appropriate module in the workflow run. If this is not the case no report will be displayed.
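A condensed sketch of a lookup table display configuration is given below; the ids, paths and file names are hypothetical and the time series set of the input variable has been left out.

<general>
<reportDirectory>$REGION_HOME$/LookupReports</reportDirectory>
<externalBrowser>firefox</externalBrowser>
</general>
<lookupTableDisplayDescriptor descriptorId="DownstreamLevelLookup">
<description>Interactive lookup for the downstream gauge</description>
<workflowDescriptorId>LookupTable_Interactive</workflowDescriptorId>
<inputVariable variableId="UpstreamLevel">
<!-- timeSeriesSet of the input series the user may amend goes here -->
</inputVariable>
<reportFileName>lookup_report.html</reportFileName>
</lookupTableDisplayDescriptor>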

05 Correlation Display
Correlation display
The correlation display is a display plug-in that extends the correlation module functionality. The display allows the user to interactively establish
correlations between upstream and downstream locations and derive a forecast based on these correlations.

For each correlation display to be used interactively, the configuration includes only the definition of the correlationEventSets to be used. The
layout of the display cannot be configured.

The Id of the correlation display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. Correlationdisplay is for example:

Correlationdisplay 1.00 [Link]

Correlationdisplay File name for the Correlationdisplay configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 153 Root elements of the correlation display configuration.

inputTimeSerieInfo

TimeSeriesSet defined for the input data. This time series set is used when applying the correlation established to a complete hydrograph.

eventSetsDescriptorId

Id of the correlationEventSets to be used in the display. The event set must be defined in the CorrelationEventSetsDescriptors configuration (See
regional Configuration).

travelTimesDescriptorId

Id of the travelTimesDescriptor to be used in the display. The travel times set must be defined in the TravelTimesDescriptors configuration (See
regional Configuration).

outputTimeSerieInfo

TimeSeriesSet defined for the output data. This time series set is used only for the temporary time series shown when applying the correlation
established to a complete hydrograph. This time series is not saved in the database.

correlationDisplayOptions

Root element of options element for setting line colours in the scatter plot.

Figure 154 Elements of the scatterplot item in the CorrelationDisplay configuration

scatterplotOptions

Options for setting the properties of the scatter plot. The lineStyle of the scatter plot is "none" by definition (need not be defined).

equationOptions

Options for setting the properties of the regression line determined with the equation established.

preferredColor

Preferred colour for plotting scatter plot / regression line. For enumeration see timeSeriesDisplay Configuration in System Configuration.

markerStyle

Marker style for scatter plot / regression line. For enumeration see timeSeriesDisplay Configuration in System Configuration.

markerSize

Marker size for scatter plot / regression line in points.
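A sketch of the display options is shown below; the colour and marker values must come from the enumerations in the timeSeriesDisplay configuration, and those used here are illustrative.

<correlationDisplayOptions>
<scatterplotOptions>
<preferredColor>blue</preferredColor>
<markerStyle>circle</markerStyle>
<markerSize>4</markerSize>
</scatterplotOptions>
<equationOptions>
<preferredColor>red</preferredColor>
<markerSize>2</markerSize>
</equationOptions>
</correlationDisplayOptions>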

06 System Monitor Display


What [Link]

Required no

Description Definitions of optional elements of the system monitor

schema location [Link]

Entry in DisplayDescriptors <displayDescriptor id="SystemMonitorDisplay">
<description>SystemMonitorDisplay</description>
<className>[Link]</className>
</displayDescriptor>

Description
Configuration
importStatus
description
tabName
tabMnemonic
defaultTimeThreshold
extraTimeThreshold
bulletinBoard

Description

Configuration file for the optional elements of the System Monitor display. These are:

The Import Status Tab


The bulletin board

Configuration

importStatus

The Import Status Tab shows the last time a data type has been imported and can be colour coded based on the amount of time since the last
import. An example file is attached.

description

Optional description

tabName

Required element that defines the name of the tab in the user interface.

tabMnemonic

Optional Mnemomic for the tab.

defaultTimeThreshold

Default color coding for all datafeeds. The next element (extraTimeThreshold) can override these settings per datafeed.

Each timeThreshold element (see figure above) indicates a minimum age needed to switch to a certain colour.

extraTimeThreshold

This element is similar to the defaultTimeThreshold element. However, in this case the colours are defined separately for each datafeedId.

The datafeedId is defined in the import module. If no datafeedId is configured in the Import module the directory from which the
files have been imported is used.
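A rough sketch of an importStatus definition is given below; the attributes of the timeThreshold elements and the way a datafeed is referenced are assumptions and should be checked against the schema.

<importStatus>
<tabName>Import Status</tabName>
<tabMnemonic>I</tabMnemonic>
<defaultTimeThreshold>
<timeThreshold unit="hour" multiplier="6" color="yellow"/>
<timeThreshold unit="hour" multiplier="24" color="red"/>
</defaultTimeThreshold>
<extraTimeThreshold datafeedId="RadarImport">
<timeThreshold unit="minute" multiplier="30" color="yellow"/>
</extraTimeThreshold>
</importStatus>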

bulletinBoard

The bulletinBoard tab allows users to manually add log messages to the system. In order to use this the following should be added to the
configuration file:

<bulletinBoard>
<tabName>Bulletin Board</tabName>
<tabMnemonic>B</tabMnemonic>
</bulletinBoard>

The tabname and Mnemonic can be configured. A complete example is attached.

07 Skill Score Display


Skill Scores Display
Background
Setting Criteria for Analyses
Matching Events
Forecast Available for Events
Thresholds List
Archiving Events

Skill Scores Display


This display provides an overview of all threshold crossings in the observed and forecast time series. By matching observed and forecast
threshold crossing events, various skill scores can be computed.

ThresholdSkillScoreDisplay 1.00 [Link]

ThresholdSkillScoreDisplay File name for the skill score display configuration

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Further information on the use of the skill score display can be found in 11 Skill Scores Display.

Work in progress - Alex.

Background

The first step is to set up a contingency table. Given criteria on time intervals to consider when matching threshold crossing events, the values of
a, b, c and d in the table below can be filled in (where e.g. a is the number of matched observed and forecast threshold crossing events).
Once the contingency table has been filled, different skill scores can be established:

False Alarm Rate

Probability of Detection

Critical Success Index

Critical Reliability (checks if a forecast is available at least some time (but not too late) before the observed threshold crossing)
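With a the matched events (hits), b the events forecast but not observed, c the events observed but not forecast and d the correct negatives, the first three scores are commonly computed as:

\mathrm{POD} = \frac{a}{a+c}, \qquad \mathrm{FAR} = \frac{b}{a+b}, \qquad \mathrm{CSI} = \frac{a}{a+b+c}

(FAR is given here in its false alarm ratio form; see 11 Skill Scores Display for the exact definitions used.)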

The First Forecast of Threshold (FFOT) is determined as the average time between the T0 of the forecast run in which a threshold crossing
was detected and the time of the observed threshold crossing (i.e. the average lead time of the category a threshold crossing events in the
contingency table).

The Bias of paired threshold crossings is the average time between paired observed and forecast events.

Setting Criteria for Analyses

To specify what matching threshold crossings mean, a number of criteria can be used. When clicking the [Change Criteria] button the following
display appears.

The criteria in the display have the following meaning.

Min/Max T0 difference . These criteria define in what time interval the T0 of a forecast that has predicted a threshold crossing should fall
in order for this threshold crossing event to be included in the analysis.

Max dispatch time difference . This criterion is used to compute the Critical Reliability (CR) which determines whether a forecast was
available at the time of an observed crossing irrespective of its quality. By setting this criterion the maximum time difference can be
defined between the dispatch time of a forecast and the time of an observed crossing.

Max time threshold being too early/late . These criteria define what the maximum difference between a forecasted threshold crossing and
an observed threshold crossing is allowed to be in order to consider them as matching.

Matching Events

In the Matching Events tab different background colours are used to indicate in which sector of the Contingency Table events fall.

green: observed and forecast threshold crossing event match (sector a of contingency table)
yellow: a threshold crossing has been observed but not forecasted (sector c)
orange: a threshold crossing has been forecasted but not observed (sector b)

In general, skill scores are determined for all forecast locations that are at a gauging station and have level thresholds. Typically these are
determined separately, but the structure of the display allows skill scores to be established for different models. This way the skill scores of the
different models can be compared.

The performance indicators that are computed on the basis of the selection in this tab are:

probability of detection
false alarm rate
critical success factor
first forecast of threshold
bias of paired thresholds

The filter boxes at the left hand side of the skill scores display enable the selection of certain locations or certain threshold levels to be shown
alone. By activating the check box below the criteria, it is possible to display up-crossings of thresholds only.

Forecast Available for Events

In the Forecast Available for Events tab different background colours are used to indicate in which sector of the Contingency Table events fall.

green: a forecast is available for an observed threshold crossing event (sector e of contingency table)
yellow: no forecast available for an observed threshold crossing event (sector f)

The performance indicator that is computed on the basis of the selection in this tab is critical reliability .

Thresholds List

The Threshold List provides an overview of all threshold crossing events available for the selected locations. In the analyses subsets can be
made by setting the criteria discussed in the earlier part of this section.

Archiving Events

As the threshold crossing events are stored in the operational database for only some days (according to the length of the Rolling Barrel), it is
possible to manually export the list of threshold crossing events by pressing the [Export] button in the skill scores display. This list can later be
imported in a Stand Alone system by pressing the [Import] button. In this way, longer records can be established for analysis in an off-line
mode.

Instead of exporting the whole list of threshold crossing events, as described above, it is also possible to select some of them and export only the
selected ones by using the [Save] button.

All threshold crossing events can also be archived automatically by the system. See also Archive Display.

08 Time Series Modifiers

Time Series Modifiers


This display is used to apply changes to time series within the FEWS database. The definition of a modification to a time series is that a change is
made to that time series without actually editing the values themselves. The changes are established as a "description of the change". This could
be for example the increase of an observed level by 2 cm over a given time period. It could also be applied as an edited time series to replace the
original time series. On retrieval of a time series from the database, the data access layer checks if there are any modifiers applicable to the data
being extracted. If relevant modifiers are found, then these are applied before the data is passed to the module that has requested the data. The
module itself is not "aware" that the data has been amended by a modifier. The diagram below illustrates this process.

Modifiers can be entered and managed through the Time Series Modifier Display.

Time Series Modifier Display

Modifiers are defined through the modifier display, which can be opened either through the FEWS explorer menu, or through the toolbar on the
main window (when configured).

The modifier display contains five sections;


• A tree view of the pre-configured display groups containing links to time series to which modifiers can be applied.
• A panel allowing the user to select the specific location to which a modifier is to be applied.
• A panel allowing the user to select the specific parameter to which a modifier is to be applied.
• A main window showing time series contained in the display group selected.
• An information panel allowing new modifiers to be defined, and existing modifiers to be managed.

Selecting a time series to which a modifier is to be applied

To select the time series to which the modifier is to be applied, identify and select the display group in which the time series is contained in the
tree view on the left.

Once a display has been selected, the locations panel will show all locations available in the display group selected. Select the desired location.

Once a location has been selected, the parameters panel will show all parameters available at the location selected. Select the desired
parameter.

Following selection of the parameters the selection will be visible in the Create New tab of the Modifier information panel.

Creating a new modifier

To create a new modifier, select the time series (location & parameter) to which the modifier is to be defined.

Open the Create New tab in the Modifier information panel.

Select the name of the modifier. On selection the name defaults to a combination of the location Id and the parameter Id.

Enter an appropriate description for the modifier. This is optional and can be displayed later when managing existing modifiers.

Define the start time and the end time of the data to change. By default the times defined are equal to the time series displayed in the main
window.

Define the start time and the end time of the validity period of the modifier. By default the times defined are equal to the time series displayed in
the main window.

Select the modifier operation and enter the value with which the data is to be changed. A value does not need to be added if the option to Edit the
data or to replace it with a missing value is selected. If the option to Edit the data has been selected, use the interactive edit facility to define the
changes.

Select Create on completion of the modifier.

Modifier Operations

The types of modifier that are available are:


• Add - allows a value to be added to the original data for a defined time span.
• Subtract- allows a value to be subtracted from the original data for a defined time span
• Multiply - allows a value to be given with which original data is multiplied for a defined time span.
• Divide - allows a value to be given with which original data is divided for a defined time span.
• Replace - allows a value to be given that replaces the original data for a defined time span.
• Missing - allows the original values to be replaced with missing data for a defined time span.
• Time Shift - allows a time increment to be defined with which the original data is to be shifted (not yet available).
• Edit - allows a time series of values to be defined either as a table of values or through interactive editing on the graph, which replace the time
series for a defined time span.

Modifier change start and end time

A modifier is applied to the original data for a defined time span. This is defined as a start time and an end time. These times relate to the data in
the time series, and not to the time on the computer.

The start and end time can be changed by entering the date and time values in the appropriate input boxes. These can also be changed by
clicking on the display. Through the right mouse button a menu is available that allows selection of either the start time or the end time.

Start and end times selected are displayed as vertical lines in the display.

Modifier enable start and end time

By default the modifier is valid for the same period as the start and end time of the modifier. However, a modifier can be set to be valid during a
different period than the data to which it applies. A modifier will only be applied to the data if the Forecast T0 of the workflow falls within the valid
period.
period.

The period of validity can be entered as a start and end time in the appropriate text fields.

Editing data interactively

The time series modifier display allows the data to be edited interactively. Once the Edit operation has been selected, data can be manually
edited in two ways.

To enter data by clicking on graph, choose the Enable chart editing option. Use the mouse to draw the changed data on the display.

To enter data in a table, select the Enable table editing option. Enter the changes through the table. The column that contains the
location & parameter selected is shown with the values against a white background. All other columns remain with a grey background.

Once editing the data is complete, select the Create button to add the new modifier.

Managing modifiers

Through the Modifiers tab the modifiers defined can be managed. Following definition, the modifiers will have a temporary status. In the Modifiers
tab these are displayed with a blue square. Temporary modifiers are not available for use until these have been confirmed using the Apply button.

Modifiers that are available for use are preceded with a green square.

The remaining columns of this tab show the relevant details of the modifier. Each modifier can be enabled or disabled using the Active option.

When hovering over the information icon a pop-up is displayed with the description entered previously. If no description has been entered this
icon is greyed.

A modifier can be deleted through the red cross in the last column.

Note that if there are modifiers that have only a temporary status, the name of the tab will be marked in blue. Once these have all been Applied
the colour of the text will revert to black.

Uploading modifiers

Modifiers defined on the local client will only be available on that client until these have been uploaded to the central database. Through the
Upload Modifiers tab the modifiers which are considered to be used in forecast and historical runs on the central server system can be selected
and uploaded.

Once modifiers have been saved locally using the Apply button, these are available for uploading. The tab displays only those modifiers that have not
yet been uploaded. The modifiers to be uploaded can be selected using the Upload option; selecting Upload sends these to the central
database for use throughout the system. Following upload the modifiers will be removed from the list displayed in this tab.

Constraints

Multiple modifiers can be applied to the same location & parameter. However, these will not be applied cumulatively if there is an overlap. In case
of an overlap the last modifier defined that is active will be applied.

Once a modifier has been defined it cannot be changed. At any time the modifier can be enabled and disabled, or deleted.

Configuration of the Time Series Modifiers Display


As the time series modifiers display is a central component of the Delft-FEWS system, its configuration is quite simple. No additional configuration needs
to be added. The display does require preconfigured displays to be defined using the Display Groups configuration, as the time series defined in
these display groups are available for applying modifiers to.

Making this display available only requires the relevant class be called from the Explorer. It is added as an explorer Task.

An example of the configuration is provided below

<iconFile/>
<mnemonic>X</mnemonic>
<taskClass>[Link]</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>
<accelerator>ctrl X</accelerator>
<loadAtStartup>true</loadAtStartup>


Note that the use of Icons and Mnemonics, as well as the name of the display, will depend on the configuration.

09 State editor display

Interactive editing of model states


Several operational forecasting systems utilise conceptual hydrological models for calculating the response of a catchment to rainfall. These models typically contain a number of parameters and state variables. A common requirement within the forecast process is the updating of state variables so that the results of the models are closer to the observed behaviour. The updating of states on the basis of observed errors is often referred to as data assimilation.

There are methods available that allow for the updating of states algorithmically, including for example Kalman Filters, empirical state correction
methods etc. Another approach is the direct updating of state variables through manual intervention.

The approach taken is that state variables are considered as time series of variables, and FEWS handles these as it does any other time series. The evolution of state variables can then be easily plotted against time, as can for example the time series of resulting discharge. When a state variable needs to be amended, the changed values are saved as non-equidistant values at the time of the change. When running the model these values are imported into the model and used at the indicated time to overrule the value calculated internally.

To be able to use the state modifiers functionality, the adapter to the model for which the states are to be modified must have the ability to take in
the time series of amended values and overrule those used in the internal state. Additionally the model and its adapter should be able to export
the values of calculated state variables as a time series. The figure below illustrates the exchange of time series of model inputs & state values to
the external model, as well as the return of model outputs and state values.

State Editor Display
The State editor display supports the user in amending time series of state variables. The amended data are then passed to the model through
the model adapter to allow insertion into the state and subsequent change of response on running the model.

The state editor display is in principle independent of the DELFT-FEWS system, and has been developed as a web service client. Exchange of
data with the DELFT-FEWS database makes use of a web service data exchange protocol. To the normal user, however, the display appears as
an integrated part of the system.

The state editor display contains four main sections


• In the main window a graph is shown of the selected state variable and the response of the model. Only one variable is shown at a time for
simplicity. The arrows on the bottom right can be used to navigate to the other variables.
• The panel below the main window shows each of the state variables. For each a slider control is available which allows the user to set the
desired value. The slider controls include reference values of the climatology maximum and minimum (red lines), average (green lines) and the
original values (blue line).
• A tree view on the left showing all the models for which states are available.
• A panel on the bottom left showing times at which states are available in the database for the selected model.

Selecting the model for which the state is to be edited

To select the model for which the state is to be edited, identify the group which contains the desired model, open the tree if required by
double-clicking on the folder icon and select the desired model.

Once a model has been selected, the available state times for that model will be displayed, and the values of the slider controls and time series
will be updated to show the values in the time series of state variables at the time of the state selected.

Setting the time for the new state variable set

Before entering the values for the new state, the time at which these are to be defined needs to be set. There are three options to set the time: by selecting a time at which there is a state for that model in the database, by entering a time in the available input field Enter new state date/time, or by selecting the appropriate time on the graph.

Setting the values of a state variable

Select the variable to be set. If this is not displayed, use the arrow keys on the bottom right to navigate to the desired variable. If there are more
than six variables, keep scrolling to the right to reveal those that are not displayed.

Defining a new value for a state variable

The variable can be amended using the slider control. A small triangle on the display will indicate the new value at the selected time.

Save in new state variable set

Once the values of selected variables have been set, the state can be saved by selecting the Save button. Note that this will save a complete set of state variables. In other words, if there are six state variables, and only one has been amended, then the saved set will contain all six variables: the amended value, as well as the other five with their values at the selected time.

Current Constraints

Once a set of state variables has been defined these cannot be deleted.

The values of the state variables displayed are only those from the historical run. Values from the forecast run are not displayed.

Configuration of the FEWS Pi-Service


The PI-Service provides an interface through which data can be exchanged with the FEWS Web Service. To avoid the external web service client, referred to as the PI-Client, having to contain intelligence on how data is stored internally in the FEWS database, the Pi-Service configuration allows objects to be set up for exchange by assigning an external ID, to be used by the PI-Client, to an internal object.

Four FEWS objects are available for exchange


• Time series data
• Module Parameter sets
• Module datasets
• Module states

General

Several definitions are given in the general block. The exchange of data through the web service is similar to that through the General Adapter.

importDir

If the option to exchange data using an intermediate persistent layer is selected (see option writeToFile) then this specifies the directory the
service imports data from.

importIdMap

Specifies the Id Map to translate external identifiers for locations and parameters to internal identifiers for locations and parameters on import.

importUnitConversionId

Specifies the unit conversion table to translate external units to internal units on import.

exportDir

If the option to exchange data using an intermediate persistent layer is selected (see option writeToFile) then this specifies the directory the service exports data to.

exportIdMap

Specifies the Id Map to translate external identifiers for locations and parameters to internal identifiers for locations and parameters on export.

exportUnitConversionId

Specifies the unit conversion table to translate external units to internal units on export.

writeToFile

Boolean option to exchange data through a persistent layer or to only exchange data in memory. When set to true the persistent layer is used.

timeZone

Defines the time zone to which data is exported and from which it is imported.
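
As an illustration, a general block could look as follows. The directory paths, Id Map names and unit conversion ids are examples only and should follow the conventions of the configuration at hand; the exact content of the timeZone element follows the Pi-Service schema and is omitted here.

<general>
<importDir>%REGION_HOME%/Import/PiService</importDir>
<importIdMap>IdImportPiService</importIdMap>
<importUnitConversionId>ImportUnits</importUnitConversionId>
<exportDir>%REGION_HOME%/Export/PiService</exportDir>
<exportIdMap>IdExportPiService</exportIdMap>
<exportUnitConversionId>ExportUnits</exportUnitConversionId>
<writeToFile>false</writeToFile>
</general>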

timeSeries

Definition of the time series objects available for exchange

description

Optional description for this time series object

id

Identifier of the object to be used in exchanging through the web service

exportBinFile

Boolean option. When set to true the data is exported as a bin file (or byte stream) while the headers use the PI-XML format. When false all data
and headers are included in XML format.

timeSeriesSet

Definition of the FEWS time series set. Note that the use of locationSets etc. is supported, meaning that any object may contain multiple time series.

omitMissingValues

Boolean option. When true the exported arrays will not include missing values. When false (default) missing values will be included using the
specified missing value indicator.

missingValues

Identifier for missing values. If not defined missing values will be defined as NaN.

convertDatum

Boolean option. When true the values in the exported arrays will be transformed by the datum to provide the data relative to the global datum. This applies only to those parameter groups for which a datum applies.
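
A sketch of a single timeSeries object is given below. The object id, moduleInstanceId, parameterId and locationSetId are illustrative only; the timeSeriesSet follows the usual FEWS time series set definition.

<timeSeries>
<description>Observed water levels exchanged with the PI-client</description>
<id>WaterLevels_observed</id>
<exportBinFile>false</exportBinFile>
<timeSeriesSet>
<moduleInstanceId>ImportObserved</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs</parameterId>
<locationSetId>GaugingStations</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-48" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<omitMissingValues>false</omitMissingValues>
<missingValues>-999.0</missingValues>
<convertDatum>false</convertDatum>
</timeSeries>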

moduleDataSet

Definition of module dataset objects available for exchange. Note that these are exchanged as zip files (streamed).

id

Identifier of the object to be used in exchanging through the web service.

moduleInstanceId

Identifier of the module dataset in the FEWS database using its moduleInstanceId.

parameterSet

Definition of module parameter set objects available for exchange. Elements to be defined are the same as in the moduleDataSet.

moduleState

Definition of module state set objects available for exchange. Elements to be defined are the same as in the moduleDataSet.
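
For illustration, the three module object definitions could look as follows; the ids and moduleInstanceIds are examples only.

<moduleDataSet>
<id>Sobek_Rhine_Dataset</id>
<moduleInstanceId>Sobek_Rhine</moduleInstanceId>
</moduleDataSet>
<parameterSet>
<id>HBV_Moselle_Parameters</id>
<moduleInstanceId>HBV_Moselle</moduleInstanceId>
</parameterSet>
<moduleState>
<id>HBV_Moselle_State</id>
<moduleInstanceId>HBV_Moselle</moduleInstanceId>
</moduleState>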

Configuration of the State Editor Pi-Client


The state editor display is an external display to Delft FEWS. It exchanges data with the main FEWS database using the PI-Web Service
exchange interface (see also Configuration of the FEWS Pi-Service).

Configuration of the State Editor requires configuration of several items;


• A general section defining the name of the display etc.
• A definition for the grouping of models for which states are handled
• A definition of all the models for which states are handled
• A definition of the grouping of state parameters
• A definition of the state parameter group which describes which parameters are contained in each model, and the properties of these
parameters.
• A definition of additional time series to be displayed (e.g. result series and observed series)
• A definition of how time series are displayed.

general

Definition of general settings for the display

displayName

Name of the display to be shown in the title bar

timeZone

Time zone to display all data in.

modelGroups

Definition of the grouping of models

modelGroup

Definition of a group of models. This will appear as a folder in the treeview. The attributes are;
id Identifier for the group for later reference
name Name of the group to be displayed in the tree view

modelId

list of identifiers of models included in this group (one or more)

modelGroupId

list of identifiers of model groups included in this group (one or more). This allows recursive definition so that a complete tree can be formed.
Obviously circular references are not supported.

model

definition of the models for which state variables/parameters are available for editing.
id Identifier for the model for later reference
name Name of the model to be displayed in the tree view (id is used if not defined).

locationId

Location identifier associated to this model. This is used for matching time series to this model.

stateParameterGroupId

Identifier of the group of parameters considered in this model state. The parameters and their properties are defined in the stateParameterGroup
element.

resultSeriesGroupId

Identifier of the result time series. The locationId is used in matching a model to a time series.

stateParameterGroup

Definition of the groups of parameters. For each model type a group needs to be defined. Different calibrations of the same model may allow different sets of parameters to be edited.
id Identifier for the parameter group for later reference

stateParameterId

List of identifiers of the parameters included in the group. The properties of the parameters are defined in the stateParameter section.

seriesGroup

Definition of a time series to be shown in the lower plot of the display. This time series should be associated to the model response to give
guidance on the changes made to the state parameters.
id Identifier for the group of time series. This is for reference
name Name of the group of time series.

series

Definition of a time series. This definition is related to the definition of time series in the PI-Web Service. The identifiers used in this series group must correspond to identifiers of time series objects in the definition of the PI-Service.
id Identifier for the time series object as defined in the PI-Web Service configuration.
locationId Identifier for the location of the time series. If this is a group of time series then the keyword $ANY$ should be used. The locationId associated to a model selection will then be filled in at run-time.
parameterId Identifier of the parameter of the time series. This is the same parameter used as externalParameter in the configuration of the idMapping.

qualifier

Qualifier for the time series. This may be min, max or mean. Used only for displaying parameter climatology.

stateParameter

Definition of the parameters in a state. These can then be referenced in a stateParameterGroup.
id Identifier for the parameter for later reference
name Name of the parameter to be displayed (id is used if not defined).

range

Range of the parameter


min Minimum parameter value
max Maximum parameter value

inputSeries

Input data series for displaying as the time series of this parameter. This should relate to the time series for the current parameter. There is no
explicit check if this is the case. See definition of series

outputSeries

Output data series to write the amended values of the state to. This should relate to the time series for the current parameter, and typically is a
non equidistant time series. There is no explicit check if this is the case. See definition of series.

climatology

Definition of the climatology as a time series within this configuration. This is defined as eventData, with a given max, min and mean.

climatologySeries

Definition of the climatology as a time series obtained from the FEWS database (preferred option). A series can be defined for the min, max and mean at each location. The qualifier can be used to assign a series to each of these.
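
The fragment below sketches how the main elements described above fit together for a single SACSMA model. All identifiers, names and values are illustrative; whether the listed items are attributes or child elements, and the exact nesting, are determined by the State Editor schema, and the inputSeries and outputSeries definitions have been abbreviated.

<general>
<displayName>State Editor</displayName>
</general>
<modelGroups>
<modelGroup id="UpperBasin" name="Upper basin models">
<modelId>SACSMA_Location1</modelId>
</modelGroup>
</modelGroups>
<model id="SACSMA_Location1" name="SACSMA Location 1">
<locationId>Location1</locationId>
<stateParameterGroupId>SACSMA_States</stateParameterGroupId>
<resultSeriesGroupId>SACSMA_Results</resultSeriesGroupId>
</model>
<stateParameterGroup id="SACSMA_States">
<stateParameterId>UZTWC</stateParameterId>
<stateParameterId>LZTWC</stateParameterId>
</stateParameterGroup>
<stateParameter id="UZTWC" name="Upper zone tension water content">
<range min="0" max="150"/>
<inputSeries>...</inputSeries>
<outputSeries>...</outputSeries>
</stateParameter>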

Configuration of the Explorer to open the State Editor Pi-Client


To run the State Editor display, this needs to be configured as a task. This should be defined as a call to the appropriate class of the external web
service client. The arguments are
• Name of the configuration of the PI-Web Service Client and the PI Web Service itself. These names should be the same.
• The IP address of the computer on which the Pi-Service is run. Typically this will be the local machine, in which case the keyword %HOSTNAME% can be used.
• The port on the computer on which the Pi-Service is run. Again this is typically determined by FEWS itself when running on the local machine and need only be specified when running remotely. When used locally the keyword %PORT% should be used.

<iconFile>""</iconFile>
<mnemonic>C</mnemonic>
<arguments>SACSMA_StateModifiers %HOSTNAME% %PORT%</arguments>
<taskClass>[Link]</taskClass>
<toolbarTask>false</toolbarTask>
<menubarTask>true</menubarTask>


10 Interactive forecast display


This display allows forecast tasks or workflows to be run locally on the operator client. These forecast tasks may be sub-tasks to a full basin
forecast. The concept is that this display is used in conjunction with the Time Series Modifier Display. In the latter, modifiers are defined, and then
run through this display. As the modifiers will typically pertain only to changes in time series in a part of the sub-basin, only the tasks relevant to
that sub-basin need to be re-run.

Once the set of modifiers have been defined, and the results from that sub-basin are deemed acceptable, the forecaster can then move on to the
next task downstream.

The display shows a logical layout of the sub-tasks. This is defined in configuration. A downstream sub-task may be dependent on an upstream workflow, and this can be indicated graphically. When such a dependency is defined, the display will only allow that sub-task to be run once the task on which it depends has been run and completed.

A simple colour scheme is used to show the status of each of the sub-tasks;
• Green - indicates the sub-task is up to date.
• Yellow - indicates this sub-task is not up to date. However, there are no dependencies on other tasks, or all tasks on which it depends are up to date. The task is therefore available to be run.
• Red - indicates this sub-task is not up to date. There are tasks on which it depends that are not yet up to date. The task cannot yet be run.
• Purple - indicates the task is running.

The display shows the layout of the sub-tasks in the main panel, as well as a tree view on the left. Multiple groups may be defined. When a group is selected in the tree view on the left, the sub-tasks associated to that group will be displayed in the main panel.

The properties of the tasks run interactively through the display can be changed from the default properties using the Properties button. An
information panel below the tree view indicates the properties that have been set.

Selecting a group of tasks

A group of sub-tasks, usually associated to a basin, can be selected from the tree view on the left. The sub-tasks displayed in the main panel will be updated according to the group selected.

Running sub-tasks

To run a sub-task, select the task block it is associated to in the main panel. A dotted outline of the task block highlights that it has been selected.

A menu can be activated with options to run the task

To run the task selected, select the Run option. This is only possible if the task block is green or yellow.

To run all preceding tasks, select the Run to Selected option. This will run all tasks that have not yet been run in the defined sequence.

To run all preceding tasks, including re-running those tasks already up to date, select the Run all to Selected option. This will run all tasks in the defined sequence, including those that are already up to date.

Running the group workflow on the server

Once the iteration of tasks has been completed, the workflow associated with the group selected can be submitted for running on the server. In
effect this is the same as submitting that task through the Manual Forecast Dialogue. This workflow will, however, use the properties as defined
for the sub-tasks when running these locally.

Showing results of a sub-task

To view the results of a sub-task, the option Show Results... can be selected. This opens the time series display, showing the pre-configured plot that has been associated with that sub-task.

Setting the properties of task runs

The properties of the task runs can be amended from the default settings by selecting the Properties option.

Through this display the properties of the run can be specified. This display is similar to the options available in the Manual Forecast Dialogue.

A what-if scenario can be selected to be used in the task runs.

The run can be set to use a cold state start by selecting the Cold State option and setting the number of days prior to the Forecast T0 at which the run should start.

The run can be set to use a warm state by selecting the state search period. This can be set relative to the Forecast T0 by using the Warm State
option. Alternatively the historical run to start from can be selected in the list of dates available in the Warm State Selection option.

The forecast length can be specified to deviate from the default by selecting the option Forecast Length and entering the appropriate number of
days.

Once the options have been defined, these will be used for all tasks run within the active session of the interactive forecast display.

Configuration of the Interactive Forecast Display
The interactive forecast display is an implementation of the TaskRun dialogue.

This is the configuration of a display and can be found in the displayConfigFiles section of the configuration.

The configuration of the display requires a layout of the flow chart to be defined, as well as the workflows associated to each group of tasks, and
sub-tasks.

taskRunDialog

Main element of the configuration. This may include the attributes;


Title The title of the window
showProperties An option to enable or disable the Properties dialogue on the display.

taskGroup

For each group of tasks this element is defined. This has two attributes;
Name the name of the group as displayed in the tree view on the left of the display.
workflowId the Id of the workflow which can be submitted to the central server. This workflow should run all the tasks in the group. Note that
there is no check if this is indeed the case.

flowChart

This element is used to define the overall layout of the flow chart shown on the main panel. The element has no attributes

scale

Scale can be used to increase and decrease the size of the flow chart, without the need to change each individual item.

taskSize

This defines the size of each task block. The values entered here are multiplied by the scale to find the actual size in pixels. There are two
attributes;
width width of each task block - the full width is found by multiplying the width defined with the scale.
height height of each task block - the full height is found by multiplying the height defined with the scale.

simpleTask

Not used in the configuration of the Interactive Forecast display. This element should not be included.

operatorTask

Definition of each of the sub-tasks that can be run from the display. The attributes that are relevant to the configuration of this display are;
name Name of the task as it should appear on the display
workflowId Id of the workflow to run under this task
explorerLocationId Id of the main location associated with the sub-task. This is used to open the associated display to view results.
runLocally Option whether this task is to run locally or centrally (default: true)

center

Defines the centre of the associated task block. These coordinates are in the same scale as the taskSize. The actual pixel position from the top left of the display can be calculated by multiplying with the scale value. The attributes are;
x location of the centre of the task block from the left of the panel
y location of the centre of the task block from the top of the panel

dependency

Defines the tasks on which this task depends (i.e. its predecessors). Zero or more dependencies can be defined. The attributes are;
taskName name of the task that should be run before this one

vertex

If the task being defined and a dependent task cannot be connected by a straight line, a vertex point can be defined. The connecting line will pass through that point. The actual pixel location from the top left of the display can be calculated by multiplying with the scale value. The attributes are;
x x-coordinate of the vertex from the left of the panel.
y y-coordinate of the vertex from the top of the panel

fewsPanel

Not used in the configuration of the Interactive forecast display. This element should not be included.

archiveTask

Not used in the configuration of the Interactive forecast display. This element should not be included.

Adding Interactive Forecast Dialogue to the configuration.


To use the interactive forecast dialogue, it must be added to the following general configuration files in the SystemConfig section;

DisplayDescriptors

<description>TaskRunDialog</description>
<className>[Link]</className>


DisplayInstanceDescriptors

<description>TaskRunDialog</description>
<displayId>TaskRunDialog</displayId>


[Link]

<iconFile/>
<mnemonic>I</mnemonic>
<taskClass>[Link]</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>
<accelerator>ctrl I</accelerator>


Example of a configuration
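
The fragment below is an illustrative sketch only; the group and task names, workflow and location ids and the coordinates are examples, and the exact nesting and attribute casing are determined by the taskRunDialog schema. It defines one group with two sub-tasks, where the downstream task can only be run after the upstream task has completed.

<taskRunDialog title="Interactive Forecasts" showProperties="true">
<taskGroup name="River Basin A" workflowId="BasinA_Forecast">
<flowChart>
<scale>20</scale>
<taskSize width="6" height="3"/>
<operatorTask name="Upstream model" workflowId="BasinA_Upstream" explorerLocationId="LocUpstream" runLocally="true">
<center x="5" y="2"/>
</operatorTask>
<operatorTask name="Downstream model" workflowId="BasinA_Downstream" explorerLocationId="LocDownstream" runLocally="true">
<center x="5" y="8"/>
<dependency taskName="Upstream model"/>
</operatorTask>
</flowChart>
</taskGroup>
</taskRunDialog>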

11 Threshold Display

Threshold Overview display
The threshold display is a display plug-in that allows the user to see at a glance which locations have forecasted threshold crossings, a summary
of alarms and more detailed information about specific site forecasts. For a description of the functionality available - please see the user guide.

The Id of the threshold display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. Threshold overview display is for example:

Thresholds_overview_display 1.00 [Link]

Thresholds_overview_display File name as specified in the display instance descriptors

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 1: Root elements of the threshold overview display configuration.

general

Figure 2: General elements

This section gives the title of the dialog and allows the user to filter for workflows which are to be displayed e.g. Coastal_Forecast

displayDescriptor

Figure 3: Display descriptor elements

This section of the configuration forms the main part of the configuration. Each "tab" has different functionality and shows a different aspect of the data. Tab 1 shows threshold crossing aggregates over multiple hours (i.e. the highest crossing over 4 hours). Tab 2 shows the highest alarms on an hourly basis. Tab 3 gives a text summary of alarms and the optional Tab 4 can be used to show additional site data.

inputVariable

Standard timeseries set with variable ID required

columnAttributes

This section of the configuration relates specifically to the configuration of the site data in Tab 4.

12 Task Run Dialog Display


Task Run Dialog display
The task run dialog display is a display plug-in that allows the user to run external tasks and modify data of what-if scenarios before running the task. For example, it can be used to upload and download archives from an archive server, or to run a forecast while at the same time storing a value in a timeseries that can be used by the forecast.

The Id of the task run dialog display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. task run dialogue display is for example:

Archive_display 1.00 [Link]

Archive_display File name as specified in the display instance descriptors

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 1: Root elements of the task run dialog overview display configuration.

flow chart

Add a flow chart to the display

simple task

operator task

archive task

13 Manual Forecast Display
Function: Configure the Manual Forecast Display

Module Name: ManualForecastDisplay

Where to Use? DisplayConfigFiles\[Link]

Why to Use? To change the default behaviour and appearance of the manual forecast dialog

Description: The Manual Forecast Dialog can be changed to alter its default behaviour. Instructions on how to do this are given here.

Preconditions: Method for starting up the display. Usually an entry in the explorer tasks section of the Explorer configuration.

Outcome(s): Updated behaviour of the manual forecast dialog

Remark(s): This module should be registered in the DisplayInstanceDescriptors and the DisplayDescriptors files

Available since: DelftFEWS200803

Contents

Contents
Configuration
Related modules and documentation
Technical reference

Configuration

Config Example [Link]

<coldState>
<startDate unit="hour" multiplier="72"/>
</coldState>
<warmState>
<stateSearchPeriod unit="hour" start="-72" end="1"/>
</warmState>
<forecastLength unit="day" multiplier="4"/>
<task workflowId="ImportExternal" forecastLengthSelection="false" stateSelection="false"/>
<task workflowId="Fluvial_Forecast">
<coldState>
<startDate unit="hour" multiplier="48"/>
</coldState>
<warmState>
<stateSearchPeriod unit="hour" start="48" end="24">
<description>Please use this feature to find old warm states</description>
</stateSearchPeriod>
</warmState>
<forecastLength unit="hour" multiplier="24"/>
</task>

Related modules and documentation

See

Add Macro Button

Technical reference

Entry in DisplayDescriptors: [Link]

Link to schema: [Link] [Link]

Add Macro Button


Function: Configure Manual Forecast Dialog to load batch runs (predefined)

Module Name: ManualForecastDisplay

Where to Use? DisplayConfigFiles\[Link]

Why to Use? To add a Macro button to the Manual Forecast Display

Description: The Manual Forecast Dialog includes a facility to run a predefined list of forecasts. This list can be defined in an XML file.
This facility is off by default but can be switched on using the instructions given here.

Preconditions: Stand Alone system, should not be used in client-server mode. Method for starting up the display. Usually an entry in the
explorer tasks section of the Explorer configuration.

Outcome(s): A button that can be used to load a list of forecasts will be added to the manual forecast display

Screendump(s):

Remark(s): Schema for the actual lists of forecasts: [Link]

The XML file for the Manual Forecast Dialog can also contain other configuration options which are described elsewhere.

Available since: DelftFEWS200702

Contents

Contents
Overview
Configuration
Error and warning messages
Known issues
Related modules and documentation
Technical reference

Overview

Apart from the batch tab in the manual forecast it is also possible to run batches of forecasts that have been predefined in an xml file. To be able
to use these the display must be configured to add a button to load these files. Configuration consists of the following steps:

Add the display to the DisplayDescriptors file if this has not already been done (see entry below)
Make an XML config file for the display in the DisplayConfigFiles directory named ManualForecastDisplay 1.00 [Link]. Set the buttonVisible element to true (see example below).
Add an entry for this file to the DisplayInstanceDescriptors file
After restarting the system the Macro button should be visible (see screendump)
You can now load an XML file with a list of forecasts to perform (example).

Configuration

Config Example

<?xml version="1.0" encoding="UTF-8"?>
<manualForecastDisplay xmlns="[Link] xmlns:xsi="[Link]
xsi:schemaLocation="[Link] [Link]
<runningPredefined>
<description>Run pre-defined forecasts</description>
<directory>%REGION_HOME%</directory>
<buttonVisible>true</buttonVisible>
</runningPredefined>
</manualForecastDisplay>

The configurable items of interest are:

directory: Default directory the file dialog uses. This is where you would store your XML files with pre-defined forecasts.
buttonVisible: set to true to enable the button. Default is false.

Error and warning messages

No Known error messages.

Known issues

No Known issues.

Related modules and documentation

Schema for creating the list of forecasts: [Link]


Example batch XML file
Another example file

Technical reference

Entry in displayDescriptors: <className>[Link]</className>

Link to schema: [Link]

14 ChartLayer

15 Schematic Status Display (formerly Scada Display)


Overview

The schematic status display (formerly named Scada display) in Delft-FEWS is used for displaying and monitoring data. The schematic status
display shows one or more configurable schematic views that represent data values in some way. For example, to show the most recent data
value of a given time series, it is possible to show just the numerical value, or to show a rectangle that varies in height depending on the data
value, or to show a polygon that changes in colour when the data value crosses a certain threshold, etc. How data is represented and which data
is shown can be configured in the schematic status display configuration file. The schematic status display is dynamically updated whenever new
data becomes available in the system or whenever data is changed (e.g. in the time series editor). The schematic status display is comparable to
the main map display, only the schematic status display does not and cannot refer to any geographical coordinate system. Furthermore the
schematic status display can be used to show text and figures as well as objects that represent data values.

Please note that the schematic status display in Delft-FEWS is only used for displaying data, it does not implement all features
that could be expected from a SCADA (Supervisory Control And Data Acquisition) system.

Before 2011_02, multipliers were sometimes used to do unit conversion. As of 2011_02 the standard unit conversion from
Delft-FEWS will be applied to the data shown in the Schematic Status Display. In case of migration of systems from before
2011_02, it can be necessary to verify whether the unit conversion is not applied twice.

Note that when using transformations in the Schematic Status Display, it is only supported to have outputVariables of the transformation output with timeSeriesType temporary. Though other transformation functions may work, only the UserSimple function is supported.

Contents

Overview
Contents
Configuration
Scada Display Configuration Options
displayName
showTimeNavigatorToolbar
timeNavigatorTimeStep
dateFormat
numberFormat
variable
scadaPanel
Time Navigator Toolbar Configuration Options
timeNavigatorRelativePeriod

movieFrameDurationMillis
Scada Panel Configuration Options
id
name
svgFile
nodeId
textComponentBehaviourDefinition
shapeComponentBehaviourDefinition
svgObjectId
leftSingleClickAction
leftDoubleClickAction
linkPropertiesToData
useThresholdWarningLevelColors
toolTip
replaceTags
Left Single Click Action and Left Double Click Action Configuration Options
switchToScadaPanel
scadaPanelId
openDisplay
timeSeriesDisplay
timeSeriesEditor
title
variable
runWorkflow
workflowId
Link Properties To Data Configuration Options
Link Height To Data Configuration Options
variable
dataLowerLimit
dataUpperLimit
heightLowerLimit
heightUpperLimit
anchorPoint
Link Rotation To Data Configuration Options
variable
dataLowerLimit
dataUpperLimit
rotationLowerLimit
rotationUpperLimit
anchorPointX
anchorPointY
Use Threshold Warning Level Colors Configuration Options
variable
thresholdGroupId
thresholdReference
colorType
Tooltip Configuration Options
variable
toolTipText
Replace Tags Configuration Options
variable
Variable Configuration Options
variableId
locationId
overrulingRelativeViewPeriod
timeSeriesSet
Transformations within ScadaDisplay
Sample configuration for transformations within the ScadaDisplay
Known Issues
Tips And Tricks
SVG specification
Embedding image files into SVG files
Controlling the resizing behaviour of an svg document within the scada display
Determining the rotation anchor point for an SVG object in user space coordinates
Aligning text within svg text objects
Export maps from ArcGis as svg files
Reduce the size of svg files

Configuration

The schematic status display shows one or more status panels, which can be selected in turn from the list on the left hand side. It is also possible
to have multiple schematic status displays, each one with different panels. In that case there would be one configuration file for each different
schematic status display, each one with a different filename. The filename of each schematic status display should be registered in the
[Link] configuration file. When available on the file system, the name of the xml file for configuring a schematic status display is for example "[Link]". To register a schematic status display in the DisplayInstanceDescriptors configuration file use
e.g. the following xml code:

<description>Schematic Status Display Twentekanalen</description>


<displayId>ScadaDisplay</displayId>


Furthermore the displayId that is used in the [Link] file should be defined in the [Link] configuration file.
This can be done with e.g. the following xml code:

<className>[Link]</className>


To be able to open a schematic status display from the user interface, there should be an explorer task for it in the [Link] configuration file.
The xml code for a schematic status display explorer task is for example:

<arguments>StatusTwentekanalen</arguments>
<taskClass>[Link]</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>


Example Configuration files:

[Link] Example of a scada display configuration file

Twentekanalen_10min.svg Example of an svg file, which is used in the [Link] example configuration file

Scada Display Configuration Options

Below is an overview of the options that are available in the schematic status display xml schema. All configuration options are also documented
in the annotations in the schematic status display xml schema. To get the most up to date information about the available configuration options
and their documentation in the annotations, please consult the schematic status display xml schema, which is available here.

Scada display configuration elements

displayName

Title of this display.

showTimeNavigatorToolbar

Option to show a time navigator toolbar at the top of this schematic status display. The time navigator toolbar can be used to select the display
time for this schematic status display. It is only possible to select a display time that is contained within the configured relative period and is a valid
time according to the configured time step. This period is always relative to the current system time. If the current system time changes, then the
display time is reset to the current system time. If this option is not specified, then the time navigator toolbar is not shown.

timeNavigatorTimeStep

The timestep by which the time navigator slider moves is by default the cardinal timestep which is configured in the [Link] configuration file, see FEWS Explorer Configuration. This optional element can be used to set a different timestep for the time navigator than the cardinal timestep.

dateFormat

Definitions of dateFormats that can be used for formatting dates and times in tags in texts of svg objects.

numberFormat

Definitions of numberFormats that can be used for formatting numbers in tags in texts of svg objects.

variable

Definitions of variables that can be used as input and/or output for the components in the scada display. A variable is always a time series.
Alternatively variable definitions can be embedded in the configuration below.

scadaPanel

One or more definitions of schematic status panels. In the user interface each schematic status panel will be available from the list in this
schematic status display.

Time Navigator Toolbar Configuration Options

Time navigator toolbar configuration elements

timeNavigatorRelativePeriod

This is the period of the time navigator toolbar (slider) in this schematic status display. The time navigator toolbar can be used to select the
display time for this schematic status display. It is only possible to select a display time that is contained within this period and is a valid time
according to the cardinal time step (which is configured in the [Link] configuration file, see FEWS Explorer Configuration). This period is
always relative to the current system time. If the current system time changes, then the display time is reset to the current system time. The start
and end of the period are both included.

movieFrameDurationMillis

The duration of a frame when the time navigator is animating. This is the number of milliseconds a frame/time step is visible before the next time
step becomes visible. If this option is not specified, then 200 milliseconds is used by default. When the CPU is too slow to display the specified
frame rate, a frame will be displayed longer than specified.
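
As an illustration (assuming that the relative period and frame duration are child elements of showTimeNavigatorToolbar, and using example values), the toolbar could be configured along these lines:

<showTimeNavigatorToolbar>
<timeNavigatorRelativePeriod unit="hour" start="-48" end="24"/>
<movieFrameDurationMillis>200</movieFrameDurationMillis>
</showTimeNavigatorToolbar>
<timeNavigatorTimeStep unit="hour" multiplier="1"/>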

Scada Panel Configuration Options

Schematic status panel configuration elements

id

Identifier of this schematic status panel.

name

The name of this schematic status panel as it is displayed in the user interface. If not specified, then id is used as name.

svgFile

The name of an svg (Scalable Vector Graphics) file in the ReportImageFiles directory. This schematic status panel shows all svg objects that are
defined in the specified svg file. The svg objects in the svg file can be given special behaviour and/or properties using the configuration below.
See [Link] for the SVG 1.1 specification.

nodeId

Optional. Identifier that refers to a node in the topology configuration file. If specified, then the referenced topology node will be selected when this
scadaPanel is selected in the user interface. When the topology node is selected, then that may cause other things to be selected as well, like
e.g. the displayGroup in the TimeSeriesDisplay that corresponds to that node.

textComponentBehaviourDefinition

One or more items to define special behaviour and/or properties for components in this schematic status panel. Each item refers to an svg object
that is defined in the given svg file. Each item also contains definitions of behaviour and/or properties for that object. This way it is possible to e.g.
replace tags in the text of a text object with certain values from a certain time series, or to define what should happen when the user clicks on a
certain component.

Definition of special behaviour and/or properties for a text component in this schematic status panel. This refers to an svg object of type "text" that
is defined in the given svg file. This contains definitions of behaviour and/or properties for that svg object. An svg object of type "text" can be a "text", "tspan", "tref", "textPath" or "altGlyph" element.

shapeComponentBehaviourDefinition

One or more items to define special behaviour and/or properties for components in this schematic status panel. Each item refers to an svg object
that is defined in the given svg file. Each item also contains definitions of behaviour and/or properties for that object. This way it is possible to e.g.
replace tags in the text of a text object with certain values from a certain time series, or to define what should happen when the user clicks on a
certain component.

Definition of special behaviour and/or properties for a shape component in this schematic status panel. This refers to an svg object of type
"shape" that is defined in the given svg file. This contains definitions of behaviour and/or properties for that svg object. An svg object of type
"shape" can be a "path", "rect", "circle", "ellipse", "line", "polyline" or "polygon" element.

svgObjectId

The id of the object in the svg file for which this item defines special behaviour and/or properties.

leftSingleClickAction

Action that is triggered when the user clicks once on this object with the left mouse button.

leftDoubleClickAction

Action that is triggered when the user double clicks on this object with the left mouse button.

linkPropertiesToData

Optional. Contains options to link properties of this component to actual data values. For example the height of the component can be changed
depending on the data values of a specified variable.

useThresholdWarningLevelColors

Optional. If specified, then the data for the specified variable within the specified relative view period is used to determine threshold crossings. For
crossed thresholds, warningLevels are activated. The color of the most severe activated warningLevel is used as the fill and/or stroke color for the
component, as specified.

toolTip

Optional. If specified, then a toolTip with the specified text is displayed for this component.

replaceTags

If specified, then the tags in the text of this component are replaced using data from the specified variable. Tags should be separated by "%"
signs. Text can be e.g. "Last value = %LASTVALUE(numberFormatId)%", which would be replaced by e.g. "Last value = 10.0". The following tags
can be used in the text (numberFormatId/dateFormatId should be replaced by the id of a numberFormat/dateFormat that is defined at the start of
this configuration file):

%MAXVALUE(numberFormatId)% is replaced by the maximum reliable or doubtful value in the time series.
%MINVALUE(numberFormatId)% is replaced by the minimum reliable or doubtful value in the time series.
%LASTVALUE(numberFormatId)% is replaced by the most recent reliable or doubtful value in the time series.
%LASTVALUETIME(dateFormatId)% is replaced by the date and time of the most recent reliable or doubtful value in the time series.
%STARTTIME(dateFormatId)% is replaced by the start date and time of the relative view period of the time series.
%ENDTIME(dateFormatId)% is replaced by the end date and time of the relative view period of the time series.
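
The sketch below shows how a text component in a panel could be linked to a variable so that tags in its text are replaced. The panel id, svg file name, svg object id and variable are illustrative. The svg text object itself would then contain a tagged text such as "Last value = %LASTVALUE(levelFormat)%", where levelFormat is the id of a numberFormat defined at the start of the configuration file.

<scadaPanel>
<id>OverviewPanel</id>
<name>System overview</name>
<svgFile>Overview.svg</svgFile>
<textComponentBehaviourDefinition>
<svgObjectId>text_waterlevel_Loc1</svgObjectId>
<replaceTags>
<variable>
<variableId>Observation</variableId>
<locationId>Loc1</locationId>
</variable>
</replaceTags>
</textComponentBehaviourDefinition>
</scadaPanel>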

Left Single Click Action and Left Double Click Action Configuration Options

Click action configuration elements

switchToScadaPanel

Within this schematic status display the view will switch to the specified panel.

scadaPanelId

The id of the scadaPanel to switch to. The scadaPanel to switch to must be present in this config file.

openDisplay

Open another Delft-FEWS display.

timeSeriesDisplay

Open the timeSeriesDisplay using the specified options. The period that is shown in the display is the smallest period that completely includes the
relative view periods of all shown variables.

timeSeriesEditor

Open the timeSeriesEditor using the specified options. The data of the specified variables can be edited in the display. The period that is shown in
the display is the smallest period that completely includes the relative view periods of all shown variables.

title

Title of the display window.

variable

One or more variables to define the data that is shown in the display.

runWorkflow

Run a predefined workflow.

workflowId

The workflow descriptor id of the workflow to run. This id should refer to a workflow that is defined in the WorkflowDescriptors configuration file.
The current system time is used as the time zero (T0) for the workflow run.
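
For illustration, a shape component could open the time series display on a single click and run a workflow on a double click; the svg object id, variable and workflow id are examples only and the exact nesting follows the schema.

<shapeComponentBehaviourDefinition>
<svgObjectId>rect_pump1</svgObjectId>
<leftSingleClickAction>
<openDisplay>
<timeSeriesDisplay>
<title>Pump 1 discharge</title>
<variable>
<variableId>PumpDischarge</variableId>
</variable>
</timeSeriesDisplay>
</openDisplay>
</leftSingleClickAction>
<leftDoubleClickAction>
<runWorkflow>
<workflowId>Update_Pump1</workflowId>
</runWorkflow>
</leftDoubleClickAction>
</shapeComponentBehaviourDefinition>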

Link Properties To Data Configuration Options

Link properties to data configuration elements

There are several options available:

height
rotation

Link Height To Data Configuration Options

Link height to data configuration elements

Optional. If specified, then for this component the height attribute is linked to the data values for the specified variable. This option can only be
used for svg objects of type "rect". If the data value is less than dataLowerLimit, then the height is set to heightLowerLimit. If the data value is
greater than dataUpperLimit, then the height is set to heightUpperLimit. If the data value is between dataLowerLimit and dataUpperLimit, then the
height will be linearly interpolated between heightLowerLimit and heightUpperLimit. If no data is available, then this component is made invisible.

Note: it is required that dataUpperLimit is greater than dataLowerLimit. However it is possible to define heightUpperLimit less than
heightLowerLimit to control the direction of the change of the height.

variable

The data for this variable is used to determine the height for this component.

dataLowerLimit

If the data value is less than or equal to dataLowerLimit, then the height will be equal to heightLowerLimit.

dataUpperLimit

If the data value is greater than or equal to dataUpperLimit, then the height will be equal to heightUpperLimit.

heightLowerLimit

The height that corresponds to the dataLowerLimit value.

heightUpperLimit

The height that corresponds to the dataUpperLimit value.

anchorPoint

The anchor point describes which part of the component should remain at the same position when the height is changed. Can be "bottom" or
"top".

Link Rotation To Data Configuration Options

Link rotation to data configuration elements

Optional. If specified, then for this component the rotation is linked to the data values for the specified variable. The rotation that is derived from
the data values is always relative to the rotation angle that is specified for this component in the svg file. This option can only be used for svg
objects of type "path", "rect", "circle", "ellipse", "line", "polyline", "polygon" or "text". If the data value is less than dataLowerLimit, then the rotation
is set to rotationLowerLimit. If the data value is greater than dataUpperLimit, then the rotation is set to rotationUpperLimit. If the data value is
between dataLowerLimit and dataUpperLimit, then the rotation will be linearly interpolated between rotationLowerLimit and rotationUpperLimit. If
no data is available, then this component is made invisible. If the data value is flagged as "varying direction" (e.g. varying wind direction), then the
rotation will increase linearly in time (animation). In this case the rotation will increase from rotationLowerLimit to rotationUpperLimit and then start
from rotationLowerLimit again.

Note: it is required that dataUpperLimit is greater than dataLowerLimit. However it is possible to define rotationUpperLimit less than
rotationLowerLimit to control the direction of the rotation. If rotationUpperLimit is greater than rotationLowerLimit, then increasing data values
result in clockwise rotation.

variable

The data for this variable is used to determine the rotation for this component.

dataLowerLimit

If the data value is less than or equal to dataLowerLimit, then the rotation will be equal to rotationLowerLimit.

dataUpperLimit

If the data value is greater than or equal to dataUpperLimit, then the rotation will be equal to rotationUpperLimit.

rotationLowerLimit

The rotation (in degrees) that corresponds to the dataLowerLimit value. This rotation is always relative to the rotation angle that is specified for
this component in the svg file.

rotationUpperLimit

The rotation (in degrees) that corresponds to the dataUpperLimit value. This rotation is always relative to the rotation angle that is specified for
this component in the svg file.

anchorPointX

The x coordinate of the anchor point. The rotation will be around the anchor point. This x coordinate has to be specified in the user space
coordinate system of the svg object for this component in the svg file. The user space coordinate system is determined by all transforms that are
specified in the parent svg objects of the svg object for this component. All transforms that are specified in the svg object itself are not part of the
user space coordinate system and thus should be taken into account in the coordinates that are specified here. E.g. to rotate a "rect" svg object
with attributes width="200" height="200" x="500" y="300" transform="translate(50 0)" around its center, use anchorPoint coordinates (x, y) = (650,
400). See also Determining the rotation anchor point for an SVG object in user space coordinates.

anchorPointY

The y coordinate of the anchor point. The rotation will be around the anchor point. This y coordinate has to be specified in the user space
coordinate system of the svg object for this component in the svg file. The user space coordinate system is determined by all transforms that are
specified in the parent svg objects of the svg object for this component. All transforms that are specified in the svg object itself are not part of the
user space coordinate system and thus should be taken into account in the coordinates that are specified here. E.g. to rotate a "rect" svg object with attributes width="200" height="200" x="500" y="300" transform="translate(50 0)" around its center, use anchorPoint coordinates (x, y) = (650,
400). See also Determining the rotation anchor point for an SVG object in user space coordinates.

Use Threshold Warning Level Colors Configuration Options

Use threshold warning level colors configuration elements

variable

The data for this variable is used to determine threshold crossings. For crossed thresholds, warningLevels are activated. The color of the most
severe activated warningLevel is used as the fill and/or stroke color for the component, as specified below.

thresholdGroupId

Optional. If specified, then only thresholds in the specified thresholdGroup are used in the determination of threshold crossings and warningLevels
for the specified variable. If not specified, then thresholds in all thresholdGroups are used.

thresholdReference

Specify which data is used for determining threshold crossings. Either choose the first or last reliable or doubtful value within the relative view
period, or choose all reliable or doubtful values within the relative view period. Can be "first_value", "last_value" or "relative_view_period".

colorType

Specify which color type (fill and/or stroke) should be changed to use warningLevel colors. Color types that are not specified here are not
changed. Can be "fill", "stroke" or "fill_and_stroke".
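
For illustration, the fill colour of a component could follow the warning level of the last observed value; the variable and thresholdGroup id are examples only.

<useThresholdWarningLevelColors>
<variable>
<variableId>Observation</variableId>
<locationId>Loc1</locationId>
</variable>
<thresholdGroupId>WaterLevelThresholds</thresholdGroupId>
<thresholdReference>last_value</thresholdReference>
<colorType>fill</colorType>
</useThresholdWarningLevelColors>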

Tooltip Configuration Options

Tooltip configuration elements

variable

The data from this variable is used to replace the tags in the specified toolTip text. If for a given tag the required data is not available, then that tag
is replaced by a dash symbol "-". This variable is only required if the specified toolTip text contains tags.

toolTipText

Text that is displayed in the toolTip for this component. This text can contain tags. The tags are replaced using data from the specified variable.
Tags should be separated by "%" signs. Text can be e.g. "Last value = %LASTVALUE(numberFormatId)%", which would be replaced by e.g.
"Last value = 10.0". The following tags can be used in the text (numberFormatId/dateFormatId should be replaced by the id of a
numberFormat/dateFormat that is defined at the start of this configuration file):

%MAXVALUE(numberFormatId)% is replaced by the maximum reliable or doubtful value in the time series.
%MINVALUE(numberFormatId)% is replaced by the minimum reliable or doubtful value in the time series.
%LASTVALUE(numberFormatId)% is replaced by the most recent reliable or doubtful value in the time series.
%LASTVALUETIME(dateFormatId)% is replaced by the date and time of the most recent reliable or doubtful value in the time series.
%STARTTIME(dateFormatId)% is replaced by the start date and time of the relative view period of the time series.
%ENDTIME(dateFormatId)% is replaced by the end date and time of the relative view period of the time series.

Replace Tags Configuration Options

Replace tags configuration elements

If specified, then the tags in the text of this component are replaced using data from the specified variable. Tags should be separated by "%"
signs. Text can be e.g. "Last value = %LASTVALUE(numberFormatId)%", which would be replaced by e.g. "Last value = 10.0". The following tags
can be used in the text (numberFormatId/dateFormatId should be replaced by the id of a numberFormat/dateFormat that is defined at the start of
this configuration file):

%MAXVALUE(numberFormatId)% is replaced by the maximum reliable or doubtful value in the time series.
%MINVALUE(numberFormatId)% is replaced by the minimum reliable or doubtful value in the time series.
%LASTVALUE(numberFormatId)% is replaced by the most recent reliable or doubtful value in the time series.
%LASTVALUETIME(dateFormatId)% is replaced by the date and time of the most recent reliable or doubtful value in the time series.
%STARTTIME(dateFormatId)% is replaced by the start date and time of the relative view period of the time series.
%ENDTIME(dateFormatId)% is replaced by the end date and time of the relative view period of the time series.

variable

The data from this variable is used to replace the tags in the text in the svg object that this component refers to. If for a given tag the required data
is not available, then that tag is replaced by a dash symbol "-".

Variable Configuration Options

Variable configuration elements

Choose between a reference to a variable or an embedded definition of a variable.

variableId

Identifier of a variable to use.

locationId

If the specified variable contains multiple locations, then specify the location to use here.

overrulingRelativeViewPeriod

Optional time period for which data should be read. This time period overrules the viewPeriod in the timeSeriesSet of the referenced variable. This
time period is relative to the selected display time in this scada display. The start and end of the period are both included. If the start and/or end of
the period is not a valid time according to the timeStep of the variable, then the start and/or end is shifted to the previous valid time (e.g. for a
period from 15:20 hours to 16:20 hours and a whole hour timeStep the period is shifted to be 15:00 hours to 16:00 hours).

timeSeriesSet

A time series set that can be used as input for a component.

Transformations within ScadaDisplay

Up to 2011_01 it was necessary that all data to be displayed in the ScadaDisplay was available beforehand as time series. This includes simple sums
and differences between other time series. From 2011_02 onwards, it is possible to include one or more transformations in the ScadaDisplay
configuration. These transformations make it easier to use derived time series. The derived time series are calculated on-the-fly as
temporary time series. The transformations are processed in the order they appear in the configuration.

NB. For 2011_02 only the UserSimpleFunction is supported and tested as a transformation that can be used.

Sample configuration for transformations within the ScadaDisplay

NB. It is required that the timeSeriesType of the output variables is set to temporary.
This sample will allow the variable with variableId Observation_minus_correction to be displayed on a scadaPanel. This variable refers to a
temporary timeseries that will be updated on-the-fly with the difference between the two other variables Observation and Correction.

<variable>
<variableId>Observation</variableId>
<timeSeriesSet>
<moduleInstanceId>Afgeleide_Twentekanalen</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Hydro_LMW_TK_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-1" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>

<variable>
<variableId>Correction</variableId>
<timeSeriesSet>
<moduleInstanceId>Afgeleide_Twentekanalen</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Hydro_LMW_TK_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-1" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Observation_minus_correction</variableId>
<timeSeriesSet>
<moduleInstanceId>Afgeleide_Twentekanalen</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Hydro_LMW_TK_H</locationSetId>
<timeSeriesType>temporary</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-1" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>9</synchLevel>
</timeSeriesSet>
</variable>
<transformation id="TransformationObservationMinusCorrection">
<user>
<simple>
<expression>Observation - Correction</expression>
<outputVariable>
<variableId>Observation_minus_correction</variableId>
</outputVariable>
</simple>
</user>
</transformation>
...

Known Issues

When using Delft-FEWS the configuration can be present as files on the file system or can be contained in a local data store. The svg files that
are used for the schematic status display work in both cases. However, if the svg files refer to separate image files, then the schematic status
display can only display these images if the image files are present as files on the file system. If these separate image files are contained in a
local data store, then they cannot be displayed in the schematic status display. Therefore, when using separate image files, make sure that the
configuration is present as files on the file system. If this is not possible, then it is possible to choose from two different workarounds:

The first possible workaround is to not use separate image files. For a schematic image it is possible to create svg elements that
resemble the contents of the image. If these svg elements are added to the svg file for the schematic status display, then there is no need
to use the image file anymore.
The second possible workaround is to use embedded images instead of separate image files. The section Embedding image files into
SVG files describes how to do this.

Tips And Tricks

SVG specification

The schematic status display uses SVG files. For details on the format and possibilities of SVG files, please refer to [Link]
for the SVG 1.1 specification.

Embedding image files into SVG files

It is possible to embed image files into an SVG file. If an image file is embedded into an SVG file, then the original image file is no longer needed,
because the image data is then available from the SVG file itself. To embed an image file into an SVG file using Inkscape, do the following. Open
the SVG file in Inkscape. Add one or more images to the file in the normal way. Then select menu "Extensions", then select "Images", then select
"Embed Images". Then click "Apply". Then save the SVG file in the normal way. Now the images are embedded into the SVG file. If all the images
in an SVG file are embedded, then Delft-FEWS only needs the SVG file itself (and not the original image files) for displaying in the scada display.

Controlling the resizing behaviour of an svg document within the scada display

In an svg file in the root element use the following attributes to control its resizing behaviour: width, height, viewBox, preserveAspectRatio.

If only width and height are present, then the svg document gets an absolute size, appears in the top-left corner of the display and is never
resized (not even when the display window is resized). This means it can be cut off when the display window is too small.
If only viewBox and preserveAspectRatio are present, then the viewBox determines the rectangular region of the svg document that is
drawn in the display window (the coordinates for the viewBox edges are the same as the coordinate system used within the svg file,
usually the coordinates are in pixels). The preserveAspectRatio determines how the drawn region is sized and aligned within the display
window. In this case the svg document is automatically resized when the display window is resized.

Examples:

Examples of the resizing behaviour for different combinations of these attributes in the svg root element:

The svg document is scaled to fit the display window and the aspect ratio is preserved.
The svg document is scaled and stretched to fill the window (aspect ratio is not preserved).
Only the region with coordinates 0 <= x <= 1200 and 0 <= y <= 700 pixels is shown. The svg document is not resized when the display window is resized.
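As an illustrative sketch (the attribute values are only examples and are not prescribed), the three cases above could correspond to root elements such as:

<svg viewBox="0 0 1200 700" preserveAspectRatio="xMidYMid meet"> ... </svg> (scaled to fit, aspect ratio preserved)
<svg viewBox="0 0 1200 700" preserveAspectRatio="none"> ... </svg> (scaled and stretched to fill the window)
<svg width="1200" height="700"> ... </svg> (absolute size, never resized)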

Background information:
The width and height attributes in the root svg element of an svg file determine the size of the viewport, in other words the size of the svg
document when it is viewed. The coordinates of the objects in the svg file are specified in user space, which is different from the viewport space.
The viewBox attribute in the root svg element defines the rectangle in user space that should be mapped to the edges of the viewport. The
preserveAspectRatio attribute in the root svg element determines how this mapping takes place. This mapping uses one of three possible
methods: "meet", "slice" or "none". See [Link] and
[Link] for more detailed information.

Determining the rotation anchor point for an SVG object in user space coordinates

To determine the rotation anchor point for an SVG object in user space coordinates using Inkscape, do the following. Open the SVG file in
Inkscape. Select the object. Select menu "Edit", then select "XML Editor". Then in the window that opens, in the box on the right, look for the
important attributes (e.g. "x", "y", "width", "height", "transform" or "d") and use their values to calculate the required anchor point. E.g. to rotate a
"rect" svg object with attributes width="200" height="200" x="500" y="300" transform="translate(50 0)" around its center, use anchor point
coordinates (x, y) = (650, 400).
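To make the calculation for that example explicit: the anchor x equals x + translate-x + width/2 = 500 + 50 + 200/2 = 650, and the anchor y equals y + translate-y + height/2 = 300 + 0 + 200/2 = 400.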

Aligning text within svg text objects

By default text in an svg text object is left-aligned and the x and y coordinates of the object denote the upper-left corner of the object. To
right-align text in an svg text object, add the following attribute to the text element:

text-anchor="end"

The entire element would then e.g. be:

<text x="..." y="..." text-anchor="end">%LASTVALUE(numberFormat1)% meter</text>

When an object is right-aligned, then the x and y coordinates of the object denote the upper-right corner of the object. The attribute text-anchor
can also have the values "start" or "middle". To create multiple pieces of text with different alignments, use separate text objects.

When using right-alignment, the decimal separator for number values can be aligned by using the following number format:

#.00

Here "#" means one or more digits before the decimal separator and ".00" means that always two decimal places are shown (the number is either rounded or padded with zeros).

Export maps from ArcGis as svg files

In ArcGis it is possible to export a map as an svg file. Go to "File > Export map" and select *.svg as the export file type.

Reduce the size of svg files

Within Inkscape the size of an svg file can be reduced by saving it as a compressed svg file (*.svgz) or as a plain svg file. Also cleaning up the file by
using the "Vacuum Defs" option in the File menu makes the file significantly smaller.

16 Modifier display
Overview
The [Link] is used to configure the modifier display. The modifier display is used in an IFD-environment to
manage modifiers.

Contents
Overview
Contents
Schema
Create modifier buttons
TimeSeriesDisplayConfig

Schema
Below the schema of the modifiers-display is shown.

Create modifier buttons

In the modifiers panel, modifiers can be created by pressing the create modifier button and selecting a modifier type. A shortcut for creating
modifiers is to use shortcut-buttons. The display below shows an example.

Besides the create modifier button two buttons are shown: a button with the text "wechng" and one with the text "aeschng". Both buttons
can be used to create a modifier directly. For example, after pressing the wechng-button a temporary wechng-modifier will be created.
Pressing this button is the same as pressing the create-mod button and selecting "wechng".

To define for which modifier types a shortcut-button should be created, a list of modifier-ids should be listed in the [Link].

A configuration example is given below.

<modifierId>wechng</modifierId>
<modifierId>aescchng</modifierId>


TimeSeriesDisplayConfig

This option can be used to adjust the GUI of the TimeSeriesModifiers.

showTimeSeriesModifiersButton

A timeseriesmodifier can be shifted in time by using arrow-buttons. By default these buttons are not visible because the default value of this option
is false.

However when this option is enabled, green arrow buttons appear next to the table- and graph-button.

showTablePanel

When the display to create timeseriesmodifiers is started, by default a table and a graph are shown. This can be adjusted with this
option. When this option is set to false, the display will by default show only the graph.

defaultOperationType

The option defaultOperationType can be used to define which operation type (add, subtract, etc.) will be selected after startup of the display.
When no option is defined, the operation type timeseries will be selected.

incrementOperationTypeAdd
The option incrementOperationTypeAdd will be used to define the increment for the operation types add and subtract when using the spinner-button
to increase or lower the value.
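A sketch of what this part of the configuration could look like (element names are taken from the option names above; the exact nesting in the modifier display schema may differ):

<timeSeriesDisplayConfig>
<showTimeSeriesModifiersButton>true</showTimeSeriesModifiersButton>
<showTablePanel>false</showTablePanel>
<defaultOperationType>add</defaultOperationType>
<incrementOperationTypeAdd>0.1</incrementOperationTypeAdd>
</timeSeriesDisplayConfig>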

08 Mapping Id's flags and units

This chapter is not finished yet. Further content is needed

Introduction
DELFT-FEWS allows open integration with various external data sources and modules. Many of these will require identification of locations and
parameters using a native definition. DELFT-FEWS allows internal location Id's, parameter id's, flags and units to be mapped to external id's.

For each external data source, or each module, a different set of ID mappings may be defined. In specific cases one mapping may be used when
exporting data to the module and a different mapping when importing data from the module.

Three types of mapping can be defined;

IdMaps definition for mapping location ID's and Parameter ID's


FlagConversions definition for mapping data quality flags
UnitConversions definition for mapping data units

Contents
01 ID Mapping
02 Unit Conversions
03 Flag Conversions

01 ID Mapping

IdMaps
IdMaps are defined to map internal location and parameter ID's to external location and parameter ID's. The configuration of these can be done in
two ways. In the first, separate mappings can be defined for the locations and for the parameters. Although this is the most efficient method, it is
not suitable in all cases, as specific locations may require a different mapping. In the second, the mapping is defined on
the basis of the unique combination of location/parameter. Each IdMap configuration may only use one method of defining mappings to avoid
ambiguity.

Each IdMap configured must be registered in the IdMapsDescriptors configuration (see Regional Configuration). The Id used in registering the
IdMap is the same as the name of the configuration. When available on the file system, the name of the XML file for configuring an IdMap called
for example ImportNWP may be:

ImportNWP 1.00 [Link]

ImportNWP Fixed file name for the ImportNWP IdMap.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 155 Root element of the IdMaps configuration

parameter

Mapping of internal to external parameters. Multiple entries may exist.

Attributes;

internal : Internal parameter Id (must be in the parameters configuration)


external : External parameter Id (free format string)
externalQualifier : Optional additional qualifier to uniquely identify a parameter (for use in conjunction with the EA-XML schema only).

location

Mapping of internal to external location Id's. Multiple entries may exist.

Attributes;
internal : internal location Id (must be in the location configuration )
external : external location Id (free format string)

map

Parameter/location mapping using unique combination. Multiple entries may exist.

Attributes;

internalParameter : Internal parameter Id (must be in the parameters configuration)


internalLocation : internal location Id (must be in the location configuration )
externalParameter : External parameter Id (free format string)
externalLocation : external location Id (free format string)
externalParameterQualifier : Optional additional qualifier to uniquely identify a parameter (for use in conjunction with the EA-XML schema only).
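An illustrative sketch of the two mapping styles (all ids are hypothetical; a single IdMap configuration should use only one of the two styles):

<idMap>
<parameter internal="H.obs" external="HOBS"/>
<location internal="H-2001" external="2001"/>
</idMap>

<idMap>
<map internalParameter="H.obs" internalLocation="H-2001" externalParameter="HOBS" externalLocation="2001"/>
</idMap>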

02 Unit Conversions

UnitConversions
UnitConversions are defined to map internal units to external units. For each unit to be converted, the conversion method can be defined. This
may be a simple multiplication (e.g. feet to metres), possibly combined with an increment (e.g. °F to °C). The converted value is (inputUnitTypeValue *
multiplier) + increment. In DELFT-FEWS the convention for storing level data is that this is with reference to the local datum. If the external unit
specifies the datum is global, a Boolean flag can be used to indicate the data should be converted on import.

Each UnitConversion configured must be registered in the UnitConversionsDescriptors configuration (see Regional Configuration). The Id used in
registering the UnitConversion is the same as the name of the configuration. When available on the file system, the name of the XML file for
configuring a UnitConversion called for example NWPUnits may be:

NWPUnits 1.00 [Link]

NWPUnits Fixed file name for the NWPUnits UnitConversions.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 156 Elements of the UnitConversions configuration.

addInverses

When set to true, the inverse unit conversions (from output to input unit) are automatically generated and added to this set of unitConversions.

inputUnitType

Definition of the input unit. Depending on the conversion being used for import or for export this may be the unit as defined in DELFT-FEWS or
that as defined in the external data source.

outputUnitType

Definition of the output unit. Depending on the conversion being used for import or for export this may be the unit as defined in DELFT-FEWS or
that as defined in the external data source.

multiplier

Multiplier to be applied to data on import/export.

incrementer

Value to be added to data on import/export.

convertDatum

Boolean flag to indicate if the data is to be converted on input to the local reference level. If this value is true, and the parameter to be imported
supports datum conversion (see Parameter definition in Region Configuration), the "z" value of the location is subtracted from the data (see
Location definition in Region Configuration).
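A sketch of a single conversion entry, here from feet to metres (element names follow the descriptions above; the exact enclosing structure of the UnitConversions file may differ):

<unitConversion>
<inputUnitType>ft</inputUnitType>
<outputUnitType>m</outputUnitType>
<multiplier>0.3048</multiplier>
<incrementer>0</incrementer>
<convertDatum>false</convertDatum>
</unitConversion>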

03 Flag Conversions

Flag Conversions
FlagConversions are defined to map internal quality flags to external quality flags. For each flag to be converted a conversion can be defined, but a
default flag may also be given to ensure the exported or imported data carries a flag. A flag to identify missing values must also be configured.

Each FlagConversion configured must be registered in the FlagConversionsDescriptors configuration (see Regional Configuration). The Id used in
registering the FlagConversion is the same as the name of the configuration. When available on the file system, the name of the XML file for
configuring a FlagConversions called for example NWPFlags may be:

NWPFlags 1.00 [Link]

NWPFlags Fixed file name for the NWPFlags FlagConversions.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

Figure 157 Elements of the FlagConversions configuration

flagConversions

Root element for defining a flagConversion. For each element an inputFlag - outputFlag tuple must be defined.

inputFlag

Definition of the input flag

name

Optional name of the flag

value

Value of the flag

description

Optional description of the flag

defaultOutputFlag

Default output flag assigned to data imported or exported.

missingValueFlag

Flag used to identify missing values in data to be imported or exported.
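A sketch of a single inputFlag - outputFlag pair plus the default and missing value flags (all flag values are hypothetical; the exact element nesting may differ):

<flagConversion>
<inputFlag>
<name>suspect</name>
<value>2</value>
</inputFlag>
<outputFlag>
<value>3</value>
</outputFlag>
</flagConversion>
<defaultOutputFlag>
<value>0</value>
</defaultOutputFlag>
<missingValueFlag>
<value>-999</value>
</missingValueFlag>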

09 Module datasets and Module Parameters

This chapter is not finished yet. Further content is needed

Introduction
DELFT-FEWS allows module datasets to be defined for external forecasting modules. These datasets can then be managed through the
configuration management of DELFT-FEWS. This also allows multiple versions of module datasets to be defined (e.g. with adapted module
structure or module parameters). When constructing what-if scenarios, an alternative version to the default can be selected to explore the impact
this has on the results of forecasting modules.

Definition of datasets and parameters is not a requirement for the use of external forecasting modules. Module datasets and module parameters
are only used in DELFT-FEWS by the General Adapter module. Both can only be exported to the external module. Import of module datasets
and parameters is not possible.

Two methods are available for managing module datasets and parameters:

moduleDataSets; in the ModuleInstanceDataSets table in the database of the configuration or in the ModuleDataSets directory.
moduleParameters; in the ModuleParameters table in the database of the configuration, or in the ModuleParameters directory.

Contents
01 Module Datasets
02 Module Parameters

01 Module Datasets

Module Datasets
Module datasets are defined to be exported to a module directory prior to running the module. The module dataset is identified by the
ModuleInstanceId of the General Adapter configuration in which it is to be used.

The module dataset is not an XML file, but a ZIP file containing all native module data. This is exported by the General Adapter to a directory
specified in the General Adapter configuration (see Module Instance configuration section). If the external module requires a directory structure,
then this information should be contained in the ZIP file, relative to the directory specified as export directory.

When available on the file system, the name of the ZIP file for configuring a module dataset, for example for the ISIS model of the Eden used in
the Eden_Historical General Adapter module, may be:

Eden_Historical 1.00 [Link]

Eden_Historical Fixed file name for the Eden_Historical module dataset.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

02 Module Parameters

Module Parameters
Module Parameters can also be managed by DELFT-FEWS similar to management of Module datasets. The difference is that where the module
datasets are handled as ZIP files, with no actual interaction between the dataset and DELFT-FEWS, module parameters can be defined in a
native DELFT-FEWS format and exchanged with the external module in the published interface format through the General Adapter module. A
prerequisite for this exchange being meaningful is that the module adapter supports this format of the published interface, and can transform this
into the native module format.

As in the module datasets, module parameters are defined in a configuration where the name is the same as the moduleInstanceId of the General
Adapter module it is to be used in (though a different name may also be called by the General Adapter- see the moduleInstance configuration
section).

When available on the file system, the name of the XML file for configuring Module Parameters for example for the Eden_Historical module may
be:

Eden_Historical 1.00 [Link]

Eden_Historical File name for the parameters for Eden_Historical.

1.00 Version number

default Flag to indicate the version is the default configuration (otherwise omitted).

The structure of the module parameter XML configuration is the same as that applied in the Published Interface format. See the relevant
documentation for the definition of the schema and required configuration.

10 Setting up an operational system

This chapter is not finished yet. Further content is needed

Introduction

In this chapter some additional configuration is described which is required in providing DELFT-FEWS as an operational forecasting system.
Only the elements relevant to DELFT-FEWS as either a stand-alone system or as an operator client are described. Configuration of the Master
Controller as the hub of a live forecasting system is described in a separate document.

The items to be configured are predominantly the root configuration files. These determine how a local client is started (i.e. if it is in stand alone
mode or if it is an operator client), as well as for the live system the details on the Master Controller to connect to and options for
synchronisation.

The root configuration items include;

clientConfig
logConfig
RollingBarrel_Database (this file should not be changed and is not described)
synchConfig
synchChannels
synchProfiles

This chapter also describes the procedure in setting up scheduled current forecasts in the live system and the procedure for setting up
enhanced forecasting.

The files described in this section require specialist knowledge of DELFT-FEWS. Making changes in these files will influence
the behaviour of the system significantly.

Contents
01 Root Configuration Files
02 Launching FEWS
03 Setting Up Scheduled Forecasts
04 Setting Up Event-Action Configuration
05 Setting up sending emails on events
06 Checklist for creating a live system from a stand alone system
07 Setting up alerts for the Alarmmodule

01 Root Configuration Files

Root configuration files


Root configuration files define the behaviour of DELFT-FEWS on the local machine. These files are not synchronised in the live system environment,
nor are they available in the database. The files must be installed locally with the DELFT-FEWS system.

clientConfig

The clientConfig file determines if the instance of DELFT-FEWS is to run as a stand alone system, or if it is to connect to the master controllers
defined below.
Since 2011.01 the clientConfig file is no longer required for stand alone; stand alone is the default when the clientConfig file is missing.

Figure 158 Elements of the clientConfig configuration

clientType

Definition of the client type. Enumeration of options includes;

Operator Client
Stand Alone

LogConfig

To be completed

synchConfig

Looks something like this for an OC:

<fews-master-config xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<queueconnection>
<factory jndi="ConnectionFactory"/>
</queueconnection>
<defaultMcId>MC00</defaultMcId>
<mc id="MC00">
<jndicontext prefixes="[Link]:[Link]" factory=
"[Link]" provider="jnp://localhost:1099"/>
<queue>
<root jndi="TEST/MC00/"/>
<synch jndi="External/JMSQueue/OCIncoming" timeout="10"/>
</queue>
</mc>
<synchronisation>
<messaging maxrecords="1000" maxlobdata="30000000"/>
<processor maxlistsize="500000"/>
<schema location="nl/wldelft/fews/master/data/synchdata/synchronisation_schema.xsd"/>
</synchronisation>
<login timeout="10"/>
</fews-master-config>

The synchConfig does not normally require editing.

The only setting that may demand editing is the <login timeout="10" />, which controls the timeout for a login attempt of the OC on the MC (in
seconds). This element may be absent, in which case the timeout is 10 seconds.
It may be necessary to extend this timeout if the JMS server is very busy (very many clients starting up and synchronising at the same time, e.g.
when all the PCs for a workshop are starting up at the same time).
Note: the xml config can only extend the timeout from the default 10 seconds; settings less than 10 seconds are ignored.

synchProfiles

The file [Link] contains several different profiles for fine-grained control over the synchronisation with the database.

The following profiles are available:

Full Profile for synchronising fully between the Operator Client and the Master Controller

Minimal Profile for synchronising minimal between the Operator Client and the Master Controller

Custom Customizable synchronisation between the Operator Client and the Master Controller

ConfigManager Synchronisation profile for the Configuration Manager

FS Synchronisation profile for the Forecasting Shell Server

From version 2010.01 onwards, it is possible to get an overview of the active users. This overview is available in both the Operator Client and the
Admin Interface. In order to make this functionality work, the file [Link] has to be configured properly in each of the Operator Clients.
Before version 2010.01, the file [Link] would typically contain several profiles containing the following snippet:

<channelId>[Link]</channelId>
<schedule>
<single/>
</schedule>
<timeOut>10000</timeOut>


From version 2010.01 onwards, it is recommended to replace this snippet for the profiles 'Full', 'Minimal' and 'Custom' (not for ConfigManager) by
the following:

<channelId>[Link]</channelId>
<schedule>
<continuous>
<period divider="1" unit="minute" multiplier="3"/>
<priority>low</priority>
</continuous>
</schedule>
<timeOut>10000</timeOut>


This defines that the information needed for these overviews is synchronized every three minutes.

See also User Administration - Active Users

synchChannels

To be completed

[Link] file

[Link] file
Module Name: [Link]

Description: Sets global properties for the application

Why to Use? To override default values and to set global variables

Where to Use? In the [Link] file in the root dir of the region

Config Example [Link]

Screendump: na

Outcome(s): Updated behavior of the system

Remark(s): Use with caution

Available since: DelftFEWS200701

Overview

The [Link] file has two main uses:

1. Define global variables that can be used within the (XML) configuration files (e.g. if you define NAME_OFPROGRAM=c:\[Link] in the
[Link] file you can use the variable $NAME_OFPROGRAM$ in the configuration files).
2. Set software options.
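A minimal sketch of such a file (all paths and values are purely illustrative):

# global variable, referenced in the XML configuration as $NAME_OFPROGRAM$
NAME_OFPROGRAM=c:/models/run_model.exe
# software options (see the list below)
localDatastoreFormat=firebird
timeSeriesDefaultCacheSizeMB=50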

Configuration

A list of available options is given below (all options are case sensitive):

For each option the name is listed, followed by the expected value and the default value (where given), and by a description (where available).

localDatastoreFormat (value: firebird; default: msaccess)
Use firebird for the local datastore (instead of msaccess). Useful if you have large (> 2GB) datastores. Notice that in case of using firebird the localDataStore should be located on the physical hard disk and not on the network.

ARCHIVE_IMPORT_PATH

ARCHIVE_EXPORT_PATH

REGION_HOME

USE_CURRENT_SIMULATED_WITH_COLD_STATE_SELECTION (boolean; default: false)
By default current simulated data of a module instance is visible during a run until the running forecast has started a new chain (i.e. selected a cold state) for that module instance. Setting this property to 'true' allows the use of a cold state selection in combination with timeseries of a current (approved) simulated forecast run.

allowParameterMergeForNonSample (boolean; default: true)

timeSeriesDefaultCacheSizeMB (e.g. 50; default: 10 Mb)
Allocated memory for caching time series. More memory increases the performance of handling time series (e.g. in graphs or the spatial display).

timeSeriesTaskRunCacheSizeMB (e.g. 50; default: 10 Mb)
Allocated memory for time series that are created during task runs (at a forecasting shell).

timeSeriesWriteCacheSizeMB (e.g. 50; default: 10 Mb)
Temporary time series (synchLevel 9) are written to the database when the size of the write cache is not large enough to hold them in memory. This will slow down the system. By increasing the size of the timeseries write cache this can be avoided. In general this is only needed if you process large amounts of gridded data.

DEFAULT_EXPIRY_DAYS (any number of days; default: 10 days)
Sets the default expiry time for timeseries in the database. You can override this when storing individual timeseries by specifying the expiry time in the timeSeriesSet.

DEFAULT_EXPIRY_DAYS_LOGEVENT

DEFAULT_EXPIRY_DAYS_LOGEVENT_MANUAL

alwaysAllowWriteSimulatedBelongingToAlreadyRunnedModuleInstance

alwaysAllowDummyModuleInstanceRuns

SCHEMALOCATION

PI_SCHEMALOCATION

UseLenientPiTimeSeriesParser

EXPLORER_SYSTEMCAPTION (any string value)
Option to set the window title of the FEWS Explorer (i.s.o. [Link]). In [Link] the element <systemCaption> underneath <systemInformation> should contain a reference ($EXPLORER_SYSTEMCAPTION$) to this global property variable.

IDENTIFIER (any string value)
MC identifier to be used in the EATimeSeriesWriter.

NIMROD_DUMP_VALUES

checkFileWillExistOnCaseSensitiveFileSystem (boolean; default: FALSE)

GA_shiftExternalTimesToYear2000 (boolean; default: false)
This setting is used to export data from the General Adapter always starting in the year 2000. True means this setting is used. This overcomes the issue with running FEWS after the year 10.000 which caused problems. Internally the dates are handled normally.

REPORT_HTML2PDF_PROGRAM

REPORTS_LEFTCOLMCID (MCID)
Reference to MCID (e.g. MC00 or MC01) to put benchmark values in the left or right column of a specific system status report.

REPORTS_RIGHTCOLMCID (MCID)
Reference to MCID (e.g. MC00 or MC01) to put benchmark values in the left or right column of a specific system status report.

mapLayersCacheDir (any local directory)
Local directory to keep an up-to-date cache of maplayers which are retrieved from a network drive. Allows central maintenance of MapLayer files without storage in the central database.

LANGUAGE (string; default: EN)
Set LANGUAGE=REGIONAL to automatically use the selected language in your Windows Regional Settings. To fix the language, you can also define a specific language, like NL (Dutch). The default language is EN (English).

COUNTRY

IP

PREFIXED_IP

NESTED_IP

JdbcServerPort (integer; default: 2000)
IP port number on which the JDBC server will listen after startup.

T0 (string)
Date/time to set the system time to (only available for stand-alone systems). The date/time should conform to the pattern as configured for the system (in the Explorer xml).

maxConfigFileSizeMB (real; default: unlimited)
Maximum size of configuration files that can be imported into the localDataStore using the ConfigManager.

runInLoopParallelProcessorCount (integer; default: 1)
Maximum nr of cores (CPU's) Delft-FEWS can use to run ensemble loops in parallel. Set to 100 to use all available cores.

doCaseConsistencyCheckOnMc (boolean; default: true if localDatastoreFormat is MsAccess, false if Firebird)
Check new config files for case insensitive matches on the MC, to prevent config corruption in (case-insensitive) MsAccess localDataStores.

tempDir (string; default: the Windows default)
Sets the temp dir to something other than the Windows default, e.g. =F:/Temp.

localDataStorePoolDir (any local directory)
Sets the dir where a pool of max. 10 localdatastore directories is stored, to be used by OCs on Citrix-like systems. Useful in case of using firebird as localdatastoreformat, which requires the localdatastore to be on the physical disk drive (not on the network!). For example: localDataStorePoolDir=C:/FewsLocalDataStores creates C:\FewsLocalDataStores\<Region>\localDataStore0, ...1, ...2 etc.

hideExternalHistoricalAfterSystemTime (boolean; default: false)
If true, only external historical data prior to T0 is visible in the timeseries display. Any existing external historical data after T0 will not be shown.

Known issues

1. All options are case sensitive


2. Some of the options listed here are deprecated

02 Launching FEWS
How to setup the launch of Delft-FEWS
There are a number of options when configuring the launch of your Delft-FEWS application. The simplest is to double-click on the appropriate
executable in the bin directory, e.g. C:\FEWS\bin\Your_Region.exe - this will launch FEWS directly.

In the bin there will also be a file of the same name with the extension .jpif (e.g. Your_region.jpif). Since the executable is generic (except for the
name) this file contains all the information required to launch your application.

..\jre The location of the JRE folder

-mx512m Amount of memory available for FEWS

-cp

$JARS_PATH$

[Link] Application type*

Anglian_SA Name of folder containing the configuration

*If wanting to use the config manager use line: [Link] or for the launcher use
[Link]

It is possible to add JAVA runtime options to the .jpif file.

..\jre JRE Folder

-Xms768m Set initial java heap space

-Xmx1024m Set maximum java heap space

-[Link]=true turns on/off all usage of Direct3D

-[Link]=true Use the LAN system proxy settings (default FALSE)

-cp

$JARS_PATH$

[Link]

$USER_HOME\Application Data\FEWS\Anglian_SA Folder name of the configuration, of which a working copy will be copied and used in the defined
directory. The base directory should always be at the same level as the JRE and BIN directories.

Notice that JAVA system properties should be defined after the -D keyword.

How to launch Delft-FEWS using the launch menu


Organisations which have multiple instances of Delft-FEWS (for example multiple regions or online/shadow type systems) may wish to use the
Delft-FEWS launcher.

You must create a folder in the root directory (same level as the bin and jre) which will contain the launcher configuration files (called for example
FEWSLauncher). You will need to have an executable and jpif in the bin directory - you start the launcher by double clicking on the executable.
But first we need to set up the launcher config files...

Firstly you can configure the password protected level of access required. Please note that this is not a highly secure method of password
protection but is meant simply to restrict access to those who require it.

This is done using the [Link] file (details below). The passwords are contained in the binary [Link] file.

Once you have entered the correct password you will be shown the appropriate screen from which you can choose the FEWS application you
wish to launch.

This is configured using the [Link] (details below). You can also display your organisation's logo or a picture of your choice by adding an
image in the FEWSLauncher directory called [Link] of size 455 x 540 pixels.

[Link]
The [Link] file needs to follow the diagram in the following schema.

This [Link] contains the actions and roles required. The actions are linked directly to the [Link]. The 'role' describes which users have
access to which actions. For example a forecaster might have access to the explorer only, while a system administrator may have access to
the admin interface and configuration management interfaces. For an example file click here. This file links actionIds to user roles.

The following actionIds are allowed:

ViewReports
LaunchFewsClient
LaunchConfigManager
LaunchAdminInterface
Upload OnLine

In the example, the launcher is setup like the following:

The role of Forecaster has the privileges to run the fews client (LaunchFewsClient).
The role of ConfigManager is allowed to run fews, to run the configmanager, and upload files (LaunchFewsClient, LaunchConfigManager,
Upload OnLine).
The SystemManager is allowed what the configmanager can do as well as login to the admin interface (LaunchFewsClient,
LaunchConfigManager, LaunchAdminInterface, Upload OnLine).

How to create a [Link] file from a [Link] is specified in the privileged section, see
[Link]

[Link]
This file contains the actions which are accessed through the launcher. The id links with the id given in the [Link]. You can see from the
schema that the action can link to a web page (for example the admin interface) or to a java application (fews explorer or config manager). You
will notice similarities between the attributes of the JavaAppType and those found in the .jpif file in the bin directory. An example file can be seen
here

The jvm Option gives you the chance to tweak the heap size used by the java virtual machine. Use -Xms for the initial java heap size (e.g.
-Xms256m), -Xmx for the maximum heap size (e.g. -Xmx1G). The syntax is then as follows:

jvmOption="-Xmx1G -Xms256m"

03 Setting Up Scheduled Forecasts

Setting up scheduled forecasts


When using a live forecasting system, a scheduled forecast can be established simply through using the Manual Forecast display in the Operator
Client.

DELFT-FEWS will, however, not allow a manually submitted forecast to be set as the "current" forecast. Scheduling of current forecasts should
only be done by suitably authorised users, and should follow a carefully defined scheduling plan. These users must have access to the admin interface tool
(see Admin Interface manual).

The procedure for establishing a scheduled current forecast is;

Start an Operator Client. Select the workflow to be scheduled. Select the option "Scheduled Forecasting". Set the Start Time and End
Time properties as required, set the repeat time and the ShiftT0.
Submit the forecast to the Master Controller by clicking on run.
Go to the Administrator Interface. Select the tab Forecast Tasks and the Scheduled Tasks item. The forecast run just scheduled should
be available in the list.
Click on Details. This will open a new web page with relevant details on the forecast run.
Select "Download Properties". Save the XML file to a suitable location/name.
Open the XML file. Change the attribute
<makeForcastCurrent>false</makeForcastCurrent>
to
<makeForcastCurrent>true</makeForcastCurrent>
Save the XML file.

Cancel the forecast just submitted.
Return to the list of scheduled forecasts in the Admin Interface
Select "Schedule new Forecast"
Enter a relevant description (this will appear as a description of the run).
Enter a tag if this forecast is to be enhanced (see next section)
Select workflowId. This should be the same as the original forecast submitted.
Specify Failover behaviour. This is only relevant when two master controllers are available running in Duty/Standby. If this
workflow is to run on one Master Controller only, and should be replicated on the other then this item should be set to true. It
should then not be scheduled on the other Master Controller. If it is set to False, then the run should be scheduled on both
Master Controllers separately.
Enter details on TaskDue time and TaskRepeat time
Select the file to upload and use the browse button to load the XML file just defined.
Select Submit.
Confirm results in the Scheduled Forecasts list.

04 Setting Up Event-Action Configuration

Setting up event-action configurations


The live forecasting system can be scheduled to run workflows as a consequence of a log event generated by the system. This facility is used to
enhance forecasting, or to export current forecast data on a new forecast becoming available.

This can only be done by suitably authorised users, and should follow a carefully defined scheduling plan. These users must have access to the
admin interface tool (see Admin Interface manual).

A run that is to be enhanced must have already been submitted to the Master Controller and be available in the Scheduled Forecasting list. The run
is identified by its tag available in that list.

The actions that can be defined are;

Resuming a suspended task


Suspending a task
Changing of scheduling interval of a task (and resuming the task if required)
Submitting a single run of a task

For each an appropriate action configuration must be defined.

The procedure in defining an action config is.

Open the Admin interface.


Select the Workflow and FSS's item
Select Event and Action Configuration .
Select Upload New Action Configuration . This is an XML file that describes the action to be taken. It includes the tag that is defined in the
Scheduled Forecast list. This must be identical (case sensitive) for a match to be made. Examples for the four types of actions are given
below.
Submit the Action Config.
Select Event Action Mappings item
Select Create New Action Event Mapping
Enter the Event Code and select the Action Configuration Id for the configuration just created.
Select submit.

NOTE: when deleting action configurations, the Action Event Mapping must be deleted first due to relations in the database.

Example of Action Config to resume a suspended task

<?xml version="1.0" encoding="UTF-8"?>


<actionxml type="task">
<enhance>
<tag name="AIRE_FORECAST"/>
<resume/>
</enhance>
</actionxml>

Example of Action Config to suspend a task

<?xml version="1.0" encoding="UTF-8"?>
<actionxml type="task">
<enhance>
<tag name="AIRE_FORECAST"/>
<suspend/>
</enhance>
</actionxml>

Example of Action Config to enhance a task

<?xml version="1.0" encoding="UTF-8"?>


<actionxml type="task">
<enhance>
<tag name="EDEN_FORECAST"/>
<repeatinterval interval="3600"/>
</enhance>
</actionxml>

Example of Action Config to run one instance of a task

<?xml version="1.0" encoding="UTF-8"?>


<actionxml type="task">
<oneoff>
<cardinaltime interval="900" reference="2004-01-01T[Link].000+00:00"/>
<tag name="EXPORT_CURRENT"/>
</oneoff>
</actionxml>

Note: a one-off task requires a cardinal time step and a reference time to establish a correct T0 for the run. It also needs the "template task" with
the relevant tag scheduled (in suspended mode) as a single/one-off task.

05 Setting up sending emails on events


First see here how to make an "Action Config to run one instance of a task". This couples a log event code to an action config. When a log
message with the configured event code is logged and reaches the central database, the action config is triggered. The tag in the action config
has to refer to a MC:SystemAlerter task that can send emails.

Then use the info here to create a new MC:SystemAlerter task. For the new task use the tag from the action config and make it a one-off task.
The xml file that needs to be uploaded contains the settings for the emails (see [Link] schema). In here it is possible to use the tag
%LOG% in the body of the email. This tag will then be replaced by the logmessage(s) that triggered sending the email. An example:

<alerts>
<emailalert>
<recipients>
<recipient email="[Link]@[Link]"/>
</recipients>
<configuration>
<smtp host="[Link]"/>
</configuration>
<subject>
<subjectline content="The subject line of the email to send"/>
<substitutions/>
</subject>
<body value="%LOG%"/>
<attachments/>
</emailalert>
</alerts>


When scheduling the task, set the start time somewhere in the future, otherwise the task runs immediately. Then search the new task in
"scheduled tasks" and set it to suspend. When the action config is triggered, then it copies the suspended task and runs the copy once.

06 Checklist for creating a live system from a stand alone system
There are a number of configuration aspects which must be considered when moving from a stand alone environment (i.e. workflows are
executed on your local PC) to a live system (i.e. workflows are executed on a forecasting shell machine).

Please ensure that these steps are followed to avoid problems in a live system environment

1. Synch levels

The synch levels determine how data is synchronised between the components of the live system. Please check all timeseries sets are assigned
a synch level. Note that when the synchlevel is omitted, it defaults to 0, so only for scalar forecasting timeseries the synchlevel can optionally be
left out.

The different synch levels which should be assigned to time series sets are described here in section A.5.

2. Maintenance workflows

There are a number of maintenance tasks which should be scheduled on the live system through the admin interface (this is described in detail
here). These include the rolling barrel workflow for the forecasting shell machine.

This workflow should be created to include two "dummy" module instances:

<!--Delete records pending deletion-->


<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>MarkedRecordManager</moduleInstanceId>
</activity>
<!--Rolling barrel workflow-->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>RollingBarrel</moduleInstanceId>
</activity>

]]>

These modules do not require configuration in the modules directory, but they should be registered in the ModuleInstanceDescriptors file, i.e.:

<moduleInstanceDescriptor id="RollingBarrel">
<description>Ensures the forecasting shell performs the rolling barrel on exit</description>
<moduleId>RollingBarrel</moduleId>
</moduleInstanceDescriptor>

<moduleInstanceDescriptor id="MarkedRecordManager">
<description>Records pending deletion</description>
<moduleId>MarkedRecordManager</moduleId>
</moduleInstanceDescriptor>

These modules should also be included in the modules file (systemConfigFiles) where the link is made to the appropriate class:

<moduleDescriptor id="RollingBarrel">
<description>Export Reports from Local Datastore</description>
<className>[Link]</className>
</moduleDescriptor>

<moduleDescriptor id="MarkedRecordManager">
<className>[Link]</className>
</moduleDescriptor>

07 Setting up alerts for the Alarmmodule

Contents
Contents
Introduction
Schedule fixed alert

Schedule template alert

Introduction
Starting with release 2011.02, a task for the MC_SystemAlerter workflow can be configured to send an alert to the Alarmmodule, developed by
Imtech for the IWP system. The alert sending has been implemented as an invocation of the Alarmmodule via a SOAP call (also known as a
Webservice call). This page describes the FEWS-related part of the implementation.

Currently these tasks can only be scheduled using the "Upload task(s) from file" functionality in the Admin interface. The uploaded XML
configuration should conform to the [Link] schema. For examples see below. The Alarmmodule alert part is defined in the referenced
[Link] schema.

Schedule fixed alert


In the fixed alert scenario, the alert is entered as part of the taskproperties of the scheduled MC_SystemAlerter task. This can be used to send
regular or one-off alerts and will probably mostly be used for testing purposes.

<taskList xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<task>
<taskStatus>S</taskStatus>
<runOnFailOver>false</runOnFailOver>
<taskProperties>
<description>Test IWP_0_TEST Alarm</description>
<workflowId>MC_SystemAlerter</workflowId>
<taskSelection>
<scheduledTask>
<schedulingPeriod>
<startDate>2010-12-20T[Link].000Z</startDate>
<endDate>3010-12-20T[Link].000Z</endDate>
</schedulingPeriod>
<schedulingInterval unit="minute" multiplier="30"/>
</scheduledTask>
</taskSelection>
<forecastPriority>Normal</forecastPriority>
<makeForcastCurrent>false</makeForcastCurrent>
<makeStateCurrent>false</makeStateCurrent>
<mcSystemAlerter>
<alerts>
<alarmModuleAlert>
<webserviceURL>
[Link]
<alarmDiagLine code="IWP_0_TEST" level="1" source="MC01" description="Test
Alert from FEWS"/>
</alarmModuleAlert>
</alerts>
</mcSystemAlerter>
</taskProperties>
</task>
</taskList>

Schedule template alert


The alert with the template is intended to be used as a suspended task which can be triggered through event-action configurations to trigger a
single task. In that case, the log messages triggering the alert will be added to the task if the following tags are used in the template:

%LOG% will be replaced by the message part of log entry


%EVENT_CODE% will be replaced by the event code of the log entry
%MC_ID% will be replaced by the MasterController ID creating the alert.

<taskList xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<task>
<taskStatus>S</taskStatus>
<runOnFailOver>false</runOnFailOver>
<taskProperties>
<description>IWP_0_Alarm</description>
<workflowId>MC_SystemAlerter</workflowId>
<taskSelection>
<singleTask>
<time0>2011-09-21T[Link].000Z</time0>
</singleTask>
</taskSelection>
<forecastPriority>Normal</forecastPriority>
<makeForcastCurrent>false</makeForcastCurrent>
<makeStateCurrent>false</makeStateCurrent>
<mcSystemAlerter>
<alerts>
<alarmModuleAlert>
<webserviceURL>
[Link]
<alarmDiagTemplateLine code="%EVENT_CODE%" level="1" source="%MC_ID%"
description="%LOG%"/>
</alarmModuleAlert>
</alerts>
</mcSystemAlerter>
</taskProperties>
</task>
</taskList>

11 Setting up a forecasting system

This chapter is not finished yet. Further content is needed

Introduction
This section can be used by the user as a guide when setting up a forecasting system. It will help and support with the main steps of setting up a
DELFT-FEWS application.

DELFT-FEWS is a collection of standard displays, modules and plug-ins. Together with external models these will form a forecasting system. In
this chapter only the standard DELFT-FEWS components will be used. For background information on the different components reference is
made to the description of the individual components in previous chapters.

In this paragraph the following subjects will be discussed:

What is required when setting up a FEWS application


Designing the Forecasting System
Creating a FEWS application, executable and basic configuration
Adding static data, configure the FEWS static configuration files
Adding Workflows and Module Instances
Configuring FEWS displays

Contents
01 Requirements
02 Designing the Forecasting System

03 Creating a FEWS Application Directory
04 Static Configuration

01 Requirements

What is required when setting up a DELFT-FEWS application?


Before starting configuration of a DELFT-FEWS application or editing a DELFT-FEWS configuration the following questions must be answered:

What is the goal of the FEWS application?


What do I want to use FEWS for?
Who are the end-users of the FEWS application?
What are the processes involved in making a forecast?
Which external simulation models will I use in my FEWS application?
Is there enough on-line observed meteorological and hydrological data available?
Is there any meteorological forecast data available?

These questions are just a small selection of questions that should be asked before really starting to build a FEWS. Once a clear idea about what
is required is established you can start designing and building a FEWS. Once your FEWS application is completed you can use the FEWS as a
system that helps you on a structured manner through all required steps of making a forecast.

Not all of the above questions will be answered in this section; the focus here is on the practical side of the system: what data do you need when
setting up a FEWS application.

ESRI Shape files for GIS map layers of the area you want to make a FEWS for. The FEWS explorer uses shape files as a background for
your locations. Make sure the GIS map layers are in the correct co-ordinate system.
Meta-data of the locations that are used in the FEWS (id's, names, co-ordinates)
Meta-data of the observed time series (hard and soft limits, thresholds, time steps)
Parameters used in the FEWS (id's, names, units, etc..)

Next you need to know what data is available at the gauging stations and in what format this will be delivered in the operational system.

All time series file formats, with good description of the file format for the time series you want to import. You also need to know the time
step of the data.
All grid file formats for the meteorological forecasts.
What is the co-ordinate system used. The FEWS needs all data, maps and locations, to be in the same co-ordinate system.
In what time zone is the data to be made available.

Extra information you need when setting up a FEWS application can be:

What is the procedure of the meteorological institute for making observations and forecasts available.
What interpolation procedures do I want to use for my time series and Grid data:
Linear interpolation of water levels and discharges?
Spatial interpolation of meteorological data, rainfall and temperature?
What is the data flow I want to use, i.e. which data is required for input for the models and which data is coming out of the models.
Do I have calibrated configuration files of the simulation models used in the FEWS
When making reports what will be the layout of the reports you want to produce.

When setting up a FEWS system it is recommended that you have a complete set of the required data before you start configuring. Experience
shows that incremental updating of the FEWS configuration files can cost you a lot of extra time. It is best to first make a layout of your system,
prepare GIS map layers, decide on the number of stations and series you want to use, decide on the interpolation procedures, decide on the
models to use, etc.., before starting to build a FEWS application.

02 Designing the Forecasting System

Designing the Forecasting System


The next step in building a FEWS application is to make a rough design of the system. This design can be very simple but it should include all
major steps and actions that will be included in the system. In our test case we will make a simple FEWS that consists of the following tasks;

import observed hydrological and meteorological data, and import forecast meteorological data
fill gaps using interpolation
run a rainfall- runoff model
show the results in a report

The FEWS should also check the imported data on outliers and extreme values. In a simple schema we will show the four main tasks.

Figure 159 Example of simple workflow running a sequence of modules.

Of course each of these tasks can be split into smaller elements. The import task (or workflow) will for example import data from different sources;
it is best to import data from different sources in separate elements. Each of these elements will be a Module Instance, i.e. a configuration of the
FEWS import module.

03 Creating a FEWS Application Directory

Creating a FEWS application directory


A FEWS application is made in a sub-directory with the name of the FEWS application. We will start the example by creating the sub-directory
'NewFEWS' in the main FEWS application directory; you can use the Windows Explorer to do so.

The main FEWS application directory must contain the 'bin' and 'jre' directories; these are the FEWS binaries and the Java Runtime directories.
Add a new directory named 'NewFEWS' in the main FEWS directory.

This application directory must now be filled with all required sub-directories and configuration files used by a basic FEWS application. Copy the
sub-directories of an existing application directory to the newly created 'NewFEWS' sub-directory. The minimum FEWS structure must look like
this:

"\ColdStates" empty
"\Config" FEWS configuration files
"\Help" FEWS Help file
"\Icons" FEWS icons
"\localDataStore" contains the FEWS database and cache files, empty at start
"\Map" contains the map properties file '[Link]' and shape files

FEWS executable

Besides a FEWS application sub-directory every FEWS has its own application executable, this is a small executable telling the main
FEWS programs where the application directory is located and which programs should be used. Associated to each executable is a jpif
file which contains information on how to start the Java Virtual machine.

e.g Files required in the \bin directory for our new FEWS application
[Link]
[Link]

It is possible to run multiple FEWS applications with one set of FEWS program files (binaries). To make a distinction between the FEWS
applications, each FEWS should have an ID ('NewFEWS'). The FEWS ID is specified in the last row of the associated jpif file.

04 Static Configuration
Adding static data, configure the FEWS static configuration files
Map layers
Parameters
Locations
First Prototype
Adding Workflows and Module Instances
Workflows
Module Instances
Configuring FEWS displays

Adding static data, configure the FEWS static configuration files


The first configuration steps will be to store your data in the correct configuration files. When adding data to the configuration files some order
must be followed:

Configure map layers,
Configure parameters,
Configure locations.

Map layers

The FEWS explorer contains a component to show map layers. These map layers are loaded by the FEWS explorer when the FEWS application
is started. The location of the map layers is the "\map" directory of the application. As of the 2007-1 release the maplayers are now put in the
Config/MapLayerFiles directory. As such they can now be uploaded with the Config manager and distributed to all the connected clients. When
using map layers make sure the files are ArcView Shape files and that the co-ordinate systems of the shape files are the same. For information
on how to configure the MapLayers, see the FEWS Explorer component documentation.

Parameters

Parameters are stored in the "Parameters" XML file, located in the regional configuration files. Make sure you keep the number of parameters
limited to a basic set; the more parameters you use, the more complicated the configuration will become. An important property of a parameter or
parameter group is the unit. Use the same unit within a parameter group; this can minimise errors in configurations where conversions between time
series may introduce errors. Remember that FEWS can convert units when importing or exporting external time series. More on configuring
parameters can be found in paragraph 5.4: "Parameters".

Locations

The Locations configuration file contains the information on all locations of the FEWS application. In a normal FEWS you will have a set of
meteorological stations and hydrological stations. You can also add basins as locations. In this case you must enter the same information as for a
location; you just treat it as a basin.
When adding locations to the system a location ID must be entered. Try using known location ID's, for example the same ID's as used in the
telemetry system. Configuration of locations is explained in chapter 5.2.

Location Sets are introduced in the FEWS to define logical groups of locations. During configuration of the FEWS locations file try to add the new
locations also to the correct location sets. More on configuring location sets can be found in paragraph 5.3: "Location Sets".

Grids, Polygons and Longitudinal Profiles also require a location to be configured in the Locations XML file. Grids and Longitudinal Profiles require
extra information, configured in the grids and branches configurations respectively.

Configuration of the location icons is done in the "LocationIcons" XML file located in the FEWS system configuration files, see chapter 4.5:
"Location Icons".

First Prototype

In principle all basic regional information has now been configured and we can start our first prototype. Three more files must be configured to
start our first prototype:

"Explorer" XML file, located in the FEWS system configuration files directory. Enter correct names, co-ordinate system selection, etc..
See chapter 4.2 for details about configuring the FEWS explorer configuration file.
"[Link]" ASCII file, located in the FEWS application root directory. This file must contain some global properties like the
regional directory (a small example is given below).
"LogConfig" XML file, located in the FEWS application root directory. The correct location of the '[Link]' file (the FEWS debug log file)
must be configured in this file.
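The global properties file itself simply consists of key=value pairs; such keys can be referenced elsewhere in the configuration as $KEY$, as is done for example with $IMPORT_FOLDER$ and $R_EXE$ later in this guide. Purely as an illustration (the key names and paths below are placeholders, not prescribed names):

REGION_DIR=d:\FEWS\NewFEWS
IMPORT_FOLDER=d:\FEWS\NewFEWS\Import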

We are now ready to launch our first prototype by running the NewFEWS executable located in the '\bin' directory.

DELFT FEWS so far only includes the following configuration files:

RegionConfigFiles
Locations 1.00 [Link]
LocationSets 1.00 [Link]
Parameters 1.00 [Link]
SystemConfigFiles
DisplayDescriptors 1.00 [Link] (not edited)
DisplayInstanceDescriptors 1.00 [Link] (not edited)
Explorer 1.00 [Link]
LocationIcons 1.00 [Link]
ModuleDescriptors 1.00 [Link] (not edited)

Adding Workflows and Module Instances

The final task in setting up a basic forecasting system is configuring the workflows that perform the actual forecasting steps.

Workflows

A detailed description of workflows is given in chapter 7. In our test we will add four workflows; an Import workflow, an Interpolation workflow, a
Model workflow and a Report workflow. Each of these workflows will include one or several module instances. Workflow files are added in the
workflows directory and a descriptor of the workflow must be added to the "WorkflowDescriptors" XML file located in the regional configuration
directory.

Module Instances

Module instances are the actual configured FEWS modules, included in a workflow, that do the forecasting tasks. When adding a Module Instance
to the FEWS, the XML configuration file must be added to the "ModuleConfigFiles" directory and the id of the Module Instance must be added to
the "ModuleInstanceDescriptors" XML file located in the regional configuration files.

Some Modules require additional configuration files to be added to the system besides the standard Module Instance configuration file. The import
module for example can include configuration files that:

Convert units of time series to FEWS units


Convert data flags from time series values to FEWS standard flags
Change the ID's of locations and parameters to FEWS id's

Configuring FEWS displays

After making the workflows the FEWS displays must be configured in order to see the data that is stored in the FEWS database.

The most important configuration file is the "filters" XML file located in the regional configuration directory. The filters configuration file defines the
locations and parameters that are displayed on the main map and in the list boxes of the FEWS explorer. Filters preferably include the parameters
and location sets; a good configuration of location sets can reduce the number of filters that need to be configured.

The FEWS displays that can be configured for making graphs are:

DisplayGroups 1.00 [Link]


TimeSeriesDisplayConfig 1.00 [Link]

These files are located in the FEWS System Configuration files.

There are two FEWS components not included in the workflows that can be configured to show additional information on the time series. These

are the threshold and validation components of the FEWS. It is not required to use these files; they will however add additional information to the
FEWS.

12 Configuration management Tool

This chapter is not finished yet. Further content is needed

Introduction
The Configuration Manager has been specifically created to allow the management of the configuration files for a regional configuration. Given
that DELFT-FEWS can be used at different levels, the configuration manager will need to be aware of the usage and modify its way of working
accordingly.

The management of Regional Configuration files entails the following activities:

Start the Configuration Manager


Download a configuration from a Master Controller
Import configuration files into local database
Make a selected configuration file the active file
Delete a configuration file
Export a configuration file
Upload a configuration to a Master Controller

With the above activities a configuration can be managed but also modified.

Contents
01 Managing Configurations
02 Validation of a Configuration
03 Analysis of a Configuration
04. Automatic Configuration Update

01 Managing Configurations

Managing configurations
When the Configuration Manager is started, it is initialized with the information available in the local datastore. The datastore that is used is
specified in the jpif file that needs to be provided with the executable file.

After startup the user may attempt to connect to the Master Controller of the region, the details of which are obtained from the synchronization
configuration files for that region. If a connection has been established successfully, the download and upload buttons will be activated. If a
connection cannot be established, the buttons will remain inactive.

The file menu provides a command to create a connection to the master controller, to facilitate the possibility that a user has connected to the
network after opening a Master Controller session.

After selecting Login the following window is shown (an example from the Southern Configuration; if multiple master controllers are available,
these will be shown as well).

Download a configuration from a Master Controller

The user must manually initiate a synchronization session with the master controller to download the latest configuration files. Since the
configuration manager works independently from FEWS, this is a required action to ensure that the local datastore is up to date. Only configuration
files need to be synchronized.

Click on [Download] to download all configuration files. The download button is only available if a connection has been established with a master
controller.

In the screen above the download button is not available.

If no download can be performed, this does not mean that the manager cannot be used.

Import configuration files

The import function allows a single or multiple configuration file(s) to be imported from the file system. Files can only be imported into a given
group if the Configuration Manager configuration allows this.

On import, each configuration file will be assigned a new, locally unique ID. This ID is prefixed with "CM: " followed by the date/time of the import
in milliseconds. The exact date/time is arbitrary, as the local ID needs to be unique only on the local machine.
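For example, a file imported by the Configuration Manager could receive a local ID like "CM: 1219131855000" (an illustrative value; the number is simply the import time in milliseconds).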

In case the file to be imported is the first file that is imported of a certain configuration schema, this file is directly set as Active.

Three types of configuration files are handled by the configuration manager: XML, HTML and Binary files. The handling of each file type is slightly
different, as is shown in the following table.

FileType  Handling

XML       The file is stored in a readable form in the data store. The content of the xml file is validated before being imported. Invalid files will not be imported.

HTML      The file is stored in a readable form in the data store. The content is not validated, as no schema will be available to validate against.

Binary    Binary configuration files include all other configuration files. These may be xml configuration files, module data sets or module states.

When a configuration file is imported, the user that has imported the file is registered.
How are configuration files displayed?
An active configuration is shown having a yellow background. A selected configuration file is shown having a blue background. An active selected
configuration file is shown with a blue background, except for the ID which is shown with a yellow background. Below an example is given of two
available Locations configuration files. The active file is selected.

Make a selected configuration file the active file

Of all instances of a given configuration file, only one can be the active file. After selecting a file, click on [Set Active] to make the selected file the active file.
Only one configuration file may be active at any moment.

Delete a configuration file

Deleting configuration files is possible only in limited situations. A configuration file could at some stage be used in a forecast and must therefore
remain available for at least the length of the rolling barrel.

Configuration files may only be deleted in the following situations:

The configuration file ID is a local ID, i.e. it starts with the prefix "CM: ".
The configuration file is not the active configuration file, i.e. it is not referenced in the default table.

To delete a configuration file, select the file and click on [Delete].

Export a configuration file

Exporting a configuration file allows the file to be saved to the file system. The filename will be set by the configuration manager. The filename will
follow the configuration file naming convention used in the file system:

Name Version [Link]

Where:

Name : ID of the configuration file
Version : Version number of the configuration file
Status : Default if the configuration file is the active file, a zero length string if the file is not an active configuration file.
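For example, assuming the standard .xml extension for an XML configuration file, the active Locations file with version 1.00 would be exported under a name like "Locations 1.00 default.xml", whereas a non-active instance would simply lack the status part of the name (the exact capitalisation and extension follow the naming convention used in your file system).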

After exporting, the configuration manager will start up the application that has been associated with the specific file. This association must be
configured in the configuration management configuration file.

Upload to Master Controller

Uploading a configuration (file) means that all modified and added configuration files are synchronized with the master controller database. An
essential aspect of uploading is that the configuration files are provided with a unique Master Controller ID. The local ID that has been set for the
configuration file cannot be guaranteed to be unique for the master controller, as multiple users may be changing a configuration.

During the period between downloading a configuration and the ensuing upload of a modified configuration, in theory someone else could
have made a change to the same configuration. The configuration manager will not deal with this theoretical possibility. The procedure that is
used is that of optimistic locking, where the last changes that are made to a regional configuration are the changes that are stored.

The procedure for uploading a configuration file is:

Validate the configuration


Connect to the master controller
Request a unique ID for each configuration file having a Local ID
Reset the Local ID's to the obtained Master Controller ID's
Synchronize the configuration tables
Log off

The uploading of configuration files is carried out in a transaction, enabling the complete transaction to be rolled back if an error occurs.

Changing an existing configuration file

The procedure to follow when a certain configuration file needs to be changed is simple. To do so, carry out the following steps:

Step 1: Select the configuration that needs to be changed


Step 2: Export the file by clicking on [Export]. This will start the 'editor' that has been configured for the selected configuration file. If the editor
does not allow editing, close the editor and open the configuration file in an appropriate editor
Step 3: Make the required changes and save the file. Give the configuration file an appropriate name, using the file naming convention given in
paragraph 13.2.5
Step 4: Import the modified file. Note that the filename is now shown in the description field.
Step 5: Set the newly imported file Active
Step 6: Use the analysis section to verify that the configuration is still correct. Verify the relevant workflows.
Step 7: Upload the configuration to the master controller. Check that the file has indeed been uploaded by verifying that the ID has been changed
from a Local ID to a Master Controller ID

02 Validation of a Configuration
Validation of a Configuration
Primary Validation
Direct Validation
Indirect Validation
Secondary Validation (internal dependencies validation)

Validation of a Configuration
The validation of a configuration is carried out on two levels:

Primary Validation: validation of a configuration file according to the associated schema


Secondary Validation: validation of the internal dependencies between configuration files

The Configuration Manager will not import or upload any configuration which violates the validation rules that have been set in the Configuration
Manager configuration file.

Primary Validation

Primary validation of an XML configuration file means that the content of the XML file is in accordance with the xml schema for that type of
configuration file. There are two possibilities to carry out a primary validation: direct or indirect validation.

Any other configuration files that are not XML, i.e. HTML and binary files, cannot be validated.

Direct Validation

Direct validation involves a direct check of the configuration file against the schema as defined in the configuration file. The schema that is
referenced in the xml file may or may not refer to the latest schema. Future releases of DELFT-FEWS should however have schemas that are
backwards compatible. The latest schema is the schema that is posted on the Delft Hydraulics website.

Direct validation is the most robust validation method and is recommended to be used. The Configuration Manager has been configured to use
Direct Validation.

Indirect Validation

When indirect validation is used, the configuration file is read with the code that has been created based on the latest schema at the time the code
was compiled. The file is valid if it can be successfully read.

This validation method is less robust than the direct method and should only be used when direct access to the schemas is not available.

There are a number of distinct differences in the use of either direct or indirect validation:

Direct validation ensures that a configuration file is in accordance with the latest schema. To use this option the latest version of the
Delft-FEWS code is required. Direct validation requires access to the Delft Hydraulics web site. An alternative could be to have all
schemas available locally, but this requires that the configuration files are edited to reflect the change in schema location.
Indirect validation ensures that the configuration file will always be accepted by the system in use. No connection to the internet is
required.

Secondary Validation (internal dependencies validation)

The internal dependencies of a regional configuration follow the rules of a relational database. These are described in the configuration manager
management file.

When a violation is found, the Configuration Manager will provide an appropriate message indicating which violation has been found, together
with the file or files that have been found to cause the violation.

The validation of internal dependencies is only carried out on the set of Active configuration files.

03 Analysis of a Configuration

As of release 2007/01 the Analysis function is no longer available.

Analysis of a Configuration
Approach
Implementation
Matching of timeseries
Handling of cyclic references
Handling of special cases
Using the Configuration Manager Analysis Tool

Analysis of a Configuration
Configuration analysis provides the means to analyze a configuration in detail. The analysis uses the dependencies between three configuration
objects: workflows, Module Instances and timeSeriesSets to provide a visual overview of the configuration.

To analyze a configuration, activate the Analysis tab of the Configuration Manager.

Approach

The principle of the configuration analysis is quite simple. For each workflow in the configuration, all module instances that are used are shown in
the order in which they are used. For a selected module instance, all timeseries that are created by that module instance are then displayed as the
top level timeseries.

For each of the timeseries in cascading order the following questions are answered:

which module instance created this timeseries?


which timeseries are required by that module instance?
etc

This procedure is recursively followed through until, for example, an import module instance is encountered. Following this through allows broken
links and unexpected starts or ends to be easily found in a configuration.

Implementation

When the Analysis mode is selected, a database of all timeseries is created. This means that all module instances are analyzed for all the
timeseries that are created and required within each module instance. For a selected module instance the input and output timeseries are
matched in order to build the analysis tree.

The configuration files have been made such that a large amount of freedom is given to the user in setting up a configuration.

Matching of timeseries

Timeseries are matched on the following keys:

Key Comment

locationId A locationId is unique. A timeseries may also be identified using a locationSetId, which is a collection of locationId's. A
locationSetId may consist of a large number of locationId's. When a locationSetId is used, the locationSetId is shown in the
tree and the underlying timeseries are shown in a table.

parameterId the parameterId is unique

timeSeriesType the timeSeriesType is unique

timeStep the time step is unique

moduleInstanceId a timeseries will in most cases be assigned a moduleInstanceId that is equal to the moduleInstance that creates it.
However, this is not a rule. It may equally be possible that a time series is assigned a different moduleInstanceId. For each
timeseries it is therefore required that both moduleInstanceId's are used to identify the timeseries:
- the moduleInstanceId of the module that creates the timeseries. This moduleInstanceId is the same as the configuration
filename
- the moduleInstanceId that is assigned to the timeseries when it is created

Handling of cyclic references

The configuration files allow a certain degree of cyclic referencing. A cyclic reference is caused when a module creates a timeseries that matches
the input timeseries. In this case an infinite loop would be caused in the analysis processing. Note that for the actual configuration a cyclic
reference does not pose any problem whatsoever.

For certain modules a cyclic reference is expected and must be handled explicitly. This is the case for the interpolation module and for the
transformation module. In other cases when a cyclic reference is found no absolute inference can be made regarding the nature of the cyclic
reference, and further analysis must be stopped.

Transformation module

A typical example of apparent cyclic referencing in the transformation module configuration is given with the following example, in which a time
series is created through merging a number of time series, after which, in the same configuration, the merged time series is checked to ensure that
no values below a given value are found.

The handling of the time series is carried out in two transformation steps:

Step 1: merge the inputs using data hierarchy. This results in a merged time series.
Step 2: the merged time series is transformed using an arithmetic function, applied in two segments. The result is written to exactly the same
time series as in step 1.

The above example illustrates that using the exact same timeseries as the resulting output timeseries in two separate transformation sets is
allowed. The same could also be achieved through two different module instances, but this would obviously lead to additional moduleInstanceId's.

Handling of this case is straightforward. When in a single transformation module multiple transformationSets are configured having the exact
same timeseries as a result, it may be safely assumed that the last timeseries is the principal result timeseries. The other timeseries may be
skipped in the analysis.

Interpolation module

A typical example of apparent cyclic references in the interpolation module is given by the following example, in which interpolation of a single
timeseries is applied three times. Each interpolation however is configured in a different manner for a special reason. The input and output
timeseries for each interpolation are however exactly the same.

Step 1: Interpolation of the burn-in period (required to create a smooth start-up for a hydrodynamic model). This is a simple linear
interpolation, over a fixed length in a fixed period. This period is usually longer than the period allowed in step 2.
Step 2: Ensure that any small gaps up to e.g. 2 hours, are interpolated using linear interpolation
Step 3: Ensure that any remaining gaps are filled in with a default value. This interpolation is required to prevent unexpected crashing of
the hydrodynamic model.

Handling of this case is straightforward. When in a single interpolation module multiple interpolationSets are configured having the exact same
timeseries as a result, it may be safely assumed that the last timeseries is the principal result timeseries. The other timeseries may be skipped in
the analysis.

Handling of special cases

There are a number of special cases, exceptions to a general rule, that must be handled correctly by the analysis.

General rule: each module requires an input and an output timeseries.


Special case: some modules may obtain data from other sources or may not produce a timeseries. The special cases are given in the table below.

Module: importRun. Input to the module: xml files imported from the filesystem.

Module: transformation. Input to the module: optional: timeseries or profile data.

Module: export. Output from the module: xml file exported to the filesystem.

Module: general adapter. Input to the module: optional: timeseries or import from filesystem in PI format. Output from the module: optional: timeseries or export to filesystem.

Module: interpolation. Input and output timeseries are always the same.

Using the Configuration Manager Analysis Tool

For each workflow in the configuration, all module instances that are used are shown in the order in which they are used. These are displayed in
the left pane of the window:

The Configuration Manager configuration allows certain workflows to be specifically excluded from the analysis. In some cases this is required if
the workflow would create cyclic references that cannot be resolved internally. In the above example, this is the case for the
database_Maintenance workflow.

For a selected module instance, all timeseries that are created by that module instance are then displayed as the top level timeseries:

In the right pane, Module Instances are shown with a green ball icon, while time series sets are shown with a yellow icon. A time series set may
consist of a single time series or possibly multiple time series. The details of the time series are shown in the table below the right hand pane:

In the above example the selected time series set consists of 4 time series.
Finding blind starts
Each result time series will be created through a module instance. This module instance in turn requires input data, either imported or created in
the system. For each result time series, all module instances that are used to create it can be followed through, analyzing the input data that is
used. The configuration manager will flag a blind start with an explicit error icon if a module instance input time series cannot be found.

04. Automatic Configuration Update


Function: Automatic update of configuration

Where to Use? Everywhere: locations, locationSets, location DBF and attribute files

Why to Use? To let the system be updated by other maintenance programs.

Description: Functionality to automatically import and process regional configuration changes in locations, locationSets, location DBF and attribute files

Available since: DelftFEWS2008.01, update in 2009.01 (DBF, maplayers)

Contents
Overview
Configuration
Configuration Update Script ModuleConfigFile
PI Configuration Update Script
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference

Overview

To allow other programs to maintain the list of locations, locationSets, etc., it is possible to import configuration changes through an
import that reads update script files.
The configuration update works like a regular data import. There is a moduleinstance that imports PI update script files from an import directory.
The script file contains options for the following possible configuration updates:

addLocation: Add location to the Locations file.


addLocationSetItem: Add an existing location to a location set. Also adds location information to linked IdMaps and ValidationRuleSets.
removeLocationSetItem: Remove a location from a location set. The location information is also removed from linked IdMaps and
ValidationRuleSets.
editLocation: Edit an already existing location in the Locations file.
editLocationSetItem: Edit linked information for a location that is already present in the locationSet.
importRatingCurves: Import rating curves from xml file(s) in the FEWS rating curves format. The imported rating curves are added to the
ratingCurves xml file in the RegionConfigFiles directory. If a rating curve with the same ratingCurveId as an existing rating curve is

imported, then the existing rating curve will be replaced.
importMapLayerFiles: Import dbf files that contain location attributes. The imported dbf files are put into the mapLayerFiles configuration
directory. This only works for dbf files for which an older version is already present in the configuration. If the dbf files to import would
make the configuration invalid, then an error is logged and none of the dbf files will be imported.

ConfigManager
This functionality only works for configurations that are distributed through the database via the Config Manager.

Configuration

Configuration Update Script ModuleConfigFile

See the schema of the import moduleinstance at [Link]

The config file is very simple:

[Link]
<configUpdateScriptConfig xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
    <versionIncrement>0.1</versionIncrement>
    <scriptDirectory>$IMPORT_FOLDER$/ConfigUpdate</scriptDirectory>
    <failedDirectory>$IMPORT_FAILED_FOLDER$/ConfigUpdate</failedDirectory>
</configUpdateScriptConfig>

versionIncrement = Number to increment configuration file version with. Note that the increment should be zero in case no version numbers are
used in the config files.
scriptDirectory = Location of script files to be imported.
failedDirectory = Files that could not be imported due to an error are copied to this directory.
backupDirectory = Successfully imported files are moved to this directory.

PI Configuration Update Script

In the import directory the configuration update script files should be available. See the schema of the import moduleinstance at
pi_configupdatescript.xsd

PI_UpdateScript.xml
<configUpdateScript xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.2">
    <updateCommand>
        <importMapLayerFiles>
            <importPath/>
        </importMapLayerFiles>
    </updateCommand>

    .... or

    <updateCommand>
        <addLocation>
            <location id="loc2" name="loc2">
                <description>description</description>
                <shortName>short</shortName>
                <toolTip>new location</toolTip>
                <x>1</x>
                <y>2</y>
                <z>3</z>
            </location>
        </addLocation>
    </updateCommand>

    .... or

    <updateCommand>
        <addLocationSetItem locationSetId="Boezem_Poldergemaal_H.meting" locationId="loc2">
            <idMapData idMapId="IdImportCAW" internalParameterId="[Link]" externalParameterId="P-new-ext" externalLocationId="C-new-ext"/>
            <validationRuleData hardMin="-10" rateOfFall="-10" hardMax="150" parameterId="[Link]" rateOfRise="10"/>
        </addLocationSetItem>
    </updateCommand>

</configUpdateScript>

Note that the importPath is relative to the location of the PI script file.

Sample input and output

Example [Link]

Error and warning messages

Description of errors and warnings that may be generated

Error: Error message

Action: Action to fix

Known issues

None

Related modules and documentation

None

Technical reference

Entry in moduleDescriptors: Specification of: ENTRY and DESCRIPTION in the SystemConfigFiles\[Link]

<moduleDescriptor id="ConfigUpdateScript">
<className>[Link]</className>
</moduleDescriptor>

13 Additional Modules

This chapter is not finished yet. Further content is needed about how to download a configuration from a master controller and
calibration module

Introduction
In this chapter a number of additional modules are described. These modules are generally seen as a part of DELFT-FEWS. They can,
however, also be used independently of the DELFT-FEWS system, as they have been developed as modules that can be connected to
DELFT-FEWS using the same concept that is used to link third party modules to the system, i.e. through the General Adapter. Currently the
modules described include:

The DELFT-FEWS Flood Mapping Module

Download a configuration from a Master Controller

The chapter also includes a brief description of the Calibration Module. While this module is a part of DELFT-FEWS, and its use is described in
the User Guide, some of the concepts are described here, as well as specific conditions it poses on the configuration of DELFT-FEWS.

Contents
01 Flood Mapping Module
03 Automatic WorkflowRunner in SA mode
04 Bayesian Model Averaging (BMA)
05 Historic Forecast Performance Tool (HFPT) Adapter

01 Flood Mapping Module

Flood Mapping Module


Data requirements
TIFF background maps
Data preparation
geoDatum
geoReferenceData
mapSectionId
point
Configuring the Flood Mapping Module
floodMapdirectories
rootDir
workDir
outputDir
inputDir
pcrDir
floodMapSet
geoDatum
longitudinalProfile
branchFile
timeSeriesFile
profileFile
floodExtentMap
input

profileFile
timeSeriesFile
asciiDemFile
asciiAxisFile
asciiMapSectionFile
geoReferenceFile
interpolationOptions
pcrScript
output
outputOption
asciiGrid
pcrGrid
filename
mapStackFileName
contour
filename
numberOfCountours
Running the flood map module

Flood Mapping Module


This document describes the configuration and use of the Flood Mapping module. While this module is distributed as a part of DELFT-FEWS, it
can potentially be run outside of DELFT-FEWS as well.

The role of the flood mapping module is to provide a projection of the results of a one-dimensional hydrodynamic module as a 2D flood surface
map. The result of the module can be displayed using the grid display, which is a part of DELFT-FEWS. The results can equally be exported as
standard GIS interchange files using for example the ESRI Shape file and/or ASCII grid file formats.

Setting up a flood mapping module within DELFT-FEWS is equivalent to creating a new model. As such, specific requirements are posed on data
required in setting up the module, as well as some steps to make this data available to the module in the correct way.

Data requirements

The interpolated flood map is derived on the basis of the results of a one-dimensional hydrodynamic model. Effectively the results of the model,
given at the water level calculation points, are geo-referenced and subsequently interpolated to form a water level surface. The primary data
requirements are therefore those that link the 1D model to a 2D location. In some cases a careful interpretation of how the 1D model is
represented in two dimensions must be given, and a good understanding of the assumptions made in establishing the 1D hydrodynamic model for
the reach in question is a prerequisite.

Georeferenced cross sections


The method used to link the 1D model results and a location on the map is a dataset giving the coordinates of each cross section point. The 1D
model calculates levels at these points, and, using the locations given in this dataset, all information required for projecting the results in space is
available.

Important to note is the assumption made in 1D modelling that the water level calculated at each grid point in the model is valid at all locations on
the cross section. The georeferenced points available should therefore not only include one point per cross section (e.g. at the centre of the river),
but multiple points describing how the cross section crosses the floodplain and the main channel. Each of these points must carry the same
identification label, and will be assigned the same water level calculated in the 1D model (see the example in Figure 160). For flood storage
areas, where the 1D model calculates a single water level, the outline of this area should be represented by a suitable number of points (an
example is given in Figure 161).

Figure 160 Example of geo-referenced cross sections. Triangles indicate geo-referenced points. Empty circles (see the dotted selection line)
show points belonging to one cross section

Figure 161 Example of a geo-referenced flood storage basin. Triangles indicate geo-referenced points. Empty circles (see the dotted selection
line) show points belonging to a storage basin for which the 1D model provides one calculated water level.

Digital Elevation Model


A digital elevation model of the reach in question should be available. This elevation model must be of sufficient extent and of sufficient resolution.
Generally these can be obtained from for example laser altimetry (LiDAR) survey data. In this case the resolution may be extremely high, without
necessarily adding to the accuracy in the final flood map. Run times and memory usage of the flood map module are quadratically proportional to
the resolution. A resolution of e.g. 10m by 10m is often sufficient, and source data may need to be aggregated prior to use in the flood mapping
module. (see data preparation).

River Axis
The flood mapping module establishes a final flood map on the basis of flood areas with a contiguous connection with the main river channel. A
shape file representing this river axis (normally a line) is required. Note that for storage basins with an embankment around the edge, this
contiguous connection to the main channel may be a problem. As a consequence the complete storage area should be included in the river axis
theme.

Figure 162 River axis map. These are indicated by the hatched surfaces.

TIFF background maps

Although not strictly used by the flood mapping module, TIFF layers of the reach at sufficient resolution (e.g. 1:10000, or 1:25000) are required to
check consistency of data and avoid problems such as positional errors etc.

Data preparation

Prior to application of the flood mapping module the source data will need to be prepared/reformatted to make it suitable.

Digital Elevation Model


The digital elevation model for the reach in question will need to be checked on consistency and made available at a suitable resolution. Using too
fine a resolution will result in excessive run times of the module and/or memory problems. If the source model is available at too fine a resolution
then this can be aggregated. When aggregating the model, explicit consideration should be given to line elements such as embankments, roads
etc. Where these are present in the floodplain a different aggregation strategy should be used. For most areas an averaging aggregation strategy
should be used, while for line elements a strategy should be used where the aggregated cell contains the maximum value of the underlying cells.
If available, levels along line elements can be forced in the aggregated elevation model.

The resulting flood map will return missing values where the digital elevation model contains missing values. These are often seen in the main
channel in Laser Altimetry derived elevation models. These should be filled in prior to running the flood mapping module. Techniques such as
spatial interpolation or nearest neighbour filling can be used to remove these values. The resulting elevation model should always be checked
afterwards. Once complete, the digital elevation model should be saved as an ARC-INFO format ASCII grid file.

Flood map sections


The flood mapping module will interpolate a flood surface map for the reach in a set of independent sections. A flood mapping section is an area
considered to be geographically independent (in water level) from other sections in the flood map. There may be dependence through the
structure of the 1D model. In fact, the section map makes the structure of the 1D model explicit. It defines which part of the final flood map is
influenced by which part of the 1D model. This is most apparent at a confluence, where for the piece of land between the two tributaries a
decision must be made from which tributary each cell is to be inundated. This decision was actually already taken when establishing the structure
and cross sections of the 1D model. An independent section must also be defined for storage basins for which a level is calculated in the 1D
model. Each section is numbered consecutively by a unique index, starting at zero. This section map is best made as a polygon theme first, and
once completed should be saved as an ARC-INFO format ASCII grid file at the same resolution and extent as the digital elevation model.

Figure 163 Flood map sections. These are numbered consecutively, starting at 0.
River Axis map
The river axis map should also be saved as an ARC-INFO format ASCII grid file at the same resolution and extent as the digital elevation model.
If this theme is a line theme then it is good practice to buffer the line with a distance equal to the grid cell resolution prior to saving as a grid file.
The value of the grid cells is not important, but grid cells not in the river axis map should be saved as missing values.

Georeferenced points
The list of georeferenced points is saved to an XML file. The points for each of the sections in the flood map section coverage are defined in one
group. When creating a flood map

Figure 164 Elements of the geoReference points configuration.

geoDatum

Coordinate system used in defining the points. For an enumeration of available coordinate systems see Appendix B.

geoReferenceData

Definition of a set of points falling in a particular section. For each of the sections (see above) the points to be considered when establishing the
flood map for that section should be defined in separate groups.

Attributes;

label : label of the points in this group. Each label must be associated with a label in the longitudinal profile/time series used to create the
flood map.

mapSectionId

Id of the flood map section. This should comply with the section Id's in the Flood Map Section theme. The sections given in this file are
those for which a floodmap is established. If the section id is not included in this file then a flood map will not be interpolated for that
section id.

point

Definition of a point falling within the current section and to be allocated a calculated level associated with the current label.

Attributes;

x : x-coordinate of point (Easting)


y : y-coordinate of point (Northing)

Example of a set of geoReference points
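The original example is not reproduced here. Purely as a sketch, based on the element descriptions above (the root element name, the omitted namespace declaration, the coordinate system value and the coordinates are illustrative assumptions, not the actual schema):

<geoReferencePoints>
    <geoDatum>Rijks Driehoekstelsel</geoDatum>
    <geoReferenceData label="CrossSection_017">
        <mapSectionId>3</mapSectionId>
        <point x="155000.0" y="465250.0"/>
        <point x="155040.0" y="465310.0"/>
    </geoReferenceData>
</geoReferencePoints>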

Configuring the Flood Mapping Module

Once the data has been prepared, the flood map module itself can be configured. This is again configured through an XML file. The flood map
module can be applied in a number of different ways.
1. Input in the form of a set of time series, one for each calculation point in a river branch; output as a longitudinal profile time series.
2. Input in the form of a set of time series, one for each calculation point in a river branch; output as a flood map.
3. Input in the form of a longitudinal profile time series; output as a flood map.

When used to create flood maps, outputs can be defined to be returned in a number of ways;
1. As a time series of grids, showing distributed depth data for each time step,
2. As a single grid, showing the maximum distributed flood depth,
3. As a polygon of the maximum flood extent. This polygon can be formatted both as a Published Interface Polygon file, and as an ESRI
compatible Shape file.

Figure 165 Elements of the flood map module configuration

floodMapdirectories

Root element for definition of directories used in module

rootDir

Root directory for the module. All other directories can be defined relative to this root dir.

workDir

Working directory when running the module

outputDir

Output directory for resulting flood maps and Published Interface FloodMap Stack file. This is the directory from which the General Adapter
running the flood map should be configured to read the maps.

inputDir

Input directory for time series / longitudinal profile for which a flood map is to be calculated. This is the directory where the General Adapter
running the flood map should be configured to write the data.

pcrDir

Optional location for PCRaster engine used in deriving flood maps. Required when using PC Raster executable (DLL is in the FEWS Bin
directory).

floodMapSet

Root element for the definition of activities to be run for the flood map module.

Figure 166 Elements of the flood map configuration

geoDatum

Definition of the coordinate system used in flood mapping. See Appendix B for enumeration of available options.

longitudinalProfile

Root element to be used when requesting output from the flood map module as a longitudinal profile.

branchFile

Published Interface formatted file with the calculation points to be included in the profile. Labels (id's) at the locations should coincide with the
labels (id's) in the time series.

timeSeriesFile

Time series inputs. These should be given for each location to be considered in the profile. Note this can be used either to create a profile for use
in flood mapping in an ensuing step, or to create a profile for visualisation using the longitudinal profile display.

profileFile

Name of the output longitudinal profile. For each label where a match is found between the time series and the branch file, data is retained in this
output file.

floodExtentMap

Root element to be used when using the module to create a flood extent map.

input

Root element for defining inputs to flood extent map.

Figure 167 Elements of the configuration of inputs from the flood map.

profileFile

Name of the longitudinal profile file (Published Interface XML format) if this is used as an input (may have been created in the previous step). The
labels in the profile file should coincide with the labels in the geoReference points file.

timeSeriesFile

Name of the time series file (Published Interface XML format) if this is used as an input. The labels in the time series file should coincide with the
labels in the geoReference points file.

asciiDemFile

Name of the digital elevation model. This must be in the Arc-info ASCII grid format.

asciiAxisFile

Name of the axis file. This must be in the Arc-info ASCII grid format.

asciiMapSectionFile

Name of the map sections file. This must be in the Arc-info ASCII grid format.

geoReferenceFile

Name of the published Interface XML file with the geo-referenced point data.

interpolationOptions

Options to be used in interpolation. See the interpolation module (Module Configuration) for details. The flood map module should be defined to
use bi-linear interpolation.

The options in this section should be set as;

<interpolationOptions>
<interpolationOption>bilinear</interpolationOption>
<interpolationType>seriesgeneration</interpolationType>
<valueOption>normal</valueOption>
</interpolationOptions>

pcrScript

Name of the PC Raster (GIS) script to run when creating the flood maps. This item should be set as;

For PCRaster Script (Old Version)

<pcrScript>
<pcrScriptFile>pcr_flood_clump.mod</pcrScriptFile>
</pcrScript>

See attached pcr_flood_clump.mod

For PCRaster XML Script (PCRaster 2008 Version compatible)

<pcrScript>
<pcrScriptXMLFile>[Link]</pcrScriptXMLFile>
</pcrScript>

See attached [Link]

output

Root element for definition of the required output. Different output options may be selected. Multiple options may also be defined.

Figure 168 Elements of the flood map output configuration

outputOption

Definition of an output block, requesting the flood map module to return the given output type. Enumeration of available options includes:

pertime : for output as a time series of flood maps


maximumextent : for output of the maximum flood extent as a grid file.
contour : for output of the maximum flood extent map as a contour map

asciiGrid

Root element for requesting output as a time series of ASCII grid files.

pcrGrid

Root element for requesting output as a time series of PC Raster grid files.

filename

Filename of output grid. Note that the time step number is appended, e.g. for time step 1 and filename "asc" this becomes "asc0000.001".

mapStackFileName

File name for the Published Interface XML format file used by the general adapter for importing the resulting flood map.

contour

Root element for requesting output as a time series of contours (polygon files). This option can only be used for creating a maximum
flood extent.

filename

Filename of output polygon file. Note that if this is given with a suffix "xml" then this is a Published Interface formatted XML file. If it is
"shp" then the output will be as an ESRI shape file.

numberOfCountours

Number of contours in the resulting map; at least 1 should be given.

Running the flood map module

The flood map module is run within DELFT-FEWS through the General Adapter. Details on the configuration options can be found in the
Module Configuration section. It is important to configure the Import/Output directories correctly to allow the module to work
correctly.

An example of the General Adapter configuration is given below. Note a Java Class is run (this is the flood map module) with the name of the
XML file configuring the module as an argument. The other items in the General Adapter configuration are for defining data to export to and data
to import from the Flood Map module.
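The original example is not reproduced here. Purely as a rough sketch (the class name placeholder, configuration file name and time-out below are assumptions, not the actual adapter interface), the execute activity could take a form similar to the executeActivity examples given for the BMA module later in this chapter:

<executeActivity>
    <command>
        <className>[Link]</className> <!-- flood map module class; placeholder -->
    </command>
    <arguments>
        <argument>%ROOT_DIR%/Config/FloodMapModule.xml</argument> <!-- flood map module configuration file; illustrative name -->
    </arguments>
    <timeOut>600000</timeOut>
</executeActivity>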

03 Automatic WorkflowRunner in SA mode

Workflowrunner
Workflows can automatically be started on stand alone systems by using the WorkflowRunner program. The WorkflowRunner will either start the
given workflow in the running region or start the region in order to run the workflow. The WorkflowRunner makes use of a socket interface started by
the stand alone system.

[Link]

In order to enable automatic workflow execution on a region, one has to configure the region to listen for workflow requests on a socket interface.
The socket interface can be configured in the piServicePortRange tag of the [Link] file. The next example shows a configured pi socket for port
8432.

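The example itself is not reproduced here. Based on the note below that the start and end attributes are set to the same port number, such an entry could look like the following sketch (the surrounding element context is not shown):

<piServicePortRange start="8432" end="8432"/>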

A port number can only be configured once per operating system. Use different port numbers when running multiple regions on one machine.
Note that the start and end attribute of the piServicePortRange are set on the same port number to make the port number a fixed port number.

Run an automatic workflow from the command line using JPIF config

Workflow runs can be started from the command line. The easiest way to accomplish this is by using a jpif configuration. In the bin directory,
configure the '[Link]' and '[Link]' files. Configure the jpif as follows:

"[Link]
<Workflow id="Id">
<ip service="service" port="port" nr="nr">
[optional system time in format: yyyy-mm-dd hh:mm:ss]
</ip></Workflow>

Where the bold rows are WorkflowRunner specific. Here is an example of a valid jpif configuration:

After starting the jpif configuration, one of the following situations can exist when the port numbers are correctly configured:

Case: The stand alone region is running and listening on the correct port number.
Result: In this case one should see that the workflow is executed in the system log of the explorer gui.

Case: The stand alone region is not running.
Result: The region will be started up. The explorer gui appears and one should see that the workflow is executed in the system log of the explorer gui.

04 Bayesian Model Averaging (BMA)

Contents
Introduction
Approach within FEWS

BMA in FEWS

Introduction
Bayesian Model Averaging (BMA) is a standard statistical approach for post-processing ensemble forecasts from multiple competing models
(Leamer, 1978). The method has been widely used in the social and health sciences and was first applied to dynamic weather forecasting models
by Raftery et al (2005). Details of the method can be found therein.

The basic principle of the BMA method is to generate an overall forecast probability distribution function (PDF) by taking a weighted average of
the individual model forecast PDFs. The weights represent the model performance, or more specifically, the probability that a model will produce
the correct forecast. In a dynamic model application, the weights are continuously updated by investigating the model performance over the most

recent training period. The variance of the overall forecast PDF is the result of two components. The first component is associated with the spread
between the model forecasts. The second component is the uncertainty of each individual model forecast. The magnitude of this latter component
is also determined over the training period.
See also the published paper "Use of Bayesian Model Averaging to Determine Uncertainties in River Discharge and Water Level Forecasts".

Approach within FEWS


The BMA module can be incorporated as a FEWS adapter module.
The page BMA in FEWS gives a basic schematic view of how the data transfer from FEWS to the BMA module and vice versa should look.

BMA in FEWS

BMA in FEWS.
The present version of BMA within FEWS uses an R package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging, the
"Ensemble BMA Package". The package was developed by Chris Fraley, Adrian E. Raftery, J. McLean Sloughter and Tilmann Gneiting at the
University of Washington. This package is distributed under the General Public License (version >= 2).

The R package can be downloaded from the CRAN R project website or directly from the R package page. However, it is recommended that you download the
version of ensembleBMA from the link given below instead, since version 3.0-3 has been tested and is used within FEWS. The newer version which is
available from the above-mentioned link may or may not work flawlessly within FEWS.

Ensemble BMA Package Documentation.


For Ensemble BMA R package documentation, please refer to the online documentation on the CRAN R project website through this link: Ensemble
BMA documentation.

Versions
FEWS uses R version 2.7.0.
Package: ensembleBMA, Version: 3.0-3, Date: 2008-07-21
Supporting Package chron, Version: 2.3-24, Date: 2008-07-18
Please note: the ensembleBMA version used is an older version that is no longer supported by the original developers.

To run BMA in FEWS, first install the correct version of R on the computer where the BMA Module is running.
Copy the contents of Ensemble BMA [Link] and Chron [Link] into the library directory of the R installation.
Please note: use only the package versions mentioned above for running the BMA Module in Delft-FEWS.

Schematic Diagram

Preprocessor

The BMA Module preprocessor prepares the input for the ensemble BMA R package, which uses input in CSV format. The General
Adapter configuration of the preprocessor is shown below.

<executeActivity>
<command>
<className>[Link]</className>
</command>
<arguments>
<argument>%ROOT_DIR%</argument> <!-- root directory -->
<argument>piOutputTimeSeries/[Link]</argument> <!-- outputfile -->
<argument>%TIME0%</argument> <!-- Time0 -->
<argument>0</argument> <!-- Start of Lead time period in days -->
<argument>[Link]</argument> <!-- Parameter file - each column represents a row -->
<argument>piOutputTimeSeries/[Link]</argument> <!-- Number of (partly) complete Forecasts
used for calculating the training period -->
</arguments>
<timeOut>4000000</timeOut>
</executeActivity>

The above configuration has to be repeated for each lead time. Make sure that the name of the output file is changed accordingly.

BMA Module

The BMA Module is a script written in R which uses the ensembleBMA package briefly described above. The General
Adapter configuration for running the BMA Module is shown below.

<executeActivity>
<command>
<executable>$R_EXE$</executable>
</command>
<arguments>
<argument>--vanilla</argument>
<argument>%ROOT_DIR%/config/BMA_FEWS_Script.R</argument>
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- inputfile -->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- outputfile quantile -->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- outputfile weights-->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- outputfile bias-->
<argument>0</argument> <!-- input lead time in days (not used) -->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- inputfile Forecast Length -->
</arguments>
<timeOut>1000000000</timeOut>
<overrulingDiagnosticFile>%ROOT_DIR%/[Link]</overrulingDiagnosticFile>
</executeActivity>

The above configuration has to be repeated for each lead time. Make sure that the names of the input and output files are changed accordingly.

Please note: $R_EXE$ is a property which is defined in the [Link] file as "R_EXE=C:/Program Files/R/R-2.7.0/bin/[Link]".

BMA Module R Script

A typical BMA Module R Script can be downloaded from here.

The contents of the BMA Module R script are briefly described below.

--- Read Arguments
--- Check if files exist
--- Read Forecast Length file
--- Load ensembleBMA R package
--- Read input data

--- Assign labels (hard coded - similar to parameter file) (R-Code - make sure to update this line for your model)

labels <- c("SBK_MaxLob_DWD_GME_Q.fs","SBK_MaxLob_DWD_LM_Q.fs",...............)

--- Perform ensembleBMA analysis (R-Code)

enRData <- ensembleData(forecasts=rdata[,labels], dates=rdata$TIME, observations=rdata$OBS)
trainingrule = list(length=forecastlen, lag=1)
rDataBMA <- ensembleBMA(enRData, model="normal", trainingRule=trainingrule, control = controlBMAnormal(maxIter=20))

--- Output Quantiles to File
--- Output Weights to File
--- Output Bias to File

Please make sure that the line "labels <-c("SBK_MaxLob_DWD_GME_Q.fs","SBK_MaxLob_DWD_LM_Q.fs",...............)" is changed according to
the number of models used.

The output of each BMA Module run consists of 3 files, with the following extensions:

*.wei -> weights
*.bia -> bias
*.qan -> quantiles - value for the first forecast step (used only for checking)

Format of Weights file (*.wei)

forecast-date, weight for model 1, weight for model 2, ..., sigma

Format of Bias file (*.bia)

line 1: B value for model 1, B value for model 2, ...
line 2: A value for model 1, A value for model 2, ...
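As an illustration of these formats, the R sketch below reads one weights file and one bias file; the file names are hypothetical and the column handling follows the descriptions above.

# Illustrative sketch (hypothetical file names): reading a *.wei and a *.bia file
# according to the formats described above.
wei <- read.csv("BMA_lead0.wei", header = FALSE)
forecast.dates <- wei[, 1]                  # column 1: forecast date
weights        <- wei[, 2:(ncol(wei) - 1)]  # one weight column per model
sigma          <- wei[, ncol(wei)]          # last column: sigma

bia <- read.csv("BMA_lead0.bia", header = FALSE)
bias.b <- as.numeric(bia[1, ])              # first line: B value per model
bias.a <- as.numeric(bia[2, ])              # second line: A value per model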

Postprocessor

The BMA Module postprocessor reads the output of the R model and prepares the data which is later imported into the FEWS database. The General
Adapter configuration of the postprocessor is shown below.

<executeActivity>
<command>
<className>[Link]</className>
</command>
<arguments>
<argument>%ROOT_DIR%</argument> <!-- root directory -->
<argument>piOutputTimeSeries</argument> <!-- outputDirectory -->
<argument>%TIME0%</argument> <!-- Time0 -->
<argument>3</argument> <!-- max lead time in days -->
<argument>[Link]</argument> <!-- Parameter file - each column represents a row -->
</arguments>
<timeOut>4000000</timeOut>
<overrulingDiagnosticFile>%ROOT_DIR%/[Link]</overrulingDiagnosticFile>
</executeActivity>

The postprocessor uses the output of the BMA Module run (i.e. quantiles, weights and bias) together with the input to generate a new forecast
timeseries plus quantile (10, 25, 75 and 90) timeseries.

Generating Forecast Timeseries

The forecasted timeseries are generated using the weights, sigma and bias correction.

FOR EACH models_i (skip missing forecasts)
    BMA += weight_i * (bias_a_i * forecast_i + bias_b_i)
    sumweights += weight_i
END
BMA = BMA / sumweights

Quantile_10 = BMA - 0.842 * sigma
Quantile_25 = BMA - 0.675 * sigma
Quantile_50 = BMA
Quantile_75 = BMA + 0.675 * sigma
Quantile_90 = BMA + 0.842 * sigma
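The R sketch below mirrors the calculation above for a single time step. It is illustrative only (the actual postprocessor is the Java activity configured earlier) and all variable names are assumptions.

# Illustrative sketch of the combination above for one time step; all names
# are assumed, the real calculation is done by the configured postprocessor.
combineBMA <- function(forecasts, weights, bias.a, bias.b, sigma) {
  ok  <- !is.na(forecasts)                           # skip missing forecasts
  bma <- sum(weights[ok] * (bias.a[ok] * forecasts[ok] + bias.b[ok]))
  bma <- bma / sum(weights[ok])
  c(Quantile_10 = bma - 0.842 * sigma,
    Quantile_25 = bma - 0.675 * sigma,
    Quantile_50 = bma,
    Quantile_75 = bma + 0.675 * sigma,
    Quantile_90 = bma + 0.842 * sigma)
}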

05 Historic Forecast Performance Tool (HFPT) Adapter

1 Introduction
This document ([Link] ) describes the Historic Forecast Performance Tool (HFPT) adapter which was first developed under
Environment Agency R&D project SC080030 ‘risk based probabilistic flood forecasting’ [1]. The original scientific name of the method is ‘Quantile
Regression’ which was subsequently renamed to HFPT. This report includes a description of the Historic Forecast Performance Tool adapter that
can be used within NFFS, the file formats for reading and writing of the quantiles, the configuration of the Historic Forecast Performance Tool
adapter in NFFS. In addition, a limited background on the method is described. In Appendix A the off-line calibration module of the Historic
Forecast Performance Tool is described.

The migration of the prototype R&D to the current version of the NFFS adapter consists of several steps:

Increase the robustness of the module for operational purposes. This includes adding error handling and creation of log files, adding flags to module
output/result files, placing the module under subversion (SVN), and simplifying the configuration (removing unnecessary items).
Updating the SC080030 test configurations for the case studies developed under the R&D project.
Documentation of the configuration of the adapter in NFFS.

[1]
[Link]

2 Role in NFFS
The role of the Historic Forecast Performance Tool is to provide a probability distribution of the water level forecasts (or flow) conditioned on the
deterministic water level forecast (or flow forecast). This can be one, a few or many percentiles or quantiles (including the median or any other
percentile/quantile such as 0.05, 0.10, 0.25, 0.50, 0.75, 0.95).

The Historic Forecast Performance Tool adapter is linked to NFFS by means of the general adapter (see Figure 2.1).

Figure 2.1 Schematic interaction between Delft-FEWS and the Historic Forecast Performance Tool adapter (see Werner et al., 2004; Weerts
et al., 2010)

3 Method description

3.1 Historic Forecast Performance Tool


With increasing leadtime, many sources of uncertainty impact the accuracy of forecasts, with different uncertainty components dominating at
different lead times. In an operational setting, forward propagation of all these uncertainties can be infeasible because it requires many data (e.g.
meteorological ensemble forecasts) or many model runs (EA, 2011?).

The Historic Forecast Performance Tool (i.e. Quantile Regression) adapter as developed in R&D project SC080030 (see Weerts et al., 2011)
makes use of offline-derived quantiles (median, quartiles, percentiles, etc) of the probability density function of the forecast error at different lead
times (i.e. climatology of the forecast error at different lead times). The estimates of the quantiles of the forecast error are conditional on the
(deterministic) water level forecast (or flow forecast) and leadtime. In real time, based on the water level forecast and leadtime, the corresponding
quantiles of the forecast error are looked up and added to the water level forecast (or flow forecast).

The Historic Forecast Performance Tool estimates the uncertainty due to all uncertainty sources affecting the forecast error. In NFFS (i.e.
Delft-FEWS), the Historic Forecast Performance Tool is implemented as a post-processor on a deterministic forecast (see Figure 3.1).

[Link]

Figure 3.1 Example of the Historic Forecast Performance Tool as postprocessor in NFFS

3.2 Calibration Quantile Regression relationships


The Historic Forecast Performance Tool is based on Quantile Regression that is a method for estimating conditional quantiles (Koenker, 2005;
Koenker and Basset, 1978; Koenker and Hallock, 2001). This requires conditioning of the Quantile Regression relationships on a calibration
dataset of forecast values and associated forecast errors at each leadtime of interest (see Appendix A). This calibration is carried out off-line and
is described in more detail in Appendix A. The end result of the calibration is a set of files describing the QR relationships per leadtime (see
Figure 3.1), each containing a look-up table of the percentiles/quantiles of the forecast error conditional on the water level forecast (or flow forecast).

Although it is possible to estimate the QR relationships for each leadtime, this is in practice infeasible. Therefore, a linear interpolation approach
between the QR relationships of different leadtimes is used (i.e. assuming that the change in error characteristics between leadtimes is linear).
Depending on the response time of the catchments, the lead-time interval between the estimated QR relationships may vary (1-2 hours vs 3
hours); for example, for Todmorden an interval of 2 hours is used, while for the Upper Severn an interval of 3 hours is used.
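As an illustration of the linear interpolation between two estimated lead times, consider the R sketch below; all names and values are assumptions and the snippet is not part of the adapter.

# Illustrative sketch of the linear interpolation between QR relationships
# estimated at two lead times (all names and values are assumptions).
interpolateError <- function(leadtime, lt1, lt2, err1, err2) {
  w <- (leadtime - lt1) / (lt2 - lt1)
  (1 - w) * err1 + w * err2
}
# e.g. an error quantile at lead time 4 h from relationships at 3 h and 6 h:
interpolateError(4, 3, 6, -0.42, -0.48)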

3.3 Content of QR relationships files


The QR relationships are contained in csv files. These csv files contain 2 + n columns, where n is the number of quantiles for which estimates
are made. The file has a one-line header containing the names of the estimated quantiles. Each row contains a water level (or flow) value
along with the additive error quantiles; the following columns are given per row:

column     description
1          Record number
2          Hindcasted discharge or water level value at the given lead time (ordered in ascending order)
3..(n+2)   Quantile errors belonging to the hindcasted value given in the same row

An example of the content of such a file is given below:

942 4.9666 -1.57933 -0.42273 -0.13417 0.146459 0.622485

943 5.03154 -1.5847 -0.42347 -0.1344 0.146697 0.623916

944 5.039295 -1.59015 -0.42417 -0.13444 0.147244 0.625369

945 5.071049 -1.59569 -0.42562 -0.13537 0.147856 0.627183

946 5.087218 -1.60132 -0.42711 -0.13711 0.148691 0.629604

947 5.098072 -1.60704 -0.42863 -0.13839 0.149717 0.632067

948 5.113213 -1.61287 -0.42952 -0.13907 0.15061 0.634572

949 5.129232 -1.62231 -0.4305 -0.13991 0.151202 0.637121

950 5.150331 -1.63292 -0.43171 -0.14068 0.151703 0.639716

951 5.15863 -1.64373 -0.43221 -0.14179 0.151718 0.641067

952 5.175358 -1.65474 -0.43394 -0.1433 0.151812 0.641883

953 5.176646 -1.66597 -0.43826 -0.14371 0.152727 0.642714

954 5.219848 -1.67742 -0.44298 -0.14486 0.153603 0.643563

955 5.232494 -1.6891 -0.45232 -0.14687 0.153829 0.644428

956 5.259691 -1.70103 -0.46806 -0.14825 0.154138 0.645962

957 5.271904 -1.71322 -0.47251 -0.14872 0.154948 0.648547

958 5.28762 -1.72569 -0.47522 -0.14922 0.156024 0.65119
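The R sketch below illustrates how such a file could be used to look up the additive error quantiles for a given forecast value; the file name is an example and the snippet is not part of the adapter.

# Illustrative sketch (example file name): look up the additive error quantiles
# for a given forecast value in a QR relationships file as described above.
qr  <- read.csv("QR_2638_LT03.csv", header = FALSE, skip = 1)  # skip the one-line header
fc  <- 5.10                                   # example water level (or flow) forecast
row <- max(findInterval(fc, qr[, 2]), 1)      # column 2 is ordered ascending
errors    <- qr[row, 3:ncol(qr)]              # columns 3..(n+2): error quantiles
corrected <- fc + errors                      # quantiles of the corrected forecast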

4 Functionality HFPT Adapter

4.1 Introduction
The Historic Forecast Performance Tool adapter is written in R (R Development Core Team, 2010) and executed by running an R script from the
general adapter (Weerts et al., 2010; Weerts et al., 2011). The R package can be downloaded from [Link]

Besides the base R package the following R libraries/packages are needed:

Hmisc

XML

zoo

quantreg

The [Link] can be run via the General Adapter using command line arguments.
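A sketch of how such a script can pick up the arguments passed on the command line is shown below; exactly which elements commandArgs() returns depends on the R executable used, so the indices are assumptions (compare the execute activity in section 4.3.3).

# Illustrative sketch only: reading the command line arguments passed by the
# General Adapter (cf. the execute activity in section 4.3.3); indices assumed.
args    <- commandArgs(trailingOnly = TRUE)
infile  <- args[1]                                    # exported PI time series file
outfile <- args[2]                                    # PI time series file to import
time0   <- strptime(args[3], "%Y-%m-%d %H:%M:%S")     # %TIME0% in the time0Format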

4.2 Configuration of the HFPT module in NFFS

4.2.1 [Link]

The R package contained in the ModuleDataSet files is exported and unzipped to the modules directory creating the directory
%REGION_HOME%/Modules/R-2.13.0. In the [Link] files the location of the [Link] must be defined as follows

R_EXE=%REGION_HOME%/Modules/R-2.13.0/bin/[Link]

The location of the HFPT adapter is assumed to be under %REGION_HOME%/Modules/HFPT and this is also assumed to be the
%ROOT_DIR%.

4.2.2 ModuleDataSets

The base package and the additional libraries are contained in the directory named R-2.13.0 (+/- 29 Mb). This directory is stripped as much as
possible of unnecessary items like html and pdf files. However, because of limitations on the size of the ModuleDataSet files in NFFS, due to the
use of weblogic (Boot, pers. comm.), the directory is split up into four ModuleDataSet files, all smaller than 12 Mb. This results in four
ModuleDataSet files:

R_part1_Module.zip

R_part2_Module.zip

R_part3_Module.zip

R_part4_Module.zip

The Retrieve_Zipped_Configurations.xml workflow exports all ModuleDataSet files; for example, adding these lines to the
Retrieve_Zipped_Configurations.xml file

<activity>

<runIndependent>true</runIndependent>

<moduleInstanceId>Midlands_US_HFPT_Modules</moduleInstanceId>

</activity>

would export the ModuleDataSets configured in the moduleInstance Midlands_US_HFPT_Modules. The same needs to be done for the four
ModuleDataSet files containing R.

The Midlands_US_HFPT_Modules file looks like

<?xml version="1.0" encoding="UTF-8"?>

<generalAdapterRun xmlns="[Link] xmlns:xsi="[Link]

xsi:schemaLocation="[Link] [Link]

<general>

<description>Export modules directory</description>

<rootDir>%REGION_HOME%/Modules/</rootDir>

<workDir>%ROOT_DIR%</workDir>

<exportDir>%ROOT_DIR%</exportDir>

<exportDataSetDir>%ROOT_DIR%</exportDataSetDir>

<importDir>%ROOT_DIR%</importDir>

<dumpFileDir>$GA_DUMPFILEDIR$</dumpFileDir>

<dumpDir>%ROOT_DIR%</dumpDir>

<diagnosticFile>%ROOT_DIR%</diagnosticFile>

<convertDatum>false</convertDatum>

</general>

<activities>

<exportActivities>

<exportDataSetActivity>

<moduleInstanceId>Midlands_US_HFPT_Modules</moduleInstanceId>

</exportDataSetActivity>

</exportActivities>

</activities>

</generalAdapterRun>

Of course all the modules must be added to the [Link] file:

<moduleInstanceDescriptor id="R_part1_Module">

<description>Retrieves R_part1 zipped modules</description>

<moduleId>GeneralAdapter</moduleId>

</moduleInstanceDescriptor>

<moduleInstanceDescriptor id="R_part2_Module">

<description>Retrieves R_part2 zipped modules</description>

<moduleId>GeneralAdapter</moduleId>

</moduleInstanceDescriptor>

<moduleInstanceDescriptor id="R_part3_Module">

<description>Retrieves R_part3 zipped modules</description>

<moduleId>GeneralAdapter</moduleId>

</moduleInstanceDescriptor>

<moduleInstanceDescriptor id="R_part4_Module">

<description>Retrieves R_part4 zipped modules</description>

<moduleId>GeneralAdapter</moduleId>

</moduleInstanceDescriptor>

<moduleInstanceDescriptor id="Midlands_US_HFPT_Modules">

<description>Retrieves Midlands_US_HFPT_Modules</description>

<moduleId>GeneralAdapter</moduleId>

</moduleInstanceDescriptor>

4.2.3 Folder structure of HFPT module directory

The following folder structure is necessary and contained in the ModuleDataSet file

config

QR_models

• locationId[1]

• locationId[2]

• locationId[3]

• locationId[n]

Work

4.2.4 Location and file naming convention

Each QR relationship is specific in terms of:

• Forecast location

• Forecast leadtime

The folder location of the QR relationships for a specific location (2638 in this example) would be as follows:

%ROOT_DIR%\HFPT\QR_models\2638

where 2638 specifies the locationId. The QR relationships are contained in comma-separated text files. The file naming convention associated
with the QR relationships should be:

%ROOT_DIR%\Modules\HFPT\QR_models\2638\QR_2638*_LT*.csv

The string _LT means 'Lead Time' and is used to find the lead time associated with the error model. The characters between _LT and .csv indicate
the lead time in hours (for example QR_2638_LT03.csv).
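The R sketch below illustrates how the lead time can be recovered from file names following this convention; the directory and pattern are examples only.

# Illustrative sketch (example directory): list the QR files of one location and
# extract the lead time in hours from the _LT part of the file name.
files     <- list.files("QR_models/2638", pattern = "^QR_2638.*_LT[0-9]+\\.csv$")
leadtimes <- as.numeric(sub("^.*_LT([0-9]+)\\.csv$", "\\1", files))
# e.g. "QR_2638_LT03.csv" -> 3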

Example of Structure of Modules dir in NFFS

4.3 General Adapter Run


The Historic Forecast Performance Tool adapter is linked to NFFS by means of the general adapter (see Figure 2.1).

4.3.1 Header

The T0 is used by the HFPT module and should be exported as an argument. To be able to do that, the time0Format should be defined in the general
section of the general adapter run. This enables the use of %TIME0% later in the general adapter run. Below an example of the general
section is given; this will look the same for each moduleInstance.

<general>

<rootDir>%REGION_HOME%/Modules/HFPT</rootDir>

<workDir>%ROOT_DIR%/Config</workDir>

<exportDir>%ROOT_DIR%/work</exportDir>

<importDir>%ROOT_DIR%/work</importDir>

<dumpFileDir>$GA_DUMPFILEDIR$</dumpFileDir>

<dumpDir>%ROOT_DIR%</dumpDir>

<diagnosticFile>%ROOT_DIR%/work/[Link]</diagnosticFile>

<time0Format>yyyy-MM-dd HH:mm:ss</time0Format>

</general>

4.3.2 Exporting activities

The input files are exported by the general adapter in PI-timeseries format and contain the forecast water level (or flow) for the location. The
locationId of the forecasted time series is used to identify the error model directory.

The HFPT module is stateless. The relative viewperiod determines the length of the exported timeseries. This can be adjusted according to the
specific requirements of the region. If necessary the start can also be controlled by a dummy exportStateActivity (not shown).

Below is an example configuration of the exportTimeSeriesActivity. Note that this example is for Upper Severn Midlands, where an hourly
timestep is used.

<exportTimeSeriesActivity>

<exportFile>[Link]</exportFile>

<timeSeriesSets>

<timeSeriesSet>

<moduleInstanceId>Severn_Usev_FlowToLevel</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2638</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep unit="hour" multiplier="1"/>

<relativeViewPeriod unit="hour" start="-24" end="48"/>

<readWriteMode>add originals</readWriteMode>

</timeSeriesSet>

</timeSeriesSets>

</exportTimeSeriesActivity>

4.3.3 Execute activities

After exporting the input file as [Link], the general adapter carries out the execute activities as shown below.

<executeActivities>

<executeActivity>

<command>

<executable>$R_EXE$</executable>

</command>

<arguments>

<argument>--vanilla</argument>

<argument>%ROOT_DIR%/config/QR_FEWS.R</argument>

<argument>%ROOT_DIR%/work/[Link]</argument>

<argument>%ROOT_DIR%/work/[Link]</argument>

<argument>%TIME0%</argument>

</arguments>

<timeOut>60000</timeOut>

</executeActivity>

</executeActivities>

This executeActivity produces the error estimates conditional on the forecast water level time series contained in [Link]. These results are
written in [Link]. The number of output timeseries depends on the number of quantiles specified in the csv files containing the error model.

Each timeseries gets a suffix in the parameterId based on the header of the error model file

<parameterId>[Link].Q5</parameterId>

<parameterId>[Link].Q25</parameterId>

Etc.

Several flags can be added to the timeseries in [Link]:

Flag="0" value is equal to the original value (t < t0)

Flag="1" value is corrected and reliable

Flag="2" value is corrected and reliable, but interpolated in between lead times

Flag="5" value is unreliable, extrapolated beyond the domain of the Quantile Regression relationships calibration

4.3.4 Importing activities

The time series in the [Link] are imported during the import activities.

<importActivities>

<importTimeSeriesActivity>

<importFile>[Link]</importFile>

<timeSeriesSets>

<timeSeriesSet>

<moduleInstanceId>QR_2638_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2638</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep unit="hour" multiplier="1"/>

<readWriteMode>add originals</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>QR_2638_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].05</parameterId>

<locationId>2638</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep unit="hour" multiplier="1"/>

<readWriteMode>add originals</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>QR_2638_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].25</parameterId>

<locationId>2638</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep unit="hour" multiplier="1"/>

<readWriteMode>add originals</readWriteMode>

</timeSeriesSet>

Etc.

Etc.

</importTimeSeriesActivity>

</importActivities>

To be able to import the timeseries the parameters outputted by the HFPT module adapter must be specified in the
\Config\RegionConfigFiles\[Link] (containing the definitions of all parameters).

4.3.5 Log files

During the HFPT Module run, a log file is created in the work directory. Below is an example of the log file and a screenshot of the display in NFFS.

<?xml version="1.0" encoding="UTF-8"?>

<Diag

xsi:schemaLocation="[Link] [Link]

version="1.2" xmlns="[Link] xmlns:xsi="[Link]

<line level="4" description="Running R in FEWS"/>

<line level="3" description="Running Quantile Regression via R in FEWS"/>

<line level="4" description="Starting R-script"/>

<line level="4" description="input file : F:\NFFS_UF\Midlands_SA\Modules\QR/work/[Link]"/>

<line level="4" description="output file : F:\NFFS_UF\Midlands_SA\Modules\QR/work/[Link]"/>

<line level="4" description="Reading data, locationId: 2074 startDate: 2008-01-04 [Link] endDate: 2008-01-07 [Link]"/>

<line level="4" description="Simulated Q read from F:\NFFS_UF\Midlands_SA\Modules\QR\work\[Link]"/>

<line level="3" description="------------------------------------------------------------------------"/>

<line level="4" description="working with location: 2074 System date/time: Thu Apr 14 [Link] 2011"/>

<line level="3" description="------------------------------------------------------------------------"/>

<line level="4" description="Loading QR models for location: 2074"/>

<line level="4" description="Loading QR models for location: 2074"/>

<line level="4" description="QR models found for location: 2074"/>

<line level="4" description="Generating corrected Q through QR"/>

<line level="3" description="Forecast quantiles generated"/>

<line level="4" description="Writing forecast quantiles to PI-timeseries: F:\NFFS_UF\Midlands_SA\Modules\QR\work\[Link]"/>

<line level="4" description="Forecast quantiles successfully written to PI-timeseries: F:\NFFS_UF\Midlands_SA\Modules\QR\work\[Link]"/>

<line level="3" description="Finished running QR post-processing via R in FEWS"/>

</Diag>

4.4 Display options


Once the forecast run is approved, the results can be displayed. This can best be taken care of in the
Config\SystemConfigFiles\[Link] file. Of course it is also possible to make the individual timeseries visible via the
Config\RegionConfigFiles\[Link] file.

Below is an example of the configuration in the DisplayGroups. This will display the area between the 5-95 quantiles (gray) and the 25-75 quantiles (blue)
in different colours. The colours available are listed

<display name="Llanidloes">

<description>2072</description>

<subplot>

<timeSeriesSet>

<moduleInstanceSetId>DODO_Historical</moduleInstanceSetId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated historical</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="-24" end="0"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceSetId>DODO</moduleInstanceSetId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="-12" end="50"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceSetId>DODO</moduleInstanceSetId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="-12" end="50"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>ImportTelemetry</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>external historical</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="-24" end="0"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

</subplot>

<subplot>

<area>

<color>gray50</color>

<opaquenessPercentage>50</opaquenessPercentage>

<timeSeriesSet>

<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].05</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="0" end="48"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].95</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="0" end="48"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

</area>

<area>

<color>blue</color>

<opaquenessPercentage>50</opaquenessPercentage>

<timeSeriesSet>

<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].25</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].75</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="0" end="48"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

</area>

<timeSeriesSet>

<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="0" end="48"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link].50</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="0" end="48"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceSetId>DODO</moduleInstanceSetId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>simulated forecasting</timeSeriesType>

<timeStep multiplier="60" unit="minute"/>

<relativeViewPeriod unit="hour" start="-12" end="50"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

<timeSeriesSet>

<moduleInstanceId>ImportTelemetry</moduleInstanceId>

<valueType>scalar</valueType>

<parameterId>[Link]</parameterId>

<locationId>2072</locationId>

<timeSeriesType>external historical</timeSeriesType>

<timeStep multiplier="15" unit="minute"/>

<relativeViewPeriod unit="hour" start="-24" end="0"/>

<readWriteMode>read only</readWriteMode>

</timeSeriesSet>

</subplot>

</display>

5 References
Koenker, R.: Quantile Regression, Cambridge University Press., 2005.

Koenker, R.: Quantile regression in R: A vignette, [online] Available from: [Link] 2010.

Koenker, R. and Basset, G.: Regression Quantiles, Econometrica, 46(1), 33-50, 1978.

Koenker, R. and Hallock, K. F.: Quantile Regression, The Journal of Economic Perspectives, 15(4), 143-156, 2001.

Weerts, A.H., J. Schellekens, F. Sperna Weiland, 2010. Real-time geospatial data handling and forecasting: Examples from DELFT-FEWS
forecasting platform/system, IEEE J. of Selected Topics in Applied Earth Observations and Remote Sensing, 3, 386-394, doi:
10.1109/JSTARS.2010.2046882.

Weerts, A.H., H.C. Winsemius, J.S. Verkade, 2011. Estimation of predictive hydrological uncertainty using quantile regression: Examples from
the National Flood Forecasting System (England and Wales), Hydrol. Earth Syst. Sci., 15, 255--265, doi:10.5194/hess-15-255-2011. Available
from: [Link]

Werner, M.G.F., Van Dijk, M. and Schellekens, J., 2004, DELFT-FEWS: An open shell flood forecasting system, In 6th international conference
on Hydroinformatics, Liong, Phoon and Babovic (Eds.), World Scientific Publishing Company, Singapore, 1205-1212.

A Calibration of Quantile Regression Relationships

In order to derive the error models, a long enough hindcast needs to be performed. For each lead time considered, an error model (i.e. relation
between forecast value, forecast error and leadtime) can be derived from such a hindcast. Background information can be found in Weerts et al.
(2011) also at [Link] or directly via
[Link]

A procedure has been written in R to support this. To use this procedure, first produce a long enough hindcast and make sure that observed
values are written to a CSV file as follows:

19 Apr 2004 13:51,131.069

11 Aug 2004 23:21,309.644

24 Oct 2004 10:21,345.762

09 Jan 2005 07:36,347.430

17 Apr 2005 02:21,183.950

25 Oct 2005 20:06,246.336

09 Nov 2005 16:21,192.033

12 Jan 2006 01:06,146.348

16 Mar 2006 00:06,159.018

29 Mar 2006 08:21,277.829

12 Jan 2007 06:15,404.215

22 Jan 2008 13:45,372.617

07 Sep 2008 13:30,398.788

30 Nov 2009 19:30,386.434

The forecast values should be written in exactly the same manner. The dates are not used in the current version. Simply make sure that both files
contain the same number of lines and that the observed value at the top of the file corresponds in time to the value at the top of the simulated file.
If, for instance, QR error relationships are derived for a lead time of 6 hours, and the first entry in the observations represents the date and time
2010-01-01 [Link], then the first value in the paired simulation CSV file should contain the forecast value of the forecast with t0 = 2009-12-31 [Link]
and lead time 6 hours. This setup is not very user friendly and we may change this in the future to use PI-timeseries with lead time information in the header
instead.
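The R sketch below illustrates writing one such pair of files in the date format shown above; all object names (valid times, observed and forecast vectors) are assumptions and must come from your own hindcast.

# Illustrative sketch (all object names assumed): write one pair of CSV files in
# the format shown above. 'valid.times' (POSIXct), 'observed' and 'fc.lead6'
# (forecasts issued 6 hours before each valid time) must have the same length.
stamp <- format(valid.times, "%d %b %Y %H:%M")        # e.g. "19 Apr 2004 13:51"
write.table(cbind(stamp, observed), "CSV/VIKING1_obs_06.csv",
            sep = ",", quote = FALSE, row.names = FALSE, col.names = FALSE)
write.table(cbind(stamp, fc.lead6), "CSV/VIKING1_mod_06.csv",
            sep = ",", quote = FALSE, row.names = FALSE, col.names = FALSE)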

Once these files are known, the user can run the batch file 'QR_derive.bat' with the following 6 inputs as arguments (make sure R is in your path,
if not, the batch file will assume R is located in C:\program files\R\bin):

CSV - observed values

CSV - forecast values

Location name

Lead time

Quantiles

Missing value

A dataset for location VIKING1 has been delivered with the error derivation procedure. An example command line has been given below:

QR_derive "CSV/VIKING1_obs_01.csv" "CSV/VIKING1_mod_01.csv" "VIKING" "1" "0.05, 0.25, 0.5, 0.75, 0.95" "-990"

14 Tips and Tricks


Naming conventions for defining module config files
Specifying time series type

Naming conventions for defining module config files


For complex forecasting systems the number of configuration files can be very large. This is particularly the case for the module config files, and
because the names of these are used in Time Series Sets for storing and retrieving data, the names given should be chosen logically. Before
configuring large numbers of XML files it is wise to define a naming convention and use this throughout. An example of such a convention is
where a number of steps are used to process data prior to running a model.

A forecast model run for the HBV model in the Rhine may be defined in a module called:

HBV_Rhine_Forecast.xml

Data processing steps such as an interpolation module may then be called

HBV_Rhine_ForecastInterpolate.xml

Or a data merge module

HBV_Rhine_ForecastMergeInputs.xml

This clearly indicates the association between modules and brings structure to the configuration.

Specifying time series type


The DELFT-FEWS database handles time series of each of the four types quite differently. As a consequence the allocation of a time series type
can have significant effect on how easy it is to use that data within the system. There are also secondary considerations, including size of the
database, automatic synchronisation to operator clients etc.

The convention is that data series sourced from external systems (telemetry, meteorological forecasts) are referred to as either External Historical
or External Forecasting. All data produced by the system itself is referred to as Simulated Historical or Simulated Forecasting.

To increase flexibility, this convention may be extended:

All data from external sources transformed by DELFT-FEWS in such a way that the transformation is invariant remain External
Historical. This includes transformations such as rating curves, catchment averages etc.

15 External Modules connected to Delft-FEWS


Adapter Manuals — Deltares employee access only
Developing a FEWS (Compliant) Adapter — an overview
External model specific files

Delft3D-FEWS adapter configuration manual
Models linked to Delft-Fews — An overview of the models linked via the Published Interface

Adapter Manuals
The general adapter allows FEWS to run external models (for example hydrological and hydrodynamic models) outside of FEWS. This page is the
repository for the latest third-party adapter manuals and should be read in conjunction with the section of the configuration manual relating to the
general adapter.

Please be aware that this page is not available to external users due to copyright issues.

Name Size Creator Creation Date Comment

EA FFS ISIS Adapter [Link] 77 kB janse_a 22-02-2008 16:41

FEWS-URBS [Link] 63 kB janse_a 03-04-2008 13:42

PDM Adaptor [Link] 621 kB Alex Minett 10-04-2008 14:09

M11 Adapter Manual [Link] 1.23 MB Alex Minett 12-10-2007 10:23

HEC-HMS model adapter


Currently (December 2010) HEC-HMS is used in three configurations:

1. FEWS Po (Paolo Reggiani)
2. FEWS Georgia (Paolo Reggiani)
3. FEWS Sudan (Jan Verkade)

A concise manual of how to add a HEC-HMS model as an adapter in FEWS is attached.

HEC-RAS Model Adapter

Contents

Introduction
Operating Forecasting Model
Operating HEC-RAS Model and FEWS Adapter
Download
Interface between FEWS and HEC-RAS
ID Mapping
Directory structure
Technical details about communication between HEC-RAS adapter and DELFT-FEWS system.
Description of the HEC-RAS data files
Configuring HEC-RAS adapter
Add global properties for hecras model and binaries
Overriding gate, levee breach settings
List of input and output variables which can be exchanged with the Delft-FEWS system and HEC-RAS adapter
Running model from FEWS

Introduction

The conceptual solution for the interface between HEC-RAS and FEWS has been illustrated in Figure 1. Two modes of working are identified that
each support a basic use case. The modes of working are:

Operational forecasting mode


Calibration mode

The technical implementations for both modes of working are quite different. For running HEC-RAS in operational forecasting mode from FEWS, a
software interface will be required that directly controls the model runs.

Calibration is considered as an activity that should be carried out offline from the forecasting system. This means that no direct control from
FEWS will be required, but a user will need to be able to migrate model datasets (calibrated schematizations) from the HEC-RAS calibration
environment to the forecasting environment.

The present documentation describes the first mode of operation. For details about operating the model in calibration mode, please check the standard
HEC-RAS documentation.

Operating Forecasting Model

Figure 1 Components used to run forecasts using HEC-RAS model in the FEWS/CHPS system

Operating HEC-RAS Model and FEWS Adapter

The HEC-RAS model provides the compute engine for running a hydraulic model schematization for a section of a river or a part of a river system.
The HEC-RAS Adapter forms the interface between the FEWS Forecasting Shell and the HEC-RAS model.

The HEC-RAS compute engine is, as its name suggests, the component that actually performs the HEC-RAS simulation. This simulation is
controlled from the FEWS Adapter, and all run time data such as initial and boundary conditions, and parameter settings are passed through the
adapter from and to the FEWS Forecasting Shell.

Download

The model adapter is not available for download here: please e-mail Delft-FEWS Product Management for more information.
Configuration Manual: how to add a hecras model in [Link]

Interface between FEWS and HEC-RAS

The FEWS Adapter for HEC-RAS forms the interface between the FEWS Forecasting Shell and the HEC-RAS model. The adapter accepts the
request from the Forecasting Shell to run HEC-RAS, and imports the required data provided by the Forecasting Shell.

This data shall be provided in a standardized XML interface format, the FEWS Published Interface. Once a HEC-RAS run has been completed,
relevant results are passed back to the Forecasting Shell in the form of the standardized XML interface format.

A schematic representation of the communication between the Forecasting Shell and the HEC-RAS model via the FEWS Adapter is shown in the
diagram below.

Figure 2 Data flows involved during run of HEC-RAS model FEWS adapter

The FEWS Adapter allows running of HEC-RAS by FEWS. The FEWS Adapter should be considered as a thin communication (software) layer on
top of the existing HEC-RAS engine. The adapter is tightly connected to the model engine. For longer term consistency, a FEWS adapter should
therefore preferably be maintained by the owner of the model code, in this case HEC. The FEWS Adapter for HEC-RAS shall be developed by
HEC or handed over to HEC upon completion.

The features of the adapter are listed in the table below.

Preprocessing 01 Clean up work and output folder

Preprocessing 02 Create module diagnostics file in Published Interface (PI) format

Preprocessing 03 Read the time series from PI time series

Preprocessing 04 Convert input PI time series into RAS *.b01 files

Launcher 01 Run HEC-RAS with run period and model alternative

Postprocessing 01 Open/create module diagnostics file in PI format

Postprocessing 02 Read the output time series from the RAS DSS and binary output files

Postprocessing 03 Write the time series to the [Link]

Postprocessing 04 Write the time series to the [Link]

Postprocessing 05 Write the updated PI state file to export folder

ID Mapping

The locations and parameters used in FEWS can be coupled to HEC-RAS DSS path names through ID-mapping. The configuration files for
ID-mapping should be created separately for each HEC-RAS model. Please consult 08 Mapping Id's flags and units for more information on how
to configure id mapping in the FEWS system.

Directory structure

The data directories and configuration files that are required for operating the FEWS Adapter for HEC-RAS have been shown below.

Note that only binary and configuration files relevant to the HEC-RAS adapter are included; in a real configuration many more files may be involved,
used by other modules of the FEWS system.

+---bin
| <FEWS binaries>
\---nerfc_sa

|
+---Config
| +---ColdStateFiles
| | HECRAS_CONNECTICTUT_UpdateStates [Link]....cold state files
| |
| +---IdMapFiles
| | [Link].......................... custom mappings for the HEC-RAS variables
and locations
| |
| +---ModuleConfigFiles
| | HECRAS_CONNECTICTUT_Forecast.xml............ main configuration file of the adapter
| |
| +---ModuleDataSetFiles
| | HECRAS_CONNECTICTUT_UpdateStates.xml.........zipped hecras files, transported to
Models directory
| |
| \---ModuleParFiles
| HECRAS_CONNECTICUT_Parameters............. configuration file which allows to
override some model and structure parameters
|
\---Models
\---hec/hecras
+---bin........................................ directory which contains all HEC-RAS
executables for Windows and Linux platforms
| [Link].......................... generates binary file containing detailed
model output
| dss_writer
| [Link]................... converts geometry files from GUI ASCII
format to binary
| geo_pre
| [Link]............................. performs steady flow simulations
| steady
| [Link]........................... performs unsteady flow simulations
| unsteady
| [Link]
| [Link]
| [Link]
| [Link].1
| libwldelft_native.so
| [Link]............. pre- and post-adapter, converts HEC-RAS
data files to/from FEWS-PI format
| [Link]............................. main library used by the adapter, reads
and writes HEC-RAS data files
| [Link]
| [Link]
| [Link]...................... the rest of the files below are FEWS
dependencies used by adapter
| [Link]
| Delft_FEWS_castor.jar
| Delft_FEWS_schemas.jar
| Delft_PI.jar
| Delft_PI_castor.jar
| Delft_Util.jar
| jaxp-api-1_3.jar
| [Link]
| jaxp-sax-1_3.jar
| jaxp-xalan-1_3.jar
| jaxp-xercesImpl-1_3.jar
| [Link]
| [Link]
| [Link]
| [Link]

| xerces-c_2_8.dll
| [Link]
| [Link]
|
\---connecticut
| run_info.xml.......................... a file generated by FEWS containing paths,
run options
|
+---input.................................. input directory of the adapter, input
FEWS-PI time series files
| [Link]
|
+---log.................................... log messages written by the hec-ras
adapter
| [Link]
|
+---output................................. contains HEC-RAS output converted from the
binary and dss output files
| [Link]
|
\---work................................... working directory of the adapters
ctfld2ras.b01
ctfld2ras.b02
ctfld2ras.b03
ctfld2ras.c02
ctfld2ras.f04
ctfld2ras.g02
ctfld2ras.p01
ctfld2ras.p02
ctfld2ras.p05
[Link]
ctfld2ras.r01
ctfld2ras.r02
ctfld2ras.r03
ctfld2ras.r05
ctfld2ras.u01

ctfld2ras.u02
ctfld2ras.x02

Technical details about communication between HEC-RAS adapter and DELFT-FEWS system.

Communication between FEWS system and pre-/post- adapter strictly follows the FEWS Published Interface format.

The current implementation of the HEC-RAS adapter has all files required to run it (even in a stand-alone mode, without the DELFT-FEWS system). The
diagram below shows all dependencies on the FEWS libraries.

The adapter itself works only as a bridge between the [Link] library and the DELFT-FEWS system. [Link] provides a set of functions for reading and
writing all required HEC-RAS data files, including the files used by the graphical user interface of the HEC-RAS model.

For more technical details about the functionality used by the adapter see the [Link] and [Link] files in the attachment.

The current version of the HEC-RAS adapter is able to update all required HEC-RAS GUI files automatically when the model is started from
DELFT-FEWS. As a result the user gets a complete model input generated by DELFT-FEWS, which allows the model input to be
analyzed in detail using the HEC-RAS GUI.

Description of the HEC-RAS data files

Table 1 List of files to be read and written by adapter

Extension Description pre-adapter input pre-adapter output post-adapter input post-adapter output

.prj project file

.p01 plan files

.g01 geometry files

.f01 flow files

.u01 unsteady flow files

.b01 unsteady run files

.x01 input file for geometry preprocessor

.r01 steady run file

.O01 binary output file

.bco model log file

.c01 output of geometry preprocessor

.dss input / output files

.hyd01 input file for geometry preprocessor

Configuring HEC-RAS adapter

The HEC-RAS model adapter follows the standard way of integrating external models into the Delft-FEWS system by use of the General Adapter. For more
details about the configuration of the General Adapter please check 05 General Adapter Module.

A very important part of the configuration is defined under the <exportRunFileActivity> element. It contains the path to the RAS project file, the location
of the RAS binary files and the list of variables to be written into the output files. Additionally the user may override the logging level of the adapter to DEBUG
in order to see more detailed output from the adapter. This is useful during configuration of the adapter, since the list of possible output variables that
the model can produce and the list of input variables that can be consumed by the adapter are also printed to the log file.

The list of output variables defined under the outputTimeSeriesParametersFilter item uses Regular Expressions. In most cases it is
a list of variable names delimited with the '|' character; for variables whose name can also occur within other variable names
(e.g. FLOW and FLOW AT GATE) it is necessary to use ^ as a prefix and $ as a suffix of the variable. For example:
^STAGE$|^FLOW$ selects only the STAGE and FLOW variables.
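The small R check below (illustrative only) shows why the anchors matter for a filter value like ^STAGE$|^FLOW$, which is also used in the configuration example that follows.

# Illustrative check of why the ^ and $ anchors are needed in the filter value
# ^STAGE$|^FLOW$ (also used in the configuration example below).
vars <- c("STAGE", "FLOW", "FLOW AT GATE")
grepl("STAGE|FLOW",     vars)   # TRUE TRUE TRUE  - FLOW also matches FLOW AT GATE
grepl("^STAGE$|^FLOW$", vars)   # TRUE TRUE FALSE - only the exact variable names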

Example configuration of the HEC-RAS adapter:

<generalAdapterRun xmlns:xsi="[Link] xsi:schemalocation="


[Link] [Link] xmlns
="[Link]
<general>
<description>hecras Model for Kennebec River</description>
<rootDir>$HECRASMODELDIR$/kennebec</rootDir>
<workDir>%ROOT_DIR%/work</workDir>
<exportDir>%ROOT_DIR%/input</exportDir>
<exportDataSetDir>%ROOT_DIR%/work</exportDataSetDir>
<exportIdMap>IdExportHECRAS</exportIdMap>
<exportUnitConversionsId>ExportHECRAS</exportUnitConversionsId>
<importDir>%ROOT_DIR%output</importDir>
<importIdMap>IdImportHECRAS</importIdMap>
<importUnitConversionsId>ImportHECRAS</importUnitConversionsId>
<dumpFileDir>$GA_DUMPFILEDIR$</dumpFileDir>
<dumpDir>%ROOT_DIR%</dumpDir>
<diagnosticFile>%ROOT_DIR%/log/[Link]</diagnosticFile>
</general>
<activities>
<startUpActivities>
<purgeActivity>
<filter>%ROOT_DIR%/log/*.*</filter>
</purgeActivity>
<purgeActivity>
<filter>%ROOT_DIR%/input/*.*</filter>
</purgeActivity>
<purgeActivity>
<filter>%ROOT_DIR%/output/*.*</filter>
</purgeActivity>
<purgeActivity>
<filter>%ROOT_DIR%/work/*.*</filter>
</purgeActivity>
</startUpActivities>
<exportActivities>
<exportStateActivity>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
<stateExportDir>%ROOT_DIR%/work</stateExportDir>
<stateConfigFile>%ROOT_DIR%/work/[Link]</stateConfigFile>
<stateLocations type="file">
<stateLocation>
<readLocation>[Link]</readLocation>
<writeLocation>[Link]</writeLocation>
</stateLocation>
</stateLocations>
<stateSelection>
<warmState>
<stateSearchPeriod unit="day" start="-10" end="-1"/>
</warmState>
</stateSelection>
</exportStateActivity>

<exportTimeSeriesActivity>
<exportFile>%ROOT_DIR%/input/[Link]</exportFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_Preprocessing_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>QINE</parameterId>
<locationId>SIDM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_Preprocessing_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>STID</parameterId>
<locationId>CASM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</exportTimeSeriesActivity>
<exportDataSetActivity>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
</exportDataSetActivity>
<exportParameterActivity>
<fileName>[Link]</fileName>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
</exportParameterActivity>
<exportRunFileActivity>
<exportFile>%ROOT_DIR%/run_info.xml</exportFile>
<properties>
<string value="%ROOT_DIR%/work/[Link]" key="hecRasProjectFile"/>
<string value="$HECRASBINDIR$" key="hecRasBinDirectory"/>
<string value="^STAGE$|^FLOW$" key="outputTimeSeriesParametersFilter"
/>
<string value="^STAGE$|Hydr Radius L" key=
"outputLongtitudionalProfileParametersFilter"/>
<string value="DEBUG" key="logLevel"/>
<string value="false" key="skipBinaryOutput"/>
<string value="LD_LIBRARY_PATH=$HECRASBINDIR$:$LD_LIBRARY_PATH"
key="hecRasEnvironment"/>
</properties>
</exportRunFileActivity>
</exportActivities>
<executeActivities>
<executeActivity>
<command>
<className>[Link]</className>
<binDir>$HECRASBINDIR$</binDir>
</command>
<arguments>
<argument>%ROOT_DIR%/run_info.xml</argument>
</arguments>
<timeOut>1500000</timeOut>
</executeActivity>
</executeActivities>
<importActivities>
<importStateActivity>
<stateConfigFile>%ROOT_DIR%/work/[Link]</stateConfigFile>

<synchLevel>20</synchLevel>
</importStateActivity>
<importTimeSeriesActivity>
<importFile>%ROOT_DIR%/output/[Link]</importFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SSTG</parameterId>
<locationId>AUGM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SQIN</parameterId>
<locationId>AUGM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</importTimeSeriesActivity>
</importActivities>
</activities>
</generalAdapterRun>


The hecras files (b01, prj, u01, x01, [Link] etc.) are saved in the /Config/ModuleDataSet directory. These are copied to the
/Model/hecras/<model>/work directory during the exportDataSet activity in the General Adapter file.

Add global properties for hecras model and binaries

The $HECRASBINDIR$ property is defined in the [Link] at the same level as the Config and Models directories:

HECRASMODELDIR=%REGION_HOME%/Models/hec/hecras
HECRASBINDIR=$HECRASMODELDIR$/bin

Overriding gate, levee breach settings

In the current version of the HEC-RAS adapter the user may also override the computational interval of the model as well as structure parameters.

This can be done using a parameters file which also needs to be referenced by the HEC-RAS module config file. See the <exportParameterActivity>
element in the general adapter configuration above.

The example below shows the list of parameters which are currently supported.

The name of the structure defined in HEC-RAS must exactly match the group id, and the location of the structure (station, river,
chainage) must be the same as the locationId.

<modifierType>HECRAS</modifierType>
<group id="default" name="hec-ras run parameters">
<parameter id="ComputationInterval">
<description>Computation interval in minutes. Does not change interval of output
data.</description>
<intValue>5</intValue>
</parameter>
</group>

<!-- Gate name and locationId should be equal to what is defined in the HEC-RAS gui -->
<group id="Gate #1" name="hec-ras gate parameters">
<locationId>CT River R1/18100</locationId>

<!--

Gate parameters depend on the mode selected in the RAS configuration files (gui),

Possible modes which can be set in the gui are:

based on upstream WS (default)


based on specified reference
based on difference in stage
-->

<!-- parameters valid all modes -->


<parameter id="RateOpen">
<dblValue>0.05</dblValue>
</parameter>

<parameter id="RateClose">
<dblValue>0.05</dblValue>
</parameter>

<parameter id="MaxOpen">
<dblValue>20.0</dblValue>
</parameter>

<parameter id="MinOpen">
<dblValue>0.0</dblValue>
</parameter>

<parameter id="InitOpen">
<dblValue>3.0</dblValue>
</parameter>

<!-- parameters specific for "based on upstream WS" -->


<parameter id="ZOpen">
<description/>
<dblValue>4.0</dblValue>
</parameter>

<parameter id="ZClose">
<description/>
<dblValue>3.0</dblValue>
</parameter>

<!-- parameters specific for "based on specified reference" -->


<!--
<parameter id="ReferenceWSType">
<description>Valid values: Reach, RiverStation, StorageArea</description>
<stringValue>Reach</stringValue>
</parameter>

<parameter id="ReferenceWS">
<description>Depending on the ReferenceWSType parameter</description>
<stringValue>R1</stringValue>
</parameter>

<parameter id="referenceWSOpen">
<description>Reference elevation at which gate begins to open</description>
<dblValue>4.0</dblValue>
</parameter>

<parameter id="referenceWSClose">
<description>Reference elevation at which gate begins to close</description>
<dblValue>3.0</dblValue>
</parameter>

-->

<!-- parameters specific for "based on difference in stage" -->


<!--
<parameter id="stageDiffUSType">
<description>Upstream Reach, RiverStation or StorageArea location for stage
difference computation</description>
<stringValue>Reach</stringValue>
</parameter>

<parameter id="stageDiffUS">
<description>Depends on the stageDiffUSType parameter</description>
<stringValue>Reach</stringValue>
</parameter>

<parameter id="stageDiffDSType">
<description>Downstream River, Reach, RiverStation or StorageArea location for stage
difference computation</description>
<stringValue>Reach</stringValue>
</parameter>

<parameter id="stageDiffDS">
<description>Depends on the stageDiffDSType parameter</description>
<stringValue>R1</stringValue>
</parameter>

<parameter id="stageDiffOpen">
<description>Stage difference at which gate begins to open</description>
<dblValue>0.1</dblValue>
</parameter>

<parameter id="stageDiffClose">
<description>Stage difference at which gate begins to close</description>
<dblValue>0.1</dblValue>
</parameter>
-->
</group>

<group id="Levee Breach" name="hec-ras levee breach parameters">


<locationId>CT River R1/248658</locationId>

<parameter id="IsActive">
<description>true when breach is activated, otherwise model skips it during
computations</description>
<boolValue>false</boolValue>
</parameter>

<parameter id="IsWSStart">
<description>true if trigger for failure is WS elevation</description>
<boolValue>true</boolValue>
</parameter>

<parameter id="ThresholdWS">
<description>water surface elevation for breaching</description>
<dblValue>3.4028E38</dblValue>
</parameter>

<parameter id="ThresholdDuration">
<description>threshold time (hours) for breaching</description>
<dblValue>3.4028E38</dblValue>
</parameter>

<parameter id="StartDate">
<description>Start date for breaching (e.g. 01MAR2001)</description>

<stringValue/>
</parameter>

<parameter id="StartTime">
<description>Start time for breaching (e.g. 1630)</description>
<stringValue/>
</parameter>

<parameter id="CenterStation">
<description>Center of breach (XS station / location)</description>
<dblValue>8800.0</dblValue>
</parameter>

<parameter id="BottomWidth">
<description>Final bottom width</description>
<dblValue>500.0</dblValue>
</parameter>

<parameter id="BottomElevation">
<description>Final bottom elevation</description>
<dblValue>-10.0</dblValue>
</parameter>

<parameter id="LeftSideSlope">
<description>Left side slope</description>
<dblValue>2.0</dblValue>
</parameter>

<parameter id="RightSideSlope">
<description>Right side slope</description>
<dblValue>2.0</dblValue>
</parameter>

<parameter id="BreachTime">
<description>Full formation time (hours)</description>
<dblValue>1.0</dblValue>
</parameter>

<parameter id="WeirCoef">
<description>Breach weir coefficient</description>
<dblValue>2.6</dblValue>
</parameter>

<!-- parameter below are used only when IsPipe = true -->
<parameter id="IsPipe">
<description>true if piping failure, false if overtopping</description>
<boolValue>true</boolValue>
</parameter>

<parameter id="PipingCoefficient">
<description>Piping coefficient (default is .8)</description>
<dblValue>0.8</dblValue>
</parameter>

<parameter id="InitialPipingElevation">
<description>Initial piping elevation</description>
<dblValue>-0.5</dblValue>
</parameter>
</group>


List of input and output variables which can be exchanged with the Delft-FEWS system and HEC-RAS adapter

When the HEC-RAS adapter is configured properly and a forecast is performed from the Delft-FEWS system, a list of input and output variables will be
written into the standard log file of the system. The locations and variables are based on the active <region>.b01 file of the HEC-RAS model
configured in the GUI of HEC-RAS. The HEC-RAS pre-adapter will provide a list of all possible input variables and locations in the following part
of the log file:

Locations and variables listed after the line 'Found input at locations:' can be configured in Delft-FEWS as part of the adapter input; e.g.
[Link] in this case may contain something like the lines below:

[Link]
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>CT RIVER R1/334752.0</locationId>
<parameterId>Flow Hydrograph</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>-999.0</missVal>
<stationName>Connicut River at Thompsonville</stationName>
<units>cms</units>
</header>
<event value="14.98" time="[Link]" flag="0" date="2008-11-06"/>
<event value="14.705" time="[Link]" flag="0" date="2008-11-06"/>
<event value="14.43" time="[Link]" flag="0" date="2008-11-06"/>
<event value="14.155" time="[Link]" flag="0" date="2008-11-06"/>
<event value="13.88" time="[Link]" flag="0" date="2008-11-06"/>
<event value="13.605" time="[Link]" flag="0" date="2008-11-06"/>
...
</series>

Note that <parameterId> and <locationId> are exactly the same as the variables and locations listed in the log file.

In the same way a list of all output variables and locations can be found in the post-adapter log output, for example:

Note that parameter names to be written into the output FEWS-PI file will contain only the short name of the parameter, e.g. Q Right and not
Flow in right overbank, (cfs).

The variables listed here will be written into the file specified as the "--output-binary-pi-file=<path>" argument of the post-adapter. An example of the
resulting FEWS-PI xml can be found below:

Example of the FEWS-PI containing binary output of the HEC-RAS model
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>CT River R1/334752.0</locationId>
<parameterId>W.S. Elev</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>[?]</units>
</header>
<event value="32.06013" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.06013" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.034" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.03394" time="[Link]" flag="0" date="2008-11-06"/>
...
<event value="32.03618" time="[Link]" flag="0" date="2008-11-07"/>
<event value="32.03598" time="[Link]" flag="0" date="2008-11-08"/>
</series>
<series>
<header>
<type>instantaneous</type>
<locationId>CT River R1/334752.0</locationId>
<parameterId>E.G. Elev</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>[?]</units>
</header>
<event value="32.06734" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.06734" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.056885" time="[Link]" flag="0" date="2008-11-06"/>
...
]]></series>

In addition to the variables available in the binary output of HEC-RAS (usually called <file>.O01), DSS output is available. In most cases it
contains the FLOW and STAGE variables. An example of the FEWS-PI file generated from the DSS file is given below:

Example of the FEWS-PI containing DSS output of the HEC-RAS model
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>CT RIVER R1/0.00</locationId>
<parameterId>FLOW</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>CFS</units>
</header>
<event value="24.38823" time="[Link]" flag="0" date="2008-11-06"/>
<event value="-5.8442316" time="[Link]" flag="0" date="2008-11-06"/>
<event value="68.705124" time="[Link]" flag="0" date="2008-11-06"/>
<event value="391.09784" time="[Link]" flag="0" date="2008-11-06"/>
...
<event value="438.6425" time="[Link]" flag="0" date="2008-11-07"/>
<event value="-5259.6562" time="[Link]" flag="0" date="2008-11-08"/>
</series>
<series>
<header>
<type>instantaneous</type>
<locationId>CT RIVER R1/0.00</locationId>
<parameterId>STAGE</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>FEET</units>
</header>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
...
]]></series>

Running model from FEWS

Check the Delft-FEWS User Guide on how to run a configured model from the Delft-FEWS system.

SYNHP Adapter

Introduction

In FEWS Extreme Discharges the SYNHP routing model was applied for the reach between Basel and Maxau. SYNHP is operated by the BfG,
the Landesamt für Wasserwirtschaft Rheinland Pfalz and the Landesamt für Umweltschutz Baden-Württemberg (Lammersen et al 2002,
Rheinland Pfalz, 1993). This model describes the routing of the flood wave through a series of linear stores, and although the model exists for the
Rhine downstream of Maxau, it is applied here only to the reach between Basel and Maxau. FEWS Extreme Discharges is an old version of the
FEWS system and used its own adapters. SYNHP will now be part of FEWS GRADE and therefore a new Module Adapter should be created in
order for FEWS to be able to communicate with SYNHP. The Module Adapter will be implemented in Java and will be run from the General
Adapter.

FEWS communicates with all forecasting modules through adapters. These adapters may be applied at two levels:

At the first level through use of the General Adapter and a published interface. The forecasting module must then comply with this
published interface. Provision is made in the published interface for static data, dynamic time series data, dynamic time series of grid
data as well as passive handling of module initial state files, execution of one or more steps to run the module and elementary
diagnostics.
At the second level through a specific adapter developed for the particular forecasting modules. A library of these is available, including
models such as the SOBEK hydrodynamic model, the HBV Rainfall-Runoff model, the LISFLOOD Rainfall-runoff model etc.

The Adapters component thus comprises a General Adapter Module and, in principle, an unlimited number of Model Specific Adapter
Modules. The FEWS component layout with the location of the adapter used for the SYNHP model component can be seen in the figure
below.

Figure 1: FEWS component layout with Adapters component


This document presents the detailed design of the SYNHP adapter, which is part of the Adapters component. This adapter allows DELFT FEWS
to communicate with SYNHP and is a second-level adapter.

Role in FEWS

The role of the SYNHP adapter is to allow SYNHP to be run by the General Adapter using the DELFT FEWS Published Interface for module
input. The adapter also allows the output of SYNHP to be read by the General Adapter using the DELFT FEWS Published Interface. The General
Adapter exports time series to the Published Interface (PI) file format. The Module pre-processor converts this PI format to a specific ASCII format
to be read by SYNHP. The postprocessor then converts the output from SYNHP back to a PI file format. Figure 2 shows the position of
SYNHP and its adapter in the Delft FEWS system.

Figure 2: The role of the SYNHP adapter in the Delft FEWS system

Functionality

The main functionality of the SYNHP adapter is to be able to communicate with the SYNHP model. The SYNHP adapter will convert the
Published Interface (PI) format, exported by the General Adapter, to an ASCII format specific to SYNHP. It will also convert the output from
SYNHP back to PI format, which is then imported by the General Adapter.

Design

Introduction

The SYNHP module will be implemented like all other external modules, with a pre-processor and a postprocessor. The pre- and post-processor
will convert the time series between the Delft FEWS PI format and a standard ASCII file format. The conventions of the ASCII file format are
described in the section 'Input data'.

Input data

The SYNHP module adapter receives the required time series from the General Adapter and converts the PI file format to an
ASCII file format. The input file for SYNHP should be named '[Link]'. The adapter must be able to link the received input data to the correct variable in the
SYNHP model. The General Adapter takes care of the ID mapping and parameter mapping for the time series to be stored in the local data store
within the NFFS.

ASCII file format conventions


Line 1: Comments with date split in TAG=dd MONAT=mm and JAHR=yyyy RUN=<nr of run>
Line 2: Comments with name of location and start date ([Link])
Line 3: hh:mm<tab>data
....

Example:
DATUM TAG=01 MONAT=01 JAHR=1961 RUN=306768
Basel *01.01.1961
00.00 245
00.00 276
00.00 297
00.00 315
00.00 318
00.00 320
00.00 326
00.00 336
00.00 353
00.00 368
00.00 378

Output data

The calculated time series must be stored in the NFFS database. These time series must be sent back to the General Adapter to be stored in the
local data store. The module adapter converts the ASCII file format (named '[Link]') to a PI file format. The module adapter must be able to link
the calculated time series to the correct output variable. The General Adapter takes care of the ID-mapping and Parameter mapping for the time
series to be stored in the local data store within the NFFS.

ASCII file format conventions


Line 1: Standard text '***WOPL-5.7/30.05. 1 *** BIN-0 : input-1.0' with date when file was created at the end
Line 2: Comments with start date ([Link]) DT=<model time step>*<nr of time steps in output time step>
Line 3: Text 'BEZEICHNUNG'
Line 4: <name of output location>
Line 5: Text 'KILOMETRIERUNG'
Line 6: <Kilometer nr of output location>
Line 7: Text 'MODUL-NUMMERN'
Line 8: <Number of model>
Line 9: Text 'STATUS'
Line 10: <Status>
Line 11: Text 'VORGAENGER'
Line 12: Text '#063'
Line 13: <time step> <data>
....

Example:
***WOPL-5.7/30.05. 1 *** BIN-0 : input-1.0 300907
1.01.1961 DT=0.20*60***** 1 : IST
BEZEICHNUNG
MAXAU
KILOMETRIERUNG
363.00
MODUL-NUMMERN
008
STATUS
2
VORGAENGER
#063
0 324.04
60 332.35
120 354.40
180 382.40
240 415.28
300 448.13
360 477.91
420 497.62
480 505.75
540 509.72
600 509.40
660 510.89

Configuration

The schema used for the configuration of the SYNHP Module adapter is very straightforward. Some standard configuration options for an external
module should be available, and therefore both the pre- and the post-adapter configuration files are similar to those of other external modules like SOBEK
and HBV.

The pre- and post-adapter are very similar in structure. The only difference is the specification pre- or post- for the AdapterActivities. The general
section contains a reference to each folder within the SYNHP Module folder. It specifies which folder is the workDir, the importDir, the exportDir,
where the diagnostics file can be found and what number is used to indicate a missing value. An example of the pre-adapter configuration file is
given below.

The next section contains the activities; both pre- and post-adapter activities are part of this section. They consist of a dateFormat indicator, the
name of the input file (to be read by the adapter) and the name of the output file (to be created by the adapter). The dateFormat indicator
determines the date/time format used in the output file. The next part of this section contains the mapping of the time series to the correct
input or output series for/from the module. It converts FEWS internal location and parameter IDs to the IDs or columns used for the input to and
output from the SYNHP model. An example of the post-adapter configuration file is given below.
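
Neither example file is reproduced in this extract. As a rough indication only, a pre-adapter configuration file might look like the hypothetical sketch
below; every element name in it (synhpPreAdapterConfig, general, preAdapterActivities, timeSeriesMapping, and so on) is an illustrative assumption
based on the description above, not the actual SYNHP schema, and the post-adapter file would be similar with post-adapter activities instead.

<!-- Hypothetical sketch of a SYNHP pre-adapter configuration file.
     All element names and values are illustrative assumptions based on the
     description above; they are not taken from the actual SYNHP schema. -->
<synhpPreAdapterConfig>
   <general>
      <workDir>%REGION_HOME%/Modules/SYNHP/work</workDir>
      <importDir>%REGION_HOME%/Modules/SYNHP/input</importDir>
      <exportDir>%REGION_HOME%/Modules/SYNHP/output</exportDir>
      <diagnosticsFile>%REGION_HOME%/Modules/SYNHP/diagnostics/diag.xml</diagnosticsFile>
      <missingValue>-999.0</missingValue>
   </general>
   <preAdapterActivities>
      <dateFormat>dd.MM.yyyy</dateFormat>
      <!-- PI file exported by the General Adapter -->
      <inputFile>synhp_input_timeseries.xml</inputFile>
      <!-- ASCII file to be read by SYNHP -->
      <outputFile>synhp_input.txt</outputFile>
      <!-- mapping of FEWS location and parameter ids to the SYNHP input series -->
      <timeSeriesMapping>
         <map internalLocationId="Basel" internalParameterId="Q.obs" externalId="Basel"/>
      </timeSeriesMapping>
   </preAdapterActivities>
</synhpPreAdapterConfig>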

Developing a FEWS (Compliant) Adapter

This is an overview of some of the requirements for developing a Delft-FEWS compliant adapter. This list is by no means
exhaustive, so before starting to develop an adapter please contact [Link]@[Link]

Development of FEWS Adapter


The model adapter as described in this manual provides the interface between a so-called model and the Delft-FEWS system. It enables FEWS
to run such a model, thus providing the essential forecasting functionality. For each particular FEWS application applying a model, however, some
aspects of this system have to be configured by the user in order for the system to work correctly. To achieve this, it is useful that the user has at
least some basic understanding of the relation between Delft-FEWS, the model adapter and the forecasting model. In this section, a brief
overview of this relation is provided. For more information, the user is referred to the FEWS manual: General Adapter.

Delft-FEWS and the general adapter

A key feature of DELFT-FEWS is its ability to run external modules to provide essential forecasting functionality. The General Adapter is the part
of the DELFT-FEWS system that implements this feature. It is responsible for the data exchange with these modules and for executing the
modules and their adapters. The Delft3D model adapter is an example of such a module run from the General Adapter (see also FEWS manual:
General Adapter).

In order to develop a model adapter for a FEWS application, it is important to have a clear understanding of the relation between the General
Adapter and a model adapter. This section summarizes some of the functionalities included in the FEWS general adapter module, and their
relation to a model adapter.

The schematic interaction between the general adapter and an external module is shown in the figure below.

Figure 1: schematic interaction between the FEWS and an external module

This figure is understood in the following way:

1. The general adapter is that part of DELFT-FEWS which exports model input, imports model output data, and executes the pre-adapter,
module and post-adapter.
2. Export and import of model data is done in datafiles following the Published Interface (PI) XML format.
3. The preAdapter, Module and postAdapter together form the model adapter. The model adapter is initiated from the general adapter.
4. The preAdapter is that part of the model adapter which converts input data in PI XML format to native model input data.
5. The postAdapter is that part of the model adapter which converts native model output data to PI XML format to be imported by FEWS.
6. The Module is that part of the model adapter which starts a model simulation.

Essential in this division of tasks between model adapter and general adapter is that from the vantage point of the general adapter the model
adapter is a black box, and vice versa. Exchange of information between these components is done based on exchange of PI XML data files;
DELFT-FEWS does not have any knowledge about the modelling system (Delft3D, for example), whereas the modelling system does not have any
knowledge about DELFT-FEWS. Translation of data from one component to the other is done using the model adapter. Thus the model adapter is not
a part of the Delft-FEWS system itself, but is essentially an external program initiated by DELFT-FEWS through the general adapter. In other
words, for the Delft-FEWS system the model adapter(s) + model together form a model in itself, which reads PI XML files as input and writes PI XML files as output.

This external program (the model adapter) should provide all the functionality to: i) convert specific PI XML data to specific native model input
files and a model state, ii) initiate a model simulation, and iii) convert model output data and end state to PI XML data readable by DELFT-FEWS. In
addition, this model adapter has the following tasks: iv) logging of errors during all steps by the model adapter and by the model itself, and v)
administration of the model state. For both these tasks pre-defined PI XML file formats exist, readable by DELFT-FEWS.

The model adapter should, as far as possible, be made configurable, and this must be done in a consistent way.

Log messages and diagnostics

External Modules and their adapters can be written in any programming language. However, in case Java is used, they have to implement the
interface ModuleAdapterRunnable. Modules and module adapters can only communicate with the General Adapter via the Published Interface.
The only two means to exchange information with the General Adapter are:

A diagnostic file written in the Published Interface format. If such a diagnostic file is available the General Adapter will read it and write
corresponding logs to the FEWS system.
The return code indicating the result of an execution.

return code    meaning

non-zero       graceful failure - read message

0              successful execution

Modules and their adapters cannot use exception or event handling as a means to communicate with the General Adapter.

Return code error information


The GA distinguishes between two types of failures of a module or module adapter: graceful and non-graceful.

On graceful failure the GA expects a non-zero return code. The type of error and the accompanying message are stored in the diagnostics
file as expected by the GA.
On non-graceful failure the stack trace will be written for the adapter (applies to adapters written within FEWS only); otherwise a generic
message will be given that the adapter has crashed.
On successful execution the GA expects a zero return code.

Actions taken by the GA:

On a non-graceful failure, or on a graceful failure that indicates a fatal error (see the definition of the diagnostics file below):
the GA will make a binary dump of the entire module configuration (a list of directories supplied by the module adapter constructor);
the GA will stop processing the remaining instructions.
On all other occasions:
the messages in the diagnostics file are passed on to the event logging system;
further operation is continued.

Diagnostics file

The published interface documentation describes the (simple) XML file format. Per batch (pre-processing - module run - post-processing) one
diagnostics file is always expected. Each line/message in this file contains the actual text and the warning level attached to this text. The warning
levels in the diagnostics file will be interpreted by the general adapter according to the following table:

Level Name Description Example

3 info information, all is well Module PreProcessor : program ended

2 warning warning information Module Processor : unresolved symbol

1 error critical problems Module Processor: module fails (returns 1)

0 fatal fatal error, complete module crash Module Processor: division by zero

All levels higher than 3 are regarded as non-essential (debug) information. The warnings are recorded in the system, but no actions will be taken.
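
As an illustration, a minimal diagnostics file consistent with the table above might look like the sketch below. The namespace and schema-location
attributes are omitted here, and the messages themselves are invented; only the level semantics follow the table.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a PI diagnostics file; namespace and schema attributes omitted.
     Each line carries a message text and a warning level as described above. -->
<Diag>
   <line level="3" description="Module PreProcessor: program started"/>
   <line level="2" description="Module Processor: unresolved symbol"/>
   <line level="3" description="Module PostProcessor: program ended"/>
</Diag>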

Updating PI State XML file within Pre and Post Model Adapters

This section briefly explains the method of using the state files for a model in the FEWS environment.

Generally, a model requires only the state at the start of the run and writes the state at the end of each run.

In FEWS, in an update run (sometimes referred to as a historical run), the initial state of the model is copied from the database to the appropriate
directory/file where the model expects it to be.
At the end of each run a state is copied back to the FEWS database to be used again for the next (update/forecast) model run.

To facilitate this, the following steps are followed in FEWS:


FEWS copies the model state file from the database to a model directory, say ".\Model\state". The model state can be a zipped file, which is then
unzipped by the preprocessor and copied to the appropriate model state directory. This zipped file may contain one or more state files, depending
on the model (for example, the HBV model has a number of state files equal to the number of districts). The information on the state locations and
the date and time of the state is written to a PI State XML file. An example is shown below.

<?xml version="1.0" encoding="UTF-8"?>
<State
xsi:schemaLocation="[Link]
[Link]
version="1.2" xmlns="[Link]
xmlns:xsi="[Link]
<stateId>warm</stateId>
<timeZone>0.0</timeZone>
<dateTime date="2008-02-10" time="[Link]"/>
<stateLoc type="file">

<readLocation>D:\Fews\App\po_sa\Modules\Topkapi\Trebbia\Fews_in\State\Topkapi_in.stt</readLocation>

<writeLocation>D:\Fews\App\po_sa\Modules\Topkapi\Trebbia\Fews_in\State\Topkapi_in.stt</writeLocation>
</stateLoc>
</State>

The model is then run, using this state file, for a given period. The state files are written back to the state directory (say ".\Model\state")
at the end of the simulation.
After the model run is completed, the postprocessor copies the state file or files back to the zipped model state file (say: [Link]).

The postprocessor not only copies back the state file but also updates the PI state file with the last state file and changes the <dateTime> field to
the appropriate date and time.

For example, if the model is run until 2006-06-30 00:00, then the date and time in the <dateTime> field should be changed to
<dateTime date="2006-06-30" time="[Link]"/> and the file name under <writeLocation> should be changed to
<writeLocation>D:\FewsPO\Modules\Topkapi\Reno\States\[Link]</writeLocation>
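
As an illustration of this step, the state file from the earlier example would, after such an update by the postprocessor, look roughly as follows; the
namespace attributes are again omitted and the zipped state file name used here is purely a placeholder, since the actual name depends on the model.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of the PI state file after the postprocessor has updated it:
     the dateTime now reflects the end of the run and the writeLocation points
     to the new zipped state file (placeholder file name). -->
<State version="1.2">
   <stateId>warm</stateId>
   <timeZone>0.0</timeZone>
   <dateTime date="2006-06-30" time="00:00:00"/>
   <stateLoc type="file">
      <readLocation>D:\FewsPO\Modules\Topkapi\Reno\States\state.zip</readLocation>
      <writeLocation>D:\FewsPO\Modules\Topkapi\Reno\States\state.zip</writeLocation>
   </stateLoc>
</State>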

The General Adapter will then act on the new PI state file accordingly and store the correct state in the FEWS database.

External model specific files


PDM State file
The ISIS .ini file

PDM State file


The PDM state files (sfilein) contain the following data:

0
5 123456
79.48435 4.6171375E-02 0.0000000E+00 0.0000000E+00 0.0000000E+00

The third line of this file contains data about:

(1) Soil moisture content (mm)


(2) baseflow (mm/hr)
(3) surface flow (mm/hr)
(4) previous surface flow (mm/hr)
(5) inflow into surface flow store

The ISIS .ini file


In order to configure an ISIS model in FEWS you must configure the adapter with an .ini file. This file is generated by ISIS (File > Export to EA
FFS).

Introduction

The ISIS/RS adapter for the EA FFS will be driven by an INI file created by ISIS. The purpose of the file is to identify the locations and unit types
of the inputs, outputs and controls of the ISIS model in a form that the adapter can read without knowledge of the ISIS DAT file format.

The file format has been developed with the following in mind:

To minimise coding and maximise the sharing of code between the adapter and FloodWorks, the parameters are the same as those in
the equivalent FloodWorks parameter file.
The element list is generic, and should allow adapters to be developed in future for other FloodWorks algorithms such as PDM and KW
with a minimum of effort. Other adapters would use the same code to read the INI file. This has resulted in a relatively verbose file. Most
of the elements in the file are actually size specifications, indicating the sizes of arrays of parameters.
The generic approach means there are several elements that will always take the same value in the ISIS adapter but which would have
different values in, say, a PDM adapter.

Sections

The INI file is in 14 sections as follows:

1. General
2. InputDimensions
3. Inputs
4. ControlDimensions
5. Controls
6. OutputDimensions
7. Outputs
8. StatesDimensions
9. RealParameterDimensions
10. RealParameters
11. IntegerParameterDimensions
12. IntegerParameters
13. CharacterParameterDimensions
14. CharacterParameters

The sections can be in any order.

Parameters

The full parameter list is below, followed by endnotes explaining some of the particular ISIS values. The following definitions will be useful:

Flow input series: QTBDY marked as being used as an input to the operational model
Stage input series: HTBDY marked as being used as an input to the operational model
Wind input series: component of a WIND unit marked as being used as an input to the operational model
Simple control: GAUGE, VERTICAL SLUICE, RADIAL SLUICE, ABSTRACTION, GATED WEIR, PUMP, or BLOCKAGE marked as being
used as an input to the operational model
Flow output series: node marked as providing flow output to the operational model
Stage output series: node marked as providing stage output to the operational model

Name | Generic meaning | ISIS value

General
AlgorithmID | Algorithm ID | IWRS2
ModelID | Model ID | The ID of this particular ISIS model - the name specified for the export (e.g. MyModel)

InputDimensions
Total | Total number of input series | Number of flow or stage input series + 2 * Number of wind input series
Count0d | Number of scalar sets of input series | 0
Count1d | Number of 1-d arrays of input series | 2
Count2d | Number of 2-d arrays of input series | 0
Size1d | Dimensions of 1d series | Number of flow or stage input series, 2 * Number of wind input series
Size2d | Dimensions of 2d series | Omit

Inputs
IDs | Location IDs for the input data streams | ISIS node label for each flow or stage input, Identifier for each wind component input stream (each separated by a comma and a space)

ControlDimensions
Total | Total number of control series | Number of simple controls + 5 * Number of breaches
Count0d | Number of scalar sets of control series | 0
Count1d | Number of 1-d arrays of control series | 2
Count2d | Number of 2-d arrays of control series | 0
Size1d | Dimensions of 1d series | Number of simple controls, 5 * Number of breaches
Size2d | Dimensions of 2d series | Omit

Controls
IDs | Location IDs for the control data streams | Upstream ISIS node label for each simple control, Identifier for each breach component stream (each separated by a comma and a space)

OutputDimensions
Total | Total number of output series | Total number of output series
Count0d | Number of scalar sets of output series | 0
Count1d | Number of 1-d arrays of output series | 1
Count2d | Number of 2-d arrays of output series | 0
Size1d | Dimensions of 1d series | Number of output series
Size2d | Dimensions of 2d series |

Outputs
IDs | Location IDs for the output data streams | ISIS node label for each output (each separated by a comma and a space)

StatesDimensions
Total | Total number of states | 0
Count0d | Number of scalar states | 0
Count1d | Number of 1-d arrays of states | 0
Count2d | Number of 2-d arrays of states | 0
Size1d | Dimensions of 1d states | Omit
Size2d | Dimensions of 2d states | Omit

RealParameterDimensions
Total | Total number of real parameters | 3
Count0d | Number of scalar real parameters | 3
Count1d | Number of 1-d arrays of real parameters | 0
Count2d | Number of 2-d arrays of real parameters | 0
Size1d | Dimensions of 1d real parameter arrays | Omit
Size2d | Dimensions of 2d real parameter arrays | Omit

RealParameters
Values | Real parameters | Blank, Timeout per 1h of simulated time (s), Save interval (s)

IntegerParameterDimensions
Total | Total number of integer parameters | 2 + Number of flow or stage input series + Number of simple controls + Number of output series + 2 * Number of breaches
Count0d | Number of scalar integer parameters | 2
Count1d | Number of 1-d arrays of integer parameters | 3
Count2d | Number of 2-d arrays of integer parameters | 1
Size1d | Dimensions of 1d integer parameter arrays | Number of flow or stage input series, Number of simple controls, Number of output series
Size2d | Dimensions of 2d integer parameter arrays | 2, Number of breaches

IntegerParameters
Values | Integer parameters | Version number (currently 4), ISIS label length (8 or 12), Unit type for each flow or stage input series, Unit type for each simple control series, Unit type for each output series, Unit type and component count for each breach

CharacterParameterDimensions
Total | Total number of character parameters | 1 + Number of flow or stage input series + Number of simple controls + Number of output series + Number of wind input series + 2 * Number of breaches
Count0d | Number of scalar character parameters | 1
Count1d | Number of 1-d arrays of character parameters | 4
Count2d | Number of 2-d arrays of character parameters | 1
Size1d | Dimensions of 1d character parameter arrays | Number of input series, Number of simple controls, Number of output series, Number of wind input series
Size2d | Dimensions of 2d character parameter arrays | 2, Number of breaches

CharacterParameters
Values | String parameters | Blank, ISIS node label for each flow or stage input, Upstream ISIS node label for each simple control, ISIS node label for each output, ISIS node label for each wind input, Upstream and downstream ISIS node label for each breach

Example

Attached.

Delft3D-FEWS adapter configuration manual

Delft3D-FEWS adapter WIKI

The present WIKI contains a manual for configuration of the Delft3D-FEWS adapter. This adapter provides the interface between the
Delft-FEWS forecasting shell and the Delft3D modelling package. It has the following main features:

1. The Delft3D-FEWS adapter supports the following packages of the Delft3D suite: FLOW, WAQ (including ECO, BLOOM, CHEM etc)
and PART.
2. The Delft3D-FEWS adapter provides all of the basic functionalities required to run the models in an operational system.
3. The Delft3D-FEWS adapter is set up to be fully compliant with the Delft-FEWS system and philosophy.

In this manual, the following items will be addressed:

For a brief overview of some generic features of Delft3D and existing Delft3D-FEWS applications, see General.
For a step-by-step manual on how to set up the Delft3D-FEWS adapter for a FEWS application, see Adapter configuration.
Examples of a configured Delft3D-FEWS system are provided in Example configuration to serve as a guideline in setting up new
systems.
Best practices with regard to configuring Delft3D-FEWS systems are provided in Best practices.

For more information on Delft-FEWS, the reader is referred to Delft-FEWS WIKI. For more information on the Delft3D modelling package, the
reader is referred to Delft3D website.

Delft3D-FEWS adapter WIKI

Table of Contents

1. General

2. Adapter configuration

01 Design philosophy of Delft3D model adapter


02 Adapter configuration - Delft3D model adapter in relation to FEWS
03 Adapter configuration - configuration workflow
04 Adapter configuration - XML configuration scheme
05 Adapter configuration - template files
06 Adapter configuration - naming conventions
07 Adapter configuration - state handling and communication files

3. Example configuration

4. Best practices

Contact
The Delft3D-FEWS adapter was set up and tested to facilitate all "standard" modelling applications in Delft3D. In case of missing features or
bugs, however, please contact Daniel Twigt or Arjen Markus. Similarly, with questions concerning the contents of this WIKI, please contact
Daniel Twigt.

1. General
This section provides some generic information on Delft3D and Delft-FEWS. Also, examples of existing Delft3D-FEWS applications are provided
for the interested reader.

Delft3D | Delft-FEWS | Delft3D-FEWS

Delft3D

Delft3D is the main 3D modeling package of Deltares. The package consists of a number of modules, each of which has a specific purpose.
Available modules are:

Delft3D-FLOW: Module for 2D and 3D hydrodynamic simulations.


Delft3D-WAQ: Module for 2D and 3D water quality simulations. Can be used with hydrodynamic flow fields determined by Delft3D-FLOW.
Delft3D-PART: Module for particle tracking simulations. Can be used with hydrodynamic flow fields determined by Delft3D-FLOW.
Delft3D-WAVE: Module for ....

For more information on these modules, see Delft3D website.

Delft-FEWS

Delft-FEWS provides an open shell system for managing forecasting processes and/or handling time series data. Delft-FEWS incorporates a wide
range of general data handling utilities, while providing an open interface to any external (forecasting) model. The modular and highly configurable
nature of Delft-FEWS allows it to be used effectively for data storage and retrieval tasks, simple forecasting systems and in highly complex
systems utilising a full range of modelling techniques. Delft-FEWS can either be deployed in a stand-alone, manually driven environment, or in a
fully automated distributed client-server environment. For more information, see FEWS WIKI.

Delft3D-FEWS

Subject of the present WIKI is the Delft3D-FEWS adapter. This adapter provides the interface between Delft3D and Delft-FEWS, based on the
FEWS design philosophy. This implies that:

1. FEWS manages data streams and workflows to execute model simulations / forecasts.
2. Delft3D is used to perform model simulations / forecasts based on data provided by FEWS.
3. The adapter provides the interface between both; it converts data exported by FEWS to native model input files, manages model state
handling, executes model simulations and converts model output data to the FEWS PI XML file format.

In this way, the Delft3D model adapter allows a Delft3D model to be embedded in an operational FEWS system.

Whereas FEWS was originally set up to facilitate 1D/2D operational runoff modelling, the system has also found its way into the 3D realm of open
waters and lakes. The combination of Delft-FEWS with the 3D Delft3D modeling package offers a range of new possibilities in this sense, for
example with regard to operational surge modelling in open waters and water quality modelling, where vertical variability may be essential.
Up to this date, a number of pilot projects have been performed at Deltares to exploit these benefits. A short summary of these projects is
provided below.

Algenbloei FEWS application

ADD INFORMATION

StPetersburg DSS Demonstrator FEWS application

ADD INFORMATION

2. Adapter configuration

Introduction

This section of the WIKI contains all information required for configuring the Delft3D-FEWS adapter for a FEWS application.

1. An overview of the design philosophy upon which the adapter was developed is provided in section Design Philosophy. This
section motivates the choice for the current approach, states the high-level assumptions upon which the adapter is based and describes
the high-level design choices.
2. Section Configuration workflow provides a step-by-step plan which the user is advised to follow when setting up a Delft3D-FEWS
system.
3. Section GeneralAdapter Configuration provides some useful background information on the relation between the Delft3D-FEWS
adapter (model adapter) and the FEWS General Adapter from which this model adapter is run.
4. Section XML configuration scheme provides background information on the XML configuration scheme used for the Delft3D-FEWS
adapter. This section elaborates the contents of this file and the way in which to configure it.
5. Section Template files provides background information on the template files used by the Delft3D-FEWS adapter. This section
elaborates the contents of these files and the way in which to configure them.
6. Section Naming conventions describes the various naming conventions used for the XML configuration scheme and the template files.
During configuration these naming conventions should be adhered to strictly.

Example files from a fully configured Delft3D-FEWS system are provided for reference and discussed in the separate section Example
configuration.

Best practices with regard to setting up a Delft3D-FEWS system are provided in section Best practices.

Table of Contents

01 Design philosophy of Delft3D model adapter

02 Adapter configuration - Delft3D model adapter in relation to FEWS

03 Adapter configuration - configuration workflow

04 Adapter configuration - XML configuration scheme

05 Adapter configuration - template files

06 Adapter configuration - naming conventions

07 Adapter configuration - state handling and communication files

01 Design philosophy of Delft3D model adapter


The Delft3D-FEWS model adapter is intended to work with each of the Delft3D modules listed in the General section. Since these modules can
differ significantly in:

Model input file format


Model output file format
Relationships between different Delft3D modules

A number of high-level design choices have been made during development of this model adapter. These choices were made to:

Provide consistency between the adapters for the different Delft3D packages
Make the adapter fully compliant with "the FEWS standard"
Keep the effort required for adapter configuration to a minimum
Keep the adapter largely unaware of the different types of native model input file format, thus making it less prone to errors due to
changes in these formats.

For the Delft3D-FEWS model adapter user, these choices amount to the following practical issues, which should be taken into account during
adapter configuration:

1. The adapter assumes that a fully calibrated and set up model is provided for configuration in the FEWS system. Additional changes to the
model schematisation may require additional changes in the configuration of the model adapter (though not necessarily).

2. The adapter works using templates for native model input files. In each of these template files, keywords serve as placeholders for
dynamic time series data to be obtained from FEWS. The model adapter subsequently replaces these placeholder keywords with the
appropriate data, exported from FEWS in PI XML format from the generalAdapter module. The template files have to be prepared during
configuration of the model adapter, following the pre-defined naming conventions described in this manual. By using this approach, the
adapter is (almost) independent of the structure of the Delft3D input files. Also, this approach is adequate for all Delft3D modules.

3. The model adapter is subdivided into three sub-modules. These sub-modules can also be run independently of one another, which is
relevant for specific Delft3D applications, like coupled Delft3D-Sobek models or coupled FLOW-WAQ simulations. The sub-modules are:
The pre-adapter, which prepares the native model input data based on data exported to PI XML by FEWS from the
generalAdapter.
The adapter, which executes the model simulation.
The post-adapter, which converts selected model output to the appropriate PI XML data types to be imported by FEWS.

4. The adapter and template files allow for combining dynamic data provided by FEWS with static data included in the template files. This is
relevant in case, for example, a significant number of constant discharges are included in a FLOW or WAQ model (can be up to 50+). In
that case, these constant discharges do not have to be included in (governed from) the FEWS configuration.

5. The adapter assumes that dynamic data is provided by FEWS at all times. Error checking of this data should be done in FEWS primarily
(based on available FEWS functionalities). If inappropriate data is provided by FEWS, the error checking done by the model adapter is
limited. Instead, the adapter will log and display error messages as provided by the model in such a situation.

6. The adapter applies a specific configuration file, used to define some adapter-specific settings. The contents of this file are described in this
manual.

7. Delft-FEWS was originally set up to work with 1D and 2D models (river system and catchment modelling). The Published Interface
(PI) XML file format, used for data transfer between FEWS and model systems, was set up to accommodate this. This implies that the PI
XML format supports both 1D and 2D data types. For 3D data, however, no specific PI XML data type is available. This implies that in
FEWS, a 3D dataset should be seen as a stack of 2D or 1D grids and timeseries. The user should take this into account during
configuration by, for example, assigning each layer of a 3D grid a unique parameter/location combination in FEWS.

02 Adapter configuration - Delft3D model adapter in relation to FEWS


The Delft3D model adapter as described in this manual provides the interface between a Delft3D model and the Delft-FEWS system. It enables
FEWS to run such a Delft3D model, thus providing the essential forecasting functionality. For each particular FEWS application applying a Delft3D
model, however, some aspects of this system have to be configured by the user in order for the system to work correctly. To achieve this, it is
useful that the user has at least some basic understanding of the relation between Delft-FEWS, the model adapter and the forecasting model
(Delft3D in this particular case). In this section, a brief overview of this relation is provided. For more information, the user is referred to the FEWS
manual: General Adapter.

Delft-FEWS and the general adapter

A key feature of DELFT-FEWS is its ability to run external modules to provide essential forecasting functionality. The General Adapter is the part
of the DELFT-FEWS system that implements this feature. It is responsible for the data exchange with these modules and for executing the
modules and their adapters. The Delft3D model adapter is an example of such a module run from the General Adapter (see also FEWS manual:
General Adapter).

In order to configure a Delft3D-FEWS application, it is important to have a clear understanding of the relation between the General Adapter and
the Delft3D model adapter. This section summarizes some of the functionalities included in the FEWS general adapter module, and their relation
to the Delft3D model adapter.

The schematic interaction between the general adapter and an external module (like the Delft3D model adapter) is shown in the below figure.

Figure 1: schematic interaction between the General Adapter and an external module

This figure is understood in the following way:

1. The general adapter is that part of DELFT-FEWS which exports model input, imports model output data, and executes the pre-adapter,
module and post-adapter.
2. Export and import of model data is done in datafiles following the Published Interface (PI) XML format.
3. The preAdapter, Module and postAdapter together form the model adapter. The model adapter is initiated from the general adapter.
4. The preAdapter is that part of the model adapter which converts input data in PI XML format to native model input data.
5. The postAdapter is that part of the model adapter which converts native model output data to PI XML format to be imported by FEWS.
6. The Module is that part of the model adapter which starts a model simulation.

Essential in this division of tasks between model adapter and general adapter is that from the vantage point of the general adapter the model
adapter is a black box, and vice versa. Exchange of information between these components is done based on exchange of PI XML data files;
DELFT-FEWS does not have any knowledge about the modelling system (Delft3D in this case), whereas the modelling system does not have any
knowledge about DELFT-FEWS. Translation of data from one component to the other is done using the model adapter. This model adapter is not a
part of the Delft-FEWS system itself, but is essentially an external program initiated by DELFT-FEWS through the general adapter. This external
program (the model adapter) should provide all the functionality to: i) convert specific PI XML data to specific native model input files, ii) initiate a
model simulation and iii) convert model output data to PI XML data readable by DELFT-FEWS. In addition, this model adapter has the following
tasks: iv) logging of errors during all steps by the model adapter and by the model itself, and v) administration of the model state. For both these
tasks pre-defined PI XML file formats exist, readable by DELFT-FEWS.

Note that, while the Delft3D model adapter provides all these functionalities, as described in this document, an amount of system configuration is
required to get each particular Delft3D-FEWS system operational. This configuration (described in the remainder of this document) must be done
in a consistent way for the model adapter and the general adapter based on the relationship outlined above.

03 Adapter configuration - configuration workflow


This section outlines the different steps required in setting up the Delft3D-FEWS adapter for a Delft3D model and a FEWS application. This

process is represented as a number of workflows (or steps). The FEWS configurator should follow these steps and adhere to the conventions
therein in order to setup a Delft3D-FEWS system. Some specifics on naming conventions and template files are outlined in separate sections (see
sections Adapter configuration - naming conventions and Adapter configuration - template files ).

Setting up a Delft3D-FEWS application consists of three basic steps:

1. Preparing the Delft3D model setup. Note: it is assumed a fully set up and calibrated model is provided for configuration in FEWS (see
section Design philosophy). This fully set up model is subsequently prepared for usage from FEWS in this step.

2. Preparing the model adapter XML scheme (hereinafter called [Link], but other names can be prescribed by the user).

3. Preparing the General Adapter XML scheme (hereinafter called [Link], but other names can be prescribed by the user).

These three steps are outlined below in more detail. Before doing so, however, the user should setup a directory structure which adheres to the
conventions listed below.

Note that, for reference, examples of each of the steps listed below are provided in section Example configuration.

Directory structure | Step 1: Preparing the Delft3D model setup | Step 2: Preparing the General Adapter XML scheme | Step 3: Preparing the
model adapter XML scheme

Directory structure

To minimize the number of choices that have to be made by the user (thus reducing the possibility for mistakes), the Delft3D-FEWS adapter
expects a fixed set of directories and files. Some directories, however, are changeable by the user for practical purposes. This (mandatory)
directory structure is illustrated by figure 1.

Figure 1: mandatory directory structure for setting up Delft3D-FEWS configuration.

The directory structure illustrated in figure 1 is elaborated in more detail in the table below. Conventions as indicated in this table should be
adhered to by the user when setting up a Delft3D-FEWS application.

Directory/file | Purpose | Status

%rootdir% | Root directory of the Delft3D module. For example, %FEWS_root%/Modules or %FEWS_root%/Modules/Flow. Can be prescribed by the user in [Link] | Changeable

%rootdir%/input/ | Contains all the files with timeseries and map stacks exported by Delft-FEWS | Fixed

%rootdir%/input/[Link] | XML-file with (scalar) timeseries, exported from [Link] | Fixed

%rootdir%/input/map_<param>.xml | XML-file describing the map stacks with parameter <param> (see naming conventions in section Adapter configuration - naming conventions) | Fixed

%rootdir%/stateInput/ | Contains the initial conditions (state) from which the computation must start. Exported from [Link] | Fixed

%rootdir%/stateInput/[Link] | The XML-file describing the time of the state files | Fixed

%rootdir%/output/ | Contains all the files with timeseries and map stacks as output by the adapter (model results) and to be imported by Delft-FEWS | Fixed

%rootdir%/output/[Link] | XML-file with the modelled (scalar) timeseries, to be imported by Delft-FEWS | Fixed

%rootdir%/output/map_<param>.xml | XML-file describing the modelled map stacks with parameter <param> (see naming conventions in section Adapter configuration - naming conventions) | Fixed

%rootdir%/stateOutput/ | Contains the new state file produced by the model, to be imported by Delft-FEWS | Fixed

%rootdir%/stateOutput/[Link] | The XML-file describing the time of the new state files, to be imported by Delft-FEWS | Fixed

%rootdir%/logs/ | Contains the diagnostics file produced by the GeneralAdapter and by the model adapter | Fixed

%rootdir%/logs/[Link] | The XML-file containing diagnostic information as output by the GeneralAdapter and by the model adapter | Fixed

%rootdir%/<workdir> | Directory in which the model computation will be run. Specified by the user in [Link]. For example, %rootdir%/Flow or %rootdir%/<modelname> | Changeable

%rootdir%/<modeldir> | Directory in which the static (non-changing) model schematisation is stored, to be copied to <workdir> by the model adapter. Specified by the user in [Link]. For example, %rootdir%/FlowSchematisation or %rootdir%/<modelname>Schematisation | Changeable

Note that it is possible to include multiple <workdir>'s and <modeldir>'s in a %rootdir% folder (for example, FLOW and WAQ model). In this case,
all fixed folders will be shared by both models.

Step 1: Preparing the Delft3D model setup

For a Delft3D model to be used with the Delft3D-FEWS adapter, a number of adaptations have to be made to particular model input files. This
process is illustrated by the workflow in figure 2. The different steps in this workflow are elaborated in more detail below.

Figure 2: workflow of required adaptations to Delft3D model for usage in Delft3D-FEWS application.

1a) The Delft3D-FEWS adapter works using template model input files. In these templates, placeholder keywords can be assigned, which
are replaced by dynamic data from FEWS by the model adapter. These placeholder keywords have to be included during configuration of
the Delft3D-FEWS application. Naming conventions for keywords and template files are described in sections Adapter configuration -
naming conventions and Adapter configuration - template files .

1b) In a similar fashion, the simulation time frame has to be updated by the model adapter for new model simulations. This is achieved by
including placeholder keywords for the model timeframe in the MDF (FLOW) or INP (WAQ, PART) files. Additionally, for FLOW
simulations applying gridded meteorological forcing, placeholder keywords have to be included for the spatially varying meteorological
fields in the MDF file. Naming conventions for keywords and template files are described in sections Adapter configuration - naming
conventions and Adapter configuration - template files .

1c) In the case of WAQ or PART simulations where the hydrodynamics are obtained from a preceding FLOW simulation (in the form of
communication files), the WAQ template files should refer to the correct FLOW <workdir> (see directory structure) where the
communication files are stored.

1d) Once all template files are prepared, both these template files and the static model schematisation (grid files etc) have to be included
in the appropriate <modeldir> (see directory structure)

Step 2: Preparing the General Adapter XML scheme

By default, the FEWS general adapter module is used to execute the Delft3D adapter, export the necessary input and state data for a model
simulation and import the necessary output and state data prepared by the model adapter. This is described in more detail in section Adapter
configuration - generalAdapter configuration . The general adapter module must be configured in the correct way to provide the required input to
the Delft3D adapter, execute the Delft3D adapter and import the output provided by this adapter. This process is illustrated by the workflow in
figure 3. The different steps in this workflow are elaborated in more detail below.

Figure 3: workflow of required configuration to general adapter scheme ([Link]).

1a) In the <general> section of the [Link], the correct <rootDir> has to be specified (hereinafter specified as %rootdir%,
see also directory structure described above).

1b, 1c and 1d) The <exportDir>, <importDir> and <diagnosticFile> directories have to be set to the appropriate paths, based on the
%rootdir% specified at step 1a and the directory structure described above (respectively %rootdir%/input, %rootdir%/output and
%rootdir%/logs/[Link]).

2a) Under <exportActivities>, <exportStateActivity> the <stateExportDir> should be set to %rootdir%/stateInput. The <StateConfigFile>
should be set to %rootdir%/stateInput/[Link]. See also directory structure above.

2b) Under <exportActivities>, <exportTimeSeriesActivity>, dynamic timeseries data should be exported to


%rootdir%/input/[Link]. Under <exportActivities>, <exportMapStacksActivity>, dynamic mapstack data (grids) should be exported
to %rootdir%/input/map_<param>.xml, where <param> is a parameter dependant keyword described in section xxx.

3a) Under <executeActivities>, <executeActivity>, <command>, the pre-adapter, adapter and post-adapter have to be executed with
the <classname> option, referring to the appropriate Java class (include names!). Under <arguments>, the following execution arguments
are mandatory: 1) %rootdir%, and 2) the model adapter XML scheme ([Link]) (at which location?).

4a) In <importActivities>, <importStateActivity>, the <stateConfigFile> should be set to %rootdir%/stateOutput/[Link] (see directory
structure described above).

4b) In <importActivities>, <importTimeSeriesActivity>, the appropriate timeseries as prepared by the model adapter based on mapping
relations described in the [Link] file have to be included. Note that in this case, no importIdMap is required. In
<importActivities>, <importMapStacksActivity>, the appropriate map stacks as prepared by the model adapter based on mapping
relations described in the [Link] file have to be included. Note that in this case, no importIdMap is required.
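
By way of summary, the fragment below is a hedged sketch of how the settings from steps 1a) to 4b) might fit together in the general adapter
configuration. It only strings together the elements named above; the paths, file names and the Java class name are placeholders, and many
mandatory general adapter settings are omitted, so it is not a complete or validated configuration.

<!-- Hedged sketch only: placeholder paths, file names and class name;
     many mandatory general adapter settings are left out. -->
<general>
   <rootDir>%rootdir%</rootDir>
   <exportDir>%rootdir%/input</exportDir>
   <importDir>%rootdir%/output</importDir>
   <diagnosticFile>%rootdir%/logs/diagnostics.xml</diagnosticFile>   <!-- placeholder file name -->
</general>
<exportActivities>
   <exportStateActivity>
      <stateExportDir>%rootdir%/stateInput</stateExportDir>
      <stateConfigFile>%rootdir%/stateInput/states.xml</stateConfigFile>   <!-- placeholder file name -->
   </exportStateActivity>
   <exportTimeSeriesActivity>
      <!-- export the dynamic (scalar) timeseries to %rootdir%/input -->
   </exportTimeSeriesActivity>
   <exportMapStacksActivity>
      <!-- export the dynamic map stacks to %rootdir%/input/map_PARAM.xml, with PARAM the parameter dependent keyword -->
   </exportMapStacksActivity>
</exportActivities>
<executeActivities>
   <executeActivity>
      <command>
         <classname>nl.example.delft3d.PreAdapter</classname>   <!-- placeholder class name -->
      </command>
      <arguments>
         <argument>%rootdir%</argument>
         <argument>%rootdir%/adapterConfig.xml</argument>   <!-- the model adapter XML scheme -->
      </arguments>
   </executeActivity>
</executeActivities>
<importActivities>
   <importStateActivity>
      <stateConfigFile>%rootdir%/stateOutput/states.xml</stateConfigFile>
   </importStateActivity>
   <importTimeSeriesActivity>
      <!-- import the timeseries written by the post-adapter to %rootdir%/output; no importIdMap is required -->
   </importTimeSeriesActivity>
   <importMapStacksActivity>
      <!-- import the map stacks written by the post-adapter to %rootdir%/output -->
   </importMapStacksActivity>
</importActivities>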

Step 3: Preparing the model adapter XML scheme

Include text

Figure 3: workflow of required configuration to model adapter scheme ([Link]).

04 Adapter configuration - XML configuration scheme


The Delft3D-FEWS model adapter applies an XML configuration file in order to group configuration dependent settings in a practical way. This
XML file has an associated XSD scheme which defines the XML file format in a structured way. This XML/XSD approach is used for all
configurable FEWS files.
During Delft3D-FEWS configuration, the user is required to configure the contents of the model adapter XML file for that particular Delft3D-FEWS
setup. While this XML file is ASCII based and can be edited using a standard text editor, the user is advised to use specific XML editors, like
XMLSpy, to do this. This way, the user can benefit from the structured file contents as defined in the XSD scheme.

In this section, the contents of the Delft3D-FEWS model adapter XML scheme are described and it is explained how to edit this file for
particular Delft3D-FEWS setups. While this is also briefly discussed in the section Configuration workflow, this section provides more details on
the various settings in this file.

For this manual, we will assume that the XML configuration file is named [Link] (in accordance with the XSD scheme). During
configuration, however, the user is free to rename this file as required.

XML configuration file

As outlined above, the Delft3D-FEWS model adapter applies an XML configuration file in order to group configuration dependent settings in a
practical way. More precisely, this configuration file is used to define model adapter settings not supported by the FEWS General Adapter module
(or settings requiring additional flexibility). During setup of this file, the goal was to prevent duplicate information in the model adapter XML
configuration file and the general adapter module, thus preventing possible conflicts and errors in the system configuration. As such, available
settings in the model adapter XML configuration are limited to those strictly required by the system.

Figure 1 below shows the XSD scheme for the Delft3D-FEWS model adapter XML configuration file. In the below text, the sections <general>,
<preAdapter> and <postAdapter> are explained in more detail.

PLACEHOLDER FIGURE XSD SCHEME

Section <general>

Because Delft3D consists of several modules that can be used in different configurations, it is necessary to specify which module should be run
(the keyword <module>). This module (either FLOW, WAQ, ECO, PART or WAVE) determines, together with the string specified as the run-id
(keyword <runId>), which files will be used (see Table 1 below). With these four keywords the user should be able to define the characteristics of
most runs of Delft3D modules. For some particular cases, for example when running Delft3D-FLOW with RTC, additional input arguments are
required to execute the model. These arguments should be provided as additional input arguments when executing the model adapter from the
general adapter module. For more information, see section Configuration workflow.

The keyword <workDir> indicates in which directory the computational programs will start and the keyword <modelDir> should refer to the
directory containing the template files and the other (fixed) files that together make up the input for the computation. See also section
Configuration workflow about the relation between these directories and the overall directory structure of a Delft3D-FEWS application.

Keyword Settings

<description> Optional description of file contents

<module> FLOW, WAQ, ECO, PART or WAVE

<runId> RunId of the template input files (<runId>.mdf, <[Link]> or <[Link]>)

<workDir> Working folder in which simulation is run

<modelDir> Repository directory of static model data and templates

Table 1: Configurable settings in <general> section of XML configuration file Delft3D-FEWS adapter.

Section <preAdapter>

The <preAdapter> section contains one keyword only, <steeringTimeSeriesName>: the name of the timeseries that is to be used to determine
the time frame of the simulation (found in the timeseries exported by Delft-FEWS).

The reason for this keyword is that Delft-FEWS determines the actual modelling timeframe based on user defined start and stop times in the
general adapter module and on the availability of state information within this timeframe. Because of the latter, the start time of a model simulation
is not necessarily fixed but may vary based on the availability of this state information (see also section State handling). The Delft3D model should
be able to cope with this by changing the simulation period accordingly. To achieve this, the model adapter will assess the user specified
<steeringTimeSeriesName> and will base its simulation period (starttime and stoptime) on the duration of this timeseries.

The name of the timeseries is to be formed in this way: 'external name of the parameter/external name of the location'. For instance: if the
external parameter name is 'H' and the location is 'Southern boundary', then the name for that time series is: 'H/Southern boundary'. See also the
section on Naming conventions.

Section <postAdapter>

The section <postAdapter> describes the actions to be taken after completion of the model run. Rather than blindly exporting all the results from the
model run to Delft-FEWS and let it pick up the timeseries and map stacks of interest, the adapter exports only those timeseries and map stacks
described in this section. This is preferable given the (possibly) significant file size of Delft3D output files.

To achieve this, the <postAdapter> section contains a mapping table relating Delft3D output to internal Delft-FEWS parameters and locations.
Based on these mapping relations, the postAdapter will convert native model output to PI XML timeseries and mapStacks to be imported by
FEWS during the <importActivities> of the general adapter module (see also section Configuration workflow).
Note that this mapping table works in a similar way to FEWS IdMaps, where external locations and parameters and internal locations and
parameters are related to each other. Since the mapping table in this configuration file already links model output to the appropriate internal
locations and parameters in FEWS, no specific IdMap is required in this case. It is essential, however, that these locations and parameters exist
in the given FEWS application.

The external locations and parameters as described in this mapping table are derived from the parameter names and the location names as seen
in Delft3D-GPP (the adapter uses the same library as Delft3D-GPP to read the model output files). This implies that the external locations and
parameters should be set based on naming conventions which are outlined in section Naming conventions.

05 Adapter configuration - template files


The Delft3D model adapter uses template input files for each of the Delft3D models run using this adapter. This implies that a number of
adaptations have to be made to particular model input files as provided with the original model. More specifically, to those model input files in which
dynamic data or settings have to be included based on data provided by FEWS, being:

1. Model attribute files with dynamic timeseries data.


2. Model master definition files (*.mdf for FLOW, *.inp for WAQ and PART and *.mdw for WAVE) in which the modelling timeframe is
defined.

In these templates, placeholder keywords have to be assigned, which are replaced by dynamic data from FEWS by the model adapter. These
placeholder keywords have to be included during configuration of the Delft3D-FEWS application. In the sections below, these template files and
keywords are described per Delft3D module.
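
As a minimal, hypothetical sketch of this placeholder mechanism (not the adapter's actual code), the snippet below fills a template by replacing the
FLOW time keywords described further below; the file names and values are illustrative only.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TemplateFillExample {
    public static void main(String[] args) throws IOException {
        // Hypothetical template file containing placeholder keywords.
        String template = new String(
                Files.readAllBytes(Paths.get("run01.mdf.template")), StandardCharsets.US_ASCII);

        // Replace the placeholder keywords with values derived from the FEWS export
        // (here hard-coded for illustration: minutes since the MDF reference time).
        String filled = template
                .replace("FLOW_TIME_START", "2880")
                .replace("FLOW_TIME_STOP", "4320");

        Files.write(Paths.get("run01.mdf"), filled.getBytes(StandardCharsets.US_ASCII));
    }
}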

NOTE: use SI units at all times, and use NaN for missing values at all times.

Open items: should the adapter check for the presence of -999 values and non-SI units (e.g. ft) and give a warning? Should a mismatch in timezone
between the FEWS timeseries and the model be checked (the timezone is not always specified in the MDF; mention this explicitly in the manual)?

Delft3D-FLOW | Delft3D-WAQ | Delft3D-PART | Delft3D-WAVE

Delft3D-FLOW

Delft3D-FLOW (referred to as FLOW hereinafter) applies a wide range of attribute files for different types of input data. Essentially, all forcing data
is contained in these files, and the FLOW Master Definition File (MDF) refers to these. For FLOW, the Delft3D model adapter distinguishes
between the following types of files:

1. Files containing timeseries data (*.bct, *.bcc, *.bcb, *.dis, *.eva, *.tem, *.wnd)
2. Files containing gridded data (gridded meteorological forcing)
3. The master definition file (*.mdf)

In these files, keywords should be included in the following way:

Files containing timeseries data

Each timeseries which has to be updated by the Delft3D model adapter based on data provided by FEWS, has to be replaced by the following
keyword.

Keyword Description

FLOW_TIMESERIES Placeholder to fill in the timeseries in the so-called tim format (only the time and data including the 'number of records'
entry where applicable, not the header). The keyword should be followed by the names of all timeseries that should be
filled in there, separated by spaces (see example below). Note that naming of timeseries should be done in accordance
with naming conventions described in Naming conventions.

Example, based on discharge input file

table-name 'Discharge : 1'


contents 'inoutlet '
location 'Location_1 '
time-function 'non-equidistant'
reference-time 01012008
time-unit 'minutes'
interpolation 'linear'
parameter 'time ' unit 'min'
parameter 'flux/discharge rate ' unit 'm3/s'
parameter 'Salinity ' unit 'ppt'
parameter 'Temperature ' unit '°C'
FLOW_TIMESERIES 'q/bound-1' 's/bound-1' 't/bound-1'

In this particular case, timeseries for discharge, salinity and temperature will be added by the model adapter for this location. Note that a fixed
reference time is assumed. The model adapter will subsequently determine the relative timeframe of the included timeseries with respect to this
reference time (as required by FLOW).

Files containing gridded data

Files containing gridded data are built from scratch by the model adapter, based on mapStack data exported by the general adapter (see section
Configuration workflow). This can be achieved based on placeholder keywords in the MDF file (see next section).

The master definition file

In the master definition file the following keywords have to be included:

Keyword Description

FLOW_TIME_START Start of the simulation (format in accordance with Delft3D-FLOW). This is actually the time in minutes since the
reference time, found in the mdf-file.

FLOW_TIME_STOP Stop of the simulation (format in accordance with Delft3D-FLOW)

FLOW_TIME_RST Total simulation duration, applies to output restart (state) file at end of model simulation. Note that if this keyword is
omitted and a fixed interval is specified, the postAdapter will select the last restart file written by the model.

FLOW_MAPSTACK Placeholder for the name of the file that will hold the gridded forcing data (as found in the mapstack files exported by
FEWS). It should be followed by the name of the parameter, for example, FLOW_MAPSTACK 'pressure'. In this case,
XML mapStack data described by the file map_pressure.xml will be used to construct the input file [Link] for
FLOW. What about the name for the reference grid?

Example MDF file

...
Itdate= #2008-01-01#
Tunit = #M#
Tstart= FLOW_TIME_START
Tstop = FLOW_TIME_STOP
Dt = 10
...
Restid= #<runId>.rst#
...
Flmap = FLOW_TIME_START 60 FLOW_TIME_STOP
Flhis = FLOW_TIME_START 10 FLOW_TIME_STOP
Flpp = FLOW_TIME_START 0 FLOW_TIME_STOP
...
Flrst = FLOW_TIME_RST
...
Filwu = FLOW_MAPSTACK 'windu'
Filwv = FLOW_MAPSTACK 'windv'
Filwp = FLOW_MAPSTACK 'pressure'
Filwr = FLOW_MAPSTACK 'humidity'
Filwt = FLOW_MAPSTACK 'temperature'
Filwc = FLOW_MAPSTACK 'cloudiness'

Additional notes on preparation of FLOW model for Delft3D model adapter

Some additional items which have to be taken into account during preparation of a Delft3D model for usage by the Delft3D model adapter are;

1. The adapter assumes a fixed reference time is applied. This time (as indicated in the MDF template file) will be used to determine the
relative time frame for all timeseries in the attribute files. This also implies that the reference time as indicated in the MDF file and in these
attribute files have to be identical.
2. The adapter assumes that astronomical tidal forcing data is provided with the original model (if applicable). This implies that tidal
components as prescribed in the *.bca and *.cor files will be used. These components are static forcing from the vantage point of FEWS.
3. The interval for map and timeseries output as specified in the MDF (Flmap and Flhis) should correspond with the interval of the PI XML
data imported by FEWS under <importActivities> in the general adapter.
4. The model adapter will check all attribute files found in the static data repository (<modelDir>, see section Configuration workflow) for the
abovementioned keywords, as will it check the MDF file. Note that the name of the MDF file should match the <runId> as specified in the
model adapter configuration file (see section XML configuration scheme). The user is free in naming of the attribute files.
5. It is assumed that the FLOW model starts from a spatially varying restart file at all times, whether this is a 'warm' state file or a 'cold'
initial state file. This file has a fixed name at all times. This implies that the model output state (restart file) is renamed to this fixed name
by the model adapter. During configuration, a cold state file in a similar format as a restart file must be provided.

Delft3D-WAQ

In contrast to Delft3D-FLOW, Delft3D-WAQ uses a single input file, the *.inp file (though additional files can be included in the *.inp file using the
INCLUDE statement). The simulation timeframe, the timeseries data and the gridded data are all specified in this file.

The following keywords can be included in the *.inp file:

Keyword Description

WAQ_TIME_START Start of the simulation (format in accordance with Delft3D-WAQ/ECO: yyyy/mm/dd-hh:mm:ss)

WAQ_TIME_STOP Stop of the simulation (format in accordance with Delft3D-WAQ/ECO: yyyy/mm/dd-hh:mm:ss)

WAQ_TIMESERIES Placeholder to fill in the timeseries in the WAQ /ECO format (only the time and data, not the header). The keyword
should be followed by the names of all timeseries that should be filled in there, separated by spaces (see example
below).

WAQ_MAPSTACK Placeholder for the name of the file that will hold the gridded forcing data (as found in the mapstack files exported by
FEWS). It should be followed by the name of the parameter, for example, WAQ_MAPSTACK 'windvel'. In this case, XML
mapStack data described by the file map_windvel.xml will be used to construct the input file [Link] for WAQ.

In addition to these keywords, it is important to note that the inp file should refer to the communication files as output by a preceding FLOW
simulation. In all likelihood, this FLOW simulation was run during an earlier phase of the FEWS workflow, in a <workDir> specified in the model
adapter configuration file (see sections Configuration workflow and XML configuration scheme). The communication file paths in the inp file
should point towards this <workDir>. Note that it is strongly advised to use relative paths in this case!

Below, examples of an *.inp template file are provided.

Example INP file (timeframe)

...
WAQ_TIME_START ; start time
WAQ_TIME_STOP ; stop time
0 ; constant timestep
0003000 ; time step
...
WAQ_TIME_START WAQ_TIME_STOP 0120000 ; monitoring
WAQ_TIME_START WAQ_TIME_STOP 0120000 ; map, dump
WAQ_TIME_START WAQ_TIME_STOP 0120000 ; history
...

Example INP file (path of communication files)

...
-2 ; first area option
'..\<workDir FLOW>\[Link]' ; area file
;
-2 ; first flow option
'..\<workDir FLOW>\[Link]' ; flow file
;
...

Example INP file (timeseries data 1)

...
TIME BLOCK
DATA
'Continuity' 'Salinity' 'DetC' 'DetN' 'DetP'
WAQ_TIMESERIES 's/bound-1' 'DetC/bound-1' 'DetN/bound-1' 'DetP/bound-1'
...

Example INP file (timeseries data 2)

...
FUNCTIONS
'Wind'
LINEAR
DATA ;

WAQ_TIMESERIES 'windvel/bound-1'
...

Example INP file (gridded data)

...
SEG_FUNCTIONS
'Radsurf' ; name of segment function
ALL
WAQ_MAPSTACK 'sunshine'
...

Example INP file (initial conditions and restart)

...
'<runId>.res' ; initial conditions in binary file
'<runId>.res' ; binary file
...

Delft3D-PART

Delft3D-WAVE

06 Adapter configuration - naming conventions


Presently, the following parameters can be selected (taken from gnf_data.h):

07 Adapter configuration - state handling and communication files


PM

3. Example configuration
PM

4. Best practices

Models linked to Delft-Fews


The table below gives an overview of the models linked via the Published Interface to the Delft-FEWS system. For these models a model adapter is
available. Please note that adapters that have not been developed by Deltares cannot be used without permission of the owner. All models
indicated in bold typeface are running in operational systems.

Model Type Supplier/Owner Country

ISIS Hydrodynamic HR/Halcrow UK

PDM Rainfall-Runoff CEH UK

TCM Rainfall-Runoff CEH UK

KW Routing (kinematic wave) CEH UK

PACK Snow Melt CEH UK

ARMA Error Correction CEH UK

PRTF Event Based RR PlanB UK

PCRASTER Dynamic Modelling Software Pcraster environmental software Netherlands

TRITON Surge propagation/Overtopping PlanB UK

TWAM 2D Hydrodynamics PlanB UK

STF Transfer functions EA UK

DODO Routing (layered Muskingum) EA UK

MCRM Rainfall-Runoff EA UK

Modflow96/VKD 3D groundwater Deltares/Adam Taylor Netherlands/UK

Mike11 Hydrodynamics DHI Denmark

NAM Rainfall-Runoff DHI Denmark

TOPKAPI Rainfall-Runoff Univ. of Bologna Italy

HBV Rainfall-Runoff (incl. snowmelt) SMHI Sweden

Vflo Distributed Rainfall-Runoff Vieux & Associates USA

SWMM Urban Rainfall-Runoff USGS USA

HEC-RAS Hydrodynamic USACE USA

HEC-HMS Hydrological USACE USA

HEC-ResSim Reservoir Simulation USACE USA

Snow17 Snow Melt NWS USA

SAC-SMA Rainfall-Runoff NWS USA

Unit-HG Unit-Hydrograph NWS USA

LAG/K Routing (hydrological) NWS USA

SARROUTE Routing (hydrological) NWS USA

SSARRESV Reservoir Simulation NWS USA

RESSNGL Reservoir Simulation NWS USA

BASEFLOW Baseflow Simulation NWS USA

CHANLOSS Channel loss Simulation NWS USA

APICONT Rainfall-Runoff NWS USA

CONSUSE Consumptive use of River Simulation NWS USA

GLACIER Glacier simulation NWS USA

LAYCOEF Routing Model NWS USA

MUSKROUT Routing Model NWS USA

RSNELEV Rain Snow Elevation Simulation NWS USA

SACSMA-HT Rainfall-Runoff (Heat Transfer) NWS USA

LAYCOEF NWS USA

TATUM Routing Model NWS USA

RTC Tools Real-Time Control, Model Predictive Control, Reservoir Simulation Deltares Netherlands

PRMS Rainfall-Runoff Univ. of Karlsruhe Germany

SynHP Hydrodynamics BfG Germany

SOBEK Hydrodynamics, Water Quality, RR Deltares Netherlands

SOBEK-2d Linked 1d/2d inundation modelling Deltares Netherlands

DELFT-3D 2/3D Hydrodynamics Deltares Netherlands

Sacramento Rainfall-Runoff Deltares Netherlands

RIBASIM Water distribution + Reservoir Deltares Netherlands

REW Distributed Rainfall-Runoff Deltares Netherlands

DELFT3D 2/3D Hydrodynamics/ Water quality Deltares Netherlands

Flux 1D Hydrodynamics Scietec Austria

URBS Rainfall-Runoff and hydrological routing Don Carroll Australia

Grid2Grid Distributed Hydrologic Model CEH UK

Wageningen Model Rainfall-Runoff Haskoning Netherlands

WASIM-ETH Distributed Rainfall-Runoff Joerg Schulla Switzerland

PREVAH Distributed Rainfall-Runoff WSL-Switzerland Switzerland

PCOverslag Calculation of wave overtopping and wave runup Deltares Netherlands

Modflow

Modflow can be connected to Delft-FEWS using the Modflow adapter developed by Adam Taylor.

Documentation in the form of a PDF file is available as an attachment.

PCOverslag
PCOverslag can be connected to Delft-FEWS using the PCOverslagAdapter developed by Deltares.

The files needed to run the PCOverslagAdapter from Delft-FEWS can be found in the install artifacts [Link]. The following files
should be located in the bin directory in the PCOverslag Module location:

Adapters_PCOverslag.jar
[Link]
Delft_PI.jar
Delft_PI_castor.jar
Deflt_Util.jar
[Link]
[Link]
[Link]
[Link]

Input

Wave height
Wave direction
Wave period
Waterlevel

Output

Golf oploop
Golf overslag
Golf oploop niveau
Golf overslag niveau
Overslag debiet

Below is an example of the general adapter configuration file, to be used with version Stable2011.02 onwards.

<?xml version="1.0" encoding="UTF-8"?>


<generalAdapterRun xmlns="[Link]
xmlns:xsi="[Link] xsi:schemaLocation="[Link]
[Link]
<!-- General information for General Adapter run -->
<general>
<description>PC Overslag model voor het IJsselmeer</description>
<rootDir>%REGION_HOME%/Modules/PCOverslag</rootDir>
<workDir>%ROOT_DIR%/work</workDir>
<exportDir>%WORK_DIR%/input</exportDir>
<exportDataSetDir>%ROOT_DIR%/profiles</exportDataSetDir>
<exportIdMap>Id_PCOverslag</exportIdMap>
<importDir>%WORK_DIR%/output</importDir>
<importIdMap>Id_PCOverslag</importIdMap>
<dumpFileDir>%REGION_HOME%/DumpFiles</dumpFileDir>
<dumpDir>%ROOT_DIR%</dumpDir>
<diagnosticFile>%WORK_DIR%/diagnostics/[Link]</diagnosticFile>
<convertDatum>false</convertDatum>
</general>
<activities>
<startUpActivities>
<purgeActivity>
<filter>%WORK_DIR%/input/*.*</filter>
</purgeActivity>
<purgeActivity>
<filter>%WORK_DIR%/output/*.*</filter>
</purgeActivity>
<purgeActivity>
<filter>%ROOT_DIR%/profiles/*.*</filter>
</purgeActivity>
<purgeActivity>

<filter>%WORK_DIR%/*.*</filter>
</purgeActivity>
</startUpActivities>
<exportActivities>
<exportTimeSeriesActivity>
<exportFile>[Link]</exportFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</exportTimeSeriesActivity>
<exportDataSetActivity>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
</exportDataSetActivity>
<exportRunFileActivity>
<description>This pi run file is passed as argument to PcOverslagAdapter</description>
<exportFile>%WORK_DIR%/[Link]</exportFile>
<properties>
<description>Specific configuration required for PcOverslagAdapter</description>
<string value="no" key="WITH_ITERATION"/>
<string value="%ROOT_DIR%/profiles" key="PROFILE_DIR"/>
</properties>
</exportRunFileActivity>
</exportActivities>
<executeActivities>
<executeActivity>
<description>PC Overslag Adapter</description>
<command>
<className>[Link]</className>
<binDir>%ROOT_DIR%/bin</binDir>
</command>
<arguments>

<argument>%WORK_DIR%/[Link]</argument>
</arguments>
<timeOut>300000</timeOut>
</executeActivity>
</executeActivities>
<importActivities>
<!-- Import PC Overslag results-->
<importTimeSeriesActivity>
<importFile>[Link]</importFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</importTimeSeriesActivity>
</importActivities>

</activities>
</generalAdapterRun>

Properties

Two properties must be defined for the [Link] export file.

WITH_ITERATION

With this option there is a choice between running the PCOverslag DLLs with or without iteration (default is no).

PROFILE_DIR

This is the path where the profile description files are located. The profile description files are ASCII files which describe the characteristics and
geometry of the profiles which will be computed.

An example of a profile (*.prfl) file is given below:

DAM 3
DAMHOOGTE 1.8
RICHTING 280
KRUINHOOGTE 3.16
VOORLAND 1
0.000 1.800 0.000
-61.440 -4.500 1.000
-49.500 1.470 1.000
-3.500 2.170 1.000
0.000 3.160 1.000
MEMO
profiel handmatig toegevoegd (MvR, 08/10/2007)
obv rapport 110303/OF2/249/000144/AM, profiel L601dv
hoogte havendam obv AHN_5
Locatie: 177000;539229

For more details on the PCOverslag application see the helpdeskwater pages and the PCOverslag programming guide (in Dutch).

RTC Tools

Attachments (Dirk Schwanenberg, 15-04-2010): RTCTools - Technical Reference (1.22 MB), plus three additional files.

RTC Tools is a modelling package for Real-Time Control. It can be applied as a stand alone application in Delft-FEWS or linked to hydraulic
modelling packages via OpenMI.

Background
Areas of application
Integration into Delft-FEWS
Contact

Background

RTC (Real-Time Control) Tools originates from the integration of several project-specific reservoir simulation modules in flood forecasting systems
for Austria, Germany and Pakistan. Its original design in Java in 2007, also referred to as the Delft-FEWS reservoir module, aims at the simulation
of pool routing in reservoirs and reservoir systems including related reactive controllers and operating rules.

Support for more advanced model predictive controllers was introduced in 2008 and extended in 2009. This includes the implementation of a
kinematic wave model as an additional internal model for the predictive controllers, as well as the introduction of adjoint systems for selected
modeling components. The latter resulted in significant speed-ups of these controllers.

In 2010, the concept of triggers for switching on and off controllers and operating rules was introduced for enabling the simulation of more
sophisticated heuristic control schemes. Furthermore, the software was redesigned in C++ and enhanced by a C# OpenMI wrapper for integration
into modeling packages such as SOBEK or Delft3D (ongoing activity in the first half of 2010).

Areas of application

RTC Tools aims at the simulation of various real-time control techniques.

Because of the need for internal modeling in Model Predictive Controllers, the tool includes a number of simple routing models. These also enable
its stand-alone use in forecasting systems. Furthermore, the OpenMI interface allows a user to couple the tool to a wide range of hydraulic
modeling packages.

The software pays special attention to state handling. This includes by definition all system states of triggers, controllers, operating rules and all
modeling components.

Integration into Delft-FEWS

Check the Technical Reference for the integration of RTC Tools into Delft-FEWS

Contact

Please note that the software is still available as a beta version!

If you have any suggestions or ideas for enhancement, please contact:

[Link]@[Link]

17 Launcher Configuration

Introduction
The Launcher application of FEWS requires two types of configuration file:

[Link] : Configure the applications to start up


[Link] : Configure user roles and passwords

Contents
Launcher XML
Security XML

See [Link]

Launcher XML

Security XML

18 FEWS data exchange interfaces

Introduction
Currently there are a number of mechanisms that allow external applications to exchange data with Delft-FEWS. Importing and exporting
data via a number of possible file formats is used in current operational systems.

Other, more interactive, methods have been developed and are described in this section. These mechanisms are not yet fully developed;
they have been set up as test cases, each with a particular goal in mind. The following sections describe the current status
of these projects, their strong points and their weaknesses.

Fews JDBC server provides a simple JDBC interface. The FEWS JDBC server uses an OC region configuration and initializes a synchronization
process with the MC to update the data. The client application sets up a connection to the JDBC driver and uses SQL queries to retrieve data.
Currently it is only possible to retrieve data from the FEWS system and not write data to the FEWS system.

Fews PI service provides a simple API using a SOAP framework called XFire. The SOAP framework is set up using an OC region configuration.
This OC also initializes a synchronization process with the MC to update the data. The client application uses XFire to retrieve a proxy of the
API. With the proxy instance the client application can retrieve data from the FEWS system and also write data to the FEWS system. The
exchange of data occurs using strings containing the content of FEWS PI files.

Fews Workflow runner service provides a simple API using a SOAP framework called XFire. The SOAP framework is set up by passing a
configuration file as an argument in the main method of the service runner class. Once the service is started by the runner class, the client
application uses XFire to retrieve a proxy of the API. With the proxy instance the client application can run single task workflows on the MC. On
running the a workflow the client must pass the input timeseries arrays as argument to the proxy. The output timeseries produced by the
workflow run are written to output files configured in the configuration file.

Contents
Fews JDBC server
Fews PI service
Fews Workflow Runner service
JDBC vs. FewsPiService

Fews JDBC server


Introduction
Fews JDBC Interface
Locations
Parameters
Timeseries
ExTimeSeries
TimeSeriesGraphs
Filters
TimeSeriesStats
Installing a FEWS JDBC Server
Windows
Linux
Starting JDBC Service from FEWS Explorer
Setting up connection in DbVisualizer
Setting up an ODBC-JDBC bridge
Example SQL queries
Example Locations queries
Example Filters queries
Example TimeSeries queries
Example TimeSeriesGraphs queries
Example code
Setting up a connection in JAVA
Miscellaneous
Using a different port number (available 200901)
Rolling Barrel
(Java) JDBC Clients, Timezones and DayLightSaving conversion
Known issues

Introduction
To be able to query timeseries directly using SQL statements, Delft-FEWS can be set up to act as a JDBC server. This can be done using an OC
configuration (which will log in and automatically synchronise data with the MC, thereby ensuring all data is constantly being updated), or by
running this stand-alone. In the latter case the system will only export what is in the local datastore at startup.

Fews JDBC Interface


The JDBC Interface provides virtual access to (virtual) FEWS tables. The JDBC server allows a client application to query the available tables.
However, not all SQL query statements are supported, and the type of SQL statements allowed varies per table. See the section on
SQL queries for more details.

The following information is available through the JDBC Server:

Locations

The locations table allows the client application to query the available FEWS locations.

Parameters

The parameters table allows the client application to query the available FEWS parameters.

Timeseries

The timeseries table allows the client application to query the available FEWS timeseries. The information shown in the TimeSeries table provided
by the JDBC server does not match the information of the FEWS TimeSeries table. The JDBC server provides a view of the data of a queried
timeseries.

ExTimeSeries

The extended timeseries table allows the client application to query the available FEWS timeseries. The information shown in the ExTimeSeries
table provided by the JDBC server is similar to the information presented in the FEWS TimeSeries table. The JDBC server provides a view of the
metadata of a queried timeseries.

Note! It is currently not possible to query the ExTimeSeries due to bugs.

TimeSeriesGraphs

The TimeSeriesGraphs table allows the client application to retrieve an image of a FEWS timeseries chart for the queried timeseries. The query
returns a byte array value containing the content of a BufferedImage.

Filters

The Filters table is set up as a view, because it does not represent a FEWS database table. Instead, the Filters view represents the content of the
FEWS configuration file '[Link]'.

TimeSeriesStats

The TimeSeriesStats table is set up as a view, because it does not represent a FEWS database table. Instead, the TimeSeriesStats
view shows the results of a statistical analysis performed on the timeseries returned by the query.

Installing a FEWS JDBC Server

Windows

Step 1: Install an OC

Step 2: Delete the "[Link]" from the "OC" directory. When starting the application a new "[Link]" file will be
generated for logging.

Step 3: Make a new "<OC-Name>_JDBC.exe" and "<OC-Name>_JDBC.jpif" file in the \bin directory. The "<OC-Name>_JDBC.jpif" must contain
the following information.

..\jre
-mx512m
-cp
$JARS_PATH$
[Link]
<OC-Name>_JDBC

Step 4: Start the FewsJdbcServer by clicking on the <OC-Name>_JDBC.exe. The Server will start as an OC and synchronise its localDataStore
with the Central Database using the synchprofiles of an OC.

Step 5: Stop the FewsJdbcServer by killing the application using the System Monitor. In the attachments an exe is provided that opens a console
window. If this console window is stopped, the FEWS JDBC driver process is also stopped.

Install windows service


Follow the above listed steps to install and test the JDBC server. Finally stop the server and proceed with the next steps, based on the attached
file JDBC service [Link]

Step 6: unzip the "JDBC service [Link]" to a directory at the same level as the bin and application directory, eg. like "service"
Step 7: replace in the file "run_installscript.bat" the BIN directory and the FEWs application name and directory
Step 8: run the batch file "run_installscript.bat"
Step 9: go to the services window and define the correct properties for the just installed service, like

automatic startup
correct user settings in login tab
restart options after 5 minutes

Notice that the batch file calls install_JDBC_Service.bat, which contains a list of the *.jar files in the bin directory. If these filenames
or the list have changed, this list should be updated; otherwise running the service may not be successful. Also make sure that your JAVA_HOME
environment variable has been set and refers to your JRE directory. This JRE directory should not contain space characters in its name. If it does,
make a copy of your JRE to a directory with a name without spaces and set the JAVA_HOME variable in run_installscript.bat to this new path.

Linux

Step 1: Install an OC

Step 2: Delete the "[Link]" from the "OC" directory. When starting the application a new "[Link]" file will be
generated for logging.

Step 3: Take the fews_jdbc.sh script file and place this one level higher than the \bin directory.

Step 4: Go to the directory where the ./fews_jdbc.sh script file is located and type ./fews_jdbc.sh <OC-Name>.

Step 5: Stop the FEWS JDBC service by typing exit in the console window where the JDBC startup script was executed. Another option is to kill
the process of the FEWS JDBC service.

Starting JDBC Service from FEWS Explorer

For debugging purposes it is possible to start the JDBC service from the stand-alone FEWS Explorer. With the F12 key you get a list of debug options.
Select "start embedded vjdbc server". The service will start and can be accessed from a database viewer.

Setting up connection in DbVisualizer

Step 1: Install DbVisualizer on your PC. Make sure it is not installed in a folder with spaces, such as "Program Files". When there is a space in the
folder name, it will NOT work correctly. This is a DbVisualizer bug that cannot be solved by FEWS.
Step 2: Copy the files "[Link]" and "[Link]" to a folder on your computer. These are the drivers used by DBVisualizer. Also
this folder name should not contain any space characters (use the 8.3 format).
Step 3: Add a new JDBC driver to DBVisualiser:

Start DbVisualizer
Open the Tools menu and the Driver Manager
Create a new driver and give it the name "vjdbc". Load the two jar files in the "User Specified" tab.
Close the Driver Manager window.

Step 4: Create a new Database Connection in DbVisualizer.

Give it the Alias "<OC-Name> JDBC"


Select the vjdbc driver
Enter the database URL: "jdbc:vjdbc:rmi://<host>:2000/VJdbc,FewsDataStore" (under <host>, enter the machine where the FEWS JDBC
application runs. You can get the IP address by typing ipconfig in the command line of the Server). The number "2000" is the default port
number, the correct port number is shown in the FEWS log file on the Server when it is started.

Setting up an ODBC-JDBC bridge

The FEWS JDBC Server has been tested with the Easysoft JDBC-ODBC bridge, which can be purchased. This allows the user to access the JDBC
Server from other applications, like Microsoft ACCESS, that only support ODBC. To use the JDBC driver with the ODBC-JDBC bridge, do the
following:

Install the Easysoft JDBC-ODBC bridge


Go to the Windows Start Menu -> Settings -> Control Panel -> Administrative Tools -> Data Sources (ODBC)
Select the System DSN tab and add a new data source.

Make sure you add the [Link] and [Link] file to the classpath
The url is: jdbc:vjdbc:rmi://<host>:2000/VJdbc,FewsDataStore (under <host>, enter the machine where the fews jdbc application runs)

When the FEWS JDBC application runs you can test the connection using the Test button.

Example SQL queries
There are a number of SQL queries that can be used to retrieve data from the database. Only read-only (SELECT) statements are supported.
Statements must be formatted as:

SELECT [DISTINCT] <select_expr> FROM TABLE_NAME [WHERE <where_condition>] [ORDER BY COLUMN_NAME [ASC|DESC]]

<select_expr>: (* | COLUMN_NAME [, COLUMN_NAME, ...])

<where_condition>: COLUMN_NAME <operator> [AND <where_condition> | OR <where_condition>]

<operator>: (= | <> | < | > | LIKE) <value> | BETWEEN <value> AND <value>

For the Locations, Parameter and Filters table the SQL Query "Select * from <TableName>" is allowed. For the TimeSeries Table this query will
return an error.

A valid query for the TimeSeries Table is as follows:

SELECT * from TimeSeries


WHERE moduleInstanceId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND valueType = 'scalar'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND timeSeriesType = 'external forecasting'
AND timeStep = 'unit=hour multiplier=6'

Or, when using filter id's:

SELECT time, value from TimeSeries


WHERE filterId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]

Note ! When creating a query using the clause time BETWEEN '2007-03-17 [Link]' AND '2007-04-01 [Link]', then it
is good to realise that the start time is used as system time for retrieving the timeseries data. This could be important when
retrieving 'external forecasting' data with an 'externalForecastTime' later than the start [Link] will result in no data being
returned.

Example Locations queries

SELECT name, y,x from Locations ORDER BY name DESC


SELECT name, y,x from Locations WHERE X > '161000'
SELECT * from Locations where id = '46DP0003' OR id = '46DP0004'
SELECT name from Locations WHERE name <> 'Meerselsche Peel (WAM)'
SELECT id, name, y, x from Locations WHERE id LIKE '254%'
SELECT id, name, y, x from Locations WHERE name LIKE '%STUW%' or name LIKE '%Gemaal%'

Example Filters queries

Return all location and parameter combinations from a specific filter

SELECT id, locationid, parameterid FROM filters WHERE id = 'ImportSHEF' ORDER BY location

Return all locations from a specific filter

SELECT DISTINCT locationid FROM filters WHERE id = 'ImportSHEF'

Return a list of the main filter groups

SELECT DISTINCT id FROM filters WHERE issubfilter = false

Example TimeSeries queries

The Time series can be queried with or without the Filter ID. An example of a query without using the filter ID is:

SELECT * from TimeSeries


WHERE moduleInstanceId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND valueType = 'scalar'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND timeSeriesType = 'external forecasting'
AND timeStep = 'unit=hour multiplier=6'
AND Value BETWEEN '1.9' AND '2.0'

The same query with the use of a filter ID will be as follows:

SELECT * from TimeSeries
WHERE filterId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND Value BETWEEN '1.9' AND '2.0'

Note on Time Series Queries:

All values are in the configured time zone of the JDBC application.
All unreliable values will not be returned in the query. The complete time step of unreliable values is missing in the
returned recordset.

Example TimeSeriesGraphs queries

The Time series can be extracted from the database as a graph (binary object) through the Timeseriesgraphs table. Queries with or without the
Filter ID can be used, similar to the time series table. An example of a query with the use of a filter ID is:

SELECT * from TimeSeriesgraphs


WHERE filterId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'

By default the graphs have a size of 300 (width) * 200 (height) pixels. In the SQL query the width and height can also be fixed.

SELECT * from TimeSeriesgraphs


WHERE filterId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND height = 100 AND width = 150

As from 201001 it is allowed to combine data from different locations and/or parameters into one graph by 'joining' them using OR-operators.
Such a clause with OR-operators must be put in between brackets:

SELECT * from TimeSeriesgraphs


WHERE filterId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND (locationId = 'DETO3IL' OR locationId = 'DETO3IL2')
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND height = 100 AND width = 150

SELECT * from TimeSeriesgraphs


WHERE filterId = 'ImportSHEF'
AND (parameterId = 'FMAT' OR parameterId = 'FMAT2')
AND locationId = 'DETO3IL'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND height = 100 AND width = 150

SELECT * from TimeSeriesgraphs


WHERE filterId = 'ImportSHEF'
AND (parameterId = 'FMAT' OR parameterId = 'FMAT2' OR parameterId = 'FMAT3')
AND (locationId = 'DETO3IL' OR locationId = 'DETO3IL2' OR locationId = 'DETO3IL3')
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND height = 100 AND width = 150

As from 201001 it is possible to optionally specify the time zone for the resulting graph; time clauses in the query must still be specified in GMT.

Example of a graph query which will plot the data in GMT-1:

SELECT *
FROM TimeSeriesgraphs
WHERE filterId = 'Ott_ruw'
AND parameterId = '[Link]'
AND (locationId = '10.H.59' OR locationId = '15.H.20')
AND time BETWEEN '2008-05-01 [Link]' AND '2008-05-01 [Link]'
AND height = 500 AND width = 750 AND timezone='GMT-1';
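
Once a connection to the JDBC server is available (see the example code below), the returned byte array can be written to an image file. The
sketch below assumes the image is in the first column of the result set and that the bytes can be written to file as-is; verify both on your own system.

import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimeSeriesGraphSketch {
    // Executes a TimeSeriesGraphs query (built as in the examples above) and writes the
    // returned image bytes to a file. Reading the image from the first column of the
    // result set, and the file name/extension, are assumptions.
    public static void saveGraph(Connection connection, String timeSeriesGraphsQuery) throws Exception {
        try (Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery(timeSeriesGraphsQuery)) {
            if (rs.next()) {
                byte[] imageBytes = rs.getBytes(1);
                try (FileOutputStream out = new FileOutputStream("graph.png")) {
                    out.write(imageBytes);
                }
            }
        }
    }
}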

Example code
Here follows some example code of how client applications can set up a connection to a JDBC server hosted by a FEWS OC.

Setting up a connection in JAVA

No special jars other than the ones provided by the JRE are required.
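
A minimal sketch of such a connection is shown below; the driver class name (if an explicit Class.forName is needed at all) and the host are
assumptions that should be checked against your FEWS distribution.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FewsJdbcExample {
    public static void main(String[] args) throws Exception {
        // Driver class name is an assumption; it may not be needed at all if the driver
        // shipped with your FEWS distribution registers itself automatically.
        Class.forName("de.simplicit.vjdbc.VirtualDriver");

        // URL as documented above: replace the host with the machine running the FEWS JDBC server.
        String url = "jdbc:vjdbc:rmi://localhost:2000/VJdbc,FewsDataStore";

        try (Connection connection = DriverManager.getConnection(url);
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("SELECT id, name, x, y from Locations ORDER BY name")) {
            while (rs.next()) {
                System.out.println(rs.getString("id") + " - " + rs.getString("name"));
            }
        }
    }
}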

Miscellaneous

Using a different port number (available 200901)

By default the port number of the JDBC Server is 2000. It is possible to use a different port number when starting the application. In the
[Link] a property can be added like this:

JdbcServerPort=2078

This will start the JDBC Server on port 2078.

Rolling Barrel

When the FEWS JDBC Server is started, the OC rolling barrel configuration will not be used. Instead the Rolling Barrel will run once a day at
02:00 GMT. After the FEWS Rolling Barrel, the compact Database script (only for MS ACCESS databases) will also be executed automatically.

(Java) JDBC Clients, Timezones and DayLightSaving conversion

FEWS stores timeseries with timestamp in GMT, without DayLightSaving (DLS) conversion.

JDBC Client applications like DBVisualizer adopt timezone settings from the (local) Operating System.
This means that data is converted (from FEWS GMT) to the local timezone. When DLS conversion is active, a query on data from the night that DLS
is switched (summer time to winter time, when the clock is set back an hour) results in 'double' timeseries records between 2:00 and 3:00 AM.

The JVM for the JDBC client (like DBVisualizer) can be started with an extra command line option, which forces the timezone setting for the JVM rather
than adopting it from the local OS. This command line option looks like:
-[Link]=GMT
or
-[Link]=GMT+1
or
-[Link]=GMT-5
and so on...

When starting DBVisualizer's JVM with -[Link]=GMT, results are in GMT, without DLS conversion.

Another noticeable issue:


The FEWS-JDBC Server, started as described above in a standalone manner, has a (hardcoded) timezone setting of GMT.
The FEWS-JDBC Server can also be started embedded from the FEWS Explorer using the F12 key. In the latter case it runs in the timezone set for
the FEWS Explorer!
This means that a standalone FEWS-JDBC Server and an embedded FEWS-JDBC Server started from, for instance, a FEWS Explorer with Dutch
timezone settings, may give different timestamps on (the same) timeseries values, with a shift of up to 2 hours, depending on DLS conversion.

Known issues
[Link]: [Link]: no protocol....
This is an exception that occurs due to a bug in DBVisualizer. Check whether DBVisualizer OR the vjdbc drivers are located in directories
that contain spaces in their path. Move them to a directory path without spaces to solve this issue.

Fews PI service
Introduction
Fews PI Service API
Getter methods
System info
Identifiers
Content
Timeseries
Client datasets
Setter methods
Run methods
Conversion methods
Data management methods for client data sets
Installing a PI Service Client
PI Service configuration
Initializing PI Service in Explorer config
Installing a FEWS PI Service as a backend process
Example code
Setting up a connection in JAVA
Setting up a connection in C
Setting up method calls JAVA
Setting up method calls C
Appendix
FewsPiService WSDL
FewsPiService API
FewsPiServiceConfig XSD
Running Delft-FEWS in the background

Introduction
The Fews PI service data exchange uses XFire, a java SOAP framework. This framework allows a client application to obtain a proxy instance to
the FewsPiService API. With this API the client can retrieve data from an OC or write data to an OC. Before a client application can access the
FEWS system there is some configuration work that needs to be done.

Users looking to use XFire on a new project should use CXF instead. CXF is a continuation of the XFire project and is
considered XFire 2.0. It has many new features, a ton of bug fixes, and is now JAX-WS compliant! XFire will continue to be
maintained through bug fix releases, but most development will occur on CXF now. For more information see the XFire/Celtix
merge FAQ and the CXF website.

To use the Fews PI service from other programming languages, see Using the Fews PI service from C, which also contains an example of using
Tcl.

You can run the FEWS system in the background, if this is needed or desirable (see below).
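
As a minimal sketch of obtaining such a proxy with XFire (the Java interface, the endpoint URL and the exact client classes are assumptions here and
should be checked against your FEWS distribution and the FewsPiService WSDL):

import org.codehaus.xfire.client.XFireProxyFactory;
import org.codehaus.xfire.service.Service;
import org.codehaus.xfire.service.binding.ObjectServiceFactory;

public class PiServiceClientSketch {
    // Hypothetical Java interface mirroring (part of) the FewsPiService API described below.
    public interface FewsPiService {
        String getFilters(String piVersion);
    }

    public static void main(String[] args) throws Exception {
        // Build a service model for the (assumed) FewsPiService interface and create a proxy
        // pointing at the (assumed) endpoint URL of the running PI service.
        Service serviceModel = new ObjectServiceFactory().create(FewsPiService.class);
        FewsPiService proxy = (FewsPiService) new XFireProxyFactory()
                .create(serviceModel, "http://localhost:8080/FewsPiService");

        // Retrieve the configured filters as a PI XML string via the proxy.
        System.out.println(proxy.getFilters(null));
    }
}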

Fews PI Service API


Description of the methods provided by the Fews PI Service API.

Getter methods

System info

Retrieves a client configuration file.

clientId: File name of the client configuration file located in the OC configuration directory 'PiClientConfigFiles'. This file is in free format and
its content is only read by the client application. The only requirement is that the content is text based.
fileExtension: Extension of client file.
returns: Text file containing client configuration.

Get current OC system time.

clientId: <id not required>


returns: Date field containing OC system time.

Get last time that OC is updated.

returns: Long representation of time (modified Julian date).

Retrieves display unit for a specific parameter

returns: String holding the unit

Identifiers

Get OC timeZone id.

clientId: <id not required>


returns: String representation of time zone.

Retrieves filter identifiers selected in the FEWS Explorer

returns: String array holding the identifiers of the selected filters

Retrieves location identifiers selected in the FEWS Explorer

returns: String array holding the identifiers of the selected locations

Retrieves parameter identifiers selected in the FEWS Explorer

returns: String array holding the identifiers of the selected parameters

Retrieves active (selected) segment/node id in the FEWS Explorer Topology panel

returns: String identifier of the active segment node

Get list of available ColdState ids.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleState element in the service configuration file.
returns: List of ColdStateGroup ids.

Get available warm state times.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleState element in the service configuration file.
returns: Available warm state times for requested ModuleState.

Get available ensemble member ids.

clientId: <id not required>


ensembleId: Id of requested ensemble.
returns: Available member indices of requested ensemble.

Content

Get the configured filter ids from the FEWS system.

piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Filters XML file containing Fews Filters.

Get location information available for the passed 'filterId' argument.

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Locations XML file containing the locations available for passed filter id.

Get timeseries parameter information available for the passed 'filterId' argument.

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not provided then no id mapping will be done.
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_TimeSeriesParameters XML file containing the parameters available for passed filter id.

Retrieves rating curve content in PI rating curve format for selected rating curve identifiers

ratingCurveIds: String array with location identifiers of the ratings

returns: XML string in Pi_ratingcurves format holding the rating curves

Get log messages produced by the last run of a given task id.

clientId: <id not required but can not be null>


taskId: Task ID for which to retrieve log messages. Only messages for the last run are returned.
returns: String containing the log messages in the format defined by the PI Diag XSD

Get a module data set.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleDataSet element in the service configuration file.
ensembleId: <currently not supported>
ensembleMemberIndex: <currently not supported>
returns: Binary content of the ModuleDataSet file for requested ModuleState.

Get a module parameter set.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleParameterSet element in the service configuration file.
ensembleId: <currently not supported>
ensembleMemberIndex: <currently not supported>
returns: String content of the ModuleParameterSet file for requested ModuleState.

Get a cold state file.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.

id: Reference to the ID of a ModuleState element in the service configuration file.
stateTime: Time for which to retrieve warm state file. Time values can be obtained from method getAvailableStateTimes
ensembleId: <currently not supported>
ensembleMemberIndex: <currently not supported>

Timeseries

Get the header information for requested timeseries.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a TimeSeries element in the service configuration file.
taskId: <id not required however can not be null>
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
locationIds: Subset of location IDs for which to retrieve timeseries.
ensembleId: Id of the ensemble, can be null.
ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
thresholdsVisible: (Optional) Option to add threshold values in the header if set to TRUE. Default is FALSE
returns: String content of a PiTimeseries XML file only containing header information.

Get header and data for requested timeseries.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a TimeSeries element in the service configuration file.
taskId: <id not required however can not be null>
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
locationIds: Subset of location IDs for which to retrieve timeseries.
ensembleId: Id of the ensemble, can be null.
ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
thresholdsVisible: (Optional) Option to add threshold values in the header if set to TRUE. Default is FALSE
returns: String content of a PiTimeseries XML file.

Get binary data for requested timeseries.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a TimeSeries element in the service configuration file.
taskId: <id not required however can not be null>
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
locationIds: Subset of location IDs for which to retrieve timeseries.
ensembleId: Id of the ensemble, can be null.
ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
returns: Content of the binary file that can be exported together with the PITimeseries XML files.

Get the header information for requested timeseries using a filter id.

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
locationIds: Subset of location IDs for which to retrieve timeseries.
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
useDisplayUnits: (Optional) Option to export values using display units (TRUE) instead of database units (FALSE).

piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Timeseries XML file only containing header information.

Get the timeseries data for requested timeseries using a filter id.

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
locationIds: Subset of location IDs for which to retrieve timeseries.
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
convertDatum: Option to convert values from relative to location height to absolute values (TRUE). If FALSE values remain relative.
useDisplayUnits: Option to export values using display units (TRUE) instead of database units (FALSE).
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Timeseries XML file only containing header information.

Get the timeseries data for requested timeseries using a filter id.

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not provided then no id mapping will be done.
timeZero: Forecast time zero.
segmentId: identifier of the segment node
startTime: Start date/time of run - [Link] if the configured default is to be used
endTime: End date/time of run - [Link] if the configured default is to be used
thresholdsVisible: Option to include (TRUE) or exclude (FALSE) thresholds in the time series headers. Default is FALSE
returns: String content of a Pi_Timeseries XML file containing all timeseries including headers for this segment

Client datasets

Retrieves all identifiers of segment nodes holding one or more client datasets

returns: Array of string identifiers of nodes

Retrieves all identifiers of client datasets available for this segment node

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
nodeId: Segment node identifier
returns: Array of string identifiers referring to client datasets

Retrieves description of a specific client dataset from the local datastore

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client dataset
returns: String description of dataset content

Retrieves client dataset from the local datastore

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client dataset
returns: Byte content of the requested client dataset

Retrieves latest time when a client dataset has been modified

returns: Modification Time as Long (modified Julian date)

Setter methods

Sets system time (Time zero)

systemTime: new SystemTime

Insert log messages

clientId: <id only used as description>


piDiagnosticsXmlContent: String containing the log messages in the format defined by the PI Diag XSD

Insert a module data set.

<not implemented>

Insert a module parameter set.

<not implemented>

<not implemented>

Insert a timeseries.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
taskId: <id not required>
id: Reference to the ID of a TimeSeries element in the service configuration file.
piTimeSeriesXmlContent: Time Series content in the form of a Pi timeseries xml file.
ensembleId: Id of the ensemble
ensembleMemberIndex: Ensemble member index for this time series. NULL if this is not an ensemble.
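
As an illustration, a minimal sketch of such an insert call (assuming a FewsPiService proxy has already been obtained, see 'Setting up a
connection in JAVA' below; the file path and the ids 'TestConfig' and 'Reservoir' are illustrative values taken from the example configuration used
later in this chapter):

// Minimal sketch: insert a deterministic timeseries through the PI Service.
// 'fewsPiService' is an already obtained FewsPiService proxy; the file path, the client id
// "TestConfig" and the timeSeries id "Reservoir" are illustrative values only.
public void insertReservoirSeries(FewsPiService fewsPiService) throws java.io.IOException {
    String piTimeSeriesXml = new String(
            java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("d:/temp/pi_timeseries.xml")),
            java.nio.charset.StandardCharsets.UTF_8);

    // taskId is null: the timeseries is stored as data visible to all other FEWS processes.
    // ensembleId is null and ensembleMemberIndex is -1 because this is not an ensemble.
    fewsPiService.putTimeSeries("TestConfig", null, "Reservoir", piTimeSeriesXml, null, -1);
}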

Insert a timeseries using the binary format.

clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
taskId: <id not required>
id: Reference to the ID of a TimeSeries element in the service configuration file.
piTimeSeriesXmlContent: Time Series content in the form of a Pi timeseries xml file.
byteTimeSeriesContent: TimeSeries data content in the form of a byte array.
ensembleId: Id of the ensemble
ensembleMemberIndex: Ensemble member index for this time series. NULL if this is not an ensemble.

Insert a timeseries using the Filters configuration.

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
piTimeSeriesXmlContent: Time Series content in the form of a Pi timeseries xml file. Timeseries must be
available in the FEWS Filters configuration file.
byteTimeSeriesContent: (Optional) TimeSeries data content in the form of a byte array.
convertDatum Option to convert the values to values that are relative to location height (TRUE). If FALSE then no conversion is
performed.

Run methods

Create a new task.

clientId: <not required>


returns: Unique task id.

Run a newly created task.

clientId: <not required>


taskId: Id obtained by calling method createTask
workflowId: Id of workflow to run by task.
startTime: <not required>
timeZero: <not required>
endTime: <not required>
coldStateId: <not implemented>
userId: Id of user running task.
description: Description
returns: TaskRun id

Cancel a running task.

<not implemented>

Wait for a running task to finish.

clientId: <not required>


taskId: Id of task run. Returned by calling method runTask.
waitMillis: Wait time in milliseconds.
returns: TRUE if task run completes successfully, else FALSE
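
As an illustration, a minimal sketch of the create/run/wait sequence (assuming a FewsPiService proxy has already been obtained; the workflow id
'Santiam_Forecast' and the wait time are taken from the example code later in this chapter, the other arguments follow the API listing in the
appendix):

// Minimal sketch of the run sequence: createTask -> runTask -> waitForTask.
// 'fewsPiService' is an already obtained FewsPiService proxy; "Santiam_Forecast" is the
// workflow id used in the example configuration of this chapter.
public void runForecastWorkflow(FewsPiService fewsPiService) {
    String clientId = "id not required";
    String taskId = fewsPiService.createTask(clientId);

    java.util.Date timeZero = fewsPiService.getSystemTime(clientId);

    // startTime, timeZero and endTime are all set to the current system time here; coldStateId
    // and scenarioId are null, so no cold state start is forced and no "what if" scenario is used.
    String taskRunId = fewsPiService.runTask(clientId, taskId, "Santiam_Forecast",
            timeZero, timeZero, timeZero, null, null, "Test user", "PI service task run");

    // Wait for the task run to finish (120000 ms as in the example code); TRUE means success.
    boolean success = fewsPiService.waitForTask(clientId, taskRunId, 120000);
    System.out.println(success ? "Task run finished successfully" : "Task run failed");
}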

Conversion methods

Converts parameter values from base unit to display unit

parameterId: Parameter identifier, used to trace down the associated base unit as well as the display unit
value: array of values to be converted
returns: Array of converted values (floats)

Converts parameter values from display unit to base unit

parameterId: Parameter identifier, used to trace down the associated base unit as well as the display unit
value: array of values to be converted
returns: Array of converted values (floats)

Converts stage value to discharge using a rating curve at a specific location valid for a particular moment in time

locationId: Identifier of the rating curve location


time: time for which the rating curve should be valid
stages: array of stages to be converted to discharge
returns: array of values representing discharges

Converts discharge value to stage using a rating curve at a specific location valid for a particular moment in time

locationId: Identifier of the rating curve location


time: time for which the rating curve should be valid

discharge: array of discharges to be converted to stage
returns: array of values representing stage

Data management methods for client data sets

Saves client data set to local datastore

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client data set
description: (Optional) description of the client dataset content
dataSet: Byte object holding the client dataset
nodeId: Segment/nodeId which is associated to this client dataset

Updates existing client data set in local datastore

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client data set
description: (Optional) description of the client dataset content
dataSet: Byte object holding the client dataset
nodeId: Segment/nodeId which is associated to this client dataset

Removes existing client data set from local datastore

clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client data set

Initiates a database synchronization to upload the client data set to the Master Controller

Installing a PI Service Client


Setting up a PI Service Client requires the following actions:

Install an OC or SA.
Select an available port number on which the service will be listening.
Configure a FewsPiServiceConfig file for the OC/SA region.

PI Service configuration

The Pi Service configuration files are located in the directory 'PiServiceConfigFiles' of the region configuration. These files link the IDs known by
the client applications to FEWS data, such as TimeSeries, States, ModuleDataSets and ModuleParameterSets.

A Pi Service file has five sections:

1. General: Contains general configuration, such as import and export locations. Id mapping information for mapping client
location/parameter ids to FEWS location/parameter ids.
2. TimeSeries: Contains the mapping of client timeseries ids to the FEWS timeseries sets. Also some extra export options.
3. ModuleDataSet: Contains the mapping of client moduleDataSet ids to the FEWS moduleInstance descriptor ids.
4. ModuleParameterSet: Contains the mapping of client moduleParameterSet ids to the FEWS moduleInstance descriptor ids.
5. ModuleState: Contains the mapping of client moduleState ids to the FEWS moduleInstance descriptor ids.

If a client application requires an application-specific configuration file then this file must be configured in the directory 'PiClientConfigFiles' of the
region configuration. This file is free format (text) and can be obtained from the API by calling getClientConfigFile.
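
For example (a minimal sketch, assuming a FewsPiService proxy has been obtained as described under 'Example code' below; the file name
'MyClientConfigFile' and the extension 'txt' are illustrative values, matching the Java test example later in this chapter):

// Retrieves the client configuration file 'MyClientConfigFile.txt' from the PiClientConfigFiles
// directory; the file is free format and must be interpreted by the client itself.
public String readClientConfig(FewsPiService fewsPiService) {
    return fewsPiService.getClientConfigFile("MyClientConfigFile", "txt");
}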

Initializing PI Service in Explorer config

The FEWS system does not automatically start up the PI service listener environment. This needs to be configured in the Explorer configuration
file located in the directory 'SystemConfigFiles' of the region configuration. To do this, an entry defining the PI service port range (with a 'start' and
an 'end' port number) must be added at the end of the Explorer configuration file.

Where 'start' and 'end' represent the port number range within which the PI Service must find an available port. To fix the PI Service to a single
port enter the same number twice.

Installing a FEWS PI Service as a backend process

It is also possible to initialize the FEWS PI Service as a backend process. The backend process is the same as the FEWS PI Service Client
application, but then without the user interface. To run the service as a backend process the client needs to be installed first, as described above.
After this has been completed continue with the following installation procedures:

The backend FEWS PI Service can be started in Windows by installing a Windows service that can start and stop the FEWS PI Service. Unpack
the following archive containing the service installation files: NT_FewsEnvironmentShell_Service_Install.zip. Open a DOS command prompt in the
directory containing the unpacked archive. Run the install:

# Where 'work dir' is the directory containing the BIN, JRE and REGION dirs
# and 'region name' is the directory name of the region for which to install the service

Start and stop the new service using the Windows services application. The service will have the following name; FewsShell <region name>.

Or in Linux by using the [Link] file. This file should be placed in the same directory as the links to the JRE, BIN and the REGION dirs.

# start the service
./fews_piservice.sh <REGION NAME> start

# stop the service
./fews_piservice.sh <REGION NAME> stop

This backend FEWS PI Service does not use the Explorer configuration, as described above, to obtain the listener port number. This is to make
sure that the client FEWS PI Services can run independently from the backend FEWS PI Services without port conflicts. Instead the backend
FEWS PI Service obtains its listener port from the [Link] files, using the entry:

PiServicePort=2001

When not configured the value defaults to 2001.

Example code
Here follows some example code of how client applications can set up a connection to the PI Service of a running FEWS OC.

Setting up a connection in JAVA

Before starting, the client requires the following library: [Link]. This library can be found in the bin directory of the FEWS system.
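
A minimal sketch of setting up the connection, assuming the library above exposes an XFire-style SOAP client (XFire is the SOAP framework
used by the Fews Workflow Runner service described later in this chapter); the endpoint host, port and path are placeholders and must match
the PI Service listener port configured for the OC/SA region:

import org.codehaus.xfire.client.XFireProxyFactory;
import org.codehaus.xfire.service.Service;
import org.codehaus.xfire.service.binding.ObjectServiceFactory;

public class PiServiceConnectionExample {

    public static void main(String[] args) throws Exception {
        // Build a service model from the FewsPiService interface (see the API listing in the appendix).
        Service serviceModel = new ObjectServiceFactory().create(FewsPiService.class);

        // Hypothetical endpoint: host, port (8100, as in the C example further on) and path must
        // match the PI Service listener of the running OC/SA region.
        FewsPiService fewsPiService = (FewsPiService) new XFireProxyFactory()
                .create(serviceModel, "http://localhost:8100/FewsPiService");

        System.out.println("Current system time: " + fewsPiService.getSystemTime("id not required"));
    }
}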

Setting up a connection in C

Before starting, the client requires the following libraries: [Link] and the libraries belonging to the TclSOAP package.
While there are C++ libraries to handle SOAP, we could not find a pure C library. At first sight the C++ libraries seem to make it necessary to
define classes and so on before you can start using them. With the Tcl approach the code remains quite simple. (We have not investigated
exactly what TclSOAP requires, but it can be downloaded from ActiveState.)

For the example [Link] file used refer to the attachment [Link]

#include <tcl.h>

/* Define some helper variables and functions */

Tcl_Interp *interp;

int main( int argc, char *argv[] ) {

/* Get the library started */

if ( Init( argv[0] ) != TCL_OK ) {


fprintf( stderr, "Sorry, initialisation failed: %s\n",
Tcl_GetStringResult( interp ) );
}

/* Here insert calls to Pi Service */


fprintf( stdout, "Locations: %s\n", getLocations( "id not required" ) );
fprintf( stdout, "New task: %s\n", createTask( "id not required" ) );
}

void TearDown () {Tcl_Finalize();}

int EvalFile (char *fileName) {


return Tcl_EvalFile(interp, fileName);
}

int Init (char *argv0) {


char *pchar;
char buffer[1000];

/* Initialise the library itself */

Tcl_FindExecutable(argv0);
interp = Tcl_CreateInterp();
if (Tcl_Init(interp) != TCL_OK) {
return TCL_ERROR;
}

/* Initialise the FEWS PI services */

strcpy( buffer, argv0 );


pchar = strrchr( buffer, '/' );
if ( pchar == NULL ) {
pchar = strrchr( buffer, '\\' );
}
if ( pchar == NULL ) {
pchar = buffer;
}
strcpy( buffer, "[Link]" );

return EvalFile( buffer );


}

Setting up method calls JAVA

Here follow some code examples of how to query the PI Service proxy from the client application. The configuration used to make these examples
can be found in the attached Example Configuration.

/**
* With this test the system time of a running Fews Region (on port 8191) can be retrieved.
* If the system time of the region is changed then the returned value here will match the change.
*/

public void testSystemTime() {
String clientId = "not required";
Date date = [Link](clientId);

[Link]("Current System time : " + [Link](date));


}

/**
* Return the warm state times from the data base for the configured moduleState id=RSNWELEV_DETO3I
* in the service configuration file = TestConfig.
*/
public void testGetWarmStateTimes() {
//name of service config file
String clientId = "TestConfig";
//id of moduleState element
String moduleId = "RSNWELEV_DETO3I";

Date[] stateTimes = [Link](clientId, moduleId);

[Link]("State times for Module " + moduleId);


for (Date stateTime : stateTimes) {
[Link]("time: " + [Link](stateTime));
}
}

/**
* Return the cold state ids from the data base for the configured moduleState id=RSNWELEV_DETO3I
* in the service configuration file = TestConfig.
*/
public void testGetColdStateIds() {
//name of service config file
String clientId = "TestConfig";
//id of moduleState element
String moduleId = "RSNWELEV_DETO3I";

String[] stateIds = [Link](clientId, moduleId);

[Link]("Cold state ids for model " + moduleId);


for (String id : stateIds) {
[Link]("cold state: " + id);
}
}

/**
* Return the warm state file from the data base for the configured moduleState id=SNOW17_LSMO3U
* in the service configuration file = TestConfig. Write file to location d:/temp/<moduleStateId>.zip
*/
public void testGetModuleStateBinary() throws IOException {
//name of service config file
String clientId = "TestConfig";
//id of moduleState element
String moduleId = "SNOW17_LSMO3U";

Date[] stateTimes = [Link](clientId, moduleId);


byte[] binary = [Link](clientId, moduleId, stateTimes[0],
null, -1);

ZipInputStream zipInputStream = new ZipInputStream(new ByteArrayInputStream(binary));


try {
[Link](zipInputStream, new File("d:/temp/" + moduleId + ".zip"));

} finally {
[Link]();
}
}

/**
* Returns the content of a client configuration file from the config directory PiClientConfigFiles.
*
* Only requirement is that a configuration file (any type) with the clientId as name and moduleId as
* extension must exist in this config directory.
*/
public void testGetClientConfigurationFile(){
//Name of client file
String clientId = "MyClientConfigFile";
//extension of client file
String moduleId = "txt";

String clientText = [Link](clientId, moduleId);

[Link]("Client file content: " + clientText);

/**
* Retrieve the content of the locations xml. Write this to d:/temp/[Link]
* @throws IOException
*/
public void testGetLocations() throws IOException {

String locations = [Link]("id not required");

[Link]("Content of locations xml file: " + locations);


[Link]("d:/temp/[Link]", IOUtils.UTF8_CHARSET, locations);

/**
* Return the member indices for the ensemble id = "ESP" from the timeseries table.
*
* This only works when there are ensemble members in the TimeSeries table.
*/
public void testGetEnsembleMemberIndices(){

String ensembleId = "ESP";


int[] indices = [Link]("id not required", ensembleId);

[Link]("Ensemble id " + ensembleId + " contains " + [Link] + "


members.");
[Link]("indices are: ");
for (int indice : indices) {
[Link](indice);
}
}

/**
* Get the logentries for an Import TaskId.
*
* Check the LogEntries table to find a matching taskId.
*
* @throws IOException
*/

public void testGetLogInformation() throws IOException {

String taskId = "NWSNWMC00:0000026"; //=import


String logInfo = [Link]("id not required", taskId);

[Link]("Content of log: " + logInfo);


[Link]("d:/temp/[Link]", IOUtils.UTF8_CHARSET, logInfo);

/**
* Return the timeseries defined in the service configuration file under timeSeries element with id 'Reservoir'.
*
* Filter using parameter ids QIN and RQIN and location ids DETO3, GPRO3 and FOSO3
*
* Check timeseries table to look for existing times for these timeseries.
*
* @throws IOException
*/
public void testGetTimeSeries() throws IOException {

Date systemTime = [Link]("id not required");


Calendar instance = [Link]([Link]("GMT"));
[Link](2009, 4, 1);
Date start = [Link]();
[Link](2009, 4, 11);
Date end = [Link]();
String[] params = new String[]{"QIN", "RQIN"};
String[] locs = new String[]{"DETO3","GPRO3","FOSO3"};
String headers = [Link]("TestConfig", "Reservoir", null,
start, systemTime, end, params, locs, null, -1);

[Link]("Content of tineseries headers: " + headers);

String timeseriesfile = [Link]("TestConfig", "Reservoir", null,


start, systemTime, end, params, locs, null, -1);

[Link]("d:/temp/[Link]", IOUtils.UTF8_CHARSET, timeseriesfile);

/**
* Upload a pi_diag log file to the region.
*
* Make sure to create an input file in corresponding directory.
*
* @throws IOException
*/
public void testInsertLogMessages() throws IOException {

String logText = [Link]("d:/temp/[Link]");


[Link]("used as description", logText);
}

/**
* Upload a pi_timeseries file to the Region. The timeseries in this file must be linked to timeseries
* configured in the PiServiceConfig file.
*
* Make sure to create an input file in the corresponding directory.
*
* @throws IOException
*/
public void testInsertTimeSeries() throws IOException {

String timeseriesText = [Link]("d:/temp/[Link]");
[Link]("Input timeseries: " + timeseriesText);
[Link]("TestConfig", null, "Reservoir", timeseriesText, null, -1);
}

/**
* Schedule a task run and wait for it to finish.
*
*/
public void testRunningTask(){
String taskId = [Link]("id not required");
Date date = [Link]("");
[Link]("Starting task with id " + taskId + " time=" + new
Date([Link]()));
String taskRunId = [Link]("id not required", taskId, "Santiam_Forecast",
date, date, date, null, null, "Test user", "PiWebservice taskrun");

boolean result = [Link]("id not required", taskRunId, 120000);


[Link]("Task with id " + taskId + (result ? " finished successfully" : "
failed"));
[Link]("time=" + new Date([Link]()));
}

Setting up method calls C

Here follow some code examples of how to query the PI Service proxy from the client application. The configuration used to make these examples
can be found in the attached Example Configuration.

Appendix

FewsPiService WSDL

Example of the FewsPiService WSDL definition. This definition file is used by other applications to connect to the webservice.
[Link]

FewsPiService API

Example of the FewsPiService class. This class defines the API.

/* ================================================================
* Delft FEWS
* ================================================================
*
* Project Info: [Link]
* Project Lead: Karel Heynert ([Link]@[Link])
*
* (C) Copyright 2003, by WL | Delft Hydraulics
* P.O. Box 177
* 2600 MH Delft
* The Netherlands
* [Link]
*
* DELFT-FEWS is a sophisticated collection of modules designed
* for building a FEWS customised to the specific requirements
* of individual agencies. An open modelling approach allows users
* to add their own modules in an efficient way.
*
* ----------------------------------------------------------------
* [Link]
* ----------------------------------------------------------------
* (C) Copyright 2003, by WL | Delft Hydraulics

*
* Original Author: Erik de Rooij
* Contributor(s):
*
* Changes:
* --------
* 10-Sep-2007 : Version 1 ();
*
*
*/
package [Link];

import [Link];

/**
* TODO <code>ISSUES</code>
*<li>Optional Date arguments can not be set to NULL when not used.</li>
*<li> When retrieving timeseries. What use is it to add ensemble Id? Because the timeseries set
already contains the ensemble id. </li>
*<li> Some interface calls do not require the clientId. Should we remove this to simplify things?</li>
*/
public interface FewsPiService {

String getLocations(String clientId);


/**
* Retrieve the configuration file for the client.

* @param clientId Id of web service client (obtained on command line when invoked)
* @param fileExtension. Case insensitive. Extension of the config file (e.g. xml, ini), One
client can have multiple config files
* with different file extensions.
* @return client config text file. Format of file must be known by client.
*/
String getClientConfigFile(String clientId, String fileExtension);

/**
* Create a new Task. Use the returned Task id when exporting data to the WebService.

* @param clientId Id of web service client (obtained on command line when invoked)
* @return Task id for the new task.
*/
String createTask(String clientId);

/**
* Return the current time zero of the system.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @return Time zero
*/
Date getSystemTime(String clientId);

/**
* TODO
*
* ID of Configured timezone for the webservice
* @param clientId
* @return
*/
String getTimeZoneId(String clientId);

/**
* Run a Single task. TaskId must be obtained using the method {@link
FewsPiService#createTask(String)}.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task Id
* @param workflowId Workflow Id

* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
* @param coldStateId String identifying the cold state to use - NULL if a cold state start is not
forced
* @param scenarioId String identifying the "what if" scenario - NULL if not used
* @param userId Id of user running task.
* @param description Description
* @return Returns the TaskRun id
*/
String runTask(String clientId, String taskId, String workflowId, Date startTime, Date timeZero,
Date endTime, String coldStateId, String scenarioId, String userId, String description);

/**
* Request Ids of available cold states
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param id Id of the State module instance for which to retrieve the cold state ids.
* @return List of available cold state groups
*/
String[] getColdStateIds(String clientId, String id);

/**
* Request run status of task. This can be used to wait for a task to complete
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task Id
* @param waitMillis number of milli-seconds to wait between status requests
* @return boolean if task is complete or has been cancelled
*/
boolean waitForTask(String clientId, String taskId, int waitMillis);

/**
* cancel task. Cancel a running task
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task Id
*/
void cancelTask(String clientId, String taskId);

/**
* Retrieve the indices for the given ensemble id.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param ensembleId Id of the ensemble
* @return All valid indices for this ensemble.
*/
int[] getEnsembleMemberIndices(String clientId, String ensembleId);

/**
* Write timeseries associated to a specific task to webservice. The webservice will store this
information in the database.
*
* If the time series is an ensemble then each ensemble member needs to be submitted individually.
* using the <i>ensembleMemberIndex</i> argument. For deterministic time series the
<i>ensembleMemberIndex</i> is NULL
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid ensemble member index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is NULL the time series is written as data visible to all other processes in FEWS
* In case it is TaskId the time series will be used only by the task run with TaskId (e.g.
scenario time series)
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of the Pi timeseries xml content.
* @param piTimeSeriesXmlContent Time Series content in the form of a Pi timeseries xml file.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an

ensemble.
*/
void putTimeSeries(String clientId, String taskId, String id, String piTimeSeriesXmlContent,
String ensembleId, int ensembleMemberIndex);

/**
* Write timeseries. The webservice will store this information in the database.
*
* <p>
* For performance reasons it is possible to split the timeseries header information from the
timeseries data. The header information
* is stored in the <i>piTimeSeriesXmlContent</i> and the timeseries data is stored in the
<i>byteTimeSeriesContent</i>.
* <p>
* If the time series is an ensemble then each ensemble member needs to be submitted individually.
* using the <i>ensembleMemberIndex</i> argument. For deterministic time series the
<i>ensembleMemberIndex</i> is NULL
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid ensemble member index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is NULL the time series is written as data visible to all other processes in FEWS
* In case it is TaskId the time series will be used only by the task run with TaskId (e.g.
scenario time series)
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of the Pi timeseries xml content.
* @param piTimeSeriesXmlContent TimeSeries content in the form of a Pi timeseries xml file.
* @param byteTimeSeriesContent TimeSeries data content in the form of a byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putTimeSeriesBinary(String clientId, String taskId, String id, String piTimeSeriesXmlContent,
byte[] byteTimeSeriesContent, String ensembleId, int ensembleMemberIndex);

/**
* Write information about the parameter set file to the webservice.
*
* The TaskId may be NULL or the requested task id.
* In case it is NULL the parameters version will be updated
* In case it is TaskId the parameters will be used only by the task run with TaskId (e.g.
scenario run)
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of parameter set
* @param piParameterSetXmlContent Parameters content in the form of a Pi parameters xml file.
* @param validityStartTime Start time of parameter validity (NULL if not applicable)
* @param validityEndTime End time of parameter validity (NULL if not applicable)
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putModuleParameterSet(String clientId, String id, String taskId, String
piParameterSetXmlContent, Date validityStartTime, Date validityEndTime, String ensembleId, int
ensembleMemberIndex);

/**
* Write information about the dataset file for the given taskId back to the webservice. The
webservice will store this information and will add it to the
* task properties when the task is run.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of Module DataSet file.
* @param byteModuleDataSetContent Zipped module dataset file
* @param validityStartTime Start time of dataset validity (NULL if not applicable)
* @param validityEndTime End time of dataset validity (NULL if not applicable)

* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putModuleDataSet(String clientId, String taskId, String id, byte[] byteModuleDataSetContent,
Date validityStartTime, Date validityEndTime, String ensembleId, int ensembleMemberIndex);

/**
* Write state information to webservice. The webservice will store this information in the
database.
*
* <p>
* The state information consists of two separate parts. The <i>piStateXmlContent</i> containing
information
* about the state files. And the <i>byteStateContent</i> containing the actual state data for the
module.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param piStateXmlContent Pi state xml file.
* @param byteStateFileName name of the state file data content byte array.
* @param byteStateContent State file data content in the form of a byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putState(String clientId, String taskId, String piStateXmlContent, String byteStateFileName,
byte[] byteStateContent, String ensembleId, int ensembleMemberIndex);

/**
* Put a log message
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param piDiagnosticsXmlContent Pi Diagnostics xml file.
*/
void putLogMessage(String clientId, String piDiagnosticsXmlContent);

/**
* Read module dataset information from webservice.
*
* <p>
*Default data set is returned.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the module data set .
* @return Module data set file as byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
byte[] getModuleDataSet(String clientId, String id, String ensembleId, int ensembleMemberIndex);

/**
* Read module parameter set information from webservice.
*
* <p>
*Default data parameterSet is returned.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id name of the binary module parameter set file.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
* @return Module parameter set PiParameters xml.
*/
String getModuleParameterSet(String clientId, String id, String ensembleId, int
ensembleMemberIndex);

/**
* Read all available state times for requested state file.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the module state .
* @return All available state times for this module state file.
*/
Date[] getAvailableStateTimes(String clientId, String id);

/**
* Read module state information from webservice.
*
* <p>
*Module state data file is returned for given time. Use method {@link
FewsPiService#getAvailableStateTimes(String, String)}
* to retrieve the available state times.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the state .
* @param stateTime Time for which to retrieve a state file.
* @return Module state data file as byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
byte[] getModuleStateBinary(String clientId, String id, Date stateTime, String ensembleId, int
ensembleMemberIndex);

/**
* Read the timeseries from the webservice. Returns a pi timeseries xml file containing the
timeseries information.
*
* <p>
* If the ensemble id has been configured for this timeseries then add the
<i>ensembleMemberIndex</i> as argument. Use NULL if not an ensemble
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is TaskId the time series is retrieved for the taskId only for simulated time series
* In case it is NULL time series for the current forecast will be retrieved
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the time series string.
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
* @param parameterIds Subset of parameters for which to retrieve timeseries.
* @param locationIds Subset of locations for which to retrieve timeseries.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
* @return PiTimeseries xml file content.
*/
String getTimeSeries(String clientId, String id, String taskId, Date startTime, Date timeZero,
Date endTime, String[] parameterIds, String[] locationIds, String ensembleId, int
ensembleMemberIndex);

/**
* Read the timeseries from the webservice. Returns a pi timeseries xml file
* containing the timeseries headers information. Retrieve the timeseries data using the method
* {@link FewsPiService#getTimeSeriesBytes(String, String, String, [Link], [Link],
[Link], String[], String[], String, int)}
*
* <p>
* If the ensemble id has been configured for this timeseries then add the
<i>ensembleMemberIndex</i> as argument. Otherwise

* this argument is skipped by the webservice.
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is TaskId the time series is retrieved for the taskId only for simulated time series
* In case it is NULL time series for the current forecast will be retrieved
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the time series string.
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
* @param parameterIds Subset of parameters for which to retrieve timeseries.
* @param locationIds Subset of locations for which to retrieve timeseries.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
* @return PiTimeseries xml file content.
*/
String getTimeSeriesHeaders(String clientId, String id, String taskId, Date startTime, Date
timeZero, Date endTime, String[] parameterIds, String[] locationIds, String ensembleId, int
ensembleMemberIndex);

/**
* Read the timeseries data from the webservice. Returns the data belonging to the
* timeseries that are retrieved when the method {@link FewsPiService#getTimeSeriesHeaders(String,
String, String, [Link], [Link], [Link], String[], String[], String, int)}
* is called using the same arguments.
*
* <p>
* If the ensemble id has been configured for this timeseries then add the
<i>ensembleMemberIndex</i> as argument. Otherwise
* this argument is skipped by the webservice.
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is TaskId the time series is retrieved for the taskId only for simulated time series
* In case it is NULL time series for the current forecast will be retrieved
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the time series string.
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
* @param parameterIds Subset of parameters for which to retrieve timeseries.
* @param locationIds Subset of locations for which to retrieve timeseries.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
* @return PiTimeseries xml file content.
*/
byte[] getTimeSeriesBytes(String clientId, String id, String taskId, Date startTime, Date
timeZero, Date endTime, String[] parameterIds, String[] locationIds, String ensembleId, int
ensembleMemberIndex);

/**
* get a log message associated to a specified taskId
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @return PiDiagnostics XML file content.
*/
String getLogMessages(String clientId, String taskId);

}

FewsPiServiceConfig XSD

Example of the FewsPiServiceConfig schema file.

Running Delft-FEWS in the background
When you use the PI service it is not necessary to have the main window displayed or anyone
operating the program at all. Under Windows, there is always a monitor, but on Linux machines
this may not be the case, especially with server machines.

To start Delft-FEWS in the background on Linux with no window present, use the following
recipe:

Start the X virtual framebuffer server, Xvfb


Set the DISPLAY environment variable to point to that display
Start FEWS in the background

This way no window will be visible and no monitor will be needed. A small shell script
will take care of the details:

#
# Start the Xvfb server using screen "1" (to avoid issues with a possibly running X server)
Xvfb :1 &
#
# Set the DISPLAY environment variable so that FEWS will use the Xvfb server
#
export DISPLAY=:1.0
#
# Start FEWS (standalone or operator client) in the background
#
bin/[Link] REGION

Using the Fews PI service from C

Example C interface
When querying the FEWS database with a small C program like the one below, the result is a string - in simple cases, just the value or values you
wanted, in other cases the contents of an XML file, conforming to the FEWS published interface (PI).

The program below is a translation of most of the Java example. It simply prints the answer, but in an actual program you will need to parse the
XML content to extract the relevant information:

/*
* Sample program, using the wrappers
*/

#include <stdio.h>

/*
* Wrapper code is included
*/
#include "pi_wrapper.c"

/*
* Main program
*/

int main( int argc, char *argv[] ) {

char *result;
char *stateTimes;
char *firstTime;
char *textContents;
char *zipContents;
FILE *outf;
char *startTime;
char *systemTime;
char *endTime;
char *timeZero;
char *date;
char *taskId;
char *taskRunId;
int success;
int size; /* used in the getModuleStateBinary example below */

char *parameters[] = {"QIN", "RQIN"};


char *locations[] = {"DETO3", "GPRO3", "FOSO3"};
int nParameters = 2;
int nLocations = 3;

/* Get the library started */


Init( argv[0] );
InitPiService( "localhost", 8100 );

printf( "Current system time: %s\n", getSystemTime( "aa" ) );

printf( "State times: %s\n", getAvailableStateTimes( "TestConfig", "RSNWELEV_DETO3I" ) );
printf( "Cold states: %s\n", getColdStateIds( "TestConfig", "RSNWELEV_DETO3I" ) );

result = getAvailableStateTimes( "TestConfig", "RSNWELEV_DETO3I" );


stateTimes = strdup( result ) ; /* Save the result - we need it in the next step */

/* This test is excluded: there is no binary state file in the sample configuration */
#if 0
/* Extract the first time */

firstTime = getElement( stateTimes, 0 );


zipContents = getModuleStateBinary( "TestConfig", "SNOW17_LSMO3U", NULL, -1, &size );

outf = fopen( "[Link]", "wb" );


fwrite( zipContents, 1, size, outf );
fclose( outf );

free( zipContents );
#endif

/* Note: name of file, extension of file */


textContents = getClientConfigFile( "MyClientConfigFile", "txt" );
outf = fopen( "[Link]", "w" );
fprintf( outf, "%s", textContents );
fclose( outf );

printf( "Locations:\n---begin---\n" );
printf( "%s", getLocations( "aa" ) );
printf( "---end---\n" );

printf( "Ensemble members: %s\n", getEnsembleMemberIndices( "aa", "ESP" ) );

printf( "Log message: %s\n", getLogMessages( "aa", "NWSNWMC00:0000026" ) );

/* Use pre-formatted strings for these test programs */


startTime = "2009-04-01T[Link]+00:00";
endTime = "2009-04-11T[Link]+00:00";

systemTime = getSystemTime( "aa" );

#define TRUE 1
#define FALSE 0

printf( "Timeseries:\n---begin---\n" );
printf( "%s", getTimeSeries( "TestConfig", "Reservoir", NULL,
startTime, systemTime, endTime,
parameters, nParameters, locations, nLocations,
NULL, -1, FALSE ) );
printf( "---end---\n" );

/*
* Running a task ...
*/
printf( "Running Santiam_Forecast ...\n" );

taskId = createTask( "aa" );


date = getSystemTime( "aa" );

taskRunId = runTask( "aa", taskId, "Santiam_Forecast", date, date, date, NULL, NULL,
"Test user", "PiWebservice taskrun" );

success = waitForTask( "aa", taskRunId, 120000); /* Wait for 120 seconds = 120000 ms */

printf( "%s\n", (success? "OK!" : "Problem!") );

TearDown();
}

Some notes

The program in the attachment uses several external libraries to take care of the actual connection. These libraries are: the Tcl library (version 8.4
or 8.5 should do) and the TclSOAP extension. More on this below.

As for the C interface itself:

Not all the services documented on the Wiki work for the sample configuration, as a local data store is not provided. This means that not
all services could be tested, but the main ones can be.
The C example produces the same output as the Tcl example, but there is a caveat when using the C interface: most wrapper functions
return a pointer to the Tcl result. Upon the next call to one of these wrapper functions, this pointer will be either invalidated ("dangling
pointer") or the contents of the memory it points to is changed. A solution would be to make a duplicate of the return value (see
getModuleStateBinary() for instance), but that puts the responsibility of cleaning up at the user's side.
Several C functions have extra arguments: getTimeSeries() takes two lists of IDs, the length of these lists is stored in an extra argument.
For returning the zipped contents of a module state file, I have also introduced an extra argument (C's strings are terminated with a null
byte, unless you use some count).
For convenience I have written a function that extracts an element from a space-separated list of words etc. This function contains a
memory leak, but that should not present a big problem.

To link the program using the gcc compiler, use:

gcc -o example example.c -I. -ltcl84

Code

The C code for the wrapper functions and a Tcl example are available via the attachments.

More on the Tcl libraries

The Tcl library and the TclSOAP extension can be downloaded from ActiveState ([Link]).
The TclSOAP extension uses the TclDOM 2.6 extension and Tcllib. All of these are available as open source.

Fews Workflow Runner service


Introduction
Fews Workflow Runner Service API
Installing a Workflow Runner Service
Installing the MC Service component
Installing the Workflow Runner
Example code
Setting up a connection
Running Workflows
Appendix
WebService XSD

Introduction
The Fews Workflow Runner service uses XFire, a java SOAP framework. This framework allows a client application to obtain a proxy instance to
the FewsWebServiceRunner API. With this API the client can run workflows on the MC from the client code. The timeseries produced by the
workflow run can be read by the client application. Before a client application can access the FEWS system there is some configuration work that
needs to be done.

Users looking to use XFire on a new project should use CXF instead. CXF is a continuation of the XFire project and is
considered XFire 2.0. It has many new features, a ton of bug fixes, and is now JAX-WS compliant! XFire will continue to be
maintained through bug fix releases, but most development will occur on CXF now. For more information see the XFire/Celtix
merge FAQ and the CXF website.

Fews Workflow Runner Service API


Description of the methods provided by the Fews Workflow Runner Service API.

Runs a FEWS workflow on the MC.

clientId: A descriptive id used in logging and passed as user id in the taskProperties. Required
workflowId: A workflow id known by the MC configuration. Required
forecastStartDateTime: The start time of the forecast. If provided a module state at or before the start time will be used. When not
specified the forecast will start at the last available warm state or will use a cold state when no warm state is available. WARNING!
Because XFire does not support nulls for date/times pass new Date(0) instead of null. Optional
forecastDateTime0: The time for new saved states during this run, a time observed data is likely to be available for all stations. When
not specified the current time will be used. WARNING! Because XFire does not support nulls for date/times pass new Date(0) instead of
null.
forecastEndDateTime: The end time of the forecast. When not specified a default is used specified in the fews configuration.
WARNING! Because XFire does not support nulls for date/times pass new Date(0) instead of null. Optional.
inputTimeSeries: The input timeseries required by the workflow.
returns: The output timeseries produced by the workflow.
throws: An exception when something goes wrong.

Installing a Workflow Runner Service


The Workflow Runner Service actually consists of two service components. The first service component is the McTaskWebService and is hosted
by the MC. The second service component is the FewsWebService which is the component being described under heading Fews Workflow
Runner Service API. The FewsWebService is started up by the client application.

Installing the MC Service component

The MC Service component is started up by the MC (TaskWebServiceRunner). The client application does not use this service directly. The only
configuration required for this component is that the following line is added to the MC configuration file; [Link].


Installing the Workflow Runner

The Workflow Runner service requires a configuration file that is an instance of the [Link].

port: This is the port number on which the FewsWebService will be hosted. This port must be accessible by the client application.
timeOutSeconds: This is the length of time that the FewsWebService will wait for the workflow to complete running.
inputPiTimeSeriesFile: This is the file from which the MC workflow run will read the input timeseries. When calling the FewsWebService
API the timeseries passed as argument will be written to this file. This file must therefore match the file configured in the MC workflow.
outputPiTimeSeriesFile [1..n]: These are the files to which the MC workflow will write the output timeseries. When calling the
FewsWebService API the timeseries are read from the output files after the workflow run is completed. These timeseries are returned by
the call. These files must therefore match the files configured in the MC workflow.
mcTaskWebService: This contains information that allows the FewsWebService to connect to a specific running instance of the
McTaskWebService. Although this entry is marked as optional in the schema, it is in fact required.

Example of a WebService xml file.

Starting on Windows

Step 1: Install the webservice [Link]: Attach the webservice package!

Step 2: Make a new [Link] and [Link] file in the \bin directory. The [Link] must contain the following
information.

/[Link]

Step 4: Start the FewsWebServiceRunner by clicking on the [Link].

Step 5: Stop the FewsWebServiceRunner by killing the application using the System Monitor.

Install windows service

TODO: Package not available!

Starting on Linux

Step 1: Install the webservice [Link]: Attach the webservice package!

Step 2: Set the correct paths in the fews_webservice.sh script.

Step 3: Start the fews_webservice.sh script by typing ./fews_webservice.sh start

Step 4: To stop the service type ./fews_webservice.sh stop

To make sure that the service keeps running there is also a 'watcher' script. This script should be run as a cron job. What this script does is check
whether the fews webservice script is still running; if the service is not running, it is restarted.

Example code
Here are some examples of how a client application would instantiate a FewsWebService and fire off a workflow to the MC.

Before starting, the client requires the following library: [Link]. This library can be found in the bin directory of the FEWS system.

Setting up a connection
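
A minimal sketch of obtaining a proxy to the FewsWebService, assuming an XFire client is used (as stated in the introduction above). The
interface name FewsWebService, the endpoint path and the port number are assumptions; the port must match the port element of the
WebService configuration file.

import org.codehaus.xfire.client.XFireProxyFactory;
import org.codehaus.xfire.service.Service;
import org.codehaus.xfire.service.binding.ObjectServiceFactory;

// Hypothetical sketch: 'FewsWebService' is assumed to be the service interface shipped with the
// webservice library mentioned above; host, port and path are placeholders.
public class WorkflowRunnerConnectionExample {

    public static FewsWebService connect() throws Exception {
        Service serviceModel = new ObjectServiceFactory().create(FewsWebService.class);
        return (FewsWebService) new XFireProxyFactory()
                .create(serviceModel, "http://localhost:8101/FewsWebService");
    }
}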

Running Workflows
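
A minimal sketch of a workflow run call, based on the parameter list given under 'Fews Workflow Runner Service API'. The method name
runWorkflow and the type used for the timeseries argument and return value are assumptions; the use of new Date(0) for unspecified date/time
arguments follows the API description above.

import java.util.Date;

public class WorkflowRunnerRunExample {

    // Hypothetical sketch: 'fewsWebService' is a proxy obtained as shown under 'Setting up a connection';
    // the method name 'runWorkflow' and the String timeseries type are assumptions, the argument order
    // follows the API description (clientId, workflowId, forecastStartDateTime, forecastDateTime0,
    // forecastEndDateTime, inputTimeSeries).
    public String runWorkflowExample(FewsWebService fewsWebService, String inputPiTimeSeries) throws Exception {
        // XFire does not support null for date/time arguments: pass new Date(0) to fall back to the
        // defaults described above (last warm state, current time, configured default end time).
        return fewsWebService.runWorkflow(
                "exampleClient",       // clientId: descriptive id, passed as user id in the taskProperties
                "Santiam_Forecast",    // workflowId: must be known by the MC configuration (illustrative id)
                new Date(0),           // forecastStartDateTime: not specified
                new Date(0),           // forecastDateTime0: not specified, current time will be used
                new Date(0),           // forecastEndDateTime: not specified, configured default will be used
                inputPiTimeSeries);    // inputTimeSeries: input required by the MC workflow
    }
}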

Appendix

WebService XSD

Example of the WebService XSD

JDBC vs. FewsPiService
Currently the FEWS JDBC server and the FewsPiService co-exist within the FEWS system. They are both hosted by an instance of a FEWS
Operator Client. At present an Operator Client can host either an instance of the FEWS JDBC server or an instance of the FewsPiService. Not
both at the same time.

What are the advantages of the JDBC server?

simple access to Locations, Parameters and Timeseries
predefined graphs
simple timeseries statistics
access to the Filter configuration

What are the disadvantages of the JDBC server ?


read only
can not run workflows
hard to expand functionality

What are the advantages of the FewsPiServer?


provides access to WarmStates, ModuleParameters and ModuleDataSets
read and write options
able to run workflows
easy to expand functionality

What are the disadvantages of the FewsPiServer?


no predefined graphs. Need to make your own
complex configuration required (Note! currently work in progress to simplify configuration)
no [Link] and [Link] (~containing timeseries parameters)

19 Parallel running of ensemble loops


Function: runInLoopParallelProcessorCount, sets the number of cores available to Delft-FEWS when running ensemble workflows

Module Name: runInLoopParallelProcessorCount

Where to Use? global properties file

Why to Use? to speed-up ensemble runs on multi core machines

Description: The runInLoopParallelProcessorCount entry in the global properties file indicates the number of cores Delft-FEWS may
use when running ensemble members in a loop

Preconditions: 2009-02 release, multi core cpu or multi cpu computer

Outcome(s): speed-up of the computations

Screendump(s): link to attached screendump(s) for displays only

Remark(s): The speedup that may be obtained is highly dependent on the type of module you are running

Available since: DelftFEWS200902

Contents
Contents
Overview
Configuration
Tested modules
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference

Overview
Delft-FEWS can split ensemble workflows (that have the runInLoop element set to true) over multiple cores. Based on the number of available
cores, a queue is created for each core. When running the activity, the different ensemble members are added to these queues. An example of a
workflow that can use this feature is shown below:

<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>MOGREPS_Spatial_Interpolation</moduleInstanceId>
<ensemble>
<ensembleId>MOGREPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>

Configuration
By default Delft-FEWS will only use one core and all tasks are run one after another. To enable the parallel running of ensemble members the
runInLoopParallelProcessorCount entry must be set in the global properties file. Here you either specify the number of cores to use or specify 100
to use all available cores.

In the global properties

Config Example
# to use all available cores/cpu's:
runInLoopParallelProcessorCount=100

For all internal Delft-FEWS modules that have been tested no changes are needed to the configuration. For external modules that are run using
the General adapter some changes may be needed to the configuration.

Tested modules

Module Remarks

Transformation (old) Test ok

Interpolation Test ok. Interpolation via DLL not tested

TransformationModule (new) Test ok. Interpolation via DLL not tested

pcrTransformation Test ok

General Adapter test ok

Sample input and output


Sample input and output

Error and warning messages


Description of errors and warnings that may be generated

Error: Error message

Action: Action to fix

Known issues
Running modules in parallel means you will use more memory

In some cases, the increase in speed may be very limited. Although it depends on the individual case, the following simple rules may be used to
determine the expected increase in execution speed:

execution time of an individual module <= 1 sec: expected increase < 20%
execution time of an individual module > 1 sec < 10: expected increase between 20 and 50%
execution time of an individual module > 10 sec: expected increase > 50 % and < 100 %

The percentages given in the list above should be scaled with the number of cores used. The 100% in the list above corresponds to a two-fold
increase when using two cores.

Other factors that influence this are the amount of data being retrieved and stored in the FEWS database in relation to the total execution time and
(in the case of an external module) the amount of data written to and read from the file system.

Related modules and documentation


Links to related parts of the system

Technical reference

Entry in moduleDescriptors: none

Link to schema: none

Appendices
A Colours Available in DELFT-FEWS
B Enumerations

A Colours Available in DELFT-FEWS

The following colour names can be used in DELFT-FEWS

none none medium spring green medium spring green 348017

alice blue alice blue eff7ff medium turquoise medium turquoise 48cccd

white white ffffff medium violet red medium violet red ca226b

antique white antique white f9e8d2 midnight blue midnight blue 151b54

antique white1 antique white1 feedd6 mint cream mint cream f5fff9

antique white2 antique white2 ebdbc5 misty rose misty rose fde1dd

antique white3 antique white3 c8b9a6 misty rose2 misty rose2 ead0cc

antique white4 antique white4 817468 misty rose3 misty rose3 c6afac

aquamarine aquamarine 43b7ba misty rose4 misty rose4 806f6c

black black 000000 navajo white navajo white fddaa3

blanched almond blanched almond fee8c6 navajo white2 navajo white2 eac995

blue blue 0000ff navajo white3 navajo white3 c7aa7d

blue violet blue violet 7931df navajo white4 navajo white4 806a4b

brown brown 980517 navy navy 150567

cadet blue cadet blue 578693 old lace old lace fcf3e2

cadet blue1 cadet blue1 99f3ff olive drab olive drab 658017

cadet blue2 cadet blue2 8ee2ec olive drab1 olive drab1 c3fb17

cadet blue3 cadet blue3 77bfc7 olive drab2 olive drab2 b5e917

cadet blue4 cadet blue4 4c787e olive drab3 olive drab3 99c517

coral coral f76541 olive drab4 olive drab4 617c17

cornflower blue cornflower blue 151b8d orange orange f87a17

cyan cyan 00ffff orange red orange red f63817

dark goldenrod dark goldenrod af7817 orange red2 orange red2 e43117

dark goldenrod1 dark goldenrod1 fbb117 orange red3 orange red3 c22817

dark goldenrod2 dark goldenrod2 e8a317 orange red4 orange red4 7e0517

dark goldenrod3 dark goldenrod3 c58917 orchid orchid e57ded

dark goldenrod4 dark goldenrod4 7f5217 pale goldenrod pale goldenrod ede49e

dark green dark green 254117 pale green pale green 79d867

dark khaki dark khaki b7ad59 pale green1 pale green1 a0fc8d

dark olive green dark olive green 4a4117 pale green2 pale green2 94e981

dark olive green1 dark olive green1 ccfb5d pale green3 pale green3 7dc56c

dark olive green2 dark olive green2 bce954 pale green4 pale green4 4e7c41

dark olive green3 dark olive green3 a0c544 pale turquoise pale turquoise aeebec

dark olive green4 dark olive green4 667c26 pale turquoise1 pale turquoise1 bcfeff

dark orange dark orange f88017 pale turquoise2 pale turquoise2 adebec

dark orange1 dark orange1 f87217 pale turquoise3 pale turquoise3 92c7c7

dark orange2 dark orange2 e56717 pale turquoise4 pale turquoise4 5e7d7e

dark orange3 dark orange3 c35617 pale violet red pale violet red d16587

dark orange4 dark orange4 7e3117 pale violet red1 pale violet red1 f778a1

dark orchid dark orchid 7d1b7e pale violet red2 pale violet red2 e56e94

dark orchid1 dark orchid1 b041ff pale violet red3 pale violet red3 c25a7c

dark orchid2 dark orchid2 a23bec pale violet red4 pale violet red4 7e354d

dark orchid3 dark orchid3 8b31c7 papaya whip papaya whip feeccf

dark orchid4 dark orchid4 571b7e peach puff peach puff fcd5b0

dark salmon dark salmon e18b6b peach puff2 peach puff2 eac5a3

dark sea green dark sea green 8bb381 peach puff3 peach puff3 c6a688

dark sea green1 dark sea green1 c3fdb8 peach puff4 peach puff4 806752

dark sea green2 dark sea green2 b5eaaa pink pink faafbe

dark sea green3 dark sea green3 99c68e plum plum b93b8f

dark sea green4 dark sea green4 617c58 powder blue powder blue addce3

dark slate blue dark slate blue 2b3856 red red ff0000

dark slate gray dark slate gray 25383c rosy brown rosy brown b38481

dark slate gray1 dark slate gray1 9afeff rosy brown1 rosy brown1 fbbbb9

dark slate gray2 dark slate gray2 8eebec rosy brown2 rosy brown2 e8adaa

dark slate gray3 dark slate gray3 78c7c7 rosy brown3 rosy brown3 c5908e

dark slate gray4 dark slate gray4 4c7d7e rosy brown4 rosy brown4 7f5a58

dark turquoise dark turquoise 3b9c9c royal blue royal blue 2b60de

dark violet dark violet 842dce royal blue1 royal blue1 306eff

deep pink deep pink f52887 royal blue2 royal blue2 2b65ec

deep pink2 deep pink2 e4287c royal blue3 royal blue3 2554c7

deep pink3 deep pink3 c12267 royal blue4 royal blue4 15317e

deep pink4 deep pink4 7d053f sandy brown sandy brown ee9a4d

deep sky blue deep sky blue 3bb9ff sea green sea green 4e8975

deep sky blue2 deep sky blue2 38acec sea green1 sea green1 6afb92

deep sky blue3 deep sky blue3 3090c7 sea green2 sea green2 64e986

deep sky blue4 deep sky blue4 25587e sea green3 sea green3 54c571

dim gray dim gray 463e41 sea green4 sea green4 387c44

dodger blue dodger blue 1589ff sienna sienna 8a4117

dodger blue2 dodger blue2 157dec sky blue sky blue 6698ff

dodger blue3 dodger blue3 1569c7 sky blue1 sky blue1 82caff

dodger blue4 dodger blue4 153e7e sky blue2 sky blue2 79baec

firebrick firebrick 800517 sky blue3 sky blue3 659ec7

floral white floral white fff9ee sky blue4 sky blue4 41627e

forest green forest green 4e9258 slate blue slate blue 737ca1

ghost white ghost white f7f7ff slate blue1 slate blue1 7369ff

gold gold d4a017 slate blue2 slate blue2 6960ec

goldenrod goldenrod edda74 slate blue3 slate blue3 574ec7

gray gray 736f6e slate blue4 slate blue4 342d7e

gray0 gray0 150517 slate gray slate gray 657383

gray100 gray100 ffffff slate gray1 slate gray1 c2dfff

gray18 gray18 250517 slate gray2 slate gray2 b4cfec

gray21 gray21 2b1b17 slate gray3 slate gray3 98afc7

gray23 gray23 302217 slate gray4 slate gray4 616d7e

gray24 gray24 302226 spring green spring green 4aa02c

gray25 gray25 342826 spring green1 spring green1 5efb6e

gray26 gray26 34282c spring green2 spring green2 57e964

gray27 gray27 382d2c spring green3 spring green3 4cc552

gray28 gray28 3b3131 spring green4 spring green4 347c2c

gray29 gray29 3e3535 steel blue steel blue 4863a0

gray30 gray30 413839 steel blue1 steel blue1 5cb3ff

gray31 gray31 41383c steel blue2 steel blue2 56a5ec

gray32 gray32 463e3f steel blue3 steel blue3 488ac7

gray34 gray34 4a4344 steel blue4 steel blue4 2b547e

gray35 gray35 4c4646 tan tan d8af79

gray36 gray36 4e4848 thistle thistle d2b9d3

gray37 gray37 504a4b turquoise turquoise 43c6db

gray38 gray38 544e4f violet violet 8d38c9

gray39 gray39 565051 violet red violet red e9358a

gray40 gray40 595454 violet red1 violet red1 f6358a

gray41 gray41 5c5858 violet red2 violet red2 e4317f

gray42 gray42 5f5a59 violet red3 violet red3 c12869

gray43 gray43 625d5d violet red4 violet red4 7d0541

gray44 gray44 646060 wheat wheat f3daa9

gray45 gray45 666362 yellow yellow ffff00

gray46 gray46 696565 yellow green yellow green 52d017

gray47 gray47 6d6968 aquamarine1 aquamarine1 87fdce

gray48 gray48 6e6a6b aquamarine2 aquamarine2 7deabe

gray49 gray49 726e6d aquamarine3 aquamarine3 69c69f

gray50 gray50 747170 aquamarine4 aquamarine4 417c64

gray51 gray51 787473 azure azure efffff

gray52 gray52 7a7777 azure2 azure2 deecec

gray53 gray53 7c7979 azure3 azure3 bcc7c7

gray54 gray54 807d7c azure4 azure4 7a7d7d

gray55 gray55 82807e beige beige f5f3d7

gray56 gray56 858381 bisque bisque fde0bc

gray57 gray57 878583 bisque2 bisque2 ead0ae

gray58 gray58 8b8987 bisque3 bisque3 c7af92

gray59 gray59 8d8b89 bisque4 bisque4 816e59

gray60 gray60 8f8e8d blue1 blue1 1535ff

gray61 gray61 939190 blue2 blue2 1531ec

gray62 gray62 959492 blue3 blue3 1528c7

gray63 gray63 999795 blue4 blue4 151b7e

gray64 gray64 9a9998 brown1 brown1 f63526

gray65 gray65 9e9c9b brown2 brown2 e42d17

gray66 gray66 a09f9d brown3 brown3 c22217

gray67 gray67 a3a2a0 burlywood1 burlywood1 fcce8e

gray68 gray68 a5a4a3 burlywood2 burlywood2 eabe83

gray69 gray69 a9a8a6 burlywood3 burlywood3 c6a06d

gray70 gray70 acaba9 burlywood4 burlywood4 806341

gray71 gray71 aeadac chartreuse chartreuse 8afb17

gray72 gray72 b1b1af chartreuse2 chartreuse2 7fe817

gray73 gray73 b3b3b1 chartreuse3 chartreuse3 6cc417

gray74 gray74 b7b6b4 chartreuse4 chartreuse4 437c17

gray75 gray75 b9b8b6 chocolate chocolate c85a17

gray76 gray76 bcbbba coral2 coral2 e55b3c

gray77 gray77 bebebc coral3 coral3 c34a2c

gray78 gray78 c1c1bf coral4 coral4 7e2817

gray79 gray79 c3c4c2 cornsilk cornsilk fff7d7

gray80 gray80 c7c7c5 cornsilk2 cornsilk2 ece5c6

gray81 gray81 cacac9 cornsilk3 cornsilk3 c8c2a7

gray82 gray82 cccccb cornsilk4 cornsilk4 817a68

gray83 gray83 d0cfcf cyan1 cyan1 57feff

gray84 gray84 d2d2d1 cyan2 cyan2 50ebec

gray85 gray85 d5d5d4 cyan3 cyan3 46c7c7

gray86 gray86 d7d7d7 cyan4 cyan4 307d7e

gray87 gray87 dbdbd9 firebrick1 firebrick1 f62817

gray88 gray88 dddddc firebrick2 firebrick2 e42217

gray89 gray89 e0e0e0 firebrick3 firebrick3 c11b17

gray90 gray90 e2e3e1 gainsboro gainsboro d8d9d7

gray91 gray91 e5e6e4 gold1 gold1 fdd017

gray92 gray92 e8e9e8 gold2 gold2 eac117

gray93 gray93 ebebea gold3 gold3 c7a317

gray94 gray94 eeeeee gold4 gold4 806517

gray95 gray95 f0f1f0 goldenrod1 goldenrod1 fbb917

gray96 gray96 f4f4f3 goldenrod2 goldenrod2 e9ab17

gray97 gray97 f6f6f5 goldenrod3 goldenrod3 c68e17

gray98 gray98 f9f9fa goldenrod4 goldenrod4 805817

gray99 gray99 fbfbfb green1 green1 5ffb17

green green 00ff00 green2 green2 59e817

green yellow green yellow b1fb17 green3 green3 4cc417

hot pink hot pink f660ab green4 green4 347c17

hot pink1 hot pink1 f665ab honeydew honeydew f0feee

hot pink2 hot pink2 e45e9d honeydew2 honeydew2 deebdc

hot pink3 hot pink3 c25283 honeydew3 honeydew3 bcc7b9

hot pink4 hot pink4 7d2252 honeydew4 honeydew4 7a7d74

indian red indian red 5e2217 ivory ivory ffffee

indian red1 indian red1 f75d59 ivory2 ivory2 ececdc

indian red2 indian red2 e55451 ivory3 ivory3 c9c7b9

indian red3 indian red3 c24641 ivory4 ivory4 817d74

indian red4 indian red4 7e2217 khaki1 khaki1 fff380

khaki khaki ada96e khaki2 khaki2 ede275

lavender blush lavender blush fdeef4 khaki3 khaki3 c9be62

lavender blush2 lavender blush2 ebdde2 khaki4 khaki4 827839

lavender blush3 lavender blush3 c8bbbe lavender lavender e3e4fa

lavender blush4 lavender blush4 817679 linen linen f9eee2

lawn green lawn green 87f717 magenta1 magenta1 f43eff

lemon chiffon lemon chiffon fff8c6 magenta2 magenta2 e238ec

lemon chiffon2 lemon chiffon2 ece5b6 magenta3 magenta3 c031c7

lemon chiffon3 lemon chiffon3 c9c299 maroon1 maroon1 f535aa

lemon chiffon4 lemon chiffon4 827b60 maroon2 maroon2 e3319d

light blue light blue addfff maroon3 maroon3 c12283

light blue1 light blue1 bdedff maroon4 maroon4 7d0552

light blue2 light blue2 afdcec moccasin moccasin fde0ac

light blue3 light blue3 95b9c7 orange1 orange1 fa9b17

light blue4 light blue4 5e767e orange2 orange2 e78e17

light coral light coral e77471 orange3 orange3 c57717

light cyan light cyan e0ffff orange4 orange4 7f4817

light cyan2 light cyan2 cfecec orchid1 orchid1 f67dfa

light cyan3 light cyan3 afc7c7 orchid2 orchid2 e473e7

light cyan4 light cyan4 717d7d orchid3 orchid3 c160c3

light goldenrod light goldenrod ecd872 orchid4 orchid4 7d387c

light goldenrod1 light goldenrod1 ffe87c peru peru c57726

light goldenrod2 light goldenrod2 ecd672 pink2 pink2 e7a1b0

light goldenrod3 light goldenrod3 c8b560 pink3 pink3 c48793

light goldenrod4 light goldenrod4 817339 pink4 pink4 7f525d

light goldenrod yellow light goldenrod yellow faf8cc plum1 plum1 f9b7ff

light pink light pink faafba plum2 plum2 e6a9ec

light pink1 light pink1 f9a7b0 plum3 plum3 c38ec7

light pink2 light pink2 e799a3 plum4 plum4 7e587e

light pink3 light pink3 c48189 purple purple 8e35ef

light pink4 light pink4 7f4e52 purple1 purple1 893bff

light salmon light salmon f9966b purple2 purple2 7f38ec

light salmon2 light salmon2 e78a61 purple3 purple3 6c2dc7

light salmon3 light salmon3 c47451 purple4 purple4 461b7e

light salmon4 light salmon4 7f462c red1 red1 f62217

light sea green light sea green 3ea99f red2 red2 e41b17

light sky blue light sky blue 82cafa salmon1 salmon1 f88158

light sky blue2 light sky blue2 a0cfec salmon2 salmon2 e67451

light sky blue3 light sky blue3 87afc7 salmon3 salmon3 c36241

light sky blue4 light sky blue4 566d7e salmon4 salmon4 7e3817

light slate blue light slate blue 736aff seashell seashell fef3eb

light slate gray light slate gray 6d7b8d seashell2 seashell2 ebe2d9

light steel blue light steel blue 728fce seashell3 seashell3 c8bfb6

light steel blue1 light steel blue1 c6deff seashell4 seashell4 817873

light steel blue2 light steel blue2 b7ceec sienna1 sienna1 f87431

light steel blue3 light steel blue3 9aadc7 sienna2 sienna2 e66c2c

light steel blue4 light steel blue4 646d7e sienna3 sienna3 c35817

light yellow light yellow fffedc sienna4 sienna4 7e3517

light yellow2 light yellow2 edebcb snow snow fff9fa

light yellow3 light yellow3 c9c7aa snow2 snow2 ece7e6

light yellow4 light yellow4 827d6b snow3 snow3 c8c4c2

lime green lime green 41a317 snow4 snow4 817c7b

magenta magenta ff00ff tan1 tan1 fa9b3c

maroon maroon 810541 tan2 tan2 e78e35

medium aquamarine medium aquamarine 348781 thistle1 thistle1 fcdfff

medium blue medium blue 152dc6 thistle2 thistle2 e9cfec

medium forest green medium forest green 347235 thistle3 thistle3 c6aec7

medium goldenrod medium goldenrod ccb954 thistle4 thistle4 806d7e

medium orchid medium orchid b048b5 tomato tomato f75431

medium orchid1 medium orchid1 d462ff tomato2 tomato2 e54c2c

medium orchid2 medium orchid2 c45aec tomato3 tomato3 c23e17

medium orchid3 medium orchid3 a74ac7 turquoise1 turquoise1 52f3ff

medium orchid4 medium orchid4 6a287e turquoise2 turquoise2 4ee2ec

medium purple medium purple 8467d7 turquoise3 turquoise3 43bfc7

medium purple1 medium purple1 9e7bff turquoise4 turquoise4 30787e

medium purple2 medium purple2 9172ec wheat1 wheat1 fee4b1

medium purple3 medium purple3 7a5dc7 wheat2 wheat2 ebd3a3

medium purple4 medium purple4 4e387e wheat3 wheat3 c8b189

medium sea green medium sea green 306754 wheat4 wheat4 816f54

medium slate blue medium slate blue 5e5a80 yellow1 yellow1 fffc17

A Enumerations
A.1 GeoDatum
A.2 Time Zones
A.3 Units
A.4 Data quality flags
A.5 Synchronisation Levels

A.1 GeoDatum
DELFT-FEWS may use a number of national coordinate systems as geo-datum. These are referenced by all configurations requiring a definition of
the geo-datum.

All coordinates are handled internally as WGS 1984 (longitude-latitude). To add a new coordinate system to DELFT-FEWS, the transformation
between WGS 1984 and that system will need to be added as a Java class to DELFT-FEWS.

The list of supported GeoDatum values is:

+ WGS 1984 (Geographic projection; longitude-latitude)


+ SVY21 (Singapore)
+ Ordnance Survey Great Britain 1936 (Great Britain)
+ CH1903 (Switzerland)
+ Rijks Driehoekstelsel (The Netherlands)
+ Gauss Krueger Austria M34 (Austria)
+ Gauss Krueger Austria M31 (Austria)
+ Gauss Krueger Meridian3 (Germany)
+ TWD 1967 (Taiwan)

Plus other organisation-specific coordinate conversions.

The user can also specify a UTM zone. This should be given in the form UTM48N or UTM48S, i.e. the UTM zone number followed by N or S for the northern or southern hemisphere.
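
As an illustration, the geo-datum is referenced by its name in the configuration. A minimal sketch, assuming the geoDatum element as used in, for example, grid or location definitions:

<geoDatum>Rijks Driehoekstelsel</geoDatum>

or, when using a UTM zone:

<geoDatum>UTM48N</geoDatum>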

A.2 Time Zones


DELFT-FEWS supports a number of time zones:

+ GMT Greenwich Mean Time (UTC+0.00)


+ CET Central European Time (UTC+1.00)
+ EET Eastern European Time (UTC+2.00)
+ WET Western European Time (UTC+0.00)

You can now specify any time zone relative to GMT, e.g.:

<timeZoneName>GMT+9</timeZoneName>
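
The named time zones listed above can be used in the same way. A minimal sketch, assuming the same timeZoneName element:

<timeZoneName>CET</timeZoneName>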


A.3 Units
DELFT-FEWS supports a list of units. Most of these are SI units.

Unit Description

m Metres

mm Millimetres

m3/s Cubic metres per second

oC Degrees Centigrade

mm/hr Millimetres per hour

% Percentage

degrees Degrees (directional)

Bft Beaufort

m/s Metres per second

- Dimensionless

W/m2 Watts per metre squared
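
These unit identifiers are used wherever a configuration element expects a unit. A minimal sketch, assuming the unit element of a parameter definition:

<unit>m3/s</unit>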

A.4 Data quality flags
Quality flags are constructed from two qualifiers: the first describes the origin of the data and the second describes its quality.

Possible origins of data are:

+ Original: the data value is the original value; it has not been amended by DELFT-FEWS.
+ Completed: the original value was missing and has been replaced by a non-missing value.
+ Corrected: the original value has been replaced with another non-missing value.

Possible qualifiers are:

+ Reliable: Data is reliable and valid


+ Doubtful: The validity of the data value is uncertain
+ Unreliable: The data value is unreliable and cannot be used.

Following this specification, the table below gives an overview of the quality flag enumerations.

Table 1 Enumeration of quality flags

Enumeration Description

0 Original/Reliable
The data value is the original value retrieved from an external source and it successfully passes all validation criteria set.

1 Corrected/Reliable
The original value was removed and corrected. Correction may be through interpolation or manual editing.

2 Completed/Reliable
Original value was missing. Value has been filled in through interpolation, transformation (e.g. stage discharge) or a model.

3 Original/Doubtful
Observed value retrieved from external data source. Value is valid, but marked as suspect due to soft validation limits being
exceeded.

4 Corrected/Doubtful
The original value was removed and corrected. However, the corrected value is doubtful due to validation limits.

5 Completed/Doubtful
Original value was missing. Value has been filled in as above, but resulting value is doubtful due to limits in
transformation/interpolation or input value used for transformation being doubtful.

6 Original/Unreliable
Observed value retrieved from an external data source. The value is invalid due to the validation limits set and is removed.

7 Corrected/Unreliable
The original value was removed and corrected. However, the corrected value is unreliable.

8 Completed/Unreliable
Original value was missing. Value has been filled in as above, but the resulting value is unreliable.

9 Missing
Missing value in the originally observed series. Note that this is a special form of both Original/Unreliable and Original/Reliable.

Notes:

No difference is made between historic and forecast data. This is not considered a quality flag. The data model of DELFT-FEWS is
constructed such that this difference is inherent to the time series type definition.
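
When time series are exchanged through the Delft-FEWS Published Interface (PI) XML format, these enumeration values appear as the flag attribute of each event. A minimal sketch, with hypothetical date, time and value:

<event date="2012-01-01" time="12:00:00" value="3.75" flag="0"/>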

A.5 Synchronisation Levels


To allow optimisation of data flows in DELFT-FEWS when set up in a distributed environment, synchronisation levels can be defined. These
synchronisation levels are integers. The behaviour of each synchronisation level is determined through the configuration of the synchronisation
channels and synchronisation profiles. When required, additional synchronisation levels can be added to further refine the synchronisation
process. The current convention is:

synchLevel | description | application

0 (Default) | All scalar data from a forecast run | all systems

1 | Scalar time series imported from telemetry. NB the length of data synchronised will depend on the login-profile selected; typically this will be data generated up to 7 days ago | all systems

2 | All grid data from a forecast run (e.g. Flood Mapping results) | all systems

3 | Large volumes of scalar data such as CatAvg data (forecasts, actuals & NWP) | all systems

4 | Used for data imported infrequently such as Astronomical or Climatological data | all systems

5 | Data edited on OC | all systems

6 | (small) Grid data imported from external forecast (synchronised to OC) | all systems

7 | Grid data imported from external forecast (synchronised to FSS & MC only, and not to OC) | all systems

8 | Performance indicator time series. These are time series that do not need to be synchronised with a short synchronisation interval or when a forecaster logs in with a minimum profile. | all systems

9 | Temporary time series not requiring synchronisation. | all systems

11 | Specific ModuleDataset files, which should be downloaded and activated directly after logging in and after each upload of a new version of the file (synch to OC). This is used in the Configuration Manager when uploading the module dataset. | most systems

16 | (large) Grid data imported from external forecast (synch. to OC) | NFFS: to distinguish between small (synchLevel 6) and large grids

20 | WarmStates (synch to OC) | all systems

21 | Aggregated grids (flows/heads) forecasts | used in NGMS

22 | Grid data (heads) timeseries | used in NGMS

23 | Grid data (flows) timeseries | used in NGMS

30 | Timeseries modifiers | used in CHPS (NWS)

90-100 | Reserved | used internally
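
A synchronisation level is typically assigned per time series set in the module configuration. A minimal sketch, assuming the optional synchLevel element of a timeSeriesSet:

<timeSeriesSet>
    ...
    <synchLevel>4</synchLevel>
</timeSeriesSet>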

WhatIfScenarioEditor
Scenario Editor

Module Name: WhatIfScenarioEditor (display)

Description: Visual version of a what-if editor. Defines what-if scenarios similar to the (old) WhatIfScenarioFilter.

Why to Use? To provide graphical feedback on the time series defined and to assign location properties to dummy locations.

Where to Use?

Config Example

Screendump:

Outcome(s):

Remark(s):

Available since: DelftFEWS200801
