FEWSDOC Configuration Guide
1.1 01 Structure of a DELFT-FEWS Configuration
1.2 02 Data Handling in DELFT-FEWS
1.3 03 System Configuration
1.3.1 01 FEWS Explorer
1.3.2 02 Time Series Display Configuration
1.3.3 03 Display Groups
1.3.4 04 Location Icons
1.3.5 05 Module Descriptors
1.3.6 06 Display Descriptors
1.3.7 07 Permissions
1.4 04 Regional Configuration
1.4.1 01 Locations
1.4.2 01 - Related Locations
1.4.3 02 LocationSets
1.4.4 03 Parameters
1.4.5 05 Branches
1.4.6 06 Grids
1.4.7 07 Filters
1.4.8 08 ValidationRulesets
1.4.9 09 Thresholds
1.4.10 10 ThresholdValueSets
1.4.11 11 ColdModuleInstanceStateGroups
1.4.12 12 ModuleInstanceDescriptors
1.4.13 13 WorkflowDescriptors
1.4.14 14 IdMapDescriptors
1.4.15 15 FlagConversionsDescriptors
1.4.16 16 UnitConversionsDescriptors
1.4.17 17 CorrelationEventSetsDescriptors
1.4.18 18 TravelTimesDescriptors
1.4.19 19 TimeUnits
1.4.20 20 Historical Events
1.4.21 21 Value Attribute Maps
1.4.22 22 Locations and attributes defined in Shape-DBF files
1.4.23 23 Qualifiers
1.4.24 24 Topology
1.4.25 25 ModifierTypes
1.4.26 26 TimeSteps
1.5 05 Configuring the available DELFT-FEWS modules
1.5.1 01 Interpolation Module
1.5.2 02 Transformation Module
1.5.3 03 Import Module
[Link] Available data types
[Link].1 HCS
[Link].2 HymosAscii
[Link].3 LMW
[Link].4 MM3P
[Link].5 Pegelonline
[Link].6 WQCSV
[Link].7 ArcInfoAscii
[Link].8 ArcWat
[Link].9 BIL Import
[Link].10 BUFR
[Link].11 CSV
[Link].12 Database import
[Link].13 Delft-Fews Published Interface timeseries Format (PI) Import
[Link].14 DINO
[Link].15 DIVER MON
[Link].16 FewsDatabase Import
[Link].17 Gray Scale Image
[Link].18 hdf4
[Link].19 HYMOS
[Link].20 KNMI CSV
[Link].21 KNMI EPS
[Link].22 KNMI HDF5
[Link].23 KNMI IRIS
[Link].24 KNMI SYNOP
[Link].25 Landsat-HDF5
[Link].26 LUBW
[Link].27 Matroos NetCDF
[Link].28 Msw
[Link].29 NETCDF-CF_PROFILE
[Link].30 NETCDF-CF_GRID
[Link].31 NETCDF-CF_TIMESERIES
[Link].32 NOOS
[Link].33 NTUQUARTER Import
[Link].34 NTURAIN Import
[Link].35 SSE
[Link].36 TMX
[Link].37 Wiski
[Link].38 WSCC csv
[Link].39 Singapore OMS Lake Diagnostic System files
[Link].40 EasyQ
[Link].41 McIdasArea
[Link].42 Keller IDC
[Link].43 Obserview
[Link].44 generalCsv
[Link].45 DINO Service
[Link].46 GermanSnow
[Link].47 Delft3D-Flow
[Link].48 CERF
[Link].49 SWE
[Link].50 NetcdfGridDataset
[Link].51 IP1
[Link].52 IFKIS
[Link].53 IJGKlepstanden
[Link].54 Radolan
[Link].55 Bayern
[Link] Custom time series import formats using java
[Link].1 [Link]
[Link].2 [Link]
[Link].3 [Link]
[Link] Import data using OPeNDAP
[Link] Import Module configuration options
1.5.4 04 Export modules
[Link] EA XML Export Module
[Link].1 GRDC Export Format
[Link] Export module
[Link] Export module, available data types
[Link].1 BfG Export
[Link].2 CSV Export
[Link].3 DINO Export
[Link].4 Fliwas Export
[Link].5 GIN Export
[Link].6 GRDS Export
[Link].7 iBever Export
[Link].8 Menyanthes
[Link].9 NetCDF Alert Export
[Link].10 NETCDF-CF_GRID_MATROOS Export
[Link].11 NETCDF-CF_GRID Export
[Link].12 NETCDF-CF_PROFILE_MATROOS Export
[Link].13 NETCDF-CF_PROFILE Export
[Link].14 NETCDF-CF_TIMESERIES_MATROOS Export
[Link].15 NETCDF-CF_TIMESERIES Export
[Link].16 NetCDF MapD Export
[Link].17 PI Export
[Link].18 Rhine Alarm Model
[Link].19 SHEF Export
[Link].20 TSD Export
[Link].21 UM Aquo export
[Link] Rdbms Export
[Link] Report Export
1.5.5 05 General Adapter Module
1.5.6 06 Lookup Table Module
1.5.7 07 Correlation Module
1.5.8 08 Error Correction Module (ARMA)
[Link] AR Module Background information
1.5.9 09 Report Module
[Link] Improved report configuration
[Link] Tags in report template
[Link] Using Colours in Tables
1.5.10 10 Performance Indicator Module
1.5.11 11 Amalgamate Import Data Module
1.5.12 12 Archive Module
1.5.13 13 Rolling Barrel Module
1.5.14 14 Support Location Module
1.5.15 15 Scenario Module
1.5.16 16 Pcraster Transformation (pcrTransformation)
[Link] List of pcraster functions
1.5.17 17 WorkflowLooprunner
1.5.18 18 Mass-balances
1.5.19 19 Rating curves
1.5.20 20 Transformation Module (Improved schema)
[Link] Accumulation Transformations
[Link].1 AccumulationMeanInterval
[Link].2 AccumulationSum
[Link].3 AccumulationSumInterval
[Link].4 AccumulationSumOriginAtTimeZero
[Link] Adjust Transformations
[Link].1 AdjustQ
[Link].2 AdjustQMeanDailyDischarge
[Link].3 AdjustQUsingInstantaneousDischarge
[Link].4 AdjustStage
[Link].5 AdjustTide
[Link] Aggregation transformations
[Link].1 Aggregation Accumulative
[Link].2 Aggregation Instantaneous
[Link].3 Aggregation InstantaneousToMean
[Link].4 Aggregation MeanToMean
[Link] DisaggregationTransformations
[Link].1 Accumulative
[Link].2 Instantaneous
[Link].3 MeanToInstantaneous
[Link].4 meanToMean
[Link].5 weights
[Link] DischargeStage Transformations
[Link].1 DischargeStageMergedRatingCurves
[Link].2 DischargeStagePower
[Link].3 Table
[Link] Events Transformations
[Link].1 EventsDischargeVolume
[Link].2 EventsDuration
[Link].3 EventsMaximum
[Link].4 EventsMeanDischargeVolume
[Link].5 EventsNumberOfEvents
[Link] Filter Transformations
[Link].1 FilterLowPass
[Link] Interpolation Serial Transformations
[Link].1 Block
[Link].2 directionLinear
[Link].3 extrapolateExponential
[Link].4 Transformation - InterpolationSerial Linear
[Link] Interpolation Spatial Transformations
[Link].1 InterpolationBilinear
[Link].2 InterpolationSpatialAverage
[Link].3 InterpolationSpatialClosestDistance
[Link].4 InterpolationSpatialInverseDistance
[Link].5 InterpolationSpatialMax
[Link].6 InterpolationSpatialMin
[Link].7 InterpolationSpatialSum
[Link].8 InterpolationSpatialWeighted
[Link] Lookup transformations
[Link].1 Multidimensional
[Link].2 Simple
[Link] Merge Transformations
[Link].1 Simple Merge
[Link] Review transformations
[Link].1 Stage Review
[Link].2 TidalBalanceReview
[Link] StageDischarge transformations
[Link].1 StageDischargeMergedRatingCurves
[Link].2 StageDischargePower
[Link].3 StageDischarge table
[Link] Statistics Summary Transformations
[Link] Structure Transformations
[Link].1 crumpWeir
[Link].2 crumpWeirBackwater
[Link].3 flatVWeir
[Link].4 flatVWeirBackwater
[Link].5 StructurePumpFixedDischarge Transformation
[Link].6 StructurePumpHeadDischargeTable Transformation
[Link].7 StructurePumpSpeedDischargeTable Transformation
[Link].8 StructurePumpSpeedHeadDischargeTable Transformation
[Link] TimeShift
[Link].1 Constant
[Link] User Transformations
[Link].1 UserPeriodic Transformation
[Link].2 UserSimple Transformation
[Link] DayMonth Sample
[Link] PCA and Regression Transformation
[Link] Selection Transformations
[Link].1 Selection of independent lows
[Link].2 Selection of independent peaks
[Link].3 Selection of lows
[Link].4 Selection of maximum
[Link].5 Selection of minimum
[Link].6 Selection of peaks
1.5.21 21 Secondary Validation
[Link] Checks for counting reliable, doubtful, unreliable and missing values
[Link] FlagsComparisonCheck
[Link] SeriesComparisonCheck
1.5.22 22 forecastLengthEstimator
1.5.23 23 Decision Module
[Link] Barriers
1.5.24 24 ImportAmalgamate
1.6 06 Configuring WorkFlows
1.7 07 Display Configuration
1.7.1 01 Grid Display
[Link] Coupling ArcSDE and WFS
1.7.2 02 Longitudinal Display
1.7.3 03 What-If Scenario Display
1.7.4 04 Lookup Table Display
1.7.5 05 Correlation Display
1.7.6 06 System Monitor Display
1.7.7 07 Skill Score Display
1.7.8 08 Time Series Modifiers
1.7.9 09 State editor display
1.7.10 10 Interactive forecast display
1.7.11 11 Threshold Display
1.7.12 12 Task Run Dialog Display
1.7.13 13 Manual Forecast Display
[Link] Add Macro Button
1.7.14 14 ChartLayer
1.7.15 15 Schematic Status Display (formerly Scada Display)
1.7.16 16 Modifier display
1.8 08 Mapping Id's flags and units
1.8.1 01 ID Mapping
1.8.2 02 Unit Conversions
1.8.3 03 Flag Conversions
1.9 09 Module datasets and Module Parameters
1.9.1 01 Module Datasets
1.9.2 02 Module Parameters
1.10 10 Setting up an operational system
1.10.1 01 Root Configuration Files
[Link] [Link] file
1.10.2 02 Launching FEWS
1.10.3 03 Setting Up Scheduled Forecasts
1.10.4 04 Setting Up Event-Action Configuration
1.10.5 05 Setting up sending emails on events
1.10.6 06 Checklist for creating a live system from a stand-alone system
1.10.7 07 Setting up alerts for the Alarmmodule
1.11 11 Setting up a forecasting system
1.11.1 01 Requirements
1.11.2 02 Designing the Forecasting System
1.11.3 03 Creating a FEWS Application Directory
1.11.4 04 Static Configuration
1.12 12 Configuration Management Tool
1.12.1 01 Managing Configurations
1.12.2 02 Validation of a Configuration
1.12.3 03 Analysis of a Configuration
1.12.4 04 Automatic Configuration Update
1.13 13 Additional Modules
1.13.1 01 Flood Mapping Module
1.13.2 03 Automatic WorkflowRunner in SA mode
1.13.3 04 Bayesian Model Averaging (BMA)
[Link] BMA in FEWS
1.13.4 05 Historic Forecast Performance Tool (HFPT) Adapter
1.14 14 Tips and Tricks
1.15 15 External Modules connected to Delft-FEWS
1.15.1 Adapter Manuals
[Link] HEC-HMS model adapter
[Link] HEC-RAS Model Adapter
[Link] SYNHP Adapter
1.15.2 Developing a FEWS (Compliant) Adapter
[Link] Updating PI State XML file within Pre and Post Model Adapters
1.15.3 External model specific files
[Link] PDM State file
[Link] The ISIS .ini file
1.15.4 Delft3D-FEWS adapter configuration manual
[Link] 1. General
[Link] 2. Adapter configuration
[Link].1 01 Design philosophy of Delft3D model adapter
[Link].2 02 Adapter configuration - Delft3D model adapter in relation to FEWS
[Link].3 03 Adapter configuration - configuration workflow
[Link].4 04 Adapter configuration - XML configuration scheme
[Link].5 05 Adapter configuration - template files
[Link].6 06 Adapter configuration - naming conventions
[Link].7 07 Adapter configuration - state handling and communication files
[Link] 3. Example configuration
[Link] 4. Best practices
1.15.5 Models linked to Delft-Fews
[Link] Modflow
[Link] PCOverslag
[Link] RTC Tools
1.16 17 Launcher Configuration
1.16.1 Launcher XML
1.16.2 Security XML
1.17 18 FEWS data exchange interfaces
1.17.1 Fews JDBC server
1.17.2 Fews PI service
[Link] Using the Fews PI service from C
1.17.3 Fews Workflow Runner service
1.17.4 JDBC vs. FewsPiService
1.18 19 Parallel running of ensemble loops
1.19 Appendices
1.19.1 A Colours Available in DELFT-FEWS
1.19.2 B Enumerations
1.20 WhatIfScenarioEditor
Delft-FEWS Configuration Guide
Introduction
The DELFT-FEWS configuration guide provides the advanced user of DELFT-FEWS with the information required to set up and maintain a
configuration of DELFT-FEWS. The objective of the guide is to be used both as a reference manual during the development and maintenance of
an implementation of DELFT-FEWS, as well as to provide some of the background philosophy on how to go about setting up a forecasting
system. It is expected that the reader of this guide has a basic understanding of DELFT-FEWS and its structure.
To understand how to configure DELFT-FEWS, a good understanding of the structure of the configuration is required. The first part of this guide
therefore gives an introduction into the different parts of configuration. In the second part of the guide the concepts used in DELFT-FEWS for
handling data are explained.
Section 3 gives details on the configuration of system components such as display settings, while section 4 describes the various regional configuration components which relate to a specific regional FEWS system (e.g. monitoring locations). Section 5 provides documentation on the 'module instances' available in DELFT-FEWS, for example for interpolation of data or for running an external model such as ISIS (using the General Adapter Module), including how these can be configured to achieve the required functionality.
Section 6 explains how configured modules are linked into logical tasks through configuration of workflows. In section 7 the configuration of user
displays is discussed. Section 8 discusses mapping information from external data sources. Section 9 discusses handling of static module data.
In section 10 some elements of configuring DELFT-FEWS as an operational system are discussed.
In section 11 a brief introduction is given on how to set up a forecasting system. This is to give some idea of how to approach configuring a complex system such as DELFT-FEWS. A brief guide to the use of the configuration management module to support configuration is given in section 12. Section 13 details the additional functionality available through the additional modules of DELFT-FEWS.
Section 14, finally, gives some tips and tricks on configuring DELFT-FEWS.
Contents
01 Structure of a DELFT-FEWS Configuration
02 Data Handling in DELFT-FEWS
03 System Configuration
04 Regional Configuration
05 Configuring the available DELFT-FEWS modules
06 Configuring WorkFlows
07 Display Configuration
08 Mapping Id's flags and units
09 Module datasets and Module Parameters
10 Setting up an operational system
11 Setting up a forecasting system
12 Configuration management Tool
13 Additional Modules
14 Tips and Tricks
15 External Modules connected to Delft-FEWS
17 Launcher Configuration
18 FEWS data exchange interfaces
19 Parallel running of ensemble loops
Appendices
WhatIfScenarioEditor
Introduction
The configuration of DELFT-FEWS is defined in a set of XML files. In this section the different parts of the configuration are introduced. An
understanding of these different parts of the configuration is required before attempting configuration of a DELFT-FEWS system.
When dealing with the configuration, it is important to note that the configuration may be retrieved from the local file system or from the local
database.
In the former case the configuration is a set of directories, each containing different parts of the configuration. These directories are
all contained under the Config directory.
In the latter case the configuration is contained entirely within the local database in a set of tables, each table containing different
parts of the configuration. Each of the tables in the local database reflects one of the sub-directories in the file system.
When initiating DELFT-FEWS, it will first look for configuration stored on the local file system. If this is found, then the system will expect to use all
configuration from the file system. If the Config directory is not available the system will expect to use all configuration from the database. If
neither is found then an appropriate error message is issued and the system will stop.
The configuration stored in either the Config directory or the database is configuration that is common to all versions of an implementation of
DELFT-FEWS for a particular forecasting system. In the live system situation the contents of the database will be synchronised between all
operator clients and forecasting shell servers in the system, and is therefore expected to be identical in all parts of the system.
Besides this configuration that is synchronised, there is also a small set of XML files referred to as the root configuration files. These may be
unique to each operator client and/or forecasting shell server. This root configuration is required to identify, for example, whether the particular instance of DELFT-FEWS is operating in stand-alone mode or as an operator client, and, in the latter case, information such as the IP address of the Master Controller the operator client should log on to. These root configuration files are always used from the file system. They have no effect on the
hydrological configuration and are normally not changed during configuration of the forecasting system side of DELFT-FEWS.
Table 1 Overview of different configuration items contained either in the Config directory or in the database
Each entry lists the directory in the file system, the corresponding table in the database, and whether a single or multiple files per type may exist.
RegionConfigFiles / RegionConfigurations (Single): Definition of regional configuration, including all locations, parameters etc.
SystemConfigFiles / SystemConfigurations (Single): Definition of system configuration items, including the plug-ins available to the system, definition, icons etc.
ModuleConfigFiles / ModuleInstanceConfigs (Multiple): Definition of modules for handling data and running forecasting models.
ColdStateFiles / ColdStateFiles (Multiple): Cold states for modules. Zip file containing model specific data exported by GA, usually before running a model.
IdMapFiles / IdMaps (Multiple): Definition of mapping of ID's and parameters between external sources (e.g. telemetry, modules) and ID's and parameters defined in the DELFT-FEWS configuration.
UnitConversionFiles / UnitConversions (Multiple): Definition of unit conversions between external sources (e.g. telemetry, modules) and units used in DELFT-FEWS.
FlagConversionFiles / FlagConversions (Multiple): Definition of flag conversions between external sources (e.g. telemetry, modules) and flags used in DELFT-FEWS.
DisplayConfigFiles / DisplayConfigurations (Multiple): Definition of layout of user displays, including What-if scenarios, Grid Display etc.
ModuleDataSetFiles / ModuleInstanceDatasets (Multiple): Zipped files containing datasets for modules used by the forecasting system.
ReportTemplateFiles / ReportTemplates (Multiple): Definition of HTML template files used in creating HTML reports for use on the web server.
MapLayerFiles / MapLayerFiles (Single): Map layers (shape files) used in the main map display and spatial interpolation.
IconFiles / IconFiles (Single): Icons used in the main map display and button bar.
Table 2 Overview of different configuration items contained in the file system only
Root Configuration: several XML files describing some of the settings specific to the Operator Client used (e.g. client configuration, IP addresses). These files are contained in the root of the DELFT-FEWS configuration directory.
Configurations that are active and used as a default can be identified both in the file system and in the database.
On the file system a naming convention is introduced to identify which of the possible multiple versions are used as a default.
Other versions of the configuration will have a different version number; for these the <default> item is omitted.
Example:
The default version of the configuration settings for the FEWS Explorer could be:
A second version may exist. This should then not have the <default> item in the file name:
Explorer [Link]
If the configuration does not include the "default" item it will not be used. This configuration may be reverted to by adding the "default" flag - and
removing it from the other file.
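As an illustration of the naming convention (a sketch only; the version numbers and file names shown here are examples, not the actual files of a given configuration), a default and a second version of the Explorer settings could look like:
Explorer 1.00 default.xml - active default version
Explorer 1.01.xml - alternative version, not used until the default flag is moved to it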
In the database the default version for each configuration item is identified in an associated table. For each configuration item a default table is
available. This is identified by a table with the same name, prefixed by "Default". For example for the SystemConfigurations a table with the name
DefaultSystemConfigurations identifies which of the available versions in the former table is to be used as a default.
There are two types of configuration in DELFT-FEWS. In the first set, for each different schema type, only one default configuration file may be
used and the name of the configuration file is unique. For the second set of configuration, multiple configuration types may be available for a
specific schema. The names of these may be defined by the user. An XML file contained in the regional configuration element is then used to
register these XML files with a user specified name to the system, and identify the type of configuration. This file is referred to as a descriptor file.
Table 1 identifies for which type of configuration a single file per type is allowed and for which multiple instances per type of configuration
may exist.
parameterId
locationId/locationSetId
timeSeriesType
timeStep
relativeViewPeriod / relativeForecastPeriod
externalForecastMaxAge
externalForecastTimeCardinalTimeStep
readWriteMode
synchLevel
expiryTime
delay
multiplier
divider
incrementer
ensembleId
Introduction
One of the most important properties of DELFT-FEWS as a forecasting system is its ability to efficiently deal with large volumes of dynamic data.
Dynamic data covers mainly time series data in various formats (scalar - 0D, vector - 1D, grid - 2D, and polygon data - 2D). Dynamic data also
includes the management of model states produced by the system.
A thorough understanding of how DELFT-FEWS handles dynamic data is fundamental in the correct configuration of an operational system. For
each of the different types of dynamic data specific optimisations have been introduced.
To allow handling of time series data, the concept of a "Time Series Set" is introduced. A Time Series Set is used to retrieve and submit data to
the database. In this chapter the concept of the time series set is explained.
Time series are considered to be available from two sources. All time series sourced from external systems are considered as "External". All time
series produced by the forecasting system itself are considered as "Simulated".
Time series are considered to be of two categories in relation to time. Historical time series are continuous time series that describe a parameter at a location over a period of time. Forecast time series differ from historical time series in that, for each location and parameter, one forecast is independent of another forecast. A forecast is characterised by its start time and the period it covers. Generally, when a new forecast is available for a given location and parameter it will supersede any previous forecast for that location and parameter. Each forecast is therefore an independent entity.
Combining the source and the time category gives four types of time series:
+ External Historical
+ External Forecasting
+ Simulated Historical
+ Simulated Forecasting
There are significant differences in how each of these time series types is handled.
In an online system DELFT-FEWS will incrementally import observed data as it becomes available from external systems. This data should be
imported as an External Historical time series. When data marked as external historical is presented to the system with exactly the same values
and covering the same period as data for that location/parameter already available in the database then it will be ignored. Only new data is
imported and stored. If data for a given period is already available but is changed (manual edit or update), then the new values will be added to
the database. For each item of data added to the database, a time stamp is included to specify when the data was made available to the system.
When data of the external historical type is requested from the database, the most recently available data over that whole period is returned. If the
data for that period was imported piecewise, then the individual pieces will be merged prior to the data being returned. An example is given in
Figure 1 where data is imported sequentially. Each data imported/edited is indicated using a different line style. At the request for the complete
series (a) the most recent data available over the complete period is merged and returned. The data imported at 12:00 partially overlaps that
imported at 10:00. As the 12:00 data is the most recent, it will persist in the complete series. A manual edit may be done (or interpolation) to fill
the gap in the data. This will be returned in a subsequent request for the complete series. Although a complete series is returned, the data is
stored as it is imported, including a time stamp indicating when the import happened. If at a later stage the data as it was available directly preceding the manual edit is requested, then the additional data will not be included in the complete series.
Figure 1 Schematic representation of data imported as external historical
External forecasts are imported by DELFT-FEWS as these are made available by the external forecasting systems. Again each forecast is
imported and stored individually. External forecasts are referenced by the start time of that forecast. When retrieving an external forecast time
series from the database, the most recent available forecast, as indicated by the forecast start time will be returned. The most recently available
forecast is determined as the latest forecast with a start time earlier than or equal to the start of the forecast to be made using DELFT-FEWS (forecast T0). It is thus not possible to retrieve a specific older external forecast time series on request, as the latest available is always returned.
With possible exceptions for modules considering multiple forecasts (e.g. performance module), only one external forecast is returned. Different
external forecasts are not merged.
Simulated historical time series are similar to the external historical time series in that they are continuous in time. The difference is that the time
series are referenced through the forecast (model) run they have been produced by. As a consequence the time series can be retrieved either by
directly requesting it through opening the run and viewing, or if the run is approved.
Simulated historical time series are generally produced by model runs where a model initial state is used. Each time series has a history, i.e. the
state used as its initial condition. Each state again has a history, i.e. the model run that produced the state. This history is used by the database in
constructing a continuous time series.
Simulated forecast time series are again similar to external forecasting time series. Again the main difference is that they are referenced through
the forecast (model) run they have been produced by. As a consequence the time series can be retrieved either by directly requesting it through
opening the run and viewing, or if the run is approved. Simulated forecast time series are treated in the same way as the external forecast time
series in that the last approved forecast (referred to as the current forecast) is seen as a default. All other runs can be seen on request only. Note
that the last approved forecast which is shown by default may not be the last available forecast.
Figure 2 schematically shows how a sequence of runs producing simulated historical and simulated forecasting time series is stored. Each
simulated historical run uses the module state saved at the end of the previous run. It can be seen that these simulated historical traces are
treated as a continuous time series when requested later. For the forecasting time series, only the most recent (approved) time series is
displayed.
Figure 2 Schematic overview of handling simulated forecasting and simulated historical time series. Three subsequent forecasts are shown, and
the resulting complete time series returned when requested after 12:00. The historical time series is traced back using the state used to create the
link to a previous run. For the forecast time series the most recent forecast supersedes previous forecasts.
The time series type simulated historical should only be assigned to time series that have a relation to a previous time series
through a model state. In all other cases, the time series is independent, and should be allocated simulated forecasting as time
series type.
Time series sets form a large part of the configuration. Most modules have a standard structure, where the configuration starts with a request of
specific set of data from the database using one or more input time series sets, a number of functional items which describe how the data is
transformed, and one or more output time series sets which are used to store the data in the database under a unique combination of keys.
Figure 3 shows the elements of the Time Series Set complex type. A number of these elements are compulsory (solid borders), while other elements are optional (dashed borders). If any of the required elements is omitted, the primary validation of that configuration will fail.
Depending on the information required, each of the elements is used differently. Some elements are simple flags, indicating specific properties of the time series set when they are present. For others a string or value must be given in the configuration; this value or string is then assigned to an element with that name. Other elements may also contain properties. The example of the time series set below illustrates the different types of element of the time series set.
Optional items may nevertheless be required to fulfil the requirements of the module using the time series set. This will be indicated in this manual for those modules where appropriate.
Figure 3 Schema of the Time Series Set complex type
description
This is an optional description for the time series set. It is only used as a caption in configuration and is not stored with the time series.
moduleInstanceId/moduleInstanceSetId
The module instance Id is the ID of the module that has written the data in the time series set to the database. This ID is one of the primary keys
and is required to uniquely identify the data on retrieval.
In the time series set a single module instance Id may be referenced or multiple module instance Id's. The latter is done either by including a list of module instance Id's or by referencing a module instance set Id. This again resolves to a list of module instance Id's as defined in the [Link] configuration file.
One or more moduleInstanceId's may be defined, or a single moduleInstanceSetId. These cannot be mixed.
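For illustration, a fragment reading data written by either of two import modules could look as follows (the module instance Ids and the set Id are examples only, not part of a standard configuration):
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<moduleInstanceId>ImportTelemetryManual</moduleInstanceId>
Alternatively, a single set defined in the module instance sets configuration may be referenced instead:
<moduleInstanceSetId>ImportModules</moduleInstanceSetId>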
valueType
This specifies the dimension/data type of the time series. This element is an enumeration of the following types:
scalar
longitudinalprofile
grid
polygon
sample
parameterId
The parameterId describes the parameter of the data in the time series. This Id is a cross reference to the [Link] configuration file in the
regional configuration defining the parameters. The reference is not enforced through an enumeration in the XML schema. If a parameter not
included in the parameter definition is referred to, an error will be generated and an appropriate message returned.
locationId/locationSetId
The locationId is a reference to the location for which the data series is valid. Each individual data series may belong to one location only. In the
time series set a single location may be referenced or multiple locations may be referenced. The latter is done either by including a list of
locationId's or by referencing a locationSetId. This again resolves to a list of locationId's as defined in the [Link] configuration file.
One or more locationId's may be defined, or a single locationSetId. These cannot be mixed.
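For illustration (the location Ids and the location set Id are examples only), either a list of locations or a single location set may be given:
<locationId>H-2001</locationId>
<locationId>H-2002</locationId>
or
<locationSetId>UpstreamGauges</locationSetId>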
timeSeriesType
This specifies the type of time series (see discussion above). This is an enumeration of;
external historical
external forecasting
simulated historical
simulated forecasting
timeStep
This is the time step of the time series. The time step can be either equidistant or non-equidistant. The time step is defined through the attributes of the timeStep element:
Attributes;
unit (enumeration of: second, minute, hour, day, week, month, year, nonequidistant)
multiplier defines the number of units given above in a time step (not relevant for nonequidistant time steps).
divider same function as the multiplier, but defines fraction of units in time step.
timeZone defines the timeZone of the timeStep, this is only relevant for units of a day or larger.
For hourly time steps this may also be relevant in the case of half-hour offset time zones. Untested at the moment.
decade
A decade (10-day) time step can be configured using the daysOfMonth attribute:
<timeStep daysOfMonth="1 11 21"/>
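For illustration, an equidistant 15-minute time step and a non-equidistant time step would be defined as follows (a sketch of the two common forms):
<timeStep unit="minute" multiplier="15"/>
<timeStep unit="nonequidistant"/>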
relativeViewPeriod / relativeForecastPeriod
The relative view period defines the span of time for which data is to be retrieved. This span of time is referenced to the start time of the forecast
run (T0) the time series set is used in. If the time series set is not used in a forecast run (e.g. in the displays), then the reference is to the
DELFT-FEWS system time.
Parameters
unit identifies the time unit with which the time span is defined (enumeration of second, minute, hour, day, week).
start identifies the start time of the time span with reference to the T0 (in multiples of the unit defined).
end identifies the end time of the time span with reference to the T0 (in multiples of the unit defined).
startOverrulable Boolean flag to indicate if the start time given may be overruled by a user selection.
endOverrulable Boolean flag to indicate if the end time given may be overruled by a user selection.
For equidistant time series, all values at the time interval within the span defined will be returned by the database following a request. If no values
are found, then missing values will be returned at the expected time step. For non-equidistant time series, all values found within the time span
are returned. If none are found, then no values are returned.
The relativeForecastPeriod is relative to the T0 of the current/selected forecast or to the external forecast time of the latest external
forecast. Use relativeViewPeriod instead for simulated series created by the current task run.
Of the parameters in the relative view period, only the end and unit parameters are compulsory. Generally, however, the start
time will also be required. It is omitted only if the start time is determined through selection of a module state.
If the start time is overruled through user selection, it may only be set earlier than the start time defined. The same holds for the end time, but then only later. The start and end time thus define the minimum time span of data.
Figure 4 Schematic representation of the relative view period with reference to the T0. The start and end time defined may be overruled if the
appropriate parameters are set to true.
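For illustration, a view period running from 96 hours before T0 to 48 hours after T0, where the user may extend the start but not the end, could be defined as (values are examples only):
<relativeViewPeriod unit="hour" start="-96" end="48" startOverrulable="true" endOverrulable="false"/>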
externalForecastMaxAge
When the externalForecastMaxAge is not configured there is no maximum age for a forecast series to be used, so the returned external forecast can be very old when no recent forecast is available. ALL external forecasts after the T0 are ALWAYS ignored. The age of an external forecast is defined as the time span between the external forecast time and T0.
Attributes;
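The attribute list has not been preserved here; by analogy with the other time span elements in this chapter it is assumed to take a unit and a multiplier. A sketch under that assumption, limiting the accepted forecast age to two days:
<externalForecastMaxAge unit="day" multiplier="2"/>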
externalForecastTimeCardinalTimeStep
When no external forecast exists in the data store younger than the specified age, an external forecast is returned with a minimum age that
applies to the specified cardinal time step.
Attributes;
multiplier defines the number of units given above (not relevant for nonequidistant time steps).
timeZone defines the timeZone, this is only relevant for units of a day or larger.
readWriteMode
The readWriteMode definition is mainly used in the definition of filters to be applied in the time series display when used in edit mode. This
element is an enumeration of;
add originals implies the data is new and is added to the database.
editing only visible to current task runs implies any changes made remain invisible to other tasks (used in What-If scenarios)
editing visible to all future task runs implies any changes made will be visible to other tasks
read originals only implies all edited, corrected or interpolated data should be ignored.
The only enumeration that can be used in time series sets in FEWS modules is:
read complete forecast reads the complete forecast series from the database. If this enumeration element is used, no relativeViewPeriod has to be configured.
It is a good convention to set this property to read only in all input blocks.
synchLevel
This is an integer value determining how the data is stored and synchronised through the distributed system. There is no enumeration as the
synchLevel is used in the configuration of synchronisation, where optimisations can be defined for each synchLevel. The convention used is
explained in the Live System configuration section.
expiryTime
This element allows the time series created to have a different expiry time to the default expiry time. This means it may be removed earlier, or
later, by the rolling barrel function. For temporary series the value may be set to a very brief period. For other time series (e.g. Astronomical input
series), the value should be set sufficiently high.
Attributes;
delay
This element allows the time series retrieved to be lagged (positive or negative). The time stamps of the series will then be shifted by the period
specified on retrieval. This is used only when retrieving time series from the database, and not inversely when submitting time series to the
database.
Attributes;
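A sketch only, assuming the delay is defined with a unit and a multiplier like the other time span elements in this chapter; this would shift the retrieved series by six hours:
<delay unit="hour" multiplier="6"/>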
multiplier
This element allows the time series retrieved to be multiplied by the factor given. This is used only when retrieving time series from the database,
and not inversely when submitting time series to the database.
divider
This element allows the time series retrieved to be divided by the factor given. This is used only when retrieving time series from the database,
and not inversely when submitting time series to the database.
incrementer
This element allows the time series retrieved to be incremented by the factor given. This is used only when retrieving time series from the
database, and not inversely when submitting time series to the database.
ensembleId
A time series set may be defined to retrieve all members of an ensemble at once (for example in evaluation of ensemble statistics). This is done
by defining the optional ensembleId. The ensembleId should also be defined when writing new ensemble members. (e.g. on importing ensembles
in the import module).
Example:
<timeSeriesSet>
<description>Example time series set</description>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>[Link]</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24" endOverrulable="true"/>
<readWriteMode>read only</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="100"/>
</timeSeriesSet>
When dealing with ensembles, the ensembleId need only be defined if the workflow activity these are used in must retrieve the
complete ensemble, or if members are to be written under a new ensembleId. In other cases the ensembleId need only be
defined in the workflow definition (see Workflows chapter). For the TimeSeriesSets defined in modules there is then no
difference between running in ensemble mode and running normally.
03 System Configuration
Introduction
The system configuration items form a primary part of the configuration of DELFT-FEWS as a system. It includes definition of the functional
elements DELFT-FEWS has available (both GUI plug-ins and Module plug-ins). The layout of the main GUI (the FEWS Explorer) and the Time
Series display are also defined.
For each of the configuration items listed above only one configuration is active (or default) at any one time. Each item is defined in an XML file
with a unique name.
Many of the configuration items required will include references to strings. To avoid duplication, a tag can be defined in the
[Link] file in the root configuration and the tag name used in the XML file.
To use a tag, add this in the [Link] file.
To reference a tag include the string $TAG_NAME$, where TAG_NAME is the tag to be used.
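As an illustration (the tag name, path and element name are examples only, not part of a standard configuration), a tag defined in the root properties file, e.g.
MAP_DIR=d:/fews/mapfiles
can then be referenced from any configuration XML file as:
<shapeFile>$MAP_DIR$/rivers.shp</shapeFile>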
Contents
01 FEWS Explorer
02 Time Series Display Configuration
03 Display Groups
04 Location Icons
05 Module Descriptors
06 Display Descriptors
07 Permissions
01 FEWS Explorer
What [Link]
Required yes
Introduction
The layout of the FEWS Explorer is configured in an XML file in the SystemConfigurations section. When available on the file system, the name of
the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 5 shows the main elements of the Explorer configuration. These are divided into a number of sections, each of which will be discussed individually (these are indicated with a + sign).
System Information
This section contains some general information of the DELFT-FEWS configuration.
description
An optional description of the configuration element. This is for reference purposes only and will not be used elsewhere.
systemCaption
The caption that will appear in the title bar of the FEWS Explorer window.
systemHelpFile
Reference to the file name and location of the help file
Panel sizes
This optional section allows the panel sizes to be pre-set by the configuration as a percentage of the window size.
Explorer Options
Deprecated section
Figure 7 Elements in the Explorer Options section of the Explorer configuration
The explorer options define whether a number of items are visible on the main display when started. Each of these may either have the value "true" or "false". Figure 7 lists all the items; the names are self-explanatory.
Map
In this section the background maps can be defined. The configuration of this section is described in the Grid Display (definition of a background
map).
Zoom extents
The zoom extents section is used to define the pre-configured zoom levels that can be selected from the main display.
zoomExtent
Main element of each zoomExtent defined. Note that multiple zoom extents may exist, with the elements below to be defined for each.
Attributes:
title name of the zoom extent in the list of extents.
Coordinates of the corners of the zoom extent (specified in the geoDatum selected below)
mnemonic
Explorer Tasks
The explorer tasks define the tasks that can be carried out from the explorer. These tasks are for example the initiation of plug-ins such as the
time series display.
explorerTask
Main element of each explorer Task. Note that multiple tasks may exist, with the elements below to be defined for each.
Attributes;
iconFile
Reference to an icon file to be used in the toolbar. If left empty the name will be used to identify the task in the toolbar
Mnemonic
taskExe
Command line for executable to run on initiating the task (the task may be either a call to an executable or to a Java class)
taskClass
Java class to run on initiating the task (the task may be either a call to an executable or to a Java class)
arguments
workDir
Description
toolbarTask
menubarTask
allowMultipleInstances
Permission
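A sketch of an explorerTask entry based on the elements listed above (the task name, icon file and Java class shown are illustrative placeholders, and the exact structure should be checked against the Explorer schema):
<explorerTask name="Time Series Display">
<iconFile>timeseries.gif</iconFile>
<taskClass>nl.wldelft.fews.gui.plugin.timeseries.TimeSeriesDialog</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>
<allowMultipleInstances>false</allowMultipleInstances>
</explorerTask>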
statusBar
The statusBar settings define the format of the time string displayed in the status bar.
dateTimeFormat
String defining the time format for time displayed in the status bar. For example HH:mm:ss will display time as [Link].
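For example, a statusBar definition showing the time as hours, minutes and seconds could read (a sketch only):
<statusBar>
<dateTimeFormat>HH:mm:ss</dateTimeFormat>
</statusBar>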
restSettings
This section includes additional settings for the FEWS Explorer.
Figure 11 Elements in the Rest Settings section of the Explorer configuration
defaultSelectFilterId
geoDatum
Default definition of the geographic datum. This is an enumeration of geographic datums supported. As further geographic datums are supported,
the list will be expanded;
dateTimeFormat
Format definition for time strings in displays (e.g. yyyy-MM-dd HH:mm:ss is resolved to 2004-07-03 [Link])
cardinalTimeStep
Default cardinal time step for the system. The actual system time will be rounded down to the closest cardinal time step.
Attributes;
timeZone
Defines the default time zone for the system. The default time zone is used for all times in user displays, unless locally overruled. This includes
time series displays and the system time. The time zone used by the system should conform to a time zone that does not consider summer time.
If this optional entry is not included then the timeZone is considered to be UTC+0:00 (or GMT). The time zone can be defined in two ways:
timeZoneOffset: The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or
fraction of hours) offset. (e.g. +01:00)
timeZoneName: Enumeration of supported time zones. See appendix B for list of supported time zones.
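A sketch only, assuming the offset form is used for a system running at UTC+1 without summer time:
<timeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</timeZone>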
logPanelConfig
Configuration of the log panel at bottom of the main display. This can be configured to show all messages (DEBUG level and up), or filtered from
a defined level. Two types of log message can be displayed; those generated by the DEBUG logger and those by the EVENT logger. In normal
use the latter is defined to show messages from a level of INFO or above. The former is not normally used except when configuring a stand-alone system, when additional information may be useful. Different settings are available for stand-alone clients and for operator clients.
Figure 12 Elements in the Log Panel section of the Explorer configuration
clientFilter
clientId
Definition of log filters for Operator client or for Stand alone system (both may be included).
logFilter
eventType
Level
Level of log message below which messages are not displayed. Enumeration of DEBUG, INFO, WARN, ERROR, FATAL ERROR
rollingBarrelOptions
This allows you to set the rolling barrel options for the client. Available options for the type are:
not_automatic: The Rolling Barrel will only run if you launch it using the F12 menu
startup_only: The Rolling Barrel will only run when starting up the client
shutdown_only: The Rolling Barrel will only run at shutdown of the client
interval: The Rolling Barrel will run at the specified interval
Example:
<type>interval</type>
<interval unit="hour" multiplier="1"/>
Schema:
parameterListConfig
This allows you to set the default sorting option for the parameters in the Explorer. Available options are:
default: Use the default sorting from the configuration file [Link].
name: Sort by parameter name (ascending).
Example:
<sortOption>name</sortOption>
Schema:
notification
The system can notify the completion of a manually dispatched task run when the notification property is enabled (i.e. enabled=TRUE).
Example:
<exportTimeSeries visible="true"/>
By default the exported time series will not do any ID mapping on exporting. Pre-defined ID mapping configuration files can be configured in the
interactiveExportFormats element. In the example below the export type iBever will always use the ID Mapping configuration file
IdExportKwaliteit. For each export type a default ID mapping file can be configured.
csv
dutch-csv
gin-xml
Hymos 4.03
Hymos 4.5
iBever
Menyanthes
pi-xml
UM-Aquo
Some export formats (like UM Aquo) explicitly require an idMap before they are enabled in the file export menu.
Example:
<interactiveExportFormat>
<name>iBever Export</name>
<exportType>iBever</exportType>
<IdMapId>IdExportKwaliteit</IdMapId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.03</name>
<exportType>Hymos 4.03</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.50</name>
<exportType>Hymos 4.5</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>UM aquo</name>
<exportType>UM-Aquo</exportType>
<IdMapId>IdExportUMAQUO</IdMapId>
</interactiveExportFormat>
All geographic locations used in DELFT-FEWS are resolved to WGS 1984. If another coordinate system is to be used, then the
transformation between this and WGS 1984 will need to be added. There is a class definition for these transformations. Once
added, the enumeration used here can be extended.
Care needs to be taken when working with time zones. Mixing time zones can lead to great confusion. It is wise to define the
time zone as an offset with respect to UTC and use this throughout. In configuring import data, a local time zone can be used. It
is advisable to always set time zones when required.
02 Time Series Display Configuration
Required no
Description Configuration file for the time series display (line styles etc)
Introduction
description
General Display Configuration
convertDatum
maximumInterpolationGap
valueEditorPermission
lableEditorPermission
commentEditorPermission
Default view period
Time Markers Display Configuration
timeMarkerDisplayOptions
color
lineStyle
label
Parameters display configuration
Default Options
ParameterDisplayOptions
PreferredColor
lineStyle
markerStyle
markerSize
Precision
min
max
maxTimeSpan
Module instance mapping
moduleInstanceMapping
description
Statistical functionality
Calendar aggregation (with associated time step)
Relative aggregation (with associated time span)
Duration exceedence
Duration non-exceedence
Moving average (with associated time span)
Central moving average (with associated time span)
Accumulation interval (with associated time span or time step)
Accumulation aggregation (with associated time span or time step)
Frequency distribution (with associated samples)
Gaussian curve (with associated samples)
Show peaks above value
Show lows below value
Scatterplot
Boxplot
Schemas for the slider
movingAccumulationTimeSpan
timeStep
samples
Descriptive Function Group
Info functions (if this type of function is specified, the display provides a hint to select a column in the table in order to get more
descriptive information):
Time series information available
Descriptive statistical functions
Duration curve
Introduction
The layout of the time series display is configured in an XML file in the System Configuration folder. When available on the file system, the name
of the XML file is for example:
TimeSeriesDisplayConfig Fixed file name for the time series display settings
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 13 shows the main elements of the TimeSeriesDisplay configuration. These are divided into a number of sections, each of which will be
discussed individually (these are indicated with a + sign).
Figure 13 Elements in the TimeSeriesDisplay configuration
description
An optional description of the configuration. This is for reference purposes only and will not be used elsewhere.
convertDatum
This optional Boolean setting can be used to start the time series display showing levels against the global reference level. The default is that
levels are displayed against the local reference level. The difference between local and global reference is defined in the locations definition (see
regional settings).
maximumInterpolationGap
Maximum gap size which can be filled by the linear or block filler in the data editor
valueEditorPermission
Optional permission which must be assigned to a user to edit values in the data editor
lableEditorPermission
Optional permission which must be assigned to a user to edit labels in the data editor
commentEditorPermission
Optional permission which must be assigned to a user to edit comments in the data editor
Parameters
unit identifies the time unit with which the time span is defined (enumeration of second, minute, hour, day, week).
start identifies the start time of the time span with reference to the T0 (in multiples of the unit defined).
end identifies the end time of the time span with reference to the T0 (in multiples of the unit defined).
systemTime
displayTime
timeZero
threshold
forecastConfidence1
forecastConfidence2
forecastConfidence3
Within longitudinal profile displays, the markers can be set to display the minimum or maximum values. A variety of river bed levels can be
included in a display if these are specified in the [Link] file.
leftBankLevel
leftFloodPlainLevel
leftMainChannelLevel
longitudinalProfileMaximum
longitudinalProfileMinimum
rightBankLevel
rightFloodPlainLevel
rightMainChannelLevel
To visualize model layer elevations when drawing a cross section in a spatial plot, one should use: bottomLayerLevel and topLayerLevel. This applies only for parameters with unit in meters.
timeMarkerDisplayOptions
Attributes;
color
Colour of the time series marker line (see enumeration of colours in appendix A).
lineStyle
Line style of time series marker line. Enumeration of "solid", "none", "bar", "dashdot", "dashed", "dotted", "solid;thick", "dashdot;thick", "dashed;thick", "dotted;thick", "area", "constant".
The "constant" label should preferably only be applied if the time series holds only one value.
label
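A sketch of a marker definition for the forecast start time, using the colour, line style and label elements described above (how the marker is identified, here by an id attribute, is an assumption to be checked against the schema):
<timeMarkerDisplayOptions id="timeZero">
<color>red</color>
<lineStyle>dashed</lineStyle>
<label>T0</label>
</timeMarkerDisplayOptions>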
Figure showing the options in the ParameterDisplayConfig section of the TimeSeriesDisplay configuration
Default Options
The Default Options can be configured (with the same sequence as the single parameter display options) to set the default appearance. This default is overruled by an individual ParameterDisplayOptions configuration for a parameter.
Figure showing the options in the Default Options section of the TimeSeriesDisplay configuration
ParameterDisplayOptions
PreferredColor
The preferred colour for the line and markers. This colour will only be used if it is not yet available in the current graph. If it is, then the next colour
in the template order will be selected.
lineStyle
Line style of time series line. Enumeration of "solid", "none", "bar", "dashdot", "dashed", "dotted".
markerStyle
Marker style for markers plotted on the vertices of the line. Enumeration of "none", "+", "x", "circle", "square", "rectangle", "diamond", "triangleup" ,
"triangledown".
markerSize
Precision
Decimal precision with which values are given in the table.
min
Minimum of the y-scale to be used as a default for all displays of this parameter. The minimum is used, unless a value in the time series is lower
than this minimum, in which case that is used. The minimum defined here can also be overruled in individual template definition in the
DisplayGroups (see below).
max
Maximum of the y-scale to be used as a default for all displays of this parameter. This maximum is used, unless a value in the time series is
higher than this maximum, in which case that is used. The maximum defined here can also be overruled in individual template definition in the
DisplayGroups (see below).
maxTimeSpan
Optional field, only intended to be used for viewing non-equidistant series. Specifying this value will prevent the drawing of line segments in the
TimeSeriesDisplay between adjacent time/value points for which the time difference exceeds the specified maximum time span.
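A sketch of parameter display options for a water level parameter (the parameter id, colour and scale values are examples only, and the exact way the parameter is referenced should be checked against the schema):
<parameterDisplayOptions id="H.obs">
<preferredColor>blue</preferredColor>
<lineStyle>solid</lineStyle>
<markerStyle>none</markerStyle>
<precision>2</precision>
<min>0</min>
<max>10</max>
</parameterDisplayOptions>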
moduleInstanceMapping
Attributes;
id ModuleInstanceId to be replaced
description
Statistical functionality
Time series statistical functions are statistical functions that use one equidistant, scalar time series to produce another one using a statistical
function. When defined, a list of statistical functions will be displayed in the Time Series Display. If the user selects a function then for each
(equidistant, scalar) time series in the display another one is generated and displayed with the calculated results. Calculated time series are
displayed in the same graph as the time series they are based on.
Some statistical functions require an additional accumulation time span, time step or samples which the user can select using a slider. As soon as
the user selects a different value from the slider the calculation is launched with the new value.
The Statistical functions group defines dedicated graphing options shown in the combo box in the toolbar:
Calendar aggregation
Creates an aggregation of a time series array according to the selected time step.
Attributes:
function: calendarAggregation
timeStep: Time steps that the user selects from using the slider.
Relative aggregation
Creates an aggregation of a time series array. A relative time step is calculated by the selected time span and the start time of the period from the
time series array.
Attributes:
function: relativeAggregation
movingAccumulationTimeSpan: Time spans that the user selects from using the slider.
Duration exceedence
Sorts the values in a time series array and its flags in descending order.
Attributes:
function: durationExceedence
ignoreMissings: when true, missing values are ignored and each duration will be calculated from the available values within the current
time window.
When false, missing values are added to the end of the sorted series.
Duration non-exceedence
Sorts the values in a time series array and its flags in ascending order.
Attributes:
function: durationNonExceedence
ignoreMissings: when true, missing values are ignored and each duration will be calculated from the available values within the current
time window.
When false, missing values are added to the end of the sorted series.
Moving average
A moving average calculates the mean value of all values within the selected time window.
Attributes:
function: movingAverage
ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
movingAccumulationTimeSpan: Time spans that the user selects from using the slider.
The moving average function only works for true equidistant data (i.e. no daysOfMonths etc.)
The difference between moving average and central moving average is that the central moving average uses values before and after the current
value to calculate the average. Moving average only uses values in the past.
Central moving average
A central moving average calculates the mean value of the time window of which the current value is in the middle. It is the same as the moving
average, but shifted to the past for half the time window.
Attributes:
function: centralMovingAverage
ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
movingAccumulationTimeSpan: Time spans that the user selects from using the slider.
The central moving average function only works for true equidistant data (i.e. no daysOfMonths etc.)
The difference between moving average and central moving average is that the central moving average uses values before and after the current
value to calculate the average. Moving average only uses values in the past.
Accumulation interval
The accumulation interval function creates cumulative series for several intervals. For the first output time step in each interval the output value
equals the input value for that time step. For a given other output time step the output value is equal to the sum of the input value for that time
step and the previous output value. The intervals are defined by the selected time span or the selected time step. If a time span is selected, then
the function uses the time span as the size for the intervals and the first interval starts at the start of the period that is visible in the time series
display. If a time step is selected, then the function uses the selected time step to create intervals. Each period between two consecutive time
steps in the selected time step is an interval.
Attributes:
function: accumulationInterval
movingAccumulationTimeSpan: Time spans that the user can select using the slider.
or
timeStep: Time steps that the user can select using the slider.
Accumulation aggregation
The accumulation aggregation function sums the values for all time steps within the selected time window range. The time window range is defined by the
associated time span or time step.
Attributes:
function: accumulationAggregation
movingAccumulationTimeSpan: Time spans that the user can select using the slider.
or
timeStep: Time steps that the user can select using the slider.
Frequency distribution
The frequencies of the available values are counted and are plotted within a number of bins to create a frequency distribution. The number of bins
can be selected using the slider. The data range that is covered by the bins can be changed as follows. Clicking the "Set boundaries" button
brings up the "Set boundaries" dialog. In this dialog the lowest bin boundary and the highest bin boundary can be changed. The space between
these two boundaries is divided regularly into the selected number of bins. Initially the boundaries are in automatic mode. In automatic mode for
each time series the minimum data value and the maximum data value are used as boundaries. When the user selects new boundaries manually,
then the new boundaries will be used instead, i.e. manual mode. In manual mode the boundaries are fixed and the same boundaries are used for
all time series, until they are changed again. This makes comparisons between different time series possible. When the user clicks the
"Automatic" button, then the boundaries will be in automatic mode again.
In manual mode the selected boundaries are remembered. This means that when the user closes and re-opens the time series display or starts
working in another separate time series display, then in manual mode the previously selected boundaries will still be used for new frequency
distributions. The mode will also be the same for all time series displays.
Attributes:
function: frequencyDistribution
samples: The number of bins that the user can select using the slider.
Gaussian curve
The mean value and standard deviation are calculated for the time series, from which the normal distribution function is calculated. The selected sample determines how many samples the normal distribution function is divided into.
Attributes:
function: gaussianCurve
ignoreMissings: when true, missing values are ignored and each average will be calculated from the available values within the current
time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
samples: Definition of samples sizes that the user selects from using the slider.
Note: The displayed diagram is no longer a graph against time and therefore uses a different panel for displaying the graph. The associated table panel is currently not working for this type of graph and therefore the table toggle button will be disabled.
Show peaks above value
A scatterplot is made where the x-axis shows the duration of a 'peak' (=values within this peak-area are all above the given reference level), the
y-axis shows the normalized difference between the parameter value and the reference level. The reference level can be altered by entering a
value into the input field associated with this statistical function. After clicking 'Apply' the result time series array is returned.
If no reference level is entered, then the 'peak' areas are determined according to the minimum available value of the input time series array.
Attributes:
function: showPeaksAbove
Show lows below value
A scatterplot is made where the x-axis shows the duration of a 'low' (=values within this low-area are all beneath the given reference level), the
y-axis shows the normalized difference between the parameter value and the reference level. The reference level can be altered by entering a
value into the input field associated with this statistical function. After clicking 'Apply' the result time series array is returned.
If no reference level is entered, then the 'low' areas are determined according to the maximum available value of the input time series array.
Attributes:
function: showLowsBelow
Scatterplot
The data is displayed as a collection of points, each having the value of the timeseries determining the position on the horizontal axis and the
value of the other timeseries (one or more) determining the position on the vertical axis.
The timeseries used for the horizontal- and vertical axis can be changed by the user by using the 'Series selection' dialog, which is opened by
clicking on the 'Edit' button.
Attributes:
function: scatterPlot
Note: The displayed diagram is no longer a graph against time and therefore uses a different panel for displaying the graph. The associated table panel is currently not working for this type of graph and therefore the table toggle button will be disabled.
Boxplot
The data is graphically displayed by a box-and-whisker diagram. The following five-number summary is displayed: the smallest observation (sample minimum), lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation (sample maximum). An additional dot is plotted to represent the mean of the data in addition to the median.
Attributes:
function: boxPlot
Note: The displayed diagram is no longer a graph against time and therefore uses a different panel for displaying the graph. The associated table panel is currently not working for this type of graph and therefore the table toggle button will be disabled.
movingAccumulationTimeSpan
Defines the time spans that the user can select using the slider for this function in the TimeSeriesDisplay. MovingAccumulationTimeSpan can only
be used for functions of the following type: relativeAggregation, movingAverage, centralMovingAverage, accumulationInterval,
accumulationAggregation.
timeStep
Defines the time steps that the user can select using the slider for this function in the TimeSeriesDisplay. TimeStep can only be used for functions
of the following type: calendarAggregation, accumulationInterval, accumulationAggregation.
samples
Defines the sample sizes that the user can select using the slider for this function in the TimeSeriesDisplay. Samples can only be used for
functions of the following type: frequencyDistribution, gaussianCurve.
Descriptive Function Group
The descriptiveFunctionGroup defines the contents of the descriptive table. Several sub-tables can be configured (see the example).
Info functions (if this type of function is specified, the display provides a hint to select a column in the table in order to
get more descriptive information):
startTime
endTime
The descriptive statistics functions describe either the distribution of the data (e.g. mean, min, max) or the data itself (e.g. info, start time).
All descriptive statistical functions produce a single value for a time series.
The descriptive functions results are displayed in group boxes that are named according to the group names that have been defined in the
configuration file.
Attributes:
• function: Can be one of the functions below.
Information functions:
ignoreMissings: when true, missing values are ignored and each value will be calculated from the available values within the current time window.
When false, calculated values will be set to missing if one or more values within the current time window are missing.
Duration curve
A duration curve illustrates the relationship between parameter values and their duration. When selected, the current graphs are replaced with
duration curves.
Attributes:
03 Display Groups
What [Link]
Required no
Introduction
displayGroups
plot
displayGroup
subplot
line
display
SubPlotArea
Introduction
A list of pre-configured displays can be configured in the Display groups. When available on the file system, the name of the XML file is for
example: DisplayGroups 1.00 [Link]
The pre-configured displays are organised in a tree view in the time series display (see example in Figure 1). Each pre-configured display is
identified by its name, and may include one or more subplots, each with one or more time series lines.
Figure 1 Example of time series display, showing two sub-plots and tree-view of pre-configured displays
Another option is to plot a longitudinal profile in the time series display (see figure 2). The main difference with the normal time series plot is that
the river chainage is plotted on the X-axis. With the control toolbar a specific time step can be selected.
The display groups are configured by first listing the names of the filters to be shown in the display (for example "Rain gauges", "Gauges" and
"Fractions" in figure 3 below) under the displayGroup descriptor. The names of the subplots can then be added (e.g. "MacRitchie" and
"Woodleigh" below). Each of the subplots is assigned a plotId which links to the definitions of the plots and the time series set to be used. For
example in the Fractions displayGroup a stackPlot is defined with a max and min (this file is attached as an example). Please note that the
colours, line style, precision, etc. are defined in the TimeSeriesDisplayConfig.
Figure 3 Example of a configured displayGroup file
displayGroups
Root element for each displayGroup. A display group forms one of the main nodes in the tree view and may contain multiple displays. Multiple
display groups may be defined.
Attributes:
plot
Attributes:
displayGroup
Defines the groups of plots to be viewed (i.e. the branches of the shortcuts in the display)
Attributes:
subplot
Root element for each subplot. Multiple sub-plots may be defined per display.
Attributes:
description
axisScaleUnit
lowerMarginPercentage:
upperMarginPercentage:
inverted
plotWeight
thresholdAxisScaling
forecastConfidenceTimeSpan: TimeSpanComplexType
line
area: SubPlotAreaComplexType. Displays the extent of multiple time series as a single area
color: Overrides colours specified in the TimeSeriesDisplayConfig
lineStyle: Line style of the time series marker line. Enumeration of "solid", "none", "bar", "dashdot", "dashed", "dotted".
timeSeriesSet
inverted
This tag can be used to invert the y-axis of a plot. A screenshot of an inverted graph is shown below.
line
It is possible to display a set of time series with two different parameter types in one plot. One parameter will be displayed on the left axis, the
other will be displayed on the right axis.
Below is an example of how to configure a time series that should be displayed on the right y-axis.
<plotWeight>2</plotWeight>
<line>
<color>blue</color>
<axis>right</axis>
<timeSeriesSet>
<moduleInstanceSetId>MAP</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>MAP</parameterId>
<locationId>LNDW1XU</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</line>
<line>
<color>dark orange</color>
<axis>right</axis>
<timeSeriesSet>
<moduleInstanceSetId>MAP</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>MAP</parameterId>
<locationId>LNDW1XM</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</line>
By setting the axis to right, this time series will be displayed on the right axis instead of the (default) left axis.
Stage/discharge plots
When a discharge is displayed, it is possible to show the stage on the right axis.
The right axis is then not a linear axis but the ticks on the right axis are calculated from the discharge ticks on the left axis.
It is also possible to display the stage and show the discharge on the right axis. The example below shows a display which plots several
discharge time series.
The left axis is a linear axis with ticks for the discharge. The right axis is a non-linear axis.
The ticks on the right axis are calculated from the value of the discharge on the left axis by using a rating curve.
<plotWeight>5</plotWeight>
<line>
<axis>left</axis>
<ratingAxis>
<parameterGroupId>Level</parameterGroupId>
<transformationType>dischargeStage</transformationType>
<ratingCurve>
<locationId>exampleId</locationId>
</ratingCurve>
</ratingAxis>
<timeSeriesSet>
<moduleInstanceId>STAGEQ_LEDC2_LEDC2R_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>QIN</parameterId>
<locationId>LEDC2R</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>read only</readWriteMode>
<ensembleId>QPF</ensembleId>
</timeSeriesSet>
</line>
display
Definition of a pre-configured display. Each display may contain multiple sub-plots. Multiple displays may be defined per display group.
Figure 4 Elements in the Display section of the DisplayGroups configuration
Attributes;
SubPlotArea
Attributes:
Making stacked graphs
Since the 2007/02 release the functionality of the SubPlotArea complex type has been extended to include stack plots. The only
thing needed to implement this is to add a stackPlot="true" attribute to a subplot element. Attached to this page is an example of
this functionality.
If stackPlot is true, the time series of this subplot are plotted as stacked areas, except for the time series that are specified inside
the (optional) element <area>. Area series are always plotted as a so-called 'difference area'.
The stackPlot attribute is intended to overrule the default series paint (i.e. line or bar).
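As an illustration, a minimal sketch of a stacked subplot is given below; the colours are arbitrary, the timeSeriesSet contents are elided, and the exact schema should be verified against the DisplayGroups XSD.
<subplot stackPlot="true">
  <line>
    <color>blue</color>
    <timeSeriesSet>
      ...
    </timeSeriesSet>
  </line>
  <line>
    <color>green</color>
    <timeSeriesSet>
      ...
    </timeSeriesSet>
  </line>
</subplot>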
Display groups may be defined while DELFT-FEWS is running and reloaded by re-opening the time series dialogue. If a mistake
is made, then the shortcuts item to open the tree view will not appear and an appropriate message will be generated. After
resolving the mistake the item will again become available on re-loading the display.
04 Location Icons
What [Link]
Required no
General
Configuration of location icons can be used to help identify the different types of locations on the map display. This is an optional configuration
item. If it is not available then the default location icon will be used for all locations. When available on the file system, the name of the XML file is
for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
rootDir
This is the directory where the icons referred to are stored. By convention this directory is the <REGION>\Icons directory. The directory can be
given relative to the <REGION> directory. If the convention is followed then only "Icons" needs to be entered.
locationIcon
description
Description of the group of locations for which an icon is defined (for reference in the configuration only).
iconID
ID of the icon to be used in the display for this group of locations. This id is the same as the name of the icon file, without the ".gif" file extension.
locationId/locationSetId
The locationId is a reference to the location for which the icon is used. Either one or more locationIds may be defined, or a single locationSetId.
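For illustration, a minimal location icons definition might look roughly as follows; the directory name follows the convention described above, the icon and location set ids are hypothetical, and the element names (and their capitalisation) should be checked against the schema.
<rootDir>Icons</rootDir>
<locationIcon>
  <description>Rain gauge locations</description>
  <iconId>raingauge</iconId>
  <locationSetId>RainGauges</locationSetId>
</locationIcon>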
05 Module Descriptors
What [Link]
Required yes
Only expert users should attempt to make changes in this configuration. Errors could affect the functionality of the complete
system.
The module descriptors file is used to register module plug-ins that can be used in workflows. The module descriptors define the name of the
module and the associated Java class to call. This class must implement the module plug-in interface for it to work within DELFT-FEWS. All
modules that are included in the distribution of DELFT-FEWS are registered in the Module Descriptors. When available on the file system, the
name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
moduleDescriptor
Root element of the module descriptor configuration. One entry is required for each module defined.
Attributes:
id: Id or name of the module
description
className
Java class called when running the module as referenced by its Id. NOTE: this class must implement the DELFT-FEWS module plug-in interface.
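A sketch of a single module descriptor entry is shown below; the id and class name are placeholders only, since the actual class must be one shipped with, or written against, the DELFT-FEWS module plug-in interface.
<moduleDescriptor id="MyModule">
  <description>Example module plug-in registration</description>
  <className>com.example.fews.MyModulePlugin</className>
</moduleDescriptor>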
06 Display Descriptors
What [Link]
Required yes
Only expert users should attempt to make changes in this configuration. Errors could affect the functionality of the complete
system.
The display descriptors file is used to register display plug-ins that can be called from the DELFT-FEWS GUI. The display descriptors define the
name of the display and the associated Java class to call. This class must implement the display plug-in interface for it to work within
DELFT-FEWS. All displays that are included in the distribution of DELFT-FEWS are registered in the Display Descriptors. When available on the
file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
displayDescriptor
Root element of the display descriptor configuration. One entry is required for each display defined.
Attributes;
description
className
Java class called when running the display as referenced by its Id. NOTE: this class must implement the DELFT-FEWS display plug-in interface.
07 Permissions
What [Link]
Required no
General
Permissions can be added to the FEWS configuration to allow users (user groups) to access Explorer tasks, Data Editor functions, Filters, etc.
Permissions can be optionally configured in the following configuration files:
[Link]
Restrict access to explorer tasks such as the Time Series Dialog or the Grid Display. The tasks will not be available in the menus
or toolbar for users who do not have the right permissions.
[Link]
Control who can add and edit values in the data editor window
Control who can add and edit labels in the data editor window
Control who can add and edit comments in the data editor window
[Link]
Control who can create, edit, delete, persist and run scenarios in the scenario editor window
[Link]
Control which displays are visible in the spatial plot window for the current user
[Link]
Control which filters are visible in the FEWS explorer for the current user
[Link]
Control which shortcuts are visible in the Time Series Display for the current user
[Link]
Control which users can view, run and approve workflows in the Forecast Dialog and Manual Forecast Dialog.
Also control which users can delete forecasts and change expiry times of forecasts in the Forecast Dialog.
NOTE: Using permissions on workflows indirectly influences the behaviour of the scenario editor window. Scenarios based on
hidden or non-runnable workflows are not shown in the scenario editor.
Configure optional permission names in any of the above described configuration files.
Create the permissions in the permissions configuration file (Permissions 1.00 [Link]) and configure the usergroup names that should
have access to the permissions.
Create the usergroups in the usergroup configuration file (Usergroups 1.00 [Link]) and assign user names to them.
This can be achieved by adding the optional permission tag to the configuration and giving it a self-describing name.
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Permission
Usergroup
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 4 Elements in the Usergroups configuration
Usergroup
Base tag for a usergroup; configure one for each user group. Usergroups can contain other usergroups.
User
Name of the user that belongs to the usergroup. Users can be placed in multiple usergroups.
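As an indicative sketch (the permission, group and user names below are hypothetical, and the exact element and attribute names should be verified against the Permissions and UserGroups schemas), the two files could contain entries along the following lines.
Permissions file (grant a permission to one or more user groups):
<permission id="AllowValueEditor">
  <userGroup id="Forecasters"/>
</permission>
UserGroups file (define the groups and their members):
<userGroup id="Forecasters">
  <user id="jdoe"/>
  <user id="asmith"/>
</userGroup>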
An example fragment of permission tags, as configured in TimeSeriesDisplayConfig 1.00 [Link]:
<convertDatum>true</convertDatum>
<valueEditorPermission>AllowValueEditor</valueEditorPermission>
<labelEditorPermission>AllowLabelEditor</labelEditorPermission>
<commentEditorPermission>AllowCommentEditor</commentEditorPermission>
....
04 Regional Configuration
Introduction
The regional configuration items form the basis of the region specific configuration of DELFT-FEWS as a forecasting system for a particular river
system or administrative region. It includes definitions of the parameters, locations, units and flags used, which may vary per application of the
system.
The region configuration items include (items in bold are required for a minimal system):
From version 2008.03 onwards you can choose to define Locations, LocationSets, IdMaps, DisplayGroups,
ThresholdValueSets and ValidationRuleSets in a dbf file. This can be a useful way of defining all your information in one place.
It also means you can configure FEWS efficiently without XML Spy, and it reduces XML parsing time when starting FEWS. See
chapter 22 for further details.
Many of the configuration items required will include references to strings. To avoid duplication, a tag can be defined in the
[Link] file in the root configuration and the tag name used in the XML file (see also System Configuration).
Contents
01 Locations
01 - Related Locations
02 LocationSets
03 Parameters
05 Branches
06 Grids
07 Filters
08 ValidationRulesets
09 Thresholds
10 ThresholdValueSets
11 ColdModuleInstanceStateGroups
12 ModuleInstanceDescriptors
13 WorkflowDescriptors
14 IdMapDescriptors
15 FlagConversionsDescriptors
16 UnitConversionsDescriptors
17 CorrelationEventSetsDescriptors
18 TravelTimesDescriptors
19 TimeUnits
20 Historical Events
21 Value Attribute Maps
22 Locations and attributes defined in Shape-DBF files
23 Qualifiers
24 Topology
25 ModifierTypes
26 TimeSteps
01 Locations
What [Link]
Required yes
DELFT-FEWS is a location oriented system. All time series data must be referenced to a (geographic) location. This location must be identified by
its geographic coordinates within the coordinate system used. When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted)
Optional time zone for dates and times defined in the locations configuration file. If no time zone is defined, then dates and times are in GMT.
geoDatum
Definition of the geoDatum used in defining the locations. This may be different than the geoDatum used in the displays. For enumeration of
geoDatums, see Appendix B.
location
Root element for the definition of each individual location. Multiple entries may be defined.
Attributes;
description
Optional description of the location. This description will appear as a tool-tip when hovering over the location in the map display.
shortName
Optional short name for the location. When available, this string will replace the name in the time series display legend.
toolTip
Optional element to customize the tooltip shown when hovering over a location in the main map display.
You can use \n, CDATA or HTML when you need multiple lines. Besides tags defined in the [Link] file you can use the following
tags:
The tooltip supports HTML, including images and hyperlinks. The URL in the hyperlink can be an internet URL, an executable file, a document file, or a
[Link]. Use the CDATA XML tag to include HTML in an XML file. Check the available HTML functionalities here.
Name: %NAME%\n
Desc: %DESCRIPTION%\n
Last value \[%LAST_VALUE%\] Time \[%LAST_VALUE_TIME%\]\n
Forecast Start Time \[%FORECAST_START_TIME%\]\n
Maximum \[%MAXIMUM_VALUE%\] Time \[%MAXIMUM_VALUE_TIME%\]
A more advanced example, using HTML (use the <BR> tag to start a new line):
<toolTip><![CDATA[<html>
<table id="details">
<tr>
<td width="50" valign="top">ID</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">%ID%</td>
</tr>
<tr>
<td width="50" valign="top">Naam</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">%NAME%</td>
</tr>
<tr>
<td width="50" valign="top">Type</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">%DESCRIPTION%</td>
</tr>
<tr>
<td width="50" valign="top">Foto</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">
<a href="file:/$FOTOSDIR$/%ID%.jpg" >
<img src="file:/$FOTOSDIR$/thumbs/%ID%.jpg" border="0">
</a>
</td>
</tr>
<tr>
<td width="50" valign="top">Documentatie</td>
<td width="5" valign="top">:</td>
<td width="200" valign="top">
<a href="file:/$PDFDIR$/%ID%.pdf">%ID%.pdf</a>
</td>
</tr>
</table>
</html>
]]></toolTip>
parentLocationId
Optional Id of a location that functions as a parent. In the filters child locations (locations that refer to a parent) are normally invisible. However,
they are displayed in the graphs whenever a parent is selected.
visibilityPeriod
Optional. This is the period for which a location is visible in the user interface. The start and the end of the period are inclusive. If no
visibilityPeriod is defined for a location, then the location is visible for all times. Currently the visibility period is used in the map (explorer) window,
the time series display and the spatial display.
startDateTime: the date and time of the start of the visibility period. The start of the period is inclusive. If startDateTime is not defined,
then the location is visible for all times before endDateTime.
endDateTime: the date and time of the end of the visibility period. The end of the period is inclusive. If endDateTime is not defined, then
the location is visible for all times after startDateTime.
The elevation defined will be used for converting a parameter supporting local and/or global datum. By convention the data
stored in the DELFT-FEWS database is at the local datum. The elevation defined here is added when displaying/converting to a
global datum.
The value defined for the elevation should be the gauge zero for river gauges where an exact level is important.
When using transformations and the datum needs to be converted and also a multiplier, divider and/or incrementer are defined
in the time series set of the data, then the following equations are used.
When reading data from the database the calculation is:
value = (stored_value + z) * multiplier / divider + incrementer
When writing data to the database the multiplier, divider and incrementer of the time series set are not used, so the calculation
is:
stored_value = value - z
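For example (illustrative numbers only): with z = 2.5, multiplier = 1, divider = 1 and incrementer = 0, a stored value of 1.20 at the local
datum is shown as (1.20 + 2.5) * 1 / 1 + 0 = 3.70 at the global datum, and a value of 3.70 written back is stored as 3.70 - 2.5 = 1.20.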
All time series data in DELFT-FEWS must be referenced to a location. This is the case for all data types (scalar, longitudinal,
grids & polygons).
For Grids and Longitudinal profiles, additional information may be required and defined in the grids and branches configurations
respectively. For scalar and polygon time series no additional information is required.
01 - Related Locations
Function: Functionality to define related locations and how to use them in your configuration
Why to Use? To be able to simply relate series of several locations to each other, e.g. water levels to a weir, raingauge to a catchment etc.
Description: Based on the DBF shape file or [Link] you can easily manage the configuration.
Contents
Overview
How to be used
Examples
[Link]
[Link]
timeSeriesSets (in Filters, DisplayGroups and Transformations)
Transformation to compute flows at a weir
Overview
This functionality enables linking time series between locations, without copying any data. It can be done both in the user interface
(filters and displayGroups in the FEWS Explorer) and in the transformations.
Typical examples of this functionality are:
relate the nearest rain gauge time series to a catchment or fluvial gauge
relate water level gauges to structures: like upstream and downstream water levels to several gates.
If you relate, for example, a rain gauge to a list of fluvial gauges, it will look in the filters as if each of those locations has a rainfall time series as parameter.
Once you select this parameter and location to make a graph, you will see that the rainfall time series of the original rain gauge is displayed.
How to be used
Some remarks:
in timeSeriesSets you should always make sure to use the locationRelationId only if the locationRelationId is defined for all locations in the
locationSet. It is not allowed to have an undefined or empty locationRelationId; in that case a configuration error will occur. This is easy to
ensure: add a relatedLocationExists constraint to the locationSet.
in transformations you can easily connect series from one location to another. If you have for example a weir with two gates, you can
define the upstream and downstream water level gauges as relatedLocations, but you refer to them through each gate.
Examples
[Link]
This example shows how to configure related locations in the [Link] configuration file. The upstream and downstream water level gauges
are related to the two weir gate locations. Notice that the namespace relatedLocationId should be added to the XML definition.
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<location id="weir_gate1">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
<relatedLocationId:H_US>weir_h_us</relatedLocationId:H_US>
<relatedLocationId:H_DS>weir_h_ds</relatedLocationId:H_DS>
</location>
<location id="weir_gate2">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
<relatedLocationId:H_US>weir_h_us</relatedLocationId:H_US>
<relatedLocationId:H_DS>weir_h_ds</relatedLocationId:H_DS>
</location>
<location id="weir_h_us">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
</location>
<location id="weir_h_ds">
<parentLocationId>weir</parentLocationId>
<x>0</x>
<y>0</y>
</location>
<location id="weir">
<x>0</x>
<y>0</y>
<relatedLocationId:METEO>meteo_station</relatedLocationId:METEO>
</location>
<location id="meteo_station">
<x>0</x>
<y>0</y>
</location>
....
[Link]
This example shows how to define related locations if you use a DBF file to define the locations.
<esriShapeFile>
<file>myLocDBF</file>
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<id>%ID%</id>
<name>%NAME%</name>
<description>%TYPE%</description>
<parentLocationId>%PARENT_ID%</parentLocationId>
<x>%X%</x>
<y>%Y%</y>
<relation id="METEO">
<relatedLocationId>%METEO%</relatedLocationId>
</relation>
<relation id="H_US">
<relatedLocationId>%H_US%</relatedLocationId>
</relation>
<relation id="H_DS">
<relatedLocationId>%H_DS%</relatedLocationId>
</relation>
<attribute id="regio">
<text>%REGIO%</text>
</attribute>
<attribute id="type">
<text>%TYPE%</text>
</attribute>
...
</esriShapeFile>
<constraints>
<relatedLocationExists locationrelationid="METEO"/>
<relatedLocationExists locationrelationid="H_US"/>
<relatedLocationExists locationrelationid="H_DS"/>
</constraints>
This example shows how you link to a related location in the timeSeriesSet. This can be done in the filters, displayGroups and Transformations
only.
<moduleInstanceId>ImportCAW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationRelationId>METEO</locationRelationId>
<locationSetId>my_locations</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
This example shows a transformation (from the new TransformationModule) that computes the flows over the two weir gates, by using the
upstream and downstream water level gauges. By using the relatedLocations, this can be set up very easily in only one transformation.
<structure>
<generalWeirVariableHeight>
<headLevel>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationRelationId>H_US</locationRelationId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</headLevel>
<tailLevel>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationRelationId>H_DS</locationRelationId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</tailLevel>
<height>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</height>
<coefficientSet>
<width>1</width>
<freeFlowLimitCoefficient>1</freeFlowLimitCoefficient>
<freeDischargeCoefficient>1</freeDischargeCoefficient>
<drownedDischargeCoefficient>1</drownedDischargeCoefficient>
</coefficientSet>
<discharge>
<timeSeriesSet>
<moduleInstanceId>Flows</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>weir_gate1</locationId>
<locationId>weir_gate2</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-365" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</discharge>
</generalWeirVariableHeight>
</structure>
02 LocationSets
What [Link]
Required no
Location sets may be used to define logical groups of locations. Often an action may need to be taken on a whole set of locations (e.g. validation).
By creating a LocationSet the action need only be defined once.
Any location may appear in more than one location set. Internally a location set is simply evaluated as a list of locations.
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
locationSet
Root element for the definition of a location set. Multiple entries may exist.
Attributes;
description
Optional description of the location set. Used for reference purposes only.
locationId
locationSetId
LocationSet ID configured to be a member of the locationSet. Multiple entries may exist. This is useful to group locationSets together.
esriShapeFile
It is also possible to define locationSets with locations that are automatically generated (so NOT defined in the [Link]) from an ESRI Shape
file. See the next page for all detailed information.
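For illustration, a minimal sketch of two location sets is given below; the ids are hypothetical and the element names follow the descriptions above (a locationSet with an id attribute containing description, locationId and/or locationSetId entries).
<locationSet id="RainGauges">
  <description>All rain gauge locations</description>
  <locationId>gauge_01</locationId>
  <locationId>gauge_02</locationId>
</locationSet>
<locationSet id="AllGauges">
  <description>Grouping of other location sets</description>
  <locationSetId>RainGauges</locationSetId>
  <locationSetId>RiverGauges</locationSetId>
</locationSet>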
03 Parameters
What [Link]
Required yes
All time series data in DELFT-FEWS must be defined to be of one of the parameters supported. This configuration file defines the list of supported
parameters, including the unit of the parameter.
Parameters are organised into ParameterGroups. All parameters within a group should have the same properties and the same units. Only
parameters of the same group may be displayed in a single (sub) plot in the time series display, though this can be overruled if requested using a
display template.
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 25 Root element of the parameter definition
displayUnitConversionId
The unit conversions id to convert from the (internal) units to the display units. This id should be available in the UnitConversionsDescriptors. Only
required when a displayUnit is specified for a parameter group
configUnitConversionId
The unit conversions id to convert from the units specified in config files to the internal units for this parameter. This id should be available in the
UnitConversionsDescriptors. Only required when a user unit is specified for a thresholdValuesSet, validationRuleSet or ratingCurve
ratingCurveStageParameterId
This parameter is used to resolve the internal stage unit and display stage unit and the name (label) that is used for the rating curve stage
axis/column in the user interface
ratingCurveDischargeParameterId
This parameter is used to resolve the internal discharge unit and display discharge unit and the name (label) that is used for the rating curve
discharge axis/column in the user interface
parameterGroup
Root element of each definition of a parameter group. Multiple entries may exist.
Attributes;
Figure 26 Elements of the ParameterGroup configuration in the parameter definition
description
Optional description of the parameter group. Used for reference purposes only.
parameterType
Defines whether the parameters in the group are "instantaneous" parameters, "accumulative" parameters or "mean" parameters.
dimension
unit
Unit of the parameters defined in the group. The unit may be selected from a list of units supported by DELFT-FEWS. These are generally SI
units. For an enumeration of supported units see Appendix B.
displayUnit
Specify when the unit seen by the user is not the same as the unit of the values internally stored in the data store. Also specify
displayUnitConversionsId above. In this unit conversion the conversion from the (internal) unit to the display unit should be available.
usesDatum
Indicates if the parameters in the group are to be converted when toggling between local and global datum. Value is either true or false. If the
value is true, the elevation defined in the location is added to the time series in the database on conversion. See Locations
valueResolution
Default accuracy (smallest increment between two values) of the calculated or measured values for all parameters in this group. Value resolution
can also be specified for a single parameter (since 2011.01). By default the resolution is dynamic and the values are stored as a 32 bit floating
point with 6 significant digits. Floating points don't compress very well and are slow to decode. It is far more efficient to store a value as an integer
with a scale factor (= value resolution). When an 8, 16 or 24 bit integer is not big enough to achieve the value resolution, the default 32 bit floating
point is used as fallback. E.g. when the accuracy of the water level is half a centimeter, specify 0.005. When the accuracy of the discharge is 10
m3/s, specify 10.
parameter
Definition of each parameter in a parameter group. Multiple parameters may be defined per group.
Attributes;
shortName
Short name for the parameter. This name will be used in the time series display and reports.
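For illustration, a minimal parameterGroup sketch is given below; the ids, names and values are hypothetical and the exact element order and optional elements (e.g. displayUnit with its conversion id) should be checked against the schema.
<parameterGroup id="Level">
  <description>Water levels</description>
  <parameterType>instantaneous</parameterType>
  <unit>m</unit>
  <usesDatum>true</usesDatum>
  <valueResolution>0.005</valueResolution>
  <parameter id="H.obs">
    <shortName>Observed level</shortName>
  </parameter>
</parameterGroup>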
As of Fews 2010_01 there is a second root node parameters available in the [Link] schema. This new element facilitates configurations
of the parameters to be displayed in Fews Explorer in the form of a hierarchical tree. The parameters node embeds the parameterGroups element
described above. The element parameterRootNode is of type ParameterNodeComplexType and represents the top node of the hierarchical tree
structure that is to be displayed. Other parameterNodes can be nested within each instance of ParameterNodeComplexType. Each node has an
id field and can have a name and description and multiple parameterIds. The parameterIds from parameterGroups that are not included in the
hierarchical tree specified by parameterRootNode are added automatically at the root level.
id
This attribute is used as the identifier label which is displayed when ids are made visible within Fews Explorer.
name
This element is used as the name label which is displayed when names are made visible within Fews Explorer.
description
This element is used as the description label which is displayed when descriptions are made visible within Fews Explorer.
parameterId
This element must refer to the identifier of one of the parameters defined within the parameterGroups section.
NB. Each parameterId can only be used once and has to be defined within one of the parameterGroups.
parameterNode
<parameterGroups>
<parameterGroup id="Discharge">
...as before ...
</parameterGroup>
</parameterGroups>
<parameterRootNode id="Parameters">
<parameterNode id="Discharge">
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
</parameterNode>
<parameterNode id="Water Level">
<name>Water level</name>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
<parameterId>[Link]</parameterId>
...
</parameterNode>
</parameterRootNode>
If there is no timeSeriesSet for the combination of parameter and location, the nodes will be grayed out.
05 Branches
What Required Description schema location
DELFT-FEWS is a location oriented system. All time series data must be referenced to a (geographic) location. Scalar time series need no
additional information. For longitudinal time series data, each point in the vector must be referenced to a location in a branch structure. This
location must be identified by its coordinate within the branch (chainage), and may also be defined by its geographic coordinates within the
coordinate system used.
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
geoDatum
Definition of the geoDatum used in defining the locations of the branch. This may be different than the geoDatum used in the displays. For
enumeration of geoDatums, see Appendix B.
branch
Root element of the definition of a branch. Multiple entries may exist to define multiple branches.
Attributes;
id: Id of the current branch. This ID must refer to a location ID in the Locations definition.
branchName
startChainage
Chainage of the start of the branch (only used in the longitudinal display)
endChainage
Chainage of the end of the branch (only used in the longitudinal display)
upNode, downNode
Optional item in branch to create branch linkage. This information is not used in DELFT-FEWS, but may be relevant to an external module when
exported through the published interface.
zone
Optional item in branch that allows definition of a zone – this is a part of the branch that may be indicated in the longitudinal display with the name
given (currently not used in DELFT-FEWS).
pt
Definition of the points belonging to the branch. At least two points must be defined per branch.
Attributes;
chainage; coordinate of point as measured along the branch (should be greater than or equal to the start chainage and less than the end
chainage).
label; label used to identify the point
x; optional geographic coordinate of the point (Easting)
y; optional geographic coordinate of the point (Northing)
z; optional elevation of the point. The elevation is an important attribute for plotting in the longitudinal profile display. This elevation is
taken as the bed level.
description; optional description string. When defined a vertical line will be drawn in the longitudinal display at the location of this point,
and the description given displayed.
thresholdValueSetId; optional reference to an ID of a threshold value set. When defined, the threshold values will be drawn as markers
in the longitudinal display at the location of this point.
comment
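A minimal branch sketch is given below; the branch id, chainages and coordinates are hypothetical (the id must refer to a location ID in the Locations definition), and the element and attribute names follow the descriptions above.
<branch id="river_main">
  <branchName>Main river</branchName>
  <startChainage>0</startChainage>
  <endChainage>12500</endChainage>
  <pt chainage="0" label="upstream boundary" x="100000" y="450000" z="12.5"/>
  <pt chainage="12500" label="downstream gauge" x="108500" y="455000" z="3.2"/>
</branch>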
06 Grids
What Required Description schema location
DELFT-FEWS is a location oriented system. All time series data must be referenced to a (geographic) location. Scalar time series need no
additional information. For grid time series data, each point in the grid must be referenced to a location in a grid structure.
Grids may be regular or irregular. In regular grids each cell has the same width, height and area within the coordinate system it is specified in.
In irregular grids the grid has a fixed number of rows and columns, but the cell height and width is not equal in each row and column. For these
grids additional information is required on the location of each individual cell in the grid to allow display in the grids display as well as for use in the
spatial interpolation routine.
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
regular
Attributes;
locationId; location Id of the grid. This locationId must be included in the locations definition.
irregular
Attributes;
locationId; location Id of the grid. This locationId must be included in the locations definition.
Regular grids
description
rows, columns
geoDatum
Coordinate system the grid is defined in. This may be a different coordinate system to that used in the main map display. The coordinate system
may also differ per grid, as a grid may be regular in one coordinate system, but may be irregular in another. Defining the grid in the regular
coordinate system is easier.
firstCellCenter
Coordinates of the center of the first grid cell. The convention in DELFT-FEWS is that this is the center point of the top left cell in the grid
(Upper-Left).
firstCellCenter: x
firstCellCenter: y
firstCellCenter: z
Optional elevation of the first cell center point. If only this elevation is defined, then all cells in the grid are assumed to have the same
elevation.
xCellSize / columnWidth
Cell width of each column in the grid. The cell width is given in the unit of the coordinate system referred to in the geoDatum. Generally this is
metres, but in WGS 1984 this is decimal degrees.
The xCellSize-element is used when all cells are equal in width. Please use the columnWidth-element to define cells with variable columnWidth.
yCellSize / rowHeight
Cell height of each row in the grid. The cell height is given in the unit of the coordinate system referred to in the geoDatum. Generally this is
metres, but in WGS 1984 this is decimal degrees.
The yCellSize-element is used when all cells are equal in height. Please use the rowHeight-element to define cells with variable height.
z / zBottom / zTop
Optional definition of the elevation of each point in the grid. This definition is only necessary where a datum is required in for example
3-dimensional interpolation. This may be applied in for example interpolating temperature grids in high mountain areas. Alternative uses are the
display of elevation in a cross section of the Spatial Display. The bottom/top layer is only displayed if the parameter unit is meters and if the
corresponding displayOptions are configured in the TimeSeriesDisplayConfig file and if the layer contains non-NaN values. The useDatum
property is not used here.
Use 'z' for the average elevation, and zBottom and zTop in case a model layer needs to be defined
Optional definition of the elevation by reference to an ASC or BIL file stored in the MapLayerFiles-directory. This definition is only necessary
where a datum is required in for example 3-dimensional interpolation. This may be applied in for example interpolating temperature grids in high
mountain areas. Alternative uses are the display of elevation in a cross section of the Spatial Display. The bottom/top layer is only displayed if the
parameter unit is meters and if the corresponding displayOptions are configured in the TimeSeriesDisplayConfig file and if the layer contains
non-NaN values. The useDatum property is not used here.
Use 'zMapLayerName' for the average elevation, and zBottomMapLayerName and zTopMapLayerName in case a model layer needs to be
defined.
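For illustration, a regular grid definition might look roughly as follows; the location id, grid dimensions, coordinates and cell sizes are hypothetical, and the element names follow the descriptions above.
<regular locationId="radar_grid">
  <rows>200</rows>
  <columns>300</columns>
  <geoDatum>WGS 1984</geoDatum>
  <firstCellCenter>
    <x>4.05</x>
    <y>52.95</y>
  </firstCellCenter>
  <xCellSize>0.1</xCellSize>
  <yCellSize>0.1</yCellSize>
</regular>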
Irregular grids
Figure 30 Elements of the irregular grid in the Grids configuration
description
rows, columns
geoDatum
Coordinate system the grid is defined in. This may be a different coordinate system to that used in the main map display. The coordinate system
may also differ per grid. A grid may be regular in one coordinate system, but may be irregular in another. Defining the grid in the regular
coordinate system is generally easier.
cellCentre
Definition of the cell centre points of all cells in the irregular grid. The number of cellCentre points defined must be the same as the number of
cells in the grid (rows x columns).
cellCentre: x, cellCentre: y
cellCentre: z
07 Filters
What Required Description schema location
Filters are used in DELFT-FEWS to define the locations that are displayed on the main map display, and that can be selected to display data.
Filters are defined to arrange locations, with associated parameters in logical groups. Each filter is defined as a collection of time series sets.
Filters may be defined as a nested structure, allowing for the definition of a hierarchical set of filters.
When available on the file system, the name of the XML file is for example: Filters 1.00 [Link]
Figure 31 Elements of the filters configuration
It is possible to explicitly define every (child) filter. This may result in too many repeating timeSeriesSet definitions. Therefore it is also possible
(since version 2009.02) to define groups of timeSeriesSets that can be used many times in the filter, additionally by using constraints on the
location attributes. See also the next examples.
description
Optional description of the filter configuration. Used for reference purposes only.
defaultFilterId
Filter that is selected automatically on start up of FEWS. If not defined no filter will be selected.
filter
Definition of a filter. Multiple entries may exist where multiple filters are defined. Each filter may contain either a set of one or more time series set,
or a child filter. The child is a reference to another filter definition that again contains either a child filter or a list of time series sets. This structure
is used to construct a hierarchal tree view of filters.
Attributes:
id: Id of the filter. This ID is used in the tree view in the main display
name: optional name for the filter. For reference purposes only.
ValidationIconsVisible: This allows the user to make use of the additional validation icons available in the explorer (2009.01)
child
Reference to another filter. The child element refers to the ID of the other filter as a foreign key.
Attributes:
timeSeriesSet
Definition of a time series set belonging to a filter. Multiple time series sets may be defined.
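A minimal filter sketch is shown below; the filter id and the contents of the time series set are hypothetical, and the timeSeriesSet structure follows the examples shown elsewhere in this guide.
<filter id="Rain gauges" name="Rain gauges">
  <timeSeriesSet>
    <moduleInstanceId>ImportObserved</moduleInstanceId>
    <valueType>scalar</valueType>
    <parameterId>P.obs</parameterId>
    <locationSetId>RainGauges</locationSetId>
    <timeSeriesType>external historical</timeSeriesType>
    <timeStep unit="hour"/>
    <relativeViewPeriod unit="day" start="-10" end="0"/>
    <readWriteMode>read only</readWriteMode>
  </timeSeriesSet>
</filter>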
The time span defined in the time series sets in the filter has an important function. It determines the time span checked for
determining status icons (e.g. missing data) in the main map display.
Not all locations need to be included in a filter. Locations that are not defined, will never be visible to the user.
The readWrite parameter defined in the time series set included in the filter will determine if the time series may be edited by the
user. If this parameter is set to read only then the time series will not support editing. This is the case for external time series
only. Simulated time series are read only by convention. Notice that editable timeSeriesSets should have synchLevel 5 to get
the edits saved into the central database.
Figure 32 Example of filter configuration, as defined in the example XML configuration above.
It is possible to define filters that may become empty: no locations comply with the constraints. These filters are not displayed. An advantage of
this approach is that all possible filters can be defined once and automatically become visible when a location complies with the constraints.
08 ValidationRulesets
What Required Description schema location
Validation rules are defined in DELFT-FEWS to allow quality checking of all time series data (scalar time series only). Several validation criteria
may be defined per time series. All validation rules for all time series are defined in this configuration. For each time series to be checked, a set of
validation rules is defined. Defining validation rules to apply to a time series set using a locationSet rather than identifying series individually can
simplify the configuration greatly. Most validation rules may be defined either as a constant value, or as a value valid per calendar month.
When available on the file system, the name of the XML file for configuring the Validation Rule Sets is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
validationRuleSet
Root element of the definition of a validation rule set. Multiple entries may exist.
Attributes;
validationRuleSetId: Optional reference ID for the validation rule set. Used only in messaging.
timeZone: Shift (in hours) of the time zone to be used in considering time values in validation.
unit
Specify when the unit given for the values is not the same as the (internally stored) unit of the parameter it applies to. When specified it is required
to also specify configUnitConversionsId in [Link]. In this unit conversion the conversion from the specified unit to the (internal) unit
should be available.
timeSeriesSet
extremeValues
Validation rules defined to check for extreme values (hard and soft limits)
rateOfChange
Validation rules defined to check rate of change. Please note the units are per second, i.e. 2 m in 15 minutes is 0.00222 m/s.
sameReading
temporaryShift
The function equivalents of these values relate to the Shape-DBF file configuration. See here.
This group of validation rules checks that the values in the time series do not exceed minimum and maximum limits. These limits may be defined
as soft limits or as hard limits. Values exceeding soft limits will be marked as doubtful but retained. Values exceeding hard limits will be marked as
unreliable.
hardMax
Validation rule for checking hard maximum. Values exceeding this limit will be marked as unreliable.
Attributes;
hardMin
Validation rule for checking hard minimum. Values exceeding this limit will be marked as unreliable.
Attributes;
softMax
Validation rule for checking soft maximum. Values exceeding this limit will be marked as doubtful.
Attributes;
softMin
Validation rule for checking soft minimum. Values exceeding this limit will be marked as doubtful.
Attributes;
monthLimit
Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.
This group of validation rules checks that the values in the time series do not exceed maximum rates of change. When the rate of change limit is
exceeded, the values causing the limit to be exceeded will be marked as unreliable. Rate of change limits may be defined to be the same for the
rate of rise as for the rate of fall. These may also be defined to be different. The rates need to be specified in the unit of the time series they apply to,
per second. E.g. if you define a rate of change for a water level gauge with values in metres, the rate should be given in metres per second.
rateofRiseFallDifferent
Root element used if the rate of rise limit is defined differently from the rate of fall limit.
rateOfRise
Attributes;
constantValue: Maximum rate of rise, used irrespective of date of the value. [unitofinput/s]
rateOfFall
Attributes;
constantValue: Maximum rate of fall, used irrespective of date of the value. [unitofinput/s]
monthLimit
Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.
Time series data can be validated on series of same readings. A long series of identical readings is unlikely for field observations and may indicate an
instrument error. In some cases a small variability may still be observed despite the instrument error. The same readings check allows for defining a bandwidth
within which the value is considered to be the same.
sameReadingDeviation
Root element for definition of the bandwidth within which the value may vary while still being considered the same reading. The bandwidth is twice the deviation.
Attributes;
sameReadingPeriod
Root element for definition of the time span limit for which the value may remain the same and still be considered realistic. If the reading remains the same for a
longer period of time, the ensuing values will be considered unreliable.
Attributes;
constantValue: Value for time span in seconds, used irrespective of date of the value.
monthLimit
Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.
Time series of data can be validated on temporary shifts. These occur when instruments are reset, and can be identified by the values rapidly
falling to a constant value, remaining at that value for a short period of time and then returning to the original value range. A complex set of
validation criteria include the rate of change as well as a maximum time the value remains the same.
Figure 37 Elements of the temporary shift configuration of the ValidationRuleSets.
rateOfTemporaryShift
Rate of change that must be exceeded both on change to shifted value and change back to original value range for validation rule to apply.
Attributes;
constantValue: Value for rate of change, used irrespective of date of the value.
temporaryShiftPeriod
Maximum time span constant shifted value is in time series for validation rule to apply.
Attributes;
constantValue: Value for time span in seconds, used irrespective of date of the value.
monthLimit
Element used when defining variable limits per calendar month. Twelve values must be defined. When defined the monthly limit will overrule the
constant limit.
09 Thresholds
DELFT-FEWS supports checking of time series against thresholds. When thresholds are crossed, appropriate messages may be issued.
Definition of thresholds is in two parts. In the first part of the configuration, the types of threshold used are defined. In the second, the values for
threshold valid for a particular location and time series are defined. In this section the configuration for the definition of the thresholds is defined.
DELFT-FEWS supports different types of threshold events. These include crossing of level and rate thresholds. The occurrence of a peak is also
seen as a threshold event.
For each threshold defined, two additional items need to be configured. Internally DELFT-FEWS maintains threshold events as a non-equidistant
time series, where the crossings are identified by an integer. For each threshold two unique integer Id's need to be assigned. One ID is used to
identify the upcrossing of the threshold, the other Id is assigned to identify the downcrossing. The exception to this is the peak threshold where
only a single Id needs to be assigned to identify the occurrence of the peak event. Note: in the new thresholds configuration approach
(thresholdGroups) these ids are optional and will be generated when not specified in configuration.
Similar to the Id's used for upcrossings and downcrossing, a warning level integer can be assigned to threshold crossings. This warning level is
resolved to either an icon (for display in the main FEWS GUI), or a colour (for use in reports). Warning levels need not be unique. These levels
are used only for level thresholds.
Configuration
When available on the file system, the name of the XML file for configuring the types of thresholds is for example:
Thresholds Fixed file name for the Thresholds configuration
default Flag to indicate the version is the default configuration (otherwise omitted).
levelThreshold
Root element for definition of a level threshold. Multiple entries may exist.
Attributes;
rateThreshold
Root element for definition of a rate threshold. Multiple entries may exist.
Attributes;
maxThreshold
Root element for definition of a peak event threshold. Multiple entries may exist.
Attributes;
upWarningLevel
Integer level used in determining icon (through ValueAttributesMap) on up-crossing of threshold (level thresholds only).
downWarningLevel
Integer level used in determining icon (through ValueAttributesMap) on down-crossing of threshold (level thresholds only).
upIntId
Unique integer level defined in threshold crossing time series (internal) on up-crossing of threshold.
downIntId
Unique integer level defined in threshold crossing time series (internal) on down-crossing of threshold.
intId
Unique integer level defined in threshold crossing time series (internal) on occurrence of peak event.
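As an indicative sketch (the ids and integer values are hypothetical, and the exact attribute/element split should be checked against the schema), a level threshold and a peak threshold could be defined along these lines.
<levelThreshold id="floodWarning">
  <upWarningLevel>2</upWarningLevel>
  <downWarningLevel>1</downWarningLevel>
  <upIntId>201</upIntId>
  <downIntId>202</downIntId>
</levelThreshold>
<maxThreshold id="peakEvent">
  <intId>301</intId>
</maxThreshold>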
Each thresholdValue links a level (e.g. 3.28 meters) to a threshold (e.g. "top of dike"). Each threshold (e.g. "top of dike") links a crossing direction
(up or down) to a warning level (e.g. "Flood Alarm"). Each warning level corresponds to a unique integer that is called the severity of the warning
level. Also see the figure below.
Definitions
If a threshold only has an upWarningLevel or has upWarningLevelSeverity > downWarningLevelSeverity, then the threshold is called an
"upCrossing threshold". This means that the threshold activates its upWarningLevel when there are data values above it (e.g. flood
warning).
If a threshold only has a downWarningLevel or has downWarningLevelSeverity > upWarningLevelSeverity, then the threshold is called a
"downCrossing threshold". This means that the threshold activates a warning when there are data values below it (e.g. drought warning).
If a threshold has upWarningLevelSeverity = downWarningLevelSeverity, then the threshold is called both an "upCrossing threshold" and
a "downCrossing threshold". This means that the threshold activates its upWarningLevel if there is data above it and/or below it. It does
not make sense to have upWarningLevelSeverity = downWarningLevelSeverity, but this is possible in the old thresholds configuration
(not in the new improved thresholds configuration).
A thresholdValue with an upCrossing threshold has been crossed when there are data values above or equal to its value.
A thresholdValue with a downCrossing threshold has been crossed when there are data values below or equal to its value.
A thresholdValue with a threshold that is both upCrossing and downCrossing has been crossed when there are data values above, below
or equal to its value, i.e. always.
The most severe activated warning level is used for the warning icons and colours in the user interface and in the reports. Delft-FEWS takes the
following steps to determine the most severe activated warning level for a given time series (the threshold log events are generated in a different
but similar way).
1. First Delft-FEWS finds the thresholdValueSet (V) that corresponds to the given time series. If there is no thresholdValueSet defined that
corresponds to the given time series, then no warning levels are activated, i.e. "All clear".
2. For the given time series only the data within a given time period is used. The TimeSeriesDialog and DataEditor use the period that is
currently visible in the chart. The explorer user interface uses the relativeViewPeriod defined for the timeSeriesSet in the Filters
configuration file. The ThresholdEventCrossingModule uses the relativeViewPeriod defined for the timeSeriesSet in the
ThresholdValueSets configuration file. The ThresholdOverviewDisplay uses the configured aggregationTimeStep or relativePeriod in the
ThresholdOverviewDisplay configuration file. Please note that in the ThresholdOverviewDisplay and in the Reports the data is read using
the timeSeriesSets configured in the inputVariables. Therefore the relativeViewPeriods defined for the timeSeriesSets of the
inputVariables must include the relativePeriod for which the most severe activated warning level has to be determined. Otherwise not all
of the required data is read.
3. If the given time series contains only missing values, then no warning levels are activated, i.e. "All clear".
4. For each data value separately, Delft-FEWS considers each levelThresholdValue in V and determines if it has been crossed for the given
data value (see above for definitions of crossed). Each levelThresholdValue that has been crossed, activates its corresponding warning
level. From all the warning levels that are activated for the given data value, the most severe warning level is chosen. This is repeated for
each data value within the given time period. From the resulting warning levels for the individual data values, the most severe warning
level is chosen.
10 ThresholdValueSets
What [Link]
Required no
Description definition of threshold values for all locations and data types
Complementary to the definition of the types of thresholds identified, the values of the thresholds are defined in the ThresholdValueSets
configuration. The configuration of this is similar to the validation rules. Several thresholds may be defined per time series. For each time series to
be tested, a set of thresholds is defined.
Thresholds may be defined to initiate an action by the Master Controller when applied in a live forecasting system. Actions are taken in response
to a log event code. To identify which threshold crossing for which locations will initiate an action (e.g. enhanced forecasting), an event code can
be defined in the ThresholdValueSet. When the threshold is crossed the event code is generated.
When available on the file system, the name of the XML file for configuring the ThresholdValueSets is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
thresholdValueSet
Root element for defining a set of thresholds. For each time series or time series set for which a threshold event is to be tested a new element is
required.
Attributes;
description
Optional description for the ThresholdValueSet. Used for reference purposes only
unit
Specify when the unit given for the values is not the same as the (internally stored) unit of the parameter it applies to. When specified it is required
to also specify configUnitConversionsId in [Link]. In those unit conversions the conversion from the specified unit to the (internal) unit
should be available.
levelThresholdValue
rateThresholdValue
maxThreshold
forecastAvailableThresholdValue
If a threshold crossing event is measured for a given observed parameter, then the thresholdEventCrossing module logs whether or not there is a
forecast run available for the corresponding forecast parameter, within a given relative time period. This information is used in the
ThresholdSkillScoreDisplay.
timeSeriesSet
Definition of the time series set for which the thresholds are to be tested.
ratingCurve
Convert this threshold level value to a discharge level threshold value using the rating curve defined here.
Figure 40 Elements of the Level Threshold configuration of the ThresholdValueSets configuration
levelThresholdId
Id of the level threshold. This Id must refer to a threshold type defined in the Thresholds definition (see previous paragraph).
value
valueFunction
Function alternatives may also be used instead of the value itself (see: Location and attributes defined in Shape-DBF files).
upActionLogEventTypeId
Event code to be generated on the up-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.
downActionLogEventTypeId
Event code to be generated on the down-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.
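For illustration only, a minimal sketch of a thresholdValueSet with a single level threshold value is given below; the id, the value and the event code are invented and the content of the timeSeriesSet is omitted.
<thresholdValueSet id="GaugeLevelThresholds">
<levelThresholdValue>
<levelThresholdId>warning</levelThresholdId>
<value>3.28</value>
<upActionLogEventTypeId>TE.571</upActionLogEventTypeId>
</levelThresholdValue>
<timeSeriesSet>
...
</timeSeriesSet>
</thresholdValueSet>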
rateThresholdId
Id of the rate threshold. This Id must refer to a threshold type defined in the Thresholds definition (see previous paragraph).
value
timeSpan
rainRate
Boolean indicator that distinguishes thresholds on rain rates, where the threshold is defined as the average rain rate over the timeSpan exceeding the
threshold value, from rates in for example a level, where the rate is determined as the change in value divided by the time span.
upActionLogEventTypeId
Event code to be generated on the up-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.
downActionLogEventTypeId
Event code to be generated on the down-crossing of the threshold. This event code can be used to initiate for example enhanced forecasting. The
event code need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for
runs which have an a-priori approved status. This is normally the scheduled forecast run.
Figure 42 Elements of the maxThreshold configuration of the ThresholdValueSets configuration
maxThresholdId
Id of the max threshold. This Id must refer to a threshold type defined in the Thresholds definition (see previous paragraph).
value
The value item is used here as a selection of peaks. The peak must exceed this value to be deemed significant (peaks over threshold).
timeSpan
The timeSpan is used to establish independence of peaks. Peaks within timeSpan of each other are considered to belong to the same event; a
message will only be issued for the highest.
actionLogEventTypeId
Event code to be generated on the threshold occurring. This event code can be used to initiate for example enhanced forecasting. The event code
need not be unique. Multiple threshold crossings may generate the same event code. Note that event codes will only be generated for runs which
have an a-priori approved status. This is normally the scheduled forecast run.
11 ColdModuleInstanceStateGroups
What Required Description schema location
Many forecasting models use an initial state as initial condition. When used in real time, DELFT-FEWS can be used to manage these states, such
that models are run from a warm state. Long run times in initialising models are thus avoided.
When no warm state is available a cold state will be used. Additionally the user may explicitly select the cold state to be used as model initial
condition.
A default initial condition must be available for models requiring state management. Additional groups of cold module states may also be defined.
These can be selected in for example scenario runs. While a default state is required for every model, additional states need only be defined
where available. When the indicated state is not found, DELFT-FEWS will revert to the default state. Where it is found, it will be
used as selected.
When available on the file system, the name of the XML file for configuring the ColdModuleInstanceStateGroups is for example:
ColdModuleInstanceStateGroups
Fixed file name for the ColdModuleInstanceStateGroups configuration
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 43 Elements of the ColdModuleInstanceStateGroups configuration
defaultGroup
Definition of the default group of module states. This is a required item, and only a single definition is allowed.
Attributes;
additionalGroup
Definition of the additional group of module states. One or more items may exist.
Attributes;
description
Optional description of the state group. Used for reference purposes only.
The name of the ZIP file containing the state follows a strict convention. This name is constructed using the moduleId of the
module using this cold state and writing the warm state, appended by the Id of the state group.
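As an illustration only, the group definitions might look like the following sketch; the group ids and descriptions are invented and the element details should be checked against the schema.
<defaultGroup id="Default">
<description>Default cold states for all models</description>
</defaultGroup>
<additionalGroup id="DrySummer">
<description>Cold states representing dry summer conditions</description>
</additionalGroup>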
12 ModuleInstanceDescriptors
What Required Description schema location
Each module configured in DELFT-FEWS must be registered in the ModuleInstanceDescriptors configuration. This is required to identify the
module to DELFT-FEWS (the name is free format), but is also required to define the type of module through reference to the moduleDescriptors
defined (see system configuration).
When available on the file system, the name of the XML file for configuring the ModuleInstanceDescriptors is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 44 Root element of the ModuleInstanceDescriptors configuration
ModuleInstanceDescriptorsId
Root element of the ModuleInstanceDescriptor element. For each module defined the element is repeated. Multiple instances may exist.
Attributes;
Id: Id of the Module Instance. This Id must be unique. Normally a string is used that gives some understanding of the role of the
module (e.g. SpatialInterpolationPrecipitation).
name: Optional name for the module. Used for reference purposes only.
moduleId
Reference to the ModuleDescriptors defined in the SystemConfiguration to identify the type of module.
description
Optional description of the module instance. Used for reference purposes only.
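For illustration, a sketch of a single entry is given below; the id, moduleId and description are invented, and the element name moduleInstanceDescriptor, its casing and the order of child elements are assumptions to be checked against the schema.
<moduleInstanceDescriptor id="SpatialInterpolationPrecipitation">
<description>Spatial interpolation of observed precipitation</description>
<moduleId>Interpolation</moduleId>
</moduleInstanceDescriptor>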
13 WorkflowDescriptors
What Required Description schema location
Each workflow configured in DELFT-FEWS must be registered in the WorkflowDescriptors configuration. This is required to identify the workflow
to DELFT-FEWS (the format of the name is free). The configuration also sets some properties of the workflow.
When available on the file system, the name of the XML file for configuring the WorkflowDescriptors is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
workflowDescriptor
Root element of the WorkflowDescriptor. New element is required for each workflow. Multiple instances may be defined.
Attributes;
Id: Id of the workflow. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (e.g.
ImportExternal).
name: Optional name for the module. Used for reference purposes only.
visible: Boolean toggle to indicate if workflow is visible for selection in the manual forecast display. Non-independent workflows (e.g.
sub-workflows) should not be marked visible so that these cannot be run independently. Default is true.
forecast: Boolean flag to indicate if the workflow is identified as a forecast. This should be the case for workflows with simulated time series
as a result. Import workflows of external data are not forecasts. Default is true.
allowApprove: Boolean flag to indicate if the workflow may be approved a-priori through the manual forecast display (stand-alone only). Default is
true.
autoApprove: Boolean flag to indicate the workflow should automatically be approved a-priori (stand-alone only). Default is false.
autoSetSystemTime: Boolean flag to indicate the workflow should automatically adjust the system time. When the workflow is completed and
is fully or partly successful, the system time will be set to the start time of the period written by this workflow.
If the start time is not a valid time in accordance with the cardinal timestep, the next valid time will be used.
Default flag value is false. Applicable only on stand-alone.
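A minimal sketch using these attributes is given below; the workflow ids, names and attribute values are purely illustrative.
<workflowDescriptor id="ImportExternal" name="Import external data" forecast="false"/>
<workflowDescriptor id="Fluvial_Forecast" name="Fluvial forecast" visible="true" allowApprove="true" autoApprove="false"/>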
14 IdMapDescriptors
What Required Description schema location
Each IdMap to support mapping external to internal location and parameter Id's configured in DELFT-FEWS can be registered in the
IdMapDescriptors configuration.
From Delft-FEWS version 2008.03 it is no longer required to identify the IdMap to DELFT-FEWS. If this IdMapDescriptor file does NOT
exist or the individual reference to an IdMap is not present, the system will assume the corresponding IdMap file exists within the
IdMapFiles directory. This functionality is similar for UnitConversion(Descriptors), FlagConversion(Descriptors) and
TravelTimes(Descriptors).
When available on the file system or in the (central) database, the content should be in line with the available IdMap files to prevent errors. When
available, the name of the XML file for configuring the IdMapDescriptors is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
IdMapDescriptor
Root element of the IdMapDescriptor. New element is required for each IdMap. Multiple instances may be defined.
Attributes;
Id: Id of the idMap. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (e.g.
ImportRTS).
name: Optional name for the IdMap. Used for reference purposes only.
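For illustration, a single descriptor entry might look like the following sketch; the id and name are invented and the lower-case element name is an assumption.
<idMapDescriptor id="ImportRTS" name="Id mapping for the RTS import"/>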
15 FlagConversionsDescriptors
What [Link]
Required no
Description Definition of Flag conversions used for mapping external data quality flags to DELFT-FEWS data quality flags
Each FlagConversion to support mapping external to internal data quality flags configured in DELFT-FEWS can be registered in the
FlagConversionDescriptors configuration.
From Delft-FEWS version 2008.03 it is no longer required to identify the FlagConversion to DELFT-FEWS. If this
FlagConversionDescriptors file does NOT exist or the individual reference to a FlagConversion is not present, the system will assume
the corresponding FlagConversion file exists within the FlagConversionsFiles directory. This functionality is similar for
IdMap(Descriptors), UnitConversion(Descriptors) and TravelTimes(Descriptors).
When available on the file system, the name of the XML file for configuring the FlagConversions is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
FlagConversion
Root element of the FlagConversion. New element is required for each FlagConversion. Multiple instances may be defined.
Attributes;
Id: Id of the FlagConversion. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (
e.g. ImportRTS).
name: Optional name for the FlagConversion. Used for reference purposes only.
16 UnitConversionsDescriptors
What UnitConversionsDescriptors.xml
Required no
Description Definition of unit conversions used for mapping external units to DELFT-FEWS units
Each UnitConversion to support mapping external to internal units configured in DELFT-FEWS can be registered in the UnitConversionsDescriptors
configuration.
From Delft-FEWS version 2008.03 it is no longer required to identify the UnitConversion to DELFT-FEWS. If this
UnitConversionsDescriptors file does NOT exist or the individual reference to a UnitConversion is not present, the system will assume
the corresponding UnitConversion file exists within the UnitConversionsFiles directory. This functionality is similar for
IdMap(Descriptors), FlagConversion(Descriptors) and TravelTimes(Descriptors).
When available on the file system, the name of the XML file for configuring the UnitConversionDescriptors is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
UnitConversionDescriptor
Root element of the UnitConversionDescriptor. New element is required for each UnitConversion identified. Multiple instances may be defined.
Attributes;
Id: Id of the UnitConversion. This Id must be unique. Normally a string is used that gives some understanding of the role of the module (
e.g. ImportRTS).
name: Optional name for the UnitConversion. Used for reference purposes only.
17 CorrelationEventSetsDescriptors
What [Link]
Required no
The correlation module in DELFT-FEWS allows forecasts for a downstream location to be established using a correlation of peak events for the
forecast site and one or more support sites. For each river multiple correlations between several sites on the river may be defined. Correlation
sets can be defined to allow logical ordering of these into groups. This configuration file defines the groups for which correlation event data
will be later defined.
When available on the file system, the name of the XML file for configuring the CorrelationEventSetsDescriptors is for example:
CorrelationEventSetsDescriptors
Fixed file name for the CorrelationEventSetsDescriptors configuration
default Flag to indicate the version is the default configuration (otherwise omitted).
CorrelationEventSetsDescriptor
Root element of the CorrelationEventSetsDescriptor. New element is required for each CorrelationEventSet identified. Multiple instances may be
defined.
Attributes;
Id: Id of the CorrelationEventSet. This Id must be unique. Normally a string is used that gives some understanding of the group created (
[Link]).
name: Optional name for the CorrelationEventSet. Used for reference purposes only.
18 TravelTimesDescriptors
What [Link]
Required no
Description Definition of sets of travel times for correlation events (used by correlation module only)
The correlation module in DELFT-FEWS allows forecasts for a downstream location to be established using a correlation of peak events for the
forecast site and one or more support sites. For each river multiple correlations between several sites on the river may be defined. Together with
the correlation establishing a forecast value, an estimate of travel time between the locations can be given. This is given either as a default travel
time, or it is established through regression of the events considered. An estimate of the travel time is also used to establish which events in the
upstream and downstream location are paired.
From Delft-FEWS version 2008.03 it is no longer required to identify the TravelTimes to DELFT-FEWS. If this TravelTimesDescriptor file
does NOT exist or the individual reference to a TravelTime is not present, the system will assume the corresponding TravelTimes file
exists within the TravelTimesFiles directory. This functionality is similar for UnitConversion(Descriptors), FlagConversion(Descriptors)
and IdMap(Descriptors).
Correlation sets can be defined to allow logical ordering of these into groups. This configuration file is similar to the CorrelationEventSets and
defines the groups for which travel time data will be later defined.
When available on the file system, the name of the XML file for configuring the TravelTimesDescriptors is for example:
TravelTimesDescriptors
Fixed file name for the TravelTimesDescriptors configuration
default Flag to indicate the version is the default configuration (otherwise omitted).
TravelTimesDescriptors
Root element of the TravelTimesDescriptor. New element is required for each TravelTimes set identified. Multiple instances may be defined.
Attributes;
Id: Id of the TravelTimes set. This Id must be unique. Normally a string is used that gives some understanding of the group created (
[Link]).
name: Optional name for the TravelTimes Set. Used for reference purposes only.
19 TimeUnits
What [Link]
Required no
Description Definition of time units supported by the system (used for mapping external time units to internal time units)
External data sources to be imported in DELFT-FEWS may provide data at an equidistant time step. The time unit is often defined as a
string, and must be resolved on import to a time unit recognised by DELFT-FEWS. The mapping of time units is defined in the TimeUnits
configuration.
When available on the file system, the name of the XML file for configuring the TimeUnits is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 51 Elements of the TimeUnits configuration
timeUnit
Root element for each external time unit identified. Multiple entries may exist.
Unit
milliseconds
Equivalent of time unit in milliseconds (base unit in DELFT-FEWS). By convention 0 milliseconds is a non-equidistant time unit. -1 indicates that
the unit is not supported. This is the case for time units such as months, years etc.
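As an illustration, two entries might look like the following sketch; the external unit string "hr" is invented and the lower-case element names are assumed. One hour equals 3600000 milliseconds, while months map to -1 because they are not supported as an equidistant unit.
<timeUnit>
<unit>hr</unit>
<milliseconds>3600000</milliseconds>
</timeUnit>
<timeUnit>
<unit>month</unit>
<milliseconds>-1</milliseconds>
</timeUnit>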
20 Historical Events
What [Link]
Required no
Description Definition of historical events to be plotted against real time forecast data for reference purposes
DELFT-FEWS allows a set of historical events to be defined that can be retrieved when looking at forecast data through the time series display.
These events can then be displayed in the same plot as the real-time data for reference purposes.
Historical events are configured as a time series referenced to a location/parameter. When that location/parameter is displayed in the time series
display, a drop down list of the events available for that specific combination is displayed. Selected events are displayed in the same sub-plot as
the real time data for that location parameter.
When available on the file system, the name of the XML file for configuring the HistoricalEvents is:
HistoricalEvents: Fixed file name for the HistoricalEvents configuration (this can now be split into multiple files with different postfixes, i.e.
HistoricalEvents_Northern.xml, HistoricalEvents_West.xml)
1.00: Version number
default: Flag to indicate the version is the default configuration (otherwise omitted).
Figure 52 Elements of the HistoricalEvents configuration.
historicalEvent
Attributes:
Alternatively locationId, parameterId and eventData can be left out and replaced with historicalEventSets.
eventData
Time series data for the event. This follows the same definition as the inputVariable detailed in the Transformation Module configuration. The typical
profile option is used for defining an historical event.
Attributes:
timeStep
Time step of the typical profile variable to be defined for the historical event.
Attributes:
relativeViewPeriod
Relative view period of the event. This is the time span of the event. The start and end information will be used when initially plotting the event to
determine its position against the display time at the time of display.
data
Data entered to define the event. Data is entered using the dateTime attribute only, with the specific date and time values given for each data point.
Other attributes available for defining typical profiles are not used.
Attributes:
dateTime: Attribute value indicating the value entered is valid for a specific date and time combination. The string has the format "<year>-
<month>-<day>T<hour>:<minute>:<second>". For example the 31st of December 1984 is "1984-12-31T[Link]".
timeZone
Optional specification of the time zone for the data entered (see timeZone specification).
timeZone:timeZoneOffset
The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)
timeZone:timeZoneName
Enumeration of supported time zones. See appendix B for list of supported time zones.
<!-- Example historic event sets -->
<historicalEventSet name="04-07 January 1999">
<historicalEvent locationId="[Link].765512" parameterId="[Link]" name="test">
<eventData>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24"/>
<data dateTime="1999-01-04T[Link]" value="2.196"/>
<data dateTime="1999-01-04T[Link]" value="2.199"/>
<data dateTime="1999-01-04T[Link]" value="2.201"/>
<data dateTime="1999-01-04T[Link]" value="2.198"/>
<data dateTime="1999-01-04T[Link]" value="2.204"/>
<data dateTime="1999-01-04T[Link]" value="2.213"/>
<data dateTime="1999-01-04T[Link]" value="2.218"/>
<data dateTime="1999-01-04T[Link]" value="2.233"/>
<data dateTime="1999-01-04T[Link]" value="2.252"/>
<data dateTime="1999-01-07T[Link]" value="2.472" comment="Notified everybody to monitor this."/>
<data dateTime="1999-01-07T[Link]" value="2.462"/>
<data dateTime="1999-01-07T[Link]" value="2.453"/>
<data dateTime="1999-01-07T[Link]" value="2.444"/>
</eventData>
</historicalEvent>
<historicalEvent locationId="[Link].765772" parameterId="[Link]" name="test2">
<eventData>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-48" end="24"/>
<data dateTime="1999-01-04T[Link]" value="3.146"/>
<data dateTime="1999-01-07T[Link]" value="3.371" comment="Notified AK."/>
</eventData>
</historicalEvent>
</historicalEventSet>
</historicalEvents>
21 ValueAttributeMaps
Required no
Description attributes to be mapped from time series values for use in reports etc.
DELFT-FEWS allows attributes to be associated to values in a time series. This can be used to associate either a textual value or an icon for use
in displays or in reports. Typically the use of value attribute maps is important in forecasts derived through application of the lookup table
modules. Critical conditions are then defined which resolve a combination of inputs to a single "Lookup Index" output. This Lookup index is then
resolved either to a textual message, an icon or a colour using the value attribute maps. The same principle is used in allocating colours/icons to
thresholds, where the unique threshold index is used as an entry to the value attribute mapping.
When available on the file system, the name of the XML file for configuring ValueAttributeMaps is:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 53 Elements of the value attribute maps configuration
valueAttributeMap
Root element for the definition of a set of attribute values. The Id used to identify this set is later referenced in for example the report module
configuration to allow association of an attribute to a value. Multiple sets may be defined.
Attributes;
attributes
Root element for associating an attribute to a value. Each value may be attributed a definition (text), a colour and/or an icon.
Attributes;
value: the value with which the defined attributes must be associated. Note that an exact match is required to allow the mapping to be valid.
description
Text to be attributed where this value is given in the input series. This text may be used in a report.
image
Path and filename of the icon to be attributed where this value is given in the input series. This icon may be used in for example displays as well
as in reports.
colour
Colour to be attributed where this value is given in the input series. This colour may be used in for example background colouring of a table in a
report.
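A minimal sketch of a value attribute map is given below; the id, values, descriptions, icon file names and colours are all invented for illustration.
<valueAttributeMap id="LookupIndexMap">
<attributes value="1">
<description>No flooding expected</description>
<image>green.gif</image>
<colour>green</colour>
</attributes>
<attributes value="2">
<description>Flooding expected</description>
<image>red.gif</image>
<colour>red</colour>
</attributes>
</valueAttributeMap>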
22 Location and attributes defined in Shape-DBF files
Why to Use? To have only one file or a set of files where all region specific information is stored.
Description: Based on the DBF or shape file you can easily manage the configuration.
Contents
Overview
Configuration
locationSets
locationIcons
locations
idMaps
displayGroups
Thresholds
ValidationRuleSets
CoefficientSetFunctions
Coefficients that depend on location and time
Coefficients with multiple values (tables)
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference
Overview
To be able to have only one file that manages all the regional information, Delft-FEWS offers the functionality to use DBF or shape files that can
be linked to the configuration. Locations and locationSets can be automatically generated, and useful information such as idMaps, threshold values or
validation values can be derived from these tables. It is also possible to link to one or more DBF files that contain time-dependent attributes. This
can be used to define time-dependent coefficients that can be used by the new transformation module.
Finally you have a configuration that has many links to the DBF / shape files, but that is managed only in these files. The advantage is that
these files can simply be updated by automatic updating processes.
you generate locations in a locationSet and define attributes to these locations that store additional information like idMapping or
validation limits.
locationSets can be generated from the DBF or from another locationSet by using conditions.
idMaps can be linked to the location text attributes.
all values in validationRuleSets can be linked to location number attributes
all threshold values can be linked to location number attributes
displayGroups can be generated automatically from locationSets
it works both for regular point locations and for grids
values in coefficientSetFunctions in a transformation config file can be linked to location number attributes
Configuration
locationSets
The most useful way is first to read all locations from the DBF into one locationSet, where all attributes are assigned.
See for example:
<esriShapeFile>
<file>gegevens</file>
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<id>%ID%</id>
<name>%NAAM%</name>
<description>%TYPE%</description>
<iconName>%ICONFILE%</iconName>
<parentLocationId>%PARENT%</parentLocationId>
<timeZoneOffset>+05:00</timeZoneOffset>
<dateTimePattern>yyyyMMdd HH:mm:ss</dateTimePattern>
<visibilityStartTime>%START%</visibilityStartTime>
<visibilityEndTime>%EIND%</visibilityEndTime>
<x>%X%</x>
<y>%Y%</y>
<z>0</z>
<attribute id="PARENT">
<text>%PARENT%</text>
</attribute>
<attribute id="TYPE">
<text>%TYPE%</text>
</attribute>
<attribute id="CITECTLOC">
<text>%CITECTLOC%</text>
</attribute>
<attribute id="IDMAP_Q">
<text>%DEBIET%</text>
</attribute>
<attribute id="HMIN_Q">
<number>%HMIN_Q%</number>
</attribute>
<attribute id="HMAX_Q">
<number>%HMAX_Q%</number>
</attribute>
<attribute id="ROC_Q">
<number>%ROC_Q%</number>
</attribute>
</esriShapeFile>
In the above example the visibilityStartTime and visibilityEndTime tags are used to define the columns in the DBF file that contain the start and
end dateTimes of the visibilityPeriod for each location. The (optional) visibilityPeriod is the period for which a location is visible in the user
interface. The start and the end of the period are inclusive. Currently the visibility period is used in the map (explorer) window, the time series
display and the spatial display. If startDateTime is not defined, then the location is visible for all times before endDateTime. If endDateTime is not
defined, then the location is visible for all times after startDateTime. If startDateTime and endDateTime are both not defined, then the location is
visible for all times. Furthermore the (optional) dateTimePattern tag is used to define the pattern for the dateTimes defined in the DBF file. If
dateTimePattern is not specified, then the default pattern "yyyyMMdd" is used, which is the internal format that a DBF file uses for columns of type
'D' (date columns). The (optional) timeZoneOffset is the offset of the times in the DBF file, relative to GMT. For example "+02:00" means
GMT+02:00. If no offset is specified, then time zone GMT is used by default.
Next you can derive the required locationSets from this dump by using constraints.
You can use constraints like:
attributeTextEquals
attributeTextContains
attributeTextStartsWith
idContains
attributeExists
etc (see schema or the schema diagram)
For example:
<locationSetId>gegevensdump</locationSetId>
<constraints>
<not>
<attributeTextEquals id="IDMAP_KLEP" equals=""/>
</not>
<attributeTextEquals id="TYPE" equals="Stuwen"/>
</constraints>
It is also possible in a locationSet to link to time-dependent attributes. Time-dependent attributes need to be defined in a separate DBF file. In the
locationSet use the attributeFile tag to make a reference to such a file. The following xml example has a reference to the file
[Link], which contains attributes that have different values for different periods in time, as well as different values for different
locations. In this case the startDateTime and endDateTime tags are used to define the columns in the DBF file that contain the start and end
dateTimes for each row. A given row in the DBF file contains values that are only valid between the time period for that row. This period is defined
by the optional startDateTime and endDateTime for that row. If a row has no startDateTime, then it is valid always before the endDateTime. If a
row has no endDateTime, then it is valid always after the startDateTime. If a row has no startDateTime and no endDateTime, then it is always
valid.
<esriShapeFile>
<file>PumpStations</file>
<geoDatum>WGS 1984</geoDatum>
<id>%ID%</id>
<name>%ID%</name>
<x>%X%</x>
<y>%Y%</y>
<z>0</z>
<attributeFile>
<dbfFile>PumpStationsAttributes</dbfFile>
<id>%ID%</id>
<timeZoneOffset>+05:00</timeZoneOffset>
<dateTimePattern>dd-MM-yyyy HH:mm</dateTimePattern>
<startDateTime>%START%</startDateTime>
<endDateTime>%EIND%</endDateTime>
<attribute id="speed">
<number>%FREQ%</number>
</attribute>
<attribute id="discharge">
<number>%POMPCAP%</number>
</attribute>
</attributeFile>
</esriShapeFile>
locationIcons
Since 2009.02 it is possible to define the location icon with a new option in the locationSets derived from Shape-DBF files. You can define the
location icon with the element iconName. The icon file should be defined by its complete file name and this file should be available in the
Config\IconFiles directory. If you want to refer to Config\IconFiles\[Link], you should define the iconName as
[Link].
locations
The regional configuration file Locations is not needed any more, except for other locations that are not supplied in a DBF file.
idMaps
<locationIdPattern internalLocationSet="Pattern Stations" internalLocationPattern="H-*"
externalLocationPattern="*"/>
..
or
..
<function externalLocationFunction="@CITECTLOC@" internalLocationSet="VV_Q.meting"
internalParameter="[Link]" externalParameterFunction="@IDMAP_DEBIET@"/>
Notice that you can use the location attributes as a function to map to the correct locations. You can create strings based on the attributes.
displayGroups
See all available options in the actual schema. The options that are useful in combination with the DBF configuration are explained here. Both options
automatically generate the list of the locations in the shortcut trees. The list of locations is ordered alphabetically.
singleLocationDisplays
Adds multiple displays at once to this display group. Every display will show only one location.
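A sketch of such a definition is given below, assuming the same child elements as in the singleParentLocationDisplays example that follows (locationSetId and plotId); the ids are illustrative.
<singleLocationDisplays>
<locationSetId>VV_H.meting</locationSetId>
<plotId>levels</plotId>
</singleLocationDisplays>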
singleParentLocationDisplays
Adds multiple displays at once to this display group. Every display will show only the child locations for one parent location, and the parent location
itself when specified in the time series sets.
<singleParentLocationDisplays>
<locationSetId>VV_P.[Link]</locationSetId>
<locationSetId>VV_P.meting</locationSetId>
<parentLocationSetId>VV_P.[Link]</parentLocationSetId>
<parentLocationSetId>VV_P.meting</parentLocationSetId>
<plotId>meteo</plotId>
</singleParentLocationDisplays>
Thresholds
You can now use ...Function alternatives for all the values:
<levelThresholdId>LevelWarn</levelThresholdId>
<description>.....</description>
<valueFunction>@SOFT_MAX@</valueFunction>
<upActionLogEventTypeId>TE.571</upActionLogEventTypeId>
ValidationRuleSets
You can now use ...Function alternatives for all the values, like:
extremeValuesFunctions
sameReadingFunctions
etc...
<levelThresholdId>LevelWarn</levelThresholdId>
<description>.....</description>
<valueFunction>@SOFT_MAX@</valueFunction>
<upActionLogEventTypeId>TE.571</upActionLogEventTypeId>
CoefficientSetFunctions
In the new transformation module it is possible to define transformations with embedded coefficientSetFunctions in a transformation config file.
For a given transformation, e.g. StructurePumpFixedDischarge, there is a choice between a coefficientSetFunctions object and a coefficientSet
object. The coefficientSetFunctions object is the same as its corresponding coefficientSet counterpart, except that all elements with a value are replaced
by elements with a function. A function is an expression that can refer to location attributes, e.g. "@discharge@ / 60". See the following
xml example.
<structure>
<pumpFixedDischarge>
...
<coefficientSetFunctions>
<discharge>@discharge@ / 1000</discharge>
</coefficientSetFunctions>
...
</pumpFixedDischarge>
</structure>
CoefficientSetFunctions are currently (as of build 30246) supported for the following transformations: userSimple, stageDischargePower,
dischargeStagePower, filterLowPass and all structure transformations. See the pages of those specific transformations for configuration
examples.
For elements of type float (e.g. userSimple coefficient value) the attribute should be defined as a number attribute in
the locationSets configuration file as follows:
<number>%COEF_A%</number>
For elements of type string, boolean (e.g. structureCrumpWeir energyHeadCorrection) or enumeration (e.g.
stageDischarge type) the attribute should be defined as a text attribute in the locationSets configuration file as follows:
<text>%EHCORR%</text>
A coefficientSetFunction can be very useful when using coefficients that depend on location and/or time. In that case the coefficientSetFunction
needs to be defined only once with a link to the correct attributes. The attributes are defined in a DBF file. Then a transformation run will use the
coefficientSetFunction to create coefficientSets for each location and time-period by taking the required values from the attributes from the DBF
file automatically.
time-dependent attributes
If several attributes are used in the same coefficientSetFunction, then it is still possible to have some of those attributes
time-independent and some time-dependent. However all the time-dependent attributes that are used in a given coefficientSet
should be defined with exactly the same time-periods in the DBF file.
Some transformations require a table, e.g. a head-discharge table, in a coefficientSet. For the purpose of tables it is possible to define a given
attribute in a DBF file with multiple values. To do this make multiple rows with the same location and same period, only with different values for
the attributes. If a given attribute is used in a table in a coefficientSetFunctions object, then for each location and period the multiple values that
are defined for that location and period will be converted to a table during a transformation run. This only works for elements in a
coefficientSetFunctions object that are designated as table elements. An element in a coefficientSetFunctions object is designated as a table
element if, according to the schema, the element can occur only once in the coefficientSetFunctions object, but can occur multiple times in the
corresponding coefficientSet object. This is how a transformation run knows that it should search for multiple values for attributes to create a
table. This is the case e.g. for the headDischargeTableRecord element in the StructurePumpHeadDischargeTable transformation, which would be
used as in the following xml example. In this case the "head" and "discharge" attributes should have multiple values defined in the DBF file so that
a head-discharge table can be created.
tables
All attributes, e.g. "head" and "discharge", that are used in the same table element, e.g. headDischargeTableRecord, should
have the same number of values defined per location per period. It is still possible to have a different number of values for
different periods and different locations, as long as there are as many head values as discharge values per location per period.
<structure>
<pumpHeadDischargeTable>
...
<coefficientSetFunctions>
<headDischargeTableRecord head="@head@" discharge="@discharge@ * 1000"/>
</coefficientSetFunctions>
...
</pumpHeadDischargeTable>
</structure>
Known issues
Technical reference
<moduleDescriptor id="ENTRY">
<description>DESCRIPTION</description>
23 Qualifiers
Function: Qualifiers to parameters
Contents
Overview
Configuration
Qualifier definition
Time Series
Overview
To be able to give additional information to a parameter without creating lots of extra parameters, the feature of qualifiers was introduced.
Qualifiers are used, next to the locationId and parameterId, to define a unique time series. An example is a series where you want to derive the daily
minimum, maximum and mean values of an observed series of water levels. The original series is a regular series with parameterId "H" and no
qualifier, whereas the derived series have the same parameterId "H" but qualifiers like "min", "mean" and "max".
Configuration
Qualifier definition
Qualifiers are defined in the regionConfigFiles directory. When available on the file system, the name of the XML file is for example:
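As an illustration, qualifier definitions consistent with the min/mean/max example above might look like the following sketch; the qualifier element name and the ids are assumptions and should be checked against the schema.
<qualifier id="min" name="daily minimum"/>
<qualifier id="mean" name="daily mean"/>
<qualifier id="max" name="daily maximum"/>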
Time Series
A qualifier is used in a time series set by adding a qualifierId element next to the parameterId.
See for example:
<moduleInstanceId>ImportCAW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<qualifierId>min</qualifierId>
<locationSetId>Boezem_Poldergemaal_H.meting</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-6000" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
<synchLevel>5</synchLevel>
24 Topology
Function: Configure topology of an IFD environment
Why to Use? The [Link] is necessary to be able to use panels like the topology panel and the forecast panel.
Description: The Topology configuration is used to define the topology of an IFD environment. The behaviour of the forecast panel, which is used to
start IFD runs, can also be configured here.
Available since: DelftFEWS201001
Contents
Overview
Configuration
Nodes definition
Configuration options which apply to all nodes
enableAutoRun
enableAutoSelectParameters
Configuration options which apply to individual nodes
Groupnodes
WorkflowId
StateSelection
LocalRun
Viewpermission
Leaf nodes
NextNodeId
PreviousNodeId
LocationId
FilterId
MapExtendId
Schema
Overview
The [Link] is a mandatory configuration file when you are setting up an IFD environment. This configuration file is used to configure the
topology of a region.
The topology is defined by individual nodes and their connectivity. The topology can be viewed in the topology panel, which shows a block
diagram of the topology, or in the forecast panel, which shows
a tree view of the topology. The behaviour of the forecast panel can also be configured in the topology file. For example a workflow can be
configured for a topology node. By default the workflow will
run locally when the node is selected in the forecast panel. This can be switched off by setting the option enableAutoRun to false.
The [Link] plays a central role in configuring an IFD environment since it is used to configure the forecast panel, which is the central panel
in an IFD environment.
Configuration
Nodes definition
The topology of a region is configured by defining the individual nodes of a region and grouping them. Below is an example from the topology of the
ABRFC region.
<nodes id="ABRFC">
<workflowId>ABRFC_Forecast</workflowId>
<nodes id="NMWTX" name="NMWTX">
<workflowId>NMWTX_Forecast</workflowId>
<node id="EGLN5" name="EGLN5"/>
<node id="CMMN5" name="CMMN5">
<previousNodeId>EGLN5</previousNodeId>
</node>
</nodes>
</nodes>
In the example above we see that the region ABRFC has two leaf nodes, CMMN5 and EGLN5. They are grouped in the group NMWTX. The group
NMWTX is part of the top-level node ABRFC.
This simple example shows how a topology can be defined and how the nodes and group nodes can be grouped together. It is also possible to
configure the connectivity between nodes. This can be done
by using the tag previousNodeId. In the example above we can see that EGLN5 is upstream of node CMMN5. The connectivity between nodes is
visualised in the topology panel.
The [Link] has two types of configuration options. The first group is applied to all nodes, the second group is applied to individual nodes. In
this part the first group of options will be explained.
enableAutoRun
enableAutoSelectParameters
These global options are configured at the top of the [Link] before the definition of the nodes.
enableAutoRun
This option is set to true by default. If a topology node is selected in the forecast panel and a workflow is configured for this node and the option is
enabled, then the associated workflow will automatically run locally. By setting this option to false this behaviour can be switched off.
enableAutoSelectParameters
This option is set to false by default. If a node is selected and a filter is configured for that node, then the filter will be selected automatically. If this
option is also enabled, then the parameters of that
filter will also be selected automatically. Because the parameters are also selected after selecting the node, the plot display will automatically show
the time series of the filter.
The second group of configuration options is applied to individual nodes or a group of nodes. These options are defined in the nodes to which
these options should be applied.
Groupnodes
workflowId
stateSelection
localRun
viewPermission
WorkflowId
The workflowId is optional for a node. If a workflow is configured, this workflow is automatically started after selection of the node if the option
enableAutoRun is set to true.
StateSelection
The forecast panel also allows the forecaster to select a state. The default state selection can be configured with this option.
coldState
warmState
noInitialState
LocalRun
This option can be used to configure if the workflow of this node should be run locally or at the server. By default workflows of leaf nodes are run
locally and workflows of group nodes are
run at the server. Local runs are considered to be temporary runs. The results of these runs are deleted when FEWS is stopped.
Viewpermission
With this option an (optional) viewpermission can be configured. If a user is not allowed to view this node it will not be visible in the forecast panel.
Leaf nodes
nextNodeId
previousNodeId
locationId
filterId
mapExtendId
workflowId
initialState
localRun
viewPermission
NextNodeId
This option is used to configure the next node of a topology node in the case that two topology nodes have configured a node to be the previous
node. The nextNodeId indicates which node is considered to be the next node when going downstream by using the next segment button in the
topology panel
PreviousNodeId
This option is used to configure the previous (upstream) node of a topology node. The connectivity defined this way is visualised in the topology
panel.
LocationId
This option can be used to connect a location to a topology node. After selection of a node the configured locations are automatically selected in the
filters.
FilterId
If a filter is configured for a topology node it will automatically be selected after selection of the topology node.
MapExtendId
If a mapExtendId is configured the map will automatically zoom to the configured map extent after selection of the node.
The remaining options workflowId, initialState, localRun and viewPermission are described in the section Groupnodes.
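As an illustration of these leaf node options, a sketch is given below; the node and workflow ids reuse the ABRFC example above, the other values are invented, and the element names and order should be checked against the schema.
<node id="CMMN5" name="CMMN5">
<previousNodeId>EGLN5</previousNodeId>
<workflowId>CMMN5_Forecast</workflowId>
<filterId>CMMN5</filterId>
<mapExtendId>CMMN5_extent</mapExtendId>
<localRun>true</localRun>
</node>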
Schema
25 ModifierTypes
What [Link]
Required no
Contents
Schema
Introduction
Time series modifiers
Single value modifiers
Constant value modifiers
Enumeration modifiers
Time series modifier
Mark unreliable modifier
Compound modifier
Missing value modifier
Switch option modifier
Option modifiers
Module parameter modifiers
Change weight times modifier
Blending steps modifier
Disable adjustment modifier
Sample years modifier
Module parameter modifier
Change ordinates modifier
Reverse order modifiers
Rating curve modifiers
Shift rating curve modifiers
Schema
Introduction
A forecaster can modify a forecast with so-called modifiers. Within FEWS there are two
types of modifiers: time series modifiers and parameter modifiers. Parameter modifiers can
modify a parameter of a model or of a transformation.
Time series modifiers modify a time series. The original time series is stored in the database.
However, as soon as this time series is retrieved from the database the modifier will be applied to it.
It is important to note that the original time series is always available in the database.
The [Link] defines which modifiers are available within a FEWS configuration. The modifiers can be
created in the modifiers panel. TimeValue modifiers can also be created in the plot display by graphically
editing a time series or by changing values in the table.
Single value modifiers
A single value modifier is a modifier which modifies only one value at one time step in a time series.
The forecaster can define a single value modifier in the modifier panel by selecting a date and a value.
The combination of the selected time and value is the definition of the single value modifier.
An example of the use of a single value modifier is the WECHNG-modifier of the NWS. This modifier sets the
snow water-equivalent for the date specified. The single value modifier is applied to an empty time series which holds
after the application of the modifier only the value of the modifier. This time series is used as an input time series
to the model. The model adapter reads the time series and knows that it has to change the snow water-equivalent for the
mod date to the specified value.
Display
The user can enter a value in the text box directly or by clicking on the spinner box next to it.
The value can also be adjusted by the slider bar.
The date of the modifier can be selected in the area above the one in which the forecaster enters the value for the modifier.
The units of the modifier are shown at the right side of the slider bar.
schema
timeSeries
The tag timeseries is used to define to which timeseries this modifier can be applied.
softLimits
The slider in the display is bounded to the soft limits defined. However they can be overruled by entering a higher or lower value in the text box.
hardLimits
The values entered in the text box or the slider are bounded by the hard limits defined.
defaultTime
The default time of the modifier. Currently two options are available: time zero and start run.
defaultValue
default value
derive a default value from a time series
derive default value from a statistical function
When a default value is configured the modifier will always default to that value.
In case the second option is chosen it is possible to define a time series filter from which the default value should be derived.
The modifier will look for a value at the time for which the modifier is defined.
The last option allows the forecaster to configure a statistical function from which the value should be derived.
Currently only the principal component analysis functions support this option.
When the principal component analysis is run in the plot display by selecting the principal component analysis function, the output value of this
function will be the default value for the modifier.
Configuration example
<timeSeries>
<parameterId>WECHNG</parameterId>
</timeSeries>
<softLimits>
<maximumValue>5</maximumValue>
<minimumValue>0</minimumValue>
</softLimits>
<hardLimits>
<minimumValue>0</minimumValue>
</hardLimits>
<defaultTime>start run</defaultTime>
<defaultValue>0</defaultValue>
First the id and name of the modifier is declared. In this case this instance of the singleValueModifier will be identified by wechng.
The timeSeries-part identifies that this modifier can be applied to any time series which have the parameter WECHNG.
The modifier has soft limits configured. These limits are used to limit the slider bar in the display.
In this example the slider bar will start at 0 and end at 5. But these soft limits can be overruled by
manually typing a value lower than zero or higher than 5.
The hardLimits identify the upper and lower limit of the mod and they can not be overruled.
This means that for this mod only the maximum value of the soft limit of 5 can be overruled because there
is a minimum value configured in the hard limits of 0. A single value modifier is only applied at one time step.
By default the time step is set to the start of the run in this modifier. The default value is set to 0.
Constant value modifiers
Constant value modifiers are very similar to single value modifiers. But instead of modifying a single value at a
particular point in time, they modify a time series over a period of time with a fixed value.
An example of the use of the constant value modifier is the MFC-modifier. This modifier adjusts the melt factor of the
snow17-model over the specified period of time with the specified value. It is (just as the WECHNG-modifier) applied to an
empty time series and used as an input time series to the snow17-model.
Display
Below the display of a constant value modifier is shown, which is very similar to the display of the single value modifier.
Note however that this modifier has a start and an end time. The constant value of the modifier can be specified in the
text box or with the slider. The period can be defined by using the start and end date boxes.
Schema
timeseries
The user can define a timeseries filter in this tag to define to which time series the modifier can be applied.
softLimits
The slider in the display is bounded to the soft limits defined. However they can be overruled by entering a higher or lower value in the text box.
hardLimits
The values entered in the text box or the slider are bounded by the hard limits defined.
defaultStartTime
start run
time zero
defaultEndTime
time zero
end run
defaultValue
Configuration example
<timeSeries>
<parameterId>MFC</parameterId>
</timeSeries>
<softLimits>
<maximumValue>10</maximumValue>
<minimumValue>0</minimumValue>
</softLimits>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
<defaultValue>1</defaultValue>
The id of this instance of the constant value modifier is mfc and its name is MFC. It will only be applied to time series
which have the parameterId MFC, because a timeSeries filter with parameterId MFC is defined.
No hard limits are defined but the softLimits are set to a range of 0 to 10. The slider will have a range of 0 to 10.
But they can be overruled by entering a higher value in the text box.
Enumeration modifiers
Enumeration modifiers are modifiers in which the user can select an option from a dropdown list. This modifier is applied to a period
of time.
An example of the use of the enumeration modifier is the rain snow modifier from the NWS. In this modifier the forecaster can determine
the precipitation type in the snow17-model. Only two options are available: rain and snow. If the forecaster chooses the option rain then
a value of 1 is set into the time series, if the option snow is chosen then the value 2 is set into the time series at the specified time.
The modifier is applied to an empty time series and used as an input to the model. The model knows that if value 1 is set into the
time series the user has chosen the option rain and that if the value is 2 the option snow was chosen.
Display
Schema
timeseries
descriptionEnumeration
Define the text value in the display which is shown before the dropdown list
enumeration
Define the list of options available in the dropdown list and their associated values which will be placed into
the time series.
defaultStartTime
start run
time zero
defaultEndTime
time zero
end run
Configuration example
Below is an example of the configuration of an enumeration modifier.
<timeSeries>
<parameterId>RAINSNOW</parameterId>
</timeSeries>
<descriptionEnumeration>choose precipitation:</descriptionEnumeration>
<enumeration>
<item value="1" text="rain"/>
<item value="2" text="snow"/>
</enumeration>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
This modifier is applied to every time series which has parameter id RAINSNOW because a filter is defined with only the parameterId
RAINSNOW.
The text of the label in front of the dropdownlist is configurable. The items in the dropdownlist are also configurable. For this modifier the
forecaster can choose between the options rain and snow. If snow is selected a value of 2 is set into the time series for the selected period.
If rain is selected a value of 1 is written into the time series.
The numbers are treated as flags by the model to which the time series is passed.
Time series modifier
The time series modifier is a modifier which allows the forecaster to edit a time series by selecting points in a graph or by
changing values in a table. In most applications of this modifier the forecaster is directly editing a time series which is used
by the models. For example it might be used to directly edit the precipitation. This is contrary to how for example the single value modifier
WECHNG is used. That modifier edits an empty time series which is then read by the snow17-model, which modifies its state based on the input
from this modifier. A time series modifier always has a start and end date. Optionally (if configured) a valid time is available.
The forecaster can edit this time series by making changes in the table or in the graph. The changes in the graph are made by clicking in the graph.
When the user clicks from left to right the values between the points are interpolated.
When the user clicks from right to left only the newly added or changed points are adjusted but no interpolation will be done between
the last two points. When more than one time series is shown in the display it is possible to make a selection of which time series should be edited
when making changes by clicking in the graph.
The time series which should be changed can be selected by clicking on the legend of that time series in the graph.
Besides editing the time series by editing values in the graph or table, an operation type can be selected from the dropdown box:
add
subtract
multiply
divide
replace
missing
ignore time series
time series
When one of the options add, subtract, multiply, divide or replace is chosen, a text box in which a value can be entered appears next to the operation type dropdown box.
The options add, subtract, multiply, divide and replace are self-explanatory. They add, subtract, multiply, divide or replace the time series with the specified value over the specified period of time.
The option missing replaces the values in the time series with missing values over the specified period of time; the option ignore time series sets the values over the specified period of time to unreliable.
The last option, time series, is the default option which will be selected after startup of this modifier; this option allows the forecaster to freely edit the time series.
An example of the use of the time series modifier is the ROCHNG-modifier which is used by NWS to modify the runoff time series.
Display
Schema
timeSeries
This tag can be used to identify to which timeseries this modifier can be applied.
resolveInWorkflow
The timeSeries tag defines a filter which determines which time series can be modified with this modifier. If the tag resolveInWorkflow is set, the modifier can be applied to all time series in the current workflow to which the defined time series filter applies. In an IFD environment the current workflow is the workflow associated with the selected topology node.
resolveInPlots
This tag can only be used in IFD environments. If this tag is enabled, the time series filter is also applied to all time series in the plots associated with the currently selected topology node.
editInPlots
It is possible to create a time series modifier in the plot displays. This can be done by selecting a time series by selecting a legend. After selection the time series can be modified by graphically editing the time series or by changing values in the graph. This feature can be disabled by setting this option to false.
createContinousModifiers
If a modifier is created in the graph, by default one modifier will be created. However, when the option createContinousModifiers is disabled, one modifier will be created for every continuous range of modifications made. For example if the forecaster changes a 6-hour time series at 00z and at 12z, but not at 06z, by default this will result in a single modifier, but when this option is disabled two modifiers will be created, one for each continuous range of changes. In this case there is a change at 00z and one at 12z, therefore two modifiers will be created.
Configuration example
<timeSeries>
<moduleInstanceSetId>SACSMA_Forecast</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>INFW</parameterId>
<locationSetId>Gages_Catchments</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<ensembleId>QPF</ensembleId>
</timeSeries>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
<resolveInWorkflow>true</resolveInWorkflow>
<resolveInPlots>false</resolveInPlots>
This modifier can be applied to the time series identified in the timeSeries tag.
The modifier will have a start time equal to the start of the run and will end at the end of the run.
The option resolveInWorkflow is set to true and the option resolveInPlots is set to false.
This means that the IFD will search for time series which might be modified in the workflow of the selected node, but it will not search in the time series which are displayed in the plots for this node.
Ignore time series modifiers
This modifier sets all the values in a time series to unreliable over a period, so the data will not be used in the models, but the original values will still be displayed.
The display is very similar to the display used for the time series modifier, however the dropdown box is disabled and the option ignore time series is enabled.
The forecaster can only edit the start and end dates of the period in which the time series will be set to unreliable.
In the Modifiers Display table the unreliable values in the modified time series are marked yellow.
An example of the use of this modifier is the modifier IGNORETS. This modifier is used by the NWS to arrange that the model RESSNGL or the transformation AdjustQ ignores certain types of data.
By setting the correct filter in the configuration, only certain input time series of RESSNGL or AdjustQ can be ignored by using the modifier.
Display
Schema
timeSeries
This tag can be used to identify to which time series this modifier can be applied.
defaultStartTime
Possible options are start run and time zero.
defaultEndTime
Possible options are time zero and end run.
Configuration example
<timeSeries>
<moduleInstanceSetId>RESSNGL_Forecast</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>PELV</parameterId>
<locationSetId>Reservoirs</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
</timeSeries>
<defaultStartTime>start run</defaultStartTime>
<defaultEndTime>end run</defaultEndTime>
Compound modifier
The compound modifier can be used to modify a set of time series with slider bars. Each slider shows a reference value in blue. If no modification is made, the value of the slider will be equal to the reference value. If a modification is made, the slider will always be equal to the value of the modifier. To indicate that a modification was made, the text box is coloured yellow.
An example of the use of the compound modifier is the sacco-modifier. This modifier is used to modify the state of the Sacramento model. Each slider represents a state parameter. The current value is shown in blue; the slider is equal to the current value of the model or, if the state parameter is changed, it will be equal to the modification.
Display
Schema
slider
For each slider the time series which holds the reference values should be configured, as well as the time series which should contain the modified value. Each slider also has a maximum value. This maximum is retrieved from the module parameter file of the model. The tag maximumAllowedValueParameterId identifies which parameter should be used to identify the maximum.
maximumAllowedValueParameterId
The maximum of the slider can be derived from the module parameter file by identifying the parameterId which holds the value of the maximum.
hardLimits
It is also possible to define the minimum and maximum of the modifications by hard coding them in the configuration.
defaultTime
Default date of the modifier. Possible options are start run and time zero.
Configuration example
Below is an example of the configuration of a compound modifier. In this example only part of the sacco configuration is shown: the configuration of one of the five sliders.
<slider>
<currentTimeSeries>
<moduleInstanceSetId>SACSMA_Forecast</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>UZTWC</parameterId>
<locationSetId>Gages_Catchments</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
</currentTimeSeries>
<modifiedTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>UZTWC</parameterId>
<locationSetId>Gages_Catchments</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
</modifiedTimeSeries>
<maximumAllowedValueParameterId>UZTWM</maximumAllowedValueParameterId>
</slider>
<defaultTime>start run</defaultTime>
Missing value modifiers
The missing value modifier can be used to set the values in a time series to missing over a period of time. The user can only define the period of time over which this modifier is active.
The panel which is used for this modifier is very similar to the panel of the time series modifier. The dropdown box which is used to select an operation type is, however, disabled and set to the type Missing.
An example of the use of this modifier is the SETMSNG-modifier which is used by the NWS to set the values of certain time series to missing.
Display
Schema
timeseries
This tag can be used to identify to which timeseries this modifier should be applied.
defaultStartTime
The default start time of the modifier. The available options are startrun and time zero.
offsetDefaultStartTime
The offset of the start time compared to the option defined in defaultStartTime. For example when an offset of 1 day is configured in this option and the defaultStartTime is set to time zero, the default start time of the modifier will be set to time zero plus 1 day.
defaultEndTime
The default end time of the modifier. The available options are time zero and end run.
offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime.
expiryTime
This tag can be used to overrule the default expiry time.
resolveInWorkflow
The timeSeries tag defines a filter which determines which time series can be modified with this modifier. If the tag resolveInWorkflow is set, the modifier can be applied to all time series in the current workflow to which the defined time series filter applies. In an IFD environment the current workflow is the workflow associated with the selected topology node.
resolveInPlots
This tag can only be used in IFD environments. If this tag is enabled, the time series filter is also applied to all time series in the plots associated with the currently selected topology node.
Configuration example
<timeSeries>
<parameterId>QIN</parameterId>
</timeSeries>
<defaultStartTime>start run</defaultStartTime>
<offsetDefaultStartTime unit="day" multiplier="1"/>
<defaultEndTime>time zero</defaultEndTime>
<offsetDefaultEndTime unit="day" multiplier="100"/>
<expiryTime unit="day" multiplier="100"/>
<resolveInWorkflow>false</resolveInWorkflow>
<resolveInPlots>true</resolveInPlots>
This missing value modifier can only be applied to time series which have the parameterId QIN, because the timeSeries tag defines a time series filter with parameterId QIN.
When a missing value modifier is created from the modifiers panel, by default the start time of the modifier will be equal to the start of the run plus 1 day.
The end of the modifier will default to time zero plus 100 days.
The tag resolveInWorkflow is set to false and the resolveInPlots tag is set to true, which means that the modifier can be applied to all time series in the plots of the node, but will not be applied to the time series identified in the workflow of the node. In this example no specific time series set is defined to which this modifier should be applied, which means it can be applied to all time series defined in the plots of a node.
Switch option modifiers
This modifier allows the forecaster to choose one of the configured time series. If the chosen time series was defined as a timeValue time series, the forecaster will also have the option to enter a value. If the time series was defined as a boolean time series, the forecaster cannot enter a value and the text box for the value will be greyed out.
An example of the use of this modifier is the SSARREG-modifier of the NWS. This modifier is used to set the regulation options for a basin. With the radio button a regulation option can be selected. For most regulation options a value can be entered; the option FREEFLOW however can only be switched on.
For each date at a model time step one of the options can be selected with the radio button field. In the value field the forecaster can enter a value or, if the option is only a switch-on option, the value field is blocked. The add icon automatically adds a new entry to the table. By default the new entry will have the date of the current row plus one model time step. The delete icon deletes all the selected rows. The entries are always in sequence.
Schema
Configuration example
Below is an example of a switch option modifier. For each regulation option available in the display a time series should be defined. The parameterId of the configured time series will be used as the name of the regulation option in the column regulation.
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETH</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETQ</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETS</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETDH</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETDQ</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<booleanTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>FREEFLOW</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</booleanTimeSeries>
<timeValueTimeSeries>
<moduleInstanceId>ExportMODS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SETDS</parameterId>
<qualifierId>US</qualifierId>
<locationSetId>ImportIHFSDB</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<ensembleId>main</ensembleId>
</timeValueTimeSeries>
<startTime>start run</startTime>
<effectiveDate/>
Option modifiers
This modifier is very similar to the switch option modifier. However, this modifier doesn't allow an option to be defined per date. It only allows one option to be defined, which will always be valid after creation of the modifier.
An example of the use of this modifier is the rainfall_switch of the seqwater system. This option allows the forecaster to choose a forecast type (user defined forecast, no rainfall forecast or use the rainfall forecast). Secondly it is also possible to choose which rainfall observations to use in the forecast.
Display
Below is an example of an option modifier. In this case the example shows the rainfall switch modifier.
Schema
timeValueTimeSeries
First the timeValueTimeSeries are defined. The parameterId of the defined time series will be used as an identifier in the radio button which can be used to select an option. When an option is selected which is defined as a timeValue time series, the user can also define a value.
booleanTimeSeries
This option allows option types to be defined which can only be selected by the user, but which don't offer the possibility to enter an additional value.
expiryTime
This option can be used to define an expiry time for this modifier which overrules the default expiry time.
Configuration example
<booleanTimeSeries>
<parameterId>Grid</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>Stations</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>SeqGrid</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>SeqStations</parameterId>
<qualifierId>observed</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>Forecast ON</parameterId>
<qualifierId>forecast</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>Forecast OFF</parameterId>
<qualifierId>forecast</qualifierId>
</booleanTimeSeries>
<booleanTimeSeries>
<parameterId>User ON</parameterId>
<qualifierId>forecast</qualifierId>
</booleanTimeSeries>
<expiryTime unit="day" multiplier="1000"/>
As an example the configuration of the option modifier is shown. This modifier has 7 boolean time series defined.
This means that 7 options will be available in the display. The options will be split into two groups based on the qualifierId. In this example this means that there will be an observed group with Grid, Stations, SeqGrid and SeqStations, and a forecast group with Forecast ON, Forecast OFF and User ON.
Change weight modifiers
This modifier is used in combination with the transformation mergeWeighted. This transformation has two input time series and creates an output time series by taking the weighted average of both input time series. To be able to use this modifier the transformation must use a module parameter file to define its parameters.
Examples of the use of this modifier are BLENDTEMP and BLENDPREC. Both modifiers are used by the NWS.
BLENDTEMP is used to blend the observed and simulated temperature time series.
BLENDPREC is used to blend the observed and simulated precipitation time series.
Both blend operations are done as part of the preprocessing step prior to creating a forecast.
Display
The user can add rows with the add button and delete rows by selecting a row and pressing the delete button.
The first column of the table is used to define the value of the offset from time zero and the second column defines the time unit. The combination of columns 1 and 2 defines the total offset from time zero. The third column is calculated from the first two.
In the display example the first row indicates that at an offset of 1 day the weight of the first time series is 1. This means that the weight of the second time series is 0. The output of the transformation mergeWeighted will be equal to the first time series at 1 day after time zero. At 5 days after time zero the weight of the first time series will be 0; the weight of the second time series will therefore be 1. The output at this time step will be equal to the second time series. Between both time steps the weight is determined by linear interpolation: at 3 days after time zero, for example, both series would be weighted with 0.5.
Schema
Configuration example
To be able to use this modifier the only thing the configuration has to do is declare that this modifier can be used, by defining it in the [Link] and assigning an id and a name to it.
It is possible to define more than one changeWeightModifier. By disabling certain changeWeightModifiers in a workflow it can be arranged that only one type can be used in that workflow. This makes it possible to give meaningful names to the modifiers, so that the user can identify from the name what the modifier will do.
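As an indication, the declaration might be as minimal as the sketch below. It assumes that the modifier type is declared as a single element carrying an id and a name attribute, in line with the id/name convention used for the other modifier types in this chapter; the id and name values are purely illustrative.
<!-- illustrative id and name; according to the description above no further content is required -->
<changeWeightModifier id="blendtemp" name="BLENDTEMP"/>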
Blending steps modifiers
The blending steps modifier is a modifier which can only be used in combination with the transformation AdjustQ. Secondly, the AdjustQ transformation should use a module parameter file to define its parameters to be able to use this modifier. This transformation uses observed discharges and simulated discharges to create an output time series.
One of the parameters of the AdjustQ transformation is the blending steps. This parameter determines in how many steps the blend from the observed time series to the simulated time series is done.
The blending steps modifier is used to modify this parameter. The modifier doesn't have a start and/or end time and is always valid. The last applied blending steps modifier is always used. Only one blending steps modifier can be defined in a FEWS configuration.
An example of the blending steps modifier is the CHGBLEND-modifier. This modifier is used by the NWS to modify the blending steps of the AdjustQ operation.
Below is an example of a blending steps modifier. The forecaster can enter the value in the text box and/or change it with the up and down arrows
next to the text box.
Schema
The transformation AdjustQ creates a discharge time series by using observed discharge time series and simulated discharge time series. When this modifier is applied the observed time series are ignored and the output will be equal to the simulated discharge time series. This modifier can, like the blending steps modifier, only be used in combination with the AdjustQ transformation.
Secondly, the AdjustQ transformation should use a module parameter file to define its parameters to be able to use this modifier. The module parameter file should define the (optional) parameter disableAdjustment.
Below is an example of such a module parameter file.
<group id="default">
<parameter id="blendingSteps">
<intValue>1</intValue>
</parameter>
<parameter id="interpolationType">
<stringValue>difference</stringValue>
</parameter>
<parameter id="disableAdjustment">
<boolValue>false</boolValue>
</parameter>
</group>
A typical use of this modifier is the espadjq modifier, which is used by the NWS to disable the AdjustQ operation in the forecast.
Display
Below is an example of the display for this modifier. The forecaster cannot select a start and/or end date. This means that if this modifier is active the AdjustQ operation is disabled.
Schema
Configuration example
The configurator only has to configure the id and the name of the modifier. By doing this FEWS knows that it is allowed to use this modifier for each AdjustQ operation which uses a module parameter file and has the tag disableAdjustment defined in its module parameter file.
Sample years modifiers
The transformation sample historic creates ensembles based on historic time series. The sample years modifier can only be used in combination with this transformation. To be able to use this modifier the transformation sample historic should use a module parameter file to define its configuration options.
An example of the use of this modifier is the modifier HistoricWaterYears, which is in use by the NWS. It is used by the forecasters to overrule the default sample years in the transformation.
Display
The forecaster can modify the default sample years by changing the start year and end year in the display.
Schema
Configuration example
Module parameter modifiers
The module parameter modifier is a generic module parameter file editor which can be used to modify any module parameter file.
It is possible to limit the number of module parameter files which can be modified by applying a filter. It is also possible to have more than one module parameter modifier in a FEWS region. By giving the modifiers a different name and a different filter it is possible to define two modifiers which each modify a certain parameter of a certain module parameter file.
An example of the use of this modifier is the BASEFLOW-modifier of the NWS. This modifier modifies the BASEFLOW parameter of the UNITHG model.
Display
Schema
filter
Define a filter based on parameterIds. This filter will be used to determine which module parameter files can be edited with this modifier, and which part of each file is shown and editable.
Configuration example
Below is an example of the configuration of a module parameter modifier.
The filter tag identifies which module parameter files can be modified. In the example below every module parameter file with the tag CONSTANT_BASE_FLOW can be modified.
The filter is also used to determine which part of the module parameter file can be modified. In the example below only the module parameters with id CONSTANT_BASE_FLOW are shown in the modifiers display and are editable.
<filter>
<moduleParameterId>CONSTANT_BASE_FLOW</moduleParameterId>
</filter>
<defaultValidTime/>
This modifier can be used to change the ordinates of the module parameter file of the unit hydrograph model.
The ordinates can be changed in the table or in the graph. When the user presses the apply button the ordinates are adjusted by applying a volume correction.
The volume correction ensures that the volume of the unit hydrograph before the modifier is applied is the same as the volume of the unit hydrograph after the modifier is applied.
Display
Schema
defaultStartTime
The default start time of the modifier. The available options are startrun and time zero.
defaultEndTime
The default end time of the modifier. The available options are time zero and end run.
offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime. For example when the default end time of the modifier is set to end run and an offset of 100 days is defined, the default end time of the modifier will be set to end run plus 100 days.
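As a small illustration, using the same tag syntax as in the missing value modifier example earlier, the combination described above could be written as follows (the 100-day offset is only an example value):
<defaultEndTime>end run</defaultEndTime>
<offsetDefaultEndTime unit="day" multiplier="100"/>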
defaultValidTime
If this option is configured, a valid time can be chosen for this modifier. The valid time always defaults to time zero.
Configuration example
Reverse order modifiers
This modifier can be used to reverse the data hierarchy of the merge simple transformation. When this modifier is active on the transformation the data hierarchy is reversed.
An example of the use of this modifier is the switchts-modifier of the NWS. With this modifier the forecasters temporarily favour one time series over the other, because the time series which is normally used as the primary time series is considered to be less reliable.
Display
Below is an example of the display of a reverse order modifier. The display is empty. The forecaster can only set a start and end time of the modifier.
Schema
defaultStartTime
The default start time of the modifier. The available options are startrun and time zero.
defaultEndTime
The default end time of the modifier. The available options are time zero and end run.
offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime.
defaultValidTime
If this option is configured, a valid time can be chosen for this modifier. The valid time always defaults to time zero.
Configuration example
Rating curve modifiers
The rating curve can be modified by shifting the whole rating curve by a constant value or by multiplying it with a factor. The constant value or the multiplication factor is calculated by the following procedure.
This type of modifier is in use by NCRFC, one of the RFCs of the NWS. They use this modifier to temporarily modify the rating curve. When new rating curves become available they are imported into their system.
Display
An example of the display of this modifier is shown below. The forecaster can define a stage/discharge pair by entering a pair in the text boxes. However it is also possible to double click on a point in the graph to define a pair. From the defined stage/discharge pair the constant value or the multiplication factor is automatically derived and displayed beside the given stage/discharge pair. The radio button at the top of the display can be used to choose between shifting the rating curve by the constant value and multiplying it by the factor.
Schema
defaultStartTime
The default start time of the modifier. The available options are startrun and time zero.
offsetDefaultStartTime
The offset of the start time compared to the option defined in defaultStartTime. For example when an offset of 1 day is configured in this option and the defaultStartTime is set to time zero, the default start time of the modifier will be set to time zero plus 1 day.
defaultEndTime
The default end time of the modifier. The available options are time zero and end run.
offsetDefaultEndTime
The offset of the end time compared to the option defined in defaultEndTime.
Configuration example
26 TimeSteps
Function: Configure predefined time steps for a FEWS environment
Where to Use? To define verbose time steps or to define yearly or monthly time steps
Why to Use? Yearly and monthly time steps can only be configured in the [Link]. For verbose time steps it might be useful to define them once in the [Link] and refer to them from other configuration files.
Description: Definition of timesteps which can be referenced from other configuration files
Available since:
Contents
Overview
Configuration
Schema
Overview
The [Link] can be used to configure time steps. This file is useful to define verbose time steps once and to refer to these definitions from other configuration files.
Configuration
When available on the file system, the name of the XML file is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Schema
timeStep
Attributes;
- id: Unique id of the time step. This id should be used when referencing this definition from other configuration files.
[Link]
yearlyTimeStep
A timeStep that defines a pattern of dates in a year. This pattern will be repeated every year.
Each date in the year can have a different aggregation period (season).
The start of the aggregation period is exclusive and the end of the aggregation period is inclusive.
If more than four dates in a year are required, then please use the monthDays option in the timeStep element instead of this yearlyTimeStep
To define a yearly time step an id should be configured. Secondly the monthDays of the time step should be configured.
In this example the yearly time step has 4 monthDays. The start attribute defines the start of the aggregation period and the end tag defines the end of the aggregation period. The value defines the value of the monthDay itself.
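A minimal sketch of such a definition is given below. The element name monthDay and the month-day notation used for the start, end and value attributes are assumptions based on the description above and should be checked against the TimeSteps schema; the dates themselves are only illustrative.
<yearlyTimeStep id="seasons">
<!-- four monthDays; start is exclusive, end is inclusive, value is the date of the monthDay itself (notation assumed) -->
<monthDay start="12-31" end="03-31" value="03-31"/>
<monthDay start="03-31" end="06-30" value="06-30"/>
<monthDay start="06-30" end="09-30" value="09-30"/>
<monthDay start="09-30" end="12-31" value="12-31"/>
</yearlyTimeStep>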
A timeStep that defines a pattern of days in a month. This pattern will be repeated every month. Each day in the month can have a different
aggregation period. The start of the aggregation period is exclusive and the end of the aggregation period is inclusive.
Introduction
All functionality used by DELFT-FEWS in processing dynamic data and running external forecasting modules is configured in a module instance.
These are then executed in a logical sequence as defined in a workflow.
A variety of modules is available in DELFT-FEWS to provide specific functionality. Examples include interpolation, running external modules,
data import etc. The modules available are defined in the ModuleDescriptors in the System configuration. This defines the Java classes used to
run each module, and assigns a recognizable name for that module (e.g. Transformation). These Java classes implement the workflow plug-in
interface to DELFT-FEWS. The list of available modules can be extended through adding classes implementing this plug-in interface.
To carry out a specific piece of data processing, an instance of a module is configured. This instance specifies the input data, the output data and the required steps in processing the data. Each module instance is given a unique name with which it is identified in the module instance section of the configuration. To link an instance of a module to the type of module available, the module instance is registered in the ModuleInstanceDescriptors in the Regional Configuration section.
Interpolation Module
Transformation Module
Import Module
Export Module
General Adapter Module
Lookup Table Module
Correlation Module
Error Correction Module
Report Module
Report Export Module
Performance Indicator Module
Many of the configuration items required will include references to strings. To avoid duplication, a tag can be defined in the
[Link] file in the root configuration and the tag name used in the XML file (see also System Configuration).
Contents
01 Interpolation Module
02 Transformation Module
03 Import Module
04 Export modules
05 General Adapter Module
06 Lookup Table Module
07 Correlation Module
08 Error Correction Module (ARMA)
09 Report Module
10 Performance Indicator Module
11 Amalgamate Import Data Module
12 Archive Module
13 Rolling Barrel Module
14 Support Location Module
15 Scenario Module
16 Pcraster Transformation (pcrTransformation)
17 WorkflowLooprunner
18 Mass-balances
19 Rating curves
20 Transformation Module (Improved schema)
21 Secondary Validation
22 forecastLengthEstimator
23 Decision Module
24. ImportAmalgamate
01 Interpolation Module
What [Link]
Two methods of interpolation are available;
Serial interpolation
In serial interpolation mode, interpolation is done to fill any gaps in a time series. The interpolation module will only consider the time series itself
in filling these gaps. Interpolation methods that can be used are;
All these methods can be configured to only fill gaps that are not longer than a given duration. Essential to the understanding of the Interpolation
module is that the module does not have the capability to identify gaps due to potentially unreliable data in a time series. It will only provide an
alternative value for those data points of which the quality flag is set to Unreliable. The validation module can be configured to identify unreliable
data and set quality flags as appropriate.
Spatial Interpolation
In spatial interpolation mode, the interpolation can be either applied to fill gaps in time series, or to create a new time series for a location using
data from other (spatially distributed) locations. Spatial interpolation can also be applied for sampling scalar time series from grid time series, for
re-sampling grids, or for creating grids from time series data. Different methods of spatial interpolation are available;
When available as configuration on the file system, the name of the XML file for configuring an instance of the interpolation module called for
example InterpolateHBV_Forecast may be:
interpolationSet
Root element for the definition of an interpolation step. Multiple entries may exist.
Attributes;
interpolationId : Id of the interpolation defined. Used for reference purposes only. This Id will be included in log messages generated.
serialInterpolation
spatialInterpolation
timeSeriesInputSet
Input time series set. Note that when the interpolation module is used to fill gaps in time series the input time series set is the same as the output
time series set. The time series sets may include either a single location or a locationSet. Note that the latter may not always be possible when
using the "default" interpolation option, as the default may be location specific.
outputSet
Output time series set. Note that when the interpolation module is used to fill gaps in time series the input time series set is the same as the
output time series set. Identification is only required when the series generation option is used in spatial interpolation. The locations defined in this
timeSeriesSet, and their geographical attributes, determine the locations of the series generated.
Serial interpolation
The serial interpolation option is used to define interpolation options for filling gaps in the time series identified. Multiple methods of interpolation
may be identified. These will be executed in order of definition for the same time series (e.g. first linear interpolation, then an extrapolation and
finally filling remaining gaps with default values).
Figure 55 Elements for defining serial interpolation options in the Interpolation module configuration.
serialInterpolationOption
gapLength
Maximum gap length of unreliable data in seconds which will be filled using the interpolation option defined. If the gap is longer, then none of the
values will be replaced.
defaultValue
Spatial interpolation
The spatial interpolation option is used to define interpolation options for filling gaps in the time series identified using available data from other
(spatially distributed) locations. This method can be used to either fill gaps, or to create a new time series.
Figure 56 Elements for defining spatial interpolation options in the Interpolation module configuration.
interpolationOption
inversedistance ; for inverse distance weighted interpolation between available values at spatially distributed locations.
bilinear ; for bilinear interpolation between available values at spatially distributed locations.
kriging ; for interpolation using Kriging between available values at spatially distributed locations.
gridcellavaraging; for interpolation of time series based on averaging grid cells (used for example for establishing catchment averages
where the catchment size is much larger than the grid cell size).
Closest distance; for interpolation of time series based on the closest distance between available values at spatially distributed locations.
An extra option is to interpolate from a grid to a longitudinal profile.
interpolationType
Specify if spatial interpolation is used for filling gaps in series or for generating a new series. Note in the latter case the output variable will need to
be defined. This also defines if the output variable is a grid time series or a scalar time series. The available options are:
valueOption
Option to determine how input values are used. Enumeration of available options is;
variogram
variogram:type
exponential ;
Gaussian ;
Linear ;
Spherical ;
power ;
variogram:nugget
variogram:slope
variogram:sill
variogram:range
numberOfStations
Number of stations to consider in spatial interpolation. Used in Inverse distance when taking a limited number of stations into account. The
nearest stations will be used in preference.
regressionElevation
minimumValue
Minimum value of the output data. For interpolation of rainfall data this should be set to zero. Numerically the interpolation may produce invalid
(negative) data.
distanceParameters
Distance parameters for computing actual distances between locations when projection is geographical (WGS1984). Four parameters are
required.
debug
Optional debug level. Spatial interpolation is implemented through a DLL. This can produce a log file, depending on level specified. A setting of 1
is the lowest level, a setting of 4 is highest (can produce very extensive log files).
coordinateFile
Coordinate file allocating grid cells to be considered per location. This coordinate file follows a specific format. Locations to be interpolated to are
indicated through their spatial location. After each location a list of grid cells (m,n coordinates) to be considered is included.
coordinateSystem
Indicates the coordinate system: if it is longitude-latitude this is defined as 1. If not, 0 is used and distances are calculated in metres.
inverseDistancePower
02 Transformation Module
What [Link]
Transformation Module Configuration
The Transformation module is a general-purpose module that allows for generic transformation and manipulation of time series data. The module
may be configured to provide for simple arithmetic manipulation, time interval transformation, shifting the series in time etc, as well as for applying
specific hydro-meteorological transformation such as stage discharge relationships etc.
The Transformation module allows for the manipulation and transformation of one or more time series. The utility may be configured to provide
for;
Manipulation of one or more series using a standard library of arithmetic operators/functions (enumerated);
Addition, subtraction, division, multiplication
Power function, exponential function
Hydro-meteorological functions like:
Deriving discharges from stages
Compute potential evaporation
Calculating weighted catchment average rainfall
Shifting series in time
Time interval conversion:
Aggregation
Dis-aggregation
Converting non-equidistant to equidistant series
Creating astronomical tide series from harmonic components
Handling of typical profiles
Data hierarchy
Selection of (tidal) peaks
statistics
When available as configuration on the file system, the name of the XML file for configuring an instance of the transformation module called for
example TransformHBV_Inputs may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
transformationSet
Root element for the definition of a transformation (processing an input to an output). Multiple entries may exist.
Attributes;
transformationId : Id of the transformation defined. Used for reference purposes only. This Id will be included in log messages generated.
inputVariable
Definition of the input variables to be used in transformation. This may either be a time series set, a typical profile or a set of (harmonic)
components. The InputVariable is assigned an ID. This ID is used later in the transformation functions as a reference to the data.
Attributes;
timeSeriesSet
timeStep
Attributes;
relativeViewPeriod
Relative view period of the typical profile to create. If this is defined and the time span indicated is longer than the typical profile data provided,
then the profile data will be repeated until the required time span is filled. If the optional element is not provided then the typical profile data will be
used only once.
data
Data entered to define the typical profile. Data can be entered in different ways. The typical profile can be defined as a series of values at the
requested time step, inserted at the start of the series, or it can be mapped to specific time values (e.g. setting a profile value to hold at 03:15 of
every day). Which of these is used depends on the attributes defined.
Attributes;
time : Attribute value indicating the value entered is valid for a specific time, irrespective of the date. The date value is added at run time. The string has the format "[hour]:[minute]:[second]". For example "[Link]".
timeZone
Optional specification of the time zone for the data entered (see timeZone specification).
timeZone:timeZoneOffset
The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)
timeZone:timeZoneName
Enumeration of supported time zones. See appendix B for list of supported time zones.
arithmeticFunction
Root element for defining a transformation as an arithmetic function (see next section for details).
hydroMeteoFunction
ruleBasedTransformation
Root element for defining a rule based transformation (see next section for details on rules).
Attributes;
aggregate
Root element for defining a time aggregation transformation (rules are discussed below)
Attributes;
disaggregate
Root element for defining a time dis-aggregation transformation (rules are discussed below)
Attributes;
nonequidistantToEquidistant
Root element for defining the transformation of a non-equidistant time series to an equidistant time series (rules are discussed below).
Attributes;
zero
missing
linearinterpolated
equaltolast
Statistics
Season: the statistics transformation can also be carried out for a specific season which is defined by a start and end date. If multiple seasons
are specified, then the statistics transformation will be carried out separately for each specified season. A warning will be given when seasons
overlap in time.
Function:
available functions:
max
min
sum
count
mean
median
standardDeviation
percentileExceedence
percentileNonExceedence
quartile
skewness
kurtosis
variance
rsquared
rootMeanSquareError
isBlockFunction: if true, the statistical parameters are calculated for each time window defined by the time step of the output time series, e.g. time step year leads to yearly statistical parameters. If false and the output time series time step is set to nonequidistant, the statistical parameters are calculated for the relative view period (one value for the whole period) or for the individual season if applied.
inputVariableId
outputVariableId
value: if function percentileExceedence or percentileNonExceedence is chosen, the desired percentile has to be defined, e.g. 75-th
percentile => value="75"
ignoreMissing: if true, missing values in the input time series are not taken into account in the statistical calculation.
seasonal: this option is only relevant when using seasons. If true (default), then one result value per season per year is returned. If false,
then for each season only one (combined) result value is returned. For example when seasonal is false, the month January is specified
as a season, the input time series contains data for a period of ten years and the function max is specified, then the result will be the
maximum of all values in January in all ten years. Note: if a specific season (e.g. January 2006) is not fully contained within the input time
series, then this specific season is not used in the calculations. For example if the month January is specified as a season and the input
time series contains only data from 15 January 2006 to 1 March 2008, then only January 2007 and January 2008 will be used in the
calculations. In this case January 2006 will not be used in the calculations.
Through definition of an arithmetic function, a user defined equation can be applied in transforming a set of input data to a set of output data. Any
number of inputs may be defined, and used in the user defined function. Each input variable is identified by its Id, as this is used when configuring the
function. The function is written using general mathematical operators. A function parser is used in evaluating the functions (per time step) and
returning results. These are again assigned to variables which can be linked to output time series through the variableId.
Rather than use a userDefinedFunction, a special function can also be selected from a list of predefined hydroMeteoFunctions. When selected
this will pose requirements on other settings.
Transformations may be applied in segments, with different functions or different parameters used for each segment. A segment is defined as
being valid for a range of values, identified in one of the input variables (see example below).
Figure 60 Elements of the Arithmetic section of the transformation module configuration
segments
Root element for defining segments. When used this must include the input variable Id used to determine segments as an attribute.
Attributes;
segment
Root element for definition of a segment. At least one segment must be included.
limitLower
Lower limit of the segment. Function defined will be applied at a given time step only if value at that time step in the variable defined as
limitVariable is above or equal to this value.
limitUpper
Upper limit of the segment. Function defined will be applied at a given time step only if value at that time step in the variable defined as
limitVariable is below this value (below or equal only for the highest segment).
functionType
Element used only when defining a predefined hydroMeteoFunction. Depending on selected function, specific requirements will hold for defining
input variables and parameters. If a special function is selected then the user defined function element is not defined; Enumeration of available
options is (the most important are discussed below);
userDefinedFunction
Optional specification of a user defined function to be evaluated using the function parser. Only the function need be defined, without the equality
sign. The function is defined as a string and may contain Id's of inputSeries, names of variables and constants defined, and mathematical
operators
Operators offered
scalar series: +, -, /, *, ^, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh, log, ln, exp, sqrt, abs, pow, min, max,
minSkipMissings, maxSkipMissings, sumSkipMissings, average
operators for conversion of grid to scalar series: spatialMin, spatialMax, spatialSum, spatialSumSkipMissings, spatialAverage
constant
Allows definition of a constant to be used in the function.
coefficient
Optional element to allow coefficients for use in the function to be defined. These coefficients are allocated an Id for later use in the function defined. For user defined functions specific coefficients need to be defined. Multiple entries may be defined.
Attributes;
tableColumnData
tableColumnData:data
outputVariable
Id of the output variable from the function. This may be saved to the database by associating the Id to an outputVariable.
flag
Optional element to force saving the result data for the segment with a given flag. This may be used for example to force data from a segment as
doubtful. Enumeration is either "unreliable" or "doubtful". if data is reliable the element should not be included.
Stage discharge transformations can be defined using the simpleratingcurve option of the hydroMeteoFunctions. To apply this certain properties
must be defined in each segment.
Coefficient values for coefficientId's "a", "b" and "c" must be defined.
Rating curve formula is Q = a * (H+b) ^c
Input variable Id must be "H"
Output variable Id must be "Q".
limitVariableId must be "H".
Example:
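A minimal sketch of a rating curve segment is shown below. The nesting of the functionType and segments elements is assumed from the element descriptions above and from the rule based transformation example later in this section; the coefficient and limit values are purely illustrative.
<hydroMeteoFunction>
<functionType>simpleratingcurve</functionType>
<segments limitVariableId="H">
<segment>
<limitLower>0.0</limitLower>
<limitUpper>2.5</limitUpper>
<!-- Q = a * (H + b) ^ c ; illustrative coefficient values -->
<coefficient coefficientId="a" value="15.3"/>
<coefficient coefficientId="b" value="0.2"/>
<coefficient coefficientId="c" value="1.8"/>
<outputVariableId>Q</outputVariableId>
</segment>
</segments>
</hydroMeteoFunction>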
For the reverse transformation, from discharge to stage, the requirements are;
Coefficient values for coefficientId's "a", "b" and "c" must be defined.
Input variable Id must be "Q"
Output variable Id must be "H".
limitVariableId must be "Q".
Example:
Catchment average rainfall can be determined by weighting input precipitation time series. The weightedavarege option of the
hydroMeteoFunctions can be applied to include the option of recalculation of weights if one of the input locations is missing. To apply this certain
properties must be defined in each segment.
Additional coefficients may be defined to allow for altitude correction.
Example:
This set of transformations allows temporal aggregation and disaggregation of time series. The time step defined in the input variable and the output variable determines how the time steps are migrated. The configuration need only define the rule followed in aggregation/disaggregation.
Aggregation and disaggregation can only be used to transform between equidistant time steps. A nonequidistant series can be transformed to an
equidistant series using the appropriate element (see above).
Aggregation rules;
Instantaneous : apply instantaneous resampling, i.e. the value at the cardinal time step in the output series is the same as in the input time series at that time step.
accumulative : value in output time series is the accumulated sum of the values of the time steps in the input time series (use for example when aggregating rainfall in mm).
mean : value in output time series is the mean of the values of the time steps in the input time series (use for example when aggregating rain rate in mm/hr).
constant
Disaggregation rules;
Instantaneous : apply linear interpolation, i.e. the value at the cardinal time step in the output series is the same as in the input time series at that time step. Values in between are interpolated.
accumulative : value in output time series is derived as an equal fraction of the value in the input series. The fraction is determined using the ratio of the time steps.
Disaggregateusingweights : value in output time series is a weighted fraction of the input value. Weights are defined as coefficients. These are sub-elements of the disaggregation element. The number of coefficients defined should be equal to the disaggregation ratio (i.e. 24 when disaggregating from day to hour). The coefficient Id's should be numbered 1 to n (a sketch is given after this list).
constant : value in output time series at intermediate time steps is equal to the last available value in the input time series.
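A minimal sketch of a weighted disaggregation is given below. It assumes the rule is selected through a rule attribute on the disaggregate element, analogous to the ruleBasedTransformation element, and that the weights are given as coefficient sub-elements as described above; the weight values are purely illustrative.
<disaggregate rule="disaggregateusingweights">
<!-- four illustrative weights for disaggregating one daily value over four 6-hour steps; coefficient ids run from 1 to n -->
<coefficient coefficientId="1" value="0.10"/>
<coefficient coefficientId="2" value="0.40"/>
<coefficient coefficientId="3" value="0.30"/>
<coefficient coefficientId="4" value="0.20"/>
</disaggregate>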
Rules for mapping non-equidistant time series to equidistant time series
zero : value in output time series is zero if time values do not coincide
missing : value in output time series is missing if time values do not coincide
linearinterpolated : value in output time series is interpolated linearly between neighbouring values in the input time series
equaltolast : value in output time series is equal to the last available value in the input time series.
The set of rule based transformations is a library of specific data transformation functions. Configuration of the rule based transformation is the
same as in the Arithmetic transformation. However, each rule may have specific requirements on the elements that need to be defined. Many
parameters that affect the transformation will need to be defined as a coefficient, using the appropriate coefficientType definition.
The rule based transformations can be grouped into four main sections;
Set of rules to allow selection of peaks and lows from an input time series.
Enumerations in the rule attribute of the ruleBasedTransformation element;
selectpeakvalues
selectlowvalues
selectpeakvalueswithincertaingap
selectlowvalueswithincertaingap
The first two enumerations will select all peaks or lows in the time series. The second two will select peaks only if there is a defined gap in time between peaks. If not, they are considered to be dependent and only the highest peak of each dependent set will be returned.
Requirements for definitions of peak selections using gaps to define independence are;
A coefficientId "a" must be defined. The coefficientType must be set to "gaplengthinsec". The value attribute defines the length of the
minimum gap in seconds.
A coefficientId "b" must be defined with coefficientType "peaksbeforetimezero". The value attribute defines the maximum number of
peaks to consider before T0.
A coefficientId "c" must be defined with coefficientType "peaksaftertimezero". The value attribute defines the maximum number of peaks
to consider before T0.
A coefficientId "d" must be defined with coefficientType "totalnumberofpeaks". The value must be set to zero.
A coefficientId "e" with coefficientType "skipjustbeforetimezero" indicates how many peaks to skip just before T0.
A coefficientId "f" with coefficientType "skipjustaftertimezero" indicates how many peaks to skip just after T0.
They default to 0.
Example:
<ruleBasedTransformation rule="selectpeakvalueswithincertaingap">
<segments limitVariableId="X1">
<segment>
<coefficient coefficientId="a" coefficientType="gaplengthinsec" value="2700"/>
<coefficient coefficientId="b" coefficientType="peaksbeforetimezero" value="3"/>
<coefficient coefficientId="c" coefficientType="peaksaftertimezero" value="4"/>
<coefficient coefficientId="d" coefficientType="totalnumberofpeaks" value="0"/>
<coefficient coefficientId="e" coefficientType="skipjustbeforetimezero" value="2"/>
<coefficient coefficientId="f" coefficientType="skipjustaftertimezero" value="2"/>
<outputVariableId>Y1</outputVariableId>
</segment>
</segments>
</ruleBasedTransformation>
In this example:
The time between two local maxima (peaks) should be at least 2700 seconds or 45 minutes.
Only the last three peaks before T0 and the first four peaks after T0 are considered.
The last two peaks just before T0 are skipped, leaving only the third last one.
Similarly, the first two peaks just after T0 are skipped, leaving the third and fourth ones.
This section of the rule based transformation can be applied to sample items from an equidistant time series at the time values in a
non-equidistant time series. This may be required when applying transformations to a non-equidistant time series. The values to add will first need
to be resampled to the right time value. An example is when wind and wave information is required at the time of the tidal peaks for entry in a
lookup table.
equitononequidistant
equitononequidistantforinstantaneousseries
equitononequidistantforaccumulativeseries
The first two elements are equivalent. The last will consider accumulations of the input variable up to the time value sampled.
The limitVariableId attribute of the segments element must be the non-equidistant time series which determines the time values at which
the equidistant series is to be sampled.
The userDefinedFunction must contain the equidistant time series to be sampled
The outputVariableId must resolve to a non-equidistant time series.
Example:
Data Hierarchy
This is a simple method to merge overlapping equidistant time series into a single equidistant series. Gaps in the foremost (first) series are filled with
data from the second series if a valid value is available at the current time step; otherwise the gap is filled with data from the third series, and so on until
no more time series are available. Only missing data values and unreliable values are filled. Doubtful values remain in the result series as
doubtful.
In the example above, Series 1 is the most important time series, Series 2 has a lower hierarchy and Series 3 has the lowest hierarchy. The resulting
time series has values from all three series, as shown in the figure above.
Data hierarchy poses no specific requirements on the variables defined; only the Id of the output variable is of importance.
Typical profiles can be defined in the inputVariable as described above. To use a typical profile it must first be mapped to a dynamic time series.
This can then be retrieved in a later configuration of a module for use.
typicalprofiletotimeseries
datatotimeseries
The first type of mapping is used when the typical profile has a concept of date/time (e.g. must be mapped to specific dates or time values). The
second is used when only a series of data is given. The time series is then filled with the first data element given as the first time step of the
relative view period to be created.
Typical profile mapping poses no specific requirements on the variables defined; only the Id of the output variable is of importance.
outputVariable
Definition of the output variables to be written following transformation. See the inputVariable for the attributes and structure. The output variable
can only be a TimeSeriesSet (typical profiles are only used as inputs). The OutputVariable is assigned an ID. This ID must be defined as the
result of the transformation.
03 Import Module
Introduction
The import module allows data from external sources to be imported into DELFT-FEWS. Data may be provided to FEWS in a variety of formats.
The approach taken in the import module is that a class is defined for each of the file formats that can be imported.
Data is imported from specified directories. An attempt is made to import all files in the directories and subdirectories configured. If a file
conforms to the expected format then the data will be imported. If the file does not conform to the expected format, it will not be imported, but will
be moved to a configurable directory with failed import files.
Note that Delft-FEWS can only import the specific data formats that are listed here. Delft-FEWS assumes data types for a
configured import to remain the same over time as Delft-FEWS is usually part of an operational system. This means that it will
not have the flexibility in importing data that for example programs like Matlab and Excel have. Instead, for each new filetype a
dedicated import must be written. However, the list of supported filetypes is ever increasing and adding new imports is fairly
simple.
You can select the files to be imported via the directory (and its subdirectories) where the files live and by means of a file mask,
against which the file names are then matched.
Importing data in the XML format defined by the Environment Agency, UK.
Importing of various data formats (including ASCII formats, png files (e.g. meteosat images), grids and GRIB files).
On importing data, the approach to be used for converting flags, units, locations and parameters can be defined. These conversions are
identified by referring to the appropriate configuration files (see Regional Configuration). When data is imported to an equidistant time series, a
time tolerance may also be defined. If the time recorded is within this tolerance it will be snapped to the cardinal time step in the imported series.
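As an illustration only, the relevant elements could look as follows; the Ids and the tolerance of 5 minutes are invented for this sketch, and the tolerance element follows the form used in the DIVER MON example later in this chapter:
<general>
    ...
    <idMapId>IdImportExample</idMapId>
    <unitConversionsId>ImportUnitConversions</unitConversionsId>
    <flagConversionsId>ImportFlagConversions</flagConversionsId>
    ...
</general>
<tolerance locationSetId="ExampleLocations" timeUnit="minute" unitCount="5" parameterId="H.obs"/>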
When available as configuration on the file system, the name of the XML file for configuring an instance of the import module called for example
ImportRTS may be:
default: flag to indicate that the version is the default configuration (otherwise omitted).
There are similarities between the import module and the General Adapter module as both allow import of data into
DELFT-FEWS from an external database. The philosophy of the two modules is, however, different. In the import module there
is no prior expectation of the availability of data to be imported. Data that is available is imported and the module will not fail if
insufficient data is available. In the General Adapter there are stricter controls on the availability of data, and errors may occur
if insufficient data is available.
Note that two main classes are defined for the import module. One for the specific EA XML time series import and one for the
general time series import (including GRIB imports).
These are defined in the moduleDescriptors in SystemConfiguration. The first is normally referred to as "EAImport", the
second as "TimeSeriesImport"
Available data types
Documented Imports
Available Imports
Please note that new types are added regularly. Most of the imports are custom made for specific file formats.
AHD scalar
ArcWat ArcWat DBF scalar
BC2000 scalar
BFG scalar
BIL grid
COSMO7_COR grid
DSS scalar
DWD-LM grid
DWD-LM2 grid
DWD-GME grid
EKSW scalar
EKSW2005 scalar
EVN scalar
Era15 scalar
FewsDatabase scalar/grid
GermanSnow grid
GHD scalar
GrayscaleImage grid
GRIB2 Newer version of the GRIB format used by meteorological institutes. grid
Imports grib, grib2 and netcdf
External parameter in IdMap should be the parameter number of the grib1-GDS section
(octet 9) for grib1 and the Parameter name (long name, replace ' ' by '_') for grib2
GRIBBASIC grid
GRIBCOSMO GRIB reader to handle ensembles where each member is in a different file. grid
Do not use a filePattern-identifier
hdf4 (not yet available on Linux) grid
hdfSoilMoisture grid
Hims scalar
Hydris scalar
IFKIS scalar
KNMI scalar
LUBW scalar
McIdasArea grid
MeteoFranceAscii scalar
Mosaic scalar
Msw MSW (mfps) csv files, with observed levels and flows in Rijn and Maas scalar
Mosm scalar
NETCDF-CF_GRID import module to import grid data from NetCDF files grid
NETCDF-CF_PROFILE import module to import longitudinal profile data from NetCDF files profile
NETCDF-CF_TIMESERIES import module to import timeseries data from NetCDF files scalar
NetcdfGridDataset import module to import grid data from NetCDF, Grib1 and Grib2 files grid
Nimrod grid
NimrodMultipleDir grid
NTUQUARTER Import NTU datalogger csv like files, multiple columns scalar
PMDSynoptic scalar
PMDTelemetric scalar
RijnlandRTD scalar
SHD scalar
SHEF scalar
SMA scalar
SMAecmwf scalar
SwissRadar grid
Synop scalar
TTRR scalar
WapdaTelemetric scalar
Wsd scalar
HCS
Contents
Overview
Functionality and limitations
Configuring the Import
Flag Conversion
Unit conversions
The file format
Performance
Java Source code
Overview
Imports time series data in a tabular ASCII format from the Australian Bureau of Meteorology (HCS). The files consist of a set of header lines and
then lines with a fixed number of fields. The fields are separated by a comma and the order is fixed. Multiple locations and parameters can be put
in a single file.
The import can read scalar data in any timestep from the HCS files.
The header information in the HCS file is mostly ignored but the timezone information (line 7) is used.
The unit is set during import and can be used to convert data on import using the Unit Conversion functionality.
Comments from the HCS file are also imported.
The HCS data quality flags are set on importing (a flagconversion has to be set up to actually use them, see below).
Configuring the Import
The reader is named HCS which should be configured in the general section of the import. An example configuration is shown below:
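(The following is a minimal sketch; folder names, Ids and the time series set are illustrative and not taken from an operational configuration.)
<import>
    <general>
        <importType>HCS</importType>
        <folder>$IMPORT_FOLDER_HCS$</folder>
        <failedFolder>$IMPORT_FAILED_FOLDER_HCS$</failedFolder>
        <idMapId>IdImportHCS</idMapId>
        <unitConversionsId>ImportUnitConversions</unitConversionsId>
        <flagConversionsId>ImportHCSFlagConversions</flagConversionsId>
        <dataFeedId>HCS</dataFeedId>
    </general>
    <timeSeriesSet>
        <moduleInstanceId>ImportHCS</moduleInstanceId>
        <valueType>scalar</valueType>
        <parameterId>H.obs</parameterId>
        <locationSetId>HCS_Locations</locationSetId>
        <timeSeriesType>external historical</timeSeriesType>
        <timeStep unit="nonequidistant"/>
        <readWriteMode>add originals</readWriteMode>
        <synchLevel>1</synchLevel>
    </timeSeriesSet>
</import>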
An example IdMapping file (that maps the location and parameter Ids) is shown below:
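(Again a minimal sketch; the external location and parameter names are invented for illustration.)
<map externalParameter="Stage" internalParameter="H.obs" externalLocation="403210" internalLocation="HCS_Station1"/>
<map externalParameter="Rainfall" internalParameter="P.obs" externalLocation="403210" internalLocation="HCS_Station1"/>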
Flag Conversion
Flag conversions can be used to convert data quality flags from external sources to Delft-FEWS flags. The BOM HCS format defines the following
quality flags:
1 As observed.
2 As observed with normalised time window, e.g. 9AM or rounded to nearest hour.
5 As observed and validated (Quality Controlled).
To use the flags in Delft-FEWS a flagconversion file has to be set up. A working example is attached to this page. The table below summarizes
the translation used (external flag and description -> Delft-FEWS flag and name, followed by a comment):
2 As observed with normalised time window -> 0 ORIGINAL_RELIABLE: Observed value retrieved from external data source. Value is valid, marked as original reliable as validation is yet to be done.
3 Derived from other observations -> 0 CORRECTED_RELIABLE: The original value was removed and corrected. Correction may be through interpolation or manual editing.
4 Interpolated from other observation events -> 0 CORRECTED_RELIABLE: The original value was removed and corrected. Correction may be through interpolation or manual editing.
5 As observed and validated (quality controlled) -> 0 CORRECTED_RELIABLE: The original value was removed and corrected. Correction may be through interpolation or manual editing.
6 Void (bad) observation -> 6 Missing_Unreliable: Observed value retrieved from external data source. Value is invalid due to validation limits set. Value is removed.
7 Observation where canister reset or calibration change has occurred -> 0 ORIGINAL_DOUBTFUL: Observed value retrieved from external data source. Value is valid, but marked as suspect due to soft validation limits being exceeded.
Unit conversions
On importing, the units from the HCS file are set. These can be used in the unit conversion to convert the data on import.
The file format
The file format is described in the BOM document "External Agency Hydrological data Transfer - Client Requirements, Version 2.20".
Performance
On a 2.7 GHz dual-core laptop the import is able to import 188 MB of HCS files (1,899,984 lines) in 24 seconds, including basic validation of the data.
Java Source code
[Link]
[Link]
{
private static final Logger log = [Link]([Link]);
private TimeZone defaultTimeZone = null;
private LineReader reader = null;
private TimeSeriesContentHandler handler = null;
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler handler)
throws IOException {
[Link] = reader;
[Link] = handler;
[Link]("");
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
[Link](true);
parseHeader();
[Link](200);
for (String line; (line = [Link]()) != null; [Link](200)) {
line = [Link]();
if ([Link](0) != '#') {
// this is not a header row, undo read line
[Link]();
break;
}
// Supported formats:
// # HEADER: Local ObsTime Offset: 1:30
// # HEADER: Local ObsTime Offset: +01:30
// # HEADER: Local ObsTime Offset: 9
// # HEADER: Local ObsTime Offset: -01:00
String[] elements = [Link](line, ':');
if ([Link] < 3) continue;
[Link](elements);
if (!elements[1].equalsIgnoreCase("Local ObsTime Offset")) continue;
[Link]
, VirtualInputDirConsumer {
private static final Logger log = [Link]([Link]);
@Override
public void setVirtualInputDir(VirtualInputDir virtualInputDir) {
[Link] = virtualInputDir;
}
interface F {
int A = 1 << 0; // attributes
int R = 1 << 1; // required;
int M = 1 << 2; // multple;
}
HeaderElement() {
[Link] = 0;
}
HeaderElement(int flags) {
[Link] = flags;
}
public boolean isRequired() {
return (flags & F.R) != 0;
}
/**
* For performance reasions the pi time series format alllows that the values are stored in
* a separate bin file instead of embedded in the xml file.
* The bin file should have same name as the xml file except the extension equals bin
* In this case all time series should be equidistant.
*/
private VirtualInputDir virtualInputDir = [Link];
private InputStream binaryInputStream = null;
private byte[] byteBuffer = null;
private float[] floatBuffer = null;
private int bufferPos = 0;
private int bufferCount = 0;
/**
* For backwards compatibility. Earlier versions of the PiTimeSeriesParser were tollerant
about the date/time format
* and the case insensitive for header element names.
* This parser should not accept files that are not valid according to pi_timeseries.xsd
* When old adapters are not working you can UseLenientPiTimeSeriesParser temporaray till
the adapter is fixed
*
* @param lenient
*/
public static void setLenient(boolean lenient) {
[Link] = lenient;
}
public PiTimeSeriesParser() {
[Link](lenient);
}
@Override
public void parse(XMLStreamReader reader, String virtualFileName, TimeSeriesContentHandler
timeSeriesContentHandler) throws Exception {
[Link] = reader;
[Link] = virtualFileName;
[Link] = timeSeriesContentHandler;
// time zone can be overruled by one or more time zone elements in the pi file
[Link]([Link]());
if () {
parse();
return;
}
binaryInputStream = [Link](virtualBinFileName);
try {
if (byteBuffer == null) {
byteBuffer = new byte[BUFFER_SIZE * NumberType.FLOAT_SIZE];
floatBuffer = new float[BUFFER_SIZE];
}
parse();
boolean eof = bufferPos == bufferCount && [Link]() == -1;
if (!eof)
throw new IOException("More values available in bin file than expected based on
time step and start and end time\n" + [Link](virtualFileName, "bin"
));
} finally {
bufferPos = 0;
bufferCount = 0;
[Link]();
binaryInputStream = null;
}
}
[Link](XMLStreamConstants.START_ELEMENT, null, "series");
[Link]();
parseHeader();
if (binaryInputStream == null) {
while ([Link]() == XMLStreamConstants.START_ELEMENT &&
[Link]([Link](), "event")) {
parseEvent();
}
if ([Link]() == XMLStreamConstants.START_ELEMENT) {
// skip comment
[Link](XMLStreamConstants.START_ELEMENT, null, "comment");
[Link]();
[Link]();
}
} else {
readValuesFromBinFile();
}
[Link](XMLStreamConstants.END_ELEMENT, null, "series");
[Link]();
}
String flagText = [Link](null, "flag");
String commentText = [Link](null, "comment");
if (timeText == null)
throw new Exception("Attribute time is missing");
if (dateText == null)
throw new Exception("Attribute date is missing");
if (valueText == null)
throw new Exception("Attribute value is missing");
try {
[Link]([Link](dateText, timeText));
} catch (ParseException e) {
throw new Exception("Can not parse " + dateText + ' ' + timeText);
}
if (flagText == null) {
[Link](0);
} else {
try {
[Link]([Link](flagText));
} catch (NumberFormatException e) {
throw new Exception("Flag should be an integer " + flagText);
}
}
[Link](commentText);
try {
float value = [Link](valueText);
// we can not use the automatic missing value detection of the content handler
because the missing value is different for each time series
if (value == missingValue) {
value = [Link];
} else {
[Link]([Link](valueText, '.'));
}
[Link](value);
[Link]();
} catch (NumberFormatException e) {
throw new Exception("Value should be a float " + valueText);
}
[Link]();
[Link](XMLStreamConstants.END_ELEMENT, null, "event");
[Link]();
}
long time;
try {
time = [Link](dateText, timeText);
} catch (ParseException e) {
throw new Exception("Not a valid data time for "
+ currentHeaderElement + ' ' + dateText + ' ' + timeText, e);
}
[Link]();
return time;
}
TimeUnit tu = [Link](unit);
if (tu != null) {
String multiplierText = [Link](null, "multiplier");
int multiplier;
if (multiplierText == null) {
multiplier = 1;
} else {
try {
multiplier = [Link](multiplierText);
} catch (NumberFormatException e) {
throw new Exception([Link](e), e);
}
if (multiplier == 0) {
throw new Exception("Multiplier is 0");
}
}
if (divider == 0) {
throw new Exception("dividplier is 0");
}
}
[Link]();
return [Link]() * multiplier / divider;
} else {
[Link]();
return 0;
}
}
endTime = Long.MIN_VALUE;
missingValue = [Link];
creationDateText = null;
creationTimeText = "[Link]";
[Link]();
}
ArrayList<String> ids = new ArrayList<String>();
ArrayList<String> names = new ArrayList<String>();
ArrayList<String> stringValues = new ArrayList<String>();
do {
if ([Link]() == XMLStreamConstants.START_ELEMENT) {
String id = [Link](null, "id");
String name = [Link](null, "name");
String stringValue = [Link](null, "value");
[Link](id);
[Link](name);
[Link](stringValue);
}
[Link]();
} while (![Link]().equals([Link]()));
float[] values = new float[[Link]()];
for (int i = 0; i < [Link]; i++) {
values[i] = [Link]([Link](i));
}
[Link]([Link](new String[[Link]()]), [Link](new
String[[Link]()]), values);
}
if (currentHeaderElement == element) {
throw new Exception("Duplicate header element: " + localName);
}
currentHeaderElement = element;
}
[Link]
, VirtualInputDirConsumer {
private static final Logger log = [Link]([Link]);
@Override
public void parse(XMLStreamReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = new PiMapStacksReader(reader, virtualFileName,
[Link](), [Link]());
try {
parse(contentHandler);
} finally {
[Link]();
}
}
[Link](timeSeriesHeader);
}
@Override
public GeoDatum getDefaultGeoDatum() {
return geoDatum;
}
};
[Link](timeSeriesHeader);
[Link](true);
while ([Link]()) {
[Link]([Link]());
[Link]([Link]());
[Link]([Link]());
[Link](-1);
[Link]([Link]());
switch ([Link]()) {
case ASCII:
parseAscii(times);
break;
case PCRGRID:
parsePcRaster(times);
break;
case USGS:
parseUsgs(times);
break;
}
}
}
[Link](timeSeriesHeader);
PcRasterParser pcrParser = new PcRasterParser();
for (int i = 0; i < [Link]; i++) {
[Link](times[i]);
String fileName = [Link](fileNamePattern, i, [Link]);
if () {
if ([Link]()) [Link](fileName + " is missing, assume missing value
grid");
[Link](times[i]);
[Link](null);
[Link]();
continue;
}
// todo handle virtual file
File file = new File(fileName);
if (![Link]()) {
File defaultDir = new File(virtualFileName).getParentFile();
file = new File(defaultDir, fileName);
}
[Link](file, [Link]);
}
}
@Override
public void setVirtualInputDir(VirtualInputDir virtualInputDir) {
[Link] = virtualInputDir;
}
}
]]>
HymosAscii
Overview
Imports time series data from files in Hymos ASCII format with five header lines containing a description of the time series:
Import type
Example
Here is a simple example:
FIXE
PAR
1
NOOR HH4 418 8
2008 1 1 0 2 6048 1 1 114.906
2008 01 01 00 15 0.478
2008 01 01 00 30 0.478
2008 01 01 00 45 0.478
2008 01 01 01 00 0.478
2008 01 01 01 15 0.478
2008 01 01 01 30 0.478
2008 01 01 01 45 0.478
LMW
Overview
This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Imports time series data directly from the Dutch Water Authorities' data centre (Landelijk Meetnet Water). The FEWS LMW import function uses
the SIP protocol to import the data directly from a remote database; an internet connection is therefore required. In the FEWS import
configuration file a username and password have to be entered; these can be requested from the LMW data centre directly and not through
Deltares. Without an appropriate username/password combination it is not possible to import data. Currently the import only works on Windows
computers as there is no Linux library available to access the LMW database.
Configuration (Example)
A complete import module configuration consists of an ID Mapping file, a Flag Mapping file, a Unit Mapping file, and an Import Module Instance
file. See the attached example configuration files.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant 10 minute series for timezone GMT+1 hour for a
period of 24 hours before the current system time.
[Link]
<general>
<importType>LMW</importType>
<serverUrl>[Link]</serverUrl>
<user>......</user>
<password>.......</password>
<relativeViewPeriod unit="hour" startoverrulable="true" endoverrulable="true" start="-24"
end="1"/>
<idMapId>IdImportLMW</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<flagConversionsId>ImportMSWFlagConversions</flagConversionsId>
<missingValue>-1000000000</missingValue>
<missingValue>99999</missingValue>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>LMW</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportLMW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>LMW_h</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
....
<externUnit unit="cm" parameterid="[Link]"/>
<externUnit unit="0.01 m3/s" parameterid="[Link]"/>
]]>
IDMapping
The ID Mapping configuration is very important because this is used by the import function to make requests to the LMW database. In this
example two locations have been included, Almen (externalLocation="ALME") and Lobith (externalLocation="LOBI"). Both these locations have
an external parameter "H10", imported in FEWS as "H.m" waterlevels. Both these series are observations and therefore need a qualifier "WN". A
complete list of the location and parameter IDs of all series can be requested from LMW.
[Link]
<map externalparameter="H10" internalparameter="H.m" internallocation="H-RN-ALME"
externalparameterqualifier="WN" externallocation="ALME"/>
<map externalparameter="H10" internalparameter="H.m" internallocation="H-RN-0001"
externalparameterqualifier="WN" externallocation="LOBI"/>
]]>
FlagConversion
The LMW database has a data quality flag for each value in the database; a complete list of quality flags can be requested from LMW.
UnitConversion
Important in the above configuration is that a unitconversion is forced to convert the waterlevels from cm to m NAP. In LMW all water levels are
stored in cm.
Some Issues
Most parameters will be observation data, but some, like the astronomic tide or forecast data, are predictions. If the data are not
observation data, you must add a qualifier to the parameter to indicate the SIP command to be used (see the sketch after this list).
For observation data the external qualifier is "WN"
For forecast data the external qualifier is "VW"
For astronomical data the external qualifier is "AS"
The data in the LMW database cover a period of one month only and the data are retrieved per day.
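A sketch of such a mapping for an astronomical tide series, following the ID Mapping example above; the internal Ids are illustrative:
<map externalParameter="H10" internalParameter="H.astro" internalLocation="H-RN-0001"
externalParameterQualifier="AS" externalLocation="LOBI"/>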
MM3P
Overview
MM3P files are essentially CSV files (comma-separated values) with the following characteristics:
The first line contains the identification of the columns (see the example below). This line is expected to be present, but it is simply
skipped by the reading routines.
The ID of the location appears in the first column. The ID of the parameter appears in the second column. These are taken to be the
external names for the location and parameter.
The time base column is used to optionally identify the time step of a series. This makes it possible to have both a 15 minute and an
hourly time series in one file.
The derivation is an enumeration of VAL (actual value) and AVG (average value during the last time step). It is read as a qualifier and can be
used in the idMapping to store the series at the correct destination.
The date and time for each observed value appears in the fifth column in the format "YYYY-mm-DD HH:MM".
The value itself appears in the sixth column.
There may be more than one location and more than one parameter in the file - each combination will become a new time series.
Example file
Here is an example of such a file (note that a comma (,) is used as the separator exclusively and that the decimal separator is a period (.))
OS_NAME,PT_NAME,Timebase,Derivation,Timestamp,Value,Manual
TKK001,LEVEL,15M,VAL,2009-03-13 05:00,10.655,
TKK001,LEVEL,15M,VAL,2009-03-13 04:45,10.65063,
TKK001,LEVEL,15M,VAL,2009-03-13 04:30,10.64937,
TKK001,LEVEL,15M,VAL,2009-03-13 04:15,10.65563,
TKK001,LEVEL,15M,VAL,2009-03-13 04:00,10.6575,
TKK001,LEVEL,15M,VAL,2009-03-13 03:45,10.6525,
TKK001,LEVEL,15M,VAL,2009-03-13 03:30,10.65,
TKK001,LEVEL,15M,VAL,2009-03-13 03:15,10.65125,
TKK001,LEVEL,15M,VAL,2009-03-13 03:00,10.64937,
TKK001,LEVEL,15M,VAL,2009-03-13 02:45,10.65,
TKK001,LEVEL,15M,VAL,2009-03-13 02:30,10.65063,
TKK001,LEVEL,15M,VAL,2009-03-13 02:15,10.65625,
TKK001,LEVEL,15M,VAL,2009-03-13 02:00,10.65312,
TKK001,LEVEL,15M,VAL,2009-03-13 01:45,10.66,
TKK001,LEVEL,15M,VAL,2009-03-13 01:30,10.64937,
TKK001,LEVEL,15M,VAL,2009-03-13 01:15,10.65688
TKK001,LEVEL,15M,VAL,2009-03-13 01:00,10.65938
TKK001,LEVEL,15M,VAL,2009-03-13 00:45,10.65688
Configuration
<timeSeriesImportRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<import>
<general>
<importType>MM3PCSV</importType>
<folder>$IMPORT_FOLDER_MM3P$</folder>
<maxAllowedFolderSizeMB>250</maxAllowedFolderSizeMB>
<failedFolder>$IMPORT_FAILED_FOLDER$/MM3P</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$/MM3P</backupFolder>
<idMapId>IdMM3P</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>MM3P</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMM3P</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>TKK001</locationId>
<locationId>TKK101</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
]]></import></timeSeriesImportRun>
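The configuration above refers to an id map IdMM3P. As a sketch only (the internal Ids are invented), and assuming the Derivation column is matched via externalParameterQualifier as in the LMW and DINO examples, the VAL and AVG series can be routed to different internal parameters:
<map externalParameter="LEVEL" externalParameterQualifier="VAL" internalParameter="H.meting"
externalLocation="TKK001" internalLocation="TKK001"/>
<map externalParameter="LEVEL" externalParameterQualifier="AVG" internalParameter="H.meting.gem"
externalLocation="TKK001" internalLocation="TKK001"/>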
Pegelonline
Overview
This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Imports time series data that have been provided by Pegelonline ([Link]). The FEWS Pegelonline import function is a
straightforward ASCII file import function.
Configuration (Example)
A complete import module configuration consists of an ID Mapping file, a Unit Mapping file and an Import Module Instance file. See the attached
example configuration files.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant 15 minute time series for timezone GMT+1 hour.
[Link]
<general>
<importType>Pegelonline</importType>
<folder>$IMPORT_FOLDER_PEGELONLINE$</folder>
<idMapId>IdImportPegelOnline</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<missingValue>-777</missingValue>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>PegelOnline</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportPegelOnline</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationSetId>PegelOnline_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<externUnit unit="cm" parameterid="H.m"/>
]]>
IDMapping
The ID Mapping configuration file links the external Pegelonline IDs to the internal FEWS IDs. External Pegelonline IDs are for example "WT_O"
for water temperature and "W_O" for water level.
[Link]
<parameter internal="H.m" external="W_O"/>
<location internal="H-RN-0693" external="2390020"/>
<location internal="H-RN-0984" external="279100000100"/>
<location internal="H-RN-WURZ" external="2430060"/>
]]>
UnitConversion
Important in the above configuration is that a unit conversion is forced to convert the water levels from cm to m. In Pegelonline all water levels
are stored in cm; the unit is, however, included in the Pegelonline data file header.
Example File
WQCSV
Overview
Imports time series data in csv format, specially added for some of the Dutch Waterboards. This import format has some special features
compared to other time series import formats. Water quality is mostly analysed from a sample, therefore the sample id is a required field in this
file. The data is separated by a ";" and contains 15 columns with data. Because the data files do not contain any information on the content of the
different columns, the layout and number of columns is fixed.
Column Content
1 Sample ID
2 Location ID
11 Value
13 Unit
15 Parameter ID
The Location IDs, Parameter IDs and Units can be converted to DELFT-FEWS IDs and units using the different mapping tables.
The reader is named WQCSV, which should be configured in the general section of the import. An example import configuration is shown below:
<timeSeriesImportRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<import>
<general>
<importType>WQCSV</importType>
<folder>$IMPORT_FOLDER_WQCSV$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_WQCSV$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_WQCSV$</backupFolder>
<idMapId>IdImportWQCSV</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>WQCSV</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportWQCSV</moduleInstanceId>
<valueType>sample</valueType>
<parameterId>ZS</parameterId>
<locationSetId>WQLocaties</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportWQCSV</moduleInstanceId>
<valueType>sample</valueType>
<parameterId>BZV4</parameterId>
<locationSetId>WQLocaties</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportWQCSV</moduleInstanceId>
<valueType>sample</valueType>
<parameterId>BZV1</parameterId>
<locationSetId>WQLocaties</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
The file format is just plain ASCII with the columns separated by a semicolon ";"
Example:
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Afvoer geschat
l/s;;;15;NVT;l/s;OW;afGeschat
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Afvoer
l/s;;;n.b.;NVT;l/s;OW;afL/s
2006010177;OGANS900;Gansbeek
Schelkenspoort;204.22;367.63;09-01-2006;[Link];Ammonium-N;NH4;;0,2;N;mg/l;OW;NH4
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];BZV met Atu/5
dagen;BZV5;<;1;O2;mg/l;OW;BZV1
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Cadmium
(Cd);Cd;;0,088;NVT;ug/l;OW;Cd2W
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Calcium
(Ca);Ca;;36;NVT;mg/l;OW;Ca2W
2006010177;OGANS900;Gansbeek
Schelkenspoort;204.22;367.63;09-01-2006;[Link];Chloride;Cl;;14;NVT;mg/l;OW;Cl
2006010177;OGANS900;Gansbeek Schelkenspoort;204.22;367.63;09-01-2006;[Link];Chroom
(Cr);Cr;;1,2;NVT;ug/l;OW;Cr2W
Note:
Make sure the date and time formats are correct (dd-mm-yyyy and hh-mm-ss).
Make sure each line has only 15 columns with 14 ";" characters separating the columns.
Only columns 1, 2, 6, 7, 10, 11, 13 and 15 are required, all other columns can be left empty.
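The configuration above refers to an id map IdImportWQCSV. A minimal sketch, using external names taken from the example lines above; the internal Ids are illustrative:
<map externalParameter="NH4" internalParameter="NH4" externalLocation="OGANS900" internalLocation="OGANS900"/>
<map externalParameter="BZV1" internalParameter="BZV1" externalLocation="OGANS900" internalLocation="OGANS900"/>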
[Link]
[Link]
{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws IOException {
[Link]("");
[Link]("n.b.");
[Link]("nb");
[Link]("n,b,");
[Link]('?');
ArcInfoAscii
Overview
Imports time series data (grids) from ArcInfoAscii files. The time and parameter information are encoded in the filename. Example:
Rain_20071010231500.asc (parameterId_YYYYMMDDHHMMSS.extension)
parameterId = Rain
year = 2007
month = 10
day = 10
hours = 23
min = 15
sec = 00
This import always uses the location id ARC_INFO_LOC as the external location. The import can read from a zip stream. As such, you can put
multiple grids within a single .zip file. This is useful for large numbers of grids or for very large grids.
The reader is named ArcInfoAsciiGrid which should be configured in the general section of the import:
<importType>ArcInfoAsciiGrid</importType>.
Example:
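A minimal sketch of the general section; the folder, Ids and the geoDatum value are illustrative (see the note on geoDatum directly below):
<general>
    <importType>ArcInfoAsciiGrid</importType>
    <folder>$IMPORT_FOLDER_ARCINFO$</folder>
    <idMapId>IdImportArcInfo</idMapId>
    <geoDatum>WGS 1984</geoDatum>
    <dataFeedId>ArcInfoAscii</dataFeedId>
</general>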
As the file format does not include geographical datum information you must configure the geoDatum field in the general section
The IdMapping file should always refer to the external location ARC_INFO_LOC.
Example:
<?xml version="1.0" encoding="UTF-8"?>
<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<parameter external="CEH" internal="[Link]"/>
<location external="ARC_INFO_LOC" internal="My_Location"/>
</idMap>
Introduction
ARC ASCIIGRID refers to a specific interchange format developed for ARC/INFO rasters in ASCII format. The format consists of a header that
specifies the geographic domain and resolution, followed by the actual grid cell values. Usually the file extension is .asc.
ncols 157
nrows 171
xllcorner -156.08749650000
yllcorner 18.870890200000
cellsize 0.00833300
0 0 1 1 1 2 3 3 5 6 8 9 12 14 18 21 25 30 35 41 47 53
59 66 73 79 86 92 97 102 106 109 112 113 113 113 111 109 106
103 98 94 89 83 78 72 67 61 56 51 46 41 37 32 29 25 22 19
etc...
Geographic header
ncols xxxxx
ncols refers to the number of columns in the grid and xxxxx is the numerical value
nrows xxxxx
nrows refers to the number of rows in the grid and xxxxx is the numerical value
xllcorner xxxxx
xllcorner refers to the western edge of the grid and xxxxx is the numerical value
yllcorner xxxxx
yllcorner refers to the southern edge of the grid and xxxxx is the numerical value
cellsize xxxxx
cellsize refers to the resolution of the grid and xxxxx is the numerical value
nodata_value xxxxx
nodata_value refers to the value that represents missing data and xxxxx is the numerical value. The default is -9999.
ArcWat
Arcwat provides DBF IV files that can be imported through the data import module of Delft-FEWS. The file is rather basic:
it contains two fixed columns, named "STAMP" and "WAARDE". The STAMP column contains the date-time and the "WAARDE" column contains
the data value.
The STAMP column should be defined as a numeric value (19.1) with the following format: yyyyMMddHHmmss.0
for example: 20080830080718.0, which is read as the following date-time: August 30th, 2008, [Link]
The WAARDE column should contain the values in a string, with a dot as decimal separator.
An error will be raised in case the STAMP or WAARDE column does not contain a value that meets these requirements.
The importType should be "ARCWAT_DBF".
The location and parameter ID are derived from the file name. If the file name is "LOC_PAR_INFO.DBF", the base file name will be split on the
underscore character. The first token of the file name is used as external location and the second as external parameter ID.
The attached example file "Station1_Parameter1_20081004_1600.dbf" will be parsed into external location ID "Station1" and external parameter
ID "Parameter1".
Configuration
In the import moduleInstance the following definition should be used to import ArcWat DBF files:
<general>
<importType>ARCWAT_DBF</importType>
<folder>....
.....
</general>
Example
BIL Import
Overview
TimeSeries reader for BIL grid files. The identifier for this reader is "BIL". For each BIL file to be imported two other files should also be present:
File format
In the above example the [Link] file contains the actual data, the [Link] file describes the bil file and the [Link] file contains the date/time
information for the bil file.
;
; ArcView Image Information
;
NCOLS 534
NROWS 734
NBANDS 2
NBLOCKS 4
NBITS 16
LAYOUT BIL
BYTEORDER I
SKIPBYTES 0
MAPUNITS CELSIUS;METERS
ULXMAP 300000
ULYMAP 300000
XDIM 100
YDIM 100
The BIL import assigns a numerical id (starting at 0) to each parameter in the BIL file. This information is needed to set up the idmapping table (see
below). The NBLOCKS parameter denotes the number of timesteps. As such, it is possible to have multiple timesteps in a single bil file. The
NBANDS parameter denotes the number of parameters in the file. The MAPUNITS keywords are not used in the FEWS reader.
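As a sketch, the id map for a file with two bands could map the numerical ids to internal parameters as follows; the internal Ids are illustrative and the external location shown is only a placeholder:
<map externalParameter="0" internalParameter="T.air" externalLocation="BIL" internalLocation="BIL"/>
<map externalParameter="1" internalParameter="H.snow" externalLocation="BIL" internalLocation="BIL"/>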
200001011200 60
200001011300 60
200001011400 60
200001011500 60
The first column in the time files contains: YYYYMMDDHHMM, the second column the number of minutes in the timesteps. The second column
is presently ignored and may be omitted.
NRBITS 0
In the future we plan to also support the PIXELTYPE keyword in the header which will alleviate the need for the hack described
here.
Configuration
<?xml version="1.0" encoding="UTF-8"?>
<timeSeriesImportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<import>
<general>
<importType>BIL</importType>
<folder>d:/import/bil</folder>
<idMapId>bilMapId</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>GridImportBIL</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>BIL</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>GridImportBIL</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>BIL</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
BUFR
BUFR data files are used extensively in the meteorological and oceanographic community to exchange
all manner of data, but most importantly satellite images. More information: [Link] and [Link]
The BUFR files are assumed to contain the image for one parameter and one time. The name of the
parameter is encoded as the "X" and "Y" fields of the corresponding record, in particular: 1000X+Y.
The reason for this is that the names as reported by BUFR utilities are contained in external
configuration files. It is easier to use the X and Y fields (see the documentation for this type
of file) than to distill the names from the configuration files.
BUFR files containing timeseries data can be read using the "WMOBUFR" import type. The import functions use the following conventions:
The name of the location is the string associated with the data item with "fxy" code 0/01/019. Usually this is the human readable name.
The parameter name is constructed from the "fxy" code as: f-xx-yyy, for instance "0-11-011" is the wind direction at 10 m above sea or
ground level. The reason for using this encoding is that it is contained in the file itself, whereas the description "Wind direction (10 m)" is
found in an external file.
Note:
Some BUFR files, one example being files produced by the Wavenet measurement system in the UK, contain extra information rendering them
unusable for the library that implements the WMOBUFR import type. Instead use the BUFR type. The files may contain only a single time, though
multiple parameters. (If the WMOBUFR library can not properly handle them, then parameters that you know to be present will be missing.)
The names of the parameters are slightly different in that case: they are formed as an integer number from the "fxy" code, so that fxy = 0 22 70
(significant wave height) becomes "22070" instead of "0-22-070".
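As an illustration, id map entries for the two naming conventions could look as follows; the internal Ids and the external location are illustrative:
<!-- WMOBUFR import type: parameter name formed as f-xx-yyy -->
<map externalParameter="0-11-011" internalParameter="Wind.dir" externalLocation="Station1" internalLocation="Station1"/>
<!-- BUFR import type: parameter name formed as an integer from the fxy code -->
<map externalParameter="22070" internalParameter="Hm0" externalLocation="Station1" internalLocation="Station1"/>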
When using BUFR files, you should at least have a basic understanding of the philosophy of the file format. A BUFR file consists of one or more
messages, each containing data and a complete description of these data.
However, the description is encoded: each part is identified by the so-called fxy code, a code consisting of three numbers, f, x and y, that are used
to retrieve information from several tables. These tables (see the subdirectory "bufr" under the directory "bin" of the Delft-FEWS installation)
contain the descriptive strings:
The Delft-FEWS import module uses but a few pieces of the available information, notably the location ID, the parameter ID and the unit of the
values.
If you need to define the external ID for the parameters, then consult these tables, as they contain all the information you need.
CSV
Overview
Imports time series data from files in CSV format with three header lines containing a description of the time series:
The first line contains the location names, but the line is used only to determine the field separator and the decimal separator (see below)
The second line contains the keyword "Location Ids" as the first field and then the IDs for the locations for which the time series are given.
These IDs are the external IDs, found in an ID map.
The third line contains the IDs of the parameters.
All other lines contain the time (in yyyy-mm-dd HH:MM:SS format) as the first field and the values for each time series in the next fields.
Values between -1000.0 and -999.0 (inclusive) are regarded as missing values.
Furthermore, if you need to specify the unit in which the parameter value is expressed, you can do this by adding the unit in square brackets to
the ID of the parameter:
Rainfall [mm/day]
Import type
Example
Location Names,Bewdley,Saxons Lode
Location Ids,EA_H-2001,EA_H-2032
Time,Rainfall,Rainfall
2003-03-01 [Link],-999,-999
2003-03-01 [Link],1.000,1.000
2003-03-01 [Link],2.000,2.000
2003-03-01 [Link],3.000,3.000
2003-03-01 [Link],4.000,4.000
2003-03-01 [Link],-999,5.000
2003-03-01 [Link],6.000,6.000
2003-03-01 [Link],7.000,7.000
2003-03-01 [Link],8.000,8.000
2003-03-01 [Link],9.000,9.000
2003-03-01 [Link],10.000,10.000
2003-03-01 [Link],11.000,11.000
2003-03-01 [Link],12.000,12.000
2003-03-01 [Link],13.000,13.000
2003-03-01 [Link],14.000,14.986
If the first line contains a comma, the field separator is taken to be a comma and the decimal separator a period (.); otherwise the field separator
is assumed to be a semicolon (;) and the decimal separator a comma. This way locale-specific CSV files are supported.
The field separator is therefore either a comma or a semicolon. Tabs are not supported.
[Link]
[Link]
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws IOException {
[Link] = contentHandler;
[Link](-999.9999f, -999f);
[Link] = reader;
parseHeader();
// The first few lines contain vital information about the file:
// - Whether the separator character is a , or a ;
// - The names of the parameters and locations
if ([Link] != [Link])
throw new IOException("Number of locations not the same as the number of parameters\n"
+ [Link]());
columnCount = [Link];
}
}
]]>
Database import
Overview
TimeSeries reader for a database. The identifier for this reader is "database". This import allows an FSS or stand alone system to import data from
a database. The import reads the database directly.
Configuration
This reader supports the tableMetadata element in the general section of the timeSeriesImportRun.
Overview
The Delft-Fews Published interface format (PI) consists of a number of xsd schemas defining a number of XML formats for the exchange of data.
The timeseries format deals with (scalar) timeseries data.
Time series data represent data collected over a given period of time at a specific location. Within the PI XML format timeseries files can contain
both equidistant times series and non-equidistant series. Multiple time series can be stored in a single file. All time and date information is given in
GMT unless otherwise stated. Each series has a small header followed by a number of events. An event is defined by a date/time, a value and an
(optional) quality flag. A missing value definition can be defined in the files. The default (and preferred) missing value definition is NaN.
As described in the timeseries XML schemas a single quality flag may be given. It is up to the data supplier to describe the meaning of the quality
flags used. Delft-Fews will map these to internal flags on import.
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>locA</locationId> <!-- Put the locationId here.
the locationId is defined by the data supplier. Delft-Fews
will map this to an internal location if needed.
-->
<parameterId>[Link]</parameterId> <!-- Put the parameterId here.
the parameterId is defined by the data supplier. Delft-Fews
will map this to an internal parameter if needed. -->
<timeStep unit="second" multiplier="3600"/>
<!-- start and end date/time are required! -->
<startDate date="2006-08-23" time="[Link]"/>
<endDate date="2006-08-24" time="[Link]"/>
<missVal>-8888.0</missVal>
<longName>Bobbio Trebbia</longName>
<units>m</units>
</header>
<event value="8.66" date="2006-08-23" time="[Link]"/>
<event value="9.66" date="2006-08-23" time="[Link]"/>
<event value="8.66" time="[Link]" flag="33" date="2006-08-23"/>
<event value="-8888.0" date="2006-08-23" time="[Link]"/>
<event value="8888.0" date="2006-08-23" time="[Link]"/>
<event value="8888.0" time="[Link]" flag="9" date="2006-08-23"/>
<event value="8888.0" time="[Link]" flag="99" date="2006-08-23"/>
<event value="-8888.0" time="[Link]" flag="33" date="2006-08-24"/>
</series>
<series>
<header>
<type>instantaneous</type>
<locationId>locB</locationId> <!-- Put the locationId here.
the locationId is defined by the data supplier. Delft-Fews
will map this to an internal location if needed.
-->
<parameterId>[Link]</parameterId> <!-- Put the parameterId here.
the parameterId is defined by the data supplier. Delft-Fews
will map this to an internal parameter if needed. -->
<timeStep unit="second" multiplier="3600"/>
<!-- start and end date/time are required! -->
<startDate date="2006-08-23" time="[Link]"/>
<endDate date="2006-08-24" time="[Link]"/>
<missVal>-999.0</missVal>
<longName>Fitz</longName>
<units>m</units>
</header>
<event value="2.66" date="2006-08-23" time="[Link]"/>
<event value="2.66" date="2006-08-23" time="[Link]"/>
<event value="2.66" date="2006-08-23" time="[Link]"/>
<event value="-2.0" date="2006-08-23" time="[Link]"/>
<event value="2.0" date="2006-08-23" time="[Link]"/>
<event value="2.0" date="2006-08-23" time="[Link]"/>
<event value="2.0" date="2006-08-23" time="[Link]"/>
<event value="-2.0" date="2006-08-24" time="[Link]"/>
</series>
]]>
Java source code
[Link]
[Link]
, VirtualOutputDirConsumer {
public enum EventDestination {XML_EMBEDDED, SEPARATE_BINARY_FILE, ONLY_HEADERS}
private LittleEndianDataOutputStream binaryOutputSteam = null;
public PiTimeSeriesSerializer() {
}
[Link] = version;
}
@Override
public void setVirtualOutputDir(VirtualOutputDir virtualOutputDir) {
[Link] = virtualOutputDir;
}
@Override
public void serialize(TimeSeriesContent timeSeriesContent, LineWriter writer, String
virtualFileName) throws Exception {
[Link] = timeSeriesContent;
[Link] = writer;
[Link]([Link]());
[Link]([Link]());
if (eventDestination == EventDestination.SEPARATE_BINARY_FILE) {
if (virtualOutputDir == null)
throw new IllegalStateException("virtualOutputDir == null");
binaryOutputSteam = new
LittleEndianDataOutputStream([Link]([Link](virtualFileName
"bin")));
try {
serialize();
} finally {
[Link]();
}
return;
}
binaryOutputSteam = null;
serialize();
}
[Link]();
[Link]();
addAttribute("xmlns", "[Link]
addAttribute("xmlns:xsi", "[Link]
addAttribute("xsi:schemaLocation", [Link]("pi_timeseries.xsd"));
addAttribute("version", [Link]());
[Link]("", "TimeSeries", "TimeSeries", attributesBuffer);
writeElement("timeZone", [Link]((double)
[Link]().getRawOffset() / (double) TimeUnit.HOUR_MILLIS));
for (int i = 0, n = [Link](); i < n; i++) {
[Link](i);
[Link](null, null, "series", null);
writeHeader();
writeEvents();
[Link](null, null, "series");
}
[Link](null, null, "TimeSeries");
[Link]();
}
writeTimeStep(header);
writePeriod();
if ([Link]() >= PiVersion.VERSION_1_5.ordinal()) writeTime("forecastDate",
[Link]());
writeElement("missVal", [Link]([Link]()));
writeOptionalElement("longName", [Link]());
writeOptionalElement("stationName", [Link]());
writeOptionalElement("units", [Link]());
writeOptionalElement("sourceOrganisation", [Link]());
writeOptionalElement("sourceSystem", [Link]());
writeOptionalElement("fileDescription", [Link]());
if ([Link]() != Long.MIN_VALUE) {
writeElement("creationDate", [Link]([Link]()));
writeElement("creationTime", [Link]([Link]()));
}
writeOptionalElement("region", [Link]());
writeTime("startDate", [Link]());
writeTime("endDate", [Link]());
}
writeAttributes("timeStep");
}
[Link]();
addAttribute("date", [Link](time));
addAttribute("time", [Link](time));
addAttribute("value", [Link]('.'));
addAttribute("flag", [Link]());
if ([Link]() >= PiVersion.VERSION_1_3.ordinal()) {
String comment = [Link]();
if (comment != null) addAttribute("comment", [Link]());
}
writeAttributes("event");
}
DINO
Overview
GWS_PutXXXXXXX files
Put_XXXXXXX files
Status
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.
ModuleConfigFiles/
The following example of an Import Module Instance will import the time series as non-equidistant series.
[Link]
<timeSeriesImportRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<import>
<general>
<importType>DINO</importType>
<folder>$IMPORT_FOLDER_DINO$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_DINO$</failedFolder>
<idMapId>IdImportDINO</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>DINO</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportDINO</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DINO_G.meting_nonequidistant</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
IdMapFiles/
[Link]
<idMap xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<!--DINO locations-->
<map externalParameter="STAND (MV)" internalParameter="[Link]" internalLocation="B45F0142"
externalParameterQualifier="1" externalLocation="B45F0142"/>
<map externalParameter="STAND (MV)" internalParameter="[Link]" internalLocation="B51F0423"
externalParameterQualifier="1" externalLocation="B51F0423"/>
</idMap>
]]>
Important in this configuration is the externalParameterQualifier, which is used to indicate the filter number.
Example File/
GWS_PutB45H0224.csv
[Link]
[Link]
{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
[Link](reader);
if ([Link](line, columnSeparator).trim().equalsIgnoreCase("Locatie")) {
columnNames = [Link](line, columnSeparator);
[Link](columnNames);
buffer = new String[[Link]];
headerIndex++;
continue;
}
DIVER MON
Overview
Imports time series data from Diver loggers. The files have a sort of Windows ini file format with file extension (*.mon). The format of the MON ini
files is not well defined. Many programs interpret the structure differently and have various names for the ini file sections and parameters.
Sections: section declarations start with '[' and end with ']'; e.g. '[Logger settings]' or '[Instrument info]'.
Parameters or items: this is the content of a section, with an '=' sign between the key and the value; e.g. "location = abc"
The Date format used is: "yyyy/MM/dd HH:mm:ss"
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT+1 with a time step of 1
hour. Often the MON files do not save the data at rounded hourly times, therefore a tolerance has been added to map the imported data to the
correct hourly interval time series.
[Link]
<timeSeriesImportRun ......>
<import>
<general>
<importType>DIVERMON</importType>
<folder>$IMPORT_FOLDER_MON$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_MON$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_MON$</backupFolder>
<idMapId>IdImportMON</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>MON</dataFeedId>
</general>
<tolerance locationSetId="ImportMON_H.meting.cm_uur" timeUnit="minute" unitCount="30"
parameterId="[Link]"/>
<timeSeriesSet>
<moduleInstanceId>ImportMON</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>ImportMON_H.meting.cm_uur</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
IdMapFiles
ID mapping defines the mappings between Diver MON and FEWS parameters and locations. Remember that ID mapping is case sensitive.
sample of [Link]
<map externalparameter="niveau" internalparameter="[Link]" internallocation=
"Dorperdijk_beneden" externallocation="Dorperdijk beneden"/>
]]>
Example file
There is a wide range of MON file types; this is just one example.
sample of 13PB175646_03_16_0708_23_07.mon
The Mon Import module in Delft-FEWS does not parse all data in the MON file. The important sections and parameters are the following:
[Logger settings] or [Instrument info]; Read location Id from this section
Location or Locatie ; location Id
[Channel 1] or [Kanaal 1] ; Not used
[Channel 2] or [Kanaal 2] ; Not used
[Series settings] or [Instrument info from data header] ; Not used
[Channel 1 from data header] or [Kanaal 1 from data header]; Read Parameter Id
Identification or Identificatie = Parameter Id
[Channel 2 from data header] or [Kanaal 2 from data header]; Read Parameter Id
Identification or Identificatie = Parameter Id
[Data]; Data values with the different channels in columns. The data values may have a "." or a "," as decimal separator; both options are accepted by the import function.
When the MON file is not in the correct format a warning message is returned. Known problems are missing location IDs or parameter IDs in the MON files.
[Link]
[Link]
{
private static final Logger log = [Link]([Link]);
String loggerSettings;
String seriesSettings;
String channel;
String location;
String chnlId;
String chnlRange;
String data;
private int channelCount = -1;
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = reader;
[Link] = virtualFileName;
[Link] = contentHandler;
[Link](99999f);
[Link](true);
parseHeaders();
parseData();
}
/**
* Parses the datarows of the diver file. Data is written as a timestamp, followed by a
number of columns. Each
* column refers to a channel that is declared in the Series settings
*
*/
private void parseData() throws Exception {
int nrOfLines = [Link]([Link]().trim());
long lineCounter = 0;
char columnSeparator = '\0';
String[] buffer = new String[channelCount + 2];
for (String line; (line = [Link]()) != null;) {
if (columnSeparator == '\0') {
columnSeparator = [Link]('\t') == -1 ? ' ' : '\t';
}
if ([Link]("END OF DATA")) break;
[Link](line, columnSeparator, buffer);
[Link]([Link](), "yyyy/MM/dd", buffer[0],
"HH:mm:ss", buffer[1]);
for (int i = 0; i < channelCount; i++) {
String valueText = buffer[i + 2];
if (valueText == null) continue;
valueText = [Link](',', '.');
[Link](i);
[Link]('.', valueText);
[Link]();
}
lineCounter++;
}
if ([Link]())
[Link]("Number of expected values: " + nrOfLines + " Found values: " +
lineCounter);
/**
* Parses the available channels from the *.mon file. Multiple channels can exist in a file.
Typically the number
* of channels will be 2 but files with 1 channel do exist.
*/
private void parseHeaders() throws Exception {
IniFile iniFile = parseHeaderIniFile();
DiverHeaderConstants headerConstants = determineLanguage(iniFile);
}
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
String[] subjects = [Link]();
channelCount = 0;
if ([Link]() == 0) {
[Link]("Identification for channel: '" + subject + "' is not set in File: " +
virtualFileName);
// set dummy parameter
id = "Not Defined";
}
[Link](id);
[Link](loc);
[Link](channelCount, header);
channelCount++;
}
}
if ([Link] > 5) {
if (subjects[4].equalsIgnoreCase("kanaal 1 from data header")) return
DiverHeaderConstants.MIXED_HEADER_CONSTANTS;
if (subjects[4].equalsIgnoreCase("kanaal1 from data header")) return
DiverHeaderConstants.MIXED_HEADER_CONSTANTSNEW;
}
if ([Link] > 3) {
if (subjects[2].equalsIgnoreCase("kanaal 1 from data header")) return
DiverHeaderConstants.INCOMPLETE_HEADER_CONSTANTS;
}
FewsDatabase Import
Overview
TimeSeries reader for a FEWS Master Controller database. The identifier for this reader is "Fewsdatabase". This import allows an FSS or stand-alone system to import data from another FEWS master controller database. The import reads the database directly and does NOT communicate with the MC. The following limitations apply:
Configuration
An example configuration file is attached to this page. This file imports one timeseries from the eami00 database on the fewdbsvr04 server; the
figure below shows this file in XML-SPY grid format:
database types
[Link]
<jdbcConnectionString>jdbc:jtds:sqlserver://MYSERVER:1433;databaseName=MYNAME</
jdbcConnectionString>
<user>fews</user>
<password>123</password>
]]>
[Link]
<jdbcConnectionString>jdbc:oracle:thin:@ fewsdbsvr[Link]mi00</jdbcConnectionString>
<user>fews</user>
<password>123</password>
]]>
Syntax for a localDatastore that should be placed into an import directory. Notice that jdbcDriverClass etc. are not required, but the import folder is now required. The system automatically detects the type of the localDatastore (Access or Firebird).
Fewsdatabase
<folder>$IMPORT_FOLDER_FEWS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_FEWS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_FEWS$</backupFolder>
]]>
Overview
Meteosat images are generally imported as images in <filename>.png format. The Meteosat images constitute a time series of PNG images that are geo-referenced by means of a specific world file. Each image needs its own world file, which in the case of PNG carries the extension <filename>.pgw.
Import of images in another format, such as JPEG, is also possible. The corresponding world file for a JPEG image has the extension <filename>.jgw.
The images are imported via a common time series import, for which a specific image parameter needs to be specified in a parameterGroup via
the parameter id image.
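For reference (general world-file background, not taken from this guide): a world file is a plain-text file of six numbers, giving the x pixel size, two rotation terms, the (negative) y pixel size, and the x and y coordinates of the centre of the upper-left pixel. A hypothetical meteosat.pgw could therefore look like this, with purely illustrative values:
5000.0
0.0
0.0
-5000.0
-250000.0
800000.0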
Configuration (Example)
The regional parameters XML file must have a special parameter for the images that are imported.
[Link]
<parameterType>instantaneous</parameterType>
<unit>-</unit>
<valueResolution>8</valueResolution>
<parameter id="image">
<shortName>image</shortName>
</parameter>
]]>
The value resolution indicates the resolution of the values of the pixels (grey tones) in the Meteosat images. In this case 8 grey tones are resampled into a single grey tone to reduce storage space. In the module for the time series import run for a Meteosat image the import is then configured as follows:
[Link]
<general>
<importType>GrayScaleImage</importType>
<folder>$REGION_HOME$/Import/MeteoSat</folder>
<idMapId>IdImportMeteosat</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMeteosat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>image</parameterId>
<locationId>meteosat</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>4</synchLevel>
<expiryTime unit="day" multiplier="750"/>
</timeSeriesSet>
]]>
hdf4
Overview
Notice that the file name should contain the date and time for the data in the following format:
*_yyyymmddHHMM_*.*
that is:
The file name without the extension should contain the date and time between two underscores, for instance
AMSR_E_L2A_BrightnessTemperatures_P07_200604152307_D.hdf
The date and time are given as four digits for the year, two digits for respectively month, day, hour and minute (seconds are ignored; the
format may not currently contain a "T" to separate date and time or a "Z" to indicate the timezone).
All parameters in the file are assumed to be defined on one and the same grid, defined in the configuration.
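As a hedged illustration of the file-name convention above (this is not the FEWS reader itself; the class name and approach are invented for the example), the embedded timestamp could be extracted like this:
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Hdf4FileNameTime {
    // Matches the 12-digit yyyymmddHHMM block between two underscores.
    private static final Pattern TIME_PART = Pattern.compile("_(\\d{12})_");

    public static Date parseTime(String fileName) throws Exception {
        Matcher m = TIME_PART.matcher(fileName);
        if (!m.find()) throw new Exception("No _yyyymmddHHMM_ part in " + fileName);
        SimpleDateFormat format = new SimpleDateFormat("yyyyMMddHHmm");
        format.setTimeZone(TimeZone.getTimeZone("GMT")); // assumption: file times are GMT
        return format.parse(m.group(1));
    }

    public static void main(String[] args) throws Exception {
        // Example file name taken from the text above.
        System.out.println(parseTime("AMSR_E_L2A_BrightnessTemperatures_P07_200604152307_D.hdf"));
    }
}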
Configuration (Example)
<timeSeriesSet>
<moduleInstanceId>ImportMODISHDF4</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>sRb5</parameterId>
<locationId>MODIS_GRID</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>7</synchLevel>
<expiryTime unit="day" multiplier="14"/>
</timeSeriesSet>
HYMOS
HYMOS provides a database format to transfer time series. Two formats of HYMOS Transfer Databases are provided by HYMOS, related to the HYMOS versions 4.03 and 4.50. The transfer database files are in MS Access format (*.mdb).
The transfer files can be imported through the data import module of Delft-FEWS.
Notice that FlagConversion should be applied to convert from the HYMOS flags to FEWS flags. See the attached conversion files.
Configuration
In the import moduleInstance the following definition should be used to import HYMOS transfer databases:
<general>
<importType>HymosTransferDb</importType>
<folder>$IMPORT_FOLDER_HYMOS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_HYMOS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_HYMOS$</backupFolder>
<idMapId>IdImportHYMOS</idMapId>
<flagConversionsId>ImportHYMOSFlagConversions</flagConversionsId>
<importTimeZone>
.....
</general>
It is also possible to export time series from Delft-FEWS to HYMOS transfer databases. The export is available by default in the table of the Time Series Display (Save as .. menu option). However, this option only uses the correct flag conversion if you have also defined the HYMOS export in the file menu of the explorer.
You can activate this by adding the following definition to your explorer configuration:
<interactiveExportFormats>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.03</name>
<exportType>Hymos 4.03</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
<interactiveExportFormat>
<name>HYMOS Transferdatabase 4.50</name>
<exportType>Hymos 4.5</exportType>
<IdMapId>IdHYMOS</IdMapId>
<flagConversionsId>ExportHYMOSFlagConversions</flagConversionsId>
</interactiveExportFormat>
</interactiveExportFormats>
Note: all the timeseries within the "interactiveExportFormat" block should have the same time step, as otherwise Hymos will not accept the file.
[Link]
[Link]
{
private static final Logger log = [Link]([Link]);
private static class SeriesInfo {
int seriesNr = 0;
String locationID = null;
String dataType = null;
String unit = null;
String tableName = null;
TimeZone timeZone = null;
float missVal = -999F;
float traceVal = 0.0F;
boolean forecast = false;
@Override
public String toString() {
return seriesNr + ":" + dataType + ':' + locationID + ':' + tableName;
}
}
public HymosTransferDbTimeSeriesParser() {
}
@Override
public void parse(Connection connection, TimeSeriesContentHandler contentHandler) throws
Exception {
[Link] = connection;
[Link] = [Link]().getRawOffset();
unitMap = readUnitMap();
@SuppressWarnings({"OverlyLongMethod"})
private SeriesInfo[] readSeriesInfos() throws SQLException {
ArrayList<SeriesInfo> list = new ArrayList<SeriesInfo>();
Statement statement = [Link]();
try {
ResultSet resultSet = [Link]("SELECT * FROM Series");
try {
int seriesNrColumnIndex = [Link]("ID");
int realStatColumnIndex;
try {
realStatColumnIndex = [Link]("REALSTAT");
} catch (SQLException e) {
realStatColumnIndex = [Link]("LocationId");
}
int dataTypeColumnIndex;
try {
dataTypeColumnIndex = [Link]("DATATYPE");
} catch (SQLException e) {
dataTypeColumnIndex = [Link]("ParameterId");
}
int tableNameColumnIndex = [Link]("TABLENAME");
int missValColumnIndex;
try {
missValColumnIndex = [Link]("MISSVAL");
} catch (SQLException e) {
missValColumnIndex = [Link]("MissingValue");
}
int traceValColumnIndex;
try {
traceValColumnIndex = [Link]("TRACEVAL");
} catch (SQLException e) {
traceValColumnIndex = -1;
}
int forecastColumnIndex;
try {
forecastColumnIndex = [Link]("FORECAST");
} catch (SQLException e) {
forecastColumnIndex = -1;
}
while ([Link]()) {
SeriesInfo seriesInfo = new SeriesInfo();
[Link] = [Link](seriesNrColumnIndex);
if ([Link]()) [Link]("Parse series info for " +
[Link]);
[Link]
= [Link](realStatColumnIndex).trim();
[Link]
= [Link](dataTypeColumnIndex).trim();
[Link] = [Link]([Link]);
// if ([Link] == null)
// [Link]([Link], "Can not find
datatype/parameter in table parameters " + [Link]);
[Link]
= [Link](tableNameColumnIndex).trim();
if ([Link] == null) {
throw new SQLException("Table name is not specified for series:" +
[Link]);
}
[Link] = [Link](missValColumnIndex);
if ([Link]()) [Link] = [Link];
if (traceValColumnIndex != -1) {
[Link] = [Link](traceValColumnIndex);
if ([Link]()) [Link] = 0;
}
if (forecastColumnIndex != -1) {
[Link] = [Link](forecastColumnIndex);
}
[Link](seriesInfo);
}
} finally {
[Link]();
}
} finally {
[Link]();
}
return [Link](new SeriesInfo[[Link]()]);
}
while ([Link]()) {
String parId = [Link](idColumnIndex);
String unit = [Link](unitColumnIndex);
[Link](parId, unit);
}
} finally {
[Link]();
}
} finally {
[Link]();
}
return res;
}
@SuppressWarnings({"OverlyLongMethod"})
private void read(SeriesInfo seriesInfo) throws IOException, SQLException {
[Link]("Start reading table " + [Link] + " for " + [Link] +
" and " + [Link]);
int rowCount = 0;
int missingValueCount = 0;
int traceValueCount = 0;
float minValue = Float.POSITIVE_INFINITY;
float maxValue = Float.NEGATIVE_INFINITY;
long minTime = Long.MAX_VALUE;
long maxTime = Long.MIN_VALUE;
Statement statement = [Link]();
try {
ResultSet rows = [Link](sql);
int columnType = [Link]().getColumnType(valueColumn);
boolean binary = [Link](columnType);
try {
DefaultTimeSeriesHeader header = null;
while ([Link]()) {
rowCount++;
long time = [Link](valueDateColumn).getTime() - timeZoneOffset;
[Link](time);
if (time > maxTime) maxTime = time;
if (time < minTime) minTime = time;
if ([Link]()) continue;
if (hasLabel)
[Link]([Link](labelColumn));
if (binary) {
byte[] bytes = [Link](valueColumn);
try {
InputStream inputStream;
try {
inputStream = new UnsyncBufferedInputStream(new GZIPInputStream(
new ByteArrayInputStream(bytes), 100000), 100000);
} catch (IOException e) {
inputStream = new ByteArrayInputStream(bytes);
}
boolean asciiGrid = isAsciiGrid(inputStream);
if (asciiGrid) {
AsciiGridReader reader = new AsciiGridReader(inputStream,
"hymostransferdb", [Link]());
try {
Geometry geometry = [Link]();
if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() *
NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}
[Link](values);
[Link](geometry);
[Link](values);
[Link]();
} finally {
[Link]();
}
} else {
MosaicGridFileReader reader = new
MosaicGridFileReader(inputStream);
try {
Geometry geometry = [Link]();
if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() *
NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}
} catch (Exception e) {
}
} else {
float value = [Link](valueColumn);
if (value == [Link]) {
missingValueCount++;
value = [Link];
}
[Link](value);
[Link]();
rowCount++;
} finally {
[Link]();
}
} finally {
[Link]();
}
if (rowCount == 0) {
[Link]("No values found");
} else {
Period period = new Period(minTime, maxTime);
[Link]("Period: " + period);
[Link]("Row count: " + rowCount);
[Link]("Missing value count: " + missingValueCount);
[Link]("Trace value count: " + traceValueCount);
}
}
}
]]>
KNMI CSV
Overview
Imports time series data from the KNMI CSV files that are delivered to Dutch waterboards. The files contain both daily rainfall and evaporation.
The files have an extension of "*.dat".
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.
ModuleConfigFiles/
The following example of an Import Module Instance will import the time series as equidistant daily series for time zone GMT+1. Notice that FEWS should store the time at the end of the day; therefore the import time zone should be -23:00 instead of +01:00.
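One way to read the -23:00 offset (a hedged illustration, assuming the timeZoneOffset describes the time zone of the timestamps in the import file): a daily value dated 2007-01-01 00:00 in the file is then interpreted as 2007-01-01 00:00 in GMT-23, i.e. 2007-01-01 23:00 GMT, which is 2007-01-02 00:00 in GMT+1, the end of 1 January rather than its start.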
[Link]
<timeSeriesImportRun ......"="......"">
<import>
<general>
<importType>KNMICSV</importType>
<folder>$IMPORT_FOLDER_KNMI$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI$</backupFolder>
<idMapId>IdImportKNMI</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>-23:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI_P.meting_dag</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="second" timezone="GMT+1" multiplier="86400"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI_E.ref.Makkink_dag</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="second" timezone="GMT+1" multiplier="86400"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
IdMapFiles/
sample of [Link]
<map externalparameter="908" internalparameter="[Link]" internallocation="KNMIDN"
externallocation="908"/>
<map externalparameter="911" internalparameter="[Link]" internallocation="KNMIDT"
externallocation="911"/>
]]>
Important in this configuration is that the externalParameter and the externalLocation have the same identifier.
Example File/
ab0115a_aamaas.dat
[Link]
[Link]
{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
KNMI EPS
Overview
Imports time series data with forecasts from the KNMI that are delivered to the Dutch waterboards. The files contain the following 52 forecasts:
deterministic forecast
control run
ensemble forecast of 50 members.
See the KNMI site for information on all possible EPS parameters and locations. Two forecasts are supported: a forecast of 10 days and a forecast of 15 days. Notice that the forecast of 15 days still contains a 10-day deterministic forecast only.
Notice that the rainfall forecast is supplied as an accumulative time series in 0.1 mm. All time series have a 6-hourly time step in GMT.
A complete forecast is supplied as a zip file that contains individual files for each location. In each file the forecast time series for a list of parameters are supplied.
ECME_VEN_200710291200_NL001_LC
ECME_VEN_200710291200_NL002_LC
ECME_VEN_200710291200_NL004_LC
ECME_VEN_200710291200_NL009_LC
ECME_VEN_200710291200_NL011_LC
ECME_VEN_200710291200_NL012_LC
ECME_VEN_200710291200_NL015_LC
ECME_VEN_200710291200_NL018_LC
ECME_VEN_200710291200_NL020_LC
for the stations NL001, NL002 etc. at forecast time 2007-10-29 12:00.
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit (for example from 0.1 mm per 6 hours to mm/hr) it is also required to configure a Unit Conversion file.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 6
hours.
[Link]
<timeSeriesImportRun ......"="......"">
<import>
<general>
<importType>KNMIEPS</importType>
<folder>$IMPORT_FOLDER_KNMI_EPS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_EPS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_EPS$</backupFolder>
<idMapId>IdImportEPS</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<importTimeZone>
<!--EPS is in GMT-->
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-EPS</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-EPS</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<ensembleId>EPS</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-EPS</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-EPS</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<!--to let the import module know that the KNMI rainfall is an accumulative time series
in 0.1 mm that should be disaggregated and converted to for example mm/hr-->
<externUnit unit="0.1 mm/6hr" cumulativeSum="true" parameterId="[Link]"/>
<externUnit unit="0.1 mm/6hr" cumulativeSum="true" parameterId="[Link]"/>
<externUnit unit="0.1 mm/6hr" cumulativeSum="true" parameterId="[Link]"/>
</import>
</timeSeriesImportRun>
]]>
IdMapFiles
sample of [Link]
<parameter internal="[Link]" external="13011_deterministic"/>
<parameter internal="[Link]" external="13011_control"/>
<parameter internal="[Link]" external="13011_ensemble"/>
<location internal="KNMI_NL001" external="NL001"/>
<location internal="KNMI_NL002" external="NL002"/>
<location internal="KNMI_NL004" external="NL004"/>
<location internal="KNMI_NL009" external="NL009"/>
<location internal="KNMI_NL011" external="NL011"/>
<location internal="KNMI_NL012" external="NL012"/>
<location internal="KNMI_NL015" external="NL015"/>
<location internal="KNMI_NL018" external="NL018"/>
<location internal="KNMI_NL020" external="NL020"/>
]]>
Important in this configuration is that the externalParameter is manipulated to identify the deterministic, control and ensemble forecasts. The import module therefore automatically appends a suffix to the parameter ID found in the import files. If the import file contains a parameter "13011" for rainfall, the import generates the following externalParameters: "13011_deterministic", "13011_control" and "13011_ensemble".
UnitConversionFile
sample of [Link]
<unitConversions ...................="...................">
<unitConversion>
<inputUnitType>0.1 mm/6hr</inputUnitType>
<outputUnitType>mm/hr</outputUnitType>
<multiplier>0.01666667</multiplier>
<incrementer>0</incrementer>
</unitConversion>
........
........
</unitConversions>
]]>
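The multiplier follows directly from the unit definitions: a file value of 1 represents 0.1 mm accumulated over 6 hours, so expressed in mm/hr it is 0.1 / 6 = 0.01666667, which is the multiplier configured above.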
Example Files
ECME_VEN_2007102912.zip 417 kB Klaas-Jan van Heeringen 15-11-2007 08:28 Example of the 15 day forecast zip file
ImportKNMI 1.00 [Link] 2 kB Klaas-Jan van Heeringen 15-11-2007 08:30 Module Config file
IdImportEPS 1.00 [Link] 0.9 kB Klaas-Jan van Heeringen 15-11-2007 08:30 Id Map file
ImportKNMIUnits 1.00 [Link] 1 kB Klaas-Jan van Heeringen 15-11-2007 08:30 Unit Conversion File
[Link]
[Link]
{
private TimeSeriesContentHandler contentHandler = null;
private LineReader reader = null;
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = reader;
[Link] = contentHandler;
[Link](99999F);
[Link](parameterId + "_deterministic");
[Link](header);
parseValues();
[Link](parameterId + "_control");
[Link](header);
parseValues();
KNMI HDF5
Overview
Imports time series data with radar rainfall information from the KNMI that is delivered to the Dutch waterboards. The files are in HDF5 file format. See the KNMI site for information on the files.
Notice that the rainfall forecast is supplied as an accumulative rainfall sum in 0.01 mm over the last 3 hours, where the time is in GMT. The files are supplied once per hour.
5 minute radar intensity (uncalibrated)
accumulated calibrated precipitation of last 3 hours, supplied every hour
accumulated calibrated daily precipitation, supplied every day
The accumulated precipitation files contain the rainfall depth (in millimetres). The 5-minute radar intensity files contain an 8-bit value (0-255) that represents the rainfall depth. To convert from this bit value to a normal rainfall depth an additional conversion should be applied. The conversion table is listed at the KNMI site. The conversion can be done by a Transformation that uses a log function that fits the conversion table.
This import uses a general C++ DLL for reading the HDF5 files. On some Windows systems the correct runtime components of
Visual C++ Libraries are not installed by default. A Microsoft Visual C++ 2008 SP1 Redistributable Package must be installed on
the computers to solve the problem. Problems have been found on Windows 2003 and Windows 2008 server computers.
On Linux, importing HDF5 files will fail if the operating system is too old. From available evidence, the kernel version must be at least 2.6.18 (see the output of the "uname -a" command).
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit (for example from 0.01 mm per 3 hours to mm/hr) it is also required to configure a Unit Conversion file.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 3
hours.
[Link]
<import>
<!--Radar-->
<general>
<importType>KNMI-HDF5</importType>
<folder>$IMPORT_FOLDER_KNMI_RADAR$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_RADAR$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_RADAR$</backupFolder>
<idMapId>IdImportRADAR</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<!--radar is in GMT-->
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-RADAR</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>KNMI-RADAR</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" timezone="GMT+1" multiplier="3"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
<!--to let the import module know that the KNMI rainfall is in 0.01 mm/3hr that should be
converted to for example mm/hr-->
<externUnit unit="0.01 mm/3hr" parameterid="[Link]"/>
</import>
]]>
IdMapFiles
Defines mappings between KNMI and FEWS parameters and locations.
sample of [Link]
<map externalparameter="image_data" internalparameter="[Link]" internallocation=
"KNMI-RADAR" externallocation="KNMI-RADAR"/>
]]>
UnitConversionFile
sample of [Link]
<unitConversions ...................="...................">
<unitConversion>
<inputUnitType>0.01 mm/3hr</inputUnitType>
<outputUnitType>mm/hr</outputUnitType>
<multiplier>0.00333333</multiplier>
<incrementer>0</incrementer>
</unitConversion>
........
........
</unitConversions>
]]>
Grid definition
Defines the radar grid. Unlike GRIB files (for example HIRLAM), this definition is not read from the file itself; therefore a grid definition is required for the KNMI radar grid.
sample of [Link]
<grids ...........="...........">
<regular locationid="KNMI-RADAR">
<rows>765</rows>
<columns>700</columns>
<polarStereographic>
<originLatitude>90</originLatitude>
<originLongitude>0</originLongitude>
<trueScalingLatitude>60</trueScalingLatitude>
<equatorRadius>6378137</equatorRadius>
<poleRadius>6356752</poleRadius>
</polarStereographic>
<firstCellCenter>
<x>0</x>
<y>-3650500</y> <!-- = projectie_shift + 1/2 cellsize = 3650000 + 500 -->
<z>0</z>
</firstCellCenter>
<xCellSize>1000</xCellSize>
<yCellSize>1000</yCellSize>
</regular>
<!-- the old KNMI 2,5 km grid-->
<regular locationid="KNMI-RADAR2.5km">
<rows>256</rows>
<columns>256</columns>
<polarStereographic>
<originLatitude>90</originLatitude>
<originLongitude>0</originLongitude>
<trueScalingLatitude>60</trueScalingLatitude>
<equatorRadius>6378388</equatorRadius>
<poleRadius>6356912</poleRadius>
</polarStereographic>
<firstCellCenter>
<x>1250</x>
<y>-3728515</y> <!-- = projectie_shift + 1/2 cellsize = 3727265 + 1250 -->
<z>0</z>
</firstCellCenter>
<xCellSize>2500</xCellSize>
<yCellSize>2500</yCellSize>
</regular>
</grids>
]]>
Example Files
KNMI IRIS
Overview
Imports time series data with observed daily rainfall from the KNMI that is delivered to the Dutch waterboards. The files are in CSV format with file extension *.dat and the following record layout:
<location ID>, <location name>, <X in km>, <Y in km>, <date in YYYYMMDD>, <value in 0.1 mm>. See the example file and the KNMI site.
Notice that the rainfall is measured at 08:00 UT (=GMT), but this time is not written in the file. Therefore the time will be read by the FEWS import reader as 00:00 hours. The rainfall is supplied as an accumulative time series over the last 24 hours. This requires the time step in FEWS to be configured as
]]>
More information on the KNMI rainfall data sets can be found in the following document Maand Neerslag Overzicht.
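As an illustration of the record layout described above (all values in this line are made up; the station number matches the ID mapping example further below), a single record could look like:
827, ExampleStation, 140.0, 456.0, 20071025, 15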
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit (for example from 0.1 mm/day to mm/day) it is also required to configure a Unit Conversion file.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant series for timezone GMT with a time step of 24
hours.
[Link]
<timeSeriesImportRun ......"="......"">
<import>
<!--IRIS (24h)-->
<general>
<importType>KNMIIRIS</importType>
<folder>$IMPORT_FOLDER_KNMI_IRIS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_IRIS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_IRIS$</backupFolder>
<idMapId>IdImportIRIS</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<!--data is supplied at 08:00 GMT, but in the file this time is not mentioned, so read as
00:00 hrs.
so the time zone offset (to GMT) should be -8 hrs-->
<importTimeZone>
<timeZoneOffset>-08:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-IRIS</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-IRIS</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" timezone="GMT-8" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<!--to let the import module know that the KNMI rainfall is an accumulative timeseries in
0.1 mm/d
that should be converted to for example mm/d-->
<externUnit unit="0.1 mm/d" parameterid="[Link]"/>
</import>
</timeSeriesImportRun>
]]>
IdMapFiles
sample of [Link]
<map externalparameter="827" internalparameter="[Link]" internallocation="KNMI_827"
externallocation="827"/>
<map externalparameter="831" internalparameter="[Link]" internallocation="KNMI_831"
externallocation="831"/>
<map externalparameter="896" internalparameter="[Link]" internallocation="KNMI_896"
externallocation="896"/>
<map externalparameter="902" internalparameter="[Link]" internallocation="KNMI_902"
externallocation="902"/>
.....
]]>
Important in this configuration is that the externalParameter and the externalLocation have the same identifier.
UnitConversionFile
sample of [Link]
<unitConversions ...................="...................">
<unitConversion>
<inputUnitType>0.1 mm/d</inputUnitType>
<outputUnitType>mm/d</outputUnitType>
<multiplier>0.1</multiplier>
<incrementer>0</incrementer>
</unitConversion> ........
........
</unitConversions>
]]>
Example file
An example of a csv-file from IRIS to be imported using the KNMI-IRIS import Module.
sample of irisgegevens_20071025.dat
Example Files
[Link]
[Link]
{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
KNMI SYNOP
Overview
Imports time series data with observed hourly values from the KNMI that is delivered to the Dutch waterboards. The files are in a kind of CSV format with file extension *.txt, where not a comma but a ";" is used as separator. See the KNMI site for the detailed contents and definition of the file.
Notice that the parameters are not listed in the file. The parameters are hard-coded in the import routines as defined at the KNMI site. Notice also that text fields like "cloudy" are not imported. Only parameters that contain values should be read, like rainfall (RhRhRh).
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the rainfall to a proper unit (for example from 0.1 mm/hr to mm/hr) it is also required to configure a Unit Conversion file.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant series for time zone GMT with a time step of 1 hour.
[Link]
<timeSeriesImportRun ......"="......"">
<import>
<!--SYNOP (1h)-->
<general>
<importType>KNMISYNOPS</importType>
<folder>$IMPORT_FOLDER_KNMI_SYNOPS$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_KNMI_SYNOPS$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_KNMI_SYNOPS$</backupFolder>
<idMapId>IdImportSYNOPS</idMapId>
<unitConversionsId>ImportKNMIUnits</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>KNMI-SYNOPS</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-SYNOPS</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<externUnit unit="0.1 mm/hr" parameterid="[Link]"/>
</import>
</timeSeriesImportRun>
]]>
IdMapFiles
sample of [Link]
<map externalparameter="RhRhRh" internalparameter="[Link]" internallocation="KNMI_370"
externallocation="06370"/>
<map externalparameter="RhRhRh" internalparameter="[Link]" internallocation="KNMI_479"
externallocation="06479"/>
]]>
Important in this configuration is that the externalParameters are as defined at the KNMI site. They are not listed in the import files and are therefore hard-coded in the import routines.
UnitConversionFile
Defines the conversion of the units that should be applied.
sample of [Link]
<unitConversions ...................="...................">
<unitConversion>
<inputUnitType>0.1 mm/hr</inputUnitType>
<outputUnitType>mm/hr</outputUnitType>
<multiplier>0.1</multiplier>
<incrementer>0</incrementer>
</unitConversion> ........
........
</unitConversions>
]]>
Example file
sample of 2007102503_decoded_synops_NL.txt
Example Files
[Link]
{
private static final String[] COLUMNS = {"IX", null, "N", null, "ff", "fxfx", "TTT", "TnTnTn",
"TxTxTx", "TgTgTg",
"TwTwTw", "TdTdTd", "UU", "VVVV", "PPPP", "tr", "RRR", "RhRhRh", "Dr", "QQQ", "ddd"};
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
String line;
// find first line
while ((line = [Link]()) != null && [Link]() < 2) {
// do nothing
}
[Link]("*");
String[] lineItems = new String[28];
// lines are spread actually over two rows
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
while (line != null && [Link]() > 1) {
line += [Link]();
int count = [Link](line, ';', '\"', lineItems);
if (count != 28)
throw new Exception("read data does not contain expected number of columns");
[Link](OutOfDetectionRangeFlag.INSIDE_DETECTION_RANGE);
}
[Link](lineItems[1]);
[Link](COLUMNS[i]);
[Link](header);
[Link]();
}
// read next line to start gathering new line info
line = [Link]();
}
}
}
]]>
Landsat-HDF5
Overview
Imports time series data from the Landsat satellite. The files are in HDF5 file format. See the NASA site for information on the files.
Each file contains a single image of one particular meteorological parameter. The following parameters are supported (note: these are the names
of the data items in the HDF5 files):
"LAI"
" ET" (note the three spaces in front!)
"ET"
"FVC"
"LST"
"SZA"
"SC" - snow cover
This import uses a general C++ DLL for reading the HDF5 files. On some Windows systems the correct runtime components of
Visual C++ Libraries are not installed by default. A Microsoft Visual C++ 2008 SP1 Redistributable Package must be installed on
the computers to solve the problem. Problems have been found on Windows 2003 and Windows 2008 server computers.
On Linux, importing HDF5 files will fail if the operating system is too old. From available evidence, the kernel version must be at least 2.6.18 (see the output of the "uname -a" command).
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. To convert the imported values to a proper unit it may also be required to configure a Unit Conversion file.
ModuleConfigFiles
The following example of an Import Module Instance will import the time series as equidistant series for time zone GMT with a time step of 3 hours.
[Link]
<import>
<!--Meteo data-->
<general>
<importType>Landsat-HDF5</importType>
<folder>$IMPORT_FOLDER_LANDSAT$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_LANDSAT$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_LANDSAT$</backupFolder>
<idMapId>IdImportLandsat</idMapId>
<unitConversionsId>ImportLandsatUnits</unitConversionsId>
<!--radar is in GMT-->
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Landsat</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportLandsat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snow-cover</parameterId>
<locationId>Landsat-grid</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" timezone="GMT+1" multiplier="3"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
</import>
]]>
IdMapFiles
sample of [Link]
<map externalparameter="SC" internalparameter="Snowcover" internallocation="Landsat-grid"
externallocation="Landsat-grid"/>
]]>
Grid definition
Defines the Landsat grid. As not all of this information is present in the Landsat files, it needs to be defined in this way.
sample of [Link]
<grids ...........="...........">
<regular locationid="GridName">
<rows>651</rows>
<columns>1701</columns>
<geostationarySatelliteView>
<centralMeridian>0.0</centralMeridian>
</geostationarySatelliteView>
<firstCellCenter>
<!-- First x must be -COFF*2**16/CFAC, first y must be (NR-LOFF)*2**16/LFAC as found
in the HDF5 files
Cell sizes must be 2**16/CFAC and 2**16/LFAC -->
<x>-1.4796</x>
<y>8.6854</y>
</firstCellCenter>
<xCellSize>0.00480387</xCellSize>
<yCellSize>0.00480387</yCellSize>
</regular></grids>
]]>
The Landsat files contain a set of attributes, of which COFF, LOFF, CFAC and LFAC are the most important ones, as they can be used to determine the coordinate system.
According to the document [Link] the image coordinates have to be converted to intermediate coordinates x and y that in turn can be converted into longitude and latitude.
For the Delft-FEWS configuration we need the extremes for x and y (as the satellite image is a rectangle in these coordinates).
The firstCellCenter's x and y need to be computed as -COFF * 2^16 / CFAC and (NR - LOFF) * 2^16 / LFAC respectively (see the comment in the sample above).
The cell sizes are to be determined as 2^16 / CFAC and 2^16 / LFAC.
The centralMeridian may be taken from the Landsat file, but care must be taken: as we give the cell centers, a shift may be needed to get the image right.
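As a quick consistency check against the sample values above: xCellSize = 2^16 / CFAC = 0.00480387 implies CFAC is roughly 1.36e7, and firstCellCenter x = -COFF * xCellSize = -1.4796 implies COFF is roughly 308.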
Example Files
We have seen problems with this import on a number of systems, notably Windows Server 2008. It turned out that on those systems the underlying runtime libraries need several extra DLLs that are not required on other systems (see for instance the page on known problems for the HDF viewer). Installing the .NET 3.5 Service Pack 1 solves this problem (see [Link]).
LUBW
Overview
Imports time series data in ASCII format from the Landes Umwelt Baden-Württemberg (LUBW) Forecasting Centre in Germany. The LUBW files contain a single parameter for a single location. The parameter follows implicitly from the file extension; e.g. in the file [Link] the parameter is the file extension QVHS (discharge). The first line in the file is a header with information on the location and data period in the file.
Configuring the Import
The reader is named LUBW, which should be configured in the general section of the import. An example import configuration is shown below:
An example IdMapping file (that maps the location and parameter IDs of the attached example input file) is shown below:
</idMap>
K_MAXAU *03.01.2009 T0=0 DT=1 N=462 KM=999.99 MODUL(208) VT=15.01.09-05:00
736.00 736.00 736.00 728.00 725.00 721.00 721.00 721.00 721.00 721.00
725.00 732.00 732.00 736.00 743.00 747.00 755.00 751.00 755.00 758.00
762.00 762.00 762.00 766.00 762.00 762.00 762.00 758.00 758.00 755.00
755.00 751.00 751.00 747.00 743.00 740.00 732.00 728.00 725.00 717.00
717.00 717.00 713.00 710.00 710.00 710.00 710.00 706.00 703.00 699.00
............
NOTE:
1. All the columns in the text file are separated by the space " " character
2. The parameter id used for mapping must always be configured in upper case in the ID mapping configuration file.
3. The header line in the example file contains the following information (a parsing sketch follows after this list):
a. K_MAXAU is the location Id
b. *03.01.2009 is the date of the first data element in the file
c. N=462 is the number of data elements in the file
d. VT=15.01.09-05:00 is the external forecast time
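A hedged sketch of reading those header fields (this is not the FEWS LUBW reader; the class name and approach are illustrative only):
import java.util.LinkedHashMap;
import java.util.Map;

public class LubwHeader {
    // Splits a header line such as
    //   K_MAXAU *03.01.2009 T0=0 DT=1 N=462 KM=999.99 MODUL(208) VT=15.01.09-05:00
    // into its fields: the first token is the location Id, the token starting with '*'
    // is the date of the first value, and key=value tokens (T0, DT, N, KM, VT) are kept as-is.
    public static Map<String, String> parse(String headerLine) {
        Map<String, String> fields = new LinkedHashMap<>();
        String[] tokens = headerLine.trim().split("\\s+"); // columns are space separated
        fields.put("locationId", tokens[0]);
        for (String token : tokens) {
            if (token.startsWith("*")) fields.put("startDate", token.substring(1));
            int eq = token.indexOf('=');
            if (eq > 0) fields.put(token.substring(0, eq), token.substring(eq + 1));
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parse("K_MAXAU *03.01.2009 T0=0 DT=1 N=462 KM=999.99 MODUL(208) VT=15.01.09-05:00"));
    }
}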
Matroos NetCDF
Overview
Imports time series data in NetCDF format from MATROOS forecast databases. The import reader creates a Perl URL for retrieving data directly from Matroos. This NetCDF data import retrieves regular and/or irregular grids from the MATROOS database. There are also three types of NOOS import functions in Delft-FEWS to import scalar time series from MATROOS; see the NOOS import function for this type.
The import function for direct retrieval of grid data (maps) is matroos_netcdfmapseries.
More information on the retrieval of time series from Matroos can be found on: [Link]
An example of the matroos_netcdfmapseries configuration will be given here. The reader is named matroos_netcdfmapseries which should be
configured in the general section of the import. The general section must also contain the server URL and a correct username and password if
you need to log-in. The relativeViewPeriod in the general section is used to select the period to retrieve data for.
ImportMatroosMap 1.00 [Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>matroos_netcdfmapseries</importType>
<serverUrl>[Link]
<user>XXXXX</user>
<password>XXXXx</password>
<relativeViewPeriod unit="hour" start="-1" end="12"/>
<idMapId>IdImportMatroosMap</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Matroos</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMatroos_Maps</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snelheid.u.F0</parameterId>
<locationId>hmcn_zeedelta</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="30"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportMatroos_Maps</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>Snelheid.v.F0</parameterId>
<locationId>hmcn_zeedelta</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="30"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
When a locationSet or multiple time series sets are configured in the import module instance, the Noos readers will construct URLs for each time series and retrieve the data from the Matroos database sequentially. An improvement of the import readers could be to construct a more complex URL and retrieve the data for multiple time series in one URL query.
The IdMapping configuration is very important because it maps the internal FEWS IDs to the Matroos IDs. In the IdMapping the following FEWS and Matroos elements are mapped:
The FEWS external qualifiers can be used to add extra information to the URL. In the example below the following information is stored in the external qualifiers. This information is used to resample the grid: MATROOS will interpolate the original grid to the grid definition you give in the qualifiers.
IdImportMatroosMap 1.00 [Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<map internalparameter="Snelheid.u.F0" externalqualifier3="ymin=442000&ymax=449000"
externalqualifier4="xn=101&yn=101" externalqualifier1="coordsys=RD" externalqualifier2=
"xmin=54000&xmax=67000" externallocation="hmcn_zeedelta" externalparameter="velu"
internallocation="hmcn_zeedelta"/>
<map internalparameter="Snelheid.v.F0" externalqualifier3="ymin=442000&ymax=449000"
externalqualifier4="xn=101&yn=101" externalqualifier1="coordsys=RD" externalqualifier2=
"xmin=54000&xmax=67000" externallocation="hmcn_zeedelta" externalparameter="velv"
internallocation="hmcn_zeedelta"/>
</idMap>
]]>
When importing grids in the FEWS database it is required to configure the grid characteristics in the [Link] file. The grid characteristics must
be similar to the grids imported from MATROOS.
]]>
NetCDF format
The NetCDF format used can be found on the MATROOS webpage and the FEWS-PI pages.
Msw
Overview
Imports time series data from MSW CSV files that are delivered from MFPS. The files contain both observed levels and flows in the main Dutch
rivers. The files have an extension of "*.csv".
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. See the attached example configuration files.
ModuleConfigFiles/
The following example of an Import Module Instance imports the time series as equidistant hourly series for time zone GMT+1, matching the import time zone of +01:00 used in the example below.
[Link]
<general>
<importType>MSW</importType>
<folder>$IMPORT_FOLDER_MSW$</folder>
<failedFolder>$IMPORT_FAILED_FOLDER_MSW$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER_MSW$</backupFolder>
<idMapId>IdImportMSW</idMapId>
<unitConversionsId>ImportMSWUnits</unitConversionsId>
<flagConversionsId>ImportMSWFlagConversions</flagConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>MSW</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMSW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>MSW_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
....
<externUnit unit="cm" parameterid="[Link]"/>
]]>
UnitConversion
Important in this configuration is that a unit conversion should be forced to convert the water levels from cm to m NAP.
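A hedged sketch of such an entry, following the unitConversion pattern shown elsewhere on this page (the unit type names are assumptions and must match the externUnit and internal parameter units actually configured):
<unitConversion>
<inputUnitType>cm</inputUnitType>
<outputUnitType>m</outputUnitType>
<multiplier>0.01</multiplier>
<incrementer>0</incrementer>
</unitConversion>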
Example File/
[Link]
[Link]
[Link]
{
private static final Logger log = [Link]([Link]);
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = contentHandler;
[Link] = virtualFileName;
[Link]("-999");
[Link] = reader;
[Link]('#');
parseHeader();
if ([Link]()) return;
parseData();
}
/**
* Read exactly 6 header-lines and extract:
* from line 2: location Id
* from line 3: parameter Id
*/
private void parseHeader() throws Exception {
String[] headerLines = [Link](reader, 6);
if ([Link] < 6) {
throw new Exception("[Link]: Header of the file " + [Link] + " has
unknown format.");
}
[Link]([Link](headerLines[1], '='));
[Link]([Link](headerLines[2], '='));
[Link](header);
}
/**
* Reads the file and put read data to the TimeSeriesContentHandler
* @return true if at least 1 line is read, otherwise false
* @throws IOException if any unexpected error occur while reading the file
*/
private void parseData() throws Exception {
[Link](firstLine);
[Link] = firstLine[3];
//Put unit to the header and ask if this header is wanted (i.e. are data from this file
wanted ?)
[Link]([Link]);
[Link](header);
if ([Link]()) return;
//Parse other data from this data line and put them to the timeseries handler
parseDataLine(firstLine);
//Read remaining lines, parse the data and put them to the timeseries handler
for (String[] line; (line = [Link](';')) != null;) {
[Link](line);
parseDataLine(line);
}
}
/**
* Parse from each line the following data:
* from column 1: date
* from column 2: time
* from column 4: unit
* from column 5: flag
* from column 6: value
*
* Unit must be the same in all records, i.e. equal to [Link] that is read from the
first data record.
*/
private void parseDataLine(String[] line) throws IOException {
[Link](line[4]);
[Link]('.', line[5]);
[Link]();
}
}
]]>
NETCDF-CF_PROFILE
Overview
This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Imports profile time series data from NetCDF files which comply with the CF standard. More information about the CF standard can be found at:
[Link]
See also the following two other types of NetCDF-CF imports that are available:
In DELFT-FEWS versions 2011.02 and later this import type can also be used to import data using OPeNDAP, see Import data
using OPeNDAP.
ImportNetcdf_Profile 1.00 [Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>NETCDF-CF_PROFILE</importType>
<folder>$IMPORT_FOLDER$/NETCDF</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$</backupFolder>
<idMapId>IdImportNetCDF</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportNetcdf_Profile</moduleInstanceId>
<valueType>longitudinalprofile</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>SobekProfiles_WL</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
An example of the IdMapping used for the NETCDF-CF_PROFILE import will be given below.
Note that in the IdMapping of the parameters, the external name must match the variable names as used by the NetCDF file exactly (case sensitive). The locations that are mapped refer to branch IDs which are defined in the [Link].
</idMap>
]]>
Branches 1.00 [Link]
<branches xmlns:xsi="[Link] xmlns="[Link]
" xsi:schemalocation="[Link]
[Link] version="1.1">
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<branch id="Maastakken_NDB(Haringvliet)">
<branchName>Maastakken_NDB(Haringvliet)</branchName>
<startChainage>1030</startChainage>
<endChainage>321624</endChainage>
<pt label="R_MS_001_1" chainage="1030" z="40.32" z_rb="51.34" y="308594.236" x=
"176029.1129"/>
<pt label="R_MS_001_2" chainage="2061" z="41.79" z_rb="50.92" y="309427.7428" x=
"176631.808"/>
...
<pt label="N_NDB_92" chainage="321624" z="-7.82" z_rb="2.79" y="436953" x="57935.1"/>
</branch>
...
<branch id="Markermeer_VeluweRandmeren">
...
</branch>
</branches>
]]>
The locationSetId used by the ImportNetcdf_Profile.xml must contain the branches defined in the above IdMapping.
NETCDF-CF_GRID
Overview
This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Imports grid time series data from NetCDF files which comply with the CF standard. More information about the CF standard can be found at:
[Link]
See also the following two other types of NetCDF-CF imports that are available:
In DELFT-FEWS versions 2011.02 and later this import type can also be used to import data using OPeNDAP, see Import data
using OPeNDAP.
Import Configuration
An example of the NETCDF-CF_GRID import will be given here.
Id Map Configuration
An example of the IdMapping used for the NETCDF-CF_GRID import is shown below.
]]>
Grids Configuration
When importing grids in the FEWS database it may be required to configure the grid characteristics in the [Link] file. The grid characteristics
must be similar to the grid imported from the NetCDF file.
]]>
For the import of Waterwatch NetCDF data a special NetCDF import type, "NETCDF-CF_GRID-NW", can be used. This import type has been added in July 2011 to the FEWS 2010.01 and 2011.01 builds, and will be available in the 2011.02 build. Waterwatch NetCDF data for Dutch waterboards requires the Transverse Mercator projection to be used. This regular grid projection has been added to the FEWS code in October 2011.
Grids 1.00 [Link]
<rows>1309</rows>
<columns>1049</columns>
<transverseMercator>
<originLatitude>0.0</originLatitude>
<originLongitude>3.0</originLongitude>
<scaleFactorAtOrigin>0.9995999932289124</scaleFactorAtOrigin>
</transverseMercator>
<gridCorners>
<geoDatum>WGS 1984</geoDatum>
<upperLeft>
<x>3.3474039424011828</x>
<y>53.58134813984449</y>
</upperLeft>
<lowerRight>
<x>7.0253359554942705</x>
<y>50.572267443880236</y>
</lowerRight>
</gridCorners>
]]>
NETCDF-CF_TIMESERIES
Overview
This import is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Imports scalar time series data from NetCDF files which comply with the CF standard. More information about the CF standard can be found at:
[Link]
See also the following two other types of NetCDF-CF imports that are available:
Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
In DELFT-FEWS versions 2011.02 and later this import type can also be used to import data using OPeNDAP, see Import data
using OPeNDAP.
ImportNetcdf_Timeseries 1.00 [Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>NETCDF-CF_TIMESERIES</importType>
<folder>$IMPORT_FOLDER$/NETCDF</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$</backupFolder>
<idMapId>IdImportNetCDF</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportNetcdf_Timeseries</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DMFlowPoints</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
An example of the IdMapping used for the NETCDF-CF_TIMESERIES import will be given below.
In this example, the mapped locations correspond to the locations of the locationSet as defined above in the ImportNetcdf_Timeseries.xml.
Note that in the IdMapping of the parameters and locations, the external name must match the variable and location names as used by the NetCDF file exactly (case sensitive).
NOOS
Overview
Imports time series data in ASCII format from MATROOS forecast databases. The import reader creates PHP URLs for retrieving data directly from Matroos. There are three types of NOOS URLs supported by the NOOS import function:
More information on the retrieval of time series from Matroos can be found on: [Link]
For the three types of series retrieval URL's three import readers have been made in FEWS:
1. noos_timeseries
2. noos_1dmapseries
3. noos_mapseries
An example of the noos_timeseries configuration will be given here. The reader is named noos_timeseries which should be configured in the
general section of the import. The general section must also contain the server URL and a correct username and password if you need to log-in.
The relativeViewPeriod in the general section is used to select the period to retrieve data for.
Special attention should be given to the timezone; FEWS retrieves all Noos data from the Matroos database in GMT.
When a locationSet or multiple time series sets are configured in the import module instance, the Noos readers will construct URLs for each time series and retrieve the data from the Matroos database sequentially. An improvement of the import readers could be to construct a more complex URL and retrieve the data for multiple time series in one URL query.
The IdMapping configuration is very important because this maps the internal FEWS Id's to the Matroos Id's. In the IdMapping the following
FEWS and Matroos elements are mapped:
the FEWS externalLocation is used to map the Matroos loc or node element
the FEWS externalParameter is used to map the Matroos unit element
the FEWS externalParameterQualifier is used to map the Matroos source element
In case the noos_mapseries reader is used, the FEWS externalLocation is used to map the Matroos loc elements, namely coordsys, x and y.
An example IdMapping file for the noos_timeseries and noos_1dmapseries readers is shown below:
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="waterlevel_astro" internalparameter="[Link]" internallocation=
"hoekvanholland" externalparameterqualifier="observed" externallocation="hoekvanholland"/>
</idMap>
#------------------------------------------------------
# Timeseries retrieved from the MATROOS maps1d database
# Created at Tue Oct 28 [Link] CET 2008
#------------------------------------------------------
# Location : MAMO001_0
# Position : (64040,444970)
# Source : sobek_hmr
# Unit : waterlevel
# Analyse time: 200709020100
# Timezone : MET
#------------------------------------------------------
200709010000 -0.387653201818466
200709010010 -0.395031750202179
200709010020 -0.407451331615448
200709010030 -0.414252400398254
200709010040 -0.425763547420502
200709010050 -0.43956795334816
200709010100 -0.309808939695358
200709010110 -0.297703713178635
200709010120 -0.289261430501938
200709010130 -0.256232291460037
NTUQUARTER Import
Overview
TimeSeries reader for NTUQUARTER Datalogger files. These contain observed telemetry data for several parameters from NTU (National Technical University of Singapore). The identifier for this reader is "NTUQUARTER".
This timeSeries reader for NtuQuarter Datalogger files (NTUGauge) is used for the Singapore OMS. The files contain observed telemetry data for several parameters sent by NTU (National Technical University of Singapore): Channel_Level, Velocity, Temperature, Conductivity, pH, Turbidity, NTU DO, Battery and Flow. The locationID is encoded in the filename, e.g. MC02_Quarter.dat contains data for locationId MC02.
Columns are:
Date/time
number
Level, m (by SW or SL) (parameter name: level)
Channel_Level, m (by US level sensor) (parameter name: channel_level)
Velocity, m/s (parameter name: velocity)
Temperature, oC (parameter name: temperature)
Conductivity, mS/cm (parameter name: conductivity)
pH (parameter name: ph)
Turbidity, ( parameter name: turbidity)
NTU DO, mg/L (parameter name: ntu_do)
Battery, V (parameter name: battery)
Flow, m3/s (parameter name: flow)
Configuration
The parameter name will be used to set the external parameterId to be used for the idmapping in the import.
Example:
"2007-05-08 [Link]",7892,0.809,0,-0.187,28.76,0.36,7.56,141.9,2.03,12.86272,-3.933358
"2007-05-08 [Link]",7893,0.849,0,-0.167,29.04,0.413,7.59,144.8,2.61,12.87867,-3.686358
"2007-05-08 [Link]",7894,0.89,0,-0.137,29.37,0.475,7.65,146,2.48,12.87363,-3.17018
"2007-05-08 [Link]",7895,0.929,0,-0.109,29.68,0.629,7.67,146.3,3.26,12.85852,-2.632786
"2007-05-08 [Link]",7896,0.966,0,-0.13,30.11,0.907,7.76,147.3,3.96,12.8686,-3.26508
"2007-05-08 [Link]",7897,1.003,0,-0.094,30.4,1.161,7.78,147.5,4.44,12.85601,-2.451332
[Link]
[Link]
* These contain Channel_Level, Velocity, Temperature, Conductivity, pH, Turbidity, NTU DO,
Battery, Flow
* <p/>
* <p/>
* The locationID is encoded in the filename e.g: MC02_Quarter.dat contains data for
* locationId MC02
* <p/>
* <pre>
* Columns are:
* Date/time
* number
* Level, m (by SW or SL) (parameter name: level)
* Channel_Level, m (by US level sensor) (parameter name: channel_level)
* Velocity, m/s (parameter name: velocity)
* Temperature, oC (parameter name: temperature)
* Conductivity, mS/cm (parameter name: conductivity)
* pH (parameter name: ph)
* Turbidity, ( parameter name: turbidity)
* NTU DO, mg/L (parameter name: ntu_do)
* Battery, V (parameter name: battery)
* Flow, m3/s (parameter name: flow)
* <p/>
* Example:
* "2007-05-08 [Link]",7892,0.809,0,-0.187,28.76,0.36,7.56,141.9,2.03,12.86272,-3.933358
* "2007-05-08 [Link]",7893,0.849,0,-0.167,29.04,0.413,7.59,144.8,2.61,12.87867,-3.686358
* "2007-05-08 [Link]",7894,0.89,0,-0.137,29.37,0.475,7.65,146,2.48,12.87363,-3.17018
* "2007-05-08 [Link]",7895,0.929,0,-0.109,29.68,0.629,7.67,146.3,3.26,12.85852,-2.632786
* "2007-05-08 [Link]",7896,0.966,0,-0.13,30.11,0.907,7.76,147.3,3.96,12.8686,-3.26508
* "2007-05-08 [Link]",7897,1.003,0,-0.094,30.4,1.161,7.78,147.5,4.44,12.85601,-2.451332
* </pre>
* The second column (data number) is not used in the import
* <p/>
*/
public class NtuQuarterTimeSeriesParser implements TextParser<TimeSeriesContentHandler> {
private static final Logger log = [Link]([Link]);
private DefaultTimeSeriesHeader timeSeriesHeader = new DefaultTimeSeriesHeader();
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link]('#');
parseLocationIdFromFileName();
if ([Link] < 2)
    throw new Exception("File with name <" + this.virtualFileName + "> cannot be parsed to find location Id");
[Link](fileNameParts[0]);
if (!fileNameParts[1].equals("Quarter")) {
    [Link]("File <" + fileName + "> contains unexpected ending <" + fileNameParts[1] + "> Expected Quarter");
}
if ([Link]())
    [Link]("File <" + fileName + "> contains data for external locationdId <" + fileNameParts[0] + "> and reader type <Quarter>");
}
}
NTURAIN Import
Overview
TimeSeries reader for NTURAIN Datalogger files. These contain observed telemetry data for Rain sent by NTU (National Technical University of Singapore). The identifier for this reader is "NTURAIN".
Configuration
The locationID is encoded in the filename, e.g. MC02_Rain.dat contains data for locationId MC02. An example file plus configuration (IdMap and import module configuration) is attached to this page. The external parameter is always "Rain".
Columns are:
Date/time number Rain(mm)
Example:
"2007-04-30 [Link]",5594,0
"2007-04-30 [Link]",5595,0
"2007-04-30 [Link]",5596,0
"2007-04-30 [Link]",5597,0
"2007-04-30 [Link]",5598,0
[Link]
[Link]
*/
public class NtuRainTimeSeriesParser implements TextParser<TimeSeriesContentHandler> {
private static final Logger log = [Link]([Link]);
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = contentHandler;
[Link]('#');
parseParameterLocationIdFromFileName();
if ([Link]()) return;
if ([Link] < 2)
    throw new IOException("File with name <" + this.virtualFileName + "> cannot be parsed to find location Id");
// split on underscore - the first item is the externalLocationId, the second the parameter (Rain)
[Link](fileNameParts[0]);
if (!fileNameParts[1].equals("Rain")) {
    [Link]("File <" + fileName + "> contains data for external parameter <" + fileNameParts[1] + "> Forcing to Rain");
}
// Set to "Rain" anyway after sending the warning
[Link]("Rain");
if ([Link]()) {
    [Link]("File <" + fileName + "> contains data for external locationdId <" + fileNameParts[0] + "> and parameter <" + fileNameParts[1] + '>');
}
[Link](timeSeriesHeader);
}
}
SSE
Overview
Imports time series data from Scottish & Southern Electric (SSE) ASCII files.
The SSE file format is expected to contain 4 columns with information (location, value, date-time, unit)
The file must be ',' separated
All comment lines start with a '*'
The date format in the files must be of the form 'dd/MM/yyyy HH:mm:ss'
The data file has no parameter in the file, so the unit is used as external parameter Id
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file. Unit conversion can also be included while importing. A complete set of configuration files for importing SSE files with unit conversion is attached SSE_Import.zip.
ModuleConfigFiles/
The following example of an Import Module Instance will import the time series as non-equidistant series.
IdMapFiles/
[Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="115304"
externallocation="Level in Loch Benevean"/>
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="336370"
externallocation="Level in Loch Cluanie"/>
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="335609"
externallocation="River Moriston Level at Torgoyle Bridge"/>
<map externalparameter="Water Level" internalparameter="[Link]" internallocation="335612"
externallocation="Level in Loch Glascarnoch"/>
</idMap>
Important in this configuration is that the external locations are location names.
Example File/
SSE_Test.txt
[Link]
[Link]
{
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws IOException {
[Link]('*');
TMX
Contents
Overview
Status
Configuration (Example)
ModuleConfigFiles/
IdMapFiles/
Import of TMX in CSV file format
Java source code
Overview
Imports time series data from a Microsoft Access (.mdb) database file.
There are two types of import, depending on the type of measurement station; the difference between the two formats lies in how the time series are stored in the database.
For the analog data type it is an .mdb file containing a set of tables where each separate time series is stored in a separate table.
In case of the digital data type one table in the TMX .mdb file can contain many time series.
Status
Configuration (Example)
The configuration files below define import of 4 time series from the tmx .mdb file:
Data Format Parameter (tmx) Location (tmx) Parameter (fews) Location (fews)
Note
A Tmx database (mdb) may contain data for both digital and analog types of data.
When data are in analog format they are usually stored in tables whose names are defined using the location and parameter name, e.g.: Loc063Ao1, Loc063Ai1. In case of the digital format everything is stored in one table, e.g.: ReportAo or ReportDi. When other columns than the regular ActualValue column must be imported from digital tables, the externalParameterQualifier can be used to indicate the correct table; e.g. ReportDi_Open to import the MinutesOpen column of the ReportDi table.
ModuleConfigFiles/
Time series which are listed in this file can be imported into FEWS.
It defines which time series can be imported from the TMX .mdb file and which tables contain their values.
[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] xmlns="
[Link]
<import>
<general>
<importType>Tmx</importType>
<folder>../junit_test_output/nl/wldelft/fews/system/plugin/dataImport/TimeSeriesImportTestData/import/tmx</folder>
<idMapId>tmxMapId</idMapId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
</general>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="11"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
IdMapFiles/
[Link]
<!-- analog -->
<map externalparameter="Ai1" internalparameter="P1.m" internallocation="tmx_location1"
externallocation="Loc063"/>
<map externalparameter="Ao1" internalparameter="P2.m" internallocation="tmx_location1"
externallocation="Loc063"/>
]]>
1. Missing values
Example:
Datum;Tijd;Ai8;Ai1;Ai3;Ai2;Ai7;Ai9
;;Gem;Gem;Gem;Gem;Gem;Gem
25-05-2005;12:00;16.257;15.958;16.135;15.026;15.513
25-05-2005;13:00;16.257;15.958;16.135;15.026;15.507
25-05-2005;14:00;--;-;-;-;--
25-05-2005;15:00;16.257;15.958;16.135;15.026;15.494
...
09-01-2006;01:00;1.648;?;?;?;?;???
09-01-2006;02:00;0.399;12.743;12.606;12.333;?;?
...
27-08-2007;01:00;>>>;>>>;>>>;>>>;>>>;>>>
27-08-2007;02:00;<<<;<<<;<<<;<<<;<<<;<<<
...
It is assumed that the date and time always come in the "DD-MM-YYYY hh:mm" format. Since December 2008 the time format "DD-MM-YYYY hh:mm:ss" (with seconds) is also supported.
3. 2nd line
4. The reader also assumes that Date and Time always come as the 1st and 2nd columns.
The location should be at the first position and should be the same as the location defined in the [Link]
The location must be separated by a " " (space) from the rest of the file name or from ".csv", for example:
location1 [Link]
[Link]
location1 .csv
location1_01012007.csv
[Link]
The system gives a warning when data in the database is overwritten by new data:
The system gives a warning when multiple series are imported for one location-parameter combination:
29.02.2008 [Link] WARN - Multiple time series sets found for parameter-internal/external=[Link]/Ai2
location-internal/external=WAM0400_afwat_kan/WAM0400 ensemble member main$0
[Link]
[Link]
* Each table contains a unique parameter/location combination. In other words, in the Id Mapping
* the location id and parameter id are identical (see also WISKI import).
* <p/>
* When importing, consider only those tables included in the IdMapping used. There are additional
* tables in the database. These should be ignored. If defined to be read, then an error can be
* generated indicating that the format of the requested table is wrong.
* <p/>
* The Status column can be translated using the flag mapping functionality.
* <p/>
* Note: The time at midnight is sometimes offset by a few seconds. This may then not be imported.
* The import module can apply the tolerance functionality to import this to the cardinal time step.
* <p/>
* Example (columns truncated)
* <p/>
* Loc063Di18
* --------------------------------
* Status TimeStamp Value
* --------------------------------
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* 1 7-4-2005 [Link] 0
* <p/>
* <p/>
* Please see also [Link] for more documentation.
*/
@SuppressWarnings({"AssignmentToCollectionOrArrayFieldFromParameter"})
@Override
public void setTimeSeriesHeaders(TimeSeriesHeader[] timeSeriesHeaders) {
[Link] = timeSeriesHeaders;
}
@Override
public void parse(Connection connection, TimeSeriesContentHandler contentHandler) throws
Exception {
[Link] = connection;
[Link] = contentHandler;
String msg = "[Link]: time series parameter = " +
[Link]()
+ ", location: " + [Link]()
+ " can not be imported, table name = " + getTableName(header) + '\n' +
[Link]();
return "SELECT TimeStamp, " + getValueColumnName(header) + ", Status FROM " + tableName;
}
[Link](header);
assert ![Link]();
}
} finally {
[Link]();
}
}
Wiski
Overview
The ZRXP format is an ASCII data exchange file format of Wiski. The file may contain one or more time series.
The time series are defined by a header line that starts with #REXCHANGE.
Directly after this keyword the ID of the time series is defined: #REXCHANGE013S050
After the keyword RINVAL the missing value is defined.
A complete example of such a header is:
#REXCHANGE013S050|*|RINVAL-777|*|
A new time series simply starts with a new header. It also means that the previous series has ended.
More recent versions of ZRXP files may contain a separate location and parameter instead of #REXCHANGE. The location is defined by the
keyword SANR and the parameter by CNAME.
The CUNIT keyword defines the unit and the RINVAL the missing value.
Configuration
ZRXP does not identify both a location and a parameter ID. It only has a time series ID. To import these series into FEWS you have to set both
the external location ID and parameter ID to this time series ID.
<import>
<general>
<importType>WISKI</importType>
<folder>$IMPORT_FOLDER$/zrxp</folder>
....
</general>
Example file
#REXCHANGE013S050|*|RINVAL-777|*|
20081111000000 7.538
20081111002000 7.541
20081111004000 7.544
20081111010000 7.547
20081111012000 7.549
20081111014000 7.550
20081111020000 7.553
20081111022000 7.554
20081111024000 7.555
where
locationId = 13S050
parameterId = 13S050
unit = not defined
missing value = -777
or
#SANR7424|*|#SNAMEPiding|*|
#REXCHANGE7424_N_15|*|#CNAMEN|*|#CUNITmm|*|RINVAL-777|*|
20091211001500 0
20091211003000 0
20091211004500 0
20091211010000 0
20091211011500 0
where
locationId = 7424
parameterId = N
unit = mm
missing value = -777
or
#TSPATH/82/82_3/WATHTE/cmd.p|*|
#LAYOUT(timestamp,value,status,interpolation_type)|*|
#TZUTC+01|*|
#CUNITm|*|
20100127000000 -0.94 200 102
20100127001500 -0.93 200 102
20100127003000 -0.93 200 102
where
locationId = 82_3
parameterId = WATHTE
qualifierId = cmd.p (which will be translated to cmd_p)
timezone = GMT+1
unit = m
Note that the status and the interpolation_type are combined to form a flag which can be mapped in the flag mapping. This is done by multiplying
the status by 1000 and adding the interpolation_type (example below). The status should also be between 0 and 999.
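As a minimal sketch of this combination (an illustration, not the FEWS source code; the method name is made up here):
// Combine the ZRXP status and interpolation_type columns into a single flag value that
// can then be translated with the flag mapping configuration. For the example above,
// status 200 and interpolation_type 102 give flag 200102.
static int combineZrxpFlag(int status, int interpolationType) {
    return status * 1000 + interpolationType;
}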
[Link]
[Link]
@Override
public void parse(LineReader reader, String virtualFileName, TimeSeriesContentHandler
contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = contentHandler;
[Link](-777.0f);
[Link] = reader;
[Link]('?');
[Link](true);
[Link]();
[Link] = null;
[Link](500);
String[] buffer = new String[2];
for (String line; (line = [Link]()) != null; [Link](500)) {
line = [Link]();
if ([Link]("ENDOFFILE")) return;
if ([Link](0) == '#') {
[Link]();
parseHeader();
continue;
}
if ([Link]() == null && [Link]() == null)
    throw new Exception("Not a valid wiski file, REXCHANGE, CNAME, SANR tags are all missing in the file header");
if ([Link]()) continue;
/**
* Read metadata from the #-records. The metadata block is followed by the timeseries-records,
* but the timeseries-records may also be omitted. In this case the metadata block MUST start
* with a record that begins with ## !
* Empty records will be ignored.
* <p/>
* The meaning of the keys is:
* TZ : time zone. Allowed values are UTC0 and UTC+/-x (e.g. UTC+1 or UTC-2).
* TSPATH : /site id/location id/parameter id/ts shortname
* example TSPATH/160/160_1/WATHTE/cmd.p
* only location id and parameter id are parsed and used
* SANR : location id. Used only if not specified with TSPATH
* CNAME: parameter id. Used only if not specified with TSPATH
* CUNIT: unit
* RINVAL: missing value
* REXCHANGE: location-parameter. Will be used only if the metadata block does not contain
* keys TSPATH, SANR or CNAME.
* The string specified by keyword REXCHANGE represents the location Id and also the parameter Id
* (so location Id and parameter Id are equal)
*
* @throws IOException if the header format is incorrect
*/
private void parseHeader() throws IOException {
[Link]();
[Link] = null;
//format: TSPATH/<site id>/<station id>/<parameter shortname>/<ts shortname>
//example: TSPATH/160/160_1/WATHTE/cmd.p (always contains all these 4 elements)
//<ts shortname> is read as qualifier
String tspath = parseKeyValue("TSPATH", line);
if (tspath != null) {
    String[] buffer = [Link](tspath, '/');
    if ([Link] != 5 || buffer[2].length() < 1 || buffer[3].length() < 1) {
        throw new IOException("Not a valid wiski file, TSPATH has an incorrect format: " + tspath +
            " expected: TSPATH/<site id>/<station id>/<parameter shortname>/<ts shortname>");
    }
    tspathLoc = buffer[2];
    tspathPar = buffer[3];
    tspathQual = buffer[4].replace('.', '_'); // dots are not allowed in fews as internal qualifiers, replace dots with underscores
}
String locationId = parseKeyValue("SANR", line);
if (locationId != null) [Link](locationId);
String parameterId = parseKeyValue("CNAME", line);
if (parameterId != null) [Link](parameterId);
String unit = parseKeyValue("CUNIT", line);
if (unit != null) [Link](unit);
String missingValue = parseKeyValue("RINVAL", line);
if (missingValue != null) [Link](missingValue);
String parLoc = parseKeyValue("REXCHANGE", line);
if (parLoc != null) fallbackParLoc = parLoc;
//Parse time zone. Note: UTC is always expected, since no other code will occur according to the Wiski 7 format
//Allowed formats are: UTC0 and UTC+/-x (e.g. UTC+1 or UTC-2).
private static TimeZone parseTimeZone(String buffer, String fileName, String defaultTimeZone) throws IOException {
String strOffset = [Link](3);
TimeZone timeZone;
try {
double offset = [Link](strOffset);
timeZone = [Link](offset);
} catch (NumberFormatException e) {
throw new IOException("Invalid timeZone specified with TZ keyword:" + buffer, e);
}
return timeZone;
}
}
WSCC csv
Overview
Imports time series data in csv format from the Woodleigh System Control Centre in Singapore. The first line is a header for each column indicating the location. The filename encodes the parameter; e.g., in the file Aname_RF.txt the parameter is RF (rainfall). If the column after a data column has a header named "Qf" it is interpreted as a column holding quality flags for that data column. The flags are converted to DELFT-FEWS data flags using the flagConversions mapping.
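A minimal sketch of how data columns and their optional "Qf" flag columns can be paired when walking the header line (an illustration only, not the actual WSCCCsv reader):
// The first two header fields are Date and Time; every following field is a data column,
// optionally followed by a "Qf" column that holds the quality flags for it.
static void pairColumns(String[] headers) {
    for (int i = 2; i < headers.length; i++) {
        boolean hasFlagColumn = i + 1 < headers.length && "Qf".equals(headers[i + 1]);
        System.out.println(headers[i] + (hasFlagColumn ? " -> flags in the next column" : " -> no flag column"));
        if (hasFlagColumn) i++; // skip the flag column itself
    }
}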
The reader is named WSCCCsv which should be configured in the general section of the import. An example import configuration is shown
below:
An example IdMapping file (that maps the first column of the also attached example input file) is shown below:
<?xml version="1.0" encoding="UTF-8"?>
<idMap version="1.1" xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<parameter internal="[Link]" external="LEV"/>
<parameter internal="[Link]" external="FLW"/>
<parameter internal="[Link]" external="RF"/>
<parameter internal="[Link]" external="VOL"/>
<!-- volume -->
<location internal="LowerPierce" external="S23-USR-LEV-RES-1"/>
</idMap>
An example flag conversions file for the WSCC data is shown below:
<name>Blocked ( alarm disabled by the operator )</name>
<value>B</value>
</inputFlag>
<outputFlag>
<name>ORIGINAL_UNRELIABLE</name>
<value>6</value>
<description>Observed value retrieved from external data source.
Value is invalid due to validation limits set. Value is removed.</description>
</outputFlag>
</flagConversion>
<flagConversion>
<inputFlag>
<name>Calculation Failure</name>
<value>C</value>
</inputFlag>
<outputFlag>
<name>ORIGINAL_DOUBTFUL</name>
<value>3</value>
<description>Observed value retrieved from external data source.
Value is valid, but marked as suspect due to soft validation limits being exceeded</description>
</outputFlag>
</flagConversion>
<defaultOutputFlag>
<name>ORIGINAL_RELIABLE</name>
<value>0</value>
<description>The data value is the original value retrieved from an
external source and it successfully passes all validation criteria set.</description>
</defaultOutputFlag>
<missingValueFlag>
<name>ORIGINAL_MISSING</name>
<value>9</value>
</missingValueFlag>
</flagConversions>
Date Time TN-1 QF-1 TN-2 QF-2 TN-3 QF-3 TN-N QF-N
DataType date time float Char(1) float Char(1) float Char(1) float Char(1)
DataFormat ddmmyyyy hh:mm [Link] [Link] [Link] [Link]
NOTE :
1. The value of each of the columns contains the rainfall value for that particular time interval.
This rainfall value is derived from the current (raw accumulated pulse) value minus the previous (raw accumulated pulse) value,
in other words the rainfall (in mm) within that time interval (i.e. 10 mins).
F Telemetry Failed
N Telemetry Normal
C Calculation Failure
Example:
Date,Time,S23-USR-LEV-RES-1,Qf,S48(51)-UPP-LEV-RES-1,Qf,S46(50)-LPP-LEV-RES-1,Qf,S46(24)-MCP-LEV-RES-1,Qf
06052007,02:10
Lake Diagnostic System
Files in the Singapore OMS Lake Diagnostic System file format are expected to be text files.
Implementation details
First, the location is determined from the first two characters of the filename. The import module will scan the input for the following lines:
SCHEDULE A
SCHEDULE B
Data Section
The parameters are collected from lines containing a slash '/', where the actual parameterId is taken from the left part before the slash. The
parameters for SCHEDULE A are retrieved from the lines between SCHEDULE A and SCHEDULE B. The parameters for SCHEDULE B are
retrieved from the lines between SCHEDULE B and Data Section. The actual data with measurements is retrieved from the lines after the Data
Section tag. Each line with data that starts with 'A' or 'B' will also contain a number of space-delimited values. The first value is the Julian day since 1-Jan-2000. The second value is the number of seconds in that day. The timestamp that is derived from this is supposed to be in the same timezone as specified in the importTimeZone value of the import configuration. The other values are measurement data, for each parameter in the corresponding schedule. The import will determine the timestep from the difference in time between the first two data rows for each schedule. The other rows in a schedule must follow the same timestep for that schedule, otherwise the import run will produce error messages. Also, when no data values are found for either schedule A or B, a warning message is produced.
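A minimal sketch of how such a timestamp can be derived (an illustration only; it assumes the first value counts whole days elapsed since 1 Jan 2000 and that the configured importTimeZone is UTC, which may differ from the actual reader):
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public final class LdsTimestampSketch {
    static long timestampMillis(int daysSinceJan2000, int secondsInDay) {
        Calendar cal = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        cal.clear();
        cal.set(2000, Calendar.JANUARY, 1);               // reference date 1-Jan-2000, 00:00
        cal.add(Calendar.DAY_OF_MONTH, daysSinceJan2000); // add the Julian day count
        cal.add(Calendar.SECOND, secondsInDay);           // add the seconds within that day
        return cal.getTimeInMillis();
    }
}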
Sample Input
The following sample shows equidistant timeseries data for schedule A with a timestep of 30 seconds and equidistant timeseries data for
schedule B with a timestep of 15 minutes. The sample also shows the definition of two parameters TCHN 1 and TCHN 2 for schedule A and two
parameters CHAN 8 for schedule B. The current implementation rounds the actual timestamps to make them acceptable as equidistant data for FEWS.
Sample configuration file for importing Singapore OMS Lake Diagnostic System files
Sample idMapping
When working with parameters with different depths the identifiers sometimes have to be mapped.
For instance, suppose the parameterId in the input file was TCHN 1 and the location derived from the filename was t4; this case can easily be mapped to the internal identifiers MCH1 and [Link] using the following id mapping.
EasyQ
Files in the EasyQ file format are expected to be text files. The recommended timezone to be configured is (GMT+1). The input files for the parser
are header files (.hdr) describing the data files and the data files themselves (.sen or .dat). The locationId is retrieved from the filename. If there is
a space in the filename, the locationId will be taken from the part before the first space.
The parser seeks the position in the header file where the variables are defined. This section always starts with 'Data file format' and a separator line. The next line contains the reference to the data file. The parser will ignore the specified absolute file path and only scan whether the extension .dat or .sen has been provided. The lines following the path describe the columns in the data file. The following sample illustrates a piece of a typical header section. Each line contains a 1-based column index, followed by the header name, and finally the unit. The parser expects fixed column lengths.
Structure of the data files
The following snippet illustrates a sample data file. The first six columns are used to set the time step for the measurements.
McIdasArea
Introduction
The McIdasArea file format is available at the Space Science and Engineering Center. Basically each McIdas Area file is a binary file containing several header blocks followed by a data block. The values in the data block are, for MinoSil, 4-byte big-endian integers.
The data files for MinoSil differ slightly from the 2006 specifications. Instead they use the function below for calculating the day. The first header block is the directory block, which contains 64 integers. The following snippet illustrates how the datestamp is calculated from a pair of integers starting at the specified offset.
CUMULATIVE_DAYS[monthIndex]) monthIndex++;
int month = (monthIndex + 1) % 12;
if (month == 0) month = 12;
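The snippet above is truncated at the start; a self-contained sketch of the same month lookup could look as follows (a hypothetical reconstruction, assuming CUMULATIVE_DAYS holds the cumulative day count at the end of each month of a non-leap year):
public final class McIdasDaySketch {
    private static final int[] CUMULATIVE_DAYS =
            {31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365};

    // Determine the month for a given 1-based day of the year.
    static int monthForDayOfYear(int dayOfYear) {
        int monthIndex = 0;
        while (dayOfYear > CUMULATIVE_DAYS[monthIndex]) monthIndex++;
        int month = (monthIndex + 1) % 12;
        if (month == 0) month = 12;
        return month;
    }
}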
<import>
<general>
<importType>McIDASArea</importType>
<folder>$IMPORT_FOLDER$/Radar</folder>
<idMapId>ImportRadar</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportRadar</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radar2000</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="60"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>6</synchLevel>
</timeSeriesSet>
<externUnit unit="0.001 mm" parameterid="[Link]"/>
</import>
<regular locationId="Radar2000">
<rows>760</rows>
<columns>760</columns>
<lambertConformalConic>
<originLatitude>0</originLatitude>
<originLongitude>0</originLongitude>
<firstStandardParallelLatitude>33.5</firstStandardParallelLatitude>
<secondStandardParallelLatitude>46.5</secondStandardParallelLatitude>
</lambertConformalConic>
<firstCellCenter>
<x>-1012857.84</x>
<y>5541058.269</y>
</firstCellCenter>
<xCellSize>2000</xCellSize>
<yCellSize>2000</yCellSize>
</regular>
...
Keller IDC
Overview
In addition to the time series read from the channels, the following meta information is also read and stored:
InstallationDepth
HeightOfWellhead
Offset
WaterDensity
BatteryCapacity
Notice: to store the data from the channels you should use the channel number as external parameter. To store the meta data you should use the
above listed keys as external parameter (case sensitive!).
Configuration
[Link]
[Link]
package [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];
/**
* TimeSeries reader for Keller AG *.IDC files
*
*/
public class IdcTimeSeriesParser implements BinaryParser<TimeSeriesContentHandler> {
// Constants
private static final int DF_ID_FILE = 0;
private static final int DF_ID_DEVICE = 1;
private static final int DF_ID_DATA = 2;
private static final int DF_ID_UNITS = 3;
private static final int DF_ID_PROFILE = 4;
private static final int DF_ID_CONFIG = 5;
private static final int DF_ID_WL_CONVERTED = 6;
private static final int DF_ID_AIR_COMPENSATED = 7;
private static final int DF_ID_INFO = 8;
//
private LittleEndianDataInputStream is = null;
private TimeSeriesContentHandler contentHandler = null;
private int rawTimeZoneOffset;
/**
* Parse Keller AG *.idc file
*
* @param inputStream
* @param virtualFileName
* @param contentHandler
* @throws Exception
*/
@Override
public void parse(BufferedInputStream inputStream, String virtualFileName,
TimeSeriesContentHandler contentHandler) throws Exception {
switch (blockId) {
case DF_ID_FILE:
// File Identification
String version = readString();
if ([Link]()) [Link]("version = " + version);
break;
case DF_ID_DEVICE:
// Device properties
int lw = [Link]();
int w1 = lw / 65536;
int w2 = lw % 65536;
if (jaar == 3) {
if (week >= 10) {
abVersion0310 = true;
}
}
if (jaar > 3) {
abVersion0310 = true;
}
locationId = readString();
String comment = readString();
if ([Link]()) {
[Link]("serial number = " + serialNumber);
[Link]("configured as waterlevel = " + configuredAsWaterlevel);
[Link]("comment = " + comment);
[Link]("location id = " + locationId);
}
break;
case DF_ID_DATA:
// Data records
int z = [Link]();
for (int i = 0; i < z; i++) {
// datum 8 bytes double
// channel 1 byte
// 3 bytes skip
// value 4 bytes float
// lw 4 bytes int
// 4 bytes skip
// The date is stored as the number of days since 30 Dec 1899. Quite why it is not 31 Dec
// is not clear. 01 Jan 1900 has a days value of 2.
double doubleTime = [Link]();
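// The conversion from doubleTime to the epoch-millisecond value 'time' is elided in this
// listing. A plausible reconstruction (an assumption, not the original code) uses the
// 25569-day offset between 30 Dec 1899 and 1 Jan 1970:
// long time = Math.round((doubleTime - 25569.0) * 24L * 60L * 60L * 1000L);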
[Link](new Long(time));
byte channel = [Link]();
[Link](channel);
[Link](is, 3);
float singleValue = [Link]();
[Link](singleValue);
[Link]();
int longValue = [Link]();
// Skip 4 bytes
[Link](is, 4);
}
break;
case DF_ID_UNITS:
case DF_ID_PROFILE:
installationDepth = userValArr[2];
if ([Link]()) [Link]("installation depth " + installationDepth);
heightOfWellhead = userValArr[3];
if ([Link]()) [Link]("Height of wellhead above sea level " +
heightOfWellhead);
offset = userValArr[4];
if ([Link]()) [Link]("Offset " + offset);
waterDensity = userValArr[5];
if ([Link]()) [Link]("Water density " + waterDensity);
if ((availableChannels & 2) == 2) {
float p1min = [Link]();
float p1max = [Link]();
if ([Link]()) [Link]("P1 min " + p1min);
if ([Link]()) [Link]("P1 max " + p1max);
}
if ((availableChannels & 4) == 4) {
float p2min = [Link]();
float p2max = [Link]();
if ([Link]()) [Link]("P2 min " + p2min);
if ([Link]()) [Link]("P2 max " + p2max);
}
if ((availableChannels & 8) == 8) {
float t1min = [Link]();
float t1max = [Link]();
if ([Link]()) [Link]("T1 min " + t1min);
if ([Link]()) [Link]("T1 max " + t1max);
}
if ((availableChannels & 16) == 16) {
float tob1min = [Link]();
float tob1max = [Link]();
if ([Link]()) [Link]("TOB1 min " + tob1min);
if ([Link]()) [Link]("TOB1 max " + tob1max);
}
if ((availableChannels & 32) == 32) {
float tob2min = [Link]();
float tob2max = [Link]();
if ([Link]()) [Link]("TOB2 min " + tob2min);
if ([Link]()) [Link]("TOB2 max " + tob2max);
}
break;
case DF_ID_CONFIG:
// Record configuration
int startDate = [Link]();
int stopDate = [Link]();
lw = [Link]();
if (abVersion0310) {
int recFixCounter = [Link]();
short recModCounter = [Link]();
} else {
lw = [Link]();
int recFixCounter = lw / 65536;
int tmp = lw % 65536;
short recModCounter = (short) tmp;
}
short sw = [Link]();
short recFastModCounter = [Link]();
boolean recEndless = [Link]();
break;
case DF_ID_WL_CONVERTED:
// Waterlevel converted
boolean convertedIntoWaterlevel = [Link]();
break;
case DF_ID_AIR_COMPENSATED:
// Airpressure compensation
boolean airCompensated = [Link]();
break;
case DF_ID_INFO:
// Additional information
batteryCapacity = [Link]();
for (int i = 0; i < 10; i++) {
int reserve = [Link]();
}
// Read CRC16 sum of the whole file
short crc16 = [Link]();
break;
}
}
// Installation depth (Inhangdiepte)
[Link](locationId);
[Link](startTime);
[Link]("InstallationDepth");
[Link]("m");
[Link](headerEq);
[Link](installationDepth);
[Link]();
[Link]("HeightOfWellhead");
[Link]("m");
[Link](headerEq);
[Link](heightOfWellhead);
[Link]();
[Link]("Offset");
[Link]("m");
[Link](headerEq);
[Link](offset);
[Link]();
[Link]("WaterDensity");
[Link]("kg/m3");
[Link](headerEq);
[Link](waterDensity);
[Link]();
[Link]("BatteryCapacity");
[Link]("%");
[Link](headerEq);
[Link](batteryCapacity);
[Link]();
}
/**
* Read block identification
* @return block identification
* @throws IOException
*/
private short readBlock() throws IOException {
short block = [Link]();
short w1 = [Link]();
short w2 = [Link]();
/**
*
*
*
* @param locationId
* @param header
* @return
* @throws [Link]
*/
private boolean parseUnits(
String locationId,
DefaultTimeSeriesHeader header) throws IOException {
// Channel
byte channel = [Link]();
// Unit
byte[] bytes = new byte[7];
[Link](bytes, 0, 7);
String unit = new String(bytes, 1, bytes[0]);
if([Link]("m")){
retval = false;
}
[Link](unit);
if ([Link]("°C")) {
[Link]("deg C");
}
// Multiplier
float multiplier = [Link]();
// Offset
float offset = [Link]();
// Description
bytes = new byte[41];
[Link](bytes, 0, 41);
String description = new String(bytes, 1, bytes[0]);
[Link](is, 3);
if ([Link]()) {
[Link]("channel " + channel);
[Link]("multiplier " + multiplier);
[Link]("offset " + offset);
[Link]("unit " + unit);
[Link]("description " + description);
}
[Link](locationId);
[Link]([Link](channel));
[Link](channel, header);
}
return retval;
}
/**
* Read a string from file
* @return string from file
* @throws Exception
*/
private String readString() throws IOException {
String retval = "";
// read the length of the string
short length = [Link]();
if (length > 0) {
// Create the byte array to hold the data
byte[] bytes = new byte[length];
}
}
Obserview
Overview
The Obserview format is an ASCII data file format exported from the Obserview system. Each file may contain only one time series.
The time series is defined by the file name; there is no information in the ASCII file on the location or parameter.
Configuration
The Obserview file does not contain a location nor a parameter ID. To import these series into FEWS you have to map the file name to both the FEWS location ID and the parameter ID. The configuration of the import module instance is not different from any other import type. Because the file does not contain any parameter unit information, the "external" unit can be specified in the import module instance.
<general>
<importType>OBSERVIEW</importType>
<folder>$IMPORT_FOLDER$/Obsv</folder>
....
</general>
<timeSeriesSet>
<moduleInstanceId>Import_Obsv</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>ObsLocSet</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
....
<externUnit unit="0.01 m3/s" parameterid="[Link]"/>
<externUnit unit="mm" parameterid="[Link]"/>
]]>
Example file
27-11-2009;[Link];9119
27-11-2009;[Link];9118
27-11-2009;[Link];9118
27-11-2009;[Link];9125
27-11-2009;[Link];9123
27-11-2009;[Link];9120
27-11-2009;[Link];9124
27-11-2009;[Link];9120
27-11-2009;[Link];9123
generalCsv
Overview
Imports time series data from files in CSV format with one header line containing the column headers of the time series:
The first line contains the column names (fields) in the csv file; this line is used to determine the field separator and the names of the data columns
All other lines contain the date-time as field and the values for each time series.
Values between -1000.0 and -999.0 (inclusive) are regarded as missing values.
Import type
Example
Time,Waterstand,Pomp-1 Born
04-05-2011 03:24,0.000000,-0.450000
04-05-2011 03:44,0.000000,-0.450000
04-05-2011 03:54,0.000000,-0.440000
.....
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>generalCSV</importType>
<folder>$IMPORT_FOLDER$/OBS</folder>
<failedFolder>$IMPORT_FAILED_FOLDER$</failedFolder>
<backupFolder>$IMPORT_BACKUP_FOLDER$/OBS</backupFolder>
<table>
<dateTimeColumn pattern="dd-MM-yyyy HH:mm" name="Time"/>
<valueColumn unit="m" locationid="Bosscheveld" parameterid="[Link]" name=
"Waterstand"/>
<valueColumn unit="min" locationid="Bosscheveld" parameterid="[Link]" name=
"Pomp-1 Born"/>
</table>
<idMapId>IdImportOBS</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
</general>
</import>
</timeSeriesImportRun>
If the first line contains a comma, the field separator is taken to be a comma and the decimal separator a period (.); otherwise the field separator is assumed to be a semicolon (;) and the decimal separator a comma. This way locale-specific CSV files are supported.
The field separator is either a comma or a semicolon. Tabs are not supported.
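A minimal sketch of this detection logic (an illustration, not the actual generalCSV implementation; the method name is made up here):
// Decide the field and decimal separators from the header line of a generalCSV file:
// a comma in the header implies comma-separated fields with '.' decimals, otherwise the
// file is assumed to be semicolon-separated with ',' decimals.
static char[] detectSeparators(String headerLine) {
    boolean commaSeparated = headerLine.indexOf(',') >= 0;
    char fieldSeparator = commaSeparated ? ',' : ';';
    char decimalSeparator = commaSeparated ? '.' : ',';
    return new char[]{fieldSeparator, decimalSeparator};
}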
DINO Service
Overview
The GrondWaterService offers a wide variety of data; however, only imports for the following data types have been implemented:
The ground water levels request returns a list of time/value pairs of the measured ground water levels for a measuring station.
The ground water statistics request returns a list containing a number of times and for each time a set of statistical parameters that apply to the
ground water level. The following parameters are returned:
MAX_LEVEL
MIN_LEVEL
STD_LEVEL
MEAN_LEVEL
MEDIAN_LEVEL
P10_LEVEL
P25_LEVEL
P75_LEVEL
P90_LEVEL
Configuration (Example)
A complete import module configuration consists of an ID Mapping file and an Import Module Instance file.
ModuleConfigFiles/
The following example of an Import Module Instance will import the time series as non-equidistant series.
[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>dinoservice</importType>
<serverUrl>[Link]
<relativeViewPeriod unit="day" start="-365" end="0"/>
<idMapId>IdImportDino</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
<!-- <timeZoneOffset>+01:00</timeZoneOffset>-->
</importTimeZone>
<dataFeedId>tnonitg</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>G.m</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportDinoService</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>NL-B32C0677-001</locationId>
<!-- <locationSetId>TNO-sensors(SWE)</locationSetId> -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day" multiplier="365"/>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
IdMapFiles/
[Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link]
[Link] version="1.1">
Important in this configuration is the externalQualifier; this is used to map the statistical parameters to FEWS parameters.
GrondWaterService WSDL
[Link]
Ground Water Levels - Response
<S:Body>
<ns2:findMeetreeksResponse xmlns:ns2="[Link]
<GROUND_WATER_LEVELS>
<LEVELS>
<DATE>2007-01-01+01:00</DATE>
<LEVEL>187.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-02+01:00</DATE>
<LEVEL>190.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-03+01:00</DATE>
<LEVEL>193.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-04+01:00</DATE>
<LEVEL>188.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-05+01:00</DATE>
<LEVEL>190.0</LEVEL>
<REMARK/>
</LEVELS>
<LEVELS>
<DATE>2007-01-05+01:00</DATE>
<LEVEL>189.0</LEVEL>
<REMARK/>
</LEVELS>
</GROUND_WATER_LEVELS>
</ns2:findMeetreeksResponse>
</S:Body>
[Link]
GermanSnow
Overview
GermanSnow imports grid time series data from the ASCII file produced by the German SNOW model.
The first line contains the forecast time in the format "YYYY MM DD HH". The time zone is always UTC.
The forecast time may be preceded by the keyword: " - Datum: YYYY MM DD HH".
The second record contains the parameter Id and the time offset in hours in relation to the forecast time.
The next lines contain grid data. The start point of the grid in the SNOW file is the south-west corner.
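As a small illustration of how the offset relates to the forecast time (a sketch, not the FEWS reader): the value after the parameter Id is added to the forecast time, so a negative offset such as -29 refers to 29 hours before the forecast time.
// Valid time of a grid block = forecast time plus the offset in hours given after the
// parameter Id (e.g. "OBWN -29" lies 29 hours before the forecast time).
static long validTimeMillis(long forecastTimeMillis, int offsetHours) {
    return forecastTimeMillis + offsetHours * 3600_000L;
}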
2011 1 10 18
OBWN -29
2.0 2.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 2.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OBND -29
1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 1.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OBWN -28
0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
OBND -28
......
......
The record with the forecast time may exceptionally be followed by a line according to this format:
" - Limits: [ X1- X2; Y2- Y2] - [ X1- X2; Y2- Y2]"
This information is associated with the corrections made by the SNOW model. The reader ignores this line.
Parser features
The parser must know the size of the grid to read from the SNOW files, so the user must configure the grid geometry in the [Link] file.
The parser also expects that all parameters have the same grid size.
Delft3D-Flow
Overview
Import Delft3D Flow model results that are stored in the NEFIS file format. There are 2 types of Delft3D Flow model results:
Point-based output
Grid-based output.
In both cases the data are stored in a NEFIS file, but the structure is different.
Currently the Delft3DFlow import can only import point-based results, which are stored in a number of NEFIS structures (groups, cells, elements as they are defined in the NEFIS file format):
his-const        ITDATE  INTEGER [2]       YYYYMMDD HHmmSS, e.g. 20050101 100101   start date and time
his-info-series  ITHISC  INTEGER [nCells]  0, 10, ...                              time step number for each cell in a group
his-series       ZWL     REAL [nCells]     m, 1.1, 1.2, ...                        water level values for each time step defined in the ITHISC
Names and dimensions of the variables available in the NEFIS Delft3D Flow results file:
FLTR  Total discharge through cross section (velocity points)       [5761] [16]  can not be imported, requires velocity points
DTR   Dispersive transport through cross section (velocity points)  [5761] [ 1]  strange that dimension differs from previous - 1 versus 16
CTR   Momentary discharge through cross section (velocity points)   [5761] [16]  can not be imported, requires velocity points
CTR   Advective transport through cross section (velocity points)   [5761] [ 1]  can not be imported, requires velocity points
Configuration
To import location-based data from a Delft3D Flow NEFIS file, set up a TimeSeriesImport module configuration like:
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>Delft3DFlow</importType>
<folder>../junit_test_output/nl/wldelft/fews/system/plugin/dataImport/TimeSeriesImportTestData/import/delft3dflow</folder>
<idMapId>delft3dflowMapId</idMapId>
<importTimeZone><timeZoneOffset>+01:00</timeZoneOffset></importTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportDelft3DFlow</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Z.m</parameterId>
<locationId>shepelevo</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="4"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
CERF
Overview
CERF is a regionalised conceptual rainfall runoff model from the Centre for Ecology and Hydrology and the Environment Agency (UK).
The CERF gridded rainfall data is a simple CSV format file in which the data in each record refers to a specific day and the records increment in
chronological order. The columns within the file contain the daily rainfall for each grid cell within the export. These columns increment right to left
in "strips" of data for 1km cells for a common Northing but incrementing eastings. The eastings for these cells increment left to right.
Data format
The first column of the file contains the date (format: <day>/<month>/<year>). The first record contains the column headers, which for a grid extract covering a box denoted by the lower left and upper right grid coordinates (nnnE, mmmN), (nnn+10E, mmm+5N) would consist of the following:
Date, nnnE-mmmN,..nnn+10E-mmmN,nnnE-mmm+1N....nnn+10E-mmm+1N...etc.
Grid references are quoted in units of 1000m and rainfall data are in units of mm/day.
Date, 410E-505N , 411E-505N , 412E-505N , ... 454e-506n
01/12/1963, 1.53 , 1.51 , 1.46 , ... 0
02/12/1963, 0 , 0 , 0 , ... 0
03/12/1963, 0 , 0 , 0 , ... 0
04/12/1963, 0 , 0 , 0 , ... 0
05/12/1963, 0 , 0 , 0 , ... 0
06/12/1963, 0.17 , 0.15 , 0.12 , ... 0
07/12/1963, 0.77 , 0.74 , 0.69 , ... 1.32
08/12/1963, 0 , 0 , 0 , ... 0
09/12/1963, 0 , 0 , 0 , ... 0
.
.
.
Configuration
SWE
Overview
The Open Geospatial Consortium has developed the concept of Sensor Web Enablement to make sensor data available on the web. A number of services have been specified, including associated XML formats. The most important XML specification for FEWS seems to be the Observations and Measurements model ([Link]).
Investigation of the xml model and interface to determine which xml files/objects should be supported to implement the observation and
measurement model
Implementation of a SweTimeSeriesParser to test whether import from a URL fits the current import strategy of FEWS.
SWE will involve hundreds of xml files. In FEWS, Castor is used to generate java classes for object/model mapping. A test was performed to see whether Castor could generate code for a single SWE file; [Link]. Castor did not succeed due to the embedded references to external XML files.
Before use!
During tests we managed to get servers to hang because the amount of XML data that had to be returned by the server caused OutOfMemory exceptions. This occurred even for small queries (a short period of just a few days and only one location and parameter). To avoid these problems, please ensure that you've tested your queries on the target service, and adjust your relative view period appropriately.
To avoid these memory exceptions, we've adjusted the import module so that it sends a request to the SWE service for every single location/parameter combination in the IdMap file. A drawback of this approach is that the import has become slower.
Configuration (Example)
In order to import data from a SWE-enabled service one has to configure the following items:
The URL can be configured in the moduleConfigFile for the desired SWE import. Use the serverUrl tag in the general section as described in the following example:
[Link]
<timeSeriesImportRun xmlns:xsi="[Link] xsi:schemalocation="
[Link] [Link]
xmlns="[Link]
<import>
<general>
<importType>SWE</importType>
<serverUrl>[Link]
<relativeViewPeriod unit="week" start="-100" end="0"/>
<idMapId>IdImportKNMI_SWE</idMapId>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>tnonitg</dataFeedId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportKNMI_SWE</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>KNMI-sensors(SWE)</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" startoverrulable="true" start="-5" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
The offerings, procedures and observedProperties are mapped in the idMap file for the SWE import. See the next example:
IdImportSwe
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemalocation="[Link] [
[Link] version="1.1">
<parameter externalqualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:precipitationIntensity"/>
<parameter externalqualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:windSpeed"/>
<parameter externalqualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:airTemperature"/>
<parameter externalqualifier="weatherNL" internal="[Link]" external=
"urn:og[Link]phenomenon:relativeHumidity"/>
Note the parameter mappings in the above example. Four different parameters are described:
The externalQualifier is required and describes which offering is used to get the data from the service.
The internal parameter name is mapped to the external 'observed property'
The internal location is mapped to the external 'procedure'
NetcdfGridDataset
Import NetcdfGridDataset uses NetCDF to read grid data from grib1, grib2 and NC formats.
The NetcdfGridDataset supports:
importing ensembles,
importing grid data from separate layers (z-dimension)
importing grid data that are distributed over several files, e.g. all forecasts in one file, one file per forecast, et cetera.
Starting with DELFT-FEWS version 2011.02, NetcdfGridDataset replaces the grib1-specific import types that use the JGrib decoder.
NetcdfGridDataset is in use under several import types.
This is for backward compatibility with existing configurations, and for logical naming of the import types from the customer's point of view.
NetcdfGridDataset NC also reads grib1 and grib2, however with low performance
grib1 grib1
grib2 grib2
Note:
the old JGrib based imports Grib, GribBasic and GribCosmo can still be used as import types GribOld, GribBasicOld and GribCosmoOld
IP1
Overview
Imports ASCII type time series data in CSV formatted files. Used by FEWS-Basque.
The data file contains one row with data columns, separated by a comma (,).
1 code location -
2 date/time -
Location ID and date/time values are included in " (double quotes). These are removed by the import.
"C0D2","20110912233000",0,0,0,0,0,0,0,159.1,96.7,0,0,0,0,0,0,0,0,0,190,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13.37
Configuration
<folder>$IMPORT_FOLDER_IP1$</folder>
<failedFolder>$IMPORT_FOLDER_IP1$</failedFolder>
<backupFolder>$BACKUP_FOLDER_IP1$</backupFolder>
<idMapId>IdImportIP1</idMapId>
<unitConversionsId>ImportIP1Units</unitConversionsId>
<importTimeZone>
<!--EPS is in GMT-->
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>IP1-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>wlevel</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>T_air</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Precip</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>wlevel_ana</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportIP1</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Discharge</parameterId>
<locationSetId>IP1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minutes" multiplier="10"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>IP1</ensembleId>
</timeSeriesSet>
]]>
idMapping
The parser assigns IDs for the parameters as indicated in the table above.
To map these to the current FEWS parameters, an idMapping can be configured.
For example:
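A minimal sketch of such an idMap; the internal ids follow the time series sets in the configuration above, while the external ids (assigned per column by the IP1 parser) are hypothetical here:
<idMap xmlns="......." version="1.1">
<!-- external ids below are illustrative only -->
<parameter internal="wlevel" external="waterlevel"/>
<parameter internal="Precip" external="precipitation"/>
<location internal="Basque_C0D2" external="C0D2"/>
</idMap>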
IFKIS
Overview
The input file is of ASCII type. Data is structured in 14 columns, separated by a whitespace.
810 2009 4 18 8 0.0000 4.5400 76.3414 122.3708 14.0900 1.5760 50.9380 0.0505 6.4438
810 2009 4 18 9 0.0000 5.2833 68.0622 221.7067 22.5300 2.0883 50.4450 0.0485 6.0487
810 2009 4 18 10 0.0000 6.1333 60.2494 248.2750 19.9600 2.5067 49.6933 0.0475 5.6804
The next table explains the columns from the input and states what parameter ID and parameter unit are assigned by the time series parser during
import:
Column / Description / Parameter ID / Unit
1 / Location ID / - / -
2 / Year / - / -
3 / Month / - / -
4 / Day / - / -
5 / Hour / - / -
6 / Precipitation / precipitation / mm
During import the location ID is set to the (numeric) value found in the first column of the row.
The time stamp is created from the next 4 columns (year, month, day, hour); minutes are set to zero, so values are hourly.
Configuration
.
.
.
</import>
</timeSeriesImportRun>
]]>
idMapping
To map these to the current FEWS locations and parameters, an idMapping can be configured.
For example:
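A minimal sketch of such an idMap, using the parameter id 'precipitation' assigned by the parser (see the table above) and the location code 810 from the data example; the internal ids are hypothetical:
<idMap xmlns="......." version="1.1">
<!-- internal ids below are illustrative only -->
<parameter internal="P.obs" external="precipitation"/>
<location internal="Station_810" external="810"/>
</idMap>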
IJGKlepstanden
Overview
Each row in fact contains 6 columns, however column separation is not consistent.
Separation characters in use are white space, comma and colon.
The data contained in each row is described as follows (first row of the data file as example):
Column / Description / Example
1 / Location / Stevinsluis
5 / Parameter ID / Stand
6 / Parameter value / 0
The location, External qualifier 1 and External qualifier 2 are concatenated during the import to form the LocationID used by the import to store the time series.
For the first row in the data example above, the LocationID becomes Stevinsluis_Kolk1_Zuid.
Configuration
<timeSeriesImportRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<import>
<general>
<importType>IJGKlepstanden</importType>
<folder>$IMPORT_FOLDER_IJGKlepstanden$</folder>
<failedFolder>$IMPORT_FOLDER_IJGKlepstanden$</failedFolder>
<backupFolder>$IMPORT_FOLDER_IJGKlepstanden$</backupFolder>
<importTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>IJGKS-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportIJGKS</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Klepstand</parameterId>
<locationSetId>IJGKS_Locs</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
<ensembleId>IJGKS</ensembleId>
</timeSeriesSet>
.
.
.
</import>
</timeSeriesImportRun>
]]>
idMapping
Non-matching LocationIDs and ParameterIDs assigned during import can be mapped to the ones in use by the FEWS system by defining an ID Mapping.
For example:
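A minimal sketch of such an ID Mapping, using the external parameter id 'Stand' and the concatenated external LocationID from the example above; the internal parameter follows the time series set in the configuration above, the internal location id is hypothetical:
<idMap xmlns="......." version="1.1">
<!-- internal location id below is illustrative only -->
<parameter internal="Klepstand" external="Stand"/>
<location internal="Stevinsluis_K1Z" external="Stevinsluis_Kolk1_Zuid"/>
</idMap>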
Radolan
Overview
Imports high resolution precipitation analysis and forecast data from Radolan/Radvor-OP files from Deutscher Wetterdienst (DWD).
The data is grid data.
File structure
A range of data products from DWD radar observations and/or derived from radar observations are delivered in binary files. The heading in the file
is ASCII text (hence human readable) and is used to determine for which parameter the file contains data.
The remainder of the file contains the data in binary format.
This import can read data for the following data products; the product identifier is used as parameter ID during import:
Product identifiers: TZ, TH, RX, PC, PF, PN, PI, DX, DQ, DXQ
Configuration
.
.
.
</import>
</timeSeriesImportRun>
]]>
idMapping
To map these to the current FEWS locations and parameters, an idMapping can be configured.
For example:
<idMap xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.1">
<parameter internal="[Link]" external="RX"/>
<location internal="Loc10410" external="10410"/>
</idMap>
]]>
Bayern
Overview
Imports ASCII type time series data (level forecasts) from Bayern, location Raunheim am Main.
Data is to be obtained through an HTTP request. Data obtained from the URL must be stored as an ASCII file in order for the parser to process it.
The data consists of three sections: a header, the time series data and a footer.
The header consists of three rows; the first and last contain only dash characters and are ignored by the parser.
The middle row contains the German keyword for location (Messstelle) and the numerical ID for the location, separated by a | (pipe character).
The parser sets both the (external) LocationID and the ParameterID to this numerical ID value.
In between header and footer are the time series date/times and values.
Date/time and values are again separated by the | (pipe character).
----------------------------
Messstelle | 24095302
----------------------------
21.07.2011 05:00 | 167
21.07.2011 06:00 | 165
21.07.2011 07:00 | 163
21.07.2011 08:00 | 160
.
.
.
23.07.2011 04:00 | 199
23.07.2011 05:00 | 199
----------------------------
Datenart: Wasserstand [cm]
Alle Vorhersagewerte ohne Gewähr.
Datenbankabfrage: 21.07.2011 09:33
Configuration
To import forecast data from Bayern, stored in an ASCII file, configure an import module like:
<timeSeriesImportRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<import>
<!--Bayern-->
<general>
<importType>Bayern</importType>
<folder>$IMPORT_FOLDER_BAYERN$</folder>
<failedFolder>$IMPORT_FOLDER_BAYERN$</failedFolder>
<backupFolder>$BACKUP_FOLDER_BAYERN$</backupFolder>
<idMapId>IdImportBayern</idMapId>
<unitConversionsId>ImportBayernUnits</unitConversionsId>
<importTimeZone>
<!--EPS is in GMT-->
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Bayern-DF</dataFeedId>
<reportChangedValues>true</reportChangedValues>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportBayern</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>wlevel</parameterId>
<locationSetId>Bayern</locationSetId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
<expiryTime unit="day"/>
<ensembleId>Bayern</ensembleId>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
idMapping
The parser assigns the numerical ID found in the header to the LocationID as well as the ParameterID.
To map these to the current FEWS location and parameter, an idMapping can be configured.
For example:
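A minimal sketch of such an idMap, using the numerical ID 24095302 from the header example as both external location and external parameter; the internal parameter follows the time series set in the configuration above, the internal location id is hypothetical:
<idMap xmlns="......." version="1.1">
<!-- internal location id below is illustrative only -->
<parameter internal="wlevel" external="24095302"/>
<location internal="Raunheim" external="24095302"/>
</idMap>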
FEWS allows you to write your own time series import format in Java.
In the import module one can specify the fully qualified class name and the bin directory that contains a jar file with the compiled Java code and optionally other third party libraries. A jar file is just a zip file that contains your compiled Java class files.
e.g. classname = [Link]
and binDir = $REGION_HOME$/parsers/bin
The source code required for a simple format is only a few lines of code.
A parser tells the content handler everything it finds in a file, in the order it is available in the file. The content handler will map everything to the right time series.
The content handler will do the id mapping, unit conversion, datum conversion, translation of text to decimal values, translation of text to date/times with the specified time zone, conversion of missing values and trace values, and validation of the time series.
The import module will also open and close the files for you.
The import files can reside on the file system, in a zip, tar or gz file, or on an FTP or SFTP server; the parser programmer will not notice the difference.
Types of parsers
Text parsers
Binary parsers
File parsers
This kind of parser uses a third party library that does not accept streams.
Database parsers
Server parsers
PeriodConsumer
Database and server parsers often need a period in their query to the database or server.
When this interface is implemented, the import module will provide an absolute period.
TimeSeriesHeadersConsumer
Database and server parsers often need the location and parameter ids in their queries.
When this interface is implemented, the import module will convert the FEWS headers with the specified id map and provide them to the parser. The
mapping is used in the opposite direction compared to normal mapping. This can result in a different mapping when the id map is not one internal to
one external and vice versa.
VirtualDirConsumer
With a virtual dir the parser can open meta files, residing in the same directory as the imported file, with additional information required to parse
the file. For example some grid coverage formats need an additional file for the geo-referencing.
Examples
TextParsers
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
XML parsers
[Link]
[Link]
Binary parsers
[Link]
, VirtualInputDirConsumer, FileFilter {
private BufferedImage bufferedImage = null;
private Graphics2D graphics = null;
private int[] rgbs = null;
private float[] values = null;
private VirtualInputDir virtualInputDir = null;
private String virtualFileName = null;
private TimeSeriesContentHandler contentHandler = null;
@Override
public boolean accept(File pathname) {
return getWorldFileExt([Link](pathname)) != null;
}
@Override
public void setVirtualInputDir(VirtualInputDir virtualInputDir) {
[Link] = virtualInputDir;
}
@Override
public void parse(BufferedInputStream inputStream, String virtualFileName,
TimeSeriesContentHandler contentHandler) throws Exception {
[Link] = virtualFileName;
[Link] = contentHandler;
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
[Link]("image");
[Link]("image");
[Link](header);
if ([Link]()) return;
[Link](getTime(new File(virtualFileName).getName(),
[Link]()));
if ([Link]()) return;
loadImage(inputStream, virtualFileName);
Geometry geometry = loadGeometry();
[Link](geometry);
[Link](1.0f);
[Link](values);
[Link]();
}
if ([Link]() != [Link]()) {
throw new Exception("Width of image file (" + [Link]()
+ ") differs from number of cols (" + [Link]() + ") in grid required
geometry or world file");
}
if ([Link]() != [Link]()) {
throw new Exception("Height of image file (" + [Link]()
+ ") differs from number of rows (" + [Link]() + ") in grid required
geometry or world file");
}
return res;
}
if (jaiFormatId == null)
throw new Exception("Unsupported bitmap format " + inputStream);
values[i] = (r + g + b) / 3;
}
} finally {
[Link]();
}
} finally {
[Link]();
}
}
return null;
}
return null;
}
String dateTimeText = [Link]();
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyyMMdd" + [Link](8) +
"HHmm");
[Link](timeZone);
try {
return [Link](dateTimeText).getTime();
} catch (ParseException e) {
throw new Exception("File name should contain valid date time yyyyMMdd_HHmm " +
fileName);
}
}
}
]]>
[Link]
{
private static final Logger log = [Link]([Link]);
@Override
public void parse(BufferedInputStream inputStream, String virtualFileName,
TimeSeriesContentHandler contentHandler) throws Exception {
[Link] = contentHandler;
[Link] = new LittleEndianDataInputStream(inputStream);
parseHeader();
if ([Link]()) return;
if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() * NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}
if ([Link] != [Link]()) {
values = new float[[Link]()];
byteBuffer = new byte[[Link]() * NumberType.INT16_SIZE];
shortBuffer = new short[[Link]()];
}
@SuppressWarnings({"OverlyLongMethod"})
private void parseHeader() throws IOException {
DefaultTimeSeriesHeader header = new DefaultTimeSeriesHeader();
nx1 = [Link]();
ny1 = [Link]();
nz1 = [Link]();
dx1 = (float) dx1int / dxyScale;
dy2 = (float) dy2int / dxyScale;
[Link](is, 4 * nz1);
iBbMode = [Link]();
[Link](is, 4 * 9);
[Link](is, 4);
varName = [Link](is, 20);
varScale = [Link]();
imissing = [Link]();
nradars = [Link]();
if (nradars > 0) {
radarNames = new String[nradars];
for (int i = 0; i < nradars; i++) {
radarNames[i] = [Link](is, 4).trim();
}
}
[Link](geometry);
[Link](1.0f / varScale);
[Link](varName);
[Link](varUnit);
[Link](header);
debugHeader();
}
[Link]("nradars = " + nradars);
if (radarNames != null) {
for (int i = 0; i < [Link]; i++) {
String radarName = radarNames[i];
[Link]("radarName = " + radarName);
}
}
[Link]("dxyScale = " + dxyScale);
[Link]([Link](header, '\n'));
}
}
]]>
[Link]
TimeSeriesContentHandler represents the classes
* to handle the timeseries data that are supplied by the timeseries parsers.
*/
public interface TimeSeriesContentHandler {
/**
* Defines time zone which should be used while importing time series.
* If time zone is defined in the file format - this TimeZone should not be used.
* @return
*/
TimeZone getDefaultTimeZone();
/**
* Adds a value that should be recognized as missing when calling {@link #setValue(float)},
{@link #setValue(char, String)} or {@link #setCoverageValues(float[])}
* {@link Float#NaN} is always recognized as missing
*/
void addMissingValue(float missingValue);
/**
* Adds an alphanumeric tag that should be recognized as missing when calling {@link
#setValue(char, String)}
* These alphanumeric missings are not recognized when using {@link #setValue(float)} or
{@link #setCoverageValues(float[])}
* The missings added with {@link #addMissingValue(float)} and {@link
#addMissingValueRange(float, float)} are also recognized
* {@link Float#NaN} is always recognized as missing
*
*/
void addMissingValue(String missingValue);
/**
* Adds a range of values that should be recognized as missing when calling {@link
#setValue} or {@link #setCoverageValues(float[])}
* NaN, null and an empty string, string with only spaces are always recognized as missing
*/
void addMissingValueRange(float minMissingValue, float maxMissingValue);
/**
* Creating an alias allows high speed switching between different headers
* E.g. For files with multiple parameters per row, for every row multiple switches are
required between different headers
* This will not work properly without defining an alias for every column
* The alias is ultimately fast in the range from 0 to 1000
*
* @param alias, integer for ultimate speed, good practice is to use the parameter column
index.
* @param header
*/
void createTimeSeriesHeaderAlias(int alias, TimeSeriesHeader header);
/**
* Changes the header that will be used when calling {@link #applyCurrentFields()}
* A call to this method will not consume any significant time
* {@link #setTimeSeriesHeader(TimeSeriesHeader)} is relatively time consuming
* @see #createTimeSeriesHeaderAlias(int, TimeSeriesHeader)
* @param alias defined with {@link #createTimeSeriesHeaderAlias (int, TimeSeriesHeader)}
* @throws IllegalArgumentException when alias is not created before
*/
void setTimeSeriesHeader(int alias);
/**
* Same as {@link #setTimeSeriesHeader(int)} , but slightly SLOWER
* The second time this method is called for the SAME header,
* there is NO new time series created but the first one is re-selected
* This method is relatively time consuming.
* When parsing multiple parameters per row use {@link #setTimeSeriesHeader (int)}
*/
void setTimeSeriesHeader(TimeSeriesHeader header);
/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
* A NEW time series is created, with a new forecast time and new ensemble member index
* A warning is logged when this method is called twice for the same header (historical non
ensemble time series)
*/
void setNewTimeSeriesHeader(TimeSeriesHeader header);
/**
* The parser should call this method when it starts parsing time/values and has any idea of
the period of the values that will come.
* This information is only used by this content handler for OPTIMISATION and
* is never required and never results in an error when the real period differs from the
estimated period
* @param period
*/
void setEstimatedPeriod(Period period);
/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
*
* @param time Represents the number of milliseconds since January 1, 1970, [Link] GMT
*/
void setTime(long time);
/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
*
* In addition to simple date format 24h will be recognized as 0h the next day
*
* @param timeZone. When not known use {@link #getDefaultTimeZone()}
* @param pattern see {@link SimpleDateFormat}, in addition for HH 24 will be recognized as
0:00 the next day
* @param dateTime leading and trailing spaces are ignored
*/
void setTime(TimeZone timeZone, String pattern, String dateTime);
/**
* Changes the time that will be used when calling {@link #applyCurrentFields()}
*
* In addition to simple date format 24h will be recognized as 0h the next day
*
* @param timeZone. When not known use {@link #getDefaultTimeZone()}
* @param datePattern see {@link SimpleDateFormat}
* @param date leading and trailing spaces are ignored
* @param timePattern see {@link SimpleDateFormat}, in addition for HH 24 will be
recognized as 0:00 the next day
* @param time leading and trailing spaces are ignored
*/
void setTime(TimeZone timeZone, String datePattern, String date, String timePattern, String
time);
/**
* Return false if any value for the selected time series with {@link #setTimeSeriesHeader}
is wanted
* When true parsing of ALL values for this time series can be skipped
*/
boolean isCurrentTimeSeriesHeaderForAllTimesRejected();
/**
* Return false if the value selected time {@link #setTimeSeriesHeader} and selected time
{@link #setTime(long)} is wanted
* When true parsing of the time and time series can be skipped
*/
boolean isCurrentTimeSeriesHeaderForCurrentTimeRejected();
/**
* Changes the flag that will be used for when calling {@link #applyCurrentFields()}
*/
void setFlag(int flag);
/**
* Changes the flag that will be used for when calling {@link #applyCurrentFields()}
*/
void setFlag(String flag);
/**
* Changes the sample id that will be used for when calling {@link #applyCurrentFields()}
*/
void setSampleId(String sampleId);
/**
* Changes the out of detection range that will be used when calling {@link
#applyCurrentFields()}
*/
void setOutOfDetectionRangeFlag(OutOfDetectionRangeFlag flag);
/**
* Changes the comment that will be used when calling {@link #applyCurrentFields()}
*/
void setComment(String comment);
/**
* When an overruling geometry is defined (in [Link]) this geometry will overrule the
geometry set with {@link #setGeometry(Geometry)}
* When there is an overruling geometry the content handler will log an error when the
number of rows and cols is not the same as set with {@link #setGeometry(Geometry)}
* @return
*/
Geometry getOverrulingGeometry();
/**
* Used by the parser to create a geometry when there is no geometry info available in the
file
*/
GeoDatum getDefaultGeoDatum();
/**
* Changes the geometry that will be used when calling {@link #applyCurrentFields()}
*
* When only the number of rows and cols are available use {@link
NonGeoReferencedGridGeometry#create(int, int)}
* @see #getDefaultGeoDatum()
*/
void setGeometry(Geometry geometry);
/**
* Changes the value resolution that will be used when calling {@link #applyCurrentFields()}
* Only to be used when the file format doesn't use IEEE floats to store the values
* e.g. When only integers are parsed, the value resolution is 1.0
* e.g. When there are at maximum two decimals, the value resolution is 0.01
* e.g. When the file format stores the values as integers and divides the integers by 5
afterwards, the value resolution is 0.2
*
* @param valueResolution
*/
void setValueResolution(float valueResolution);
/**
* Changes the value that will be used when calling {@link #applyCurrentFields()}
*/
void setValue(float value);
/**
* Changes the value that will be used when calling {@link #applyCurrentFields()}
* When the value can not be parsed an error will be logged, no exception is thrown
* Add missing value tags before calling this function {@link #addMissingValue(String)}
* e.g. addMissingValue('?')
*
* @param value leading and trailing spaces are ignored
*/
void setValue(char decimalSeparator, String value);
/**
* Puts the coverage values for the last set time and last set header with the last set
flag and last set geometry
* When {@link #getOverrulingGeometry()} returns not null there is no need to set the
geometry
* When the active geometry does not have the same number of rows and cols an error message
is logged.
* For performance reasons do not parse the values when {@link
#isCurrentTimeSeriesHeaderForCurrentTimeRejected()} returns true
* For performance reasons do no recreate the values array for every time step again
*/
void setCoverageValues(float[] values);
/**
* Saves the current fields for the current time series header and current time.
* The current fields are not cleared so it is only required
* to update the changed fields for the next call to {@link #applyCurrentFields()}
*/
void applyCurrentFields();
}
]]>
Where to Use? This can be used for importing NetCDF data into the Delft-FEWS system.
Why to Use? The advantage of importing NetCDF data directly from an OPeNDAP server, as opposed to importing local NetCDF files, is
that the files do not have to be stored locally. Furthermore if only part of a file is needed, then only that part will be
downloaded instead of the entire file. This can save a lot of network bandwidth (i.e. time) for large data files.
Preconditions: The data to import needs to be available on an OPeNDAP server that is accessible by the Delft-FEWS system.
Contents
Overview
How to import data from an OPeNDAP server
Import configuration
Id map configuration
Import data from a single file
Import data from a catalog
Import data for a given variable
Import data for a given period of time
Import data for a given subgrid
Import data from a password protected server
Import data from a server that uses SSL
Known issues
Export of data
Related modules and documentation
Internal
External
Overview
OPeNDAP (Open-source Project for a Network Data Access Protocol) can be used to import NetCDF data from an OPeNDAP server directly into
Delft-FEWS. For more information on OPeNDAP see [Link]. Currently only the import of NetCDF files from an OPeNDAP server is
supported. Three types of NetCDF data can be imported: grid time series, scalar time series and profile time series. For more information on
these specific import types see their individual pages: NETCDF-CF_GRID, NETCDF-CF_TIMESERIES and NETCDF-CF_PROFILE.
Import configuration
Data can be imported into Delft-FEWS directly from an OPeNDAP server. This can be done using the Import Module. The following import types
currently support import using OPeNDAP:
NETCDF-CF_GRID Use this for importing grid time series that are stored in NetCDF format
NETCDF-CF_TIMESERIES Use this for importing scalar time series that are stored in NetCDF format
NETCDF-CF_PROFILE Use this for importing profile time series that are stored in NetCDF format
To instruct the import to use OPeNDAP instead of importing local files, specify a server URL instead of a local import folder. Below is an example
import configuration with a serverUrl element.
<timeSeriesImportRun xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<import>
<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</import>
</timeSeriesImportRun>
]]>
Here the serverURL is the URL of a file on an OPeNDAP server. For details on specifying the URL see Import data from a single file or Import
data from a catalog below. The time series set(s) define what data should be imported into Delft-FEWS. Only data for the configured time series
sets is downloaded and imported, all other data in the import file(s) is ignored. For more details see Import Module configuration options.
Id map configuration
The import also needs an id map configuration file that contains a mapping between the time series sets in the import configuration and the
variables in the file(s) to import. Below is an example id map configuration.
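A minimal sketch of such an id map; both ids below are hypothetical, with the internal id matching the parameterId of a time series set and the external id matching a variable name in the NetCDF file:
<idMap xmlns="......." version="1.1">
<!-- both ids below are illustrative only -->
<parameter internal="P.precipitation" external="precipitation_amount"/>
</idMap>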
Import data from a single file
To import data from a single file on an OPeNDAP server, the correct URL needs to be configured in the serverUrl element. To get the
correct URL for a single file:
1. Use a browser to browse to a data file on an OPeNDAP server, e.g.
[Link]
2. Copy the URL that is listed on the page after the keyword "Data URL:", e.g.
[Link]
3. Paste this URL in the serverUrl element in the import configuration file.
Import data from a catalog
Instead of specifying the URL of a single file on an OPeNDAP server, it is also possible to specify the URL of a catalog. The files on an
OPeNDAP server are usually grouped in folders and for each folder there is a catalog file available. The catalog usually contains a list of files and
subfolders, but can also refer to other catalog files. If the URL of a catalog file is specified for the import, then all files that are listed in the catalog
will be imported. Other catalogs that are listed in the specified catalog are also imported recursively.
A catalog file is usually called [Link]. The URL of a catalog file can be obtained in the following way.
For a THREDDS opendap server: First browse to a folder on the server. Then copy the current URL from the address line and replace ".html" at the end of the URL by ".xml".
For a HYRAX opendap server: First browse to a folder on the server. Then click on the link "THREDDS Catalog XML" at the bottom of the page. Then copy the current URL from the address line.
For example to import data from the folder [Link] use the catalog URL
[Link] in the import configuration. For example:
<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
]]>
Import data for a given variable
An import file (local or on an OPeNDAP server) can contain multiple variables. For each time series set in the import configuration the import uses
the external parameter id from the id map configuration to search for the corresponding variable(s) in the file(s) to import. If a corresponding
variable is found, then the data from that variable is imported. Only data for the found variables is downloaded and imported, all other data in the
import file(s) is ignored.
For NetCDF files the external parameter id is by default matched to the names of the variables in the NetCDF file to find the required variable to
import. There also is an option to use the standard_name attribute or long_name attribute of a variable in the NetCDF file as external parameter
id. To use this option add the variable_identification_method property to the import configuration, just above the time series set(s). For
example:
<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<properties>
<string value="long_name" key="variable_identification_method"/>
</properties>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
]]>
variable_identification_method / behaviour:
standard_name: All external parameter ids are matched to the standard_name attributes of the variables in the NetCDF file to find the required variable(s) to import.
long_name: All external parameter ids are matched to the long_name attributes of the variables in the NetCDF file to find the required variable(s) to import.
variable_name: All external parameter ids are matched to the names of the variables in the NetCDF file to find the required variable(s) to import.
If the variable_identification_method property is not present, then variable_name is used by default. The variable_identification_method
property currently only works for the import types NETCDF-CF_GRID, NETCDF-CF_TIMESERIES and NETCDF-CF_PROFILE.
Currently it is not possible to import data from the same variable in the import file to multiple time series sets in Delft-FEWS. If
required, this can be done using a separate import for each time series set.
Import data for a given period of time
To import only data for a given period of time, specify either a relative period or an absolute period in the general section of the import
configuration file. See relativeViewPeriod, startDateTime and endDateTime for more information. The import will first search the metadata of each
file that needs to be imported from the OPeNDAP server. Then for each file that contains data within the specified period, only the data within the
specified period will be imported. The start and end of the period are both inclusive.
This can be used to import only the relevant data if only data for a given period is needed, which can save a lot of time. However, for this to work
the import still needs to search through all the metadata of the file(s) to be imported. So for large catalogs that contain a lot of files, it can still take
a lot of time for the import to download all the required metadata from the OPeNDAP server.
Example: to import only data within the period from 2007-07-01 [Link] to 2008-01-01 [Link], add the following lines to the import
configuration:
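A sketch of these lines, placed in the general section; the times of day (here assumed to be midnight) should match the period you actually need:
<startDateTime date="2007-07-01" time="00:00:00"/>
<endDateTime date="2008-01-01" time="00:00:00"/>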
Alternatively you can use the relativeViewPeriod element to set a period to import relative to the T0. If you do this you can use the manual
forecast dialog to set the period to import data from, using the Cold/Warm state selection options.
Import data for a given subgrid
Importing data for a subgrid currently only works for regular grids.
This section only applies to the import of grid data. For data with a regular grid that is imported from a NetCDF file, it is in most cases not required
to have a grid definition in the [Link] configuration file, because for regular grids the import reads the grid definition from the NetCDF file and
stores it directly in the datastore of Delft-FEWS. If for the imported data there is no grid definition present in the [Link]
configuration file, then data for the entire grid is imported.
To import data for only part of the original grid, it is required to specify a grid definition in the [Link] configuration file. The grid definition defines
the part of the grid that needs to be imported. In other words the grid definition defines a subgrid of the original grid. In this case only data for the
configured subgrid is downloaded and imported, the data for the rest of the original grid is ignored. The following restrictions apply:
For example, to import data for a subgrid from the URL [Link] use e.g. the following
grid definition in the [Link] file. In this example a subgrid of 5x5 cells is imported, where the cell center longitude coordinates range from 0 to 8
degrees and the cell center latitude coordinates range from 50 to 58 degrees.
<rows>5</rows>
<columns>5</columns>
<geoDatum>WGS 1984</geoDatum>
<firstCellCenter>
<x>0</x>
<y>58</y>
</firstCellCenter>
<xCellSize>2</xCellSize>
<yCellSize>2</yCellSize>
]]>
For more information about the configuration of grid definitions in Delft-FEWS see Grids.
Import data from a password protected server
For importing data from a password protected OPeNDAP server, it is required to configure a valid username and password for accessing the
server. This can be done by adding the user and password elements (see Import Module configuration options#user) to the import configuration,
just after the serverUrl element.
This currently only works for importing a single file, this does not work when using a catalog.
<general>
<importType>NETCDF-CF_GRID</importType>
<serverUrl>[Link]
<user>kermit</user>
<password>gr33n</password>
<startDateTime date="2007-07-01" time="[Link]"/>
<endDateTime date="2008-01-01" time="[Link]"/>
<idMapId>OpendapImportIdMap</idMapId>
<missingValue>32767</missingValue>
</general>
<timeSeriesSet>
<moduleInstanceId>OpendapImport</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>gridLocation1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
]]>
Import data from a server that uses SSL
For importing data from an OPeNDAP server that communicates using SSL, the certificate of the server has to be either validated by a known
certificate authority or present and trusted in the local certificate store. To add a certificate to the local Delft-FEWS certificate store, first export the
certificate file from the server using a browser, then import the certificate file into the certificate store using e.g. the following command on the
command line:
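A plausible form of this command, assuming the standard Java keytool -import syntax (adjust the keytool and keystore paths to your own installation):
keytool -import -alias aliasName -file fileName -keystore <keystore file in the region home>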
where fileName is the pathname of the certificate file, aliasName is the alias to use for the certificate, G:\java\jre6\bin\[Link] is the pathname
of the Java [Link] file (depends on your Java installation) and G:\FEWS\[Link] is the pathname of the keystore file in the
Delft-FEWS region home directory (depends on your Delft-FEWS installation). The keystore file in the Delft-FEWS region home directory is
automatically read each time when Delft-FEWS starts.
2. Left click on the certificate icon.
3. Choose More Information -> Show Certificate -> Details -> Export
4. Follow the on screen instructions.
Known issues
Export of data
It is not possible to export data directly using the OPeNDAP protocol, since the OPeNDAP protocol only supports reading data from the
server. If it is required to export data from Delft-FEWS and make it available on an OPeNDAP server, then this can be done in two steps:
1. setup a separate OPeNDAP server that points to a given storage location. For instance a THREDDS server, which is relatively
easy to install. The OPeNDAP server picks up any (NetCDF) files that are stored in the storage location and makes these
available for download using OPeNDAP.
2. export the data to a NetCDF file using a Delft-FEWS export run. Export of grid time series, scalar time series and profile time
series is supported (respectively export types NETCDF-CF_GRID, NETCDF-CF_TIMESERIES and NETCDF-CF_PROFILE).
Set the output folder for the export run to the given storage location. That way the exported data will automatically be picked up
by the OPeNDAP server.
Internal
Import Module
Import Module configuration options
Available data types
NETCDF-CF_GRID
NETCDF-CF_TIMESERIES
NETCDF-CF_PROFILE
External
[Link]
OPeNDAP
THREDDS
importTimeZone:timeZoneOffset
importTimeZone:timeZoneName
gridStartPoint
dataFeedId
tolerance
startTimeShift
startTimeShift:locationId
startTimeShift:parameterId
properties
timeSeriesSet
externUnit
gridRecordTimeIgnore
Example: Import of Meteosat images as time-series
EA Import module
The time series import class can be applied to import data from a variety of external formats. The formats are included in an enumeration of
supported import types. Each of these enumerations is used for a specifically formatted file.
import
Root element for the definition of an import run task. Each task defined will import data in a specified format from a specified directory. For
defining multiple formats, different import tasks reading from different directories must be defined.
general
description
Optional description for the import run. Used for reference purposes only.
importType
Specification of the format of the data to be imported. The enumeration of options includes;
MSW : Import of data provided by the MSW System (Rijkswaterstaat, the Netherlands).
KNMI : Import of synoptic data from KNMI (Dutch Meteorological Service).
WISKI : Import of time series data from the WISKI Database system (Kisters AG).
DWD-GME : Import of NWP data of the DWD Global Modell, (German Meteorological Service). This is a grid data format.
DWD-LM : Import of NWP data of the DWD Lokal Modell, (German Meteorological Service). This is a grid data format.
GRIB : Import of the GRIB data format. General format for exchange of meteorological data.
EVN: Import of data in the EVN format (Austrian Telemetry)
METEOSAT: Import of images from the Meteosat satellite.
folder
Location to import data from. This may be a UNC path (ie located on the network), an sftp or http location, or a database.
JDBC example:
<general>
<importTypeStandard>database</importTypeStandard>
<jdbcDriverClass>[Link]</jdbcDriverClass>
<jdbcConnectionString>jdbc:mysql://[Link]/cwb_ac</jdbcConnectionString>
<user>sobek</user>
<password>Tohek>cwa</password>
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="1"
unit="day"/>
<table name="qpe_sums_obs">
<dateTimeColumn name="rehdate"/>
<valueColumn name="rad_gz" unit="mm/hr" locationId="Qesums" parameterId="[Link]"
parser="Mosaic"/>
</table>
<table name="qpe_sums_fo">
<forecastDateTimeColumn name="createdate"/>
<dateTimeColumn name="raddate"/>
<valueColumn name="rad_gz" unit="mm/hr" locationId="Qpesums" parameterId="[Link]"
parser="Mosaic"/>
</table>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>QPE_Sum</dataFeedId>
</general>
sftp example:
<general>
<importType>TypicalAsciiForecast</importType>
<folder>s[Link]
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="3"
unit="day"/>
<unitConversionsId>ImportUnitConversions</unitConversionsId>
<importTimeZone>
<timeZoneOffset>+00:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>Forecast</dataFeedId>
</general>
http example:
<general>
<importType>RemoteServer</importType>
<serverUrl>[Link]
<relativeViewPeriod startOverrulable="true" endOverrulable="true" start="-1" end="0"
unit="day"/>
<idMapId>IdImportRO</idMapId>
<importTimeZone>
<timeZoneOffset>+10:00</timeZoneOffset>
</importTimeZone>
<dataFeedId>RO</dataFeedId>
</general>
failedFolder
Folder to move badly formatted files to. This may be a UNC path (ie located on the network).
user
User name, required when importing from protected database connections or protected servers.
password
Password, required when importing from protected database connections or protected servers.
relativeViewPeriod
The relative period for which data should be imported. This period is relative to the time 0 of the run. When the start and end time are overrulable
the user can specify the download length with the cold state time and forecast length in the manual forecast dialog. It is also possible to import
data for an absolute period of time using the startDateTime and endDateTime elements.
startDateTime
Start date and time of the (absolute) period for which data should be imported. Start is inclusive. This dateTime is in the configured
importTimeZone. It is also possible to import data for a relative period of time using the relativeViewPeriod element.
endDateTime
End date and time of the (absolute) period for which data should be imported. End is inclusive. This dateTime is in the configured
importTimeZone. It is also possible to import data for a relative period of time using the relativeViewPeriod element.
idMapId
ID of the IdMap used to convert external parameterId's and locationId's to internal parameter and location Id's. Each of the formats specified will
have a unique method of identifying the id in the external format. See section on configuration for Mapping Id's units and flags.
unitConversionsId
ID of the UnitConversions used to convert external units to internal units. Each of the formats specified will have a unique method of identifying
the unit in the external format. See section on configuration for Mapping Id's units and flags.
flagConversionsId
ID of the FlagConversions used to convert external data quality flags to internal data quality flags. Each of the formats specified will have a unique
method of identifying the flag in the external format. See section on configuration for Mapping Id's units and flags.
missingValue
importTimeZone
Time zone the external data is provided in if this is not specified in the data format itself. This may be specified as a timeZoneOffset, or as a
specific timeZoneName.
importTimeZone:timeZoneOffset
The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)
importTimeZone:timeZoneName
Enumeration of supported time zones. See appendix B for list of supported time zones.
gridStartPoint
Identification of the cell considered as the first cell of the grid. This may be in the upper left corner or in the lower left corner. Enumeration of
options include;
dataFeedId
Optional id for the data feed. If not provided then the folder name will be used. This id is used in the SystemMonitorDisplay in the import status tab.
tolerance
Definition of the tolerance for importing time values to cardinal time steps in the series to be imported to. Tolerance is defined per
location/parameter combination. Multiple entries may exist.
Attributes;
startTimeShift
Specification of a shift to apply to the start time of a data series to be imported as external forecasting. This is required when the time value of the
first data point is not the same as the start time of the forecast. This may be the case in for example external precipitation values, where the first
value given is the accumulative precipitation for the first time step. The start time of the forecast is then one time unit earlier than the first data
point in the series. Multiple entries may exist.
startTimeShift:locationId
startTimeShift:parameterId
properties
Available since Delft-FEWS version 2010.02. These properties are passed to the time series parser that is used for this import. Some (external
third party) parsers need these additional properties. See documentation of the (external third party) parser you are using.
timeSeriesSet
TimeSeriesSet to import the data to. Multiple time series sets may be defined, and each may include either a (list of) locationId's or a
locationSetId. Data imported is first read from the source data file in the format specified. An attempt is then made to map the locationId's and the
parameterId's as specified in the IdMap's to one of the locations/parameters defined in the import time series sets. If a valid match is found, then
the time values are mapped to those in the timeSeriesSet, taking into account the tolerance for time values. A new entry is made in the timeSeries
for each valid match made.
For non-equidistant time series the time values imported will be taken as is. For equidistant time series values are only returned on the cardinal
time steps. For cardinal time steps where no value is available, no data is returned.
externUnit
For some data formats an external unit is not defined in the file to be imported. This elements allows the unit to be specified explicitly. This unit is
then used in possible unit conversions.
Attributes;
parameterId: Id of the parameter for which a unit is specified. This is the internal parameter Id.
unit: specification of unit. This unit must be available in the UnitConversions specified in the unitConversionsId element.
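A minimal sketch of this element; the parameter id and unit below are illustrative only:
<externUnit parameterId="P.obs" unit="mm"/>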
gridRecordTimeIgnore
Boolean flag to specify if the start of forecast is read from the GRIB file or if it is inferred from the data imported. In some GRIB files a start of
forecast is specified, but the definition of this may differ from that used in DELFT-FEWS.
When importing grid data from file formats where the attributes of the grid is not specified in the file being imported (ie the file is
not self-describing), a definition of the grid should be included in the Grids configuration (see Regional Configuration).
It is also advisable to define the grid attributes for self-describing grids such as those imported from GRIB files. If no GRIB data
is available, then DELFT-FEWS will require a specification of the grid to allow a Missing values grid to be created.
Meteosat Images are generally imported as images in [filename].png format. The Meteosat images constitute a time series of png images, that
are geo-referenced by means of a specific world file. Each image needs its own world file, which in case of PNG carries the extension
[filename].pgw .
Import of images in another format, such as JPEG, is also possible. The corresponding world file for a JPEG file has the extension [filename].jgw .
The images are imported via a common time series import, for which a specific image parameter needs to be specified in a parameterGroup via
the parameter id image .
<parameterGroup id="image">
<parameterType>instantaneous</parameterType>
<unit>-</unit>
<valueResolution>8</valueResolution>
<parameter id="image">
<shortName>image</shortName>
</parameter>
</parameterGroup>
The value resolution indicates the resolution of the values of the pixels (grey tones) in the Meteosat images. In this case 8 grey tones are
resampled into a single grey tone for storage space reduction. In the module for the time series import run for a Meteosat image the import is
then configured as follows:
<import>
<general>
<importType>GrayScaleImage</importType>
<folder>$REGIONHOME$/Import/MeteoSat</folder>
<idMapId>IdImportMeteosat</idMapId>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMeteosat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>image</parameterId>
<locationId>meteosat</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>4</synchLevel>
<expiryTime unit="day" multiplier="750"/>
</timeSeriesSet>
</import>
EA Import module
A specific import class is available for importing time series data from the XML format specified by the UK Environment Agency. The configuration
items required are a sub-set of those required in the more generic time series import format. This is due to much of the required information being
available in the XML file itself (ie file is self describing).
Figure 63 Elements of the EAImport configuration.
04 Export modules
Introduction
The export module allows (observed and forecast) data from DELFT-FEWS to be exported for use in external sources. On exporting data, the
approach to be used for converting flags, units, locations and parameters can be defined. These conversions are identified by referring to the
appropriate configuration files (see Regional Configuration).
Required: no
Entry in ModuleDescriptors:
<moduleDescriptor id="ExportRun">
<description>Export module to export EATimeseriesDataExchangeFormat compliant files</description>
<className>[Link]</className>
</moduleDescriptor>
Files exported are written to the path specified. The filename of exported files is constructed as a time string (in milliseconds).
An optional prefix can be applied to the time stamp string.
When available as configuration on the file system, the name of the XML file for configuring an instance of the export module called for example
ExportForecast may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 64 Elements of the exportRun configuration.
folder
Folder to export data to. This may be a UNC path (ie located on the network).
idMapId
ID of the IdMap used to convert internal parameterId's and locationId's to external parameter and location Id's. See section on configuration for
Mapping Id's units and flags.
unitConversionsId
ID of the UnitConversions used to convert internal units to external units. See section on configuration for Mapping Id's units and flags.
flagConversionsId
ID of the FlagConversions used to convert internal data quality flags to external data quality flags. See section on configuration for Mapping Id's
units and flags.
exportFilePrefix
exportMissingValue
temporaryFilePrefix
Optional prefix to the file name when writing the file. This can be used by systems reading the file to identify if the file is being written, thus
avoiding concurrent reading/writing of a file. If not defined the prefix "tmp" is used. On completion of the file, an atomic replace of the filename is
done.
exportTimeZone
Time zone the external data is exported to. This may be specified as a timeZoneOffset, or as a specific timeZoneName.
timeZoneOffset
The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)
timeZoneName
Enumeration of supported time zones. See appendix B for list of supported time zones.
timeSeriesSet
TimeSeriesSets defining the data to be exported. Multiple time series sets may be defined, and each may include either a (list of) locationId's or a
locationSetId.
Overview
The GrdcTimeSeriesSerializer class can export any number of timeSeriesSet's but the following restrictions apply due to the nature of the GRDC
Near Real-Time Data Format Version 3.0:
for each locationId it expects exactly one timeSeriesSet with parameterId='Water Level' and exactly one timeSeriesSet with
parameterId='Discharge'. When not configured properly, an exception will be thrown.
the GRDC format enforces a specific file naming convention. This should be configured properly. When this convention is violated, a
warning is given, but no exception is thrown.
The following flags are set statically, because they are not stored in FEWS:
Configuration (Example)
A complete export module configuration consists of an ID Mapping file and an Export Module Instance file.
ModuleConfigFiles
[Link]
<timeSeriesExportRun ......>
<export>
<general>
<exportType>grdc</exportType>
<folder>$EXPORT_EFAS_FOLDER$</folder>
<exportFileName>
<name>-[Link]</name>
<prefix>
<currentTimeFormattingString>'NL-1008-'yyyyMMddHHmmss</
currentTimeFormattingString>
</prefix>
</exportFileName>
<validate>false</validate>
<idMapId>IdExportEFAS</idMapId>
<exportTimeZone>
<timeZoneName>GMT</timeZoneName>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ImportMSW</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-MS-BORD</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-192" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</export>
</timeSeriesExportRun>
]]>
IdMapFiles
Defines mappings between FEWS parameters and locations and the expected GRDC locations and parameters.
[Link]
<idMap xmlns=".......">
<!---->
<parameter internal="Q.m" external="Discharge"/>
<parameter internal="H.m" external="Water Level"/>
<!---->
<location internal="H-MS-BORD" external="BORGHAREN"/>
<location internal="H-RN-0001" external="LOBITH"/>
</idMap>
]]>
Export module
What: [Link]
Required: no
Entry in ModuleDescriptors:
<moduleDescriptor id="TimeSeriesExportRun">
<description>Export module to export timeseries to various formats</description>
<className>[Link]</className>
</moduleDescriptor>
Configuration
General
description
exportTypeStandard
exportType
folder
exportFileName
validate
idmapId
unitConversionsId
flagConversionsId
exportMissingValue/exportMissingValueString
omitMissingValues
exportTimeZone
convertDatum
metadata
timeseriesSet
Configuration
The export module can export timeseries for use in other systems. The configuration of the module is split into three sections:
In the sections below the different elements of the configuration are described.
General
description
An optional description
exportTypeStandard
This type specifies which writer should be used to write the file. The type must be one from the enumeration. Presently (2007/02) only bfg and pi
are included in this list.
exportType
This type specifies which writer should be used to write the file. It may be any string as long as this type is supported by the TimeSeriesExport
module. The list of supported types is given below.
folder
exportFileName
This element describes how to construct the filename(s) of the exported file(s).
If only the name element is given, a fixed name is used for each export. The prefix and suffix elements describe how to create a filename prefix
and/or suffix. The temporaryPrefix is used to generate a prefix for the temporary file as it is being written. After that the file is renamed.
validate
Optional element. Only applicable if the data are exported to an XML file. This option activates the validation of the exported file against an XML
schema.
idmapId
unitConversionsId
flagConversionsId
exportMissingValue/exportMissingValueString
Missing value definition for this time series. Either a string or a number. Defaults to NaN if not defined.
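For example, to write -999 for missing values (the value is chosen for illustration only):
<exportMissingValue>-999</exportMissingValue>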
omitMissingValues
If set to true, records with missing values are not exported.
exportTimeZone
TimeZone in which to export the data. Can either be a string (timeZoneName) or an offset (timeZoneOffset).
convertDatum
Convert datum to local datum during export. The conversion will be done for all parameters which use a datum (as configured in [Link]).
The local datum is defined in the z element in the [Link] file.
metadata
TO BE COMPLETED
timeseriesSet
Define the timeseriesset to be exported. Please note that not all exports support all timeseriestypes (e.g. csv only supports scalar type).
Available Exports
Please note that new types are added regularly. Most of the exports are custom made for specific file formats. The preferred
format for new scalar exports is the Delft-FEWS Published Interface format (PI).
UM Aquo: Uitwissel Model Aquo (Dutch exchange XML file format), scalar
BfG Export
Introduction
Example
No example present
CSV Export
Introduction
Export scalar timeseries to csv type format (example config). The resulting csv files have three header rows. The first row contains the location
name for each data column, the second row the location Id for each data column, the third row the parameter. Date/time is in yyyy-mm-dd
hh:mm:ss format.
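As an illustration of this layout, a minimal sketch with hypothetical location names, ids, parameter and values (whether the date/time column carries header cells of its own is an assumption here):
,Location One,Location Two
,Loc1,Loc2
,H.obs,H.obs
2011-01-01 00:00:00,1.23,1.45
2011-01-01 01:00:00,1.25,1.44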
Example
[Link]
[Link]
[Link]
{
private CsvTimeSeriesSerializer serializer = new CsvTimeSeriesSerializer();
@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
[Link](',');
[Link](';');
[Link](content, writer, virtualFileName);
}
}
]]>
[Link]
{
private char decimalSeparator = '.';
private char columnSeparator = ',';
@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
if ([Link]([Link]())) [Link](-999f);
[Link](true);
[Link](locationHeader, columnSeparator);
[Link](parameterHeader, columnSeparator);
DINO Export
Introduction
The DINO-format is a text file with the extension .tuf.
It consists of a fixed block of text with information on the file.
The lines in the text block are marked with a #.
The following lines contain the information as specified below in 9 columns separated by a ,
NOTES:
assumed that the file always contains just 1 parameter for one or more locations;
parameter id or name is not mentioned in the file;
only non-missing values are written;
the number of decimals is zero;
the second column should contain the external parameter qualifier (which is 01 in this example);
missing values are indicated with an empty position (, ,)?
Example
#TNO_NITGEXCHANGE_FILE=
#VERSION= 1, 1, 0
#FILE_SOURCE=
#FILE_DATE=
#DATA_SET_NAME_IN= DINO
#DATA_SET_NAME_OUT=
#REMARK=
#OBJECT_MEASUREMENT_TYPE= GWL
#COLUMN= 9
#COLUMN_INFO= 1, OBJECT_ID
#COLUMN_INFO= 2, OBJECT_SUB_ID
#COLUMN_INFO= 3, DATE, YYYY/MM/DD
#COLUMN_INFO= 4, TIME, HH24:MI:SS
#COLUMN_INFO= 5, VALUE, CM, MP
#COLUMN_INFO= 6, REM
#COLUMN_INFO= 7, QLT
#COLUMN_INFO= 8, REL
#COLUMN_INFO= 9, NOTE
#COLUMN_SEPERATOR= ,
#DATA_INSERT_METHOD=
#DATA_UPDATE_METHOD=
#EOH=
B58G0294,01,2007/09/14,[Link],134,,,,
B58G0294,01,2007/10/01,[Link],137,,,,
B58G0294,01,2007/10/14,[Link],134,,,,
B58G0294,01,2007/10/29,[Link],131,,,,
B58G0294,01,2007/11/15,[Link],120,,,,
B58G0294,01,2007/11/30,[Link],102,,,,
B58G0294,01,2007/12/18,[Link],109,,,,
B58G0294,01,2008/01/14,[Link],106,,,,
B58G0294,01,2008/01/28,[Link],105,,,,
B58G0294,01,2008/02/15,[Link],105,,,,
B58G0294,01,2008/03/03,[Link],116,,,,
B58G0294,01,2008/03/14,[Link],109,,,,
B58G0294,01,2008/03/31,[Link],84,,,,
B58G0295,01,2007/09/14,[Link],93,,,,
B58G0295,01,2007/10/01,[Link],82,,,,
B58G0295,01,2007/10/14,[Link],98,,,,
B58G0295,01,2007/10/29,[Link],98,,,,
B58G0295,01,2007/11/15,[Link],87,,,,
B58G0295,01,2007/11/30,[Link],89,,,,
B58G0295,01,2007/12/18,[Link],77,,,,
B58G0295,01,2008/01/14,[Link],75,,,,
B58G0295,01,2008/01/28,[Link],73,,,,
B58G0295,01,2008/02/15,[Link],67,,,,
B58G0295,01,2008/03/03,[Link],70,,,,
B58G0295,01,2008/03/14,[Link],70,,,,
B58G0295,01,2008/03/31,[Link],58,,,,
B58D0446,01,2007/09/14,[Link],287,,,,
B58D0446,01,2007/10/01,[Link],292,,,,
B58D0446,01,2007/10/14,[Link],292,,,,
B58D0446,01,2007/10/29,[Link],293,,,,
B58D0446,01,2007/11/15,[Link],280,,,,
B58D0446,01,2007/11/30,[Link],288,,,,
B58D0446,01,2007/12/18,[Link],280,,,,
B58D0446,01,2008/01/14,[Link],278,,,,
B58D0446,01,2008/01/28,[Link],282,,,,
B58D0446,01,2008/02/15,[Link],271,,,,
B58D0446,01,2008/03/03,[Link],272,,,,
B58D0446,01,2008/03/14,[Link],278,,,,
B58D0446,01,2008/03/31,[Link],263,,,,
B58G0296,01,2007/09/14,[Link],83,,,,
B58G0296,01,2007/10/01,[Link],80,,,,
B58G0296,01,2007/10/14,[Link],85,,,,
B58G0296,01,2007/10/29,[Link],73,,,,
B58G0296,01,2007/11/15,[Link],69,,,,
B58G0296,01,2007/11/30,[Link],66,,,,
B58G0296,01,2007/12/18,[Link],80,,,,
B58G0296,01,2008/01/14,[Link],78,,,,
B58G0296,01,2008/01/28,[Link],78,,,,
B58G0296,01,2008/02/15,[Link],78,,,,
B58G0296,01,2008/03/03,[Link],79,,,,
B58G0296,01,2008/03/14,[Link],76,,,,
B58G0296,01,2008/03/31,[Link],63,,,,
B58G0297,01,2007/09/14,[Link],80,,,,
B58G0297,01,2007/10/01,[Link],73,,,,
B58G0297,01,2007/10/14,[Link],80,,,,
B58G0297,01,2007/10/29,[Link],70,,,,
B58G0297,01,2007/11/15,[Link],56,,,,
B58G0297,01,2007/11/30,[Link],68,,,,
B58G0297,01,2007/12/18,[Link],76,,,,
B58G0297,01,2008/01/14,[Link],76,,,,
B58G0297,01,2008/01/28,[Link],77,,,,
B58G0297,01,2008/02/15,[Link],63,,,,
B58G0297,01,2008/03/03,[Link],65,,,,
B58G0297,01,2008/03/14,[Link],62,,,,
B58G0297,01,2008/03/31,[Link],48,,,,
B58D1904,01,2007/09/14,[Link],102,,,,
B58D1904,01,2007/10/01,[Link],101,,,,
B58D1904,01,2007/10/14,[Link],100,,,,
B58D1904,01,2007/10/29,[Link],97,,,,
B58D1904,01,2007/11/15,[Link],88,,,,
B58D1904,01,2007/11/30,[Link],86,,,,
B58D1904,01,2007/12/18,[Link],55,,,,
B58D1904,01,2008/01/14,[Link],79,,,,
B58D1904,01,2008/01/28,[Link],79,,,,
B58D1904,01,2008/02/15,[Link],77,,,,
B58D1904,01,2008/03/03,[Link],86,,,,
B58D1904,01,2008/03/14,[Link],71,,,,
B58D1904,01,2008/03/31,[Link],51,,,,
B58G0298,01,2007/09/12,[Link],199,,,,
B58G0298,01,2007/10/01,[Link],195,,,,
B58G0298,01,2007/10/16,[Link],204,,,,
B58G0298,01,2007/10/30,[Link],190,,,,
B58G0298,01,2007/11/15,[Link],176,,,,
B58G0298,01,2007/11/28,[Link],183,,,,
B58G0298,01,2007/12/28,[Link],0,,,,
B58G0298,01,2007/12/17,[Link],177,,,,
B58G0298,01,2007/12/28,[Link],,,,,
B58G0298,01,2008/01/15,[Link],166,,,,
B58G0298,01,2008/01/29,[Link],169,,,,
B58G0298,01,2008/02/18,[Link],168,,,,
B58G0298,01,2008/02/28,[Link],174,,,,
B58G0298,01,2008/03/12,[Link],175,,,,
B58G0298,01,2008/03/31,[Link],170,,,,
B58G0299,01,2007/09/12,[Link],194,,,,
B58G0299,01,2007/10/01,[Link],193,,,,
B58G0299,01,2007/10/16,[Link],194,,,,
B58G0299,01,2007/10/30,[Link],189,,,,
B58G0299,01,2007/11/15,[Link],168,,,,
B58G0299,01,2007/11/28,[Link],177,,,,
B58G0299,01,2007/12/17,[Link],165,,,,
B58G0299,01,2007/12/28,[Link],,,,,
B58G0299,01,2008/01/15,[Link],166,,,,
B58G0299,01,2008/01/29,[Link],171,,,,"aflezing was 1,17"
Fliwas Export
Introduction
Example
<?xml version="1.0" encoding="UTF-8"?>
<fliwas
xsi:schemaLocation="[Link]
version="1.0" xmlns="[Link]
xmlns:xsi="[Link]
<header gebied="fews" datum="2003-03-01" tijd="[Link]" volgnummer="1.0">
<riviertak naam="EA_H-2001">
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="2.11"/>
<waterstand km="200" stand="2.11"/>
<waterstand km="400" stand="2.11"/>
<waterstand km="600" stand="2.11"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="3.11"/>
<waterstand km="200" stand="3.11"/>
<waterstand km="400" stand="3.11"/>
<waterstand km="600" stand="3.11"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="4.11"/>
<waterstand km="200" stand="4.11"/>
<waterstand km="400" stand="4.11"/>
<waterstand km="600" stand="4.11"/>
</voorspelling>
<maximum>
<waterstand km="27" datum="2003-03-01" tijd="[Link]" stand="1.31"/>
<waterstand km="28" datum="2003-03-01" tijd="[Link]" stand="1.41"/>
</maximum>
</riviertak>
<riviertak naam="EA_H-2002">
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="3.51"/>
<waterstand km="100" stand="3.51"/>
<waterstand km="300" stand="3.51"/>
<waterstand km="500" stand="3.51"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="0" stand="4.51"/>
<waterstand km="100" stand="4.51"/>
<waterstand km="300" stand="4.51"/>
<waterstand km="500" stand="4.51"/>
</voorspelling>
<maximum>
<waterstand km="29" datum="2003-03-01" tijd="[Link]" stand="1.71"/>
</maximum>
</riviertak>
<riviertak naam="EA_H-2032">
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="111" stand="1.91"/>
<waterstand km="222" stand="1.91"/>
</voorspelling>
<voorspelling datum="2003-03-01" tijd="[Link]">
<waterstand km="111" stand="2.91"/>
<waterstand km="222" stand="2.91"/>
</voorspelling>
</riviertak>
</header>
</fliwas>
GIN Export
Introduction
GIN stands for 'Gemeinsame Informationsplattform Naturgefahren' and will be the central information hub for natural hazards events in
Switzerland. The GIN export functionality is available from 2010_01 onwards and patches will be made available for versions 2009_01 and
2009_02.
The configuration of the GIN export module follows the normal pattern, except that two qualifiers are necessary to get the proper values for the
attributes 'datasourceProvider' and 'abbreviation'.
<?xml version="1.0" encoding="UTF-8"?>
<Collection datasourceName="loc1" datasourceProvider="BAFU1"
xmlns:xsi="[Link]
<prediction abbreviation="Abbreviation1" datasourceName="loc1" datasourceProvider="BAFU">
<run>
<inittime>2003-11-05T[Link].000+0100</inittime><!-- init time hydrological model-->
<inittime2>2003-11-05T[Link].000+0100</inittime2><!-- init time meteorological model-->
</run>
<member>3</member>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.5161157</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.749531</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.35472012</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.91763437</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.29822087</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.038461924</par1>
</preddata>
<preddata>
<predtime>2003-10-05T[Link].000+0100</predtime>
<par1>0.94585866</par1>
</preddata>
</prediction>
</Collection>
Example configuration
<?xml version="1.0" encoding="UTF-8"?>
<timeSeriesExportRun xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<export>
<general>
<exportType>GIN_Export</exportType>
<folder>$EXPORT_FOLDER_ROOT$/GIN</folder>
<exportFileName>
<name>.[Link]</name>
<prefix>
<timeZeroFormattingString>[Link]</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportGIN</idMapId>
<exportMissingValue>-999</exportMissingValue>
<omitMissingValues>true</omitMissingValues>
<exportTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>ARMA_OberThur_COSMO7</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<qualifierId>BAFU</qualifierId>
<qualifierId>abbreviation1</qualifierId>
<locationId>H-2181</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" startOverrulable="false" end="72" endOverrulable="true"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</export>
<export>
<general>
<exportType>GIN_Export</exportType>
<folder>$EXPORT_FOLDER_ROOT$/GIN</folder>
<exportFileName>
<name>.[Link]</name>
<prefix>
<timeZeroFormattingString>[Link]</timeZeroFormattingString>
</prefix>
</exportFileName>
<idMapId>IdExportGIN</idMapId>
<exportMissingValue>-999</exportMissingValue>
<omitMissingValues>true</omitMissingValues>
<exportTimeZone>
<timeZoneOffset>+01:00</timeZoneOffset>
</exportTimeZone>
</general>
<timeSeriesSet>
<moduleInstanceId>EnsembleGIN</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<qualifierId>BAFU</qualifierId>
<qualifierId>abbreviation2</qualifierId>
<locationId>H-2181</locationId>
<timeSeriesType>external forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" startOverrulable="false" end="72" endOverrulable="true"/>
<readWriteMode>read only</readWriteMode>
<ensembleId>GIN</ensembleId>
</timeSeriesSet>
</export>
</timeSeriesExportRun>
GRDS Export
Introduction
Export scalar timeseries to GRDC type format (example config). GRDC-NRT-Format - for the exchange of near real-time hydrological data.
Example
iBever Export
Introduction
The iBever file format is a special CSV ASCII format for water quality data that can be imported by iBever. In FEWS it can be used to export
sample time series.
Each timeseries value in the timeseries content is written as a separate row in the file.
Each row contains information on the location ID and name, parameter ID and name, date and time, value, unit and flag.
Only non-missing values are printed.
More information on the iBever CSV format can be found on the iBever internet site:
[Link]
mpn_mpnident - Location ID
mwa_mwawrden - Value
mep_domgwcod - Unit
mps_domgwcod - Parameter ID
Example File
mpn_mpnomsch;mpn_mpnident;mwa_mwadtmb;mwa_mwatijdb;mwa_mwawrden;mep_domgwcod;mco_domgwcod;mrsinovs_domafkrt;hoe_domgwco
Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Tungelroysebeek trace 1;DTUNG001;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Tungelroysebeek trace 2;DTUNG002;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Tungelroysebeek trace 3;DTUNG003;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;24DDTS
WB Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Tungelroysebeek trace 1;DTUNG001;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Tungelroysebeek trace 2;DTUNG002;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Tungelroysebeek trace 3;DTUNG003;2007-05-09;[Link];2.0;ug/kg;10;<;NVT;44DDDS
WB Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Tungelroysebeek trace 1;DTUNG001;2007-05-09;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Tungelroysebeek trace 2;DTUNG002;2007-05-09;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Tungelroysebeek trace 3;DTUNG003;2007-05-09;[Link];1.0;ug/kg;10;<;NVT;44DDES
WB Sevenum buffer Groot Luttel;DSVNLUT1;2007-12-19;[Link];2.0;ug/kg;10;<;NVT;44DDTS
[Link]
[Link]
* Each timeseries value in the timeseries content is written as a separate row in the file.<br/>
* Each row contains information on the location ID and name, parameter ID and name, date and time, value, unit, flag
* Only non-missing values are printed.
* <p/>
* The following header information is present for every timeseries array:
* <li>iBever header line</li>
*/
public class IbeverTimeSeriesSerializer implements TextSerializer<TimeSeriesContent> {
private static final String[] HEADER_LINE = {
"mpn_mpnomsch", /* 1 locationName */
"mpn_mpnident", /* 2 locationID */
"mwa_mwadtmb", /* 3 beginDatum */
"mwa_mwatijdb", /* 4 beginTijd */
"mwa_mwawrden", /* 5 meetwaarde */
"mep_domgwcod", /* 6 eenheid */
"mrsinovs_domafkrt", /* 7 Detectiegrens*/
"mps_domgwcod", /* 8 Parameter code*/
"hoe_domgwcod", /* 9 Hoedanigheid code*/
"mco_domgwcod", /* 10 compartiment */
"wtt_cod", /* 11 watertype */
"mpn_mrfxcoor", /* 12 xcoord */
"mpn_mrfycoor"}; /* 13 ycoord */
@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
[Link](HEADER_LINE, ';');
line[0] = [Link]();
line[1] = locationIdParts[0];
line[2] = [Link]([Link](), "yyyy-MM-dd");
line[3] = [Link]([Link](), "HH:mm:ss");
line[4] = [Link]('.');
line[5] = [Link]();
line[6] = setOutOfDetectionRangeFlag([Link]());
line[7] = [Link]();
[Link](line, ';');
}
}
}
]]>
Menyanthes
Introduction
The Menyanthes file format is a special CSV ASCII format that can be imported by Menyanthes. In FEWS it can be used to export sample time
series.
Each export file consists of 3 parts: first a header, followed by a description of the series, and last the time series.
Each timeseries value in the timeseries content is written as a separate row in the file.
Each row contains information on the location ID and Qualifier id, date and time, value and flag.
Only non-missing values are printed.
More information on the Menyanthes format can be found on the Menyanthes internet site: [Link]
GEBRUIKERSNAAM - userName
REFERENTIE - NAP
LOCATIE - Location Id
FILTERNUMMER - Qualifier id
X COORDINAAT - Geometry X
Y COORDINAAT - Geometry Y
MAAIVELD - Geometry Z
GESCHAT
MEETPUNT - NAP
BOVENKANT FILTER
ONDERKANT FILTER
LOCATIE - Location Id
FILTERNUMMER - Qualifier id
STAND (NAP) - Value
BIJZONDERHEID - Flag
Source code
[Link]
Introduction
To be completed
Example
No example present
NETCDF-CF_GRID_MATROOS Export
Overview
This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
An example of the NETCDF-CF_GRID_MATROOS export can be found at NETCDF-CF_GRID. The only difference is that the exportType must
be changed to NETCDF-CF_GRID_MATROOS.
NETCDF-CF_GRID Export
Overview
This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
An example of the IdMapping used for the NETCDF-CF_GRID export is shown below.
If the parameter has an entry in the standard name CF table, you can enter it in the externalQualifier1 attribute of the parameter. The value of this
qualifier will be added as the standard_name attribute for this variable in the netcdf exported file.
]]>
NETCDF-CF_PROFILE_MATROOS Export
Overview
This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
An example of the NETCDF-CF_PROFILE_MATROOS export can be found at NETCDF-CF_PROFILE. The only difference is that the exportType
must be changed to NETCDF-CF_PROFILE_MATROOS.
200101010000_example.nc
_<offset>" ;
int connections(noelements, nodesperelement) ;
connections:long_name = "Left and right node for each element" ;
connections:_FillValue = -999 ;
float waterlevel(time, nonodes) ;
waterlevel:long_name = "Waterlevel" ;
waterlevel:units = "m" ;
waterlevel:_FillValue = -9999.f ;
// global attributes:
:title = "Netcdf data" ;
:institution = "Deltares" ;
:source = "export NETCDF-CF_PROFILE_MATROOS from FEWS" ;
:history = "Created at Thu Oct 15 [Link] GMT 2009" ;
:references = "[Link] ;
:Conventions = "CF-1.4" ;
:coordinate_system = "RD" ;
]]>
NETCDF-CF_PROFILE Export
Overview
This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Grids (NETCDF-CF_GRID_MATROOS)
An example of the IdMapping used for the NETCDF-CF_PROFILE export will be given below.
Note that in the IdMapping of the parameters, the external name must match the variable names as used by the netcdf file exactly (case
sensitive). The locations that are mapped refer to branch id's which are defined in the [Link].
If the parameter has an entry in the standard name CF table, you can enter it in the externalQualifier1 attribute of the parameter. The value of this
qualifier will be added as the standard_name attribute for this variable in the netcdf exported file.
</idMap>
]]>
Branches 1.00 [Link]
<branches xmlns:xsi="[Link] xmlns="[Link]
" xsi:schemaLocation="[Link]
[Link] version="1.1">
<geoDatum>Rijks Driehoekstelsel</geoDatum>
<branch id="Maastakken_NDB(Haringvliet)">
<branchName>Maastakken_NDB(Haringvliet)</branchName>
<startChainage>1030</startChainage>
<endChainage>321624</endChainage>
<pt label="R_MS_001_1" chainage="1030" z="40.32" z_rb="51.34" y="308594.236" x=
"176029.1129"/>
<pt label="R_MS_001_2" chainage="2061" z="41.79" z_rb="50.92" y="309427.7428" x=
"176631.808"/>
...
<pt label="N_NDB_92" chainage="321624" z="-7.82" z_rb="2.79" y="436953" x="57935.1"/>
</branch>
</branches>
]]>
NETCDF-CF_TIMESERIES_MATROOS Export
Overview
This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
An example of the NETCDF-CF_TIMESERIES_MATROOS export can be found at NETCDF-CF_TIMESERIES. The only difference is that the
exportType must be changed to NETCDF-CF_TIMESERIES_MATROOS.
200601010000_example.nc
NETCDF-CF_TIMESERIES Export
Overview
This export is available in DELFT-FEWS versions after 28-10-2009 (FEWS version 2009.02)
Profiles (NETCDF-CF_PROFILE)
Grids (NETCDF-CF_GRID)
Time series (NETCDF-CF_TIMESERIES_MATROOS)
Profiles (NETCDF-CF_PROFILE_MATROOS)
Grids (NETCDF-CF_GRID_MATROOS)
An example of the IdMapping used for the NETCDF-CF_TIMESERIES export will be given below. In this example, the mapped locations
correspond to the locations of the locationSet as defined above in the ExportNetcdf_Timeseries.xml.
If the parameter has an entry in the standard name CF table, you can enter it in the externalQualifier1 attribute of the parameter. The value of this
qualifier will be added as the standard_name attribute for this variable in the netcdf exported file.
IdExportNetCDF 1.00 [Link]
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemaLocation="[Link]
[Link] version="1.1">
<parameter externalQualifier1="discharge (not standardname, just for test)" internal="[Link]"
external="discharge"/>
Introduction
To be completed
Example
No example present
PI Export
Introduction
Export scalar timeseries to PI type format (example config). This xml format is described in detail in the Delft-Fews published interface
documentation.
Example
<?xml version="1.0" encoding="UTF-8"?>
<TimeSeries
xsi:schemaLocation="[Link]
[Link]
version="1.2" xmlns="[Link]
xmlns:xsi="[Link]
<timeZone>0.0</timeZone>
<series>
<header>
<type>accumulative</type>
<locationId>EA_H-2001</locationId>
<parameterId>Rainfall</parameterId>
<timeStep unit="second" multiplier="900"/>
<startDate date="2003-03-01" time="[Link]"/>
<endDate date="2003-03-01" time="[Link]"/>
<missVal>-999.0</missVal>
<stationName>Bewdley</stationName>
<units>m</units>
</header>
<event date="2003-03-01" time="[Link]" value="-999.0" flag="88"/>
<event date="2003-03-01" time="[Link]" value="0.0010" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0020" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0030" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0040" flag="44"/>
<event date="2003-03-01" time="[Link]" value="-999.0" flag="88"/>
<event date="2003-03-01" time="[Link]" value="0.0060" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0070" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.0080" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.009000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.010000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.011000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.012" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.013" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.014" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.015000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.016" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.017" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.018000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.019000001" flag="44"/>
<event date="2003-03-01" time="[Link]" value="0.020000001" flag="44"/>
</series>
</TimeSeries>
RAM Export
Introduction
The RAM export function exports time series in the Rhine Alarm Model format. The file is meant to contain daily time series of Level and Flow
series for the Rhine River. Make sure the exported timeSeriesSets all have the same length (relative view period); this period may contain
missing values.
The export function will first export all Level series, then all Flow series. Make sure the idMapping file converts all level series to a parameter ID
named "Level" and all flow series to a parameter ID named "Flow". Example of an ID mapping file: IdExportRAM.
ID mapping
<parameter internal="[Link]" external="Level"/>
<location internal="Andernach" external="ANDERNACH"/>
<location internal="Lobith" external="LOBITH"/>
...........
]]>
There is a standard footer at the end of the file. This footer is as follows:
[Stuwen]
StuwProgrammaS285 = -1
[Haringvlietsluizen]
SluisProgrammaLPH84 = -1
[Dispersie]
DispersieBerekend = -1
DispersieWaarde = 5
Example
[Water Levels]
Date=05.02.2008 00.00
Variable=WaterLevel
"Station","Level"
[Water Levels]
Date=06.02.2008 00.00
Variable=WaterLevel
"Station","Level"
"Maxau",4.0780106
"Speyer",2.9005127
"Worms",1.4478912
"Mainz",2.7116013
"Kaub",2.0306778
"Koblenz",2.4764786
"Andernach",3.2620697
"Bonn",3.4753723
"Keulen",3.6934166
"Dusseldorf",3.3034477
"Ruhrort",4.706806
"Wesel",4.499606
"Rees",4.093315
"Lobith",10.248271
"Driel boven",7.5582066
"Amerongen boven",5.8657036
"Hagestein boven",2.7667627
"H-RN-0908",3.9794693
[Water Levels]
Date=07.02.2008 00.00
Variable=WaterLevel
"Station","Level"
"Maxau",4.3678207
"Speyer",2.9759445
"Worms",1.5724945
......
......
[Flows]
Date=05.02.2008 00.00
Variable=Flow
"Station","Flow"
"Rheinfelden",624.347
[Flows]
Date=06.02.2008 00.00
Variable=Flow
"Station","Flow"
"Maxau",777.0
"Speyer",860.47656
"Worms",1008.7158
"Mainz",1295.96
"Kaub",1444.0793
"Koblenz",1528.0868
"Andernach",2208.8018
"Bonn",2224.2356
"Keulen",2332.453
"Dusseldorf",2385.7227
"Ruhrort",2511.8423
"Wesel",2610.2185
"Rees",2735.6772
"Lobith",2766.772
"Driel boven",473.542
"Amerongen boven",474.09454
"Hagestein boven",488.70618
"H-RN-0908",662.99994
"Rheinfelden",586.232
[Flows]
Date=07.02.2008 00.00
Variable=Flow
"Station","Flow"
........
........
[Stuwen]
StuwProgrammaS285 = -1
[Haringvlietsluizen]
SluisProgrammaLPH84 = -1
[Dispersie]
DispersieBerekend = -1
DispersieWaarde = 5
[Link]
[Link]
@Override
public void serialize(TimeSeriesContent content, LineWriter writer, String virtualFileName)
throws Exception {
[Link] = writer;
[Link] = content;
[Link]([Link]());
writeLevelEvents();
writeFlowEvents();
writeFooter();
}
[Link]("\"Station\"" + ',' + "\"Flow\"");
SHEF Export
Introduction
Example
TSD Export
Introduction
Export scalar timeseries to tsd type format (example config). This is a tab delimited file with two header rows. The first column contains the
date/time. Date format is yyyy-MM-dd HH:mm:ss. The first header line contains the parameter and the T0, the second line the location above
each column. As such, only one parameter can be exported per file.
Example
No example present
UM Aquo export
Introduction
The UM Aquo file format is a special XML format to exchange all types of time series data, defined by the Dutch IDsW. In FEWS it can only be
used to export sample time series. Currently only the format version of 2009 is supported.
More information on the UM Aquo file format can be found on the IDsW internet site for UM Aquo:
[Link]
The export module in FEWS requires additional information that should be supplied to the export module by using an idMap. In this
idMap the next four external qualifiers should be defined:
1. externalQualifier1 : eenheid
2. externalQualifier2 : hoedanigheid
3. externalQualifier3 : compartiment
4. externalQualifier4 : <landcode>;<waterbeheerderCode>;<waterbeheerder>
unit / eenheid
[Link]
hoedanigheid
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
compartiment
[Link]
[Link]
[Link]
[Link]
[Link]
landcode
always: NL (it is a Dutch standard only...)
waterbeheerderCode and waterbeheerder
[Link]
Example flagConversionFile
<flagConversions xmlns:xsi="[Link] xsi:schemaLocation="
[Link] [Link] xmlns="
[Link]
<flagConversion>
<inputFlag> <value>0</value></inputFlag>
<outputFlag> <value>0</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>1</value></inputFlag>
<outputFlag> <value>0</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>2</value></inputFlag>
<outputFlag> <value>0</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>3</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>4</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>5</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>6</value></inputFlag>
<outputFlag> <value>99</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>7</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>8</value></inputFlag>
<outputFlag> <value>50</value></outputFlag>
</flagConversion>
<flagConversion>
<inputFlag> <value>9</value></inputFlag>
<outputFlag> <value>99</value></outputFlag>
</flagConversion>
<defaultOuputFlag><value>0</value></defaultOuputFlag>
<missingValueFlag><value>99</value></missingValueFlag>
</flagConversions>
]]>
Example idMap
Below an example idMap file is listed that is used within the export for timeseries of the waterboard Vallei en Eem in the Netherlands. That is why
landcode=NL, waterbeheerderCode=10 and waterbeheerder=Waterschap Vallei en Eem.
<idMap xmlns:xsi="[Link] xmlns="[Link]
xsi:schemaLocation="[Link]
[Link] version="1.1">
<!-- external: UM Aquo parameter
internal: FEWS parameter
externalQualifier1 : eenheid
externalQualifier2 : hoedanigheid
externalQualifier3 : compartiment
externalQualifier4 : landcode;waterbeheerderCode;waterbeheerder
-->
<!-- Temperatuur parameters-->
<parameter externalQualifier3="LT;Lucht" externalQualifier4="NL;10;Waterschap Vallei en Eem"
externalQualifier1="oC;graad Celsius" internal="T_meting_lucht" externalQualifier2="NVT;Niet van
toepassing" external="T;Temperatuur"/>
<parameter externalQualifier3="OW;" externalQualifier4="NL;10;Waterschap Vallei en Eem"
externalQualifier1="oC;graad Celsius" internal="T_meting_oppwater" externalQualifier2="NVT;Niet
van toepassing" external="T;Temperatuur"/>
<parameter externalQualifier3="GW:Grondwater" externalQualifier4="NL;10;Waterschap Vallei en
Eem" externalQualifier1="oC;graad Celsius" internal="T_meting_grondwater" externalQualifier2=
"NVT;Niet van toepassing" external="T;Temperatuur"/>
<!-- Hoogte parameters-->
<parameter externalQualifier3="OW;Oppervlaktewater" externalQualifier4="NL;10;Waterschap
Vallei en Eem" externalQualifier1="m;meter" internal="WATHTE_meting" externalQualifier2=
"NAP;t.o.v. Normaal Amsterdams Peil" external="WATHTE;Waterhoogte"/>
<!-- Neerslag parameters-->
<parameter externalQualifier3="HW;Hemelwater" externalQualifier4="NL;10;Waterschap Vallei en
Eem" externalQualifier1="ml;milliliter" internal="NEERSG_meting" externalQualifier2="NVT;Niet van
toepassingl" external="NEERSG;Neerslag"/>
<!-- Debiet parameters-->
<parameter externalQualifier3="OW;Oppervlaktewater" externalQualifier4="NL;10;Waterschap
Vallei en Eem" externalQualifier1="m3/s;kubieke meter per seconde" internal="Q_meting"
externalQualifier2="NVT;Niet van toepassingl" external="Q;Debiet"/>
<parameter externalQualifier3="OW;Oppervlaktewater" externalQualifier4="NL;10;Waterschap
Vallei en Eem" externalQualifier1="m3/s;kubieke meter per seconde" internal="Q_berekend"
externalQualifier2="NVT;Niet van toepassingl" external="Q;Debiet"/>
<parameter externalQualifier3="OW;Oppervlaktewater" externalQualifier4="NL;10;Waterschap
Vallei en Eem" externalQualifier1="m3/s;kubieke meter per seconde" internal="Q_totaal"
externalQualifier2="NVT;Niet van toepassingl" external="Q;Debiet"/>
<!-- Druk parameters-->
<parameter externalQualifier3="LT;Lucht" externalQualifier4="NL;10;Waterschap Vallei en Eem"
externalQualifier1="B;Beaufort" internal="DRUK_meting_lucht" externalQualifier2="NVT;Niet van
toepassingl" external="DRUK;Druk"/>
</idMap>
]]>
[Link]
Rdbms Export
What [Link]
Required no
</moduleDescriptor>
Configuration
General
jdbcDriverClass
jdbcConnectionString
user
password
exportTimeWindow
exportTimeZone
moduleInstanceID
filter
RDBMS DDL/object creation scripts
Required database size (disk space)
Additional remarks
Configuration
The RdbmsExport module exports historical time series data to tables in an RDBMS. These tables must exist prior to running the module. Notice
that in the current version no qualifiers are supported!
In the sections below the different elements of the configuration are described
General
jdbcDriverClass
JDBC driver class to use for the connection to the RDBMS.
The FEWS installation contains drivers for Oracle, PostgreSQL and Firebird.
An Oracle example:
<jdbcDriverClass>[Link]</jdbcDriverClass>
jdbcConnectionString
An Oracle example:
<jdbcConnectionString>jdbc:oracle:thin:@localhost:1521:xe</jdbcConnectionString>
user
password
exportTimeWindow
Defines the time window for which to export data from FEWS.
The exportTimeWindow will be applied if it is within the aforementioned 10 days period. Then it will limit the amount of data exported to be within
the specified start and end.
exportTimeZone
<exportTimeZone>+01:00</exportTimeZone>
moduleInstanceID
Optional list of Module Instance Id's for which to export time series data.
filter
Optional list of Filter Id's for which to export time series data.
<filter filterID="TSI_productie" />
<filter filterID="TMX_ruw" />
<filter filterID="DINO_ruw" />
Tables for storage of data in the RDBMS must be present before the first execution of the RDBMS Export module.
Proper privileges must be assigned to the user account by the database administrator to insert/update data in these tables as well as (execution)
rights on the sequences and triggers in use.
Data model:
DDL scripts:
Oracle DDL
PostgreSQL DDL
Ms SQL Server DDL
Notice that it may be required to increase some column sizes, like [Link] from the default 256 characters to e.g. 1024 characters to
be able to store the complete string. If the size in the database is not large enough, the export will stop with an error (data truncation error).
One record in the TIMESERIEDATA table requires about 300 bytes. This means that 1.000.000 records take about 286 MB (300 * 1e6 / 1024 /
1024).
Additional remarks
The value of the timestep attribute (a string indicating the time step) in the Timeserie table will depend on the locale/language settings of
the computer which is running the export module.
Report Export
This Report Export module is one of the DELFT-FEWS export modules. This export module is responsible for retrieving reports generated by
forecasting runs from the database, and exporting these to the relevant directory structure on the web server. Reports can then be accessed from
there via the web interface to DELFT-FEWS. All reports are exported as is by the report module, i.e. the module is only responsible for distributing
reports that have already been created.
Access to these reports through the web server may be at different levels depending on the user in question. The report export module itself does
not explicitly consider these access rights, but exports the reports in such a structure to allow the static part of the web server to correctly
administer the access rights.
The web server will control access to all or some reports in these categories to appropriate users.
When available as configuration on the file system, the name of the XML file for configuring an instance of the report export module called for
example Report_Export may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 133 Elements of the reportExport module configuration
reportExportRootDir
Root directory to which all reports are to be exported. This directory is typically the root directory of the web server.
currentForecastReports
Root element for definition of exporting reports for the current forecast
currentForecastSubDir
excludeModuleInstanceId
Optional list of reports generated by report module instances that should not be included in the export of current forecasts.
exportForecastReports
Root element for definition of exporting reports from recent forecasts made. Includes both the current forecast and a configurable number of
recently made forecasts.
numberForecastsToExport
NOTE: The number defined here should comply with the number of links to other forecasts in the index_template.html file. This file is located in
the reportExportRootDir directory.
exportForecastSubDir
Directory to use as root for exporting other forecasts to. For identification a sub-directory is created for each forecast exported. This sub-directory
is constructed using the id of the taskRun it was created by.
excludeModuleInstanceId
Optional list of reports generated by report module instances that should not be included in the export of other forecasts.
exportSystemStatusReports
includeModuleInstanceId
List of reports, identified by the moduleInstanceId of the report module instances that created them, that should be included in the export of
system status reports.
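By way of illustration only, the elements described above might be combined as in the following sketch; the nesting shown here is inferred from the element descriptions and has not been checked against the schema, and all directory names and module instance ids are placeholders.
<reportExportRootDir>/var/www/fews_reports</reportExportRootDir> <!-- hypothetical web server root -->
<currentForecastReports>
<currentForecastSubDir>currentForecast</currentForecastSubDir> <!-- hypothetical sub-directory name -->
<excludeModuleInstanceId>Report_SystemStatus</excludeModuleInstanceId> <!-- hypothetical report module instance -->
</currentForecastReports>
<exportForecastReports>
<numberForecastsToExport>5</numberForecastsToExport> <!-- should comply with the links in index_template.html -->
<exportForecastSubDir>otherForecasts</exportForecastSubDir> <!-- hypothetical sub-directory name -->
</exportForecastReports>
<exportSystemStatusReports>
<includeModuleInstanceId>Report_SystemStatus</includeModuleInstanceId>
</exportSystemStatusReports>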
<className>[Link]</className>
</moduleDescriptor>
timeZone
timeZoneOffset
timeZoneName
time0Format
ensembleMemberCount
Burn-In Profile
length
timeSeries
Startup Activities
purgeActivity
filter
unzipActivity
zipActivity
Export Activities
exportStateActivity
description
moduleInstanceId
stateExportDir
stateConfigFile
stateLocations
stateSelection
loopTimeStep
writeIntermediateState
ExportTimeSeriesActivity
description
exportFile
exportBinFile
ignoreRunPeriod
includeThresholds
timeSerieSets
timeSerieSets: timeSerieSet
omitMissingValues
omitEmptyTimeSeries
forecastSelectionPeriod
ExportMapStacksActivity
description
exportFile
gridFile
locationId
gridName
gridFormat
timeSerieSet
exportProfilesActivity
ExportDataSetActivity
description
moduleInstanceId
ExportParameterActivity
description
moduleInstanceId
fileName
ExportTableActivity
description
exportFile
tableType
operation
parameters
locationId/locationSetId
exportNetcdfActivity
description
exportFile
timeSeriesSets
omitMissingValues
omitEmptyTimeSeries
ExportRunFileActivity
description
exportFile
properties
Execute Activities
executeActivity
description
command
arguments
environmentVariables
timeOut
overrulingDiagnosticFile
ignoreDiagnostics
GA Variables
Import Activities
description
importStateActivity
stateConfigFile
importTimeSeriesActivity
importMapStacksActivity
importPiNetcdfActivity
importProfilesActivity
Shutdown Activities
The General Adapter is the part of the DELFT-FEWS system that implements this feature. It is responsible for the data exchange with the
modules and for executing the modules and their adapters. The central philosophy of the General Adapter is that it knows as little as possible of
module specific details. Module specific intelligence is strictly separated from the DELFT-FEWS system. In this way an open system can be
guaranteed. Module specific intelligence required by the module to run is vested in the module adapters.
Communication between the General Adapter and a module is established through the published interface (PI). The PI is an XML based data
interchange format. The General Adapter is configured to provide the data required for a module to run in the PI format. A module adapter is then
used to translate the data from the PI to the module native format. Vice versa, results will first be exported to the PI format by a module adapter
before the General Adapter imports them back into DELFT-FEWS.
The General Adapter module can be configured to carry out a sequence of five types of tasks:
Startup Activities. These activities are run prior to a module run and any export or import of data. The activities defined are generally used to
remove files from previous runs that may influence the current run.
Export Activities. These activities define all items to be exported through the published interface XML formats to the external module,
prior to the module or the module adapters being initialised.
Execute Activities. The execute activities define the external executables or Java classes to be run. Tracking of diagnostics from these
external activities is included in this section.
Import Activities. These activities define all items to be imported following successful completion of the module run.
Shutdown Activities. These activities are run following completion of all other activities. The activities defined are generally used to remove
files no longer required.
Figure 65 Schematic interaction between the General Adapter and an external module
When available as configuration on the file system, the name of the XML file for configuring an instance of the general adapter module called for
example HBV_Maas_Forecast may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 66 Elements of the General Adapter configuration
general
burnInProfile
activities
Root element for the activities to be defined. The activities are defined in a fixed order (a skeleton sketch is given after this list):
startUpActivities
exportActivities
executeActivities
importActivities
shutDownActivities
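Taken together, the skeleton of a General Adapter configuration therefore looks roughly as follows. The root element name is assumed here and the content of each group is only indicated; the detailed elements are described in the remainder of this section.
<generalAdapterRun xmlns="..."> <!-- assumed root element name -->
<general>
... <!-- directories, id maps, time zone; see General settings below -->
</general>
<activities>
<startUpActivities> ... </startUpActivities>
<exportActivities> ... </exportActivities>
<executeActivities> ... </executeActivities>
<importActivities> ... </importActivities>
<shutDownActivities> ... </shutDownActivities>
</activities>
</generalAdapterRun>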
General settings
Figure 67 Elements of the general section of the general adapter configuration
description
piVersion
Version of the PI specification that is supported by the pre and post adapter.
rootDir
Root directory for the external module. Other directories can be defined relative to this rootDir using predefined tags (see comment box below).
workDir
Working directory to be used by the external module. When started this directory will be the current directory.
exportDir
Directory to export data from DELFT-FEWS to the external module. All Published Interface files will be written to this directory (unless overruled in
naming the specific export files).
exportDataSetDir
Directory to export module datasets from DELFT-FEWS to the external module. A module dataset is a ZIP file, which will be unzipped using this
directory as the root directory. If the zip file contains full path information, this will be included as a tree of subdirectories under this directory.
exportIdMap
ID of the IdMap used to convert internal parameterId's and locationId's to external parameter and location Id's. See section on configuration for
Mapping Id's units and flags.
exportUnitConversionsId
importDir
Directory to import result data from the external module to DELFT-FEWS. All Published Interface files will be read from this directory (unless
overruled in naming the specific export files).
importIdMap
ID of the IdMap used to convert external parameterId's and locationId's to internal parameter and location Id's. This may be defined to be the
same as the export IdMap, but may also contain different mappings. See section on configuration for Mapping Id's units and flags.
importUnitConversionsId
dumpFileDir
Directory for writing dump files to. Dump Files are created when one of the execute activities fails. A dump file is a ZIP file which includes all the
dumpDir directories defined. The dump file is created immediately on failure, meaning that all data and files are available as they are at the time of
failure and can be used for analysis purposes. The ZIP file name is time stamped to indicate when it was created.
dumpDir
Directory to be included in the dump file. All contents of the directory will be zipped. Multiple dumpDir's may be defined.
NOTE: ensure that the dumpDir does not include the dumpFileDir. This creates a circular reference and may result in corrupted ZIP files.
diagnosticFile
File name and path of diagnostic files created in running modules. This file should be formatted using the Published Interface diagnostics file
specification.
missVal
Optional specification of missing value identifier to be used in PI-XML exported to modules and imported from modules.
NOTE: it is assumed an external module uses the same missing value identification for both import and export data.
convertDatum
Optional Boolean flag to indicate level data is used and produced by the module at a global rather than a local datum. The convention in
DELFT-FEWS is that data is stored at a local datum. If set to true data in parameter groups supporting datum conversion will be converted on
export to the global datum by adding the z coordinate of the location. (see definition of parameters and locations in Regional Configuration).
timeZone
The time zone with reference to UTC (equivalent to GMT) for all time dependent data communicated with the module. If not defined, UTC+0
(GMT) will be used.
timeZoneOffset
The offset of the time zone with reference to UTC (equivalent to GMT). Entries should define the number of hours (or fraction of hours) offset.
(e.g. +01:00)
timeZoneName
Enumeration of supported time zones. See appendix B for list of supported time zones.
time0Format
ensembleMemberCount
Burn-In Profile
Burn-in profile for cold state starts. Used to replace the first part of a time series.
For time series with matching parameter-location ids, the first value is replaced by the initialValue. The element length defines the length of the
beginning of the time series that is to be replaced using linear interpolation.
length
timeSeries
Initial value (which should match cold state), location and parameter should be specified.
Startup Activities
purgeActivity
Root element of a purge activity used to delete files from previous runs. Multiple purge activities may be defined.
filter
Deleting a whole directory can be achieved by defining the directory path in the filter without any file filter options ( .).
eg: %ROOT_DIR%/exportDir/purgeDirectory
A directory can only be removed if it is a sub directory of the General Adapter root directory!
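A minimal sketch of a startup section with purge activities is shown below; the paths and file pattern are hypothetical and only illustrate the filter element described above.
<startupActivities>
<purgeActivity>
<filter>%ROOT_DIR%/importDir/*.xml</filter> <!-- hypothetical pattern: delete PI files left over from a previous run -->
</purgeActivity>
<purgeActivity>
<filter>%ROOT_DIR%/exportDir/purgeDirectory</filter> <!-- no file pattern: the whole sub directory is removed -->
</purgeActivity>
</startupActivities>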
unzipActivity
Root element of an unzip activity used to unpack a zip file and put the contained files in the directory of choice. Multiple unzip activities may be
defined.
zipActivity
Root element of a zip activity used to pack all files and subdirectories of an indicated directory to a zip file for later use/inspection. Multiple zip
activities may be defined.
description - optional description of the activity (for documentation only)
sourceDir - the name of the directory containing the files to be zipped
destinationZipFile - the name of the zip file to be created
Example:
<startupActivities>
<unzipActivity>
<sourceZipFile>extra_files.zip</sourceZipFile>
<destinationDir>%ROOT_DIR%/work</destinationDir>
</unzipActivity>
</startupActivities>
...
<shutdownActivities>
<zipActivity>
<sourceDir>%ROOT_DIR%/work</sourceDir>
<destinationZipFile>%ROOT_DIR%/inspection/[Link]</destinationZipFile>
</zipActivity>
</shutdownActivities>
Export Activities
Export activities are defined to allow exporting various data objects from DELFT-FEWS to the external modules. The list of objects that can be
exported (see figure above) includes;
Note that for most types of exportActivity, multiple entries may exist.
exportStateActivity
description
Optional description for the export states configuration. Used for reference purposes only.
moduleInstanceId
Id of the moduleInstance that has written the state to be exported. Generally this will be the same as the Id of the current instance of the General
Adapter. This can also be the ID of another instance of the General Adapter. The latter is the case when using a state in a forecast run that has
been written in an historical run.
stateExportDir
Directory to export the states to. This is the export location for the (binary) state files.
stateConfigFile
Name (and location) of the PI-XML file describing the states. If the directory location is not explicitly specified the file will be written in the exportDir
defined in the general section.
stateLocations
Root element for the description of the state. Both a read location and a write location will need to be defined. This allows the name of the
file(s)/directory to be different on read and write. Multiple locations may be defined, but these must all be of the same type.
Attributes type: indication of type of state to be imported. This may either be "directory" or "file". Note that multiple locations are supported
only if type is "file".
stateLocation - Root element of a state location
readLocation - Location where the external module will read the state. This is the location (and name of file/directory) where the General
Adapter writes the state.
writeLocation - Location where the external module is expected to write the state. This is the location (and name of file/directory) where
the General Adapter expects to read the state.
<stateLocations type="file">
<stateLocation>
<readLocation>[Link]</readLocation>
<writeLocation>[Link]</writeLocation>
</stateLocation>
</stateLocations>
stateSelection
Root element to specify how a state to be exported to the external module is to be selected. Two main groups are available, cold states and warm
states. Only one of these types can be specified. Note that if a warm state selection is specified and an appropriate warm state cannot be found, a
cold state will be exported by default.
coldState - Root element for defining the stateSelection method to always export a cold state.
groupId - Id of the group of cold states to be used. This must be a groupId as defined in the ColdModuleInstanceStateGroups
configuration (see Regional Configuration).
coldState:startDate - Definition of the start date of the external module run when using the cold state. This startDate is specified relative
to the start time of the forecast run. A positive startDate means it is before the start time of the forecast run.
warmState - Root element for defining the stateSelection method to search for the most suitable warm state.
stateSearchPeriod - Definition of the search period to be used in selecting a warm state. The database will return the most recent suitable
warm state found within this search period.
coldStateTime - Definition of the start time to use for a cold state if a suitable state is not found within the warm state search period.
insertColdState - When you set insertColdState to true, the defaultColdState is inserted into the WarmStates when no WarmState is
found inside the stateSearchPeriod. By default the cold state is not inserted as warm state
<stateSelection>
<warmState>
<stateSearchPeriod unit="hour" start="-48" end="0"/>
<coldStateTime unit="hour" value="-48"/>
<insertColdState>true</insertColdState>"
</warmState>
</stateSelection>
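If a cold state should always be used, the coldState variant might look like the sketch below; the attribute form of startDate is an assumption based on the description above, and the groupId is a placeholder for a group defined in the ColdModuleInstanceStateGroups configuration.
<stateSelection>
<coldState>
<groupId>Default</groupId> <!-- placeholder groupId from ColdModuleInstanceStateGroups -->
<startDate unit="hour" value="48"/> <!-- assumed attribute form; positive means before the start of the forecast run -->
</coldState>
</stateSelection>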
loopTimeStep
When specified, all activities are run in a loop to ensure that a state is produced on every cardinal time step between the time of the exported
state and T0. This has two advantages:
states are distributed over time equally and frequently. It is possible to start an update run from every point, also halfway through a cold update
run that spans several days.
restriction of memory consumption. You can run an update run over months without running out of RAM.
Do not specify a relative view period for all time series sets in the export activity.
writeIntermediateState
When specified, an extra state is written at the end of the state search period. Note that the run is then split in two. E.g. if my state search period
is -10 to -4 days, then there are two update runs, one from the time where a state was found to -4 and one from -4 to T0. A state is written at the
end of both runs (T0 and T0 - 4 days). You can additionally define a minimum run length. This is necessary for some runs that need a minimum
run length for e.g. PT updating. The run is then only split in two if both runs can be run over the minimum run length. If not, there is only one run
and the state is written at the end of this run (T0); no intermediate state is written.
<exportStateActivity>
<moduleInstanceId>HBV_AareBrugg_Hist</moduleInstanceId>
<stateExportDir>%ROOT_DIR%/FEWS/states</stateExportDir>
<stateConfigFile>%ROOT_DIR%/FEWS/states/[Link]</stateConfigFile>
<stateLocations type="file">
<stateLocation>
<readLocation>HBV_States.zip</readLocation>
<writeLocation>HBV_States.zip</writeLocation>
</stateLocation>
</stateLocations>
<stateSelection>
<warmState>
<stateSearchPeriod unit="hour" start="-240" end="-96"/>
</warmState>
</stateSelection>
<writeIntermediateState>true</writeIntermediateState>
<minimumRunLength unit="day" multiplier="4"/>
</exportStateActivity>
ExportTimeSeriesActivity
description
exportFile
Name (and location) of the PI-XML file with exported time series. If the directory location is not explicitly specified the file will be written in the
exportDir defined in the general section.
exportBinFile
When true, the events in the PI time series file are written to a binary file instead of the xml file. The written xml file will only contain the time series
headers and optionally a time zone. The binary file has the same name as the xml file, only the extension is "bin" instead of "xml". During PI time
series import the bin file is automatically read when available. The byte order in the bin file is always Intel x86.
ignoreRunPeriod
When true the run period, written in the pi run file, will not be extended.
includeThresholds
When true, any thresholds for the exported time series will be written in the time series headers.
timeSerieSets
timeSerieSets: timeSerieSet
TimeSeriesSets to be exported. These may contain either a (list of) locations or a locationSet. Multiple entries may be defined.
omitMissingValues
Defines whether missing values are to be written to the export file or left out.
omitEmptyTimeSeries
When true, a series is not exported when the time series is empty (or when omitMissingValues = true, when the time series is empty after
removing the missing values.)
forecastSelectionPeriod
Can be used to select all approved forecasts with a forecast start time lying within this period
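The elements above could be combined as in the following sketch. The file name, module instance, parameter and location set are hypothetical, and the ordering of elements within the activity is an assumption; the timeSerieSet content follows the pattern of the export examples earlier in this section.
<exportTimeSeriesActivity>
<exportFile>%ROOT_DIR%/FEWS/input/timeseries.xml</exportFile> <!-- hypothetical file name -->
<exportBinFile>true</exportBinFile> <!-- events go to a .bin file, the xml keeps only the headers -->
<timeSerieSets>
<timeSerieSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId> <!-- hypothetical -->
<valueType>scalar</valueType>
<parameterId>H.obs</parameterId> <!-- hypothetical -->
<locationSetId>GaugingStations</locationSetId> <!-- hypothetical -->
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-48" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSerieSet>
</timeSerieSets>
<omitMissingValues>true</omitMissingValues>
</exportTimeSeriesActivity>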
ExportMapStacksActivity
description
exportFile
Name (and location) of the PI-XML file describing the map stack of the exported grid time series. If the directory location is not explicitly specified
the file will be written in the exportDir defined in the general section.
gridFile
locationId
gridName
Name of the files for the grid to be exported. For grid files where each time slice is stored in a different file, this name is the prefix for the full file
name. The final file name is created using an index of files exported (e.g. the file name for the 4th time step is grid00000.004).
gridFormat
timeSerieSet
TimeSeriesSets to be exported. These should contain only one locationId. For exporting multiple grids, multiple exportMapStack activities should
be defined.
exportProfilesActivity
ExportDataSetActivity
description
moduleInstanceId
Optional reference to the moduleInstanceId of the moduleDataSet to be exported. If not defined the moduleInstanceId of the current module
instance is taken as a default (see section on Module Datasets and Parameters).
ExportParameterActivity
description
moduleInstanceId
Optional reference to the moduleInstanceId of the moduleParameter to be exported. If not defined the moduleInstanceId of the current module
instance is taken as a default (see section on Module Datasets and Parameters)
fileName
Name (and location) of the PI-XML file with exported parameters. If the directory location is not explicitly specified the file will be written in the
exportDir defined in the general section.
ExportTableActivity
Figure 1 Elements of the ExportTableActivity configuration
description
exportFile
File to which the table will be exported. This file is always placed in exportDir.
tableType
operation
parameters
Parameters for the convertEquation operation. Must include minimumLevel, maximumLevel and stepSize
locationId/locationSetId
exportNetcdfActivity
description
exportFile
File to which the data will be exported. This file is always placed in exportDir.
timeSeriesSets
omitMissingValues
Defines whether missing values are to be written to the export file or left out.
omitEmptyTimeSeries
The time series is not exported when the time series is empty, or when omitMissingValues = true and the time series is empty after removing the
missing values.
ExportRunFileActivity
description
exportFile
File to which the data will be exported. This file is always placed in exportDir.
properties
Kind of environment variables for the pre and post adapters. These properties are copied to the run file. This is also a convenient way to pass
global properties to a pre or post adapter. An adapter is not allowed to access the FEWS [Link] directly. Global properties (between $)
are replaced by their literal values before being copied to the run file. These extra options make an additional pre or post adapter configuration file
unnecessary. A sketch is given after the list of option types below.
Options:
string
int
float
bool
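A sketch of a run file activity with a few typed properties is given below; the key/value attribute form and all property names and values are assumptions made for illustration only.
<exportRunFileActivity>
<exportFile>run_info.xml</exportFile> <!-- hypothetical run file name -->
<properties>
<string key="outputDir" value="%ROOT_DIR%/output"/> <!-- hypothetical property -->
<int key="maxIterations" value="10"/> <!-- hypothetical property -->
<bool key="writeDebugFiles" value="false"/> <!-- hypothetical property -->
</properties>
</exportRunFileActivity>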
Execute Activities
Figure 75 Elements of the ExecuteActivity configuration
executeActivity
Root element for the definition of an execute activity. For each external executable or Java class to run, an executeActivity must be defined.
Multiple entries may exist.
description
Optional description for the activity. Used for reference purposes only.
command
executable - File name and location of the executable to run if the command is an executable. The file name may include environment
variables, as well as tags defined in the general adapter or on the [Link].
className - Name of the Java class to run if the command is defined as a Java class. This class may be made available to DELFT-FEWS in a
separate JAR file in the \Bin directory.
binDir - Directory with jar files and optionally native dlls. When not specified the bin dir and classloader of FEWS is used. When specified
the java class is executed in a private class loader; it will not use any jar in the FEWS bin dir. Only one class loader is created per binDir,
adapters should still not use static variables. All dependencies should also be in this configured bin dir.
arguments
environmentVariables
Root element for defining environment variables prior to running the executable/Java class.
timeOut
Optional timeout to be used when running the module (in milliseconds). If the run time exceeds the timeout, the run will be terminated and
considered as having failed.
overrulingDiagnosticFile
File containing diagnostic information about the activity. This file is always located in the importDir and overrules the global diagnostic file.
ignoreDiagnostics
For this activity no check is done on whether the diagnostics file is present or not.
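As a sketch, an execute activity combining the elements above might look as follows; the executable path, argument and timeout are hypothetical, and the argument child element name is an assumption.
<executeActivity>
<description>Run the external model adapter</description>
<command>
<executable>%ROOT_DIR%/bin/model_adapter.exe</executable> <!-- hypothetical executable -->
</command>
<arguments>
<argument>%ROOT_DIR%/run_info.xml</argument> <!-- hypothetical argument -->
</arguments>
<timeOut>600000</timeOut> <!-- 10 minutes, in milliseconds -->
<ignoreDiagnostics>true</ignoreDiagnostics>
</executeActivity>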
GA Variables
Several variables are available to be used as an argument to an external program. These are:
variable description
Import Activities
description
importStateActivity
Root element for importing modules states resulting from the run of the external modules. Multiple elements may be defined. If no state is to be
imported (for example in forecast run as opposed to state run), then the element should not be defined.
stateConfigFile - Fully qualifying name of the XML file containing the state import configuration
expiryTime - When the state is an intermediate result in a forecast run you can let the state expire. By default the expiry time is the same
as the module instance run.
synchLevel - Optional synch level for the state. Defaults to 0 if not specified (i.e. same as data generated by the forecast run).
stateConfigFile
Name (and location) of the PI-XML file describing the states to be imported. If the directory location is not explicitly specified the file will be
expected to be read from the importDir defined in the general section. This file contains all necessary information to define state type and location.
The moduleInstanceId of the state imported is per definition the current module instance.
importTimeSeriesActivity
Root element for importing scalar and polygon time series resulting from the run of the external modules. Multiple elements may be defined.
importFile and timeSeriesSet should be defined.
importFile - PI-XML file describing the time series to be imported. The file contains all information on the type of data to be imported (scalar,
longitudinal, grid, polygon). For all data types except the grid the file also contains the time series data. If the directory location is not
explicitly specified the file will be expected to be read from the importDir defined in the general section.
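A sketch of an import activity is given below; the file name, module instance, parameter, location set and readWriteMode are hypothetical, and the timeSeriesSet follows the same pattern as in the export activities.
<importTimeSeriesActivity>
<importFile>%ROOT_DIR%/FEWS/output/output.xml</importFile> <!-- hypothetical file name -->
<timeSeriesSet>
<moduleInstanceId>HBV_Maas_Forecast</moduleInstanceId> <!-- hypothetical; usually the current module instance -->
<valueType>scalar</valueType>
<parameterId>Q.sim</parameterId> <!-- hypothetical -->
<locationSetId>ForecastPoints</locationSetId> <!-- hypothetical -->
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode> <!-- assumed mode -->
</timeSeriesSet>
</importTimeSeriesActivity>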
importMapStacksActivity
Root element for importing grid time series resulting from the run of the external modules. Multiple elements may be defined. importFile and
timeSeriesSet should be defined.
importPiNetcdfActivity
Root element for importing grid time series in Pi-Netcdf format. importFile and timeSeriesSet should be defined.
importProfilesActivity
Root element for importing longitudinal profile time series resulting from the run of the external modules. Multiple elements may be defined.
importFile and timeSeriesSet should be defined.
Shutdown Activities
Lookup Table Module
The lookup table utility is predominantly applied as a forecasting tool for coastal forecasting. Typically, values such as predicted surge, wind
force and direction, wave height, and fluvial flow in an estuary are used to predict values at a number of points on the coast or in an estuary. These
values are generally defined as a lookup index. This index can then be resolved to a text string such as "Flood Warning" or "Severe Flood Warning" for
use in, for example, reports using the ValueAttributeMaps (see Regional Configuration).
Three types of lookup table are available:
Simple table lookup. This is a two-column (or two-row) table where the value at each time step in the input series is used to identify a relative
position in the first column (or row). The result value is found in the second column (or row) at the same relative position.
Multi-dimensional lookup. This is a lookup in a matrix. Two input series are required. One is used to find the relative row position in the
matrix at each time step, while the other is used to find the relative column position in the matrix. The output value is found by
resolving these relative positions in the matrix values using bi-linear interpolation.
Critical condition tables. These define a set of heuristic rules. Multiple inputs can be combined and an output is found through evaluating
the heuristic rules. A default output (also defined using rules) can be specified.
When available as configuration on the file system, the name of the XML file for configuring an instance of the lookup table module called for
example Coastal_Lookup_Forecast may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
LookupSet
Root element of the definition of a lookup table. Multiple entries may exist.
Attribute;
lookupSetId : Id of the lookup table. Used for reference purposes only (e.g. in log messages).
inputVariable
Definition of input variable to be used in the lookup table. For each entry in the lookup table an input variable will need to be identified.
The variableId is used to refer to the time series. See Transformation Module for definition of inputVariable configuration.
outputVariable
Definition of the output variable as a result of the lookup table. A single timeSeriesSet for one location is defined as the output variable per lookup
table.
comment
Optional comment on the lookup table configuration. Used for reference purposes only.
criticalConditionLookup
Root element for the definition of a critical condition table. In case none of the conditions specified return a result, a
defaultValue should be defined as a set of rules.
Attributes;
simpleTableLookup
Root element for definition of a simple table lookup. Multiple entries may exist.
Attributes;
multiDimensionalLookup
Root element for definition of a multidimensional lookup table. Multiple entries may exist.
Attributes;
lookUpRowVariableId Id of the input variable to be used in the table for finding relative row position.
lookUpColVariableId Id of the input variable to be used in the table for finding the relative column position.
outputVariableId Id of the output variable.
rows number of rows in lookup table (matrix).
cols number of columns in lookup table (matrix).
type Optional indication of the type of value in the lookup table. Enumeration of "float" or "int".
criticalConditionLookup
criticalCondition
Definition of a critical condition as a set of rules. Multiple entries may exist. When multiple entries do exist, these will be resolved
sequentially until a condition is met. The result is then written to the output time series. Each condition holds a set of rules. Each
rule is resolved to a Boolean true or false. Rules can be combined in ruleGroups using Boolean operators. If a "true" value is returned
through the combination of all rules and ruleGroups specified, then the conditions specified are met.
rule: string value for result to be returned if conditions specified are met (for reference purposes only).
ruleIndex: index value returned if the conditions specified are met. This is the value returned in the output time series. The value given is either a
numerical value enclosed in quotes (e.g. "4") or "Missing" to indicate that a missing value should be returned.
ruleCriteria
Root element for definition of set of rules and ruleGroups. Multiple ruleCriteria can be defined. These are combined using the logical
operator defined.
ruleCriteriaLogicalOperator
Operator for combining ruleCriteria to a single Boolean value. Enumeration of "and" and "or".
rule
Attributes;
value: Value to compare input variable to using operator defined.
logical: optional definition of the logical operator to combine a sequence of rules (for rules defined in a rule group only). Enumeration of "and"
and "or".
ruleGroup
Root element for defining a rule group. A rule group is a sequence of rules. Each rule is configured as defined above, and combined
using the logical operator given in the rule. The logical operator need not be included in the last rule defined.
Example:
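The original example is not reproduced here. The sketch below only illustrates the intended nesting of ruleCriteria, rules and a ruleGroup; the attribute names inputVariableId and operator are not listed above and are assumptions, as are all ids and values.

<ruleCriteria>
    <ruleCriteriaLogicalOperator>or</ruleCriteriaLogicalOperator>
    <rule inputVariableId="surge" operator="greaterthan" value="1.5"/>
    <ruleGroup>
        <rule inputVariableId="windspeed" operator="greaterthan" value="20" logical="and"/>
        <rule inputVariableId="waveheight" operator="greaterthan" value="3.0"/>
    </ruleGroup>
</ruleCriteria>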
defaultValue
The defaultValue element is identical to the specification of a criticalCondition as described above.
SimpleTableLookup
Figure 80 Elements of the simpleTableLookup configuration
LookUpData
Attributes;
number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
space
data
rowwise
Element to define the vector of data in which to look up values. The data value at the relative position is returned. Attributes are the same as for the LookUpData
element. Use this element if the data is provided as a row.
columnwise
Element to define the vector of data in which to look up values. The data value at the relative position is returned. Use this element if the data is provided as one
value per row (as a column).
Attributes;
number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
lineseparator
info
Element containing information on how values are determined from the lookup vector using the relative position found.
info:extrapolation
Definition of how to extrapolate when the relative position is above the last or below the first value in the vector. The enumeration includes;
none : no extrapolation, missing value is returned
minmax : limit values returned to minimum/maximum of vector
linear : linear extrapolation using last or first two values in vector
info:interpolation
Example:
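The original example is not reproduced here. The sketch below is only an illustration of the elements described above; the attributes on the simpleTableLookup element and the exact nesting of the lookUpData, data and info elements are assumptions and should be checked against the schema.

<simpleTableLookup lookUpVariableId="surge" outputVariableId="floodIndex">
    <lookUpData type="float" separator="space">0.0 0.5 1.0 1.5 2.0</lookUpData>
    <data>
        <rowwise type="int">0 1 2 3 4</rowwise>
    </data>
    <info>
        <extrapolation>minmax</extrapolation>
    </info>
</simpleTableLookup>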
MultiDimensionalLookup
lookupColData
Row vector of data used to find the relative position in the matrix columns for the input variable defined as lookUpColVariableId.
Attributes;
number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
space
lookupRowData
Row vector of data used to find the relative position in the matrix rows for the input variable defined as lookUpRowVariableId.
Attributes;
number : optional definition of number of entries (otherwise inferred from data provided)
type : optional type indication of data. Enumeration of "float" and "int".
separator : optional indication of separator string used between values. Default is space. Enumeration of;
space
rowwise
Element for defining rows of the matrix as a vector of data on one line. For the definition see simpleTableLookup. The number of rowwise
elements provided must be equal to the number of rows defined in the multiDimensionalLookup element. Each rowwise vector must
contain as many values as defined in cols in the multiDimensionalLookup element.
colwise
Element for defining columns of the matrix as vectors of data over multiple lines. For the definition see the simpleTableLookup element. The number
of colwise elements provided must be equal to the number of columns defined in the multiDimensionalLookup element. Each colwise
vector must contain as many values as defined in rows in the multiDimensionalLookup element.
Info
Example:
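The original example is not reproduced here. The sketch below follows the attributes and elements described above; all ids and data values are hypothetical, and the element order should be checked against the schema.

<multiDimensionalLookup lookUpRowVariableId="surge" lookUpColVariableId="windspeed"
        outputVariableId="floodIndex" rows="3" cols="4" type="int">
    <lookupColData number="4">0 5 10 15</lookupColData>
    <lookupRowData number="3">0.0 0.5 1.0</lookupRowData>
    <rowwise>0 0 1 2</rowwise>
    <rowwise>0 1 2 3</rowwise>
    <rowwise>1 2 3 4</rowwise>
</multiDimensionalLookup>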
07 Correlation Module
Correlation Module Configuration
correlationSet
inputTimeSeriesSet
outputTimeSerieSet
correlation
forecastLocationId
equationType
eventSetsDescriptorId
travelTimesDescriptorId
eventSelectionType
Comment
selectionCriteria
period
startDate
endDate
startTime
endTime
selectInsidePeriod
thresholds
thresholdLimit
selectAboveLimit
tags
tag
include
CorrelationEventSets
comment
correlationEventSet
locationId
parameterId
event
TravelTimesSets
travelTime
downstreamLocation
upstreamLocation
travelTime
validPeriod
The correlation module can be applied in two ways:
As an automatic forecasting module. The module is then run through a preconfigured workflow with all required inputs being retrieved
from those available in the database. Results are returned to the database. These results are then available for later viewing through, for
example, a suitably configured report. In this mode a module instance of the correlation module is defined as described below.
In interactive mode. In this mode the module is used through the correlation display available on the operator client. In this mode no
results are returned to the database. A module instance does not need to be created in this mode; all required configuration settings are
selected through appropriate options in the dialogue.
The correlation module uses two associated configuration items to establish correlations; these are the CorrelationEventSets and the
TravelTimesSets.
When available as configuration on the file system, the name of the XML file for configuring an instance of the correlation module called for
example Correlation_Severn_Forecast may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 82 Elements of the correlationSets configuration
correlationSet
inputTimeSeriesSet
Time series set to be used as input for the correlation. This time series set can either be a complete hydrograph (e.g. an equidistant time series), or
a non-equidistant series of peaks sampled using a transformation module defined previously in the workflow.
outputTimeSerieSet
Time series set to be used as output for the correlation. Values are returned at the same time steps as the input series.
correlation
forecastLocationId
LocationId for the forecast location. This is the location to be defined in the output time series set.
equationType
Attributes;
simple_linear
exponential_divide
exponential_multiply
power
logarithmic
hyperbolic
polynomialOrder : Integer value for the polynomial order. Applies only if the polynomial equation is selected.
eventSetsDescriptorId
Id of the event sets to be used. This id is defined in the CorrelationEventSetsDescriptors (see Regional Configuration). A suitable
CorrelationEventSets configuration must be available (see below).
travelTimesDescriptorId
Id of the travel times set to be used. This id is defined in the Regional Configuration. A suitable
TravelTimesSets configuration must be available (see below).
eventSelectionType
Method to be used in matching events. Events at the support and forecast locations can be paired either on the basis of common
event ids, or on the basis of a selection on travel time, where events at the upstream and downstream locations are paired if these are
found to belong to the same hydrological event as defined using the travel time criteria in the TravelTimes configuration. The enumeration
of options includes;
eventid
traveltime
Comment
Optional comment for correlation configuration. Used for reference purposes only.
selectionCriteria
period
startDate
Start date of time span to be used in selection (yyyy-mm-dd).
endDate
startTime
endTime
selectInsidePeriod
Boolean to indicate whether events are to be selected that fall inside the time span defined (true), or outside the time span defined (false).
thresholds
Root element for defining selection of events that fall above or below a threshold. The threshold is applied to events selected at the
forecast location.
thresholdLimit
selectAboveLimit
Boolean to indicate if events are to be selected that fall above the threshold if true. Events are selected below the threshold if false.
tags
Root element for defining selection of events on tags defined in the eventSets
tag
include
Boolean to define if events with given tag are to be included or excluded in selection.
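A sketch of the selectionCriteria element using the child elements listed above; dates, the threshold value, the tag and the exact nesting are assumptions and should be checked against the schema.

<selectionCriteria>
    <period>
        <startDate>1990-01-01</startDate>
        <endDate>2005-12-31</endDate>
        <startTime>00:00:00</startTime>
        <endTime>00:00:00</endTime>
    </period>
    <selectInsidePeriod>true</selectInsidePeriod>
    <thresholds>
        <thresholdLimit>2.5</thresholdLimit>
        <selectAboveLimit>true</selectAboveLimit>
    </thresholds>
    <tags>
        <tag>winter</tag>
        <include>true</include>
    </tags>
</selectionCriteria>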
CorrelationEventSets
This configuration file is related to the Correlation module, and is used to define the events used in establishing a correlation. The
configuration file is in the CorrelationEventSets (table or directory). Each configuration defined is referenced using a
CorrelationEventSetsId as defined in the Regional Configuration.
comment
Optional comment for correlation event sets configuration. Used for reference purposes only.
correlationEventSet
Root element for defining set of events at a location to be used in establishing correlations.
locationId
parameterId
event
Attributes;
TravelTimesSets
This configuration file is related to the Correlation module, and is used to define the travel time between locations. These travel times may be
used in matching events. The configuration file is in the TravelTimesSets (table or directory). Each configuration defined is referenced using a
TravelTimesSetsId as defined in the Regional Configuration.
travelTime
Root element for defining a set of travel times. Multiple entries may exist.
downstreamLocation
Attributes;
id : Id of the location
name : name of the location (for reference purposes only)
upstreamLocation
Attributes;
id : Id of the location
name : name of the location (for reference purposes only)
travelTime
Attributes;
unit : unit of time (enumeration of: second, minute, hour, day, week)
multiplier : defines the number of units given above in a time step.
divider : same function as the multiplier, but defines the fraction of a unit in a time step.
validPeriod
unit : unit of time (enumeration of: second, minute, hour, day, week)
start : start of the validity period
end : end of the validity period
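A sketch of a single travelTime entry based on the elements and attributes described above; the location ids, names and values are hypothetical, and whether validPeriod uses attributes exactly as shown is an assumption.

<travelTime>
    <downstreamLocation id="H-2001" name="Downstream gauge"/>
    <upstreamLocation id="H-1001" name="Upstream gauge"/>
    <travelTime unit="hour" multiplier="12"/>
    <validPeriod start="2000-01-01" end="2010-12-31"/>
</travelTime>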
Error correction module configuration
The error modelling module is a generic forecasting module. The module is used to improve the reliability of forecasts by attempting to identify the
structure of the error a forecasting module makes during the modelling phase where both the simulated and observed values are available, and
then applying this structure to the forecast values. This is under the assumption that the structure of the error remains unchanged. A description of
the background of this module can be found at AR Module Background information. In defining the error model three time series will need to be
defined;
Merged input time series of simulated model output for the historical period and of forecasted model output for the forecast period. The
time series in the historical period will be used for establishing the error model through comparison with the observed time series. The error
forecast will be applied to the time series in the forecast period.
Input time series for the observed data.
Output time series for the updated simulated data for the historical period and the updated forecast data for the forecast period.
Two methods of establishing an error model are available. The first uses an AR (Auto Regressive) model only, but allows the order of the model
to be determined automatically. The second method uses an ARMA model, but the order of both the AR and the MA (Moving Average) model
must be defined. In both cases various transformations may be applied to normalise the residuals prior to establishing the error model.
When available as configuration on the file system, the name of the XML file for configuring an instance of the error module called for example
GreatCorby_ErrorModel_Forecast may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 86 Elements of the error module configuration.
errorModelSet
inputVariable
Definition of the input variables to be used in the error correction model. At least two entries are required in the error model, one for the observed time
series and one for the simulated time series. For each entry an input variable will need to be identified. The variableId is used to refer to the time
series. See Transformation Module for the definition of the inputVariable configuration.
autoOrderMethod
Figure 87 Elements of the autoOrderMethod configuration.
orderSelection
Boolean to indicate if order of AR components should be established automatically or if the given order should be used.
order_ar
Order of the AR model. If orderSelection is true, then this value is the maximum order (may not exceed 50). In the literature an AR order of up to
3 is mostly chosen; higher orders are possible, but will have a smaller contribution to the overall result of the error correction.
order_ma
parameters
This optional setting can be used to exactly specify the values for all the parameters (multipliers, powers, dividers, etc.) used in the error correction
model. An example is shown below. Please note that you will need to establish these parameters first. One way to do this is to run a long historical
run with auto-parameters on. The log file will show the parameters determined by the model. These parameters can then be used to fix the parameters
for the forecast.
subtractMean
Boolean to indicate if mean of residuals should be subtracted prior to establishing error model.
boxcoxTransformation
Boolean to indicate if the residuals should be transformed using Box-Cox transformation prior to establishing error model.
lambda
Lambda parameter to use in Box-Cox transformation (note: value of 0 means the transformation is a natural logarithm). Values ranging from 0 to
0.5 are often used.
ObservedTimeSeriesId
Input time series set to be defined as the observed data to compare simulated model output to.
SimulatedTimeSeriesId
Input time series set to be defined as the simulated model output for both the historic and the forecast period. Multiple series will be combined into a
single series. Series with a higher index will be overlaid by series with a lower index.
OutputTimeSeriesId
Updated time series data generated by the error model. This series can contain data for both the historic and the forecast period.
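A sketch of the autoOrderMethod element using the elements described above; the element names and their casing follow this description and may differ from the schema, and the variable ids and values are hypothetical.

<autoOrderMethod>
    <orderSelection>true</orderSelection>
    <order_ar>10</order_ar>
    <subtractMean>true</subtractMean>
    <boxcoxTransformation>true</boxcoxTransformation>
    <lambda>0.3</lambda>
    <observedTimeSeriesId>QObserved</observedTimeSeriesId>
    <simulatedTimeSeriesId>QSimulated</simulatedTimeSeriesId>
    <outputTimeSeriesId>QUpdated</outputTimeSeriesId>
</autoOrderMethod>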
fixedOrderMethod
Root element for defining an error model using the ARMA structure.
Figure 88 Elements of the fixedOrderMethod configuration.
correctionModel
Structure of the error model to be used. The model selection includes the selection of initial transformations. The enumeration of options includes;
none
ARMA+ systematic
systematic
ARMA
ARMA+ log transformation
ARMA+ systematic+ log transformation
order_ar
Order of the AR part of the model. In the literature an AR order of up to 3 is mostly chosen; higher orders are possible, but will have a
smaller contribution to the overall result of the error correction.
order_ma
Order of the MA part of the model. The order you specify determines the length of the period affected by the moving average function. The higher
the order, the longer the affected period. The moving average model is not operational yet.
ObservedTimeSeriesId
Input time series set to be defined as the observed data to compare simulated model output to.
SimulatedTimeSeriesId
Input time series set to be defined as the simulated model output for both the historic and the forecast period. Multiple series will be combined into a
single series. Series with a higher index will be overlaid by series with a lower index.
OutputTimeSeriesId
Updated time series data generated by the error model. This series can contain data for both the historic and the forecast period.
interpolationOptions
Interpolation options for filling the missing values of the observed time series. This parameter is optional.
interpolationType
You can make a selection of a type of interpolation. Enumeration of available options is;
gapLength
defaultValue
maxObserved
Maximum value to be used by the error module. Higher values will be converted to NaN and not used as input for error correction. This parameter
is optional.
minObserved
Minimum value to be used by the error module. Lower values will be converted to NaN and not used as input for error correction. This parameter
is optional.
maxResult
Maximum value to be generated by the error module. This setting can be used to specify an upper limit of the generated output timeseries. This
parameter is optional.
minResult
Minimum value to be generated by the error module. This setting can be used to specify a lower limit of the generated output timeseries. This
parameter is optional.
ignoreDoubtful
Boolean to indicate whether the error module should ignore doubtful input values. This parameter is optional.
outputVariable
Definition of the output variable as a result of the error model. A single timeSeriesSet for one location is defined as the output variable of the error model.
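A sketch of the fixedOrderMethod element along the same lines; the correctionModel value, orders and ids are hypothetical, and element names, casing and order should be checked against the schema.

<fixedOrderMethod>
    <correctionModel>ARMA</correctionModel>
    <order_ar>2</order_ar>
    <order_ma>0</order_ma>
    <observedTimeSeriesId>QObserved</observedTimeSeriesId>
    <simulatedTimeSeriesId>QSimulated</simulatedTimeSeriesId>
    <outputTimeSeriesId>QUpdated</outputTimeSeriesId>
</fixedOrderMethod>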
AR Module Background information
Introduction
The quality of the flood forecasts will, in general, depend on the quality of the simulation model, the accuracy of the precipitation and boundary
forecasts, and the efficiency of the data assimilation procedure (Madsen, et al. 2000).
This document describes the AR error module that can be used for output correction.
Role in FEWS
The error modelling module is a generic forecasting module. The module is used to improve the reliability of forecasts by attempting to identify the
structure of the error a forecasting module makes during the modelling phase where both the simulated and observed values are available, and
then applying this structure to the forecast values. This is under the assumption that the structure of the error remains unchanged.
Because of the structure of the forecasting system, where models are first run over a historic period and then over a forecast period, the error
modelling module runs in two phases: (i) during the historic period, where the structure of the error model is determined, and (ii) during the forecast
phase, where the error model is applied in correcting the forecast time series.
The module applies an AR model of the error. The order of the statistical model may either be selected by the user (through configuration), or
derived automatically. In the second mode the user must indicate the maximum order of the model.
To stabilise the identification of the error model, transformations to the model residuals may be applied before identifying the model;
1. no transformation
2. transforming the series by subtracting the mean
3. Box-Cox transformation, in this case the user must also identify the lambda parameter to be used. A lambda of zero indicates a natural
logarithm transformation
Functionality described
This utility is applied to improve model time series predictions through combining modelled series and observed series. It uses as input an output
series from a forecasting module (typically discharge from a routing or rainfall-runoff module) and the observed series at the same location. An
updated series for the module output is again returned by the module. Updating is applied through application of an error model to the residuals
between module output and observed series. This error model is applied also to the forecast data from this module to allow correction of errors in
the forecast.
Data Requirements
To apply the error modelling module, time series data are required for both the simulated and the observed series over the historic period at a given
location, as well as the forecast time series at this location. Under normal configuration, these time series will be of the same parameter.
Simulated values [Link] Historic period (e.g. -2000 hours to start of forecast)
Observed values [Link] Historic period (e.g. -2000 hours to start of forecast)
Forecast values [Link] Forecast period (e.g. start of forecast to +48-240 hours)*
* Note: The length of the forecast period may be zero. If this is the case, then the error modeling module will consider only the historic period.
The error modelling module returns two time series: an updated time series for the historic period, and an updated time series for the forecast
period. In principle the updated time series over the historic period is almost identical to the observed time series.
Updated values (historic) [Link] Historic period (e.g. -2000 hours to start of forecast)
Updated values (forecast) [Link] Forecast period (e.g. start of forecast to +48-240 hours)*
Configuration data
The configuration of the error modelling module is used to determine its behaviour in establishing the statistical model of the error and how this is
applied to derive the updated series.
Configuration items
Below a short summary of the paper by Broersen (2002) can be found. The algorithms were extracted from ARMASA, a Matlab toolbox
(Broersen, Online), and are implemented in the Delft-FEWS AR module.
Three types of time series models can be distinguished: autoregressive or AR, moving average or MA, and the combined ARMA type. An
ARMA(p,q) process can be written as (Priestley, 1981)

x(n) + a1 x(n-1) + ... + ap x(n-p) = e(n) + b1 e(n-1) + ... + bq e(n-q)

where e(n) is a purely random process, thus a sequence of independent identically distributed stochastic variables with zero mean and variance σe².
This process is purely AR for q=0 and MA for p=0. Any stationary stochastic process can be written as a unique AR(∞) or MA(∞) process. The
roots of the AR polynomial A(z) = 1 + a1 z^-1 + ... + ap z^-p are denoted as the poles of the ARMA(p,q) process, and the roots of the MA polynomial
B(z) = 1 + b1 z^-1 + ... + bq z^-q are the zeros. Processes and models are called stationary if all poles are strictly within the unit circle, and they are
invertible if all zeros are within the unit circle.
AR estimation
This model type is the backbone of time series analysis in practice. Burg's method, also denoted as maximum entropy, estimates the reflection
coefficients (Burg, 1967; Kay and Marple, 1981), thus making sure that the model will be stationary, with all roots of A(z) within the unit circle.
Asymptotic AR order selection criteria can give wrong orders if candidate orders are higher than 0.1N (N is the signal length). The finite sample
criterion CIC(p) is used for model selection (see Broersen, 2000). The model with the smallest value of CIC(p) is selected. CIC uses a
compromise between the finite sample estimator for the Kullback-Leibler information (Broersen and Wensink, 1998) and the optimal asymptotic
penalty factor 3 (Broersen, 2000; Broersen and Wensink, 1996).
Box-Cox transformations
The Box-Cox transformation (Box and Cox, 1964) can be applied in the order selection and estimation of the coefficients. The object in doing so is
usually to make the residuals more homoskedastic and closer to a normal distribution:

y(λ) = (y^λ - 1) / λ for λ ≠ 0, and y(λ) = ln(y) for λ = 0
The implemented algorithm computes AR(p) models with p=0,1,...,N/2 and selects the single best AR model with CIC. However, one can choose to
provide the order one wants to use. Usually the mean of the signal will be subtracted from the signal to obtain the model and coefficients, but this
option can be switched off. It is recommended to use the subtraction of the mean. Figure 1 shows an example of using the implemented error
module on the Moesel river basin at Cochem, Germany.
Figure 1. Application of the AR module with subtraction of the mean to the Moesel basin at Cochem, Germany. Blue is the measured discharge (Q), red
is the updated model update and forecast, green is the model simulation. The forecast starts at t=401 hours.
Optionally, one can choose to use the Box-Cox transformation. In the update the algorithm will provide an updated model update. During the
forecast the selected model and coefficients are used for predicting the model error, which is added to the model forecast to obtain an updated
model forecast.
Figure 2. Application of the AR module to the Moesel basin using the Box-Cox transformation and subtraction of the mean. Blue is the measured discharge
(Q), red is the updated model update and forecast, green is the model simulation. The forecast starts at t=401 hours.
Figure 2 shows the effect of additionally applying a Box-Cox transformation (λ=0.3). It gives slightly better predictions than without it (Figure 1).
Figure 3. Application of the AR module to the Moesel basin using subtraction of the mean. Blue is the measured discharge (Q), red is the updated model
update and forecast, green is the model simulation. The forecast starts at t=250 hours.
Figure 4. Application of the AR module to the Moesel basin using subtraction of the mean. Blue is the measured discharge (Q), red is the updated model
update and forecast, green is the model simulation. The forecast starts at t=500 hours.
Figures 3 and 4 show two applications (forecast starting at t=250 hours and at t=500 hours) of the algorithm with subtraction of the mean but without
the Box-Cox transformation.
References
Box, G.E.P and D.R. Cox, 1964. An analysis of transformations. J. Royal Statistical Soc. (series B), vol 26, pp 211-252.
Broersen, P.M.T. and A.H. Weerts, 2005. Automatic error correction of rainfall-runoff models in flood forecasting systems. Proceedings of the IEEE
Instrumentation and Measurement Technology Conference (IMTC 2005), 16-19 May 2005, Volume 2, pp 963 - [Link]
Broersen, P.M.T., 2000. Finite sample criteria for Autoregressive order selection. IEEE Trans. Signal Processing, vol 48, pp 3550-3558.
Broersen, P.M.T., 2002. Automatic spectral analysis with time series models. IEEE Trans. Instrum. Meas., vol 51, pp 211-216.
Broersen, P.M.T. Matlab toolbox ARMASA (online) Available: [Link]
Broersen, P.M.T. and H.E. Wensink, 1996. On the penalty factor for autoregressive order selection in finite samples. IEEE Trans. Signal Processing,
vol 44, pp 748-752.
Broersen, P.M.T. and H.E. Wensink, 1998. Autoregressive model order selection by a finite sample estimator for the Kullback-Leibler
discrepancy. IEEE Trans. Signal Processing, vol 46, pp 2058-2061.
Burg, J.P., 1967. Maximum entropy spectral analysis. Proc. 37th Meeting Soc. Exploration Geophys., Oklahoma City, OK, pp 1-6.
Kay, S.M. and S.L. Marple, 1981. Spectrum analysis-A modern perspective. Proc IEEE, vol 69, pp 1380-1419.
Madsen, H., M.B. Butts, S.T. Khu, S.Y. Liong, 2000. Data assimilation in rainfall-runoff forecasting. Hydroinformatics 2000, 4th Inter. Conference
on Hydroinformatics, Cedar Rapids, Iowa, USA, 23-27 July 2000, 9p.
Priestley, M.B., 1981. Spectral analysis and time series. New York: Academic Press.
09 Report Module
Report Module Configuration
Configuring formatting of reports
Configuring content of reports
Charts
Spatial plot snapshots
Summary
thresholdsCrossingsTable
thresholdCrossingCountsTable
flagCountsTable
flagSourceCountsTable
maximumStatusTable
mergedPrecipitationTable
SystemStatusTables
liveSystemStatus
exportStatus table
importStatus table
scheduledWorkflowStatus table
completedWorkflowStatus table
currentForecastStatus table
logMessageListing table
forecastHistory table
Reports typically serve two purposes:
provision of detailed, specific information to e.g. forecasting duty officers and area officers
provision of general, overview reports
Reports are used to present the forecast result data in a fixed, user defined, format. The format of a report can be easily customized by a user
and stored in a template.
Some functions of DELFT-FEWS use the report component to present their results. For example, the critical condition lookup tables defined in
for example coastal forecasting only produce index time series. Without post-processing of these indices, a user can never see what is
happening in his coastal system. The report component will therefore also be used to interpret the indices and transform these into information
that a user can understand. The ValueAttributeMaps (see Regional Configuration) define how these indices are to be transformed into
understandable strings and/or icons.
Reports generated can be exported directly to the file system (for viewing through the web server) or they can be stored in the database. The
report export module may then be used to export selected reports to the web server when required.
The report template uses tags as placeholders to identify the location of objects in the report. Appendix D gives an overview and description of the
available tags. Appendix D also gives details on how to define the declarations to be used in the reports, allowing the layout of tables etc. to be
defined.
When available as configuration on the file system, the name of the XML file for configuring an instance of the report module called for
example Report_Coastal_Forecast may be:
Report_Coastal_Forecast 1.00 [Link]
default Flag to indicate the version is the default configuration (otherwise omitted).
declarations
Root element for declaring variables and formats to be used in making the report. Details on how these are to be configured are given in Appendix
D.
defineGlobal
The DefineGlobal element can be used to enter some information that is valid for all reports. Multiple DefineGlobal elements may be defined, as
long as the ID attribute is kept unique.
Related TAG: $DEFINITION(definitionId)$
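A sketch of a defineGlobal declaration and how it is referenced from a template; the id and text are hypothetical, and whether the value is given as element content (as shown here) should be checked against the schema.

<defineGlobal id="REGION_NAME">North East Region</defineGlobal>

The tag $DEFINITION(REGION_NAME)$ in a report template would then be replaced by "North East Region" in every generated report.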
<item>Format
Formatting instructions for a specific item in the report. See Appendix D for details.
templateDir
reportRootsDir
Root directory to which all reports are to be exported. This directory is typically the root directory of the web server.
reportRootsSubDir
Root directory to which the current reports are to be exported. This directory is relative to the reportRootsDir.
sendToLocalFileSystem
Boolean option to determine if reports are to be written to the file system on creation. If set to false reports will be stored in the Reports table in the
database, pending export through the ReportExport module.
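A sketch of these general export settings; the directories are hypothetical and the surrounding element structure of the report module configuration is not shown.

<templateDir>%REGION_HOME%/Config/ReportTemplates</templateDir>
<reportRootsDir>d:/webserver/htdocs/reports</reportRootsDir>
<reportRootsSubDir>coastal</reportRootsSubDir>
<sendToLocalFileSystem>true</sendToLocalFileSystem>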
report
Figure 91 Elements of the Report configuration
InputVariable
Input timeseriesSets for the report. All timeSeriesSets that are used in the report must be defined here, both for charts and for tables. See
Transformation Module for details on defining an inputVariable element.
FileResource
Reference to an external file that must be copied with the report. The file will be copied to the same location as the report, with the correct
filename inserted at the place of the tag. This can be used for copying images.
Chart
In the Chart element the variableId's to be used for one or more charts are defined. The Chart ID that is defined is referenced in the TAG.
Summary
In the Summary element the variableId's are specified that are used to create the summary information. The OverlayFormat of the
SummaryFormat determines what is shown on the map.
Table
In the Table element the variableId's are specified that are used to create a table. The TableFormat controls how the table is formatted, i.e. the
row and column information and how the data is displayed in the table.
Status
The Status element links a Status ID that is referenced in the STATUS TAG to a Status Format ID.
Template
OutputSubDir
OutputFileName
Report Filename.
DefineLocal
The DefineLocal element can be used to enter some information that is valid for a single report. Multiple DefineLocal elements may be defined, as
long as the ID attribute is kept unique.
A special case of the DefineLocal element is when the DefineLocal ID is the same as a previously defined DefineGlobal ID. In this case the DefineLocal
overrides the setting of the DefineGlobal. This is valid only for the report being configured, not for any other reports configured in the same
configuration file.
Charts
Charts can be used for visualising (more than one) timeseries by displaying them on an x and y axis using lines, dots etc. The charts which can be
added to html-reports look more or less the same as in the TimeSeries Display. Charts are created as (individual) *.png files.
Template tag
In the Chart element the variableId's to be used for one or more charts are defined. The Chart ID that is defined is referenced in the TAG.
Configuration aspects
Charts should be configured according to the following schema definition ([Link])
For adding a chart to a report the following aspects are important to configure:
Chart attributes;
Chart timeseries;
Chart attributes
Chart timeseries
To display lines, dots etc. of a certain timeseries, the reference to this timeseries (variableId) should be mentioned.
Gridded time series can be visualised in a report by means of snapshots. The snapshot is an image depicting the time series spatially.
Configuration aspects
Spatial plot snapshots are configured according to the following schema definition ([Link])
snapshot
The snapshot is defined as a relative time interval from T0. Optionally a file name may be specified for the snapshot which is used to save the
snapshot on the file system. If omitted the file name is generated by the report module.
Summary
A summary is a (background) map which can be added to a report. On top of this map, icons can be displayed. The icons indicate the location or
the (warning) status of the monitoring or forecasting point. By adding specific html-functionality to the report template(s), maps can be used to
navigate through the reports as well. Clickable areas and locations can be distinguished here. The map itself (as a file) can be any image file. For
displaying maps in html-reports the following formats are advised: *.png, *.jpg or *.gif.
Template tag
In the Summary element the variableId's are specified that are used to create the summary information. The OverlayFormat of the
SummaryFormat determines what is shown on the map.
Creating a summary
The map itself is an existing file and can be created in several ways. An image processing software package (like Paint Shop Pro) can create a
'screendump' from the map section of the FEWS-Explorer. The FEWS-Explorer itself has some hidden features which can be used as well. The
[F12] button can be used for:
The *.png file is named "[Link]" and can be found in the /bin directory of your system. The map extent (rectangle containing real world
coordinates) can be pasted into any application by choosing Edit-Paste or [Ctrl]+ V. These four coordinates describing the extent of your map
picture in world coordinates are needed in the declarations section of the report ModuleConfigFile where you declare this summary.
Remark: Every time you use the above-mentioned [F12] features, the png file in the /bin directory and the clipboard are overwritten! When making a
series of maps you should copy/rename your png file after using this option. You should also paste the map extent into a text editor or spreadsheet
directly before repeating the operations with another map extent in the FEWS-Explorer.
Configuring a summary
Declaration section
In the declarations section of the report ModuleConfigFile, the summaryFormat needs to be declared. The following elements should be specified
(see figure).
Detailed explanation
File details like width and height can be retrieved using image processing software.
The x0 and y0 elements are margins/shifts of the position of the map compared to the left-upper point of the report (e.g. an A4-sheet).
This left-upper point is (0,0). The x0/y0 refer to the left-upper point of the image.
The mapFormat is used for positioning the map on the page (relative to all other content) and therefore it is placed in a so-called [DIV] tag.
This type of html tag puts everything which is configured within that tag at that position on the page. The following table explains the
references of the numbers in this format:
0 number Absolute x position of map image.
The overlayFormat is used for positioning location and/or warning icons on the map based on:
The map itself (defined in mapFormat);
X and Y coordinates of the required locations from the [Link];
Note that in the case of numbers bigger than 999, the output may contain unwanted commas, e.g. width 1274 is printed in the
reports as 1,274. In order to avoid this, it is recommended to use a pattern of the form
{2, number, #}
instead of
{2, number}
(as used for the width and height in the mapFormat example below).
The mapFormat and overlayFormat elements are configured in html, in which some parts need to be dynamically filled in. The result will
be valid html. The map itself is placed as the bottom 'layer' using the <DIV> tag attributes (index="0") in the mapFormat. Objects defined in the
overlayFormat are given higher indices to place them 'on top of the map' so that they will always be visible (location icons, warning icons).
Clickable map
A clickable map is an image on an html page containing 'hot spots' where the cursor changes from the normal 'arrow' to a 'pointing finger' during a
mouse-over event. These hot spots can contain hyperlinks to other pages.
mapFormat
<div style="TOP:{1, number, #}px;
LEFT:{0, number, #}px;
position:absolute;Z-index:0">
<img border="0" src="{4}"
width="{2,number,#}" height="{3, number, #}"
usemap="{5}">
</div>
When adding the string usemap="{5}" to the mapFormat (see above), the outcome in the html page will be as follows (printed in bold).
The part describing the hot spots for this map is defined in the [map] tag. In the example below, three areas are 'clickable'. Every hot spot links
to another html page in the set of reports.
<img border="0" src="[Link]" width="503" height="595" *usemap="#clickmap"*>
<!-- Here the clickable map starts: SHOULD BE IN THE TEMPLATE -->
<map name="clickmap">
<area alt="Northumbria Area" href="northumbria/area_status.html" shape="polygon" coords="138,
...,...,34">
<area alt="Dales Area" href="dales/area_status.html" shape="polygon" coords="266, ..., ..., 285">
<area alt="Ridings Area" href="ridings/area_status.html" shape="polygon" coords="359, ..., ..., 379">
</map>
To avoid hot spots on a map, do not include the usemap="{5}" attribute in the mapFormat.
Hyperlinks
Hyperlinks can be added to the overlayFormat. By using the following option, hyperlinks to individual reports will be added automatically. They will
have a fixed (non-configurable) name, "[Link]", and the linked report is assumed to be located in a directory with the same name as the locationId,
relative to where this report with the map is located in the directory structure.
overlayFormat
<div style="TOP:{1,number,#}px;
LEFT:{0,number,#}px;
position:absolute;Z-index:1">
<a href="{5}">
<img border="0" src="{2}" title="{4}: {3}">
</a>
</div>\n
When adding href="{5}" to the overlayFormat (at that location) a hyperlink is added to the icon placed on the map. In html it will look like
this.
Report section
Id: identifier referring to the tag in the template. In this case the corresponding template will contain $SUMMARY(statusAreaMap)$
formatId: identifier referring to the unique identifier of this map in the declarations section.
timeSeries: reference to an inputVariable (declared in the report or in the declarations section).
mapId: reference to a valueAttributeMap;
Tables
Tables are available in different types. The similarity between them is that they are referenced with the same template tag.
Template tag
In the Table element the variableId's are specified that are used to create a table. The TableFormat controls how the table is formatted, i.e. the
row and column information and how the data is displayed in the table.
Tables 2 to 6 have references to cascading style sheets. Below the different tables are explained.
htmlTable
The htmlTable is the successor of the table described earlier. The configuration of this htmlTable is easier and more readable.
Declarations section
In the declarations sections a format of a table needs to be defined. In the figure below, a format of a htmlTable is configured.
Report Section
In the report itself the reference to a tableFormat and the declaration of the content should take place. The schema prescribes as follows:
Remark: htmlTables can contain more than one timeseries. By adding different cellFormats to the series (see picture above) different styles can
be attached for display in the table. In this way you can distinguish the two timeseries in the table!
Detailed explanation
The choice of adding a certain tableStyle to a table supplies you with the opportunity to influence the look and style of the table, its borders,
background and content. By setting the tableStyle a number of style classes are generated in the html pages. By declaring these classes in
a stylesheet (*.css) and ascribing a certain font family, font size, border color, background color etc. you are able to 'polish' the look of your
reports. If a tableStyle class is not mentioned in a stylesheet, it is displayed using default settings.
The following classes are generated automatically and are added after the tableStyleX, in which X stands for an integer in the range 1 to 10.
_beforeT0 date/time indication before time zero (T0) of the forecast time column (most left column)
_firstAfterT0 date/time indication of the first occurrence after time zero (T0) of the forecast time column (most left column)
_afterT0 date/time indication after time zero (T0) of the forecast time column (most left column)
_datamax addition to the current style of a data cell if the value is the maximum of that series (e.g. _data_datamax or _anyString_datamax)
thresholdsCrossingsTable
A thresholdsCrossingsTable is a table in which the number of threshold crossings for each level is counted. The number given in the table
corresponds to the 'worst' case situation. When a timeseries crosses a number of thresholds in a forecast, only the 'worst' threshold crossing is
counted. An example of a thresholdsCrossingsTable is given below.
Declarations section
In the declaration sections the layout of the table needs to be defined.
Report section
A thresholdsCrossingsTable should be defined in the report as well, i.e. this table needs to be 'filled' with its corresponding timeseries.
The following elements need to be defined:
Since this type of table is a table in which you can aggregate data (which means combine timeseries), the following option is available:
mergeLocations. This is explained in detail below.
thresholdCrossingCountsTable
A thresholdCrossingCountsTable displays threshold crossing counts depending on which thresholds have been crossed within a given time
period. The thresholdCrossingCountsTable is a newer version of the thresholdsCrossingsTable. A thresholdCrossingCountsTable has the same
layout as the thresholdCrossingCountsTab in the thresholdOverviewDisplay, for consistency.
Declarations section
In the declaration sections the layout of the table needs to be defined in a thresholdCrossingCountsTableFormat.
Report section
id: identifier for the template tag (in this case: $TABLE(table1)$);
formatId: reference to the format of this table (to one of the thresholdCrossingCountsTableFormats in the declarations section);
mergeLocations: boolean indicator. True means: treat all locations of mentioned timeseries together for combined assessment. False
means: extract individual timeseries so that every row indicates one location (timeseries);
Choice between
o timeSeries;
o table; --> this can be used to display a table within another one.
o tableRow
flagCountsTable
FlagCountsTable is available since Delft-FEWS release 2011.01. A FlagCountsTable displays flag counts depending on the flags of the values in
a time series within a given time period.
Example of a flagCountsTable
Declarations section
In the declaration section the layout of the table needs to be defined in a flagCountsTableFormat.
Configuration example:
<tableStyle>tableStyle1</tableStyle>
<hyperlinkUrl>[Link]#!LOCATION_NAME!_!PARAMETER_NAME!</hyperlinkUrl>
Report section
id: Identifier for this FlagCountsTable that is used in the report template html file in the table tag (e.g: $TABLE(table1)$).
formatId: The id of the FlagCountsTableFormat to use for this FlagCountsTable.
inputVariableId: One or more ids of inputVariables that are defined at the start of this report. For each time series in the inputVariable(s),
there will be one row in the table with the location, parameter and flag counts for that time series. For a given time series this uses only
the data within the relativeViewPeriod that is defined for that time series in the timeSeriesSet. If a timeSeriesSet contains multiple time
series (e.g. a locationSet), then for each time series in the timeSeriesSet a separate row is created.
Configuration example:
<inputVariableId>Cowbeech</inputVariableId>
<inputVariableId>Romsey</inputVariableId>
<inputVariableId>CrosslandsDrive</inputVariableId>
flagSourceCountsTable
FlagSourceCountsTable is available since Delft-FEWS release 2011.01. A FlagSourceCountsTable displays counts of flag sources depending on
the flag sources of the values in a time series within a given time period. The flag source for a value contains the reason why that value got a
certain flag. For example if a value was rejected by a "hard max" validation rule, then it gets flag unreliable and flag source "hard max".
Example of a flagSourceCountsTable
Declarations section
In the declaration section the layout of the table needs to be defined in a flagSourceCountsTableFormat.
Configuration example:
<tableStyle>tableStyle1</tableStyle>
Report section
id: Identifier for this FlagSourceCountsTable that is used in the report template html file in the table tag (e.g: $TABLE(table1)$).
formatId: The id of the FlagSourceCountsTableFormat to use for this FlagSourceCountsTable.
inputVariableId: The id of an inputVariable that is defined at the start of this report. The time series of this inputVariable is used for this
table. This table shows for each validation rule (hard max, hard min, rate of change, etc.) the number of values that were rejected
because of that validation rule. This uses only the data within the relativeViewPeriod that is defined for the time series in the
timeSeriesSet. If the timeSeriesSet contains multiple time series (e.g. a locationSet), then an error message is given.
Configuration example:
<inputVariableId>Cowbeech</inputVariableId>
maximumStatusTable
A maximumStatusTable indicates, by colouring, when certain threshold levels are crossed. In this type of table, the rows should be defined
individually and can contain more than one series. The boolean value 'mergeLocations' plays an important role in combining the locations or treating
them individually.
Figure 102 Example of a maximumStatusTable (NE Region)
Declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
mainHeaderText: a textual string which is displayed in the table header;
timeStepsInTable: integer number of timesteps to be displayed in the table. This should be derived from the relativeViewPeriod of the
corresponding timeSeries to be added;
timeStepsAggregation: integer number of timesteps to be aggregated (i.e. taken together). The worst status is displayed.
timeHeaderInterval: integer number for aggregating the headers of the cells or not. Number '1' means 'no aggregation' so every column
has got its own header.
timeHeaderDisplayMinutes: boolean value for having the minutes displayed;
colWidth: integer value for the width of the cells;
showAsSingleColumn: boolean value for displaying the timeseries into one column only (true). If set to 'true' the last value of the
timeseries is considered.
Reports Section
In the report itself the necessary timeseries need to be assigned to the table.
Figure 104 Declaration of a maximumStatusTable in a report
formatId: reference to the format of this table (to one of the maximumStatusTables in the declarations section);
id: identifier (used for comments only)
mergeLocations: boolean indicator. True means: treat all locations of mentioned timeseries together for combined assessment. False
means: extract individual timeseries so that every row indicates one location (timeseries);
tableRow: (1 or more)
o formatId: reference to the thresholdsCrossingsTable format;
o id: identifier (mainly for own use/comment);
o timeSeries: (1 or more)
mapId: reference to a valueAttributeMap;
Text: reference to an inputVariable (declared in the report or in the declarations section).
In fact, the maximumStatusTable is designed as visualised below. To create a nicely aligned table, the 'two timeseries tables' (the one with the
'observed' values and the one with the forecast series) are put in individual cells of the outer table. So the outer table only consists of two cells. The
left cell contains the observed table, the right cell contains the forecast table. The outer table itself needs to be declared as well! The report
declaration (see above) can be inspected to see this in practice.
The variation for displaying maximumStatus information is wide. The combination of relativeViewPeriod (length of forecast series), timestep and
the desire to aggregate timesteps can all be implemented. The calculation should be correct. If not, several messages will be shown.
Some examples
value explanation
Result: table with 48 time columns with 12 aggregated headers visualising the hour with a minute indication (like in first figure of this section)
Configuration
value explanation
Result: table with 6 (time) columns indicating the 'worst' status of that hour.
Example thresholdsCrossingsTable/maximumStatusTable
The geographical hierarchy is that Area 1 contains 2 Catchments (Catchment1 and Catchment2)
The Region overview should be configured such that all catchments belonging to that area are 'put' into one row which describes the status of that
area. This is valid for both the observed and the forecast timeseries. The 'mergeLocations' variable should be set to 'true' because all locations
should be merged (combined).
The Area overview should be configured in such a way that all catchments are in separate rows. This is valid for both the observed and the forecast
timeseries. The 'mergeLocations' variable should be set to 'true' because all locations should be merged (combined).
The Catchment overview forms the exception here. With mergeLocations set to 'False', the corresponding locationSet is extracted into the
individual locations and so every location has got its own row.
For the two tables for which this is valid, the last example does not give much additional value for a thresholdsCrossingsTable. Then each row
(which is equal to one location) will have a '1' in one of the cells. A maximumStatusTable supplies more value because it will indicate when this
(maximum) threshold will be reached.
See below mentioned (simplified) figures.
mergedPrecipitationTable
A mergedPrecipitationTable contains both observed rainfall as well as forecast rainfall, preferably in [Link] timeseries. Data can be visualised
in configurable time intervals compared to T0 and will appear in separate columns. Additionally, a single column can be added to visualise any
parameter (e.g. CWI). An example can be found below (without the extra column). In the example below, actually two tables are plotted next to each
other. The left table (with names) contains the historical data. The one on the right hand side contains the forecast timeseries and has no name
column. A table like this has two header rows to be defined by the user.
Declaration Section
Figure 106 mergedPrecipitationTable configuration in the declarations section
Report section
In the report section the content (timeseries) is 'attached' to this table.
Figure 107 mergedPrecipitationTable in the report section.
The mergedPrecipitationTable in the report section is (very) easy to define. The rule of the 'outer table' is valid here as well. To align the historical
and the forecast table nicely, the outer table contains both [Link] tables.
SystemStatusTables
SystemStatusTables display information about the status and behaviour of the FEWS system itself (like in the System monitor).
In most tables it is possible to add 'benchmark' data to compare the actual and the desired/required situation. The configuration of such a table
requires the definition of this benchmark value. Such a table contains an 'Item', a 'Benchmark' and a 'Status' column.
Besides a 'benchmark' (something to compare the actual status with) additional fields (columns from the database) can be included in the table. A
specific boolean value (showOutputFieldsOnly) can be used to either include or exclude these benchmark columns. In most tables this boolean is
set to 'False' because most tables contain both status information as well as additional (meta)information. See figure below.
Figure 108 A systemStatusTable is divided into a status part and an extraOutputFields part.
liveSystemStatus
A liveSystemStatusTable displays information about the status and behaviour of the live system components (MasterController and Forecasting
Shell Server(s)).
Declarations Section
Figure 110 Example of the configuration of a liveSystemStatus table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
showOutputFieldsOnly: boolean value for displaying the outputfields only.
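Putting these items together, a liveSystemStatus declaration might look roughly as follows. This is a minimal sketch only: whether the items appear as attributes or as child elements, and the id and header texts used here, are assumptions that should be checked against Figure 110.

<systemStatusTable id="LiveSystemStatusTable" tableStyle="tableStyle1">
    <statusTableSubType>liveSystemStatus</statusTableSubType>
    <tableTitle>Live system status</tableTitle>
    <headerRows>1</headerRows>
    <itemHeader>Component</itemHeader>
    <benchmarkHeader>Benchmark</benchmarkHeader>
    <statusHeader>Status</statusHeader>
    <statusHeaderSplit>1</statusHeaderSplit>
    <showOutputFieldsOnly>false</showOutputFieldsOnly>
</systemStatusTable>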
Report Section
Figure 111 Example of the configuration of a liveSystemStatus table in the report section
exportStatus table
An exportStatus table displays information about the status of a number of export features of the system.
Declarations Section
Figure 113 Example of the configuration of an exportStatus table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
showOutputFieldsOnly: boolean value for displaying the outputfields only.
Report Section
Figure 114 Example of the configuration of an exportStatus table in the report section
workflowStatusQuery;
logMessageParseQuery
importStatus table
An importStatus table displays information about the datafeeds which have been imported, how many of them were read and how many failed to
be imported. The frequency of the files imported can be (visually) compared with a benchmark figure. See below for an example.
Declarations Section
Figure 116 Example of the configuration of an importStatus table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field definition specifically for import related topics. Recommended fields are:
Last file imported
Nr. of files read
Nr. of files failed
showOutputFieldsOnly: boolean value for displaying the outputfields only.
Remark: when defining extraOutputFieldHeaders it is important to maintain the same order in the declarations section (definition of the fields) and in the report section (referencing the content), otherwise the header and the content will not correspond.
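A minimal sketch of this pairing is given below. Only extraOutputFieldHeader and the recommended header texts come from the list above; the report-side element and field names are hypothetical.

<!-- declarations section: header texts in a fixed order -->
<extraOutputFieldHeader>Last file imported</extraOutputFieldHeader>
<extraOutputFieldHeader>Nr. of files read</extraOutputFieldHeader>
<extraOutputFieldHeader>Nr. of files failed</extraOutputFieldHeader>

<!-- report section: the referenced content must follow exactly the same order (field names hypothetical) -->
<extraOutputField>lastFileImported</extraOutputField>
<extraOutputField>filesRead</extraOutputField>
<extraOutputField>filesFailed</extraOutputField>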
Report Section
Figure 117 Example of the configuration of an importStatus table in the report section
scheduledWorkflowStatus table
A scheduledWorkflowStatus table displays the workflows which are scheduled together with their repeat time and next due time. The figure below
illustrates this.
Figure 118 Example of a scheduledWorkflowStatus table (NE Region)
Declarations Section
Figure 119 Example of the configuration of a scheduledWorkflowStatus table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for scheduled workflow related topics. Recommended fields are:
Workflows
Description
MC Id
Repeat Time
Next Due Time
showOutputFieldsOnly: boolean value for displaying the outputfields only.
Since this table is not referring to a benchmark (it is just reading the configuration) the value for showOutputFieldsOnly is set to true. Only these
fields are displayed.
Remark: one reference to an existing workflow is sufficient to extract all scheduled workflows out of the database; that is why it seems that there is only one table row configured here. In fact, this table will be filled with ALL scheduled workflows when configured as above.
Report Section
Figure 120 Example of the configuration of a scheduledWorkflowStatus table in the report section
completedWorkflowStatus table
A completedWorkflowStatus table contains an overview of all workflows carried out in the last 24 hours. An example is given below.
Declarations Section
Figure 122 Example of the configuration of a completedWorkflowStatus table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
statusSubHeader: Additional Field(s) definition specifically for completed workflow related topics. Recommended fields are:
Nr. of Runs
Nr. Failed
showOutputFieldsOnly: boolean value for displaying the outputfields only.
Report Section
Figure 123 Example of the configuration of a completedWorkflowStatus table in the report section
currentForecastStatus table
The currentForecastStatus table gives an overview of which workflows are set to CURRENT. The workflows in this table are the same as those marked with a green icon in the System Monitor of the Operator Client. An example of this table is given below.
Figure 124 Example of a currentForecastStatus table (NE Region)
Declarations Section
Figure 125 Example of the configuration of a currentForecast table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for current forecast related topics. Recommended fields are:
T0
What-if Scenario
Description
FDO
showOutputFieldsOnly: boolean value for displaying the outputfields only.
Report Section
Figure 126 Example of the configuration of a currentForecastStatus table in the report section
logMessageListing table
A logMessageListing table contains log messages which are available in the Log Browser tab in the System Monitor of the Operator Client. Log messages of a specific type can be queried. By making use of a correct reference to the cascading style sheet this table can be set to 'scrollable'. An example of such a table is given in the figure below.
Declarations Section
Figure 128 Example of the configuration of a logMessageListing table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for log message related topics. Recommended fields are:
Log Creation Time
Log Message
TaskrunId
showOutputFieldsOnly: boolean value for displaying the outputfields only. This tableType requires a 'true' here.
Report Section
Figure 129 Example of the configuration of a logMessageListing table in the report section
logCreationTime (creation time of the message)
logMessage (content of the log message itself)
taskRunId (reference to the taskrun that generated this message)
forecastHistory table
A forecastHistory table provides an overview of the most recent forecasts carried out. The number of forecasts to include is configurable. An example of such a table is given below.
Declarations Section
Figure 131 Example of the configuration of a forecastHistory table in the declarations section
tableStyle: a choice of tableStyle which can be influenced by using the corresponding classes in a cascading style sheet. Choices are
tableStyle1 to tableStyle10;
id: unique identifier (as reference to this table);
statusTableSubType: Choice for one of the subtypes of systemStatusTables: Choices are: liveSystemStatus, exportStatus, importStatus,
scheduledWorkflowStatus, completedWorkflowStatus, currentForecastStatus, logMessageListing, forecastHistory;
tableTitle: a text for a title for this table;
headerRows: integer value for the number of header rows;
itemHeader: header text for the 'Item' column;
benchmarkHeader: header text for the 'Benchmark' column;
statusHeader: header text for the 'Status' column;
statusHeaderSplit: value indicating the number of header rows in the status column.
extraOutputFieldHeader: Additional Field(s) definition specifically for forecast history related topics. Recommended fields are:
Dispatch Time
Completion Time
T0
Workflow
What-if Scenario
Description
FDO
showOutputFieldsOnly: boolean value for displaying the outputfields only. This tableType requires a 'true' here.
Report Section
Figure 132 Example of the configuration of a forecastHistory table in the report section
Summary
The report configuration is quite extensive. To reduce configuration effort it is now possible to use the inputVariables item in the <declarations> section (at the top of the document) instead of repeating the same inputVariable in all individual reports. In the individual reports this is then replaced by one line containing a reference to the locationId. Example configurations of both options are given below.
Config options
Individual reports contain individual inputVariables
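A sketch of the two options is given below. The structure, the ids and the elided timeSeriesSet content are illustrative assumptions; only the inputVariable/declarations/locationId names come from the text above.

<!-- Option 1: every individual report repeats its own inputVariable definition -->
<report>
    <inputVariable variableId="waterLevel">
        <timeSeriesSet>
            ...
        </timeSeriesSet>
    </inputVariable>
    ...
</report>

<!-- Option 2: declare the inputVariables once in the declarations section;
     each individual report then only carries a reference to the locationId -->
<declarations>
    <inputVariable variableId="waterLevel">
        <timeSeriesSet>
            ...
        </timeSeriesSet>
    </inputVariable>
</declarations>
<report>
    <locationId>H-2002</locationId>
    ...
</report>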
Tags
The report template uses tags as placeholders to identify the location of objects in the report. In the following table the available tags are
described.
Tag Description
$CURRENTTIME(dateFormatId)$ The actual time the report was generated. Note that this is the Delft FEWS time, which
is not necessarily equal to the local time.
Arguments: 1
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed
Arguments: 1
- variableId: refers to the variableId assigned to the time series in the report
configuration.
$TIMEZERO(variableId; dateFormatId)$ The time zero of the forecast run in which the time series is created.
Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed
Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- numberFormatId: specified in the configuration file, sets the formatting for the values
to be displayed
Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed
$MINTIME(variableId; dateFormatId)$ The date/time of minimum value found in the time series
Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be displayed
$MAXTIME(variableId; dateFormatId)$ The date/time of maximum value found in the time series
Arguments: 2
- variableId: refers to the variableId assigned to the time series in the report
configuration.
- dateFormatId: specified in the configuration file, sets the formatting for the date to be
displayed
$MAXWARNINGLEVEL(variableId)$ returns the name of the highest warning level threshold that has been crossed
Arguments: 1
- variableId: refers to the variableId assigned to the time series in the report
configuration.
$DEFINITION(definitionId)$ The definition tag provides a means to enter some additional textual information into a
report. This information can be set for all reports at once, through the defineGlobal
element of the declarations section or for each report through the defineLocal element
in the reports section.
Arguments: 1
- definitionId: refers to ID provided in either the defineLocal or defineGlobal elements.
The defineLocalId takes precedence over the defineGlobalId when both are the same.
$FILERESOURCE(resourceId)$ The fileresource tag provides a means to include an external file into the report. This
may be any file, as long as it is permissible in the report file. The inclusion is
Arguments: 1
- resourceId: Refers to the ID given to the fileResource element in the reports section.
The fileResource element specifies the location of the file to be included relative to the
region 'home' directory.
$TABLE(tableId)$ Inserts a table. The layout of the table is defined in the report configuration files.
Arguments: 1
- tableId: ID of the table definition
$CHART(chartId)$ Inserts a reference to the filename of the chart. The chart is created in the same
directory as the report file. The reference is inserted without any path prefixes. This
feature will only be useful in XML or HTML report files.
Arguments: 1
- chartId: ID of the chart definition
$SUMMARY(summaryId)$ Inserts a map with overlying text, symbols or values of configured timeseries. This is
a complex tag that requires substantial preparation in the configuration.
Arguments: 1
- summaryId: ID of the summary definition
$STATUS(statusId)$ Inserts a table created using a SQL query on the database. The table may be
additionally formatted.
IMPORTANT: the HTML table header is not created by this TAG. The TAG only
creates the records of the table. This has been made to enable the user to provide
more user friendly header info than the field names.
Arguments: 1
- statusId: ID of the status definition
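As an illustration, a fragment of an HTML report template using some of these tags could look as follows. This is only a sketch; the dateFormatId (shortDate) and the variable, table and chart ids are hypothetical and must match ids defined in the report configuration.

<html>
<body>
    <h1>Forecast report generated at $CURRENTTIME(shortDate)$</h1>
    <p>Forecast T0: $TIMEZERO(waterLevelForecast; shortDate)$</p>
    <p>Highest warning level crossed: $MAXWARNINGLEVEL(waterLevelForecast)$</p>
    $TABLE(precipitationTable)$
    <img src="$CHART(waterLevelChart)$"/>
</body>
</html>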
Report module
In the declarations section a table layout must be specified with a tableStyle.
In the reports section a report table must have a map ID linked to the timeseries in the table.
[Link] file: In this file the correct colours must be configured for the map IDs.
10 Performance Indicator Module
Performance Indicator module
Assessing performance of modules
Assessing performance of forecast values- lead time accuracy
Assessing performance of forecast values- timing of thresholds
Configuration of performance module
performanceIndicatorSet
inputVariable
outputVariable
modulePerformanceIndicator
leadTimeAccuracyIndicator
thresholdTimingIndicator
additionalCriteria
description
leadTimes
leadTime
thresholdIds
thresholdId
Two types of performance can be assessed:
Performance of the individual forecasting modules. This reflects how accurate a given forecasting module is, following the traditional
performance measures used widely in module calibration, for example root mean square error, Nash-Sutcliffe measure etc.
Performance of the forecasting system itself. This reflects the accuracy of the system in forecasting. Three types of measure are
proposed to this end, (i) lead time accuracy of forecast time series, (ii) accuracy of timing threshold event crossings and (iii) accuracy and
timing of peak predictions.
The first type of performance assessment can be used either in calibration of the system, or in the operational setting to determine performance of
modules and take actions such as the use of an alternative module due to poor performance.
The second type of measure can be assessed once observed data for which forecasts were made becomes available.
The first and most simple application of the performance indicator module is in the traditional module calibration. This is done by comparing two time series, where one time series is the estimated series and the other is the reference time series. These time series are compared over a configurable length. As with other time series this is referenced with respect to the forecast start time (T0).
The time series are compared using a number of performance indicators, expressed in terms of the estimated value, the reference value, the number of data points and the mean of the reference values.
Bias (BIAS)
Mean absolute error (MAE)
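The formula images from the original document are not reproduced in this extract. Written out under the usual conventions (an assumption), with \hat{y}_i the estimated value, y_i the reference value, \bar{y} the mean of the reference values and N the number of data points, the indicators read:

\mathrm{BIAS} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)

\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right|

\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2

\mathrm{NS} = 1 - \frac{\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}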
To establish the peak accuracy, the peak must be identified; logic from the TransformationModule is to be used, although this needs extending to make sure a peak is a peak. A peak needs to be independent, and it must be ensured that the peak given is not simply the maximum value in a time window at the boundaries (see also Threshold Event crossing module). Note that the peak in the estimated series does not need to fall exactly on the same time as the reference peak, but must be identified within a window (see peak independence window).
On establishing the performance, the indicator is returned as a time series (simulated historical). This time series is a non-equidistant time series, labelled as a forecast historical with the time stamp set to T0.
Performance of forecast is assessed on the basis of lead time accuracy. This is done by comparing the forecast lead time value against the
observed value at the same time (received later!). For each lead time, this value is assessed over a given number of forecasts.
An option in the configuration of the module determines if the module identifies performance of approved forecasts only or of all forecasts.
Performance is assessed over all forecasts available for a given period of time, e.g. over a week or month (relative view period). Clearly evaluation cannot be done over forecasts beyond the length of the rolling barrel in the local data store.
Lead time accuracy is again evaluated using the MSE, MAE or BIAS.
Lead time accuracy in bias LEAD_BIAS
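The original formula image is not shown in this extract; a plausible form, consistent with the description below (an assumption), is

\mathrm{LEAD\_BIAS}(n) = \frac{1}{J}\sum_{j=1}^{J}\left(\hat{y}_j(n) - y_j(n)\right)

with \hat{y}_j(n) the value forecast at lead time n in forecast j and y_j(n) the reference (observed) value valid at that same time.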
where LEAD_BIAS(n) is the lead time accuracy at lead time n, J is the number of forecasts considered, y_j(n) is the reference value and \hat{y}_j(n) the forecast value at the corresponding time.
1. The results of the evaluation are written as a time series (simulated forecasting), with as reference time the T0 of the evaluation run and a time stamp for each lead time.
2. The results for each lead time are written as a different time series (simulated historical). This will allow assessment of lead time accuracy at
selected lead times to be compared against catchment conditions.
When selecting reference values, these may not yet be available. Should this be the case, the number of forecasts considered (J) is reduced accordingly. If fewer than the configured number of forecasts is considered, a WARN message is logged indicating how many of the expected number were actually used.
An important indicator of performance is the timing of predicted threshold event crossings. Again this is evaluated over a number of forecasts. To
evaluate this the threshold crossings in the indicator and the reference series are considered. For each pair of matching thresholds (matched on
threshold id's) the time between the two is evaluated, and expressed either as a time bias (T_BIAS) or a time absolute error (T_MAE). Times are
evaluated in terms of seconds.
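The formula images are again not reproduced here; with the symbols defined in the next sentence, a standard form of the two measures (an assumption) would be

\mathrm{T\_BIAS} = \frac{1}{J}\sum_{j=1}^{J}\left(t_{est,j} - t_{ref,j}\right)

\mathrm{T\_MAE} = \frac{1}{J}\sum_{j=1}^{J}\left|t_{est,j} - t_{ref,j}\right|

where the sum runs over the J matched threshold crossing pairs.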
where t_ref,j is the time of the threshold crossing in the reference series and t_est,j is the time of the corresponding threshold crossing in the estimated series.
The results of the evaluation are written as a time series (simulated historical), with as reference time the T0 of the evaluation run and a time stamp for each threshold crossing evaluated.
Figure 134 Elements of the performance module configuration
performanceIndicatorSet
Root element for configuration of a performance Module indicator. Multiple elements may be defined for each performance indicator to be
assessed.
Attributes;
performanceIndicatorId : Optional Id for the configuration. Used for reference purposes only
inputVariable
Definition of inputVariables (time series). Input variables are identified by their VariableId. See transformation module on definition of the
inputVariable element. An input variable will need to be defined for both simulated and for observed time series.
outputVariable
Definition of the outputVariable, i.e. the time series the performance indicator values are written to. This will normally be a non-equidistant time series as it is not a priori certain when the performance indicator module is run.
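A minimal sketch of such a set is given below. The element names follow the list above; the nesting, the ids and the elided timeSeriesSet content are assumptions.

<performanceIndicatorSet performanceIndicatorId="Lobith_HBV_bias">
    <inputVariable variableId="simulated">
        <timeSeriesSet>
            ...
        </timeSeriesSet>
    </inputVariable>
    <inputVariable variableId="observed">
        <timeSeriesSet>
            ...
        </timeSeriesSet>
    </inputVariable>
    <outputVariable variableId="bias">
        <!-- typically a non-equidistant, simulated historical time series -->
        <timeSeriesSet>
            ...
        </timeSeriesSet>
    </outputVariable>
    <modulePerformanceIndicator>
        ...
    </modulePerformanceIndicator>
</performanceIndicatorSet>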
modulePerformanceIndicator
Attributes;
leadTimeAccuracyIndicator
Root element for configuration of performance indicator assessing lead time accuracy
Attributes;
thresholdTimingIndicator
Root element for configuration of performance indicator assessing accuracy of threshold Timing
Attributes;
additionalCriteria
Additional criteria identified in establishing performance indicators. Application depends on the performance indicator selected.
Attributes;
description
leadTimes
leadTime
Attributes;
Figure 137 Elements of the thresholdTimingAccuracy configuration.
thresholdIds
thresholdId
Attributes;
11 Amalgamate Module
If observed data is to be kept in the system longer than forecast data without severe implications for the size of the database, the amalgamate module can be configured to amalgamate multiple small lengths of data into a single BLOB in a single record. These can be stored with a much longer expiry time than the default. On a scheduled system this module can be run on a daily basis, amalgamating for example import data that is a number of weeks old and is about to expire into single blocks with a much later expiry time.
When available as configuration on the file system, the name of the XML file for configuring an instance of the amalgamate module called for example Amalgamate_Import may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 138 Elements of the Amalgamate Module configuration
task
Root element for definition of an amalgamate task. Multiple entries may exist.
maximumCombinedBlobLength
Attributes;
unit unit of time (enumeration of: second, minute, hour, day, week)
multiplier defines the number of units given above in a time step.
divider same function as the multiplier, but defines fraction of units in time step.
timeSeriesSet
Time series set to amalgamate. The input and output time series sets are identical. Set a new expiry time in the time series set to ensure
it is kept in the database for the required period.
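A minimal sketch of an amalgamate task is given below. The root element name is an assumption; task, maximumCombinedBlobLength with its unit/multiplier attributes and the timeSeriesSet follow the element descriptions above.

<amalgamate>
    <task>
        <maximumCombinedBlobLength unit="week" multiplier="1"/>
        <timeSeriesSet>
            <!-- input and output are identical; set a new, later expiry time here -->
            ...
        </timeSeriesSet>
    </task>
</amalgamate>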
12 Archive Module
Archive Forecast Module
Introduction
The standard version of Delft FEWS (FEWS) includes archiving functionality. DELFT-FEWS can create archives for selected forecasts,
thresholds, configurations and timeseries. Delft FEWS can also restore a forecast and its data from an archive. An archive of a forecast contains
all result data from the forecast run, but also includes all the data used by the forecast at the time of the run, including any initial module states.
The archive can be used for hindcasting and analysis; using the data that was available at the time of the forecast. Forecast archives, with all
associated data, can be created manually or automatically. The archives are placed as zip files in a user defined folder. Retrieving of archives can
be done by importing the zip files.
Management of the folders with archives is the responsibility of the system manager. When not selectively managed the disk space required for
archiving will quickly increase depending on the data volumes used and produced by a given FEWS configuration.
As all functional tasks are run by DELFT-FEWS through a workflow, a moduleInstance is created to allow archives to be made. The module
instances must be correctly registered in the moduleInstanceDescriptors (see Regional Configuration), and point to the relevant class in the
moduleDescriptors configuration (see System Configuration).
Archive modules
FEWS contains 4 modules that can create or restore archives. These modules need to be registered in the ModuleDescriptors file located in the system configuration files.
ModuleDescriptors 1.00 [Link]
<moduleDescriptor id="ForecastArchiver">
  <description>Forecast archiver</description>
  <className>[Link]</className>
</moduleDescriptor>
<moduleDescriptor id="TimeSeriesArchiver">
  <description>Time Series archiver</description>
  <className>[Link]</className>
</moduleDescriptor>
<moduleDescriptor id="ThresholdEventsArchiver">
  <description>Threshold Events archiver</description>
  <className>[Link]</className>
</moduleDescriptor>
<moduleDescriptor id="ConfigurationArchiver">
  <description>Configuration archiver</description>
  <className>[Link]</className>
</moduleDescriptor>
The ConfigurationArchiver can be used to make archives of configuration changes in the FEWS database. In the example below, the
ConfigurationArchiver is scheduled to run once every 24 hours. The ConfigurationArchiver checks if there have been any configuration changes in
the last 24 hours. If so, it stores the configuration changes in a zip file and stores these in the folder configured in the [Link] file with the
Tag ARCHIVE_EXPORT_PATH.
The ThresholdEventsArchiver can be used to make archives of threshold crossings that have been stored in the FEWS database. Threshold
events are created by the FEWS Threshold module, and stored in the FEWS ThresholdEvents table. The threshold events can be used by the
FEWS Performance module to analyse the performance of the Forecasting System.
An example of the ThresholdEventsArchiver is shown in the example below. As with the ConfigurationArchiver, the ThresholdEventsArchiver is
scheduled to run once every 24 hours. The ThresholdEventsArchiver checks if there have been any threshold events in the last 24 hours. If so, it
stores the threshold events in a zip file and stores these in the folder configured in the [Link] file with the Tag
ARCHIVE_EXPORT_PATH.
The ForecastArchiver can be used to make archives of forecasts that have been stored in the FEWS database. All forecasts that have been
made, together with the data that has been used to make the forecasts will be stored in the forecast archive. An example of the ForecastArchiver
is shown in the example below, the ForecastArchiver is scheduled to run once every 24 hours. The ForecastArchiver stores all forecasts in a zip
file and stores these in the folder configured in the [Link] file with the Tag ARCHIVE_EXPORT_PATH.
The TimeSeriesArchiver can be used to make archives of timeseries from selected module instances that will not be stored in a normal forecast archive. These time series can be performance indicators or imported data that is not used by any forecast run. All timeseries that have been stored in the database with the selected module instances will be stored in the archive. An example of the TimeSeriesArchiver is shown in the example below; the TimeSeriesArchiver is scheduled to run once every 24 hours. The TimeSeriesArchiver stores all selected timeseries in a zip file and stores these in the folder configured in the [Link] file with the Tag ARCHIVE_EXPORT_PATH.
The standard procedure is to run a scheduled archive workflow every 24 hours. The Archive workflow can also be started from the Manual
Forecast Display. An example of the Archive Workflow is shown below.
Archive_Scheduled 1.00 [Link]
<workflow xmlns:xsi="[Link]" xmlns="[Link]"
  xsi:schemaLocation="[Link] [Link]" version="1.1">
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_Thresholds</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_TimeSeries</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Archive_Configuration</moduleInstanceId>
</activity>
</workflow>
Archives can only be imported into a FEWS database from a Stand Alone System; it is not possible to import archives in a FEWS Operator Client. To import an archive, a workflow must be configured that runs the archive modules again, this time to import the archives from an import folder. The module instances for importing forecasts, timeseries and threshold events are very similar. Instead of configuring an exportArchiveRun, an importArchiveRun must be configured. The following example shows a configuration of an import archive module instance to import forecasts.
Archive Display
It is possible to configure an Archive display that facilitates the retrieval of archives using a Stand Alone system. This Archive display can only be used when an Archive Server has been installed. For more information on installation of an Archive Server, please contact the FEWS Product Manager.
As all functional tasks are run by DELFT-FEWS through a workflow, a moduleInstance is created to allow the rolling barrel to be run. The module
instance must be correctly registered in the moduleInstanceDescriptors (see Regional Configuration), and point to the relevant class in the
moduleDescriptors configuration (see System Configuration). The module does not require any configuration. There is therefore not an XML file
available, nor need one be configured to run this module.
In a stand alone forecasting system without backup profiles the situation may occur that not all required forecasts or historical data are available.
For such events the system automatically forces the user to create a user defined set of observations or forecasts. In such cases the system
shows a table with the names of a set of base locations, also called Support Locations. These are locations that are tagged by the user as being
locations for which the user always knows the observed or forecasted values. The user has to complete the table before the forecast can be
executed any further. These values will be used as the base data on which the forecast is made.
When available as configuration on the file system, the name of the XML file for configuring an instance of the support locations module called for
example MeteoSupportLocations may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
supportStationsTimeSeriesSet
TimeSeriesSet defining the data to be used as support data. Only a single time series set may be defined; this time series set may include either a (list of) locationIds or a locationSetId.
dataTimeSeriesSets
TimeSeriesSets defining the data to be checked for missing data on the same time instance. Multiple time series sets may be defined, and each may include either a (list of) locationIds or a locationSetId.
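A minimal sketch of a support locations configuration is given below. The root element name and the exact nesting are assumptions; supportStationsTimeSeriesSet and dataTimeSeriesSets follow the element names above.

<supportLocations>
    <supportStationsTimeSeriesSet>
        <!-- exactly one time series set, with a locationSetId or a list of locationIds -->
        ...
    </supportStationsTimeSeriesSet>
    <dataTimeSeriesSets>
        <!-- one or more time series sets to be checked for missing data at the same time instance -->
        ...
    </dataTimeSeriesSets>
</supportLocations>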
15 Scenario Module
Scenario Module Configuration
The scenarios module from Delft FEWS is used for generating scenario time series sets in a running forecast. These scenario time series sets are
used to transform model input and output time series sets.
In a stand alone forecasting system a forecaster may want to run scenarios, or alternative forecasts, by using simple transformations on time series. The parameters used in the transformations are time series sets generated from coefficients entered in the scenarios display.
When available as configuration on the file system, the name of the XML file for configuring an instance of the scenarios module called for
example Makescenarios may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Scenario
Root element for the definition of a scenario. Multiple entries may exist.
Attributes;
Id : id of the scenario defined. Used for reference purposes only. This Id will be included in log messages generated.
Name : name of the scenario defined. Used to show in the scenario display.
Figure 141 Elements of the scenarios configuration.
description
The description is only used to give some background information on the scenario.
scenarioVariable
variable
Definition of a variable in the scenarioVariable. Multiple entries may exist. The variable has a variableId as attribute and includes a time series set.
TimeSeriesSets
TimeSeriesSet defining the data to be generated from the variable values and transformation type.
transformationType
The transformationType is an enumeration of transformation functions that can be used in the scenario.
Equal
Linearwithstartvalueandendvalue
Linearwithstartvalueandincrement
defaultValue
The value used in the transformation function. For some of the transformation types multiple default value entries may exist.
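Putting these elements together, a scenario definition might look roughly like the following sketch. The nesting order, the ids and the elided timeSeriesSet content are assumptions; the element names come from the descriptions above.

<scenario id="HighRainfall" name="High rainfall">
    <description>Scale the rainfall forecast between a start and an end value</description>
    <scenarioVariable>
        <variable variableId="rainfallFactor">
            <timeSeriesSet>
                ...
            </timeSeriesSet>
        </variable>
    </scenarioVariable>
    <transformationType>Linearwithstartvalueandendvalue</transformationType>
    <defaultValue>1.0</defaultValue>
    <defaultValue>1.5</defaultValue>
</scenario>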
16 PCRaster Transformation Module
Introduction
Current status
Other documentation
Module Configuration
Defining Area Map
Examples
Defining internal, input and output variables
Examples
Defining the PCRaster Model
Example
Sample configuration to perform a typical PCRaster Transformation
Points precipitation to grid example
Introduction
The pcrTransformation model allows a direct link between data in DELFT-FEWS and PCRaster using the PCRaster API based on in-memory exchange of (XML) data. As such, Delft-FEWS can use all available PCRaster functions to process data. PCRaster documentation is available elsewhere.
Current status
At this point a working version is available within Delft-FEWS that can be used to perform all operations supported by PCRaster. This means that all time series data stored in Delft-FEWS (grids and scalars) can be used as input to the module; all output is in grid format. If multiple timesteps are fed to the module at once, each timestep will be run separately, i.e. it is not possible to refer to data of a previous timestep within the module. A PCRaster model usually consists of an initial section (executed only once) and a dynamic section that is executed for each timestep. This version of the pcrTransformation only implements the initial section.
As of release 2008.3 the system includes support for dynamic scripts. Existing scripts will continue to work without modification (albeit significantly faster) and dynamic scripts are now supported.
Other documentation
Examples are available in the attached pdf document.
Module Configuration
The schema diagram is shown below; three main sections can be distinguished.
The diagram below shows the possible options when defining an area map. The area map can be defined using three methods:
1. Grid Definition (number of rows, columns etc.) (do not use if any of the other methods can be used)
2. Grid location Id (this will use a grid definition from the [Link] file in the RegionConfigFiles section).
3. TimeSeriesSet which defines a Grid TimeSeries. (the grid definition is taken from the timeseries itself)
Examples
Here are a few examples, which show the different methods available when defining an Area Map.
1. The area map is defined as a (FEWS) grid location id which refers to the grid definition at the same location within [Link] configuration file
<areaMap>
<locationId>H-2002</locationId>
</areaMap>
2. The area map is defined as a (FEWS Grid) TimeSeries Set. For details on how to define a TimeSeriesSet please refer to the FEWS Configuration Guide.
<areaMap>
<moduleInstanceId>ImportGrid</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>P.m</parameterId>
<locationId>MeteoGrid</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>read only</readWriteMode>
</areaMap>
3. The area map is given as Grid Definition (Not recommended) . The grid definition contains information such as GeoDatum, coordinates of
upper left grid point, number of columns and rows and cell width and height.
<areaMap>
<geoDatum>WGS 1984</geoDatum>
<upperLeftCorner>
<x>2</x>
<y>1</y>
<z>90</z>
</upperLeftCorner>
<rows>100</rows>
<columns>100</columns>
<cellwidth>0.1</cellwidth>
<cellheight>0.1</cellheight>
</areaMap>
The diagram below gives an overview of the schema for defining PCRaster variables:
boolean
nominal
ordinal
scalar
ldd (not yet implemented?)
directional
3. Input PCRaster model variable. Used to get data from Delft-FEWS and pass it on to the PCRaster transformation module:
- Variable id (should be matching exactly as defined in the PCRaster text model),
- Data type (similar to that used for internal variables),
- Scalar type data to be passed to PCRaster, if different from the normal data value. At present the following options are available:
timeInJulian
timeAsDayofYear
timeAsDayofMonth
timeAsHourofDay
timeAsDaysElapsedSince
timeAsHoursElapsedSince
Reference date. Needed if the scalar type is defined as "timeAsDaysElapsedSince" or "timeAsHoursElapsedSince".
Spatial type options are: spatial and non-spatial. Generally all grid input timeseries are treated as spatial data, while all scalar timeseries
(or constant values) are treated as non-spatial. To treat the scalar timeseries (single data value per time) value or constant value as
spatial, one can set this option to "spatial". By doing so, the grid (as defined by area map) will be filled with (single) data value from
timeseries for a corresponding timestep. Hence for a given timestep, the input to the PCRaster model will be a grid with a constant value
in all the grid cells.
However, there is an exception to the above-mentioned approach. If the input variable is a scalar timeseries at multiple locations (using a LocationSetId in the TimeSeriesSet definition) and the spatial type is set to spatial, then the following approach is used:
Please note that the input variables should be regarded as read-only in the actual PCRaster script. You should NOT try to modify them within the script. Make a copy in another variable (e.g. mycopy = theinputvar;) if this is needed.
Examples
Here are a few examples, showing different possibilities to define internal, input and output variables. Refer to the comments for details:
<definitions>
<!-- dataExchange options = Memory -->
<dataExchange>memory</dataExchange>
<!-- internalVariable name used within the PCRaster Test Model -->
<internalVariable variableId="blnmap" dataType="boolean"/>
<!-- internalVariable name used within the PCRaster Test Model
, now with the dataType as scalar -->
<internalVariable variableId="toSpatial" dataType="scalar"/>
<!-- InputVariable which refers to the external data file -->
<inputVariablevariableId="externalVar" dataType="scalar">
<external>d://[Link]</external>
</inputVariable>
<!-- Input Variable which refers to TimeSeriesGrid Array i.e. Grid as input -->
<inputVariable variableId="input" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>ModuleInstance</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- InputVariable which refers to the TimeSeries Float Array i.e. scalar value per
time, non spatial in nature. In other words, a value per time distributed
constantly over the whole grid for calculation purpose -->
<inputVariable variableId="input" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId> ModuleInstance</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- InputVariable which refers to the Constant Value i.e, a constant scalar value
irrespective of time and non spatial in nature. In other words, a constant
value distributed constantly over the whole grid for calculation purpose -->
<inputVariablevariableId="constant" dataType="scalar">
<value>10</value>
</inputVariable>
<!-- InputVariable which refers to the TimeSeries Float Array for multiple locations
(given by locationSetId) i.e. scalar value per time, spatial in nature (as
given by spatialType), and defined only at the grid cells which contain the
locations. In other words, the grid cell which contains the georeferenced
position of the location -->
<inputVariable variableId="input" dataType="scalar" spatialType="spatial" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>ModuleInstance</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>TestLocLiesWithinGrid_H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- InputVariable which refers to the TimeSeries Float Array (NonSpatial) ,
however the time is passed to PCRaster (scalarType =
timeAsDaysElapsedSince). For the scalar Type defined as time "Elapsed
Since" the reference Date has to be defined -->
<timeSeriesSet>
<moduleInstanceId>ModuleInstance</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2002</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
In this section, one can provide the PCRaster model as simple ASCII text. The text model given here is in fact a valid PCRaster model and can be run directly using PCRaster, except that it does not contain any area map or variable definition part. All the variables used within this model should appear in the definitions section as described above.
Example
<pcrModel id="String">
<text>
<!-- PCRaster accepts # as Comment -->
# there is no dynamic section!
# initial
# result should be the grid within constant value of 1.8 and 0.8
# generate unique Id's
Unq = uniqueid(boolean(input));
transfmap = spreadzone(ordinal(cover(Unq,0)),0,1);
</text>
</pcrModel>
Please remember, the PCRaster model which is defined here is a full PCRaster model script written in the PCRaster Modelling Environment language. Take care that all the variable ids defined in the variable definition section match the variables used here in the model. In other words, the model defined here should be a PCRaster compatible script.
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</inputVariable>
<!-- Total potential Solar radiation -->
<outputVariable variableId="SL" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
<!-- Diffuse radiation -->
<outputVariable variableId="SLDF" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
<!-- direct radiation -->
<outputVariable variableId="SLDR" dataType="scalar" convertDatum="false">
<timeSeriesSet>
<moduleInstanceId>Radiation</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>[Link]</parameterId>
<locationId>Radiation</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</definitions>
<pcrModel id="String">
<text><![CDATA[
#! --unittrue --degrees
# Test script to determine radiation over a grid.
#
# Inputs from Delft-Fews into this script
# - YearDay -> scalar with day since beginning of year
# - Hour of day -> Fractional hour of day (e.g. 12.5 = 12:30)
# Outputs to FEWS
# - SL -> Total Solar radiation
#
# This version determines Clear Sky radiation assuming a level surface using a uniform
# altitude. This level is configured in the script below.
Altitude=spatial(10);
Latitude = ycoordinate(boolean(Altitude));
Longitude = xcoordinate(boolean(Altitude));
Day =YearDay;
pi = 3.1416;
Sc = 1367.0; # Solar constant (Gates, 1980) [W/m2]
Trans = 0.6; # Transmissivity tau (Gates, 1980)
# Solar geometry
# ----------------------------
# SolDec :declination sun per day between +23 and -23 [deg]
# HourAng :hour angle [-] of sun during day
# SolAlt :solar altitude [deg], height of sun above horizon
# SolDec = -23.4*cos(360*(Day+10)/365);
# Now added a new function that should work on all latitudes!
theta =(Day-1)*360/365; # day expressed in degrees
HourAng = 15*(HourS-12.01);
SolAlt = scalar(asin(scalar(sin(Latitude)*sin(SolDec)+cos(Latitude)*
cos(SolDec)*cos(HourAng))));
# Solar azimuth
# ----------------------------
# SolAzi :angle solar beams to N-S axes earth [deg]
SolAzi = scalar(acos((sin(SolDec)*cos(Latitude)-cos(SolDec)*
sin(Latitude)*cos(HourAng))/cos(SolAlt)));
SolAzi = if(HourS le 12 then SolAzi else 360 - SolAzi);
Slope = spatial(0.0001);
Aspect = spatial(1);
# Surface azimuth
# ----------------------------
# cosIncident :cosine of angle of incident; angle solar beams to angle surface
cosIncident = sin(SolAlt)*cos(Slope)+cos(SolAlt)*sin(Slope)
*cos(SolAzi-Aspect);
# Radiation at DEM
# ----------------------------
# Sdir :direct sunlight on a horizontal surface [W/m2] if no shade
# Sdiff :diffuse light [W/m2] for shade and no shade
# Stot :total incomming light Sdir+Sdiff [W/m2] at Hour
# Radiation :avg of Stot(Hour) and Stot(Hour-HourStep)
# NOTE: PradM only valid for HourStep and DayStep = 1
Sdir = if(Snor*cosIncident<0,0.0,Snor*cosIncident);
Sdiff = if(Sout*(0.271-0.294*OpCorr)*sin(SolAlt)<0, 0.0,
Sout*(0.271-0.294*OpCorr)*sin(SolAlt));
]]></text>
</pcrModel>
</pcrTransformationSet>
</pcrTransformationSets>
+ -- Addition
- -- Subtraction
/ or div -- Division
* -- Multiplication
** -- nth power of a first expression, where n is the value of a second expression
abs -- Absolute value
accucapacityflux, accucapacitystate -- Transport of material downstream over a local drain direction network
accuflux -- Accumulated material flowing into downstream cell
accufractionflux, accufractionstate -- Fractional material transport downstream over local drain direction network
accuthresholdflux, accuthresholdstate -- Input of material downstream over a local drain direction network when transport threshold is exceeded
accutriggerflux, accutriggerstate -- Input of material downstream over a local drain direction network when transport trigger is exceeded
acos -- Inverse cosine
and -- Boolean-AND operation
areaarea -- The area of the area to which a cell belongs
areaaverage -- Average cell value of within an area
areadiversity -- Number of unique cell values within an area
areamajority -- Most often occurring cell value within an area
areamaximum -- Maximum cell value within an area
areaminimum -- Minimum cell value within an area
areanormal -- Value assigned to an area taken from a normal distribution
areatotal -- Sum of cell values within an area
areauniform -- Value assigned to an area taken from a uniform distribution
asin -- Inverse sine
aspect -- Aspects of a map using a digital elevation model
atan -- Inverse tangent
boolean -- Data conversion to the boolean data type
catchment -- Catchment(s) of one or more specified cells
catchmenttotal -- Total catchment for the entire upstream area
cellarea -- Area of one cell
celllength -- Horizontal and vertical length of a cell
clump -- Contiguous groups of cells with the same value ('clumps')
cos -- Cosine
cover -- Missing values substituted for values from one or more expression(s)
defined -- Boolean TRUE for non missing values and FALSE for missing values
directional -- Data conversion to the directional data type
downstream -- Cell gets value of the neighbouring downstream cell
downstreamdist -- Distance to the first cell downstream
eq or == -- Relational-equal-to operation on two expressions
exp -- Base e exponential
fac -- Faculty or factorial of a natural positive number
ge or >= -- Relational-greater-than-or-equal-to operation
gt or > -- Relational-greater-than operation
idiv -- Quotient of integer division of values on first expression by values on second expression
if then -- Boolean condition determining whether value of expression or missing value is assigned to result
if then else -- Boolean condition determining whether value of the first or second expression is assigned to result
kinematic -- Dynamic calculation of streamflow through a channel
ldd -- Data conversion from specific data types to local drain direction data type
lddcreate -- Local drain direction map with flow directions from each cell to its steepest downslope neighbour
lddcreatedem -- Modified digital elevation model
ldddist -- Friction-distance from the cell under consideration to downstream nearest TRUE cell
lddmask -- Local drain direction map cut into a (smaller) sound local drain direction map
lddrepair -- Reparation of unsound local drain direction map
le or <= -- Relational-less-than-or-equal-to operation
ln -- Natural logarithm (e)
log10 -- Log 10
lookup -- Compares cell value(s) of one or more expression(s) with the search key in a table
lt or < -- Relational-less-than operation
maparea -- Total map area
mapmaximum -- Maximum cell value
mapminimum -- Minimum cell value
mapnormal -- Cells get non spatial value taken from a normal distribution
maptotal -- Sum of all cell values
mapuniform -- Cells get non spatial value taken from an uniform distribution
max -- Maximum value of multiple expressions
min -- Minimum value of multiple expressions
mod -- Remainder of integer division of values on first expression by values on second expression
ne or != -- Relational-not-equal-to operation
nodirection -- Expression of directional data type
nominal -- Data conversion to the nominal data type
normal -- Boolean TRUE cell gets value taken from a normal distribution
not -- Boolean-NOT operation
or -- Boolean-OR operation
order -- Ordinal numbers to cells in ascending order
ordinal -- Data conversion to the ordinal data type
path -- Path over the local drain direction network downstream to its pit
pit -- Unique value for each pit cell
plancurv -- Planform curvature calculation using a DEM
pred -- Ordinal number of the next lower ordinal class
profcurv -- Profile curvature calculation using a DEM
rounddown -- Rounding down of cellvalues to whole numbers
roundoff -- Rounding off of cellvalues to whole numbers
roundup -- Rounding up of cellvalues to whole numbers
scalar -- Data conversion to the scalar data type
sin -- Sine
slope -- Slope of cells using a digital elevation model
slopelength -- Accumulative-friction-distance of the longest accumulative-friction-path upstream over the local drain direction network cells against
waterbasin divides
spread -- Total friction of the shortest accumulated friction path over a map with friction values from source cell to cell under consideration
spreadldd -- Total friction of the shortest accumulated friction downstream path over map with friction values from an source cell to cell under
consideration
spreadlddzone -- Shortest friction-distance path over map with friction from a source cell to cell under consideration, only paths in downstream
direction from the source cell are considered
spreadmax -- Total friction of the shortest accumulated friction path over a map with friction values from a source cell to cell under consideration
spreadzone -- Shortest friction-distance path over a map with friction from an identified source cell or cells to the cell under consideration
sqr -- Square
sqrt -- Square root
streamorder -- Stream order index of all cells on a local drain direction network
subcatchment -- (Sub-)Catchment(s) (watershed, basin) of each one or more specified cells
succ -- Ordinal number of the next higher ordinal class
tan -- Tangent
time -- Timestep
timeinput... -- Cell values per timestep read from a time series that is linked to a map with unique identifiers
timeinput -- Set of output maps per timestep with an extension that refers to the time at the timestep
timeoutput -- Expression value of an uniquely identified cell or cells written to a time series per timestep
timeslice -- Timeslice
uniform -- Boolean TRUE cell gets value from an uniform distribution
uniqueid -- Unique whole value for each Boolean TRUE cell
upstream -- Sum of the cell values of its first upstream cell(s)
view -- TRUE or FALSE value for visibility from viewpoint(s) defined by a digital elevation model
windowaverage -- Average of cell values within a specified square neighbourhood
windowdiversity -- Number of unique values within a specified square neighbourhood
windowhighpass -- Increases spatial frequency within a specified square neighbourhood
windowmajority -- Most occurring cell value within a specified square neighbourhood
windowmaximum -- Maximum cell value within a specified square neighbourhood
windowminimum -- Minimum value within a specified square neighbourhood
windowtotal -- Sum of values within a specified square neighbourhood
xcoordinate -- X-coordinate of each Boolean TRUE cell
xor -- Boolean-XOR operation
ycoordinate -- Y-coordinate of each Boolean TRUE cell
17 WorkflowLoopRunner
What [Link]
Although this manual mentions the SOBEK model, this module can be used for any external module (e.g. ISIS).
To decrease the run time, the SOBEK model for the Rhine basin is only run for periods where the discharge at Lobith exceeds a given threshold; likewise, the SOBEK model for the Meuse basin is only run for periods where the discharge at Borgharen exceeds a given threshold. To achieve this a so-called WorkflowLoopRunner was configured in FEWS. There are two options to select the periods for which the SOBEK model shall be run. The first option is to define a threshold for a given time series; if this threshold is exceeded the SOBEK model will be run. The second option is to define the length of a time interval, e.g. yearly; in each time interval SOBEK is run for a defined time window around the maximum value.
These options can be configured in the WorkflowLoopRunner "GRADE_SBKdag_Rijn_SelectedPeaks_Update.xml" for the Rhine basin and in the file "GRADE_SBKdag_Maas_SelectedPeaks_Update.xml" for the Meuse basin. Currently the WorkflowLoopRunners are configured such that the SOBEK model is run within a period of ten days before and two days after the maximum value of the discharge at Lobith or Borgharen, respectively, in a time interval of 40 years.
The next two sections explain how to configure the different options in FEWS.
• trigger option
• trigger time series
• relative view period
• step value option
• step size
• relative run window
First, choose the trigger option "Step Value Trigger" to run SOBEK for a period around a maximum in a pre-defined time interval. Then define the trigger time series to which the maximum refers and the relative view period for which the WorkflowLoopRunner shall be run. After that define the step value option; you can choose between maximum and minimum. The step size defines the time interval within the relative view period. For each time interval the maximum/minimum value of the trigger time series will be determined.
The relative run window defines the period over which the SOBEK model is run around the maximum/minimum discharge value.
As an example see the schema of the WorkflowLoopRunner configuration file in Figure 1. This example shows how to run SOBEK for a period
around one maximum value in 40 years time.
In the example file the trigger time series is [Link], the discharge at Lobith calculated from HBV. The step value option is the maximum value of the
time series. According to the relative run window the SOBEK model is run from ten days before the maximum value until two days after the
maximum value.
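The schema itself is not reproduced in this export, so the sketch below is an illustration only: the element names (workflowLoopRunner, triggerTimeSeries, stepValueTrigger, stepValueOption, stepSize, relativeRunWindow) and the time series details are assumptions chosen to mirror the options described above, not literal schema elements.
<workflowLoopRunner>
<!-- hypothetical sketch: element names mirror the options described above -->
<triggerTimeSeries>
<timeSeriesSet>
<moduleInstanceId>HBV_Rijn_Update</moduleInstanceId> <!-- assumed module instance -->
<valueType>scalar</valueType>
<parameterId>Q.sim</parameterId> <!-- assumed parameter id for the HBV discharge at Lobith -->
<locationId>Lobith</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-14610" end="0"/> <!-- 40 years -->
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</triggerTimeSeries>
<stepValueTrigger>
<stepValueOption>maximum</stepValueOption>
<stepSize unit="day" multiplier="14610"/> <!-- one interval covering the whole view period -->
<relativeRunWindow unit="day" start="-10" end="2"/> <!-- ten days before, two days after the maximum -->
</stepValueTrigger>
</workflowLoopRunner>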
In Figure 2 you can see how the relative run window is defined. On the left hand side of the figure the step size (time interval) is a third of the
relative view period. For each time interval the maximum value is defined. Around that maximum the relative run window for SOBEK is defined. If
the relative run windows from different time intervals overlap, SOBEK is run for the merged relative run window. On the right hand side of the
figure the step size equals the relative view period, thus resulting in one relative run window.
Figure 2: Definition of the relative run window for the trigger option "step value trigger" for a step size not equal to the relative view period (left) and a step size equal to the relative view period (right)
It is recommended to choose the time step size in such a way that the relative view period is a multiple of the time step size. If this is not the case, the last part of the relative view period is not taken into account; e.g. if the relative view period contains ten days and the time step size is three days, then the last day of the relative view period will be ignored in the WorkflowLoopRunner. Furthermore, choose an adequate time period before the maximum value, so that the model can simulate the peak sufficiently. Also keep in mind that the run time increases if the relative run window increases. Besides, one single SOBEK run must not be longer than 30 days, due to an internal setting in the SOBEK model.
• trigger option
• trigger time series
• relative view period
• value option
• value
• relative run window
Choose the trigger option "Value Trigger" to run SOBEK for a period where the trigger time series exceeds a given threshold. Then define the trigger time series and the relative view period for which the WorkflowLoopRunner shall be run. With the value option you can configure in which direction the trigger is activated; you can choose between "below" and "above" a given threshold. Define the threshold value in the field "value".
The relative run window defines the period over which the SOBEK model is run. The period contains the whole period where the trigger time series exceeds the defined threshold and, optionally, a period before and after that time. The additional time can be defined with the start and end time of the relative run window. The start time is relative to the time where the threshold is first exceeded. The end time is relative to the time where the trigger time series drops back to the threshold value. In Figure 4 you can see how the relative run window is defined. The green line demonstrates the part of the relative run window where the threshold is exceeded. The red parts of the relative run window represent the additional time which can be optionally added to increase the relative run window. If the relative run windows of two different peaks overlap, SOBEK is run for the merged relative run window.
_Figure 3 Definition of the relative run window for the trigger option "value trigger"_
When you define the threshold value you should take into account that the relative run window must not become too long; otherwise the workflow might fail because of a timeout error. Nevertheless, SOBEK has to run for an adequate time to simulate the peaks sufficiently. Also keep in mind that the run time increases if the relative run window increases. Besides, one single SOBEK run must not be longer than 30 days, due to an internal setting in the SOBEK model.
_Figure 4 : WorkflowLoopRunner with trigger option "value trigger"_
In the example file the trigger time series is [Link], the discharge at Lobith calculated from HBV. The relative view period is 14610 days (40 years). For each time interval the maximum value of the trigger time series will be determined. As value option "above" was chosen. The threshold value is 4000. The relative run window includes the period where the discharge at Lobith exceeds the threshold value, as well as two days before and after that period.
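Again as an illustration only (with the same caveat as the earlier sketch: the element names are assumptions mirroring the listed options, not literal schema elements), the value-trigger settings of this example could be sketched as:
<workflowLoopRunner>
<!-- hypothetical sketch for the "value trigger" option -->
<triggerTimeSeries>
<timeSeriesSet>
<moduleInstanceId>HBV_Rijn_Update</moduleInstanceId> <!-- assumed module instance -->
<valueType>scalar</valueType>
<parameterId>Q.sim</parameterId> <!-- assumed parameter id for the HBV discharge at Lobith -->
<locationId>Lobith</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-14610" end="0"/> <!-- 40 years -->
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</triggerTimeSeries>
<valueTrigger>
<valueOption>above</valueOption>
<value>4000</value>
<relativeRunWindow unit="day" start="-2" end="2"/> <!-- two days before and after the threshold period -->
</valueTrigger>
</workflowLoopRunner>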
Next to running the model for a relative run window with respect to threshold values found in the relative view period of the indicated timeSeriesSet, an option exists to run the model for the entire view period in this event. This implies that the relative run window equals the total relative view period, independent of the instant at which the threshold value is exceeded. This is illustrated in Figure 5.
_Figure 6: Run model for the entire view period by selecting alwaysFullPeriod="true"._
18 Mass-balances
What [Link]
Required no
Introduction
Horizontal flux
Vertical flux
Storage change
Remarks
Introduction
The mass-balances module determines the inflow, outflow and storage change within a given polygon from available flow fields. The various parts of the mass balance are computed separately and each results in a scalar timeseries:
• To compute the horizontal inflow and outflow, you need to have timeseries of the flow in x- and y-direction defined on a rectangular grid.
• To compute the vertical flow, you need to have timeseries of the flow coming in through the lower face of the grid cells and the flow coming in through the upper face.
• To compute the storage change, you need either the storage change per grid cell or the water table per grid cell. In the first case, the computation consists of summing the values over all grid cells within the given polygon; in the second case, the change over time must be computed as well.
The polygon for which the mass balance is determined is defined via the location set of the output timeseries: the locations defined in that set are
taken as the vertices of the polygon. Grid cells are considered to be inside the polygon if their centre is.
In the sections below the different elements of the configuration are described.
Horizontal flux
The timeseries must be defined on the same rectangular grid for the same times.
The output consists of a timeseries of the net in- and outflow, where the flow rate through the side faces is computed as the flow velocity times the length of the side times a thickness of 1 m. The result is a flow rate in m3/s (assuming the flow velocity is given in m/s and the grid size in m).
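As a purely illustrative calculation (the numbers are not from the original): for a grid cell with a 100 m side and a flow velocity of 0.2 m/s through that side, the contribution of that side face to the balance is 0.2 m/s x 100 m x 1 m = 20 m3/s.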
Vertical flux
The timeseries must be defined on the same rectangular grid for the same times.
The output consists of a timeseries of the net in- and outflow, where the flow rate through the upper and lower faces is computed as the flow velocity times the length and width of the grid cell. The result is a flow rate in m3/s (assuming the flow velocity is given in m/s and the grid size in m). Effects of porosity are not taken into account.
Storage change
The input is either the storage change rate per grid cell (that is, the change in the water table per time step) or the water table per grid cell. The timeseries must be defined on the same rectangular grid, and in the latter case there must be at least two times.
The output consists of a timeseries of the net change in storage, where the stored volume per grid cell is computed as the water table times the length and width of the grid cell. The result is the change in the volume of water present within the area delimited by the polygon. Effects of porosity are not taken into account.
Remarks
While the above description refers to volume or mass balances, the module is more generally applicable to any parameter for which such a balance can be computed: for instance, if instead of flow velocities you specify the flux of nutrients (concentration times flow velocity), you can compute the net inflow/outflow of nutrients through the given polygon.
Porosity is not taken into account in the module, but you can correct for that via the transformation module.
19 Rating curves
What [Link]
When the transformation module needs to use a rating curve for a given location at a given time, then it will search all rating curves for the given
location id. From all rating curves with the given location id, it will use the rating curve that is valid for the given time.
It is also possible to have different rating curves with the same location id, the same rating curve type and with overlapping valid periods, as long
as they have different rating curve ids. This makes it possible to have rating curves with valid periods that only have a start date (no end date),
which are valid until the next rating curve with the next start date becomes valid. In this case, if multiple rating curves are valid for a given time,
then the transformation module will use the rating curve that has the most recent start date in its valid period.
The ratings are either a "qhrelationtable" or a "simpleratingcurve", referred to by the hydroMeteoFunction in the transformation module. An example of the reference from the transformation module is shown below:
or
<hydroMeteoFunction ratingCurveType="LevelToFlow" outputVariableId="Flow" function="simpleratingcurve" useRatingCurve="true"/>
Rating curve
location: the location for which the rating is valid; should be the same as in the [Link]
ratingCurveType: you can choose either LevelToFlow or FlowToLevel
reversible: if this option is set to "true" then both level to flow and flow to level calculations will be allowed
validPeriod: allows you to enter a start and end date for which the rating is valid (e.g. a summer and a winter rating). The dates and times can be specified with or without a time zone. Use e.g. 2008-06-20T[Link]+05:00 for a time in time zone GMT+05:00. Use e.g. 2008-06-20T[Link]Z for a time in GMT, where the Z means GMT. If a time is specified without a time zone, e.g. 2009-12-01T[Link], then the time is assumed to be in local time. Note: 2008-06-20 [Link] in time zone GMT+5:00 is physically the same time as 2008-06-20 [Link] in GMT.
correction: see below
ratingCurveTable: see below
ratingCurveEquation: see below
Figure 2: correction complex type
The correction complex type allows the user to specify a correction technique for unsteady flow (Jones equation) or backwater (constant fall method or normal fall method).
jonesEquation: the user must specify the minimum h for which the method is valid (h_min) and the a, b and c parameters (see below),
where:
Qm = unsteady discharge
Qc = steady discharge
S0 = energy slope for steady flow
vw = wave velocity
dh/dt = rate of change of water level in time (m/day)
The adjustment factor 1/(S0 vw) (day/m) varies with water level. This factor is fitted by a parabolic function of h for h > h_min.
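The Jones equation itself and the parabolic fit are not reproduced in this export; a commonly used form, consistent with the definitions above, is given here as an assumed reconstruction rather than a literal copy of the original figure:

Q_m = Q_c \sqrt{1 + \frac{1}{S_0 v_w}\,\frac{dh}{dt}}, \qquad \frac{1}{S_0 v_w} = a h^2 + b h + c \quad (h > h_{min})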
Stage-fall-discharge or twin gauge station fall-discharge methods are used to include backwater effects on stage-discharge ratings.
In these methods the fall F between the water level at the discharge measuring site and a downstream station is considered as an additional parameter, to account for the effect of water surface slope on discharge. Both the constant fall method and the normal fall method are based on the following equation:
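The equation image did not survive this export; the standard fall-discharge relation consistent with the variable definitions that follow is assumed to be:

\frac{Q_m}{Q_r} = \left(\frac{F_m}{F_r}\right)^p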
Where:
Qm = backwater affected discharge
Qr = reference discharge
Fm = measured fall
Fr = reference fall
p = power, with 0.4 < p < 0.6
constant fall method
In this method the reference fall Fr is taken as a constant. A special case of the constant-fall method is the unit-fall method, where Fr = 1 m is applied. In the computational procedure a value for Fr is assumed. Then a rating curve is fitted to the values:
normal fall method
In this method the reference fall (Fr) is modelled as a function of the water level; Fr = f(h). This function is represented by a parabola:
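The parabola is not shown in this export; the assumed form, using the a, b and c parameters mentioned below, is:

F_r = a h^2 + b h + c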
In FEWS you should specify the a, b and c parameters and also a value of h_min, below which the backwater correction is not valid.
ratingCurveTable this allows you to simply enter the pairs of q and h values.
<ratingCurve ratingCurveId="TheBigRiver">
<location>
<locationId>X1123</locationId>
</location>
<ratingCurveType>LevelToFlow</ratingCurveType>
<reversible>true</reversible>
<ratingCurveTable>
<ratingCurveTableRecord flow="0.100" level="0.054"/>
<ratingCurveTableRecord flow="0.500" level="0.155"/>
<ratingCurveTableRecord flow="1.000" level="0.244"/>
<ratingCurveTableRecord flow="1.479" level="0.317"/>
</ratingCurveTable>
</ratingCurve>
A rating curve equation can be defined per section of the rating curve using the lowerLevel and upperLevel tags. The equation can be in the form of a power equation or a parabola; the forms are sketched below.
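The two forms are not reproduced in this export. The conventional power form, consistent with the a, b and c coefficients in the example below (where c is an exponent of about 1.5-1.9), is assumed to be:

Power equation: Q = a\,(h + b)^{c}
Parabola: Q is a quadratic function of h with the same three coefficients a, b and c.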
Here is an example:
<location>
<locationId>234206</locationId>
</location>
<ratingCurveType>LevelToFlow</ratingCurveType>
<reversible>true</reversible>
<ratingCurveEquation>
<lowerLevel>0</lowerLevel>
<upperLevel>0.391</upperLevel>
<equation>Power</equation>
<a>11.9001</a>
<b>0</b>
<c>1.55067</c>
</ratingCurveEquation>
<ratingCurveEquation>
<lowerLevel>0.391</lowerLevel>
<upperLevel>0.807</upperLevel>
<equation>Power</equation>
<a>16.6258</a>
<b>-0.1</b>
<c>1.8564</c>
</ratingCurveEquation>
20 Transformation Module (Improved schema)
What [Link]
Contents
Transformation Module Configuration (New Version)
Configuration
Accumulation Transformations
Adjust Transformations
Aggregation transformations
DisaggregationTransformations
DischargeStage Transformations
Events Transformations
Filter Transformations
Interpolation Serial Transformations
Interpolation Spatial Transformations
Lookup transformations
Merge Transformations
Review transformations
StageDischarge transformations
Statistics Summary Transformations
Structure Transformations
TimeShift
User Transformations
DayMonth Sample
PCA and Regression Transformation
Selection Transformations
An improved version of the FEWS Transformation Module is currently under construction. The new version is much easier to configure than the old version. It uses a new schema for configuration, and several new transformations have been added.
Configuration
When available as configuration on the file system, the name of an XML file for configuring an instance of the transformation module called for
example TransformHBV_Inputs may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
The configuration for the transformation module consists of two parts: transformation configuration files in the Config/ModuleConfigFiles directory
and coefficient set configuration files in the Config/CoefficientSetsFiles directory.
In a transformation configuration file one or more transformations can be configured. Some transformations require coefficient sets in which given
coefficients are defined. For a given transformation that requires a coefficient set there are different ways of defining the coefficient set in the
configuration. One way is to specify an embedded coefficient set in the transformation configuration itself. Another way is to put a reference in the
transformation configuration. This reference consists of the name of a separate coefficient set configuration file and the id of a coefficient set in
that file.
Both the transformations and coefficient sets can be configured to be time dependent. This can be used for instance to define a given coefficient
value to be 3 from 1 January 2008 to 1 January 2009, and to be 4 from 1 January 2009 onwards. This can be done by defining multiple
periodCoefficientSets, each one with a different period, as in the following xml example.
<period>
<startDateTime date="2008-01-01" time="[Link]"/>
<endDateTime date="2009-01-01" time="[Link]"/>
</period>
<structure>
<pumpFixedDischarge>
<discharge>3</discharge>
</pumpFixedDischarge>
</structure>
<periodCoefficientSet>
<period>
<validAfterDateTime date="2009-01-01"/>
</period>
<structure>
<pumpFixedDischarge>
<discharge>4</discharge>
</pumpFixedDischarge>
</structure>
</periodCoefficientSet>
If a date is specified without a time, then the time is assumed to be [Link], so <validAfterDateTime date="2009-01-01"/> is the same as
<validAfterDateTime date="2009-01-01" time="[Link]"/>. To specify dates and times in a particular time zone use the optional time zone
element at the beginning of a transformations or a coefficient sets configuration file, e.g. <timeZone>GMT+5:00</timeZone>. Then all dates and
times in that configuration file are in the defined time zone. If no time zone is defined, then dates and times are in GMT. Note: 2008-06-20
[Link] in time zone GMT+5:00 is physically the same time as 2008-06-20 [Link] in GMT.
If for a given transformation there are different coefficientSets configured for different periods in time, then the following rule is used. The start of a
period is always inclusive. The end of a period is exclusive if another period follows without a gap in between, otherwise the end of the period is
inclusive. If for example there are three periodCoefficientSets defined (A, B and C), each with a different period, as in the following xml example.
Then at 2002-01-01 [Link] periodCoefficientSet A is valid. At 2003-01-01 [Link] periodCoefficientSet B is valid since the start of the period is
inclusive. At 2004-01-01 [Link] periodCoefficientSet B is still valid, since there is a gap after 2004-01-01 [Link]. At 2011-01-01 [Link]
periodCoefficientSet C is valid, since no other periods follow (the period of C is the last period in time that is defined). This same rule applies to
time-dependent transformations.
<periodCoefficientSet>
<!-- periodCoefficientSet B -->
<period>
<startDateTime date="2003-01-01" time="[Link]"/>
<endDateTime date="2004-01-01" time="[Link]"/>
</period>
...
</periodCoefficientSet>
<periodCoefficientSet>
<!-- periodCoefficientSet C -->
<period>
<startDateTime date="2010-01-01" time="[Link]"/>
<endDateTime date="2011-01-01" time="[Link]"/>
</period>
...
</periodCoefficientSet>
Accumulation Transformations
The following transformations can be used to calculate accumulative curves of time series.
• AccumulationMeanInterval (MeanInterval): calculates the accumulative mean from the input time series within several intervals.
• AccumulationSum (Sum): calculates the sum.
• AccumulationSumInterval (SumInterval): creates cumulative curves from the input time series within several intervals. The intervals are defined by the specified intervalTimeStep.
• AccumulationSumOriginAtTimeZero (SumOriginAtTimeZero): calculates the accumulated sum forwards and backwards in time (forecast and historical).
AccumulationMeanInterval
Information
Transformation: MeanInterval
Transformation Group: Accumulation
Description: This transformation calculates the accumulative mean from the input time series within several intervals. The intervals are defined by the specified intervalTimeStep. For a given interval the first output value equals the first input value within the interval and the other output values are equal to the mean of the corresponding input value and all previous input values within the interval. The startTime of an interval is exclusive and the endTime of an interval is inclusive. The output time series must have the same timeStep as the input time series.
Hydrological Information
Background and Exceptions: This transformation also works for grid input and output. It does not work for irregular time steps. If the transformation is from instantaneous/mean input parameter type to accumulated output parameter type, then the result is multiplied by the timestep in seconds, before the mean is calculated. In this case the input data is assumed to be in units/second.
Input
Input variable.
Options
intervalTimeStep: This time step defines the intervals that are used for the accumulation. Each time in this time step is the boundary between two intervals.
ignoreMissing: Optional. If true, then missing values are ignored. If false, then output values will be set to missing values starting from the first missing input value in an interval until the end of that interval. Default is true.
Output
Configuration Example
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationMeanInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="3"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationMeanInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" start="0" end="3"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="accumulation mean interval">
<accumulation>
<meanInterval>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<intervalTimeStep times="08:00"/>
<ignoreMissing>true</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</meanInterval>
</accumulation>
</transformation>
</transformationModule>
AccumulationSum
Information
Transformation: Sum
Transformation Group: Accumulation
Description: Calculates for each timestep in the output period, the accumulated sum of the input values. In case the input is
instantaneous or mean and the output is accumulation, then the sum is multiplied by the duration of the input time step in
seconds. Each output value is the sum of the corresponding input value and all previous input values.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report the accumulated sum of the discharge per month.
Background and Exceptions: In case the input is instantaneous or mean and the output is accumulation, the unit of the input must be unit/s. The input type (scalar or grid) must be the same as the output type and their timestep must be regular. In case the ignoreMissing value is set to false, once a missing value is encountered, the output will only contain missing values after that time step.
Input
Options
CoefficientSets
No connection to CoefficientSets.
Output
Configuration Example
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="30"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<!-- output variables -->
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="5" end="25"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<!-- transformations -->
<transformation id="accumulation sum">
<accumulation>
<sum>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<ignoreMissing>false</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sum>
</accumulation>
</transformation>
</transformationModule>
AccumulationSumInterval
Information
Transformation: SumInterval
Transformation Group: Accumulation
Description: This transformation creates cumulative curves from the input time series within several intervals. The intervals are defined
by the specified intervalTimeStep. For a given interval the first output value equals the first input value within the interval
and the other output values are equal to the sum of the corresponding input value and all previous input values within the
interval. The startTime of an interval is exclusive and the endTime of an interval is inclusive. The output time series must
have the same timeStep as the input time series.
Hydrological Information
Background and Exceptions: This transformation also works for grid input and output. It does not work for irregular time steps. If the transformation is from instantaneous/mean input parameter type to accumulated output parameter type, then the result is multiplied by the timestep in seconds. In this case the input data is assumed to be in units/second.
Input
Input variable.
Options
intervalTimeStep: This time step defines the intervals that are used for the accumulation. Each time in this time step is the boundary between two intervals.
ignoreMissing: Optional. If true, then missing values are ignored and treated as 0. If false, then output values will be set to missing values starting from the first missing input value in an interval until the end of that interval. Default is true.
Output
Configuration Example
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="10"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumInterval</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="0" end="10"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="accumulation sum interval">
<accumulation>
<sumInterval>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<intervalTimeStep unit="hour" multiplier="1"/>
<ignoreMissing>false</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sumInterval>
</accumulation>
</transformation>
</transformationModule>
AccumulationSumOriginAtTimeZero
Information
Transformation: SumOriginAtTimeZero
Transformation Group: Accumulation
Description: Calculates the accumulated sum of the input values for each timestep in the output period from timezero forwards and
backwards in time. In case the input is instantaneous or mean and the output is accumulation, then the sum is multiplied by
the duration of the input time step in seconds. The sum of the historical data is accumulated backwards in time starting at
T0. The sum of the forecast data is accumulated forwards in time starting at T0.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report the historical accumulated sum backwards in time and the forecast accumulated sum of the discharge per month.
Background and Exceptions: In case the input is instantaneous or mean and the output is accumulation, the unit of the input must be unit/s. The input type (scalar or grid) must be the same as the output type and their timestep must be regular. In case the ignoreMissing value is set to false, once a missing value is encountered, the output will only contain missing values after that time step.
Input
Options
CoefficientSets
No connection to CoefficientSets.
Output
Configuration Example
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumOriginAtTimeZero</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-15" end="15"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<!-- output variables -->
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>AccumulationSumOriginAtTimeZero</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="-15" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<!-- transformations -->
<transformation id="accumulation sum origin at time zero">
<accumulation>
<sumOriginAtTimeZero>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<ignoreMissing>false</ignoreMissing>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sumOriginAtTimeZero>
</accumulation>
</transformation>
</transformationModule>
Adjust Transformations
AdjustQ
AdjustQUsingMeanDailyDischarge
AdjustQUsingInstantaneousDischarge
AdjustStage
AdjustTide
AdjustQ
AdjustQ
Input
observedInstantaneousDischarge
observedMeanDailyDischarge
simulatedDischarge
Coefficient set
blending steps
errorTolerance
maxNumberOfIterations
interpolationType
Output
adjustedSimulatedDischarge
Description
AdjustQ corrects the simulated discharges by using observed instantaneous discharges and observed mean daily discharges. The procedure is actually a combination of the transformations AdjustQUsingInstantaneousDischarge and AdjustQUsingMeanDailyDischarge. First the simulated discharges will be corrected by using the instantaneous discharges. If not all of the mean daily discharges are within the error tolerance, the simulated discharges will also be corrected with the AdjustQUsingMeanDailyDischarge procedure. A detailed description of the configuration options in the coefficient set can be found in the sections on AdjustQUsingInstantaneousDischarge and AdjustQUsingMeanDailyDischarge.
AdjustQMeanDailyDischarge
AdjustQMeanDailyDischarge
Input
observedMeanDailyDischarge
simulatedDischarge
Coefficient set
error tolerance
maxNumberOfIterations
Output
adjustedSimulatedDischarge
Description
This procedure corrects the simulated discharge with mean daily discharge values until the error is within the specified error tolerance. To correct the simulated discharge, the mean daily discharge of the simulated values will be calculated. The simulated values will then be corrected by applying the following formula.
Qi = Qi * QME/SQME (1)
The correction procedure will continue until all the simulated discharges are within the error tolerance or until the maximum number of iterations is
reached. The maximum number of iterations is a configuration option.
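Reading QME as the observed mean daily discharge and SQME as the mean daily discharge computed from the simulated values (an interpretation of the abbreviations, which the text does not spell out), a purely illustrative example of formula (1): if the observed mean daily discharge is 100 m3/s and the simulated mean daily discharge is 80 m3/s, every simulated value in that day is scaled as Qi = Qi * 100/80 = 1.25 * Qi.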
AdjustQUsingInstantaneousDischarge
AdjustQUsingInstantaneousDischarge
Input
observedDischarge
simulatedInstantaneousDischarge
Coefficient set
blending steps
interpolations type
Output
adjustedForecastDischarge
Description
This procedure uses an observed instantaneous discharge to correct a simulated discharge. If there is an observed value for a certain time step then that value will be used instead of the simulated value. If no observed data is available, the correction procedure will calculate a value or in some cases use the simulated value.
The configuration has two options which influence the behaviour of the correction procedure. The first one is blending steps. If there is a gap in the observed data which is x time steps large and x < blending steps, then the output values for the gap will be determined using an interpolation procedure. If x >= blending steps then a blend procedure will be used to fill the gap.
Interpolation procedure
The second configuration option which influences the behaviour of the correction procedure is the interpolation type. Two options are available: ratio and difference. To calculate the value of the adjusted time series a correction procedure is used for the simulated discharge. When the ratio option is selected the simulated values will be corrected by multiplying the simulated value with a correction factor based on the ratios between the observed and simulated discharge at the start of the gap and at the end of the gap. The correction factor will be linearly interpolated between the ratio at the beginning of the gap and the ratio at the end of the gap. When the difference option is selected the simulated value will be corrected by adding a correction value to it. This value will be based on the difference between the observed and simulated discharge at the beginning of the gap and the difference between the simulated and observed value at the end of the gap. In some cases it is possible that the program overrules the configured interpolation option: when the ratio between the ratios is larger than 2, or one of the ratios is larger than 5, the program will switch to interpolating by difference even if ratio was configured.
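As an illustration (numbers invented for the example): suppose a gap of three time steps, with an observed/simulated ratio of 1.20 at the last observed value before the gap and 1.00 at the first observed value after it. With the ratio option the correction factors for the three missing steps are linearly interpolated to 1.15, 1.10 and 1.05, and each simulated value in the gap is multiplied by the corresponding factor. With the difference option the same interpolation scheme is applied to the differences instead of the ratios, and the interpolated difference is added to the simulated value.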
Blend procedure
When the gap in the observed data is too large to fill with the interpolation procedure, the gap will be filled with the blend procedure. This procedure is also used to provide a smooth transition between the observed data and the simulated data at T0. The difference between the simulated discharge and the observed discharge at the beginning of a gap, the end of a gap or at the latest observed value will be used to correct the simulated value. The following formula will be used to correct the simulated value.
AdjustStage
AdjustStage
Input
forecastStage
averageBalanceFirstSegment
averageBalanceSecondSegment
averageBalanceThirdSegment
averageBalanceFourthSegment
startFirstStageRange
startSecondStageRange
startThirdStageRange
startFourthStageRange
endFourthStageRange
Coefficient set
Output
adjustedForecastStage
Description
The transformation AdjustStage uses the output of the transformation StageReview to adjust the simulated stage values. The transformation StageReview has divided the simulated stage values into 4 equally sized segments and has calculated, for each segment and for each day, an average daily balance.
The AdjustStage procedure first determines the centres of each segment; secondly it determines which centres surround the simulated stage value which has to be adjusted. For each centre the associated balance will be retrieved, and the balance for the simulated stage value will be calculated by using linear interpolation. The simulated stage value will be corrected by adding the calculated balance. When the simulated stage value is lower than the centre of the first segment or higher than the centre of the fourth segment, the balance will not be calculated by using linear interpolation but will be set equal to the balance of the first segment or the balance of the fourth segment, respectively.
AdjustTide
AdjustTide
Input
observedTidalStage
forecastTidalStage
tideBalance
Output
adjustedTidalStage
Description
The AdjustTide operation corrects a simulated tide with an observed tidal time series and a set of balances. The balances are calculated by the transformation tidalBalance. When observed data is available, the observed data will be used for the output time series. When no observed data is available, the balances and the simulated tidal time series are combined to create an adjusted tide. First the times of the peaks and valleys are determined. They are already calculated by the tideBalance transformation and located at the times at which the tideBalance operation has written the balances. The estimated peaks and valleys of the tideBalance operation and the peaks and valleys of the simulated tidal time series are matched. The adjusted peaks and valleys are calculated by adjusting the simulated peaks and valleys with the matched balance.
The values of the adjusted tidal time series between the peaks and valleys are calculated using a cosine interpolation. The formula which is used for the interpolation is:
= |tmax - tmin|
= *(tmax - tmin)/
Configuration example
Aggregation transformations
The graph below demonstrates the differences between the instantaneous, the instantaneousToMean and the MeanToMean methods when
aggregating data.
The blue line (first column in the table) shows the original 15 minute data. The red line (second column) is the result of the instantaneous aggregation. The next two columns show the results of the meanToMean (light green line) and the instantaneousToMean (dark green line) methods.
The default behaviour of all these aggregations is to save the result at the end of the time interval that is being investigated. This
explains the apparent shift in the hydrograph. Although this is very useful for many operational environments (it ensures you
have data NOW) it may not always be wanted. In that case it is easy to combine these aggregations with a delay.
Aggregation Accumulative
Accumulative
Input
inputVariable
Output
outputVariable
Description
This transformation performs an aggregation from an instantaneous time series to an aggregated time series. This procedure sums the values of
the input timeseries that are within the aggregation period. If no aggregation period is configured, then the aggregation period is equal to the
period between the current output time and the previous output time. Alternatively the aggregation period can be configured in the time series set
of the output variable. In that case the aggregation period is relative to the current output time and aggregation periods for different output times
are allowed to overlap. Using overlapping aggregation periods it is possible to use this transformation to calculate a moving sum. If one of the
input values is missing or unreliable the output is missing.
The table below shows an example of accumulating 6-hourly values to daily values using this transformation.
02-01-2007 06:00 6,00
The figure below shows original 15 minute data and the aggregated hourly data using the accumulative function:
Configuration example
<transformation id="aggregation accumulative">
<aggregation>
<accumulative>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>accumulative</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</accumulative>
</aggregation>
</transformation>
Aggregation Instantaneous
Instantaneous
Input
InputVariable
Output
OutputVariable
Description
This transformation performs an aggregation from an instantaneous input time series to an instantaneous output time series. It sets the output value to the exact same value of the input timeseries at time t; it simply samples points. As such, if an output time has no equivalent in the input series no value is given. The table below shows how 6-hourly values are converted to daily values using this method.
02-01-2007 12:00 NaN
Configuration example
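The configuration example itself is missing from this export; the sketch below assumes the same structure as the accumulative example above, with <instantaneous> in place of <accumulative> (the module instance, location set and parameter ids are assumptions carried over from that example):
<transformation id="aggregation instantaneous">
<aggregation>
<instantaneous>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs</parameterId> <!-- assumed parameter id -->
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>instantaneous</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</instantaneous>
</aggregation>
</transformation>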
Aggregation InstantaneousToMean
InstantaneousToMean
Input
inputVariable
Options
allowMissingValues
includeFirstValueOfAggregationPeriodInCalculation
Output
outputVariable
Description
This transformation calculates the mean value of instantaneous values over a certain period. If the option allowMissingValues is true (this is the default behaviour), then a missing value is returned if one of the input values in the period is a missing value. If the option allowMissingValues is false, then a mean value is calculated if there are 1 or more non-missing values in the aggregation period, i.e. missing values are ignored in this case.
The transformation offers two different ways of calculating the mean value over a period. The default method (used by setting the includeFirstValueOfAggregationPeriodInCalculation option to true, which is the default behaviour) takes the last n values, including the first value of the aggregation period, averages them, and stores the result at the output time. An alternate method (similar to the MeanToMean aggregation, enabled by setting the includeFirstValueOfAggregationPeriodInCalculation option to false) calculates the mean of all values that fit in the output interval, excluding the start time itself, and stores that at the output time.
In the four tables below examples of in and output using the different options are given.
2007-01-01 00:00 1
2007-01-01 06:00 2
2007-01-01 12:00 3
2007-01-01 18:00 4
2007-01-02 06:00 6
2007-01-02 12:00 7
2007-01-03 06:00 10
2007-01-01 00:00 1
2007-01-01 06:00 2
2007-01-01 12:00 3
2007-01-01 18:00 4
2007-01-02 06:00 6
2007-01-02 12:00 7
2007-01-03 06:00 10
2007-01-01 00:00 1
2007-01-01 06:00 2
2007-01-01 12:00 3
2007-01-01 18:00 4
2007-01-02 06:00 6
2007-01-02 12:00 7
2007-01-03 06:00 10
2007-01-01 00:00 1
2007-01-01 06:00 2
2007-01-01 12:00 3
2007-01-01 18:00 4
2007-01-02 06:00 6
2007-01-02 12:00 7
2007-01-03 06:00 10
Configuration example
<includeFirstValueOfAggregationPeriodInCalculation>true</includeFirstValueOfAggregationPeriodInCalculation>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>instantaneousToMean</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</instantaneousToMean>
</aggregation>
</transformation>
Aggregation MeanToMean
MeanToMean
Input
InputVariable
Output
OutputVariable
Description
This transformation performs an aggregation from a mean input time series to a mean output time series. The average of the mean values in the aggregation period (excluding the value at the start of the period) will be the calculated mean value for the output time series.
This method will give the same results as the instantaneousToMean transformation with the includeFirstValueOfAggregationPeriodInCalculation option set to false. However, it has no option to ignore missing values in the input series.
Configuration example
<transformation id="aggregation MeanToMean">
<aggregation>
<meanToMean>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>meanToMean</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</meanToMean>
</aggregation>
</transformation>
DisaggregationTransformations
The graph below gives an overview of the results of the different disaggregations available.
Accumulative
Accumulative
Input
InputVariable
Output
OutputVariable
Description
This transformation performs a disaggregation on an accumulative input time series. It divides each value of the input time series by the number of output time steps within that input interval and stores the resulting value at each of those steps.
The table below shows how daily values are disaggregated to 6-hourly values using this method.
Input Output
Configuration example
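The configuration example is missing from this export; the sketch below assumes the same structure as the instantaneous disaggregation example further down, with <accumulative> as the disaggregation element (the module instance, location set and parameter ids are assumptions):
<transformation id="disaggregation accumulative">
<disaggregation>
<accumulative>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>P.obs</parameterId> <!-- assumed parameter id -->
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_accumulative</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</accumulative>
</disaggregation>
</transformation>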
Instantaneous
Instantaneous
disaggregates data by sampling the values and optionally interpolating linearly
Input
inputVariable
Options
interpolate (true|false)
Output
outputVariable
Description
This transformation performs a disaggregation on an instantaneous input time series. The output values are copied from the input time series if a matching time exists in the input series. If this is not the case the output value is calculated by linear interpolation if the option interpolate is enabled. If the option is disabled the output value will be a missing value.
in which:
Configuration example
<transformation id="disaggregation instantaneous">
<disaggregation>
<instantaneous>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_instantaneous</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</instantaneous>
</disaggregation>
</transformation>
MeanToInstantaneous
MeanToInstantaneous
disaggregates data
Input
inputVariable
Output
outputVariable
Description
This transformation takes a mean time series as input and transforms it into an instantaneous time series. Because it is not possible to calculate exactly what the instantaneous values were that resulted in this mean time series, the transformation will make a best estimate of the instantaneous time series.
The first step in this procedure is analysing the previous mean value, the current mean value and the next mean value.
No change
If there is no significant rise in these values, which is the case when the current mean value is the same as the previous mean value and the next mean value within an error tolerance of 0.1%, the current mean value is considered to be the best estimate for the instantaneous values.
If this is not the case, the procedure checks if there is a continuous rise or fall or if the mean values are in a peak or valley.
Rise or Fall
If the mean values are in a continuous rise (currentMeanValue >= previousMeanValue && currentMeanValue <= nextMeanValue), then the estimation procedure is as follows.
First the instantaneous value at the end of the disaggregation period is estimated by
The values between the value at the end of the previous disaggregation period and the estimated end value are estimated by creating a small rise
or fall from the end value.
Peak or Valley
Secondly the place of the peak is estimated. This will be done by analysing the ratio between difma and difmb.
When the value and the place of the peak are estimated, the values between the peak and the end value and the last value of the previous period are added.
After this procedure the estimated instantaneous values are corrected by using the AdjustQMeanDailyDischarge transformation (a volume correction). This transformation will ensure that the mean values of the estimated instantaneous time series are equal to the original mean values.
Configuration example
<transformation id="disaggregation imeanToInstantaneous">
<disaggregation>
<meanToInstantaneous>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_meanToInstantaneous</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</meanToInstantaneous>
</disaggregation>
</transformation>
meanToMean
MeanToMean
Input
InputVariable
Output
OutputVariable
Description
This transformation performs a disaggregation from a mean time series to a mean time series.
Each output time series value within a given data time interval of the input time series is equal to the input time series value for that interval.
12:00 x 1
00:00 1 1
12:00 x 2
00:00 2 2
Configuration example
<transformation id="disaggregation MeanToMean">
<disaggregation>
<meanToMean>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_meanToMean</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</meanToMean>
</disaggregation>
</transformation>
weights
Weights
Input
InputVariable
configuration
Output
OutputVariable
Description
This transformation performs a disaggregation from a time series to another time series in which each output point is multiplied by a specified weight. There MUST be a weight for each output point or the disaggregation will fail, e.g. when converting 15 minute values to 5 minute values three weight elements must be specified.
Each output time series value within a given data time interval of the input time series is equal to the input time series value multiplied by the weight specified for the output time.
00:00 1 1 1
12:00 x 2 0.5
00:00 2 2 1.0
In this case two weight elements (0.5 and 1.0) have been specified. Note that the order in which the elements appear determines to which point they are applied.
Configuration example
<disaggregation>
<weights>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>read only</readWriteMode>
<delay unit="minute" multiplier="0"/>
</timeSeriesSet>
</inputVariable>
<weight>0.9</weight>
<weight>1.1</weight>
<weight>0.9</weight>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>dis_weights</parameterId>
<locationSetId>hydgauges</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="5"/>
<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>1</synchLevel>
</timeSeriesSet>
</outputVariable>
</weights>
</disaggregation>
DischargeStage Transformations
mergedRatingCurves
power
table
ratingCurve
DischargeStageMergedRatingCurves
Information
Transformation: MergedRatingCurves
Transformation Group: DischargeStage
Description: Merges two rating curves using a time dependent weight variable and uses the resulting rating curve to convert discharge
input values to stage output values. For each timeStep in the output time series, first the specified two rating curves are
merged using the value of the weight input time series at that timeStep. If weight is 1, then uses the first rating curve. If
weight is 0, then uses the second rating curve. If weight is between 0 and 1, then interpolates linearly between the first and
the second rating curve to get the merged rating curve. Then the merged rating curve is used to convert the discharge input
value for that timeStep to a stage output value. This can only use rating curves that are stored as time series in the
dataStore. This uses the inverse of the equation Q_output = weight*Q_ratingCurve1(H_input) + (1 -
weight)*Q_ratingCurve2(H_input)
Hydrological Information
Purpose and use of Transformation: This can be used e.g. for a river reach with a lot of vegetation in the summer, resulting in a higher hydraulic roughness. Then you might want to handle a rating curve for the winter period (a level of 1 m corresponds to 5 m3/s) and one for the summer (the same water level represents only 3 m3/s due to the higher roughness). The weight value can be used for shifting in between: weight=0 for the winter, weight=1 for the summer, and a weight value of 0.5 for a certain time in spring when vegetation is growing.
Background and Exceptions: The weight value must always be in the range 0 <= weight <= 1. If the ratingCurve(s) are not found, then a warning message is logged and the output is set to missing values.
Input
ratingCurve
References to two rating curves that are merged and used to convert discharge to stage values for this transformation. This can only use rating
curves that are stored as time series in the dataStore. To import ratingCurves into the dataStore use timeSeriesImport module with importType
pi_ratingcurves to import a file in the pi_ratingcurves.xsd format. The ratingCurves are referenced using their locationId and qualifierId. If no
locationId is specified, then the locationId of the stage input variable is used.
Output
Configuration Example
<dischargeStage>
<mergedRatingCurves>
<discharge>
<variableId>input</variableId>
</discharge>
<weight>
<variableId>eta</variableId>
</weight>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>winterRatingCurve</qualifierId>
</ratingCurve>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>summerRatingCurve</qualifierId>
</ratingCurve>
<stage>
<variableId>output</variableId>
</stage>
</mergedRatingCurves>
</dischargeStage>
DischargeStagePower
Information
Transformation: Power
Description: Converts discharge (Q) to stage (H) for an open cross section using a power equation.
Hydrological Information
Purpose and use of Transformation: Used to convert discharge (water flow) to stage (water level) for an open cross section.
Input
CoefficientSets or CoefficientSetFunctions
The coefficient set should contain the a, b and c coefficients for the equation and the type of calculations for which the coefficient set is valid.
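The equation referred to above appears as an image in the original document and is not reproduced in this text. For illustration only, a three-coefficient power-law rating of the following form is assumed here (the exact form used by the transformation is an assumption, not confirmed by this document):

$$Q = a\,(H + b)^{c} \quad\Longleftrightarrow\quad H = \left(\tfrac{Q}{a}\right)^{1/c} - b$$

where Q is the discharge, H is the stage, and a, b and c are the coefficients described below.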
When using coefficient set functions (available since build 30246), the a, b, c and type elements can contain tags between "@" signs (e.g.
"@NUMBER@") that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values.
These values can be different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more
information.
a
Coefficient a in the equation.
b
Coefficient b in the equation.
c
Coefficient c in the equation.
type
Type of calculations for which the coefficient set is valid. Can be level_to_flow, flow_to_level or level_to_flow_and_flow_to_level.
Output
Configuration Examples
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="discharge stage power test">
<dischargeStage>
<power>
<discharge>
<variableId>input</variableId>
</discharge>
<coefficientSet>
<a>57.632</a>
<b>3.01</b>
<c>2.147</c>
<type>level_to_flow_and_flow_to_level</type>
</coefficientSet>
<stage>
<variableId>output</variableId>
</stage>
</power>
</dischargeStage>
</transformation>
The example below uses coefficientSetFunctions (available since build 30246). Here the elements 'a', 'b', 'c' and 'type' are defined in
coefficientSetFunctions, where @A@, @B@ and @C@ refer to location number attributes and @type@ refers to a location text attribute defined
in the locationSets configuration file.
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>DischargeStagePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="discharge stage power with coefficient set functions test">
<dischargeStage>
<power>
<discharge>
<variableId>input</variableId>
</discharge>
<coefficientSetFunctions>
<a>@A@</a>
<b>@B@</b>
<c>@C@</c>
<type>@type@</type>
</coefficientSetFunctions>
<stage>
<variableId>output</variableId>
</stage>
</power>
</dischargeStage>
</transformation>
Table
Table
Input
discharge
Coefficient set
type
authoriseExtrapolation
interpolationType
minimumStage
tableRecord
Output
stage
Description
This transformation will transform a discharge value to a stage value by doing a table lookup. The coefficient set used in this transformation has an option type. The type indicates whether the lookup table can be used in a discharge to stage transformation, a stage to discharge transformation or both. If a coefficient set that is defined as a level_to_flow type is used in this type of transformation, an error will be issued. The authoriseExtrapolation option will enable/disable extrapolation. The interpolationType can be used to configure the type of interpolation used:
linear
logarithmic
When the option logarithmic is selected, the calculation method is almost the same as the method used when the linear option is selected. The only difference is that the calculation is done with the natural logarithm of the lookup value and with the natural logarithm of the table values.
The minimumStage option allows configurators to enter a minimum stage value. Stage values below this value are converted to the minimum value.
The tableRecord elements make up the actual lookup table. Each tableRecord is a single entry in the lookup table with a stage and a discharge value. Note that it is also possible to define an offset for each tableRecord. This offset will be applied as a positive offset to the stage value. An offset applies to the tableRecord in which it is defined and the records above this record until a new offset is defined.
Configuration example
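No configuration example is included here in the original text. The following minimal sketch only illustrates the elements described above; the element order and the attribute layout of the tableRecord entries are assumptions, not confirmed by this document.
<dischargeStage>
  <table>
    <discharge>
      <variableId>input</variableId>
    </discharge>
    <coefficientSet>
      <type>flow_to_level</type>
      <authoriseExtrapolation>true</authoriseExtrapolation>
      <interpolationType>linear</interpolationType>
      <minimumStage>0.5</minimumStage>
      <!-- hypothetical lookup table records: stage/discharge pairs -->
      <tableRecord stage="1.0" discharge="5.0"/>
      <tableRecord stage="2.0" discharge="20.0"/>
      <tableRecord stage="3.0" discharge="60.0"/>
    </coefficientSet>
    <stage>
      <variableId>output</variableId>
    </stage>
  </table>
</dischargeStage>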
Events Transformations
The following transformations can be used to measure time series events. The measurements will be registered for each output period where the
event started.
EventsDischargeVolume — DischargeVolume: Calculates either the discharge volume of all events initiated in the output period or only
the discharge volume of the largest event initiated in the output period.
EventsDuration — Duration: Calculates either the net duration or the gross duration of the events initiated in the output period.
EventsMaximum — Maximum: Calculates the maximum input value of the events initiated in the output period.
EventsMeanDischargeVolume — MeanDischargeVolume: Calculates the mean discharge volume per event.
EventsNumberOfEvents — NumberOfEvents: Calculates the number of events initiated in the output period.
EventsDischargeVolume
Information
Transformation: DischargeVolume
Transformation Group: Events
Description: Calculates either the discharge volume of all events initiated in the output period or only the discharge volume of the largest event initiated in the output period. An event in this transformation is defined as the largest possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the value is larger than the specified threshold parameter. The discharge volume of a subevent is calculated by multiplying the input value (m3/s) by the duration of the input time step (s). The discharge volume for an event is the sum of the discharge volumes for its subevents, and is registered only in the output period where the event initiated.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report discharge volume statistics on sewer spillage for each month.
Background and Exceptions: The unit of the input must be m3/s. The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.
Input
Options
eventSelection Selects either discharge volume of all events or only the discharge volume of the event with the largest discharge volume.
threshold Only measurements are used with a value above this value. Default is 0.
maxGapDuration When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.
CoefficientSets
No connection to CoefficientSets.
Output
Configuration Example
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
Configuration example for discharge volume of the largest event of each month.
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events dischargeVolume largest event">
<events>
<dischargeVolume>
<discharge>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeVolume_LargestEvent</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</discharge>
<eventSelection>largest_volume_event</eventSelection>
<volume>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeVolume_LargestEventTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</volume>
</dischargeVolume>
</events>
</transformation>
</transformationModule>
EventsDuration
Information
Transformation: Duration
Transformation Group: Events
Description: Calculates either the net duration or the gross duration of the events initiated in the output period. An event in this transformation is defined as the largest possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the value is larger than the specified threshold parameter. The duration of a single subevent is equal to the duration of the input time step. The duration of an event is the sum of the duration of its subevents plus the duration of the gaps that do not exceed the maxGapDuration parameter, and is registered only in the output period where the event initiated.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report on the duration of sewer spillage events for each month.
Background and Exceptions: The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.
Input
Options
threshold Only measurements are used with a value above this value. Default is 0.
maxGapDuration When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours. In order to calculate the net duration instead of the gross duration, this value has to be set to zero.
outputTimeUnit Defines the time unit of the output (default is day).
CoefficientSets
No connection to CoefficientSets.
Output
output Duration of the selected events using the specified output unit.
Configuration Example
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events net duration">
<events>
<duration>
<input/>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_NetDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events gross duration">
<events>
<duration>
<input>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_GrossDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</input>
<maxGapDuration unit="day"/>
<output>
<timeSeriesSet>
<moduleInstanceId>EventsDuration_GrossDuration</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</duration>
</events>
</transformation>
</transformationModule>
EventsMaximum
Information
Transformation: Maximum
Transformation Group: Events
Description: Calculates the maximum input value of the events initiated in the output period. An event in this transformation is defined as the largest possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the value is larger than the specified threshold parameter.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report the maximum value of the events for each month.
Background and Exceptions: The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.
Input
Options
threshold Only measurements are used with a value above this value. Default is 0.
maxGapDuration When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.
CoefficientSets
No connection to CoefficientSets.
Output
output Maximum of the input values of the events that initiated in the output period.
Configuration Example
Configuration example for calculation of the maximum for events for each month.
<output>
<timeSeriesSet>
<moduleInstanceId>EventsMaximum</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</maximum>
</events>
</transformation>
</transformationModule>
EventsMeanDischargeVolume
Information
Transformation: MeanDischargeVolume
Transformation Group: Events
Description: Calculates the mean discharge volume per event for the events initiated in the output period. An event in this transformation is defined as the largest possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the value is larger than the specified threshold parameter. The discharge volume of a subevent is calculated by multiplying the input value (m3/s) by the duration of the input time step. The discharge volume for an event is the sum of the discharge volumes for its subevents, and is registered only in the output period where the event initiated.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report the mean discharge volume per sewer spillage event for each month.
Background and Exceptions: The unit of the input must be m3/s. The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.
Input
Options
threshold Only measurements are used with a value above this value. Default is 0.
maxGapDuration When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.
CoefficientSets
No connection to CoefficientSets.
Output
Configuration Example
Configuration example for calculation of the mean discharge volume per event for each month.
<transformationModule xmlns:xsi="[Link]" xmlns="[Link]" xsi:schemaLocation="[Link] [Link]" version="1.0">
<!-- input variables -->
<!-- output variables -->
<!-- transformations -->
<transformation id="events dischargeMeanVolume">
<events>
<dischargeMeanVolume>
<discharge>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeMeanVolume</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour" multiplier="6"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</discharge>
<meanVolume>
<timeSeriesSet>
<moduleInstanceId>EventsDischargeMeanVolume</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</meanVolume>
</dischargeMeanVolume>
</events>
</transformation>
</transformationModule>
EventsNumberOfEvents
Information
Transformation: NumberOfEvents
Transformation Group: Events
Description: Calculates the number of events initiated in the output period. An event in this transformation is defined as the largest possible series of subsequent subevents where the duration of gaps (where there are no subevents) is shorter than the specified maxGapDuration parameter. A subevent is defined as a measurement in time in the input where the value is larger than the specified threshold parameter.
Hydrological Information
Purpose and use of Transformation: This transformation can for instance be used to report the number of events initiated each month.
Background and Exceptions: The output time step must be bigger than the input time step. All input values must be non-missing, otherwise the result will be set to a missing value. In case one of the inputs is doubtful, the output flag is set to ORIGINAL_DOUBTFUL.
Input
input Equidistant measurements.
Options
threshold Only measurements are used with a value above this value. Default is 0.
maxGapDuration When there is a gap between two subsequent subevents exceeding this duration, these subevents belong to two separate
events. Default is 24 hours.
CoefficientSets
No connection to CoefficientSets.
Output
Configuration Example
Configuration example for calculation of the number of events initiated each month.
<output>
<timeSeriesSet>
<moduleInstanceId>EventsNumberOfEvents</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep monthdays="--01-01 --02-01 --03-01 --04-01 --05-01 --06-01
--07-01 --08-01 --09-01 --10-01 --11-01 --12-01"/>
<relativeViewPeriod unit="day" start="0" end="113"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</output>
</numberOfEvents>
</events>
</transformation>
</transformationModule>
Filter Transformations
LowPass
FilterLowPass
Information
Transformation: LowPass
Transformation Group: Filter
Description: Low pass filter for discrete time series. This transformation calculates the following difference equation.
Here x is the input, y is the output, t denotes time, b0 to bM are the feedforward coefficients and a1 to aN are the feedback
coefficients. When this transformation runs, then it first retrieves the required previous output values from previous runs, if
available.
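The difference equation itself appears as an image in the original document and is not reproduced in this text. Based on the description of the feedforward coefficients b0 to bM and the feedback coefficients a1 to aN, the implied general form is the following (the sign convention of the feedback terms is an assumption here):

$$y(t) = \sum_{k=0}^{M} b_k\, x(t-k) + \sum_{j=1}^{N} a_j\, y(t-j)$$

Under this assumed form, the coefficients of the first example below (a = 0.4, 0.3 and b = 0.2, 0.1) would give y(t) = 0.2 x(t) + 0.1 x(t-1) + 0.4 y(t-1) + 0.3 y(t-2).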
Hydrological Information
Background and Exceptions: This transformation filters out high frequency fluctuations in time series data.
Input
Input variable x(t). For each calculation of y(t) the input values x(t) to x(t-M) are required. If one of these input values is missing, then the output
value y(t) will be a missing value.
CoefficientSets or CoefficientSetFunctions
The coefficientSet should contain the a and b coefficients for the filter (see the equation above). It is possible to choose the number of coefficients
to use. The first defined a coefficient is a1, the second defined a coefficient is a2 and so on. The last defined a coefficient is aN. The first defined
b coefficient is b0, the second defined b coefficient is b1 and so on. The last defined b coefficient is bM.
When using coefficient set functions (available since build 30246), the a and b coefficient elements can contain tags between "@" signs (e.g.
"@NUMBER@") that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values.
These values can be different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more
information.
Output
Output variable y(t). For each calculation of y(t) the previous output values y(t-1) to y(t-N) are required. When this transformation runs, then it first
retrieves the required previous output values from previous runs, if available. If one of these previous output values is missing, then that output
value is ignored. Effectively this means that it behaves as if all previous missing output values would be 0.
Configuration Examples
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="filter low pass">
<filter>
<lowPass>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<coefficientSet>
<a>0.4</a>
<a>0.3</a>
<b>0.2</b>
<b>0.1</b>
</coefficientSet>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</lowPass>
</filter>
</transformation>
The example below uses coefficientSetFunctions (available since build 30246). Here the coefficients are defined in coefficientSetFunctions, where
@a1@, @a2@, @b0@ and @b1@ refer to location number attributes that are defined in the locationSets configuration file.
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>locationWithAttributes5</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>FilterLowPassWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>locationWithAttributes5</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="10" end="43"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="filter low pass with coefficient set functions test">
<filter>
<lowPass>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<coefficientSetFunctions>
<a>@a1@</a>
<a>@a2@</a>
<b>@b0@</b>
<b>@b1@</b>
</coefficientSetFunctions>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</lowPass>
</filter>
</transformation>
Block
Block
Input
inputVariable
Options
maxGapLength
Output
outputVariable
Description
This transformation fills the gaps in the time series with the last value in the time series before the start of the gap. If a maxGapLength is defined, the gap will only be filled if the size of the gap is smaller than maxGapLength.
Configuration example
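No configuration example is included here in the original text. The sketch below assumes that the block transformation is configured inside the interpolationSerial group, like the other gap-filling functions in this chapter; the wrapper element name is an assumption.
<interpolationSerial>
  <block>
    <inputVariable>
      <variableId>input</variableId>
    </inputVariable>
    <!-- optional: only fill gaps of at most 5 time steps -->
    <maxGapLength>5</maxGapLength>
    <outputVariable>
      <variableId>output</variableId>
    </outputVariable>
  </block>
</interpolationSerial>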
directionLinear
Information
Description: Fills gaps in a time series that contains direction data values (e.g. wind direction in degrees), using linear interpolation. The
direction values are interpolated over the smallest angle. For example halfway between directions 0 degrees and 350
degrees the interpolated value would be 355 degrees. For a gap between two directions that are exactly opposite (e.g. 90
and 270 degrees) the interpolated values will be equal to the last known direction.
Hydrological Information
Purpose and use of Transformation: Linear interpolation of direction data values (e.g. wind direction in degrees).
Background and Exceptions: The direction values are interpolated over the smallest angle. For example, halfway between directions 0 degrees and 350 degrees the interpolated value would be 355 degrees. For a gap between two directions that are exactly opposite (e.g. 90 and 270 degrees) the interpolated values will be equal to the last known direction, because if two directions are exactly opposite, then it is not possible to choose which is the smallest angle to interpolate over. Direction values can also be designated to be "varying". Varying values are represented in the Delft-FEWS graphical user interface with a "?" sign. This transformation handles varying values just like missing values; this means it replaces varying values with an interpolated value.
Input
Options
directionRange
The range of the values in the input time series and output time series. For degrees this range could be e.g. 0 to 360 or e.g. -180 to 180. For radians this range could be e.g. 0 to 2*PI. Input values outside the specified range will be handled like missing values; this means they will be replaced with an interpolated value.
maxGapLength (optional)
Optional maximum length of gap in number of time steps. Gaps equal to or smaller than maxGapLength will be filled with interpolated values.
Gaps larger than maxGapLength will not be filled. If maxGapLength is not defined, then all gaps will be filled with interpolated values.
Output
Configuration Example
<interpolationSerial>
<directionLinear>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<directionRange>
<lowerLimit>0</lowerLimit>
<upperLimit>360</upperLimit>
</directionRange>
<maxGapLength>5</maxGapLength>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</directionLinear>
</interpolationSerial>
extrapolateExponential
Extrapolate exponential
Input
inputVariable
Options
extrapolateDirection
baseValue
recessionConstant
maxGapLength
Output
outputVariable
Description
This transformation will fill the gap at the end or start of a time series by applying an exponential decay to the last value of the time series before the gap. The option extrapolateDirection can be used to indicate whether the gap at the start of the time series, at the end of the time series, or both must be filled. The transformation will extrapolate towards the configured base value with the configured recession constant. The value at a certain time which is n steps away from the start of the gap will be calculated with the following formula:
Y = (Ystartgap - baseValue) * recessionConstant^n + baseValue
If the gap in the time series is larger than the configured maxGapLength, the gap will not be filled.
Configuration example
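No configuration example is included here in the original text. The sketch below assumes that extrapolateExponential is configured inside the interpolationSerial group, like the other gap-filling functions in this chapter; the wrapper element name and the literal value of extrapolateDirection are assumptions.
<interpolationSerial>
  <extrapolateExponential>
    <inputVariable>
      <variableId>input</variableId>
    </inputVariable>
    <!-- assumed values: fill the gap at the end of the series, decaying towards baseValue 0 -->
    <extrapolateDirection>end</extrapolateDirection>
    <baseValue>0</baseValue>
    <recessionConstant>0.9</recessionConstant>
    <maxGapLength>10</maxGapLength>
    <outputVariable>
      <variableId>output</variableId>
    </outputVariable>
  </extrapolateExponential>
</interpolationSerial>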
Transformation - InterpolationSerial Linear
schema: [Link]
This transformation function is used to fill inner gaps in a time series. The inner gaps are filled with linearly interpolated data values.
A gap is defined as a number of consecutive values that are unreliable or missing. An inner gap is defined as a gap for which there is at least one
reliable or doubtful value before the gap and at least one reliable or doubtful value after the gap. This function fills only inner gaps.
Each inner gap is filled using linear interpolation between the value just before the gap and the value just after the gap.
This function has an option to define the maximum length of the gaps that should be filled. Gaps that are equal to or smaller than the defined
maximum gap length will be filled with interpolated values. Gaps that are larger than the defined maximum gap length will not be filled.
In this function one input time series and one output time series must be identified.
inputVariable: A time series with input values. This will typically contain inner gaps.
outputVariable: A time series in which the output will be stored. The output series will contain all input values and the inner gaps will be
filled.
Configuration
A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.
inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.
outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.
maxGapLength
Optional element defining the maximum length of gaps that should be filled. The length is equal to the number of time steps. Gaps equal to or
smaller than maxGapLength will be filled with interpolated values. Gaps larger than maxGapLength will not be filled. If maxGapLength is not
defined, then all gaps will be filled with interpolated values.
Example
<interpolationSerial>
<linear>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<maxGapLength>5</maxGapLength>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</linear>
</interpolationSerial>
Common issues
None reported.
Related items
[Link]
InterpolationBilinear
InterpolationSpatialAverage
schema: [Link]
This transformation function is used to calculate the average value of an input time series (grid or scalar) within the area of a polygon of the output
time series.
scalar
regular grid
irregular grid
When the input is a scalar time series, the average value for a certain polygon in the output will be calculated by finding the points in the input time series which are within the area of the polygon and calculating the average value of these points. When the input is a time series with a grid (regular or irregular), the transformation will determine which cells of the input time series overlap with the output polygon and calculate the average value of these cells. The average value will be a weighted average. The weight of each input cell will be based on how much of the input cell's area covers the output polygon.
The output time series can be an output time series with polygons or an irregular/regular grid. However, slow performance is to be expected with large grids because this transformation is optimized for output time series based on polygons.
The configurator has the possibility to configure a minimum or a maximum value for the output of the transformation. If the output exceeds the configured minimum or maximum value, the output will be truncated to the configured minimum or maximum value.
In this function one input time series and one output time series must be identified.
inputVariable: A time series with input values. This can be a scalar time series or time series with a regular/irregular grid.
outputVariable: A time series in which the output will be stored. The output time series can be a time series with polygons or with a
regular grid.
Configuration
A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.
inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.
outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.
minimumValue
Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.
maximumValue
Optional element defining the maximum value of the input time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.
Example
<average>
<minimumValue>0</minimumValue>
<maximumValue>20000</maximumValue>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</average>
Common issues
None reported.
Related items
[Link]
InterpolationSpatialClosestDistance
schema: [Link]
keywords: transformation, spatial interpolation, closest distance
This transformation function finds the closest location/grid cell in the input time series and uses the value of that location/grid cell for the output.
scalar
regular grid
irregular grid
longitudinal profile
scalar
regular grid
irregular grid
longitudinal profile
If the time series is not a scalar time series, the centre of the grid cell will be used when trying to find the closest input location/grid cell.
The configurator has the possibility to configure a minimum and maximum value for the output. If the output exceeds the minimum or maximum value, the output is truncated to that value.
It is also possible to limit the search radius within which the transformation searches for the closest input location/grid cell. This can be done by setting the searchRadius in the configuration.
In this function one input time series and one output time series must be identified.
inputVariable: a time series with input values. This can be a scalar time series, longitudinal profile or a time series with a regular/irregular
grid.
outputVariable: a time series in which the output will be stored. This can be a scalar time series, longitudinal profile or a time series with a
regular/irregular grid.
Configuration
A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.
inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.
outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.
minimumValue
Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.
maximumValue
Optional element defining the maximum value of the input time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.
searchRadius
Optional element defining the maximum radius in which the transformation searches the closest location/grid cell.
Example
<closestDistance>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<minimumValue>0</minimumValue>
<maximumValue>1000</maximumValue>
<searchRadius>10000</searchRadius>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</closestDistance>
Common issues
None reported.
Related items
[Link]
InterpolationSpatialInverseDistance
schema: [Link]
This transformation function calculates the output based on the weighted average of the closest input locations/grid cells. The weight of each input
location/grid cell will be calculated by the inverse distance of each location.
If the time series is not a scalar time series, the centre of the grid cell will be used when trying to find the closest input location/grid cell.
The configurator has the possibility to configure a minimum and maximum value for the output. If the output exceeds the minimum or maximum value, the output is truncated to that value.
It is also possible to limit the search radius within which the transformation searches for the closest input locations/grid cells. This can be done by setting the searchRadius in the configuration.
The weight of each input value in the output is computed from the inverse distance from the input location/grid cell to the output location/grid cell. The power to which the distance is raised in this calculation can be configured. It is also possible to configure the maximum total number of input values which are used to calculate the output. First the transformation will try to find the closest input locations/grid cells which should be used in the calculation. If one or more of the input values of these time series are missing values, the transformation will not search for the next closest locations/grid cells but will ignore these values in the calculation.
In this function one input time series and one output time series must be identified.
inputVariable: a time series with input values. This can be a scalar time series, longitudinal profile or a time series with a regular/irregular
grid.
outputVariable: a time series in which the output will be stored. This can be a scalar time series, longitudinal profile or a time series with a
regular/irregular grid.
Configuration
A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.
inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.
outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.
minimumValue
Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.
maximumValue
Optional element defining the maximum value of the input time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.
searchRadius
Required element defining the maximum radius in which the transformation searches the closest location/grid cell.
inverseDistancePower
Required element defining the power to which the inverse distance will be raised to calculate the weight factor of the input location/grid cell.
numberOfPoints
Required element defining the maximum number of points/grid cells which will be used to calculate the output.
Example
<inverseDistance>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<minimumValue>0</minimumValue>
<maximumValue>10000</maximumValue>
<searchRadius>100000</searchRadius>
<inverseDistancePower>2</inverseDistancePower>
<numberOfPoints>3</numberOfPoints>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</inverseDistance>
Common issues
None reported.
Related items
[Link]
InterpolationSpatialMax
Max: Calculates the maximum of the input values within the output polygons.
Information
Transformation: max
Description: Calculates the maximum of the input values within each polygon specified in the output.
Hydrological Information
Purpose and use of Transformation: Can be used to compute the maximum radar cell value within a catchment (polygon).
Input
Output
Configuration Example
<interpolationSpatial>
<max>
<inputVariable>
<variableId>grid</variableId>
</inputVariable>
<outputVariable>
<variableId>polygon1</variableId>
</outputVariable>
</max>
</interpolationSpatial>
InterpolationSpatialMin
Min: Calculates the minimum of the input values within the output polygons.
Information
Transformation: min
Description: Calculates the minimum of the input values within each polygon specified in the output.
Hydrological Information
Purpose and use of Transformation: Can be used to compute the minimum radar cell value within a catchment (polygon)
Input
Output
Configuration Example
<interpolationSpatial>
<min>
<inputVariable>
<variableId>grid</variableId>
</inputVariable>
<outputVariable>
<variableId>polygon1</variableId>
</outputVariable>
</min>
</interpolationSpatial>
InterpolationSpatialSum
schema: [Link]
This transformation function is used to calculate the sum of an input time series (grid or scalar) within the area of a polygon of the output time
series.
scalar
regular grid
irregular grid
When the input is a scalar time series, the sum for a certain polygon in the output will be calculated by finding the points in the input time series which are within the area of the polygon and calculating the sum of the input values of these points. When the input is a time series with a grid (regular or irregular), the transformation will determine which cells of the input time series overlap with the output polygon and calculate the sum of these cells. If an input cell is only partly within the output polygon, its value is only accounted for the part which covers the output polygon.
The output time series can be an output time series with polygons or an irregular/regular grid. However, slow performance is to be expected with large grids because this transformation is optimized for output time series based on polygons.
The configurator has the possibility to configure a minimum or a maximum value for the output of the transformation. If the output exceeds the configured minimum or maximum value, the output will be truncated to the configured minimum or maximum value.
In this function one input time series and one output time series must be identified.
inputVariable: A time series with input values. This can be a scalar time series or time series with a regular/irregular grid.
outputVariable: A time series in which the output will be stored. The output time series can be a time series with polygons or with a
regular grid.
Configuration
A basic configuration of the function is described below. This describes the main elements and attributes required and provides an example
configuration.
inputVariable
Required element defining the identifier of the input time series with input values. This Id must reference a valid input time series.
outputVariable
Required element defining the identifier of the output time series with output values. This Id must reference a valid output time series.
minimumValue
Optional element defining the minimum value of the output time series. If the output value is lower than the configured minimum value the output
value will be equal to the configured minimum value.
maximumValue
Optional element defining the maximum value of the input time series. If the output value is higher than the configured maximum value the output
value will be equal to the configured maximum value.
Example
<sum>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<minimumValue>0</minimumValue>
<maximumValue>10000</maximumValue>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</sum>
Common issues
None reported.
Related items
[Link]
InterpolationSpatialWeighted
Information
Transformation: Weighted
Transformation Group: InterpolationSpatial
Description: For each time step this transformation calculates the weighted average of the input variables. The weights are re-scaled so
that the total weight becomes 1. If for a given time an input variable has a missing value, then for that time that input
variable is ignored and the weights of the other input variables are re-scaled so that the total weight becomes 1.
Hydrological Information
Purpose and use of Transformation: This transformation can for example be used to calculate the weighted average of the amount of rainfall of a number of locations in a catchment.
Background and Exceptions:
Input
One or more weighted input variables. Each input variable has a weight.
Options
minInputValuesRequired (optional)
This is the minimum number of input variables that should have a non-missing value for the calculation. If for a given time the number of input
variables that have a non-missing value is less than this configured minimum, then for that time the output value will be a missing value. This can
be used for example to avoid getting output values of calculations for which very few input variables are available, because such calculations
would be inaccurate. If minInputValuesRequired is not specified, then it will be set to 1.
Output
Weighted average.
Configuration Example
<interpolationSpatial>
<weighted>
<weightedInputVariable>
<inputVariable>
<variableId>location1</variableId>
</inputVariable>
<weight>0.3</weight>
</weightedInputVariable>
<weightedInputVariable>
<inputVariable>
<variableId>location2</variableId>
</inputVariable>
<weight>0.2</weight>
</weightedInputVariable>
<weightedInputVariable>
<inputVariable>
<variableId>location3</variableId>
</inputVariable>
<weight>0.1</weight>
</weightedInputVariable>
<weightedInputVariable>
<inputVariable>
<variableId>location4</variableId>
</inputVariable>
<weight>0.4</weight>
</weightedInputVariable>
<minInputValuesRequired>2</minInputValuesRequired>
<outputVariable>
<variableId>average</variableId>
</outputVariable>
</weighted>
</interpolationSpatial>
Lookup transformations
conditional
multiDimensional
simple
Multidimensional
Multidimensional
Input
rowIndexLookupVariable
columnIndexLookupVariable
Coefficient set
interpolationType
extrapolationType
rowIndexLookupTable
columnIndexLookupTable
outputLookupTable
Output
output
Description
The output value will be determined by looking up an output value in the output lookup table. To calculate the output value, the position of the output in the outputLookupTable must be calculated.
First the row position will be calculated. The input time series rowIndexLookupVariable provides the lookup value for the rowIndexLookupTable, which is a simple 1-dimensional lookup table that provides the row position. In the same way the column position will be calculated: the lookup value is provided by the columnIndexLookupVariable time series and the column position is calculated by doing a simple table lookup in the columnIndexLookupTable with that lookup value.
When the row position and the column position of the output value in the outputLookupTable are determined, it is possible to calculate the output value. The output value is determined by a linear interpolation between the 4 surrounding nodes in the outputLookupTable.
Configuration Example
Simple
Simple
Input
input
Coefficient set
interpolationType
extrapolationType
lookupTable
Output
output
Description
The output will be calculated using a simple table lookup with the input value. The output value will be calculated by interpolation or extrapolation in the lookup table. The type of interpolation can be configured in the coefficient set with the interpolationType option. The options available are: linear and logarithmic. When the input value is outside the range of the lookup table, the behaviour of the transformation is determined by the configured extrapolation type:
notAllowed
maxMin
linear
If the first option, notAllowed, is configured, an input value outside the range will return a missing value. The second option will return the minimum value or the maximum value in the lookup table. The third option, linear, enables extrapolation for the function.
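No configuration example is included here in the original text. The sketch below only illustrates the elements listed above; the lookup group wrapper element and the layout of the lookupTable records are assumptions, not confirmed by this document.
<lookup>
  <simple>
    <input>
      <variableId>input</variableId>
    </input>
    <coefficientSet>
      <interpolationType>linear</interpolationType>
      <extrapolationType>maxMin</extrapolationType>
      <!-- hypothetical lookup table: pairs of input and output values -->
      <lookupTable>
        <tableRecord input="0.0" output="0.0"/>
        <tableRecord input="1.0" output="5.0"/>
        <tableRecord input="2.0" output="20.0"/>
      </lookupTable>
    </coefficientSet>
    <output>
      <variableId>output</variableId>
    </output>
  </simple>
</lookup>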
Merge Transformations
Merge Transformations
forecasts
mean
simple
synoptic
toggle
Simple Merge
Simple
Input
Options
fillGapConstant
Output
outputVariable
Description
This transformation will perform a simple merge operation that functions as a data hierarchy. First the most important time series is evaluated; if a value exists at a time in this series, then this value will be used. If this is not the case, the second time series will be evaluated. This procedure will continue until a valid value is found in one of the time series or until all time series are evaluated. If no valid value exists at a given time in any of the input time series, then a missing value will be returned unless the user has specified a default value with the fillGapConstant option.
The hierarchy of the input will be determined by the order in which the input time series are listed in the configuration. In the configuration example below, inputA would be evaluated before inputB.
Configuration example
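The configuration example referred to above is not included in this extraction. The sketch below illustrates the intended hierarchy using the variables inputA and inputB mentioned in the description; the merge group wrapper element and the inputVariable element layout are assumptions.
<merge>
  <simple>
    <!-- evaluated first: most important series -->
    <inputVariable>
      <variableId>inputA</variableId>
    </inputVariable>
    <!-- evaluated only where inputA has no valid value -->
    <inputVariable>
      <variableId>inputB</variableId>
    </inputVariable>
    <!-- optional default used where none of the inputs has a valid value -->
    <fillGapConstant>0</fillGapConstant>
    <outputVariable>
      <variableId>output</variableId>
    </outputVariable>
  </simple>
</merge>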
Review transformations
StageReview
TidalBalanceReview
Stage Review
Stage Review
Input
observedStage
forecastStage
Options
maxDifference
Output
averageBalanceFirstSegment
averageBalanceSecondSegment
averageBalanceThirdSegment
averageBalanceFourthSegment
startFirstStageRange
startSecondStageRange
startThirdStageRange
startFourthStageRange
endFourthStageRange
Description
The stage review transformation will divide the forecastStage into four equal segments. The lowest segment will start at the lowest forecastStage and the fourth segment will end at the highest forecastStage. The start is rounded down to whole meters and the end is rounded up to whole meters. After the start of the first segment and the end of the fourth segment are calculated, the ranges of the remaining segments are calculated. Secondly, for each day and for each segment an average daily balance is calculated.
Configuration example
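No configuration example is included here in the original text. The sketch below only maps the input, option and output elements listed above onto variables; the review group wrapper element, the stageReview element name and the maxDifference value are assumptions, and only the first two balance outputs are shown (the remaining outputs follow the same pattern).
<review>
  <stageReview>
    <observedStage>
      <variableId>observed</variableId>
    </observedStage>
    <forecastStage>
      <variableId>forecast</variableId>
    </forecastStage>
    <!-- hypothetical option value -->
    <maxDifference>0.5</maxDifference>
    <averageBalanceFirstSegment>
      <variableId>balanceSegment1</variableId>
    </averageBalanceFirstSegment>
    <averageBalanceSecondSegment>
      <variableId>balanceSegment2</variableId>
    </averageBalanceSecondSegment>
    <!-- the third and fourth segment balances and the stage range outputs follow the same pattern -->
    <startFirstStageRange>
      <variableId>rangeStart1</variableId>
    </startFirstStageRange>
    <endFourthStageRange>
      <variableId>rangeEnd4</variableId>
    </endFourthStageRange>
  </stageReview>
</review>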
TidalBalanceReview
TidalBalanceReview
Input
observedTidalStage
forecastTidalStage
Output
tideBalance
Description
The TidalBalanceReview transformation creates an output time series tideBalance which will be the input for the AdjustTide transformation. First the peaks and valleys in the observed time series and the forecast time series are matched. The difference between the observed stage and the simulated stage will be the balance associated with the peak or valley. This procedure applies to the part of the time series before T0. After T0 the times of the peaks and valleys will be determined by identifying the peaks and valleys in the simulated time series and correcting the time of each peak or valley with the lag of the observed tidal time series relative to the simulated time series. The balance will be calculated by multiplying the balance of the associated peak or valley of the previous cycle by 0.8.
Configuration example
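No configuration example is included here in the original text. The sketch below only maps the input and output elements listed above onto variables; the review group wrapper element and the tidalBalanceReview element name are assumptions.
<review>
  <tidalBalanceReview>
    <observedTidalStage>
      <variableId>observed</variableId>
    </observedTidalStage>
    <forecastTidalStage>
      <variableId>forecast</variableId>
    </forecastTidalStage>
    <tideBalance>
      <variableId>tideBalance</variableId>
    </tideBalance>
  </tidalBalanceReview>
</review>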
StageDischarge transformations
mergedRatingCurves
power
table
ratingCurve
StageDischargeMergedRatingCurves
Information
Transformation: MergedRatingCurves
Transformation Group: StageDischarge
Description: Merges two rating curves using a time dependent weight variable and uses the resulting rating curve to convert stage input
values to discharge output values. For each timeStep in the output time series, first the specified two rating curves are
merged using the value of the weight input time series at that timeStep. If weight is 1, then uses the first rating curve. If
weight is 0, then uses the second rating curve. If weight is between 0 and 1, then interpolates linearly between the first and
the second rating curve to get the merged rating curve. Then the merged rating curve is used to convert the stage input
value for that timeStep to a discharge output value. This can only use rating curves that are stored as time series in the
dataStore. This uses the equation Q_output = weight*Q_ratingCurve1(H_input) + (1 - weight)*Q_ratingCurve2(H_input).
Hydrological Information
Purpose and use of Transformation: This can be used e.g. for a river reach with a lot of vegetation in the summer resulting in a higher hydraulic roughness. Then, you might want to handle a rating curve for the winter period (a level of 1 m corresponds to 5 m3/s) and one for the summer (the same water level represents only 3 m3/s due to the higher roughness). The weight value can be used for shifting in between: weight=0 for the winter, weight=1 for the summer, and a weight value of 0.5 for a certain time in spring when vegetation is growing.
Background and Exceptions: The weight value must always be in the range 0 <= weight <= 1. If the ratingCurve(s) are not found, then a warning message is logged and the output is set to missing values.
Input
stage input variable with stage (water level) values.
weight input variable with weight values.
ratingCurve
The transformation configuration references two rating curves that are merged and used to convert stage to discharge values for this
transformation. This can only use rating curves that are stored as time series in the dataStore. To import ratingCurves into the dataStore use
timeSeriesImport module with importType pi_ratingcurves to import a file in the pi_ratingcurves.xsd format. The ratingCurves are referenced using
their locationId and qualifierId. If no locationId is specified, then the locationId of the stage input variable is used.
Output
Configuration Example
<stageDischarge>
<mergedRatingCurves>
<stage>
<variableId>input</variableId>
</stage>
<weight>
<variableId>eta</variableId>
</weight>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>winterRatingCurve</qualifierId>
</ratingCurve>
<ratingCurve>
<locationId>H-2001</locationId>
<qualifierId>summerRatingCurve</qualifierId>
</ratingCurve>
<discharge>
<variableId>output</variableId>
</discharge>
</mergedRatingCurves>
</stageDischarge>
StageDischargePower
Information
Transformation: Power
Description: Converts stage (H) to discharge (Q) for an open cross section using a power equation.
Hydrological Information
Purpose and use of Transformation: Used to convert stage (water level) to discharge (water flow) for an open cross section.
Input
CoefficientSets or CoefficientSetFunctions
The coefficient set should contain the a, b and c coefficients for the equation and the type of calculations for which the coefficient set is valid.
When using coefficient set functions (available since build 30246), the a, b, c and type elements can contain tags between "@" signs (e.g.
"@NUMBER@") that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values.
These values can be different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more
information.
a: Coefficient a in the power equation.
b: Coefficient b in the power equation.
c: Coefficient c in the power equation.
type: Type of calculations for which the coefficient set is valid. Can be level_to_flow, flow_to_level or level_to_flow_and_flow_to_level.
Output
Configuration Example
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="stage discharge power test">
<stageDischarge>
<power>
<stage>
<variableId>input</variableId>
</stage>
<coefficientSet>
<a>57.632</a>
<b>3.01</b>
<c>2.147</c>
<type>level_to_flow_and_flow_to_level</type>
</coefficientSet>
<discharge>
<variableId>output</variableId>
</discharge>
</power>
</stageDischarge>
</transformation>
The example below uses coefficientSetFunctions (available since build 30246). Here the elements 'a', 'b', 'c' and 'type' are defined in
coefficientSetFunctions, where @A@, @B@ and @C@ refer to location number attributes and @type@ refers to a location text attribute defined
in the locationSets configuration file.
<variable>
<variableId>input</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>output</variableId>
<timeSeriesSet>
<moduleInstanceId>StageDischargePowerWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>locationWithAttributes1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="60"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="stage discharge power with coefficient set functions test">
<stageDischarge>
<power>
<stage>
<variableId>input</variableId>
</stage>
<coefficientSetFunctions>
<a>@A@</a>
<b>@B@</b>
<c>@C@</c>
<type>@type@</type>
</coefficientSetFunctions>
<discharge>
<variableId>output</variableId>
</discharge>
</power>
</stageDischarge>
</transformation>
StageDischarge table
Table
Input
stage
Coefficient set
authoriseExtrapolation
interpolationType
minimumStage
tableRecord
Output
discharge
Description
This transformation transforms a stage value to a discharge value by doing a table lookup. The coefficient set used in this transformation has
an option 'type'. The type indicates whether the lookup table can be used in a discharge to stage transformation, a stage to discharge transformation, or
both. If a coefficient set that is defined as flow_to_level is used in this type of transformation, an error will be issued. The
authoriseExtrapolation option enables or disables extrapolation. The interpolationType can be used to configure the type of
interpolation used:
linear
logarithmic
When the option logarithmic is selected, the calculation method is almost the same as the method used for the linear option. The
only difference is that the calculation is done with the natural logarithm of the lookup value and the natural logarithm of the table values.
The minimumStage option allows configurators to enter a minimum stage value. Stage values lower than this minimum will return the lowest
discharge value in the lookup table.
The table records form the actual lookup table. Each tableRecord is a single entry in the lookup table with a stage and a discharge value.
Note that it is also possible to define an offset for each tableRecord. This offset is applied as a positive offset to the stage value. An offset
applies to the tableRecord in which it is defined and to the records above that record, until a new offset is defined.
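Purely as an illustration of the description above, such a record might look like the following; whether the offset is expressed as a tableRecord attribute, and its exact name, are assumptions:
<tableRecord discharge="3.85" stage="0.686" offset="0.05"/>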
Configuration example
<stageDischarge>
<table>
<stage>
<variableId>input</variableId>
</stage>
<coefficientSet>
<type>level_to_flow</type>
<authoriseExtrapolation>true</authoriseExtrapolation>
<interpolationType>linear</interpolationType>
<tableRecord discharge="0" stage="0.433"/>
<tableRecord discharge="0.0595" stage="0.457"/>
<tableRecord discharge="0.190" stage="0.488"/>
<tableRecord discharge="1.84" stage="0.610"/>
<tableRecord discharge="3.85" stage="0.686"/>
<tableRecord discharge="6.71" stage="0.762"/>
<tableRecord discharge="14.9" stage="0.914"/>
<tableRecord discharge="306" stage="4.88"/>
<tableRecord discharge="340" stage="4.95"/>
<tableRecord discharge="377" stage="5.03"/>
<tableRecord discharge="408" stage="5.09"/>
<tableRecord discharge="419" stage="5.11"/>
</coefficientSet>
<discharge>
<variableId>output</variableId>
</discharge>
</table>
</stageDischarge>
count
kurtosis
max
mean
median
min
percentileExceedence
percentileNonExceedence
quartile
rootMeanSquareError
rsquared
skewness
standardDeviation
sum
variance
Structure Transformations
crumpWeir
A Crump weir is a standard design weir for moderate flow rates.
Note: When the downstream level of water in the river should be taken into account for backwater correction, use the crumpWeirBackwater
transformation instead of the crumpWeir transformation.
Input
1. headLevel: is the upstream level of water in the river measured from the top of the crest.
2. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.
Coefficient set
1. pUpValue: is the distance in metres from the bottom of the river, or crest, to the top of the crest.
2. width: is the width of the weir crest in metres.
3. sSlope: is the side slope of the weir. Crump weirs that have side slopes are uncommon, and if present only possible on one crest. The
slope is the ratio of the horizontal distance over a vertical distance of one metre (1 m), expressed as a number.
4. dischargeCoefficient: weir discharge coefficient (default is equal to sqrt(g)).
5. energyHeadCorrection: if true energy head correction is taken into account (default is true).
Furthermore, one to three crests can be defined. The first crest is mandatory and only contains the relativeLevel attribute. The relativeLevels are
the required head level adjustments for each crest of the weir. Each relativeLevel is subtracted from the headLevel to give a true indication of
what the actual head level is over each crest. This is required as each crest at a particular weir may have different heights.
The second and third crests also must contain the width attribute. The user has the choice of entering one of the following:
Output
Description
Calculates discharge of a triangular profile or Crump weir. The flow calculations are done using measurements taken at the weir.
The following formula is used:
Q = Cg . Cd . b . H1^(3/2)
Parameters:
Cg = sqrt(g)
Cd = 0.633
If the modular limit is exceeded, a missing value will be entered for the discharge. Further limiting conditions are:
P >= 0.06 m
b >= 0.30 m
h1/P <= 3
b/h1 >= 2
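As a worked illustration of the formula above, with assumed values b = 10 m and H1 = 0.64 m (and a crest height P large enough to satisfy the limiting conditions):
Q = sqrt(9.81) . 0.633 . 10 . 0.64^(3/2) = 3.13 . 0.633 . 10 . 0.512 ≈ 10.1 m3/s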
crumpWeirBackwater
A Crump weir backwater is a standard design weir for moderate flow rates.
Input
1. headLevel: is the upstream level of water in the river measured from the top of the crest.
2. tailLevel: is the downstream level of water in the river measured from the top of the crest and can be positive or negative.
3. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.
Coefficient set
1. pUpValue: is the distance in metres from the bottom of the river, or crest, to the top of the crest.
2. width: is the width of the weir crest in metres.
3. sSlope: is the side slope of the weir. Crump weirs that have side slopes are uncommon, and if present only possible on one crest. The
slope is the ratio of the horizontal distance over a vertical distance of one metre (1 m), expressed as a number.
4. dischargeCoefficient: weir discharge coefficient (default is equal to sqrt(g)).
5. energyHeadCorrection: if true energy head correction is taken into account (default is true).
Furthermore, one to three crests can be defined. The first crest is mandatory and only contains the relativeLevel attribute. The relativeLevels are
the required head level adjustments for each crest of the weir. Each relativeLevel is subtracted from the headLevel to give a true indication of
what the actual head level is over each crest. This is required as each crest at a particular weir may have different heights.
The second and third crests also must contain the width attribute. The user has the choice of entering one of the following:
Output
Description
Calculates discharge of a triangular profile or Crump weir with backwater correction. The flow calculations are done using measurements taken at
the weir.
flatVWeir
Flat V weirs are used to calculate the flow of a river or stream. Predominantly, Flat V weirs are used where the flow rates are low and river
sections quite narrow.
Note: When the downstream level of water in the river should be taken into account for backwater correction, use the flatVWeirBackwater
transformation instead of the flatVWeir transformation.
Input
1. headLevel: is the upstream level of water in the river measured from the top of the crest at the bottom of the V.
2. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.
Coefficient set
1. pUpValue: is the distance in metres from the bottom of the river to the top of the crest.
2. width: is the width of the weir crest in metres. Note that there can only be one crest at a Flat V weir.
3. cSlope: is the slope of the "V" at the crest. The slope is the ratio of the horizontal distance over a vertical distance of one metre (1 m),
expressed as a number.
4. sSlope: is the side slope of the weir. Most Flat V weirs don't have a side slope. The slope is the ratio of the horizontal distance over a
vertical distance of one metre (1 m), expressed as a number. There are two different calculation methods for Flat V weirs that are identified
by either having the s-slope equal to 9999, or an actual value.
Output
Description
Calculates discharge of a flat v weir. The flow calculations are done using measurements taken at the weir.
The following formula is used:
Q = Cg . Cd . m . H1^(5/2)
Cg = 4/5 sqrt(g)
Cd = 0.615 for m <= 15
Cd = 0.620 for 15 < m < 30
Cd = 0.625 for m >= 30
m = b/(2 . htr)
Parameters:
hr = reference level
P = height of weir or height of vertex above bottom
L = length of weir
b = width of weir opening or width of triangle
h1 = upstream water level
H1 = upstream energy level
H2 = downstream energy level
htr = height of triangle
B = width of channel
ha = height of opening
If the modular limit is exceeded a missing value will be entered for the discharge.
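As a worked illustration with assumed values: for a triangle width b = 20 m and triangle height htr = 0.5 m, m = 20/(2 . 0.5) = 20, so Cd = 0.620. With H1 = 0.64 m:
Q = (4/5) . sqrt(9.81) . 0.620 . 20 . 0.64^(5/2) = 2.51 . 0.620 . 20 . 0.328 ≈ 10.2 m3/s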
flatVWeirBackwater
Flat V weirs are used to calculate the flow of a river or stream. Predominantly, Flat V weirs are used where the flow rates are low and river
sections quite narrow.
Input
1. headLevel: is the upstream level of water in the river measured from the top of the crest at the bottom of the V.
2. tailLevel: is the downstream level of water in the river measured from the top of the crest at the bottom of the V and can be positive or
negative.
3. type: type can be 'simple' or 'crest_tapping'. With 'crest_tapping' the pressure tapping measurements are taken and used for the flow
calculation.
Coefficient set
1. pUpValue: is the distance in metres from the bottom of the river to the top of the crest.
2. width: is the width of the weir crest in metres. Note that there can only be one crest at a Flat V weir.
3. cSlope: is the slope of the "V" at the crest. The slope is the ratio of the horizontal distance over a vertical distance of one metre (1 m),
expressed as a number.
4. sSlope: is the side slope of the weir. Most Flat V weirs don't have a side slope. The slope is the ratio of the horizontal distance over a
vertical distance of one metre (1 m), expressed as a number. There are two different calculation methods for Flat V weirs that are identified
by either having the s-slope equal to 9999, or an actual value.
Output
Description
Calculates discharge of a flat v weir with backwater correction. The flow calculations are done using measurements taken at the weir.
StructurePumpFixedDischarge Transformation
PumpFixedDischarge
Input
Coefficient set
Output
Description
Calculates discharge of a pump, using a fixed discharge when the pump is on. The fixed discharge is equal to the capacity of the pump and is
defined in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present in the status input
series. At a given time t1 the calculation uses the most recent status input value before t1 to determine if the pump is on. If the pump is off, then
the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals fixedDischarge*(t1 - t0). t0 is the most recent
input time before t1 and fixedDischarge is defined in the coefficientSet. Finally the intermediate discharge is aggregated to the times in the
equidistant output time series.
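A minimal coefficientSet sketch; the description above only names the fixedDischarge element, so the exact nesting shown here is an assumption:
<coefficientSet>
<fixedDischarge>0.75</fixedDischarge>
</coefficientSet>
With this capacity, if the pump is on and the next status time t1 lies 15 minutes after t0, the intermediate discharge at t1 is 0.75*(t1 - t0), which is then aggregated to the output time step.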
StructurePumpHeadDischargeTable Transformation
PumpHeadDischargeTable
Input
Coefficient set
Contains a table with one or more table records. Each record lists the discharge of the pump for a given head. Heads need to be in ascending
order. For head values between records linear interpolation will be applied to get the discharge. For head values outside the table range a
warning will be logged and the discharge will be equal to the first (or last) discharge defined in the table.
Output
Description
Calculates discharge of a pump. When the pump is on, then the discharge equals the capacity of the pump. The capacity of the pump depends on
the head. The discharges for different heads are defined in a table in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present either in the status
input series or in the head input series or in both input series. At a given time t1 the calculation uses the most recent status input value before t1
to determine if the pump is on and the most recent head input value before t1 to lookup the discharge (= previousDischarge) in the head
discharge table. If the pump is off, then the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals
previousDischarge*(t1 - t0). t0 is the most recent input time before t1 (either status or head input time, whichever changed most recently). Finally
the intermediate discharge is aggregated to the times in the equidistant output time series.
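A minimal coefficientSet sketch for the head-discharge table; the tableRecord attribute names are assumptions, chosen to mirror the stage-discharge tableRecord notation used earlier in this section:
<coefficientSet>
<tableRecord head="0.5" discharge="1.2"/>
<tableRecord head="1.5" discharge="0.9"/>
<tableRecord head="2.5" discharge="0.4"/>
</coefficientSet>
With this table, a head of 1.0 would be interpolated linearly to a discharge of 1.05, while a head of 3.0 would log a warning and use the last discharge in the table (0.4).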
StructurePumpSpeedDischargeTable Transformation
PumpSpeedDischargeTable
Input
Coefficient set
Contains a table with one or more table records. Each record lists the discharge of the pump for a given speed. Speeds need to be in ascending
order. For speed values between records linear interpolation will be applied to get the discharge. For speed values outside the table range a
warning will be logged and the discharge will be equal to the first (or last) discharge defined in the table.
Output
Description
Calculates discharge of a speed-controlled pump with a fixed capacity. When the pump is on, then the discharge of the pump depends only on the
speed. The discharges for different speeds are defined in a table in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present either in the status
input series or in the speed input series or in both input series. At a given time t1 the calculation uses the most recent status input value before t1
to determine if the pump is on and the most recent speed input value before t1 to lookup the discharge (= previousDischarge) in the speed
discharge table. If the pump is off, then the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals
previousDischarge*(t1 - t0). t0 is the most recent input time before t1 (either status or speed input time, whichever changed most recently). Finally
the intermediate discharge is aggregated to the times in the equidistant output time series.
StructurePumpSpeedHeadDischargeTable Transformation
PumpSpeedHeadDischargeTable
Input
Coefficient set
Contains a table with one or more table records. Each record contains the discharge of the pump for a particular speed value and a particular
head value. The records need to be sorted on speed. The speed values need to be in ascending order and for each speed value the
corresponding head values need to be in ascending order. For speed or head values between the listed values linear interpolation will be applied
to get the discharge. For speed or head values outside the range of listed values a warning will be logged and the first (or last) defined values will
be used to get the discharge.
For given head and speed input values, the calculation will lookup a discharge value as follows. For each listed speed value the corresponding
head and discharge values are used to create a head discharge table. Then for each listed speed value the corresponding head discharge table is
used to lookup the discharge value corresponding to that listed speed value and the head input value. This way a temporary speed discharge
table is created. Then the speed input value is looked up in the temporary speed discharge table to get the final discharge value.
Output
Description
Calculates discharge of a speed-controlled pump with a head-dependent capacity. When the pump is on, then the discharge of the pump depends
on both the speed and the head. The discharges for different speeds and heads are defined in a table in a coefficientSet.
Input can be equidistant or non-equidistant. First the intermediate result (discharge) is calculated at each time that is present in one or more of the
different input series. At a given time t1 the calculation uses the most recent status input value before t1 to determine if the pump is on and the
most recent speed input value before t1 and the most recent head input value before t1 to lookup the discharge (= previousDischarge) in the
coefficient set tables. If the pump is off, then the intermediate discharge at t1 is 0. If the pump is on, then the intermediate discharge at t1 equals
previousDischarge*(t1 - t0). t0 is the most recent input time before t1 (either status, speed or head input time, whichever changed most recently).
Finally the intermediate discharge is aggregated to the times in the equidistant output time series.
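A minimal coefficientSet sketch for the two-dimensional speed/head lookup; as above, the record and attribute names are assumptions. Records are sorted on speed and, per speed, on head:
<coefficientSet>
<tableRecord speed="500" head="0.5" discharge="0.6"/>
<tableRecord speed="500" head="1.5" discharge="0.4"/>
<tableRecord speed="1000" head="0.5" discharge="1.2"/>
<tableRecord speed="1000" head="1.5" discharge="0.9"/>
</coefficientSet>
For speed 750 and head 1.0, the head-discharge tables for speed 500 and speed 1000 give 0.5 and 1.05 respectively; interpolating these on speed yields a final discharge of about 0.78.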
TimeShift
constant
length
variable
Constant
TimeShift Constant
Input
inputVariable, the time series which has to shift a certain number of time steps
Options
numberOfTimeSteps
Output
Description
The option numberOfTimeSteps defines the number of time steps the transformation has to shift. A positive value will shift the time series to the
future. In the example below, the output time series shiftedInput is shifted backwards by one time step relative to the input time series input.
Configuration example
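A minimal sketch of a constant time shift; the element nesting is an assumption based on the input and option names listed above, and numberOfTimeSteps is set to -1 to match the backwards shift of shiftedInput described above:
<timeShift>
<constant>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<numberOfTimeSteps>-1</numberOfTimeSteps>
<outputVariable>
<variableId>shiftedInput</variableId>
</outputVariable>
</constant>
</timeShift>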
User Transformations
userSimple
userPeriodic
UserPeriodic Transformation
UserPeriodic
Input
It is possible to define embedded variables in this transformation. In the expression both embedded variables and variables defined at the start of
the transformations configuration file can be used. If an embedded variable and a variable defined at the start of the transformations configuration
file have the same variableId, then the embedded variable will be used.
Expression
For instance "X1 + X2 * 3". In the expression reference input variables or coefficients using their id, e.g. "X1 + a" where "X1" is the variableId of a
variable defined elsewhere and "a" is the id of a coefficient defined in a coefficientSet. A variableId or coefficientId should not start with a
numerical character and should not contain operators. The following operators can be used in the expression: +, -, /, *, ^, sin, cos, tan, asin, acos,
atan, sinh, cosh, tanh, asinh, acosh, atanh, log, ln, exp, sqrt, abs, pow. "pi" in lowercase letters is recognised as a standard constant. This means
that the user cannot use variables or coefficients with id "pi".
Coefficient set
Should contain the coefficients that are used in the free format expression. Define the ids and values of the coefficients here, then refer to
the ids of these coefficients in the expression. Make sure that for all the coefficient ids in the free format expression the values are defined here.
Output values will be shifted periodically to within this range, e.g. [0, 360]. The lower and upper limits are inclusive.
Output
1. output: result of the evaluated expression, shifted periodically to within the given output range.
Description
Function specified by a custom free format expression and coefficients. Any number of input variables and coefficients can be used in the free
format expression. The expression may contain general mathematical operators. A function parser is used to evaluate the expression. For each
time step in the output time series the expression is evaluated. Each result is shifted periodically to within the given output range and written to the
output time series.
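As a worked illustration of the periodic shifting: with an output range of [0, 360], an expression result of 365 would be written to the output as 5, and a result of -10 as 350.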
UserSimple Transformation
UserSimple
Input
It is possible to define embedded variables in this transformation. In the expression both embedded variables and variables defined at the start of
the transformations configuration file can be used. If an embedded variable and a variable defined at the start of the transformations configuration
file have the same variableId, then the embedded variable will be used.
Expression
For instance "X1 + X2 * 3" (without the quotes). In the expression input variables or coefficients can be referenced using their id, e.g. "X1 + a"
where "X1" is the variableId of a variable defined elsewhere and "a" is the id of a coefficient defined in a coefficientSet. A variableId or
coefficientId should not start with a numerical character and should not contain operators. The following operators can be used in the expression:
+, -, /, *, ^, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh, log, ln, exp, sqrt, abs, pow. "pi" in lowercase letters is recognised
as a standard constant. This means that the user cannot use variables or coefficients with id "pi".
Furthermore it is possible to use "if statements" in the expression. This can e.g. be used to get one output value if X is greater than 3 and get
another output value if X is equal to or less than 3. For instance, in the expression if(X > 3, 10.5, -2) + 5*a (see the configuration example below)
the if statement will be replaced with 10.5 or -2, depending on the value of the variable 'X'. In this case, if X is greater than 3, the if statement
is replaced with 10.5. If X is equal to or less than 3, the if statement is replaced with -2. The following symbols can be used in an if statement:
> (greater than)
< (less than)
Should contain the coefficients that are used in the free format expression. Define the ids and values of the coefficients here, then refer to the ids
of these coefficients in the expression. Make sure that for all the coefficient ids in the free format expression the values are defined here.
When using coefficient set functions (available since build 30246), the value elements can contain tags between "@" signs (e.g. "@NUMBER@")
that refer to location attributes that are defined in the locationSets configuration file. The tags are replaced by actual values. These values can be
different for different locations and time periods. See 22 Locations and attributes defined in Shape-DBF files for more information.
Output
1. output: result of the evaluated expression.
Description
Function specified by a custom free format expression and coefficients. Any number of input variables and coefficients can be used in the free
format expression. The expression may contain general mathematical operators. A function parser is used to evaluate the expression. For each
time step in the output time series the expression is evaluated and the result is written to the output time series.
Configuration examples
In the example below 'X1' is a reference to a variable and 'a' and 'b' are references to a coefficient.
<variable>
<variableId>X1</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleTest2</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="364"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Y1</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleTest2</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="364"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="user simple test 2">
<user>
<simple>
<expression>(a + b)*X1 - 3</expression>
<coefficientSet>
<coefficient id="a" value="1.34"/>
<coefficient id="b" value="2.5"/>
</coefficientSet>
<outputVariable>
<variableId>Y1</variableId>
</outputVariable>
</simple>
</user>
</transformation>
The example below uses an if statement. Here 'X' is a reference to a variable and 'a' is a reference to a coefficient.
<variable>
<variableId>X</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithIfElseStatementTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Y</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithIfElseStatementTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2001</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="user simple with if else statement test">
<user>
<simple>
<!-- if X > 3, then the expression part if(X > 3, 10.5, -2) is replaced with 10.5
-->
<!-- if X <= 3, then the expression part if(X > 3, 10.5, -2) is replaced with -2 -->
<expression>if(X > 3, 10.5, -2) + 5*a</expression>
<coefficientSet>
<coefficient id="a" value="1.5"/>
</coefficientSet>
<outputVariable>
<variableId>Y</variableId>
</outputVariable>
</simple>
</user>
</transformation>
The example below uses coefficientSetFunctions (available since build 30246). Here 'X' is a reference to a variable and 'a', 'b' and 'c' are
references to coefficients. Here the coefficients are defined in coefficientSetFunctions, where @coef_a@, @coef_b@ and @coef_c@ refer to
location number attributes that are defined in the locationSets configuration file.
<variable>
<variableId>X</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>Q.m</parameterId>
<locationId>locationWithAttributes4</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Y</variableId>
<timeSeriesSet>
<moduleInstanceId>UserSimpleWithCoefficientSetFunctionsTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>locationWithAttributes4</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day"/>
<relativeViewPeriod unit="day" start="0" end="9"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<transformation id="user simple with coefficient set functions test">
<user>
<simple>
<expression>a*(X^2) + b*X + c</expression>
<coefficientSetFunctions>
<coefficient id="a" value="@coef_a@"/>
<coefficient id="b" value="@coef_b@"/>
<coefficient id="c" value="@coef_c@"/>
</coefficientSetFunctions>
<outputVariable>
<variableId>Y</variableId>
</outputVariable>
</simple>
</user>
</transformation>
DayMonth Sample
Samples a multi-year time series to produce only a single data point per year, using the day and month of T0 to determine the sampling moment.
Example
Input series 1:
multi-year (1975-1977) timeseries with daily time step, values available at 00:00h
Input series 2:
multi-year (1975-1977) timeseries with monthly time step, values available at 1st of the Month, 00:00h
Input series 3:
multi-year (1975-1977) timeseries with daily time step, values available at 12:00h
T0=01-01-2010 00:00h
Output series 1:
non-equidistant timeseries with values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h
Output series 2:
non-equidistant timeseries with values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h
Output series 3:
non-equidistant timeseries with missing values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h
T0=01-01-2010 12:00h
Output series 1:
non-equidistant time series with missing values at 01-01-1975, 01-01-1976, 01-01-1977 all 12:00h
Output series 2:
non-equidistant time series with missing values at 01-01-1975, 01-01-1976, 01-01-1977 all 12:00h
Output series 3:
non-equidistant time series with values at 01-01-1975, 01-01-1976, 01-01-1977 all 12:00h
T0=03-02-2010 00:00h
Output series 1:
non-equidistant time series with values at 03-02-1975, 03-02-1976, 03-02-1977 all 00:00h
Output series 2:
non-equidistant time series with missing values at 03-02-1975, 03-02-1976, 03-02-1977 all 00:00h
Output series 3:
non-equidistant time series with values at 01-01-1975, 01-01-1976, 01-01-1977 all 00:00h
The output time series will hold missing values if the input time series has no value, or only missing values, at the exact dayMonth and time of day of T0.
Configuration
This function can be used in the transformation module as well as in the TimeseriesDisplay.
inputVariable
required element defining the identifier of the input time series with multi-year data. This ID must reference a valid input time series
outputVariable
required element defining the identifier of the output time series with data sampled by the dayMonth of T0. This ID must reference a valid
non-equidistant output time series
Example
<sample>
<dayMonthSample>
<inputVariable>
<variableId>ne</variableId>
</inputVariable>
<outputVariable>
<variableId>DM_ne</variableId>
</outputVariable>
</dayMonthSample>
</sample>
Example
<dayMonthSampleFunction/>
Remark:
The dayMonthSample function was developed for use with the PCA and Regression Transformation to conduct multi-year regression analysis.
Principal components analysis (PCA) is used when a data set contains many time series (dimensions), and the dimensions need to be reduced,
while retaining the most significant relationships between the time series. Reducing the number of dimensions reduces the data set size and
removes unrelated variability.
The PCA regression transformation produces a linear regression equation and a root mean square error (RMSE). The PCA linear regression
equation produces an estimate of a parameter, given some combination of the input time series. The RMSE is a measure of dispersion around
the regression line.
The PCA regression transformation was developed to update basin snow models when they drift away from realistic output. Snow updating uses
historic and current snow water equivalence (SWE). Historic and current data come from monitoring stations within or near the basin, and from
simulations of SWE in the basins. Historic observed data are used for PCA for a basin, and can potentially include time series from many
monitoring stations. Current data are current daily SWE values, and are also either simulated (modelled basin) or observed (from monitoring
stations within or near the basin). PCA finds the strongest underlying relationships between the historic observed station time series, and
produces a linear equation. Current SWE values can then be input into the equation, and a PCA estimate of current basin SWE is produced.
Input/output timeseries
Each time series is assigned a variable ID which is used in the actual expression.
On the left hand side of Table 1, a dataset is shown that consists of one basin and three stations with varying start and end times. Station 3 has a
gap.
In the middle of the table, the resulting time series lengths are highlighted in color for various combinations of station and basin pairings.
On the right hand side of the table (light blue highlighting), the default FEWS pairing behavior is shown.
The BPA FEWS gap handling technique uses all of the available data, resulting in a longer dataset (gray highlighting).
b) PCA transformation
i) Data Preprocessing
Before PCA calculations take place, the data set may need to be normalized and standardized (e.g. where two datasets have very different
means or standard deviations, or are not normally distributed). However, no one type of preprocessing is appropriate for all time series. Therefore,
to automatically assess which preprocessing type produces the 'best' results, the FEWS PCA algorithm performs a variety of preprocessing
techniques. The user is presented with the result with the lowest RMSE.
Preprocessing can also standardize the dataset by subtracting the time series mean and dividing by the standard deviation.
Therefore there are eight possible preprocessing types for PCA: square-root and standardizing, square-root and not standardizing, cube-root and
standardizing, cube-root and not standardizing...
The terms in the PCA-derived regression equation are:
'm', 'x', and 'y' are eigenvalues from basin 'm', and stations 'x' and 'y' respectively.
'z' is the PCA derived equation
'a' and 'b' are coefficients
'm' is the dimension of the matrix
'c' is a constant offset, derived by determining the mean of the historical SWE time series
i) Data Preprocessing
Regression preprocessing and iteration procedures are identical to PCA (square-root, cube-root, log10, or no preprocessing). Data can also be
either normalized or not. Therefore, there are eight regression preprocessing types: square-root and standardizing, square-root and not
standardizing, cube-root and standardizing, cube-root and not standardizing...
The user is informed in the FEWS statistics window if regression produces the lowest RMSE, and has been chosen.
Configuration
<variable>
<variableId>Obs_hist</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSNWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SNWE</parameterId>
<locationId>2A16</locationId>
<locationId>2A18</locationId>
<locationId>2A21</locationId>
<locationId>2A22</locationId>
<locationId>2A23</locationId>
<locationId>2A25</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="hour" startoverrulable="true" start="-240" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Sim_hist</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="hour" startoverrulable="true" start="-240" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Obs_current</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSNWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SNWE</parameterId>
<locationId>2A16</locationId>
<locationId>2A18</locationId>
<locationId>2A21</locationId>
<locationId>2A22</locationId>
<locationId>2A23</locationId>
<locationId>2A25</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Current_sim</variableId>
<timeSeriesSet>
<moduleInstanceId>DayMonthSampleSWE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>PCA_swe</variableId>
<timeSeriesSet>
<moduleInstanceId>PCA_MCDQ2IL_RMSE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<qualifierId>pca</qualifierId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>add originals</readWriteMode>
<ensembleId>main</ensembleId>
</timeSeriesSet>
</variable>
<variable>
<variableId>PCA_rmse</variableId>
<timeSeriesSet>
<moduleInstanceId>PCA_MCDQ2IL_RMSE_SnowAnalysis</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SWE</parameterId>
<qualifierId>rmse</qualifierId>
<locationId>MCDQ2IL</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2" end="2"/>
<readWriteMode>add originals</readWriteMode>
<ensembleId>main</ensembleId>
</timeSeriesSet>
</variable>
<!--perform the PCA calculation-->
<transformation id="PCA">
<regression>
<principalComponentAnalysis>
<historicalObserved>
<variableId>Obs_hist</variableId>
</historicalObserved>
<historicalSimulated>
<variableId>Sim_hist</variableId>
</historicalSimulated>
<currentObserved>
<variableId>Obs_current</variableId>
</currentObserved>
<currentSimulated>
<variableId>Current_sim</variableId>
</currentSimulated>
<enableCombinationAnalysis>true</enableCombinationAnalysis>
<estimatedCurrentSimulated>
<variableId>PCA_swe</variableId>
</estimatedCurrentSimulated>
<errorStatistics>
<variableId>PCA_rmse</variableId>
</errorStatistics>
</principalComponentAnalysis>
</regression>
</transformation>
</transformationModule>
<dayMonthSampleFunction/>
<statisticalFunctions>
<statisticalFunction function="principalcomponentanalysisrme">
<observedParameterId>SNWE</observedParameterId>
<simulatedParameterId>SWE</simulatedParameterId>
</statisticalFunction>
</statisticalFunctions>
Selection Transformations
Selection of lows
Selection of peaks
Selection of independent lows
Selection of independent peaks
Selection of maximum
Selection of minimum
Selection of independent lows
Input
Timeseries
Options
Requirements for definitions of low selections using gaps to define independence are:
An attribute "gapLengthInsec" must be defined. The value attribute defines the length of the minimum gap in seconds.
An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of lows to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of lows to consider after T0.
An attribute "skipJustBeforeT0" indicates how many lows to lows just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many lows to lows just after T0. (optional)
or
An attribute "totalNumber" must be defined. The value attribute defines the maximum number of lows to consider.
Output
Configuration example
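A minimal sketch matching the example described below; the element names are assumptions derived from the attribute names listed above:
<selection>
<independentLows>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<gapLengthInsec value="2700"/>
<totalNumberBeforeT0 value="3"/>
<totalNumberAfterT0 value="4"/>
<skipJustBeforeT0 value="2"/>
<skipJustAfterT0 value="2"/>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</independentLows>
</selection>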
In this example:
The time between two local minima (lows) should be at least 2700 seconds or 45 minutes.
Only the last three lows before T0 and the first four lows after T0 are considered.
The first two lows of the last three lows just before T0 are skipped, leaving only the third last one.
Similarly the first two lows just after T0 are skipped, leaving the third and fourth ones.
Selection of independent peaks
Description
This transformation will select only the peaks that are separated in time by at least the defined gap between peaks (independent peaks).
Input
Timeseries
Options
Requirements for definitions of peak selections using gaps to define independence are:
An attribute "gapLengthInsec" must be defined. The value attribute defines the length of the minimum gap in seconds.
An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of peaks to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of peaks to consider after T0.
An attribute "skipJustBeforeT0" indicates how many peaks to skip just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many peaks to skip just after T0. (optional)
or
An attribute "totalNumber" must be defined. The value attribute defines the maximum number of peaks to consider.
Output
Configuration example
In this example:
The time between two local maxima (peaks) should be at least 2700 seconds or 45 minutes.
Only the last three peaks before T0 and the first four peaks after T0 are considered.
The first two peaks of the last three peaks just before T0 are skipped, leaving only the third last one.
Similarly the first two peaks just after T0 are skipped, leaving the third and fourth ones.
Selection of lows
Description
Input
Timeseries
Options
In the configuration of low selections there are two choices for refining the selection:
An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of lows to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of lows to consider after T0.
An attribute "skipJustBeforeT0" indicates how many lows to skip just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many lows to skip just after T0. (optional)
or
An attribute "totalNumber" must be defined. The value attribute defines the maximum number of lows to consider.
Output
Configuration example
In this example:
Only the last three lows before T0 and the first four lows after T0 are considered.
The first two lows of the last three lows just before T0 are skipped, leaving only the third last one.
Similarly the first two lows just after T0 are skipped, leaving the third and fourth ones.
Selection of maximum
Description
Set of rules to allow selection of maximum values from an input time series.
Input
Timeseries
Options
An optional attribute "selectNumberOfHighestMax" may be defined. The value attribute defines the number of highest maximum values
which will be written to the output timeseries.
The periodTransformation may be applied to this transformation (see Configuration example 2 below).
Output
Configuration example 1
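A minimal sketch for this case, mirroring the structure of the minimum selection example later in this section; using variableId references instead of embedded time series sets is an assumption, and selectNumberOfHighestMax is omitted so that only the single highest value is selected:
<selection>
<maximum>
<inputVariable>
<variableId>input</variableId>
</inputVariable>
<outputVariable>
<variableId>output</variableId>
</outputVariable>
</maximum>
</selection>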
In this example:
The highest maximum value of the input series is returned by the output time series.
Configuration example 2
<periodTransformation>
<period>
<season>
<startMonthDay>--04-01</startMonthDay>
<endMonthDay>--03-31</endMonthDay>
</season>
</period>
<selection>
<maximum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<selectNumberOfHighestMax>3</selectNumberOfHighestMax>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionPeriodMaximumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H.max</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</outputVariable>
</maximum>
</selection>
</periodTransformation>
In this example:
For each hydrologic year, the three highest maximum values are returned by the output time series.
Selection of minimum
Description
Set of rules to allow selection of minimum values from an input time series.
Input
Timeseries
Options
An optional attribute "selectNumberOfLowestMin" may be defined. The value attribute defines the number of lowest minimum values
which will be written to the output timeseries.
The periodTransformation may be applied to this transformation (see Configuration example 2 below).
Output
Configuration example 1
<selection>
<minimum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionMinimumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="0" end="365"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionMinimumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>H-2010</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-5" end="15"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</outputVariable>
</minimum>
</selection>
In this example:
The lowest minimum value of the input series is returned by the output time series.
Configuration example 2
<periodTransformation>
<period>
<season>
<startMonthDay>--04-01</startMonthDay>
<endMonthDay>--03-31</endMonthDay>
</season>
</period>
<selection>
<minimum>
<inputVariable>
<timeSeriesSet>
<moduleInstanceId>Import</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="day" multiplier="1"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</inputVariable>
<selectNumberOfLowestMin>3</selectNumberOfLowestMin>
<outputVariable>
<timeSeriesSet>
<moduleInstanceId>SelectionPeriodMinimumFunctionTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>RH_24H.max</parameterId>
<locationSetId>KNMIDAG</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="nonequidistant"/>
<relativeViewPeriod unit="day" start="-2924" end="0"/>
<readWriteMode>editing visible to all future task runs</readWriteMode>
</timeSeriesSet>
</outputVariable>
</minimum>
</selection>
</periodTransformation>
In this example:
For each hydrologic year, the three lowest minimum values are returned by the output time series.
Selection of peaks
Description
Input
Timeseries
Options
In the configuration of peak selections there are two choices for refining the selection:
An attribute "totalNumberBeforeT0" must be defined. The value attribute defines the maximum number of peaks to consider before T0.
An attribute "totalNumberAfterT0" must be defined. The value attribute defines the maximum number of peaks to consider after T0.
An attribute "skipJustBeforeT0" indicates how many peaks to skip just before T0. (optional)
An attribute "skipJustAfterT0" indicates how many peaks to skip just after T0. (optional)
or
An attribute "totalNumber" must be defined. The value attribute defines the maximum number of peaks to consider.
Output
Configuration example
In this example:
Only the last three peaks before T0 and the first four peaks after T0 are considered.
The first two peaks of the last three peaks just before T0 are skipped, leaving only the third last one.
Similarly the first two peaks just after T0 are skipped, leaving the third and fourth ones.
21 Secondary Validation
What: [Link]
Entry in ModuleDescriptors:
<description>SecondaryValidation</description>
<className>[Link]</className>
The SecondaryValidation module can be used to perform certain checks on time series data and generate log messages when the specified
criteria are met.
Configuration
An XML file for configuring an instance of the SecondaryValidation module called for example CheckImportedData would be the following:
default: Flag to indicate the version is the default configuration (otherwise omitted).
A SecondaryValidation configuration file is typically located in the ModuleConfigFiles folder and can be used to configure one or more checks.
The configured checks will be processed one by one in the specified order. The checks can generate log messages, which can trigger actions in
the master controller, like e.g. sending warning e-mails. A special type of check is available for automatically modifying flags to 'doubtful' or
'unreliable' per time step when a condition on multiple time series becomes true.
These checks are intended for generating log events when a specific constraint is violated. The time series configured in these checks will be
processed one by one. If a time series does not pass the check, then the configured log message is logged with the specified event code and
level. The log event code can be used to trigger a certain action in the master controller, e.g. sending warning emails.
minNumberOfValuesCheck: Logs a message when there are not enough values within a configured period.
minNonMissingValuesCheck: Logs a message when there are not enough non-missing values within a configured period. A
non-missing value is a value that is reliable, doubtful or unreliable.
minReliableOrDoubtfulValuesCheck: Logs a message when there are not enough values that are reliable or doubtful within a
configured period.
minReliableValuesCheck: Logs a message when there are not enough reliable values within a configured period.
The seriesComparisonCheck check is available for testing constraints between multiple time series or parameters per time step.
This check verifies constraints between multiple time series sets or multiple parameters and automatically modifies the flags per time step when
the required input data is available (reliable or doubtful) and the specified expression evaluates to true.
Check for setting flags per time step using other timeseries
The flagsComparisonCheck check is available for comparing and setting flags for multiple time series or parameters per time step.
This check determines for each timestep the most unreliable input flag within the input flags, and if it is more unreliable than the output flag it
updates the output flag.
Variable Definitions
The configuration contains variable definitions for one or more time series that can be used as input for checks. Each variable definition contains a
variableId and a timeSeriesSet. The variableId can be used to reference the time series in a check. Alternatively, depending on which check it is,
either variable definitions or variables can be embedded in the checks.
id: Identifier of the check. This is only used in log messages and exception messages.
variable: One or more time series that need to be checked. This can be either an embedded timeSeriesSet or a reference to a
variableDefinition defined at the start of the configuration file. If this contains multiple time series (e.g. for multiple locations), then each
time series is checked individually.
checkRelativePeriod: The check will only consider data in this time period. This time period is relative to the timeZero of the taskrun in
which the module instance runs. The start and end of the period are included. This period overrules any relativeViewPeriods specified in
the timeSeriesSets of the time series.
minNumberOfValues: The minimum required number of values in the time series to pass the check.
logLevel: Log level for the log message that is logged if a time series does not pass the check. Can be DEBUG, INFO, WARN, ERROR
or FATAL. If level is error or fatal, then the module will stop running after logging the first log message.
logEventCode: Event code for the log message that is logged if a time series does not pass the check. This event code has to contain a
dot, e.g. "[Link]", because the log message is only visible to the master controller if the event code contains a dot.
logMessage: Log message that is logged if a time series does not pass the check. It is possible to use the following tags in the
logMessage: %HEADER% and %LOCATION_NAME%. The %HEADER% tag will be replaced with the header of the time series. The
%LOCATION_NAME% tag will be replaced with the name of the location of the time series.
Tag Replacement
Configuration example for checks on amounts of reliable, doubtful, unreliable and missing values
<minNonMissingValuesCheck id="MinNonMissingValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>18</minNumberOfValues>
<logLevel>INFO</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minNonMissingValuesCheck>
<minNumberOfValuesCheck id="MinNumberOfValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>24</minNumberOfValues>
<logLevel>DEBUG</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minNumberOfValuesCheck>
<minReliableOrDoubtfulValuesCheck id="MinReliableOrDoubtfulValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>12</minNumberOfValues>
<logLevel>WARN</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minReliableOrDoubtfulValuesCheck>
<minReliableValuesCheck id="MinReliableValuesCheck">
<variable>
<variableId>input1</variableId>
</variable>
<variable>
<variableId>input2</variableId>
</variable>
<checkRelativePeriod unit="hour" start="-12" end="0"/>
<minNumberOfValues>6</minNumberOfValues>
<logLevel>WARN</logLevel>
<logEventCode>[Link]</logEventCode>
<logMessage>Not enough values available for time series %header%</logMessage>
</minReliableValuesCheck>
</secondaryValidation>
FlagsComparisonCheck
Contents of check for flagsComparisonCheck
Tag Replacement
%EXPRESSION% The expression that caused the flags to be altered.
%HEADER% The header names of the timeseries for which the flags were altered.
%LOCATION_NAME% The name of the locations where the alterations took place.
%PARAMETER_NAME% The name of the parameter where the alterations took place.
It is not possible to compare two different location sets both containing more than one location id, but the following comparisons can be
configured:
For each timestep, the most unreliable flag in the inputVariables is determined, e.g. unreliable > doubtful > reliable.
If the most unreliable flag in the inputVariables is unreliable, and the corresponding flag in the outputVariable is reliable or doubtful, it is made
unreliable as well.
If the most unreliable flag in the inputVariables is doubtful, and the corresponding flag in the outputVariable is reliable, it is made doubtful as well.
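As a worked illustration (hypothetical flags): if at a given timestep the inputVariables carry the flags reliable and doubtful, the most unreliable input flag is doubtful; an outputVariable flag of reliable at that timestep is then changed to doubtful, while an outputVariable flag that is already doubtful or unreliable is left unchanged.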
<!-- comparison of variables with similar location sets, different parameters, does
comparison per location -->
<flagsComparisonCheck id="FlagsComparisonCheck_similarLocationSet">
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation12_H_obs_init1</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location12</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation13_H_obs_init2</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location13</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation12_H_obs2_init3</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationId>location12</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<!-- referred to by locationset5 and locationset6-->
<variableDefinition>
<variableId>locationLocationTestLocation13_H_obs2_init4</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationId>location13</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<variableDefinition>
<variableId>similarLocationSetTest1_H_obs_initSet</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationSetId>locations5</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<variableDefinition>
<variableId>similarLocationSetTest2_H_obs_initSet</variableId>
<timeSeriesSet>
<moduleInstanceId>FlagsComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs2</parameterId>
<locationSetId>locations6</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<readWriteMode>read complete forecast</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<inputVariableId>similarLocationSetTest1_H_obs_initSet</inputVariableId>
<inputVariableId>similarLocationSetTest2_H_obs_initSet</inputVariableId>
<outputVariableId>similarLocationSetTest1_H_obs_initSet</outputVariableId>
<outputVariableId>similarLocationSetTest2_H_obs_initSet</outputVariableId>
<logLevel>INFO</logLevel>
<logEventCode>SecondaryValidation.similarLocationSetTest2_H_obs_initSet</logEventCode>
<logMessage>%AMOUNT_CHANGED_FLAGS% flags set to %OUTPUT_FLAG% by [%CHECK_ID%,
%EXPRESSION%], header=%HEADER%, location(s)=%LOCATION_NAME%</logMessage>
</flagsComparisonCheck>
</secondaryValidation>
SeriesComparisonCheck
Contents of check for seriesComparisonCheck
Tag Replacement
%HEADER% The header names of the timeseries for which the flags were altered.
%LOCATION_NAME% The name of the locations where the alterations took place.
%PARAMETER_NAME% The name of the parameter where the alterations took place.
It is not possible to compare two different location sets both containing more than one location id, but the following comparisons can be
configured:
The expression is always a comparison. Within the XML, the comparison operator is one of (.ne., .eq., .gt., .ge., .lt., .le.). Each variable has to be a
single word without spaces. Mathematical symbols or functions such as e, pi or cos cannot be used as a variableId, because they will be interpreted
mathematically. Note that if one of the variables in the expression contains a missing value for a timestep, the expression fails and no flags
will be altered for that timestep. Manually edited flags are also left untouched.
Some mathematical functions worth mentioning are the following (these must be in lowercase):
Function Description
floor( x ) floor
ceil( x ) ceiling
mod( x , y ) modulus (x % y)
sqrt( x ) square root
A simple configuration example for the seriesComparisonCheck is given below; it makes the workflow check the values that are reliable or
doubtful and mark them as unreliable if they are smaller than thirteen.
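The original snippet for that simple check is not reproduced in this extract; the sketch below only illustrates what such a check could look like, reusing the variableDefinition layout of the more complex example further down. The expression element and the trailing log elements are assumptions modelled on the other checks in this section, and the variable id is hypothetical.
<seriesComparisonCheck id="simpleSeriesComparisonCheck">
    <!-- hypothetical sketch: element names around the expression are assumptions -->
    <variableDefinition>
        <variableId>H_obs1_location1</variableId>
        <timeSeriesSet>
            <moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
            <valueType>scalar</valueType>
            <parameterId>H.obs1</parameterId>
            <locationId>location1</locationId>
            <timeSeriesType>external historical</timeSeriesType>
            <timeStep unit="minute" multiplier="15"/>
            <relativeViewPeriod unit="day" start="-30" end="0"/>
            <readWriteMode>read only</readWriteMode>
        </timeSeriesSet>
    </variableDefinition>
    <!-- mark reliable or doubtful values as unreliable when they are smaller than 13 -->
    <expression>H_obs1_location1 .lt. 13</expression>
    <logLevel>WARN</logLevel>
    <logEventCode>SecondaryValidation.simpleSeriesComparisonCheck</logEventCode>
    <logMessage>Value smaller than 13 for %HEADER%, location(s)=%LOCATION_NAME%</logMessage>
</seriesComparisonCheck>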
A more complex sample compares different parameters in similar location sets. It marks values that were reliable or doubtful as unreliable,
in this case first for location1 and then for location2, when the difference between them is bigger than three:
<!-- comparison of variables with similar location sets, different parameters, does comparison
per location -->
<seriesComparisonCheck id="similarLocationSetSeriesComparisonCheck">
<!-- referred to by locationset1 and locationset2-->
<variableDefinition>
<variableId>H_obs1_location1</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>H.obs1</parameterId>
<locationId>location1</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<variableDefinition>
<variableId>locationSet1</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>locationset1</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
<variableDefinition>
<variableId>locationSet2</variableId>
<timeSeriesSet>
<moduleInstanceId>SeriesComparisonCheckTest</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>locationset2</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="day" start="-30" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variableDefinition>
Sample screenshot
The sample screenshot below demonstrates the use of the seriesComparisonCheck. In this case it has been used to set flags to unreliable for
timesteps where the waterlevel measurements upstream are below the measurements downstream. The different output flags have been
displayed using different colors at the bottom of the screenshot. In this case the flags of the values above the yellow part have been set to
unreliable, whereas the flags of the values above the purple line have remained the same.
22 forecastLengthEstimator
Function: Sets the forecast length
Description: The forecastLengthEstimator is a module that can be used at the start of a workflow to set the length of the operations in
the other modules in that workflow.
Preconditions: the endoverrulable attribute in the relative view period in the time series sets must be set to true in all modules you want to
apply the forecast length to
Outcome(s):
Remark(s):
Contents
Overview
Configuration
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference
Overview
The forecastLengthEstimator is a module that can be used at the start of a workflow to set the length of the operations in the other modules in that
workflow. As most models cannot handle gaps in the input data, this option can be useful if you want to run a hydrological model only with the
data available and thus avoid e.g. extrapolating the meteorological forecast data.
Configuration
The forecast length is defined by the length of the external forecast time series (in this example ImportCOSMO2). You can define a minimum
and/or maximum forecast length (minForcastLength / maxForecastLength). If the actual length of the external forecast that is looked at is shorter
than the minimum forecast length, the forecast length is set to this minimum length (in this example 3 hours). If the actual forecast length is longer
than the maximum forecast length, the forecast length is set to this maximum length (in this example 30 hours).
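No configuration snippet for the forecastLengthEstimator survives in this extract. As a hedged sketch only, an instance along the lines described above might look roughly like this; the structure of the time series reference and of the minimum/maximum length elements is an assumption (element names spelled as in the text above), and the parameter and location ids are hypothetical.
<forecastLengthEstimator>
    <!-- hypothetical sketch: the external forecast whose length drives the run length -->
    <timeSeriesSet>
        <moduleInstanceId>ImportCOSMO2</moduleInstanceId>
        <valueType>grid</valueType>
        <parameterId>P.forecast</parameterId>
        <locationId>cosmo2_grid</locationId>
        <timeSeriesType>external forecasting</timeSeriesType>
        <timeStep unit="hour"/>
        <readWriteMode>read only</readWriteMode>
    </timeSeriesSet>
    <minForcastLength unit="hour" multiplier="3"/>
    <maxForecastLength unit="hour" multiplier="30"/>
</forecastLengthEstimator>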
The logging will provide you with information on which forecast length was used in the run; see the example below.
Note
The endoverrulable attribute in the relative view period in time series sets must be set to true in all subsequent modules in which
you want to use the actual forecast length.
Sample input and output
Known issues
Technical reference
23 Decision Module
*** !!! This page is under construction / valid from Delft-FEWS version 2011.01 !!! ***
Decision Module
What [Link]
Entry in ModuleDescriptors:
<description>DecisionModule</description>
<className>[Link]</className>
Please note that at the moment the Decision Module is only available in the development build (2011.01).
Contents
Decision Module
Contents
Overview
Some important prerequisites
Barrier states should be defined
Input from present and future model state
Scalable
Two types of criteria
The decision evaluation process
Configuration
Variables definition
variableId
timeSeriesSet
Schema definition
Rules definition
Schema definition
variables
criticalConditions
transitionRules
DecisionTrees definition
Decision definition
Schema definition
evaluationType
stateDefinitionId
inputState
conditionRules
transitionRules
outputState
DecisionEvaluation
evaluationType
Overview
The Decision Module in Delft-FEWS is used to implement decision logic and evaluation for barriers. With this module we can iteratively evaluate
configured decision rules. The configuration file of the Decision Module contains the definition of one or more Decision Trees. These Decision
Trees defined in the Decision Module are associated with a barrier definition, which is defined in the Barriers configuration file.
The decision logic (criteria) is linked to the barrier state. Barrier states could be for example "the barrier is open", "the barrier is closed", "the
barrier is halfway closed", "at a stage where some additional criteria need to be evaluated", etc. For each of these states, separate decision logic
may be relevant and should be evaluated.
While evaluating the decision logic, relevant input may consist of information from both the present and future model state.
When the barrier is open, and when the forecast results show that water levels at location A will exceed 3 meters, the barrier should start
closing when the water level at the barrier passes the 2 meter mark.
When the barrier is closed, the barrier should be opened when the local water level gradient is negative.
Scalable
Furthermore, the decision logic should be "scalable" (i.e. it should be easy to add additional rules). If we continue with the above example, a
decision rule could also be
When the barrier is open, and when the forecast results show that water levels at location A will exceed 3 meters, the barrier should
close. If the river discharge at location C is below 6000 m3/s, the barrier should start closing when the local water level passes the 2
meter mark. If the river discharge at location C exceeds the 6000 m3/s, the barrier should start closing at slack tide. The barrier is only
allowed to close when the shipping through the location B has been blocked in advance.
With regard to decision logic, we can differentiate between two different types of criteria in the above example:
1. Criteria which indicate that a barrier state change should occur (for example, a state change from "the barrier is open" to "the barrier is
closed").
For example, the forecast water level at location A exceeds 3 meters.
2. Criteria which indicate when a barrier state change should occur.
For example, the local water level passes the 2 meter mark. (Note that this criterion is conditional on criterion 1.)
As the decision logic takes into account both "future" information (if there is a high water event in our forecast horizon, do ...) and information not included in
the model state (in the above example, both the discharge at location C and the "shipping state" are not included in the state of the running
model), we cannot evaluate the decision logic based on triggers in this model. As such, we want to do this in an external module (i.e. FEWS in
this case).
When evaluating the decision logic, it is relevant to take into account the fact that the model state will change after the barrier state has been
updated (changed). To evaluate the decision logic for subsequent steps in the process, this implies that we will need to re-run the model to take
this state change into account. Also, if there are multiple barriers in our area of interest, this implies that if the state of one of these barriers
changes, we need to update our model simulation before we can assess the decision logic for the other barrier(s).
The entire process can be summarized as follows:
1. Run a baseline simulation (forecast) taking the actual state of the barriers as a starting point.
2. Evaluate the decision logic based on the baseline simulation.
3. If relevant criteria are met, the barrier state changes. Here, we distinguish between criteria which indicate if a state change is required
and criteria which indicate when a state change is required.
4. Run a new simulation taking the barrier state change into account.
5. Evaluate the decision logic of this simulation. (Note that the decision logic will only be evaluated over the period following the latest
barrier state change.)
6. Loop this process starting from step 3 until no relevant criteria are met and therefore no state change is required.
While the decision logic is model independent, the model should be fed with the appropriate timeseries representing the appropriate barrier states
(for example timeseries of crest height, crest level and gate height for various barrier elements).
The configuration files are based around timeseries representing the barrier state, which are used and updated in the evaluation process. Each
value in this timeseries represents a model state. For example, if the value of this series is 0, this indicates the barrier is open. If the value of this
series is 1, this indicates the barrier is closed. Etc. This section only describes the configuration of the Decision module file. For an explanation of the
Barriers configuration file, go to the Barriers section.
Configuration
Variables definition
All variables within the Decision Module are time series in the form of Time Series Sets. Within the Decision Module various items make use of a
variableId. This variableId is used elsewhere in the configuration as an alias for that time series.
The identifier assigned to a time series should contain alphanumeric characters only. It should not start with a numerical character.
variableId
required element used to identify the time series in the decisionModule block or in the rules block.
timeSeriesSet
Example variable definition
<variable>
<variableId>[Link]</variableId>
<timeSeriesSet>
<moduleInstanceId>RMM_Structures_Forecast_Processing_Extrapolate</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>SVKW</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<relativeViewPeriod unit="day" endoverrulable="true" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>[Link]</variableId>
<timeSeriesSet>
<moduleInstanceId>RMM_DecisionTree</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>SVKW</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="minute" multiplier="10"/>
<relativeViewPeriod unit="day" endoverrulable="true" start="0" end="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</variable>
Schema definition
Rules definition
The Rules section defines the rules which will be used by the decision entries of each decisionTree.
Schema definition
variables
This section contains a set of variables defined only for the rule definitions. For a description of the variables see the section above.
criticalConditions
The criticalConditions determine if further activity should be realized. The evaluation period for the criticalConditions is determined by the definition
in the decisionModule (evaluation = lastKnownState).
The definition of the criticalConditions reuses the criticalConditionLookUp of the Lookup Table Module.
transitionRules
If certain criticalConditions are met, at a certain moment the state transition should be activated. The moment at which this occurs will be
determined by the transitionRules.
DecisionTrees definition
For each barrier a decisionTree is configured. Each decision element in this decisionTree should be unique (i.e. at each moment in time only one
element can be valid).
Example decisionTree
<barrierId>scheepvaart</barrierId>
<decision id="Stremming (peilsluiting)">
<evaluationType>lastKnownState</evaluationType>
<stateDefinitionId>scheepvaart</stateDefinitionId>
<inputState>
<stateValueId>geen stremming</stateValueId>
</inputState>
<conditionRules>
<allValid>
<anyValid>
<isTrue>[Link].d</isTrue>
<isTrue>[Link].d</isTrue>
</anyValid>
<isFalse>[Link]</isFalse>
</allValid>
</conditionRules>
<transitionRules>
<anyValid>
<isTrue>[Link]</isTrue>
<isTrue>[Link]</isTrue>
</anyValid>
</transitionRules>
<outputState>
<stateValueId>gestremd</stateValueId>
</outputState>
</decision>
<decision id="Stremming (kenteringsluiting)">
...
</decision>
<decision id="Vrijgeven scheepvaart">
...
</decision>
Decision definition
Schema definition
evaluationType
stateDefinitionId
inputState
conditionRules
transitionRules
outputState
DecisionEvaluation
The last section of this file deals with the evaluation of the decision logic. If we are in the first step of the iterative loop, the last known system
(barrier) state is used as initial value. The section within the 'initialConditionalWorkflow' tag is run, where the timeSeriesSets associated with the
variableId's are used for initial values. From this workflow the (external) model is called (for example, a workflow through which Sobek is run from
the FEWS General Adapter).
Example decisionEvaluation
<initialConditionalWorkflow>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<workflowId>Sobek_DSS_Forecast</workflowId>
</initialConditionalWorkflow>
<conditionalWorkflow>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<variableId>[Link]</variableId>
<stateChanges>
<evaluationType>FirstInTime</evaluationType>
<stateChange>
<decisionTreeId>SVKH</decisionTreeId>
</stateChange>
<stateChange>
<decisionTreeId>SVKW</decisionTreeId>
</stateChange>
<stateChange>
<decisionTreeId>scheepvaart</decisionTreeId>
</stateChange>
</stateChanges>
<workflowId>Sobek_DSS_Forecast</workflowId>
</conditionalWorkflow>
After running the model for the first time, we need to evaluate the decision logic prior to a (possible) second run. If a state change occurs in the
decisionTree, we need to run the model again taking this state change into account. From the second iteration and onwards, the section within the
'conditionalWorkflow' tag will be run. Note that we need to evaluate three state changes in this case (SVKH = the position of the Hartelkering,
SVKW = the position of the Maeslantkering and scheepvaart = the "shipping state"), each of which has a separate decisionTree definition.
evaluationType
If one state variable changes value, this will change the overall system state, and hence may affect the evaluation of the other state variables.
There are two options which can be defined:
All: all state variable changes will be taken into account in the next iteration.
FirstInTime: if there is a state change in more than one state variable, only the state change which occurs first in time should be taken
into account in the next iteration. After a new iteration with the model, the other state values will be re-evaluated in this case.
Barriers
*** !!! This page is under construction / valid from Delft-FEWS version 2011.01 !!! ***
Please note that at the moment the Decision Module is only available in the development build (2011.01).
Contents
Overview
Configuration
Overview
The Barriers configuration file is used to define the specifics of one or more barriers used in Fews.
For the use of the Barriers file in combination with the Decision module, we refer to the Decision module section for further details about the
functionality of this module.
In the Barriers file the characteristics of the barrier are defined. First and foremost these are the barrier states.
For example, the state of a certain barrier, which is linked to a particular variable and timeSeriesSet, can be:
"open" (value 0)
"sedimentstop" (value 1)
"closed" (value 2)
"drain" (value 3)
"[Link]" (value 4)
Furthermore, model dependent characteristics like crest level, crest height and gate level are linked to these barrier states. As such, once the
barrier state is known (i.e. we have a timeSeriesSet with values of 0, 1, 2, 3 and 4 in the above case), we can apply a mapping from these state
values to the appropriate input required by the running model. Note that a change from one barrier state to the other will not be instantaneous, but
should take some "rate of change" into account. The appropriate "rate of change" information from one state to the other is also included in this
file, by defining the "speed".
Configuration
work in progress
24 ImportAmalgamate
What ImportAmalgamate
Required no
Description
Configuration
workflowId
importRunMinimalAge
Example
Description
Workflows may produce inefficient blobs that only span a few time steps. These blobs will be amalgamated. After the amalgamate is finished, the
original import runs, with all their imported external data, are scheduled for deletion. Large grids, external forecasts and samples should be imported in
a separate workflow that is not handled by this module.
Configuration
workflowId
One or more workflow ids that import external historical time series over a short time span (scheduled frequently).
importRunMinimalAge
Import runs younger than the specified age are skipped. After the amalgamate has run it is no longer possible to create an archive with the
exact original data available during the run.
Example
[Link]
<className>[Link]</className>
[Link]
<moduleId>ImportAmalgamate</moduleId>
[Link]
<importAmalgamate xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<workflowId>Import_Data</workflowId>
<workflowId>Procesoverzicht_Update</workflowId>
<importRunMinimalAge unit="hour"/>
</importAmalgamate>
06 Configuring WorkFlows
What A [Link]
Required no
Introduction
Workflows are used in DELFT-FEWS to define logical sequences of running forecast modules. The workflow itself simply defines the sequence
with which the configured modules are to be run. There is no inherent information or constraint within the workflow on the role the module has
in delivering the forecasting requirement.
Workflows may contain calls to configured module instances, but may also contain calls to other workflows. In the workflowDescriptors
configuration described in the Regional Configuration section, the properties of the workflows are defined.
All workflows are defined in the Workflows section of the configuration; when working from a filesystem this is the WorkflowFiles directory. Each
workflow will have the same structure and must adhere to the same XML schema definition. Workflows are identified by their name, which is
registered to the system through the workflowDescriptors configuration in the Regional Configuration section.
Workflows
Workflows defined may either be available from the Workflows table – when the configuration is loaded into the database – or available in the
WorkflowFiles directory when the configuration is available on the file system.
When available on the file system, the name of the XML file for configuring a workflow called for example ImportExternal may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
Each processing step in the workflow is referred to as an activity. In defining workflows, several levels of complexity in defining these activities are
available;
Simple activities
Activities with a fallback activity;
Activities to be run as an ensemble.
Figure 142 Elements of the Workflow configuration.
activity
Root element for the definition of a workflow activity. Multiple entries can be defined.
runIndependent
Boolean flag to indicate if the activity is considered to be independent of other activities in the workflows. If the flag is set to "false" (default) then
the failure of this activity will cause the complete workflow to be considered as having failed. No further activities will be carried out. If the flag is
set to "true", then failure of an activity will not cause the workflow to fail. The next activity in the workflow will also be attempted. An indication is
given to the user in the Forecast Management display if one or more workflow activities have failed.
moduleInstanceId
The ID of the moduleInstance to run as the workflow activity. This module instance must be defined in the moduleInstanceDescriptors (see
Regional Configuration) and a suitable module configuration must be available (see Module configurations).
workflowId
The ID of the workflow to run as the workflow activity. This workflow must be defined in the workflowDescriptors (see Regional Configuration) and
a suitable workflow must be available (see Module configurations).
fallbackActivity
A fallback activity may be defined to run if the activity under which it is defined fails. This can be used to run a simple module if the more complex
method used in preference fails, and ensures continuity of the forecasting process. The definition of the fallbackActivity is the same as the
definition of an activity.
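As an illustration, a minimal sketch of an activity with a fallback is given below; the module instance ids are hypothetical.
<activity>
    <runIndependent>false</runIndependent>
    <moduleInstanceId>River_HydrodynamicModel_Forecast</moduleInstanceId>
    <fallbackActivity>
        <!-- only run if the activity above fails -->
        <runIndependent>false</runIndependent>
        <moduleInstanceId>River_RoutingModel_Forecast</moduleInstanceId>
    </fallbackActivity>
</activity>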
ensemble
ensemble:ensembleId
Id of the ensemble to apply in retrieving data from the database. For all time series sets used as input for the activities running as an ensemble, a
request for time series with this Id defined will be made. Ensemble ids in a sub workflow will override this ensembleId. A sub workflow without an
ensembleId will make use of this ensembleId.
ensemble:runInLoop
Boolean flag to indicate if the activity is to be run as many times as there are members in the ensemble, or if it is to be run only once, but will use
all members of the ensemble in that single run. If the value is set to "true", then when running the workflow DELFT-FEWS will first establish how
many members there are in the ensemble, and then run the activity for each member. If the value is set to "false" then the activity will be run only
once. On requesting a time series set within the modules to be run, the database will return all members of that ensemble.
ensembleMemberIndex
Optional field to only run one particular ensemble member. If this field is used only the specified ensemble member will be run.
ensembleMemberIndexRange
Optional field to run a particular range of ensemble members. Processing starts at member start and ends at member end. If end is not specified
the processing will start at start and end at the last member of the ensemble.
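A hedged sketch of both options inside the ensemble element is given below; the start and end attribute names follow the description above, the member numbers are illustrative, and the element order has not been verified against the schema.
<ensemble>
    <ensembleId>COSMO-LEPS</ensembleId>
    <!-- run a single ensemble member only -->
    <ensembleMemberIndex>5</ensembleMemberIndex>
    <runInLoop>true</runInLoop>
</ensemble>
<ensemble>
    <ensembleId>COSMO-LEPS</ensembleId>
    <!-- run members 0 up to and including 10 -->
    <ensembleMemberIndexRange start="0" end="10"/>
    <runInLoop>true</runInLoop>
</ensemble>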
Activities in a workflow or nested workflows may be called only once. This is to avoid mistakes through multiple calls to the
same activity in different locations thus creating ambiguous results, or circular references in defining fallback activities.
When running activities as an ensemble that request time series sets from the database that are not a part of that ensemble, the
default ensembleId should be added to the TimeSeriesSets definition. The default ensemble Id is "main".
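For example, a time series set that is read by an activity running in ensemble mode but that is not itself part of the ensemble could carry the default ensemble id as sketched below; the module instance, parameter and location ids are hypothetical, and the position of the ensembleId element within the set is an assumption.
<timeSeriesSet>
    <moduleInstanceId>ImportObserved</moduleInstanceId>
    <valueType>scalar</valueType>
    <parameterId>H.obs</parameterId>
    <locationId>location1</locationId>
    <timeSeriesType>external historical</timeSeriesType>
    <timeStep unit="hour"/>
    <readWriteMode>read only</readWriteMode>
    <!-- non-ensemble data requested by an ensemble activity -->
    <ensembleId>main</ensembleId>
</timeSeriesSet>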
All time series sets written when running in ensemble mode will have the ensembleId as specified in the workflow ensembleId
element, unless overruled by a local ensembleId defined in the timeSeriesSet on writing.
Examples
The workflow below runs seven moduleInstances. If the first moduleInstance fails in this example all other processing is stopped. If any of the
other activities fail the processing will continue.
<runIndependent>true</runIndependent>
<moduleInstanceId>PrecipitationGaugeToGrid_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Spatial_Interpolation_Precipitation_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>MergePrecipitation_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>GridToCatchments_Forecast</moduleInstanceId>
</activity>
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Singapore_Sobek_Forecast</moduleInstanceId>
</activity>
</workflow>
The example below is more complex and includes several modules that are run in ensemble mode.
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_SpatialInterpolationCOSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Aggregate forecast data for display -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_AggregateForecast_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Disaggregate timeseries at HBV-centroids -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_DisaggregateSeriesCOSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Merge timeseries from historical run and forecast run -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_Merge_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Aggregate inputs for display -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_AggregateInputs_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Interpolate timeseries from historical run and forecast run -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_Interpolate_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Run HBV-model for forecast period-->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Run ErrorModule for forecast period-->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>HBV_Rhein_AR_COSMO-LEPS</moduleInstanceId>
<ensemble>
<ensembleId>COSMO-LEPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
<!--Calculate Statistics-->
<activity>
<runIndependent>true</runIndependent>
<workflowId>Statistics_COSMO-LEPS</workflowId>
</activity>
<!--Export forecast data to wavos format -->
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Rhein_ExportForecast_COSMO-LEPS</moduleInstanceId>
</activity>
</workflow>
07 Display Configuration
Introduction
DELFT-FEWS supports several plug-in displays that can optionally be included in the configuration for a particular forecasting system. These
displays implement the DELFT-FEWS display plug-in interface, and while the list included here is a standard feature of DELFT-FEWS, specific
plug-in displays may be included as well. Multiple instances of each plug-in display can be applied, each with a unique name as registered in the
DisplayInstanceDescriptors (see System configuration). Each plug-in display used must be of a supported type as registered in the
DisplayDescriptors (see System configuration). The display may be initiated from the fewsExplorer by defining a call to the display in the toolbar or
in the tools menu (see configuration of the FEWS Explorer).
Grid display
Longitudinal Display
What-If scenario display
Lookup Table display
Correlation display
The main map display and the time series display are not considered optional and therefore form a part of the System configuration
Contents
01 Grid Display
02 Longitudinal Display
03 What-If Scenario Display
04 Lookup Table Display
05 Correlation Display
06 System Monitor Display
07 Skill Score Display
08 Time Series Modifiers
09 State editor display
10 Interactive forecast display
11 Threshold Display
12 Task Run Dialog Display
13 Manual Forecast Display — Configure the Manual Forecast Display:
14 ChartLayer
15 Schematic Status Display (formerly Scada Display)
16 Modifier display
01 Grid Display
Grid display
The grid display is used in DELFT-FEWS for viewing grid time series. These grid time series can be dynamically animated against a map
background (comparable to the main map display).
The Id of the grid display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the GridDisplay with an Id of FloodMapDisplay is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 143 Example of a configuration of the Grid Display
Besides plotting a grid in the grid display, it is also possible to plot scalar data (Figure 10) and longitudinal profiles (figure 9).
Figure 009 Example of a configuration of the longprofile in Grid Display
title
Name of the Grid Display. When opened this will appear in the title bar of the window.
gridPlotGroup
Definition of a group in the grid display. Each group may have its own set of maps and time series to display. Defining groups creates a tree view
in the left of the display (see example above). Multiple instances may exist.
Attributes;
description
Optional description of the display group/grid plot. Used for reference purposes only
highlight
Optional property to highlight the Group name in bold in the selection filter.
gridPlot
Definition of a grid plot within the display group. Each grid plot forms a node in the tree view. When a gridPlot is selected, the appropriate maps
will be displayed and the time series data retrieved from the database.
Attributes;
timeSeriesSet
Definition of the time series set to be displayed in the selected grid plot. This can refer to one location with valuetype grid or longitudinal profile, or
it can refer to a locationSet of scalars. Contourlines can only be displayed in combination with a regular grid.
classBreaks
Definition of colours to use in displaying the dynamic grid. These are also shown in the legend on the left of the grid display (see example above).
geoMap
Definition of the maps used as a background to the dynamic grid displayed. The layout and zoom extent are also defined in this element.
Figure 145 Elements of the configuration of class breaks
description
missingValueColor
missingValueOpaqueness
Not implemented yet.
When this is true the display unit for the class break values will be displayed in the legend. Default is false. The display unit can be configured in
parameter group.
Definition of the optional ordinal value that will always keep the same colour when the class break colours are rescaled in the grid display. After
rescaling, the highest lowerValue will be changed to the maximum grid value visible in the current zoom extent and the lowest lowerValue will be
changed to the minimum grid value visible in the current zoom extent. The lowerValues and the colours in between will be rearranged to fit
between the minimum and maximum. Thus the colours for given values change.
If no ordinal value is specified, then the colours are just rearranged. However, if e.g. ordinal value = 0 is specified and 0 values have a white
colour, then the rescaling will take this into account so that 0 values always stay coloured white. This can be used for instance when displaying
temperatures, where red colours are used for positive values and blue colours are used for negative values and zero values are coloured white.
lowerColor
Colour definition for the colour in the legend associated with the lowest value in the range.
upperColor
Colour definition for the colour in the legend associated with the highest value in the range.
lowerOpaquenessPercentage
Optional definition of the opaqueness of the colour in the legend associated with the lowest value in the range.
upperOpaquenessPercentage
Optional definition of the opaqueness of the colour in the legend associated with the highest value in the range.
lowerSymbolSize
Optional definition of the size of symbols associated with the lowest value in the range.
upperSymbolSize
Optional definition of the size of symbols associated with the highest value in the range.
lowerValue
Definition of the value at which the colour in the grid displayed changes. The legend will be a gradual change in colours from the lowest colour to
the highest colour, with the number of increments determined by the number of lowerValue items entered. Multiple entries may exist.
color
break
The options described above can be used for definitions of lowerValues that have colors that change gradually between a lowerColor and
upperColor. The break option can be used instead for specifying a discrete lowerValue with an absolute color, symbolSize and
opaquenessPercentage. Multiple entries may exist.
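A hedged sketch of a single break entry is given below; whether color, symbolSize and opaquenessPercentage are child elements or attributes is not shown in this text, so this layout and the values are assumptions.
<break>
    <!-- hypothetical layout: a discrete class with an absolute colour -->
    <lowerValue>0</lowerValue>
    <color>white</color>
    <opaquenessPercentage>100</opaquenessPercentage>
    <symbolSize>4</symbolSize>
</break>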
The background maps to be displayed are defined in the geoMap element. This is an XML implementation of the OpenMap configuration
described also in Appendix C for configuring the main map display (in time this will also be done using the geoMap element).
Figure 146 Elements of the geoMap configuration
The more advanced options are described below. Rather straightforward options like northArrowVisible are self-explanatory.
description
extents
Root element for the definition of a zoom extent. The extents defined will appear in a drop down list in the grid display.
geoDatum
Coordinate system the extents are defined in. Enumeration of available coordinate systems is available in Appendix B.
defaultExtent
Attributes;
name : name of the default zoom extent (displayed in the drop-down list)
extraExtent
Attributes;
Coordinates of the zoom extent. Note that in displaying the maps for the extent defined, the map display will be scaled to fit the extent in the
current display window.
wfsConnection
Notice that you need to specify a mapLayersCacheDir in the [Link], like mapLayersCacheDir=%REGION_HOME%/MapCache
arcSdeConnection
Notice that you need to specify a mapLayersCacheDir in the [Link], like mapLayersCacheDir=%REGION_HOME%/MapCache
serverShapeLayer
To make use of a Wfs or ArcSDE connection you have to use the option for serverShapeLayer.
openStreetMapLayer
To make use of a server that uses the open street map protocol you have to use the option for openStreetMapLayer.
wmsLayer
To make use of a WMS server you have to use the option for wmsLayer.
url : Base url for the wms server. This is everything before the text "VERSION=" in the url. Use &amp; to include a &.
layer name : Layer name to display. It is the part after the text "LAYERS=" till the next & or ; in the url. To find the layer names enter the
url that ends with "request=GetCapabilities" in a browser.
Demo HIRLAM temperature Europe
<url>[Link]</url>
<wmsLayerName>2011-05-26T[Link]Z/HIRLAM-temp/HIRLAM-temp-2m</wmsLayerName>
<cacheDir>$REGION_HOME$/wms_hirlam_cache</cacheDir>
esriShapeLayer
In this section the location of the background shape file can be defined.
An example of the various options, which can be completely mixed, is shown in the picture below.
layer
Attributes;
id : required id of the map layer- must be unique for the current geoMap element.
name : optional name of the map layer defined
description
Optional description of the map layer. Used for reference purposes only.
className
Name of the class used in displaying the map layer. A different class is required for different types of GIS data.
NOTE: Defining a class name allows advanced users to add additional display functionality to the OpenMap utility, and to use this in map
displays in DELFT-FEWS. See the OpenMap documentation for details on how to add additional display classes.
visible
properties
Definition of properties associated with the map layer to be displayed. Properties that need to be defined depend on the class used. At least one
property must be defined. This may be a dummy property. Multiple entries may exist.
string
Definition of a string property. An example is the definition of the geoDatum for displaying shape files using the geoDatumDisplay class.
key
Value
Note: when displaying a shape file layer that does not use WGS 1984 as the coordinate system, a property must be defined that defines the geo
datum. To do this set the key value as "geoDatum" and define the coordinate system using the enumeration in Appendix B.
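For example, a shape layer in a national coordinate system could declare its geo datum along the following lines; whether key and value are attributes or nested elements is an assumption, and the coordinate system name must match the enumeration in Appendix B.
<properties>
    <!-- hypothetical sketch of a geoDatum property for a non-WGS 1984 shape file -->
    <string key="geoDatum" value="Rijks Driehoekstelsel"/>
</properties>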
Configuration (Example)
The following example shows how to configure a Meteosat image as grayScaleImage in the Grid display.
Extract of [Link]
<gridPlot id="MeteoSat">
<timeSeriesSet>
<moduleInstanceId>ImportMeteosat</moduleInstanceId>
<valueType>grid</valueType>
<parameterId>image</parameterId>
<locationId>meteosat</locationId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="minute" multiplier="15"/>
<relativeViewPeriod unit="hour" start="-12" end="36"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<classBreaks>
<lowerColor>black</lowerColor>
<upperColor>white</upperColor>
<lowerValue>0</lowerValue>
<lowerValue>8</lowerValue>
<lowerValue>16</lowerValue>
<lowerValue>24</lowerValue>
<lowerValue>32</lowerValue>
<lowerValue>40</lowerValue>
<lowerValue>48</lowerValue>
<lowerValue>56</lowerValue>
<lowerValue>64</lowerValue>
<lowerValue>72</lowerValue>
<lowerValue>80</lowerValue>
<lowerValue>88</lowerValue>
<lowerValue>96</lowerValue>
<lowerValue>104</lowerValue>
<lowerValue>112</lowerValue>
<lowerValue>120</lowerValue>
<lowerValue>128</lowerValue>
<lowerValue>136</lowerValue>
<lowerValue>144</lowerValue>
<lowerValue>152</lowerValue>
<lowerValue>160</lowerValue>
<lowerValue>168</lowerValue>
<lowerValue>176</lowerValue>
<lowerValue>184</lowerValue>
<lowerValue>192</lowerValue>
<lowerValue>200</lowerValue>
<lowerValue>208</lowerValue>
<lowerValue>216</lowerValue>
<lowerValue>224</lowerValue>
<lowerValue>232</lowerValue>
<lowerColor>orange</lowerColor>
<upperColor>red</upperColor>
<lowerValue>240</lowerValue>
<lowerValue>248</lowerValue>
<lowerValue>255</lowerValue>
</classBreaks>
</gridPlot>
Introduction
The ArcSDE connection in FEWS relies on the open source GeoTools 2.5.5 library to connect to an ESRI ArcSDE or WFS server.
[Link]
[Link]
There is an open source GIS system, named uDig ([Link]), that is based on GeoTools. With this application GeoTools can be
used outside FEWS and the ArcSDE connection can be tested. uDig provides a wizard to set up the connection.
FEWS is not using GeoTools for drawing maps for the simple reason that GeoTools did not exist at the time FEWS development started. It is not
possible to have GeoTools map layers mixed with FEWS map layers because they use different projection techniques.
FEWS uses its own implementations to render shapes and grids to achieve the performance of the maps and grids we see today. FEWS also
provides a shape highlighting feature that is not easy to migrate to GeoTools.
The implementation of the ArcSDE connection in GeoTools 2.5.5 is too slow to use without caching and background loading. This caching is not
provided by GeoTools 2.5.5. A year later GeoTools 2.7.0 (September 2010) was released, which mentions strongly improved ArcSDE performance
and added support for caching.
A (remote) layer is virtually divided into a maximum of 64 tiles with a minimum of 200 shapes on average per tile. These tiles are downloaded on
demand as soon as the user pans/zooms to the area of his interest. To reduce the load on the remote server and the network, and to improve
performance, the downloaded tiles are cached on a file system (directory). When a repaint is requested for a tile and the tile is available in the
cache, the tile from the cache will be used to perform the repaint.
All updates done to a remote layer are visible the next day to the FEWS users. GMT 0.00 is seen as the start of a new day. All tiles not
downloaded today are invalidated. When a repaint of an outdated tile is requested this tile is re-downloaded in the background. While a tile is being
downloaded the text CACHING appears in the lower left of the map. While downloading, the outdated tile is displayed.
Sharing caches
To reduce the load on the ArcSDE server even more, multiple FEWS OC/SA systems can share the same tile cache. When multiple users are
interested in the same area of a remote layer, the area is downloaded from the server once and not for every user separately.
When a tile is repainted and was downloaded earlier today, the tile will not be re-downloaded but the tile from the cache will be used.
A tile cache can also be used when a layer is connected to a shape file located on a (network) file system. A shape tile cache for a shape file can
strongly reduce the memory usage of FEWS. In the FEWS about box the amount of memory used by the shape layers is displayed. When the
shape memory usage is more than 50MB it can be worthwhile to set up a map cache. Tiles are removed from memory when they are no longer
visible. Both the shape file and the tile cache itself can be located on a network drive and shared by multiple users at the same time. Using a tile
cache can also reduce the network load when a shape file is located on a network drive. A tile is not recreated as long as the time stamp of the shape
file is not modified. It is important that detailed layers become visible at an appropriate zoom level so that not too many tiles are visible at the same
time.
Configuration
In a map in the explorer or spatial display configuration it is possible to define one or more connections.
<server>iris2</server>
<port>5154</port>
<database>irist</database>
<user>me</user>
<password>123</password>
<url>[Link]</url>
<connectionId>sde</connectionId>
<layerId>INTWIS2.ODS_GEO_KADASTRALE_PERCELEN</layerId>
<visible>true</visible>
<lineColor>black</lineColor>
<fillColor>white</fillColor>
To see which layer ids are available you can download the earlier mentioned open source GIS [Link]
mapLayersCacheDir=%REGION_HOME%/mapLayersCacheDir
The cache may be shared by multiple users simultaneously and may be located on a network drive.
02 Longitudinal Display
Longitudinal display
Since release 2007.02, the Time Series/Data display is able to handle longitudinal profiles in both graphs and tables. It has therefore
taken over the functionality of the Longitudinal Profile Display discussed below.
The longitudinal display is used in DELFT-FEWS for viewing longitudinal (vector) time series. These time series can be dynamically animated.
The Id of the longitudinal display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the LongitudinalDisplay with an Id of e.g. LongitudinalDisplay is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 148 Root elements of the LongitudinalDisplay configuration
displayGroup
Root element for each displayGroup. A display group forms one of the main nodes in the tree view and may contain multiple displays. Multiple
display groups may be defined.
Parameter;
description
display
Root element for the configuration of a display within a group. Multiple displays may be defined in each group.
Attributes;
description
timeSeriesSet
Time series set to be displayed. This should be a longitudinal time series. The location Id of the time series set must refer to a Branch definition to
allow an x-axis to be defined (see Regional configuration).
xaxis
thresholds
Thresholds may be plotted in the display for identified branch points. If this item is included, then a list of thresholds to be displayed can be
configured.
threshold
Identifier for the threshold to be plotted. Attribute is the threshold id, which is a reference to a thresholdValueSet defined in the ThresholdValueSets
(see Region configuration).
branchLabel
Label in the branch (See branch definition in Region configuration) where the thresholds are to be plotted.
03 What-If Scenario Display
The configuration of the display defines only the time series that what-if scenarios may be applied to. The layout of the display cannot be configured.
The Id of the what-if scenario display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file
for configuring the display with an Id of e.g. WhatIFScenarioFilters is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
description
variableSets
Attributes;
variableId : ID of the variable (group). Appears in list of variable to be selected in defining a what-if scenario in the display.
variableType : Optional type definition of variable (default to "any")
convertDatum : Optional Boolean flag to indicate if datum is to be converted for the what-if scenario defined. This may be required when
defining a typical profile what-if scenario.
configFiles
Template for defining what-if scenario applied to module parameters and module datasets. These templates are used when creating the
what-if scenario. Should be defined using optional/required elements if what-if scenarios for module parameters and module datasets are
to be supported.
Figure 151 Elements of the variable element definition in the What-if display configuration.
timeSerieSet
Time series set the what-if scenario is to be defined for. The relative view period over which the what-if scenario applies changes to the data is
defined in the TimeSeriesSet.
04 Lookup Table Display
For each lookup table to be used interactively, the input series for which scenarios may be interactively applied must be configured. The workflow to
run must also be configured. Note that the report generated by the lookup table module as a result of the run is also viewed locally. As a
consequence the workflow run is normally a copy of the normal lookup table run, with the exception that the report module is configured to send
the reports to a local directory rather than to the web server (via the Report_Export module).
The Id of the lookup table display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. LookupDisplay is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 152 Elements of the Lookup Table display configuration
general
reportDirectory
Root directory on the local OC to which reports generated by the lookup table display are exported and from which they are viewed.
externalBrowser
Address of the external browser used to display reports generated by the interactive lookup table display.
lookupTableDisplayDescriptor
Root element for the description of a lookup table. Each entry defined will be available in a drop down list in the lookup table display, allowing
lookup tables to be run individually. Multiple entries may exist.
Attributes;
descriptorId : Id of the lookup table to be run from the lookup table display. This Id is used to populate the drop down list to select a lookup table
to run from.
description
workflowDescriptorId
Id of the workflow to run to explore the impacts of uncertainties in the input variables. This workflow must be defined in the workflows
configuration (see Regional Configuration) and an appropriately configured workflow must be available (see Workflows).
prefixWhatIfScenarioDescriptorId
Optional prefix for the What-If scenario defined. Item is redundant and need not be defined.
inputVariable
Definition of a time series set to be used as an input variable. This variable may be amended interactively by the user.
Attributes;
variableId : ID of the variable (group). Appears in list of variables to be selected in the lookup table display.
variableType : Optional type definition of variable (defaults to "any")
convertDatum : Optional Boolean flag to indicate if datum is to be converted for the what-if scenario defined.
outputVariable
Definition of the output variable for the lookup table display. This variable is currently not used, but is already available for a possible future
extension of the lookup table display.
timeSeriesSet
Definition of the time series set to be used for the input/output variable.
subDir
Optional definition of the sub directory the report produced by the lookup table is read from (may also be defined in the reportFileName item).
reportFileName
HTML file name of the report to be displayed as a result of the run made by the lookup table display. This file name (and sub directory) must be the
same as the report created in the appropriate module in the workflow run. If this is not the case no report will be displayed.
05 Correlation Display
Correlation display
The correlation display is a display plug-in that extends the correlation module functionality. The display allows the user to interactively establish
correlation between upstream and downstream locations and derive a forecast based on these correlations.
For each correlation display to be used interactively, the configuration includes only the definition of the correlationEventSets to be used. The
layout of the display cannot be configured.
The Id of the correlation display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. Correlationdisplay is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 153 Root elements of the correlation display configuration.
inputTimeSerieInfo
TimeSeriesSet defined for the input data. This time series set is used when applying the correlation established to a complete hydrograph.
eventSetsDescriptorId
Id of the correlationEventSets to be used in the display. The event set must be defined in the CorrelationEventSetsDescriptors configuration (See
regional Configuration).
travelTimesDescriptorId
Id of the travelTimesDescriptor to be used in the display. The travel times set must be defined in the TravelTimesDescriptors configuration (See
regional Configuration).
outputTimeSerieInfo
TimeSeriesSet defined for the output data. This time series set is used only for displaying the temporary time series to be displayed when
applying the correlation established to a complete hydrograph. This time series is not saved in the database.
correlationDisplayOptions
Root element of options element for setting line colours in the scatter plot.
scatterplotOptions
Options for setting the properties of the scatter plot. The lineStyle of the scatter plot is "none" by definition (need not be defined).
equationOptions
Options for setting the properties of the regression line determined with the equation established.
preferredColor
Preferred colour for plotting scatter plot / regression line. For enumeration see timeSeriesDisplay Configuration in System Configuration.
markerStyle
Marker style for scatter plot / regression line. For enumeration see timeSeriesDisplay Configuration in System Configuration.
markerSize
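Putting the options described above together, a hedged sketch of the options block could look like this; the colour, marker style and marker size values are illustrative, see the timeSeriesDisplay configuration in System Configuration for the allowed enumerations.
<correlationDisplayOptions>
    <scatterplotOptions>
        <preferredColor>blue</preferredColor>
        <markerStyle>circle</markerStyle>
        <markerSize>4</markerSize>
    </scatterplotOptions>
    <equationOptions>
        <preferredColor>red</preferredColor>
    </equationOptions>
</correlationDisplayOptions>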
06 System Monitor Display
Required no
Description
Configuration
importStatus
description
tabName
tabMnemonic
defaultTimeThreshold
extraTimeThreshold
bulletinBoard
Description
Configuration file for the optional elements of the System Monitor display. These are:
Configuration
importStatus
The Import Status Tab shows the last time a data type has been imported and can be colour coded based on the amount of time since the last
import. An example file is attached.
description
Optional description
tabName
Required element that defines the name of the tab in the user interface.
tabMnemonic
defaultTimeThreshold
Default color coding for all datafeeds. The next element (extraTimeThreshold) can override these settings per datafeed.
Each timeTreshold element (see figure above) indicates a minimum age needed to switch to a certain colour.
extraTimeThreshold
This element is similar to the defaultTimeThreshold element. However, in this case the colours are defined separately for each datafeedId.
The datafeedId is defined in the import module. If no datafeedId is configured in the Import module the directory from which the
files have been imported is used.
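A hedged sketch of how the importStatus elements described above could be combined is shown below. The timeTreshold attribute names and the colour and time values are assumptions for illustration only and should be verified against the System Monitor display schema; the datafeedId must match one defined in the import module.
<importStatus>
  <tabName>Import Status</tabName>
  <tabMnemonic>I</tabMnemonic>
  <defaultTimeThreshold>
    <!-- illustrative colour coding: orange after 6 hours, red after 24 hours -->
    <timeTreshold color="orange" unit="hour" multiplier="6"/>
    <timeTreshold color="red" unit="hour" multiplier="24"/>
  </defaultTimeThreshold>
  <extraTimeThreshold>
    <!-- stricter colour coding for a single data feed -->
    <datafeedId>MeteoFeed</datafeedId>
    <timeTreshold color="red" unit="hour" multiplier="2"/>
  </extraTimeThreshold>
</importStatus>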
bulletinBoard
The bulletinBoard tab allows users to manually add log messages to the system. In order to use this the following should be added to the
configuration file:
<bulletinBoard>
<tabName>Bulletin Board</tabName>
<tabMnemonic>B</tabMnemonic>
</bulletinBoard>
default Flag to indicate the version is the default configuration (otherwise omitted).
Further information on the use of the skill score display can be found in 11 Skill Scores Display.
Background
The first step is to set up a contingency table. Given criteria on time intervals to consider when matching threshold crossing events, the values of a, b, c and d in the table below can be filled in (where e.g. a is the number of matched observed and forecast threshold crossing events).
Once the contingency table has been filled, different skill scores can be established:
Probability of Detection
Critical Reliability (checks if a forecast is available at least some time (but not too late) before the observed threshold crossing)
The First Forecast of Threshold (FFOT) is determined as the average time between the T0 of the forecast run in which a threshold crossing was detected and the time of the observed threshold crossing (i.e. the average lead time of the category a threshold crossing events in the contingency table).
The Bias of paired threshold crossings is the average time between paired observed and forecast events.
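For reference, the commonly used textbook definitions of the frequency-based scores mentioned in this section (probability of detection, false alarm rate and critical success factor) in terms of the contingency table entries (a = hits, b = false alarms, c = misses, d = correct negatives) are given below; the exact formulations applied by the display should be verified against the skill scores documentation.

$$\mathrm{POD} = \frac{a}{a+c}, \qquad \mathrm{FAR} = \frac{b}{a+b}, \qquad \mathrm{CSI} = \frac{a}{a+b+c}$$

Here POD is the probability of detection, FAR the false alarm ratio (often reported as false alarm rate) and CSI the critical success index (critical success factor).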
To specify what matching threshold crossings mean, a number of criteria can be used. When clicking the [Change Criteria] button the following
display appears.
Min/Max T0 difference. These criteria define the time interval in which the T0 of a forecast that has predicted a threshold crossing should fall, in order for this threshold crossing event to be included in the analysis.
Max dispatch time difference. This criterion is used to compute the Critical Reliability (CR), which determines whether a forecast was available at the time of an observed crossing irrespective of its quality. By setting this criterion the maximum allowed time difference between the dispatch time of a forecast and the time of an observed crossing can be defined.
Max time threshold being too early/late. These criteria define the maximum difference that is allowed between a forecast threshold crossing and an observed threshold crossing in order to consider them as matching.
Matching Events
In the Matching Events tab different background colours are used to indicate in which sector of the Contingency Table events fall.
green: observed and forecast threshold crossing event match (sector a of contingency table)
yellow: a threshold crossing has been observed but not forecasted (sector c)
orange: a threshold crossing has been forecasted but not observed (sector b)
In general, skill scores are determined for all forecast locations that are at a gauging station and have level thresholds. Typically these are determined separately, but the structure of the display allows skill scores to be established for different models. This way the skill scores of the different models can be compared.
The performance indicators that are computed on the basis of the selection in this tab are:
probability of detection
false alarm rate
critical success factor
first forecast of threshold
bias of paired thresholds
The filter boxes at the left hand side of the skill scores display enable the selection of certain locations or certain threshold levels to be shown. By activating the check box below the criteria, it is possible to display up-crossings of thresholds only.
In the Forecast Available for Events tab different background colours are used to indicate in which sector of the Contingency Table events fall.
green: a forecast is available for an observed threshold crossing event (sector e of contingency table)
yellow: no forecast available for an observed threshold crossing event (sector f)
The performance indicator that is computed on the basis of the selection in this tab is critical reliability.
Thresholds List
The Threshold List provides an overview of all threshold crossing events available for the selected locations. In the analyses subsets can be
made by setting the criteria discussed in the earlier part of this section.
Archiving Events
As the threshold crossing events are stored in the operational database for only a limited number of days (according to the length of the Rolling Barrel), it is possible to manually export the list of threshold crossing events by pressing the [Export] button in the skill scores display. This list can later be imported in a Stand Alone system by pressing the [Import] button. In this way, longer records can be established for analysis in an off-line mode.
Instead of exporting the whole list of threshold crossing events, as described above, it is also possible to select some of them and export only the selected ones by using the [Save] button.
All threshold crossing events can also be archived automatically by the system. See also Archive Display.
Modifiers can be entered and managed through the Time Series Modifier Display.
Modifiers are defined through the modifier display, which can be opened either through the FEWS explorer menu, or through the toolbar on the
main window (when configured).
To select the time series to which the modifier is to be applied, identify and select the display group in which the time series is contained in the
tree view on the left.
Once a display has been selected, the locations panel will show all locations available in the display group selected. Select the desired location.
Once a location has been selected, the parameters panel will show all parameters available at the location selected. Select the desired parameter.
Following selection of the parameters the selection will be visible in the Create New tab of the Modifier information panel.
To create a new modifier, select the time series (location & parameter) to which the modifier is to be defined.
Open the Create New tab in the Modifier information panel.
Select the name of the modifier. On selection the name defaults to a combination of the location Id and the parameter Id.
Enter an appropriate description for the modifier. This is optional and can be displayed later when managing existing modifiers.
Define the start time and the end time of the data to change. By default the times defined are equal to the time series displayed in the main
window.
Define the start time and the end time of the validity period of the modifier. By default the times defined are equal to the time series displayed in
the main window.
Select the modifier operation and enter the value with which the data is to be changed. A value does not need to be added if the option to Edit the
data or to replace it with a missing value is selected. If the option to Edit the data has been selected, use the interactive edit facility to define the
changes.
Modifier Operations
A modifier is applied to the original data for a defined time span. This is defined as a start time and an end time. These times relate to the data in
the time series, and not to the time on the computer.
The start and end time can be changed by entering the date and time values in the appropriate input boxes. These can also be changed by
clicking on the display. Through the right mouse button a menu is available that allows selection of either the start time or the end time.
Start and end times selected are displayed as vertical lines in the display.
By default the modifier is valid for the same period as the data to which it applies. However, a modifier can be set to be valid during a different period than the data to which it applies. A modifier will only be applied to the data if the Forecast T0 of the workflow falls within the valid period.
The period of validity can be entered as a start and end time in the appropriate text fields.
The time series modifier display allows the data to be edited interactively. Once the Edit operation has been selected, data can be manually
edited in two ways.
To enter data by clicking on graph, choose the Enable chart editing option. Use the mouse to draw the changed data on the display.
To enter data by entering data in a table, select the Enable table editing option. Enter the changes through the table. The column that contains the
location & parameter selected is shown with the values against a white background. All other columns remain with a grey background.
Once editing the data is complete, select the Create button to add the new modifier.
Managing modifiers
Through the Modifiers tab the modifiers defined can be managed. Following definition, the modifiers will have a temporary status. In the Modifiers
tab these are displayed with a blue square. Temporary modifiers are not available for use until these have been confirmed using the Apply button.
Modifiers that are available for use are preceded with a green square.
The remaining columns of this tab show the relevant details of the modifier. Each modifier can be enabled or disabled using the Active option.
When hovering over the information icon a pop-up is displayed with the description entered previously. If no description has been entered this
icon is greyed.
A modifier can be deleted through the red cross in the last column.
Note that if there are modifiers that have only a temporary status, the name of the tab will be marked in blue. Once these have all been Applied
the colour of the text will revert to black.
Uploading modifiers
Modifiers defined on the local client will only be available on that client until these have been uploaded to the central database. Through the
Upload Modifiers tab the modifiers which are considered to be used in forecast and historical runs on the central server system can be selected
and uploaded.
Once modifiers have been saved locally using the Apply button, these are available for uploading. This tab displays only those modifiers that have not yet been uploaded. The modifiers to be uploaded can be selected using the Upload option, which sends these to the central database for use throughout the system. Following upload the modifiers will be removed from the list displayed in this tab.
Constraints
Multiple modifiers can be applied to the same location & parameter. However, these will not be applied cumulatively if there is an overlap. In case
of an overlap the last modifier defined that is active will be applied.
Once a modifier has been defined it cannot be changed. At any time the modifier can be enabled and disabled, or deleted.
Making this display available only requires the relevant class to be called from the Explorer. It is added as an explorer Task.
<iconFile/>
<mnemonic>X</mnemonic>
<taskClass>[Link]</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>
<accelerator>ctrl X</accelerator>
<loadAtStartup>true</loadAtStartup>
]]>
Note that the use of Icons and Mnemonics, as well as the name of the display, will depend on the configuration.
There are methods available that allow for the updating of states algorithmically, including for example Kalman Filters, empirical state correction
methods etc. Another approach is the direct updating of state variables through manual intervention.
The approach taken is that state variables are considered as time series of variables and FEWS handles these as it does any other time series.
The evolution of state variables can then be easily plotted against time as can for example the time series of resulting discharge. When a state
variable needs to be amended, the changed values are saved as non-equidistant values at the time of the change. When running the model these
values are imported into the model and at the time indicated used to overrule the value calculated internally.
To be able to use the state modifiers functionality, the adapter to the model for which the states are to be modified must have the ability to take in
the time series of amended values and overrule those used in the internal state. Additionally the model and its adapter should be able to export
the values of calculated state variables as a time series. The figure below illustrates the exchange of time series of model inputs & state values to
the external model, as well as the return of model outputs and state values.
State Editor Display
The State editor display supports the user in amending time series of state variables. The amended data are then passed to the model through
the model adapter to allow insertion into the state and subsequent change of response on running the model.
The state editor display is in principle independent of the DELFT-FEWS system, and has been developed as a web service client. Exchange of
data with the DELFT-FEWS database makes use of a web service data exchange protocol. To the normal user, however, the display appears as
an integrated part of the system.
Selecting the model for which the state is to be edited
To select the model for which the state is to be edited, identify the group which contains the desired model, open the tree if required by
double-clicking on the folder icon and select the desired model.
Once a model has been selected, the available state times for that model will be displayed, and the values of the slider controls and time series
will be updated to show the values in the time series of state variables at the time of the state selected.
Before entering the values for the new state, the time at which these are to be defined needs to be set. There are three options to set the time: by selecting a time at which there is a state for that model in the database, by entering a time in the available input field Enter new state date/time, or by selecting the appropriate time on the graph.
Select the variable to be set. If this is not displayed, use the arrow keys on the bottom right to navigate to the desired variable. If there are more
than six variables, keep scrolling to the right to reveal those that are not displayed.
The variable can be amended using the slider control. A small triangle on the display will indicate the new value at the selected time.
Once the values of selected variables have been set, the state can be saved by selecting the Save button. Note that this will save a complete set of state variables. In other words, if there are six state variables and only one has been amended, then the saved set will contain all six variables: the amended value, as well as the other five with their values at the selected time.
Current Constraints
Once a set of state variables has been defined these cannot be deleted.
The values of the state variables displayed are only those from the historical run. Values from the forecast
The configuration of the web service allows the setting up of objects to be exchanged by assigning an external ID, to be used by the PI-client, to an internal object.
General
Several definitions are given in the general block. The exchange of data through the web service is similar to that through the General Adapter.
importDir
If the option to exchange data using an intermediate persistent layer is selected (see option writeToFile) then this specifies the directory the
service imports data from.
importIdMap
Specifies the Id Map to translate external identifiers for locations and parameters to internal identifiers for locations and parameters on import.
importUnitConversionId
Specifies the unit conversion table to translate external units to internal units on import.
exportDir
If the option to exchange data using an intermediate persistent layer is selected (see option writeToFile) then this specifies the directory the service exports data to.
exportIdMap
Specifies the Id Map to translate external identifiers for locations and parameters to internal identifiers for locations and parameters on export.
exportUnitConversionId
Specifies the unit conversion table to translate external units to internal units on export.
writeToFile
Boolean option to exchange data through a persistent layer or to only exchange data in memory. When set to true the persistent layer is used.
timeZone
Defines the time zone to which data is to be exported to and from which it should be imported.
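Purely as an illustration of how the elements above fit together, a sketch of a general block is given below. The directory paths and the id map / unit conversion identifiers are placeholders, the contents of the timeZone element are not shown here, and the required element order is dictated by the schema.
<general>
  <importDir>piservice/import</importDir>
  <importIdMap>IdImportPiService</importIdMap>
  <importUnitConversionId>ImportUnits</importUnitConversionId>
  <exportDir>piservice/export</exportDir>
  <exportIdMap>IdExportPiService</exportIdMap>
  <exportUnitConversionId>ExportUnits</exportUnitConversionId>
  <!-- only exchange data in memory; set to true to use the persistent layer -->
  <writeToFile>false</writeToFile>
  <timeZone>
    <!-- time zone definition for imported and exported data -->
  </timeZone>
</general>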
timeSeries
description
id
exportBinFile
Boolean option. When set to true the data is exported as a bin file (or byte stream) while the headers use the PI-XML format. When false all data
and headers are included in XML format.
timeSeriesSet
Definition of the FEWS time series set. Note that the use of locationSets etc. is supported, meaning that any object may contain multiple time series.
omitMissingValues
Boolean option. When true the exported arrays will not include missing values. When false (default) missing values will be included using the
specified missing value indicator.
missingValues
Identifier for missing values. If not defined missing values will be defined as NaN.
convertDatum
Boolean option. When true the values in the exported arrays will be corrected with the location datum to provide the data relative to the global datum. This applies only to parameter groups that use a datum.
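A hedged sketch of a timeSeries object combining the elements described above is given below. The identifiers and the contents of the timeSeriesSet are placeholders (the timeSeriesSet follows the usual FEWS structure, as in the example at the end of this document); optional elements may be omitted.
<timeSeries>
  <id>ObservedLevels</id>
  <exportBinFile>false</exportBinFile>
  <timeSeriesSet>
    <!-- a locationSet is used, so this object contains multiple time series -->
    <moduleInstanceId>ImportObserved</moduleInstanceId>
    <valueType>scalar</valueType>
    <parameterId>H.obs</parameterId>
    <locationSetId>GaugingStations</locationSetId>
    <timeSeriesType>external historical</timeSeriesType>
    <timeStep unit="hour"/>
    <relativeViewPeriod unit="hour" start="-48" end="0"/>
    <readWriteMode>read only</readWriteMode>
  </timeSeriesSet>
  <omitMissingValues>false</omitMissingValues>
  <missingValues>-999.0</missingValues>
  <convertDatum>false</convertDatum>
</timeSeries>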
moduleDataSet
Definition of module dataset objects available for exchange. Note that these are exchanged as zip files (streamed).
id
moduleInstanceId
Identifier of the module dataset in the FEWS database using its moduleInstanceId.
parameterSet
Definition of module parameter set objects available for exchange. Elements to be defined are the same as in the moduleDataSet.
moduleState
Definition of module state set objects available for exchange. Elements to be defined are the same as in the moduleDataSet.
general
displayName
timeZone
modelGroups
modelGroup
Definition of a group of models. This will appear as a folder in the tree view. The attributes are;
id Identifier for the group for later reference
name Name of the group to be displayed in the tree view
modelId
modelGroupId
list of identifiers of model groups included in this group (one or more). This allows recursive definition so that a complete tree can be formed.
Obviously circular references are not supported.
model
definition of the models for which state variables/parameters are available for editing.
id Identifier for the model for later reference
name Name of the model to be displayed in the tree view (id is used if not defined).
locationId
Location identifier associated to this model. This is used for matching time series to this model.
stateParameterGroupId
Identifier of the group of parameters considered in this model state. The parameters and their properties are defined in the stateParameterGroup
element.
resultSeriesGroupId
Identifier of the result time series. The locationId is used in matching a model to a time series.
stateParameterGroup
Definition of the groups of parameters. For each model type a group needs to be defined. Different calibrations of the same model may allow different sets of parameters to be edited.
id Identifier for the parameter group for later reference
stateParameterId
List of identifiers of the parameters included in the group. The properties of the parameters are defined in the stateParameter section.
seriesGroup
Definition of a time series to be shown in the lower plot of the display. This time series should be associated to the model response to give
guidance on the changes made to the state parameters.
id Identifier for the group of time series. This is for reference
name Name of the group of time series.
series
Definition of a time series. This definition is related to the definition of time series in the PI-Web Service. The identifiers used in this series group must relate to identifiers of time series objects in the definition of the PI-Service.
id Identifier for the time series object as defined in the PI-Web Service configuration.
locationId Identifier for the location of the time series. If this is a group of time series then the keyword $ANY$ should be used. The locationId associated to a model selection will then be filled in at run-time.
parameterId Identifier of the parameter of the time series. This is the same parameter as used as externalParameter in the configuration of the idMapping.
qualifier
Qualifier for the time series. This may be min, max or mean. Used only for displaying parameter climatology.
stateParameter
Definition of the parameters in a state. These can then be referenced in a stateParameterGroup.
id Identifier for the parameter for later reference
name Name of the parameter to be displayed (id is used if not defined).
range
inputSeries
Input data series for displaying as the time series of this parameter. This should relate to the time series for the current parameter. There is no
explicit check if this is the case. See definition of series
outputSeries
Output data series to write the amended values of the state to. This should relate to the time series for the current parameter, and typically is a
non equidistant time series. There is no explicit check if this is the case. See definition of series.
climatology
Definition of the climatology as a time series within this configuration. This is defined as eventData, with a given max, min and mean.
climatologySeries
Definition of the climatology as a time series obtained from the FEWS database (preferred option). A series can be defined for the min, max and mean at each location. The qualifier can be used to assign a series to each of these.
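To show how the model groups, models, parameter groups and parameters described above tie together, a condensed sketch is given below. All identifiers and names are placeholders; whether items such as the series references are expressed as attributes or as nested elements should be verified against the relevant schema.
<modelGroups>
  <modelGroup id="NorthBasin" name="North Basin">
    <modelId>SACSMA_North</modelId>
  </modelGroup>
</modelGroups>
<model id="SACSMA_North" name="North sub-basin">
  <locationId>NORTH1</locationId>
  <stateParameterGroupId>SACSMA_States</stateParameterGroupId>
  <resultSeriesGroupId>SimulatedFlow</resultSeriesGroupId>
</model>
<stateParameterGroup id="SACSMA_States">
  <stateParameterId>UZTWC</stateParameterId>
  <stateParameterId>LZTWC</stateParameterId>
</stateParameterGroup>
<stateParameter id="UZTWC" name="Upper zone tension water content">
  <!-- inputSeries/outputSeries should refer to time series of this parameter; see definition of series -->
  <inputSeries>
    <series id="UZTWC_state" locationId="$ANY$" parameterId="UZTWC"/>
  </inputSeries>
  <outputSeries>
    <series id="UZTWC_modified" locationId="$ANY$" parameterId="UZTWC.mod"/>
  </outputSeries>
</stateParameter>
<seriesGroup id="SimulatedFlow" name="Simulated flow">
  <series id="SimulatedDischarge" locationId="$ANY$" parameterId="Q.sim"/>
</seriesGroup>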
<iconFile>""</iconFile>
<mnemonic>C</mnemonic>
<arguments>SACSMA_StateModifiers %HOSTNAME% %PORT%</arguments>
<taskClass>[Link]</taskClass>
<toolbarTask>false</toolbarTask>
<menubarTask>true</menubarTask>
]]>
Once the set of modifiers have been defined, and the results from that sub-basin are deemed acceptable, the forecaster can then move on to the
next task downstream.
The display shows a logical layout of the sub-tasks. This is defined in configuration. A downstream sub-task may be dependent on an upstream workflow, and this can be indicated graphically. When such a dependency is defined, the display will only allow the running of that sub-task if the task on which it depends has been run and completed first.
A simple colour scheme is used to show the status of each of the sub-tasks;
• Green - indicates the sub-task is up to date.
• Yellow - indicates the sub-task is not up to date. However, it has no dependencies on other tasks, or all tasks on which it depends are up to date. The task is therefore available to be run.
• Red - indicates the sub-task is not up to date and the tasks on which it depends are not yet up to date. The task cannot yet be run.
• Purple - indicates the task is running.
The display shows the layout of the sub-tasks in the main panel, as well as a tree view on the left. Multiple groups may be defined. When a group is selected in the tree view on the left, the sub-tasks associated with that group will be displayed in the main panel.
The properties of the tasks run interactively through the display can be changed from the default properties using the Properties button. An
information panel below the tree view indicates the properties that have been set.
A group of sub-tasks, usually associated with a basin, can be selected from the tree view on the left. The sub-tasks displayed in the main panel will be updated according to the group selected.
Running sub-tasks
To run a sub-task, select the task block it is associated to in the main panel. A dotted outline of the task block highlights that it has been selected.
To run the task selected, select the Run option. This is only possible if the task block is green or yellow.
To run all preceding tasks, select the Run to Selected option. This will run all tasks in the defined sequence that have not yet been run.
To run all preceding tasks, including re-running those tasks already up to date, select the Run all to Selected option. This will run all tasks in the defined sequence, including those that are already up to date.
Once the iteration of tasks has been completed, the workflow associated with the group selected can be submitted for running on the server. In
effect this is the same as submitting that task through the Manual Forecast Dialogue. This workflow will, however, use the properties as defined
for the sub-tasks when running these locally.
To view the results of a sub task, the option Show Results... can be selected. This opens the time series display, and opens the pre-configured
display that has been configured to be associated with that sub-task.
The properties of the task runs can be amended from the default settings by selecting the Properties option.
Through this display the properties of the run can be specified. This display is similar to the options available in the Manual Forecast Dialogue.
The run can be set to use a cold state start by selecting the Cold State option and setting the number of days prior to the Forecast T0 that the run should start.
The run can be set to use a warm state by selecting the state search period. This can be set relative to the Forecast T0 by using the Warm State
option. Alternatively the historical run to start from can be selected in the list of dates available in the Warm State Selection option.
The forecast length can be specified to deviate from the default by selecting the option Forecast Length and entering the appropriate number of
days.
Once the options have been defined, these will be used for all tasks run within the active session of the interactive forecast display.
Configuration of the Interactive Forecast Display
The interactive forecast display is an implementation of the TaskRun dialogue.
This is the configuration of a display and can be found in the displayConfigFiles section of the configuration.
The configuration of the display requires a layout of the flow chart to be defined, as well as the workflows associated to each group of tasks, and
sub-tasks.
taskRunDialog
taskGroup
For each group of tasks this element is defined. This has two attributes;
Name the name of the group as displayed in the tree view on the left of the display.
workflowId the Id of the workflow which can be submitted to the central server. This workflow should run all the tasks in the group. Note that
there is no check if this is indeed the case.
flowChart
This element is used to define the overall layout of the flow chart shown on the main panel. The element has no attributes.
scale
Scale can be used to increase and decrease the size of the flow chart, without the need to change each individual item.
taskSize
This defines the size of each task block. The values entered here are multiplied by the scale to find the actual size in pixels. There are two attributes;
width width of each task block - the full width is found by multiplying the width defined by the scale.
height height of each task block - the full height is found by multiplying the height defined by the scale.
simpleTask
Not used in the configuration of the Interactive Forecast display. This element should not be included.
operatorTask
Definition of each of the sub-tasks that can be run from the display. The attributes that are relevant to the configuration of this display are;
name Name of the task as it should appear on the display
workflowId Id of the workflow to run under this task
explorerLocationId Id of the main location associated with the sub-task. This is used to open the associated display to view results.
runLocally Option whether this task is to run locally or centrally (default: true)
center
Defines the centre of the associated task block. These coordinates are in the same scale as the taskSize. The actual pixel position from the top left of the display can be calculated by multiplying with the scale value. The attributes are;
x location of the centre of the task block from the left of the panel
y location of the centre of the task block from the top of the panel
dependency
Defines the tasks on which this task depends (i.e. its predecessors). Zero or more dependencies can be defined. The attributes are;
taskName name of the task that should be run before this one
vertex
If the task being defined and a dependent task cannot be connected by a straight line, a vertex point can be defined. The connecting line will pass through that point. The actual pixel location from the top left of the display can be calculated by multiplying with the scale value. The attributes are;
x x-coordinate of the vertex from the left of the panel.
y y-coordinate of the vertex from the top of the panel
fewsPanel
Not used in the configuration of the Interactive forecast display. This element should not be included.
archiveTask
Not used in the configuration of the Interactive forecast display. This element should not be included.
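A condensed sketch combining the elements described above is given below. The workflow ids, task names and coordinates are placeholders, and the assumption that the flowChart element is nested inside the taskGroup element, as well as the exact attribute/element forms, should be verified against the task run dialog schema.
<taskRunDialog>
  <taskGroup name="North Basin" workflowId="NorthBasin_Forecast">
    <flowChart>
      <scale>10</scale>
      <taskSize width="12" height="6"/>
      <operatorTask name="Import data" workflowId="Import_North" explorerLocationId="NORTH1" runLocally="true">
        <center x="10" y="5"/>
      </operatorTask>
      <operatorTask name="Run model" workflowId="Run_North_Model" explorerLocationId="NORTH1">
        <center x="10" y="15"/>
        <!-- Run model may only be run after Import data has completed -->
        <dependency taskName="Import data"/>
        <vertex x="10" y="10"/>
      </operatorTask>
    </flowChart>
  </taskGroup>
</taskRunDialog>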
DisplayDescriptors
<description>TaskRunDialog</description>
<className>[Link]</className>
]]>
DisplayInstanceDescriptors
<description>TaskRunDialog</description>
<displayId>TaskRunDialog</displayId>
]]>
[Link]
<iconFile/>
<mnemonic>I</mnemonic>
<taskClass>[Link]</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>
<accelerator>ctrl I</accelerator>
]]>
Example of a configuration
11 Threshold Display
Threshold Overview display
The threshold display is a display plug-in that allows the user to see at a glance which locations have forecasted threshold crossings, a summary
of alarms and more detailed information about specific site forecasts. For a description of the functionality available - please see the user guide.
The Id of the threshold display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. Threshold overview display is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
general
This section gives the title of the dialog and allows the user to filter for workflows which are to be displayed, e.g. Coastal_Forecast.
displayDescriptor
This section of the configuration forms the main part of the configuration. Each "tab" has different functionality and shows a different aspect of the data. Tab 1 shows multiple hour threshold crossing aggregates (i.e. highest crossing over 4 hours). Tab 2 shows highest alarms on an hourly basis. Tab 3 gives a text summary of alarms and the optional tab 4 can be used to show additional site data.
inputVariable
columnAttributes
This section of the configuration relates specifically to the configuration of the site data in tab 4.
The Id of the task run dialog display is identified in the DisplayInstanceDescriptors. When available on the file system, the name of the XML file for
configuring the display with an Id of e.g. task run dialogue display is for example:
default Flag to indicate the version is the default configuration (otherwise omitted).
Figure 1: Root elements of the task run dialog overview display configuration.
flow chart
simple task
operator task
archive task
13 Manual Forecast Display
Function: Configure the Manual Forecast Display
Why to Use? To change the default behaviour and appearance of the manual forecast dialog
Description: The Manual Forecast Dialog can be changed to alter its default behaviour. Instructions on how to do this are given here
Preconditions: Method for starting up the display. Usually an entry in the explorer tasks section of the Explorer configuration.
Remark(s): This module should be registered in the DisplayInstanceDescriptors and the DisplayDescriptors files
Contents
Contents
Configuration
Related modules and documentation
Technical reference
Configuration
<coldState>
<startDate unit="hour" multiplier="72"/>
</coldState>
<warmState>
<stateSearchPeriod unit="hour" start="-72" end="1"/>
</warmState>
<forecastLength unit="day" multiplier="4"/>
<task workflowId="ImportExternal" forecastLengthSelection="false" stateSelection="false"/>
<task workflowId="Fluvial_Forecast">
<coldState>
<startDate unit="hour" multiplier="48"/>
</coldState>
<warmState>
<stateSearchPeriod unit="hour" start="48" end="24">
<description>Please use this feature to find old warm states</description>
</stateSearchPeriod>
</warmState>
<forecastLength unit="hour" multiplier="24"/>
</task>
See
Technical reference
Description: The Manual Forecast Dialog includes a facility to run a predefined list of forecasts. This list can be defined in an XML file.
This facility is off by default but can be switched on using the instructions given here.
Preconditions: Stand Alone system, should not be used in client-server mode. Method for starting up the display. Usually an entry in the
explorer tasks section of the Explorer configuration.
Outcome(s): A button that can be used to load a list of forecasts will be added to the manual forecast display
Screendump(s):
The XML file for the Manual Forecast Dialog can also contain other configuration options which are described elsewhere.
Contents
Contents
Overview
Configuration
Error and warning messages
Known issues
Related modules and documentation
Technical reference
Overview
Apart from the batch tab in the manual forecast display it is also possible to run batches of forecasts that have been predefined in an XML file. To be able to use these the display must be configured to add a button to load these files. Configuration consists of the following steps:
Add the display to the DisplayDescriptors file if this has not already been done (see entry below)
Make an XML config file for the display in the DisplayConfigFiles directory named ManualForecastDisplay 1.00 [Link]. Set the buttonVisible element to true (see example below).
Add an entry for this file to the DisplayInstanceDescriptors file
After restarting the system the Macro button should be visible (see screendump)
You can now load an XML file with a list of forecasts to perform (example).
Configuration
directory: Default directory the file dialog uses. This is where you would store your XML files with pre-defined forecasts.
buttonVisible: set to true to enable the button. Default is false.
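A minimal sketch of the contents of this configuration file, assuming the two elements above are direct children of the root element (root element name and namespace omitted here, and the directory path is a placeholder):
<directory>d:/fews/macro_forecasts</directory>
<buttonVisible>true</buttonVisible>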
Error and warning messages
Known issues
No Known issues.
Technical reference
14 ChartLayer
The schematic status display (formerly named Scada display) in Delft-FEWS is used for displaying and monitoring data. The schematic status
display shows one or more configurable schematic views that represent data values in some way. For example, to show the most recent data
value of a given time series, it is possible to show just the numerical value, or to show a rectangle that varies in height depending on the data
value, or to show a polygon that changes in colour when the data value crosses a certain threshold, etc. How data is represented and which data
is shown can be configured in the schematic status display configuration file. The schematic status display is dynamically updated whenever new
data becomes available in the system or whenever data is changed (e.g. in the time series editor). The schematic status display is comparable to
the main map display, only the schematic status display does not and cannot refer to any geographical coordinate system. Furthermore the
schematic status display can be used to show text and figures as well as objects that represent data values.
Please note that the schematic status display in Delft-FEWS is only used for displaying data, it does not implement all features
that could be expected from a SCADA (Supervisory Control And Data Acquisition) system.
Before 2011_02, multipliers were sometimes used to do unit conversion. As of 2011_02 the standard unit conversion from Delft-FEWS will be applied to the data shown in the Schematic Status Display. When migrating systems from before 2011_02, it may be necessary to verify that the unit conversion is not applied twice.
Note that when using transformations in the Schematic Status Display, only outputVariables of the transformation output with timeSeriesType temporary are supported. Although other transformation functions may work, only the UserSimple function is supported.
Contents
Overview
Contents
Configuration
Scada Display Configuration Options
displayName
showTimeNavigatorToolbar
timeNavigatorTimeStep
dateFormat
numberFormat
variable
scadaPanel
Time Navigator Toolbar Configuration Options
timeNavigatorRelativePeriod
movieFrameDurationMillis
Scada Panel Configuration Options
id
name
svgFile
nodeId
textComponentBehaviourDefinition
shapeComponentBehaviourDefinition
svgObjectId
leftSingleClickAction
leftDoubleClickAction
linkPropertiesToData
useThresholdWarningLevelColors
toolTip
replaceTags
Left Single Click Action and Left Double Click Action Configuration Options
switchToScadaPanel
scadaPanelId
openDisplay
timeSeriesDisplay
timeSeriesEditor
title
variable
runWorkflow
workflowId
Link Properties To Data Configuration Options
Link Height To Data Configuration Options
variable
dataLowerLimit
dataUpperLimit
heightLowerLimit
heightUpperLimit
anchorPoint
Link Rotation To Data Configuration Options
variable
dataLowerLimit
dataUpperLimit
rotationLowerLimit
rotationUpperLimit
anchorPointX
anchorPointY
Use Threshold Warning Level Colors Configuration Options
variable
thresholdGroupId
thresholdReference
colorType
Tooltip Configuration Options
variable
toolTipText
Replace Tags Configuration Options
variable
Variable Configuration Options
variableId
locationId
overrulingRelativeViewPeriod
timeSeriesSet
Transformations within ScadaDisplay
Sample configuration for transformations within the ScadaDisplay
Known Issues
Tips And Tricks
SVG specification
Embedding image files into SVG files
Controlling the resizing behaviour of an svg document within the scada display
Determining the rotation anchor point for an SVG object in user space coordinates
Aligning text within svg text objects
Export maps from ArcGis as svg files
Reduce the size of svg files
Configuration
The schematic status display shows one or more status panels, which can be selected in turn from the list on the left hand side. It is also possible
to have multiple schematic status displays, each one with different panels. In that case there would be one configuration file for each different
schematic status display, each one with a different filename. The filename of each schematic status display should be registered in the
[Link] configuration file. When available on the file system, the name of the xml file for configuring a schematic status
display is for example "[Link]". To register a schematic status display in the DisplayInstanceDescriptors configuration file use
e.g. the following xml code:
]]>
Furthermore the displayId that is used in the [Link] file should be defined in the [Link] configuration file.
This can be done with e.g. the following xml code:
<className>[Link]</className>
]]>
To be able to open a schematic status display from the user interface, there should be an explorer task for it in the [Link] configuration file.
The xml code for a schematic status display explorer task is for example:
<arguments>StatusTwentekanalen</arguments>
<taskClass>[Link]</taskClass>
<toolbarTask>true</toolbarTask>
<menubarTask>true</menubarTask>
]]>
Twentekanalen_10min.svg Example of an svg file, which is used in the [Link] example configuration file
Below is an overview of the options that are available in the schematic status display xml schema. All configuration options are also documented
in the annotations in the schematic status display xml schema. To get the most up to date information about the available configuration options
and their documentation in the annotations, please consult the schematic status display xml schema, which is available here.
displayName
showTimeNavigatorToolbar
Option to show a time navigator toolbar at the top of this schematic status display. The time navigator toolbar can be used to select the display
time for this schematic status display. It is only possible to select a display time that is contained within the configured relative period and is a valid
time according to the configured time step. This period is always relative to the current system time. If the current system time changes, then the
display time is reset to the current system time. If this option is not specified, then the time navigator toolbar is not shown.
timeNavigatorTimeStep
By default the time navigator slider moves by the cardinal timestep, which is configured in the [Link] configuration file, see FEWS Explorer Configuration. This optional element can be used to specify a different timestep for the time navigator than the cardinal timestep.
dateFormat
Definitions of dateFormats that can be used for formatting dates and times in tags in texts of svg objects.
numberFormat
Definitions of numberFormats that can be used for formatting numbers in tags in texts of svg objects.
variable
Definitions of variables that can be used as input and/or output for the components in the scada display. A variable is always a time series.
Alternatively variable definitions can be embedded in the configuration below.
scadaPanel
One or more definitions of schematic status panels. In the user interface each schematic status panel will be available from the list in this
schematic status display.
timeNavigatorRelativePeriod
This is the period of the time navigator toolbar (slider) in this schematic status display. The time navigator toolbar can be used to select the
display time for this schematic status display. It is only possible to select a display time that is contained within this period and is a valid time
according to the cardinal time step (which is configured in the [Link] configuration file, see FEWS Explorer Configuration). This period is
always relative to the current system time. If the current system time changes, then the display time is reset to the current system time. The start
and end of the period are both included.
movieFrameDurationMillis
The duration of a frame when the time navigator is animating. This is the number of milliseconds a frame/time step is visible before the next time
step becomes visible. If this option is not specified, then 200 milliseconds is used by default. When the CPU is too slow to display the specified
frame rate, a frame will be displayed longer than specified.
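As an illustration, the time navigator options described above could be configured as in the sketch below. The attribute style of the relative period follows that of the relativeViewPeriod element used elsewhere in this guide, the nesting of the two options under showTimeNavigatorToolbar is an assumption, and all values are placeholders.
<showTimeNavigatorToolbar>
  <!-- slider covers the last 10 days up to the current system time -->
  <timeNavigatorRelativePeriod unit="day" start="-10" end="0"/>
  <movieFrameDurationMillis>500</movieFrameDurationMillis>
</showTimeNavigatorToolbar>
<timeNavigatorTimeStep unit="hour" multiplier="1"/>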
Scada Panel Configuration Options
id
name
The name of this schematic status panel as it is displayed in the user interface. If not specified, then id is used as name.
svgFile
The name of an svg (Scalable Vector Graphics) file in the ReportImageFiles directory. This schematic status panel shows all svg objects that are
defined in the specified svg file. The svg objects in the svg file can be given special behaviour and/or properties using the configuration below.
See [Link] for the SVG 1.1 specification.
nodeId
Optional. Identifier that refers to a node in the topology configuration file. If specified, then the referenced topology node will be selected when this
scadaPanel is selected in the user interface. When the topology node is selected, then that may cause other things to be selected as well, like
e.g. the displayGroup in the TimeSeriesDisplay that corresponds to that node.
textComponentBehaviourDefinition
One or more items to define special behaviour and/or properties for components in this schematic status panel. Each item refers to an svg object
that is defined in the given svg file. Each item also contains definitions of behaviour and/or properties for that object. This way it is possible to e.g.
replace tags in the text of a text object with certain values from a certain time series, or to define what should happen when the user clicks on a
certain component.
Definition of special behaviour and/or properties for a text component in this schematic status panel. This refers to an svg object of type "text" that
is defined in the given svg file. This contains definitions of behaviour and/or properties for that svg object. An svg object of type "text" can be a
"text", "tspan", "tref", "textPath" or "altGlyph" element.
shapeComponentBehaviourDefinition
One or more items to define special behaviour and/or properties for components in this schematic status panel. Each item refers to an svg object
that is defined in the given svg file. Each item also contains definitions of behaviour and/or properties for that object. This way it is possible to e.g.
replace tags in the text of a text object with certain values from a certain time series, or to define what should happen when the user clicks on a
certain component.
Definition of special behaviour and/or properties for a shape component in this schematic status panel. This refers to an svg object of type
"shape" that is defined in the given svg file. This contains definitions of behaviour and/or properties for that svg object. An svg object of type
"shape" can be a "path", "rect", "circle", "ellipse", "line", "polyline" or "polygon" element.
svgObjectId
The id of the object in the svg file for which this item defines special behaviour and/or properties.
leftSingleClickAction
Action that is triggered when the user clicks once on this object with the left mouse button.
leftDoubleClickAction
Action that is triggered when the user double clicks on this object with the left mouse button.
linkPropertiesToData
Optional. Contains options to link properties of this component to actual data values. For example the height of the component can be changed
depending on the data values of a specified variable.
useThresholdWarningLevelColors
Optional. If specified, then the data for the specified variable within the specified relative view period is used to determine threshold crossings. For
crossed thresholds, warningLevels are activated. The color of the most severe activated warningLevel is used as the fill and/or stroke color for the
component, as specified.
toolTip
Optional. If specified, then a toolTip with the specified text is displayed for this component.
replaceTags
If specified, then the tags in the text of this component are replaced using data from the specified variable. Tags should be separated by "%"
signs. Text can be e.g. "Last value = %LASTVALUE(numberFormatId)%", which would be replaced by e.g. "Last value = 10.0". The following tags
can be used in the text (numberFormatId/dateFormatId should be replaced by the id of a numberFormat/dateFormat that is defined at the start of
this configuration file):
%MAXVALUE(numberFormatId)% is replaced by the maximum reliable or doubtful value in the time series.
%MINVALUE(numberFormatId)% is replaced by the minimum reliable or doubtful value in the time series.
%LASTVALUE(numberFormatId)% is replaced by the most recent reliable or doubtful value in the time series.
%LASTVALUETIME(dateFormatId)% is replaced by the date and time of the most recent reliable or doubtful value in the time series.
%STARTTIME(dateFormatId)% is replaced by the start date and time of the relative view period of the time series.
%ENDTIME(dateFormatId)% is replaced by the end date and time of the relative view period of the time series.
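For example, a text component whose text in the svg file reads "Last value = %LASTVALUE(levelFormat)% at %LASTVALUETIME(shortDateTime)%" could be configured roughly as sketched below. The svg object id, variable id and format ids are placeholders; levelFormat and shortDateTime would have to be defined as a numberFormat and a dateFormat at the start of the file.
<textComponentBehaviourDefinition>
  <svgObjectId>labelWaterLevel</svgObjectId>
  <replaceTags>
    <variable>
      <variableId>ObservedLevel</variableId>
    </variable>
  </replaceTags>
</textComponentBehaviourDefinition>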
Left Single Click Action and Left Double Click Action Configuration Options
Click action configuration elements
switchToScadaPanel
Within this schematic status display the view will switch to the specified panel.
scadaPanelId
The id of the scadaPanel to switch to. The scadaPanel to switch to must be present in this config file.
openDisplay
timeSeriesDisplay
Open the timeSeriesDisplay using the specified options. The period that is shown in the display is the smallest period that completely includes the
relative view periods of all shown variables.
timeSeriesEditor
Open the timeSeriesEditor using the specified options. The data of the specified variables can be edited in the display. The period that is shown in
the display is the smallest period that completely includes the relative view periods of all shown variables.
title
variable
One or more variables to define the data that is shown in the display.
runWorkflow
workflowId
The workflow descriptor id of the workflow to run. This id should refer to a workflow that is defined in the WorkflowDescriptors configuration file.
The current system time is used as the time zero (T0) for the workflow run.
Link properties to data configuration elements
height
rotation
Optional. If specified, then for this component the height attribute is linked to the data values for the specified variable. This option can only be
used for svg objects of type "rect". If the data value is less than dataLowerLimit, then the height is set to heightLowerLimit. If the data value is
greater than dataUpperLimit, then the height is set to heightUpperLimit. If the data value is between dataLowerLimit and dataUpperLimit, then the
height will be linearly interpolated between heightLowerLimit and heightUpperLimit. If no data is available, then this component is made invisible.
Note: it is required that dataUpperLimit is greater than dataLowerLimit. However it is possible to define heightUpperLimit less than
heightLowerLimit to control the direction of the change of the height.
variable
The data for this variable is used to determine the height for this component.
dataLowerLimit
If the data value is less than or equal to dataLowerLimit, then the height will be equal to heightLowerLimit.
dataUpperLimit
If the data value is greater than or equal to dataUpperLimit, then the height will be equal to heightUpperLimit.
heightLowerLimit
heightUpperLimit
anchorPoint
The anchor point describes which part of the component should remain at the same position when the height is changed. Can be "bottom" or
"top".
Link rotation to data configuration elements
Optional. If specified, then for this component the rotation is linked to the data values for the specified variable. The rotation that is derived from
the data values is always relative to the rotation angle that is specified for this component in the svg file. This option can only be used for svg
objects of type "path", "rect", "circle", "ellipse", "line", "polyline", "polygon" or "text". If the data value is less than dataLowerLimit, then the rotation
is set to rotationLowerLimit. If the data value is greater than dataUpperLimit, then the rotation is set to rotationUpperLimit. If the data value is
between dataLowerLimit and dataUpperLimit, then the rotation will be linearly interpolated between rotationLowerLimit and rotationUpperLimit. If
no data is available, then this component is made invisible. If the data value is flagged as "varying direction" (e.g. varying wind direction), then the
rotation will increase linearly in time (animation). In this case the rotation will increase from rotationLowerLimit to rotationUpperLimit and then start
from rotationLowerLimit again.
Note: it is required that dataUpperLimit is greater than dataLowerLimit. However it is possible to define rotationUpperLimit less than
rotationLowerLimit to control the direction of the rotation. If rotationUpperLimit is greater than rotationLowerLimit, then increasing data values
result in clockwise rotation.
variable
The data for this variable is used to determine the rotation for this component.
dataLowerLimit
If the data value is less than or equal to dataLowerLimit, then the rotation will be equal to rotationLowerLimit.
dataUpperLimit
If the data value is greater than or equal to dataUpperLimit, then the rotation will be equal to rotationUpperLimit.
rotationLowerLimit
The rotation (in degrees) that corresponds to the dataLowerLimit value. This rotation is always relative to the rotation angle that is specified for
this component in the svg file.
rotationUpperLimit
The rotation (in degrees) that corresponds to the dataUpperLimit value. This rotation is always relative to the rotation angle that is specified for
this component in the svg file.
anchorPointX
The x coordinate of the anchor point. The rotation will be around the anchor point. This x coordinate has to be specified in the user space
coordinate system of the svg object for this component in the svg file. The user space coordinate system is determined by all transforms that are
specified in the parent svg objects of the svg object for this component. All transforms that are specified in the svg object itself are not part of the
user space coordinate system and thus should be taken into account in the coordinates that are specified here. E.g. to rotate a "rect" svg object
with attributes width="200" height="200" x="500" y="300" transform="translate(50 0)" around its center, use anchorPoint coordinates (x, y) = (650,
400). See also Determining the rotation anchor point for an SVG object in user space coordinates.
anchorPointY
The y coordinate of the anchor point. The rotation will be around the anchor point. This y coordinate has to be specified in the user space
coordinate system of the svg object for this component in the svg file. The user space coordinate system is determined by all transforms that are
specified in the parent svg objects of the svg object for this component. All transforms that are specified in the svg object itself are not part of the
user space coordinate system and thus should be taken into account in the coordinates that are specified here. E.g. to rotate a "rect" svg object
with attributes width="200" height="200" x="500" y="300" transform="translate(50 0)" around its center, use anchorPoint coordinates (x, y) = (650,
400). See also Determining the rotation anchor point for an SVG object in user space coordinates.
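As a check on the example given above: the anchor point is the untransformed centre of the rect plus the translation specified on the object itself, i.e.

$$x_{anchor} = x + \tfrac{width}{2} + t_x = 500 + 100 + 50 = 650, \qquad y_{anchor} = y + \tfrac{height}{2} + t_y = 300 + 100 + 0 = 400.$$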
Use threshold warning level colors configuration elements
variable
The data for this variable is used to determine threshold crossings. For crossed thresholds, warningLevels are activated. The color of the most
severe activated warningLevel is used as the fill and/or stroke color for the component, as specified below.
thresholdGroupId
Optional. If specified, then only thresholds in the specified thresholdGroup are used in the determination of threshold crossings and warningLevels
for the specified variable. If not specified, then thresholds in all thresholdGroups are used.
thresholdReference
Specify which data is used for determining threshold crossings. Either choose the first or last reliable or doubtful value within the relative view
period, or choose all reliable or doubtful values within the relative view period. Can be "first_value", "last_value" or "relative_view_period".
colorType
Specify which color type (fill and/or stroke) should be changed to use warningLevel colors. Color types that are not specified here are not
changed. Can be "fill", "stroke" or "fill_and_stroke".
Tooltip configuration elements
variable
The data from this variable is used to replace the tags in the specified toolTip text. If for a given tag the required data is not available, then that tag
is replaced by a dash symbol "-". This variable is only required if the specified toolTip text contains tags.
toolTipText
Text that is displayed in the toolTip for this component. This text can contain tags. The tags are replaced using data from the specified variable.
Tags should be separated by "%" signs. Text can be e.g. "Last value = %LASTVALUE(numberFormatId)%", which would be replaced by e.g.
"Last value = 10.0". The following tags can be used in the text (numberFormatId/dateFormatId should be replaced by the id of a
numberFormat/dateFormat that is defined at the start of this configuration file):
%MAXVALUE(numberFormatId)% is replaced by the maximum reliable or doubtful value in the time series.
%MINVALUE(numberFormatId)% is replaced by the minimum reliable or doubtful value in the time series.
%LASTVALUE(numberFormatId)% is replaced by the most recent reliable or doubtful value in the time series.
%LASTVALUETIME(dateFormatId)% is replaced by the date and time of the most recent reliable or doubtful value in the time series.
%STARTTIME(dateFormatId)% is replaced by the start date and time of the relative view period of the time series.
%ENDTIME(dateFormatId)% is replaced by the end date and time of the relative view period of the time series.
Replace tags configuration elements
If specified, then the tags in the text of this component are replaced using data from the specified variable. Tags should be separated by "%"
signs. Text can be e.g. "Last value = %LASTVALUE(numberFormatId)%", which would be replaced by e.g. "Last value = 10.0". The following tags
can be used in the text (numberFormatId/dateFormatId should be replaced by the id of a numberFormat/dateFormat that is defined at the start of
this configuration file):
%MAXVALUE(numberFormatId)% is replaced by the maximum reliable or doubtful value in the time series.
%MINVALUE(numberFormatId)% is replaced by the minimum reliable or doubtful value in the time series.
%LASTVALUE(numberFormatId)% is replaced by the most recent reliable or doubtful value in the time series.
%LASTVALUETIME(dateFormatId)% is replaced by the date and time of the most recent reliable or doubtful value in the time series.
%STARTTIME(dateFormatId)% is replaced by the start date and time of the relative view period of the time series.
%ENDTIME(dateFormatId)% is replaced by the end date and time of the relative view period of the time series.
variable
The data from this variable is used to replace the tags in the text in the svg object that this component refers to. If for a given tag the required data
is not available, then that tag is replaced by a dash symbol "-".
variableId
locationId
If the specified variable contains multiple locations, then specify the location to use here.
overrulingRelativeViewPeriod
Optional time period for which data should be read. This time period overrules the viewPeriod in the timeSeriesSet of the referenced variable. This
time period is relative to the selected display time in this scada display. The start and end of the period are both included. If the start and/or end of
the period is not a valid time according to the timeStep of the variable, then the start and/or end is shifted to the previous valid time (e.g. for a
period from 15:20 hours to 16:20 hours and a whole hour timeStep the period is shifted to be 15:00 hours to 16:00 hours).
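As a sketch, following the attribute style of the relativeViewPeriod elements used elsewhere in this document (the exact attribute set is an assumption), an overruling period covering the six hours up to the selected display time could look like:
<overrulingRelativeViewPeriod unit="hour" start="-6" end="0"/>
With a whole-hour timeStep and a display time of 16:20 hours, this period would be shifted to run from 10:00 to 16:00 hours, as described above.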
timeSeriesSet
Up to release 2011_01 all data to be displayed in the ScadaDisplay had to be available beforehand as time series. This includes simple sums
and differences between other time series. From 2011_02 onwards, it is possible to include one or more transformations in the ScadaDisplay
configuration. These transformations make it easier to use derived time series. The derived time series are calculated on the fly as
temporary time series. The transformations are processed in the order in which they appear in the configuration.
NB. For 2011_02 only the UserSimpleFunction is supported and tested as a transformation.
NB. The timeSeriesType of output variables must be set to temporary.
This sample will allow the variable with variableId Observation_minus_correction to be displayed on a scadaPanel. This variable refers to a
temporary timeseries that will be updated on-the-fly with the difference between the two other variables Observation and Correction.
<variable>
<variableId>Observation</variableId>
<timeSeriesSet>
<moduleInstanceId>Afgeleide_Twentekanalen</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Hydro_LMW_TK_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-1" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Correction</variableId>
<timeSeriesSet>
<moduleInstanceId>Afgeleide_Twentekanalen</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Hydro_LMW_TK_H</locationSetId>
<timeSeriesType>external historical</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-1" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</variable>
<variable>
<variableId>Observation_minus_correction</variableId>
<timeSeriesSet>
<moduleInstanceId>Afgeleide_Twentekanalen</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Hydro_LMW_TK_H</locationSetId>
<timeSeriesType>temporary</timeSeriesType>
<timeStep unit="hour"/>
<relativeViewPeriod unit="hour" start="-1" end="0"/>
<readWriteMode>add originals</readWriteMode>
<synchLevel>9</synchLevel>
</timeSeriesSet>
</variable>
<transformation id="TransformationObservationMinusCorrection">
<user>
<simple>
<expression>Observation - Correction</expression>
<outputVariable>
<variableId>Observation_minus_correction</variableId>
</outputVariable>
</simple>
</user>
</transformation>
...
Known Issues
When using Delft-FEWS the configuration can be present as files on the file system or can be contained in a local data store. The svg files that
are used for the schematic status display work in both cases. However, if the svg files refer to separate image files, then the schematic status
display can only display these images if the image files are present as files on the file system. If these separate image files are contained in a
local data store, then they cannot be displayed in the schematic status display. Therefore, when using separate image files, make sure that the
configuration is present as files on the file system. If this is not possible, then it is possible to choose from two different workarounds:
The first possible workaround is to not use separate image files. For a schematic image it is possible to create svg elements that
resemble the contents of the image. If these svg elements are added to the svg file for the schematic status display, then there is no need
to use the image file anymore.
The second possible workaround is to use embedded images instead of separate image files. The section Embedding image files into
SVG files describes how to do this.
SVG specification
The schematic status display uses SVG files. For details on the format and possibilities of SVG files, please refer to [Link]
for the SVG 1.1 specification.
It is possible to embed image files into an SVG file. If an image file is embedded into an SVG file, then the original image file is no longer needed,
because the image data is then available from the SVG file itself. To embed an image file into an SVG file using Inkscape, do the following. Open
the SVG file in Inkscape. Add one or more images to the file in the normal way. Then select menu "Extensions", then select "Images", then select
"Embed Images". Then click "Apply". Then save the SVG file in the normal way. Now the images are embedded into the SVG file. If all the images
in an SVG file are embedded, then Delft-FEWS only needs the SVG file itself (and not the original image files) for displaying in the scada display.
Controlling the resizing behaviour of an svg document within the scada display
In an svg file in the root element use the following attributes to control its resizing behaviour: width, height, viewBox, preserveAspectRatio.
If only width and height are present, then the svg document gets an absolute size, appears in the top-left corner of the display and is never
resized (not even when the display window is resized). This means it can be cut-off when the display window is too small.
If only viewBox and preserveAspectRatio are present, then the viewBox determines the rectangular region of the svg document that is
drawn in the display window (the coordinates for the viewBox edges are the same as the coordinate system used within the svg file,
usually the coordinates are in pixels). The preserveAspectRatio determines how the drawn region is sized and aligned within the display
window. In this case the svg document is automatically resized when the display window is resized.
Examples:
The svg document is scaled to fit the display window and the aspect ratio is preserved.
The svg document is scaled and stretched to fill the window (aspect ratio is not preserved).
Only the region with coordinates 0 <= x <= 1200 and 0 <= y <= 700 pixels is shown. The svg document is not
resized when the display window is resized.
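The attribute examples that these three descriptions refer to are not shown in this extract. Plausible reconstructions, based on the background information below and the SVG 1.1 specification, are (the numbers are illustrative only):
<svg viewBox="0 0 1200 700" preserveAspectRatio="xMidYMid meet" ...>   <!-- scaled to fit the window, aspect ratio preserved -->
<svg viewBox="0 0 1200 700" preserveAspectRatio="none" ...>            <!-- scaled and stretched to fill the window, aspect ratio not preserved -->
<svg width="1200" height="700" ...>                                    <!-- only the region 0..1200 x 0..700 is shown, never resized -->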
Background information:
The width and height attributes in the root svg element of an svg file determine the size of the viewport, in other words the size of the svg
document when it is viewed. The coordinates of the objects in the svg file are specified in user space, which is different from the viewport space.
The viewBox attribute in the root svg element defines the rectangle in user space that should be mapped to the edges of the viewport. The
preserveAspectRatio attribute in the root svg element determines how this mapping takes place. This mapping uses one of three possible
methods: "meet", "slice" or "none". See [Link] and
[Link] for more detailed information.
Determining the rotation anchor point for an SVG object in user space coordinates
To determine the rotation anchor point for an SVG object in user space coordinates using Inkscape, do the following. Open the SVG file in
Inkscape. Select the object. Select menu "Edit", then select "XML Editor". Then in the window that opens, in the box on the right, look for the
important attributes (e.g. "x", "y", "width", "height", "transform" or "d") and use their values to calculate the required anchor point. E.g. to rotate a
"rect" svg object with attributes width="200" height="200" x="500" y="300" transform="translate(50 0)" around its center, use anchor point
coordinates (x, y) = (650, 400).
By default text in an svg text object is left-aligned and the x and y coordinates of the object denote the upper-left corner of the object. To
right-align text in an svg text object, add the attribute text-anchor="end" to the text element.
When an object is right-aligned, the x and y coordinates of the object denote the upper-right corner of the object. The attribute text-anchor
can also have the values "start" or "middle". To create multiple pieces of text with different alignments, use separate text objects.
When using right-alignment, the decimal separator for number values can be aligned by using the number format "#.00".
Here "#" means one or more digits before the decimal separator and ".00" means that always two decimal places are shown (the number is either
rounded or padded with zeros).
In ArcGIS it is possible to export a map as an svg file. Go to "File > Export map" and select *.svg as the export file type.
Within Inkscape the size of an svg file can be reduced by saving it as a compressed svg file (*.svgz) or as a plain svg file. Also, cleaning up the file by
using the "vacuum defs" option in the file menu makes the file significantly smaller.
16 Modifier display
Overview
The [Link] is used to configure the modifier display. The modifier display is used in an IFD-environment to
manage modifiers.
Contents
Overview
Contents
Schema
Create modifier buttons
TimeSeriesDisplayConfig
Schema
Below the schema of the modifiers-display is shown.
In the modifiers panel modifiers can be created by pressing the create modifier button and selecting a modifier type. A shortcut for creating
modifiers is using shortcut-buttons. The display below shows an example.
Besides the create modifier button two buttons are shown: a button with the text "wechng" and one with the text "aescchng". Both buttons
can be used to create a modifier directly. For example, after pressing the wechng-button a temporary wechng-modifier will be created.
Pressing this button is the same as pressing the create-mod button and selecting "wechng".
To define for which modifier types a shortcut-button should be created, a list of modifier-ids should be listed in the [Link].
<modifierId>wechng</modifierId>
<modifierId>aescchng</modifierId>
TimeSeriesDisplayConfig
showTimeSeriesModifiersButton
A timeseriesmodifier can be shifted in time by using arrow-buttons. By default these buttons are not visible because the default value of this option
is false.
However when this option is enabled, green arrow buttons appear next to the table- and graph-button.
showTablePanel
When the display to create timeseriesmodifiers is started, by default a table and a graph are shown. This can be adjusted with this
option. When this option is set to false the display will by default show only the graph.
defaultOperationType
The option defaultOperationType can be used to define which operation type (add, subtract, etc.) will be selected after startup of the display.
When no option is defined, the operation type timeseries will be selected.
incrementOperationTypeAdd
The option incrementOperationTypeAdd defines the increment for the operation types add and subtract when using the spinner-button
to increase or lower the value.
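A minimal sketch combining the four options above could look as follows; the wrapping element and the exact value formats are assumptions based on the option names and should be checked against the schema.
<timeSeriesDisplayConfig>
  <showTimeSeriesModifiersButton>true</showTimeSeriesModifiersButton>  <!-- show the green arrow buttons for shifting a modifier in time -->
  <showTablePanel>false</showTablePanel>                               <!-- start with only the graph shown -->
  <defaultOperationType>add</defaultOperationType>                     <!-- operation type selected at startup -->
  <incrementOperationTypeAdd>0.1</incrementOperationTypeAdd>           <!-- spinner increment for add/subtract -->
</timeSeriesDisplayConfig>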
Introduction
DELFT-FEWS allows open integration with various external data sources and modules. Many of these will require identification of locations and
parameters using a native definition. DELFT-FEWS allows internal location Id's, parameter id's, flags and units to be mapped to external id's.
For each external data source, or each module, a different set of ID mappings may be defined. In specific cases one mapping may be used when
exporting data to the module and a different mapping when importing data from the module.
Contents
01 ID Mapping
02 Unit Conversions
03 Flag Conversions
01 ID Mapping
IdMaps
IdMaps are defined to map internal location and parameter ID's to external location and parameter ID's. The configuration of these can be done in
two ways. In the first, separate mappings can be defined for the locations and for the parameters. Although this is the most efficient method, it is
not suitable in all cases, as specific locations may require a different mapping. A second definition can be created where the mapping is done on
the basis of the unique combination of location/parameter. Each IdMap configuration may only use one method of defining mappings to avoid
ambiguity.
Each IdMap configured must be registered in the IdMapsDescriptors configuration (see Regional Configuration). The Id used in registering the
IdMap is the same as the name of the configuration. When available on the file system, the name of the XML file for configuring an IdMap called
for example ImportNWP may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
parameter
Attributes:
internal : internal parameter Id (must be in the parameter configuration)
external : external parameter Id (free format string)
location
Attributes:
internal : internal location Id (must be in the location configuration)
external : external location Id (free format string)
map
Attributes:
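As a sketch of the two mapping methods described above; the attribute names for the map element and all example ids are assumptions, since the attribute list for map is not shown in this extract.
<!-- Option 1: separate mappings for parameters and locations -->
<idMap version="1.1">
  <parameter internal="H.obs" external="WATLEV"/>
  <location internal="H-2001" external="STN2001"/>
</idMap>
<!-- Option 2: mapping on the unique location/parameter combination (do not mix with option 1 in the same IdMap) -->
<idMap version="1.1">
  <map internalParameter="Q.obs" internalLocation="H-2002" externalParameter="FLOW" externalLocation="STN2002"/>
</idMap>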
02 Unit Conversions
UnitConversions
UnitConversions are defined to map internal units to external units. For each unit to be converted, the conversion method can be defined. This
may be a simple multiplication (e.g. feet to metres), as well as a possible increment (e.g. °F to °C). The converted value is (inputUnitTypeValue *
multiplier) + increment. In DELFT-FEWS the convention for storing level data is that this is with reference to the local datum. If the external unit
specifies the datum is global, a Boolean flag can be used to indicate the data should be converted on import.
Each UnitConversion configured must be registered in the UnitConversionsDescriptors configuration (see Regional Configuration). The Id used in
registering the UnitConversion is the same as the name of the configuration. When available on the file system, the name of the XML file for
configuring a UnitConversion called for example NWPUnits may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
addInverses
When set to true, the inverse unit conversions (from output to input unit) are automatically generated and added to this set of unitConversions
inputUnitType
Definition of the input unit. Depending on the conversion being used for import or for export this may be the unit as defined in DELFT-FEWS or
that as defined in the external data source.
outputUnitType
Definition of the output unit. Depending on the conversion being used for import or for export this may be the unit as defined in DELFT-FEWS or
that as defined in the external data source.
multiplier
Value by which the data is multiplied on import/export.
incrementer
Value to be added to data on import/export.
convertDatum
Boolean flag to indicate if the data is to be converted on input to the local reference level. If this value is true, and the parameter to be imported
supports datum conversion (see Parameter definition in Region Configuration), the "z" value of the location is subtracted from the data (see
Location definition in Region Configuration).
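As a sketch of two conversions following the formula above (feet to metres, and °F to °C), using the element names described in this section; the exact nesting and any surrounding elements are assumptions and should be checked against the schema.
<unitConversion>
  <inputUnitType>ft</inputUnitType>
  <outputUnitType>m</outputUnitType>
  <multiplier>0.3048</multiplier>        <!-- 1 ft = 0.3048 m -->
  <incrementer>0</incrementer>
</unitConversion>
<unitConversion>
  <inputUnitType>degF</inputUnitType>
  <outputUnitType>degC</outputUnitType>
  <multiplier>0.5556</multiplier>        <!-- 5/9 -->
  <incrementer>-17.7778</incrementer>    <!-- (value * 5/9) - 17.7778 equals (value - 32) * 5/9 -->
</unitConversion>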
03 Flag Conversions
Flag Conversions
FlagConversions are defined to map internal quality flags to external quality flags. For each flag to be converted a conversion can be defined, and a
default flag may also be given to ensure the exported or imported data carries a flag. A flag to identify missing values must also be configured.
Each FlagConversion configured must be registered in the FlagConversionsDescriptors configuration (see Regional Configuration). The Id used in
registering the FlagConversion is the same as the name of the configuration. When available on the file system, the name of the XML file for
configuring a FlagConversion called for example NWPFlags may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
flagConversions
Root element for defining a flagConversion. For each element an inputFlag/outputFlag tuple must be defined.
inputFlag
name
value
description
defaultOutputFlag
missingValueFlag
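A sketch of one flag conversion together with the default output flag and the missing value flag. Only the element names listed above are taken from the text; the nesting, the outputFlag structure and the example flag names and values are assumptions.
<flagConversions>
  <flagConversion>
    <inputFlag>
      <name>good</name>
      <value>G</value>
      <description>Reliable value in the external system</description>
    </inputFlag>
    <outputFlag>
      <name>reliable</name>
      <value>0</value>
    </outputFlag>
  </flagConversion>
  <defaultOutputFlag>
    <name>unreliable</name>
    <value>8</value>
  </defaultOutputFlag>
  <missingValueFlag>
    <name>missing</name>
    <value>-999</value>
  </missingValueFlag>
</flagConversions>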
Introduction
DELFT-FEWS allows module datasets to be defined for external forecasting modules. These datasets can then be managed through the
configuration management of DELFT-FEWS. This also allows multiple versions of module datasets to be defined (e.g. with adapted module
structure or module parameters). When constructing what-if scenarios, an alternative version to the default can be selected to explore the impact
this has on the results of forecasting modules.
Definition of datasets and parameters is not a requirement for the use of external forecasting modules. Module datasets and module parameters
are only used in DELFT-FEWS by the General Adapter module. Both can only be exported to the external module. Import of module datasets
and parameters is not possible.
Two methods are available for managing module datasets and parameters:
moduleDataSets: in the ModuleInstanceDataSets table in the database of the configuration, or in the ModuleDataSets directory.
moduleParameters: in the ModuleParameters table in the database of the configuration, or in the ModuleParameters directory.
Contents
01 Module Datasets
02 Module Parameters
01 Module Datasets
Module Datasets
Module datasets are defined to be exported to a module directory prior to running the module. The module dataset is identified by the
ModuleInstanceId of the General Adapter configuration in which it is to be used.
The module dataset is not an XML file, but a ZIP file containing all native module data. This is exported by the General Adapter to a directory
specified in the General Adapter configuration (see Module Instance configuration section). If the external module requires a directory structure,
then this information should be contained in the ZIP file, relative to the directory specified as export directory.
When available on the file system, the name of the ZIP file for a module dataset, for example for the ISIS model of the Eden used in
the Eden_Historical General Adapter module, may be:
default Flag to indicate the version is the default configuration (otherwise omitted).
02 Module Parameters
Module Parameters
Module Parameters can also be managed by DELFT-FEWS similar to management of Module datasets. The difference is that where the module
datasets are handled as ZIP files, with no actual interaction between the dataset and DELFT-FEWS, module parameters can be defined in a
native DELFT-FEWS format and exchanged with the external module in the published interface format through the General Adapter module. A
prerequisite for this exchange being meaningful is that the module adapter supports this format of the published interface, and can transform this
into the native module format.
As with module datasets, module parameters are defined in a configuration where the name is the same as the moduleInstanceId of the General
Adapter module it is to be used in (though a different name may also be referenced by the General Adapter; see the moduleInstance configuration
section).
When available on the file system, the name of the XML file for configuring Module Parameters for example for the Eden_Historical module may
be:
default Flag to indicate the version is the default configuration (otherwise omitted).
The structure of the module parameter XML configuration is the same as that applied in the Published Interface format. See the relevant
documentation for the definition of the schema and required configuration.
Introduction
In this chapter some additional configuration is described which is required in providing DELFT-FEWS as an operational forecasting system.
Only the elements relevant to DELFT-FEWS as either a stand-alone system or as an operator client are described. Configuration of the Master
Controller as the hub of a live forecasting system is described in a separate document.
The items to be configured are predominantly the root configuration files. These determine how a local client is started (i.e. whether it is in stand
alone mode or is an operator client), as well as, for the live system, the details of the Master Controller to connect to and options for
synchronisation.
clientConfig
logConfig
RollingBarrel_Database (this file should not be changed and is not described)
synchConfig
synchChannels
synchProfiles
This chapter also describes the procedure in setting up scheduled current forecasts in the live system and the procedure for setting up
enhanced forecasting.
The files described in this section require specialist knowledge of DELFT-FEWS. Making changes in these files will influence
the behaviour of the system significantly.
Contents
01 Root Configuration Files
02 Launching FEWS
03 Setting Up Scheduled Forecasts
04 Setting Up Event-Action Configuration
05 Setting up sending emails on events
06 Checklist for creating a live system from a stand alone system
07 Setting up alerts for the Alarmmodule
clientConfig
The clientConfig File determines if the instance of DELFT-FEWS is to run as a stand alone system, or if it is to connect to the master controllers
defined below.
Since 2011.01 the clientConfig file is no longer required for stand-alone use. Stand alone is the default when the clientConfig file is missing.
clientType
Operator Client
Stand Alone
LogConfig
To be completed
synchConfig
The only setting that may demand editing is the <login timeout="10" />, which controls the timeout for a login attempt of the OC on the MC (in
seconds). This element may be absent, in which case the timeout is 10 seconds.
It may be necessary to extend this timeout if the JMS server is very busy (very many clients starting up and synchronising at the same time, e.g.
when all the PCs for a workshop are starting up at the same time).
Note: the xml config can only extend the timeout from the default 10 seconds. Settings less than 10 seconds are ignored.
synchProfiles
The file [Link] contains several different profiles for fine-grained control over the synchronisation with the database.
Full - Profile for synchronising fully between the Operator Client and the Master Controller
Minimal - Profile for synchronising minimally between the Operator Client and the Master Controller
Custom - Customizable synchronisation between the Operator Client and the Master Controller
FS - Synchronisation profile for the Forecasting Shell Server
From version 2010.01 onwards, it is possible to get an overview of the active users. This overview is available in both the Operator Client and the
Admin Interface. In order to make this functionality work, the file [Link] has to be configured properly in each of the Operator Clients.
Before version 2010.01, the file [Link] would typically contain several profiles containing the following snippet:
<channelId>[Link]</channelId>
<schedule>
<single/>
</schedule>
<timeOut>10000</timeOut>
From version 2010.01 onwards, it is recommended to replace this snippet for the profiles 'Full', 'Minimal' and 'Custom' (not for ConfigManager) by
the following:
<channelId>[Link]</channelId>
<schedule>
<continuous>
<period divider="1" unit="minute" multiplier="3"/>
<priority>low</priority>
</continuous>
</schedule>
<timeOut>10000</timeOut>
This defines that the information needed for these overviews is synchronized every three minutes.
synchChannels
To be completed
[Link] file
Module Name: [Link]
Where to Use? In the [Link] file in the root dir of the region
Screendump: na
Overview
1. Define global variables that can be used within the (XML) configuration files (e.g. if you define NAME_OFPROGRAM=c:\[Link] in the
[Link] file you can use the variable $NAME_OFPROGRAM$ in the configuration files).
2. Set software options.
Configuration
A list of available options is given below (all options are case sensitive):
localDatastoreFormat (value: firebird; default: msaccess) - Use firebird for the local datastore (instead of msaccess). Useful if you have large (> 2GB) datastores. Notice that in case of using firebird the localDataStore should be located on the physical hard disk and not on the network.
ARCHIVE_IMPORT_PATH
ARCHIVE_EXPORT_PATH
REGION_HOME
DEFAULT_EXPIRY_DAYS (value: any number of days; default: 10 days) - Sets the default expiry time for timeseries in the database. You can override this when storing individual timeseries by specifying the expiryTime in the timeSeriesSet.
DEFAULT_EXPIRY_DAYS_LOGEVENT
DEFAULT_EXPIRY_DAYS_LOGEVENT_MANUAL
alwaysAllowWriteSimulatedBelongingToAlreadyRunnedModuleInstance
alwaysAllowDummyModuleInstanceRuns
SCHEMALOCATION
PI_SCHEMALOCATION
UseLenientPiTimeSeriesParser
EXPLORER_SYSTEMCAPTION (value: any string) - Option to set the window title of the FEWS Explorer (i.s.o. [Link]). In [Link] the element <systemCaption> underneath <systemInformation> should contain a reference ($EXPLORER_SYSTEMCAPTION$) to this global property variable.
NIMROD_DUMP_VALUES
checkFileWillExistOnCaseSensitiveFileSystem
GA_shiftExternalTimesToYear2000 (value: boolean; default: false) - This setting is used to export data from the General Adapter always starting in the year 2000. True means this setting is used. This overcomes the issue with running FEWS after the year 10.000 which caused problems. Internally the dates are handled normally.
REPORT_HTML2PDF_PROGRAM
COUNTRY
IP
PREFIXED_IP
NESTED_IP
JdbcServerPort (value: integer; default: 2000) - IP port number on which the JDBC server will listen after startup.
doCaseConsistencyCheckOnMc (value: boolean; default: true if localDatastoreFormat is MsAccess, false if Firebird) - Check new config files for case insensitive matches on the MC, to prevent config corruption in (case-insensitive) MsAccess localDataStores.
tempDir (value: string; default: Windows default) - Sets the temp dir to something other than the Windows default, e.g. =F:/Temp.
hideExternalHistoricalAfterSystemTime (value: boolean; default: false) - If true, only external historical data prior to T0 is visible in the timeseries display. Any existing external historical data after T0 will not be shown.
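As an illustration, a small global properties file could combine a few of the options above as follows; the values shown are examples only and the paths depend on the local installation.
# global variables usable as $NAME$ in the configuration files
REGION_HOME=d:/fews/NewFEWS
EXPLORER_SYSTEMCAPTION=NewFEWS Forecasting System
# software options (option names as listed in the table above)
localDatastoreFormat=firebird
DEFAULT_EXPIRY_DAYS=30
JdbcServerPort=2001
tempDir=F:/Temp
hideExternalHistoricalAfterSystemTime=true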
Known issues
02 Launching FEWS
How to setup the launch of Delft-FEWS
There are a number of options when configuring the launch of your Delft-FEWS application. The simplest is to double-click on the appropriate
executable in the bin directory, e.g. C:\FEWS\bin\Your_Region.exe - this will launch FEWS directly.
In the bin directory there will also be a file of the same name with the extension .jpif (e.g. Your_Region.jpif). Since the executable is generic (except for the
name) this file contains all the information required to launch your application.
A typical jpif file contains the following lines:
-cp
$JARS_PATH$
[Link]
$USER_HOME\Application Data\FEWS\Anglian_SA
The third line is the main class: if wanting to use the config manager use the line [Link], or for the launcher use [Link].
The last line is the folder name of the configuration, of which a working copy will be copied and used in the defined directory. The base directory should always be at the same level as JRE and BIN.
Notice that JAVA system properties should be defined after the -D keyword.
You must create a folder in the root directory (same level as the bin and jre) which will contain the launcher configuration files (called for example
FEWSLauncher). You will need to have an executable and jpif in the bin directory - you start the launcher by double clicking on the executable.
But first we need to set up the launcher config files...
Firstly you can configure the password protected level of access required. Please note that this is not a highly secure method of password
protection but is meant simply to restrict access to those who require it.
This is done using the [Link] file (details below). The passwords are contained in the binary [Link] file.
Once you have entered the correct password you will be shown the appropriate screen from which you can choose the FEWS application you
wish to launch.
This is configured using the [Link] (details below). You can also display your organisation's logo or a picture of your choice by adding an
image in the FEWSLauncher directory called [Link] of size 455 x 540 pixels.
[Link]
The [Link] file needs to follow the diagram in the following schema.
This [Link] contains the actions and roles required. The actions are linked directly to the [Link]. The 'role' describes which users
have access to which actions. For example a forecaster might have access to the explorer only, while a system administrator may have access to
the admin interface and configuration management interfaces. For an example file click here. This file links actionIds to user roles.
The actionIds used in these files include: ViewReports, LaunchFewsClient, LaunchConfigManager, LaunchAdminInterface and Upload OnLine.
The role of Forecaster has the privileges to run the fews client (LaunchFewsClient).
The role of ConfigManager is allowed to run fews, to run the configmanager, and upload files (LaunchFewsClient, LaunchConfigManager,
Upload OnLine).
The SystemManager is allowed what the configmanager can do as well as login to the admin interface (LaunchFewsClient,
LaunchConfigManager, LaunchAdminInterface, Upload OnLine).
How to create a [Link] file from a [Link] is described in the privileged section, see
[Link]
[Link]
This file contains the actions which are accessed through the launcher. The id links with the id given in the [Link]. You can see from the
schema that the action can link to a web page (for example the admin interface) or to a java application (fews explorer or config manager). You
will notice similarities between the attributes of the JavaAppType and those found in the .jpif file in the bin directory. An example file can be seen
here
The jvmOption gives you the chance to tweak the heap size used by the Java virtual machine. Use -Xms for the initial Java heap size (e.g.
-Xms256m) and -Xmx for the maximum heap size (e.g. -Xmx1G). The syntax is then as follows:
jvmOption="-Xmx1G -Xms256m"
03 Setting Up Scheduled Forecasts
DELFT-FEWS will, however, not allow a manually submitted forecast to be set as the "current" forecast. Scheduling of current forecasts should
only be done by suitably authorised users, and follow a carefully defined scheduling plan. These users must have access to the admin interface tool
(see Admin Interface manual).
Start an Operator Client. Select the workflow to be scheduled. Select the option "Scheduled Forecasting". Set the Start Time and End
Time properties as required, set the repeat time and the ShiftT0.
Submit the forecast to the Master Controller by clicking on run.
Go to the Administrator Interface. Select the tab Forecast Tasks and the Scheduled Tasks item. The forecast run just scheduled should
be available in the list.
Click on Details. This will open a new web page with relevant details on the forecast run.
Select "Download Properties". Save the XML file to a suitable location/name.
Open the XML file. Change the element
<makeForcastCurrent>false</makeForcastCurrent>
to
<makeForcastCurrent>true</makeForcastCurrent>
Save the XML file.
Cancel the forecast just submitted.
Return to the list of scheduled forecasts in the Admin Interface
Select "Schedule new Forecast"
Enter a relevant description (this will appear as a description of the run).
Enter a tag if this forecast is to be enhanced (see next section)
Select workflowId. This should be the same as the original forecast submitted.
Specify Failover behaviour. This is only relevant when two master controllers are available running in Duty/Standby. If this
workflow is to run on one Master Controller only, and should be replicated on the other then this item should be set to true. It
should then not be scheduled on the other Master Controller. If it is set to False, then the run should be scheduled on both
Master Controllers separately.
Enter details on TaskDue time and TaskRepeat time
Select the file to upload and use the browse button to load the XML file just defined.
Select Submit.
Confirm results in the Scheduled Forecasts list.
This can only be done by suitably authorised users, and should follow a carefully defined scheduling plan. These users must have access to the
admin interface tool (see Admin Interface manual).
A run that is to be enhanced must have already been submitted to the Master Controller and available in the Scheduled Forecasting Lists. The run
is identified by its tag available in that list.
NOTE: when deleting action configurations, the Action Event Mapping must be deleted first due to relations in the database.
<?xml version="1.0" encoding="UTF-8"?>
<actionxml type="task">
<enhance>
<tag name="AIRE_FORECAST"/>
<suspend/>
</enhance>
</actionxml>
Note: a one-off task requires a cardinal time step and a reference time to establish a correct T0 for the run. It also needs the "template task" with
the relevant tag scheduled (in suspended mode) as a single/one-off task.
Then use the info here to create a new MC_SystemAlerter task. For the new task use the tag from the action config and make it a one-off task.
The xml file that needs to be uploaded contains the settings for the emails (see [Link] schema). In here it is possible to use the tag
%LOG% in the body of the email. This tag will then be replaced by the logmessage(s) that triggered sending the email. An example:
<alerts>
<emailalert>
<recipients>
<recipient email="[Link]@[Link]"/>
</recipients>
<configuration>
<smtp host="[Link]"/>
</configuration>
<subject>
<subjectline content="The subject line of the email to send"/>
<substitutions/>
</subject>
<body value="%LOG%"/>
<attachments/>
</emailalert>
</alerts>
When scheduling the task, set the start time somewhere in the future, otherwise the task runs immediately. Then search for the new task in
"scheduled tasks" and set it to suspended. When the action config is triggered, it copies the suspended task and runs the copy once.
06 Checklist for creating a live system from a stand alone system
There are a number of configuration aspects which must be considered when moving from a stand alone environment (i.e. workflows are
executed on your local PC) to a live system (i.e. workflows are executed on a forecasting shell machine).
Please ensure that these steps are followed to avoid problems in a live system environment
1. Synch levels
The synch levels determine how data is synchronised between the components of the live system. Please check that all timeseries sets are assigned
a synch level. Note that when the synchLevel is omitted it defaults to 0, so only for scalar forecasting timeseries can the synchLevel optionally be
left out.
The different synch levels which should be assigned to time series sets are described here in section A.5.
2. Maintenance workflows
There are a number of maintenance tasks which should be scheduled on the live system through the admin interface (this is described in detail
here). These include the rolling barrel workflow for the forecasting shell machine.
This workflow should be created to include two "dummy" module instances:
These modules do not require configuration in the modules directory, but they should be registered in the ModuleInstanceDescriptors file, i.e.:
<moduleInstanceDescriptor id="MarkedRecordManager">
<description>Records pending deletion</description>
<moduleId>MarkedRecordManager</moduleId>
</moduleInstanceDescriptor>
These modules should also be included in the modules file (systemConfigFiles) where the link is made to the appropriate class:
<moduleDescriptor id="MarkedRecordManager">
<className>[Link]</className>
</moduleDescriptor>
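As an illustration of such a maintenance workflow, a minimal sketch is shown below. The workflow/activity/moduleInstanceId structure follows common DELFT-FEWS workflow configuration; only MarkedRecordManager is named in the example above, so the id of the second dummy module instance is a hypothetical placeholder.
<workflow version="1.1">
  <activity>
    <moduleInstanceId>MarkedRecordManager</moduleInstanceId>
  </activity>
  <activity>
    <!-- hypothetical id for the second dummy module instance -->
    <moduleInstanceId>RollingBarrel</moduleInstanceId>
  </activity>
</workflow>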
Contents
Contents
Introduction
Schedule fixed alert
Schedule template alert
Introduction
Starting with release 2011.02, a task for the MC_SystemAlerter workflow can be configured to send an alert to the Alarmmodule, developed by
Imtech for the IWP system. The alert sending has been implemented as an invocation of the Alarmmodule via a SOAP call (also known as a
Webservice call). This page describes the FEWS-related part of the implementation.
Currently these tasks can only be scheduled using the "Upload task(s) from file" functionality in the Admin Interface. The uploaded XML
configuration should conform to the [Link] schema. For examples see below. The Alarmmodule alert part is defined in the referenced
[Link] schema.
<taskList xmlns:xsi="[Link]" xsi:schemaLocation="[Link] [Link]" xmlns="[Link]">
<task>
<taskStatus>S</taskStatus>
<runOnFailOver>false</runOnFailOver>
<taskProperties>
<description>IWP_0_Alarm</description>
<workflowId>MC_SystemAlerter</workflowId>
<taskSelection>
<singleTask>
<time0>2011-09-21T[Link].000Z</time0>
</singleTask>
</taskSelection>
<forecastPriority>Normal</forecastPriority>
<makeForcastCurrent>false</makeForcastCurrent>
<makeStateCurrent>false</makeStateCurrent>
<mcSystemAlerter>
<alerts>
<alarmModuleAlert>
<webserviceURL>[Link]</webserviceURL>
<alarmDiagTemplateLine code="%EVENT_CODE%" level="1" source="%MC_ID%" description="%LOG%"/>
</alarmModuleAlert>
</alerts>
</mcSystemAlerter>
</taskProperties>
</task>
</taskList>
Introduction
This section can be used by the user as a guide when setting up a forecasting system. It will help and support with the main steps of setting up a
DELFT-FEWS application.
DELFT-FEWS is a collection of standard displays, modules and plug-ins. Together with external models these form a forecasting system. In
this chapter only the standard DELFT-FEWS components will be used. For background information on the different components reference is
made to the description of the individual components in previous chapters.
Contents
01 Requirements
02 Designing the Forecasting System
03 Creating a FEWS Application Directory
04 Static Configuration
01 Requirements
These questions are just a small selection of the questions that should be asked before really starting to build a FEWS. Once a clear idea about what
is required is established you can start designing and building a FEWS. Once your FEWS application is completed you can use the FEWS as a
system that guides you in a structured manner through all required steps of making a forecast.
This section will not answer all of the above questions, but will focus on the practical side of the system: what data do you need when
setting up a FEWS application?
ESRI Shape files for GIS map layers of the area you want to make a FEWS for. The FEWS explorer uses shape files as a background for
your locations. Make sure the GIS map layers are in the correct co-ordinate system.
Meta-data of the locations that are used in the FEWS (id's, names, co-ordinates)
Meta-data of the observed time series (hard and soft limits, thresholds, time steps)
Parameters used in the FEWS (id's, names, units, etc..)
Next you need to know what data is available at the gauging stations and in what format this will be delivered in the operational system.
All time series file formats, with good description of the file format for the time series you want to import. You also need to know the time
step of the data.
All grid file formats for the meteorological forecasts.
What is the co-ordinate system used. The FEWS needs all data, maps and locations, to be in the same co-ordinate system.
In what time zone is the data to be made available.
Extra information you need when setting up a FEWS application can be:
What is the procedure of the meteorological institute for making observations and forecasts available.
What interpolation procedures do I want to use for my time series and Grid data:
Linear interpolation of water levels and discharges?
Spatial interpolation of meteorological data, rainfall and temperature?
What is the data flow I want to use, i.e. which data is required for input for the models and which data is coming out of the models.
Do I have calibrated configuration files of the simulation models used in the FEWS
When making reports what will be the layout of the reports you want to produce.
When setting up a FEWS system it is recommended that you have a complete set of the required data before you start configuring. Experience
shows that incremental updating of the FEWS configuration files can cost you a lot of extra time. It is best to first make a layout of your system,
prepare GIS map layers, decide on the number of stations and series you want to use, decide on the interpolation procedures, decide on the
models to use, etc.., before starting to build a FEWS application.
import observed hydrological and meteorological data, and import forecast meteorological data
fill gaps using interpolation
run a rainfall-runoff model
show the results in a report
The FEWS should also check the imported data for outliers and extreme values. In a simple schema we will show the four main tasks.
Of course each of these tasks can be split into smaller elements. The import task (or workflow) will for example import data from different sources;
it is best to import data from different sources in separate elements. Each of these elements will be a Module Instance, i.e. a configuration of the
FEWS import module.
The main FEWS application directory must contain the 'bin' and 'jre' directories; these are the FEWS binaries and the Java Runtime directories.
Add a new directory named 'NewFEWS' in the main FEWS directory.
This application directory must now be filled with all required sub-directories and configuration files used by a basic FEWS application. Copy the
sub-directories of an existing application directory to the newly created 'NewFEWS' sub-directory. The minimum FEWS structure must look like
this:
"\ColdStates" empty
"\Config" FEWS configuration files
"\Help" FEWS Help file
"\Icons" FEWS icons
"\localDataStore" contains the FEWS database and cache files, empty at start
"\Map" contains the map properties file '[Link]' and shape files
FEWS executable
Besides a FEWS application sub-directory every FEWS has its own application executable. This is a small executable telling the main
FEWS programs where the application directory is located and which programs should be used. Associated with each executable is a jpif
file which contains information on how to start the Java Virtual Machine.
e.g. Files required in the \bin directory for our new FEWS application:
[Link]
[Link]
It is possible to run multiple FEWS applications with one set of FEWS program files (binaries). To make a distinction between the FEWS
applications each FEWS should have an ID ('NewFEWS'). The FEWS ID is specified in the last row of the associated jpif file.
04 Static Configuration
Adding static data, configure the FEWS static configuration files
Map layers
Parameters
Locations
First Prototype
Adding Workflows and Module Instances
Workflows
Module Instances
Configuring FEWS displays
Map layers
The FEWS explorer contains a component to show map layers. These map layers are loaded by the FEWS explorer when the FEWS application
is started. The location of the map layers is the "\map" directory of the application. As of the 2007-1 release the map layers are put in the
Config/MapLayerFiles directory. As such they can be uploaded with the Config Manager and distributed to all the connected clients. When
using map layers make sure the files are ArcView shape files and that the co-ordinate system of the shape files is the same. For information
on how to configure the MapLayers see the FEWS Explorer component documentation.
Parameters
Parameters are stored in the "Parameters" XML file, located in the regional configuration files. Make sure you keep the number of parameters
limited to a basic set, the more parameters you use the more complicated the configuration will become. An important property of a parameter or
parameter group is the unit. Use the same unit for a parameter group; this can minimise errors in configurations where conversions between time
series may introduce errors. Remember that the FEWS can convert units when importing or exporting external time series. More on configuring
parameters can be found in paragraph 5.4: "Parameters".
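As a sketch, a parameter group sharing one unit could look roughly as follows; the element names and ids are illustrative assumptions and should be checked against the Parameters schema (see paragraph 5.4).
<parameterGroups>
  <parameterGroup id="Level">
    <parameterType>instantaneous</parameterType>
    <unit>m</unit>    <!-- one unit shared by the whole group -->
    <parameter id="H.obs">
      <shortName>Water level (observed)</shortName>
    </parameter>
    <parameter id="H.fcst">
      <shortName>Water level (forecast)</shortName>
    </parameter>
  </parameterGroup>
</parameterGroups>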
Locations
The Locations configuration file contains the information on all locations of the FEWS application. In a normal FEWS you will have a set of
meteorological stations and hydrological stations. You can also add basins as locations. In this case you must enter the same information as for a
location; you just treat it as a basin.
When adding locations to the system a location ID must be entered. Try using known location ID's, for example the same ID's as used in the
telemetry system. Configuration of locations is explained in chapter 5.2.
Location Sets are introduced in the FEWS to define logical groups of locations. During configuration of the FEWS locations file try to add the new
locations also to the correct location sets. More on configuring location sets can be found in paragraph 5.3: "Location Sets".
Grids, Polygons and Longitudinal Profiles also require a location to be configured in the Locations XML file. Grids and Longitudinal Profiles require
extra information, configured in the grids and branches configurations respectively.
Configuration of the location icons is done in the "LocationIcons" XML file located in the FEWS system configuration files, see chapter 4.5:
"Location Icons".
First Prototype
In principle all basic regional information has now been configured and we can start our first prototype. Three more files must be configured to
start our first prototype:
"Explorer" XML file, located in the FEWS system configuration files directory. Enter correct names, co-ordinate system selection, etc..
See chapter 4.2 for details about configuring the FEWS explorer configuration file.
"[Link]" ASCII file, located in the FEWS application root directory. This file must contain some global properties like the
regional directory.
"LogConfig" XML file, located in the FEWS application root directory. The correct location of the '[Link]' file (the FEWS debug log file)
must be configured in this file.
We are now ready to launch our first prototype by running the NewFEWS executable located in the '\bin' directory.
RegionConfigFiles
Locations 1.00 [Link]
LocationSets 1.00 [Link]
Parameters 1.00 [Link]
SystemConfigFiles
DisplayDescriptors 1.00 [Link] (not edited)
DisplayInstanceDescriptors 1.00 [Link] (not edited)
Explorer 1.00 [Link]
LocationIcons 1.00 [Link]
ModuleDescriptors 1.00 [Link] (not edited)
The final task in setting up a basic forecasting system is configuring the workflows that perform the actual forecasting steps.
Workflows
A detailed description of workflows is given in chapter 7. In our test we will add four workflows; an Import workflow, an Interpolation workflow, a
Model workflow and a Report workflow. Each of these workflows will include one or several module instances. Workflow files are added in the
workflows directory and a descriptor of the workflow must be added to the "WorkflowDescriptors" XML file located in the regional configuration
directory.
Module Instances
Module instances are the actual configured FEWS modules, included in a workflow, that do the forecasting tasks. When adding a Module Instance
to the FEWS the XML configuration file must be added to the "ModuleConfigFiles" directory and the id of the Module Instance must be added to
the "ModuleInstanceDescriptors" XML file located in the regional configuration files.
Some Modules require additional configuration files to be added to the system besides the standard Module Instance configuration file. The import
module for example can include configuration files that:
After making the workflows the FEWS displays must be configured in order to see the data that is stored in the FEWS database.
The most important configuration file is the "filters" XML file located in the regional configuration directory. The filters configuration file defines the
locations and parameters that are displayed on the main map and in the list boxes of the FEWS explorer. Filters preferably include the parameters
and location sets; a good configuration of location sets can reduce the number of filters that need to be configured.
The FEWS displays that can be configured for making graphs are:
There are two FEWS components not included in the workflows that can be configured to show additional information on the time series. These
are the threshold and validation components of the FEWS. It is not required to use these files; they will however add additional information to the
FEWS.
Introduction
The Configuration Manager has been specifically created to allow the management of the configuration files for a regional configuration. Given
that DELFT-FEWS can be used at different levels, the configuration manager will need to be aware of the usage and modify its way of working
accordingly.
With the above activities a configuration can be managed but also modified.
Contents
01 Managing Configurations
02 Validation of a Configuration
03 Analysis of a Configuration
04. Automatic Configuration Update
01 Managing Configurations
Managing configurations
When the Configuration Manager is started, it is initialized with the information available in the local datastore. The datastore that is used is
specified in the jpif file that needs to be provided with the executable file.
After startup the user may attempt to connect to the Master Controller of the region. The details of which are obtained from the synchronization
configuration files for that region. If a connection has been established successfully, the download and upload buttons will be activated. If a
connection cannot be established, the buttons will remain inactive.
The file menu provides a command to create a connection to the master controller, to facilitate the possibility that a user has connected to the
network after opening a Master Controller session.
After selecting Login the following window is shown (an example from the Southern Configuration; if multiple master controllers are available,
these will be shown as well).
The user must manually initiate a synchronization session with the master controller to download the latest configuration files. Seeing that the
configuration manager works independently from FEWS, this is a required action to ensure that the local datastore is up to date. Only configuration
files need to be synchronized.
Click on [Download] to download all configuration files. The download button is only available if a connection has been established with a master
controller.
If no download can be performed, this does not mean that the manager cannot be used.
The import function allows a single or multiple configuration file(s) to be imported from the file system. Files can only be imported into a given
group if the Configuration Manager configuration allows this.
On import, each configuration file will be assigned a new, locally unique ID. This ID is prefixed with "CM: " followed by the date/time of the import
in milliseconds. The exact date/time is arbitrary, as the local ID needs to be unique only on the local machine.
In case the file to be imported is the first file that is imported of a certain configuration schema, this file is directly set as Active.
Three types of configuration files are handled by the configuration manager: XML, HTML and Binary files. The handling of each file type is slightly
different, as is shown in the following table.
FileType - Handling
XML - The file is stored in a readable form in the data store. The content of the xml file is validated before being imported. Invalid files will not be imported.
HTML - The file is stored in a readable form in the data store. The content is not validated, as no schema will be available to validate against.
Binary - Binary configuration files include all other configuration files. These may be xml configuration files, module data sets or module states.
When a configuration file is imported, the user that has imported the file is registered.
How are configuration files displayed?
An active configuration is shown having a yellow background. A selected configuration file is shown having a blue background. An active selected
configuration file is shown with a blue background, except for the ID which is shown with a yellow background. Below an example is given of two
available Locations configuration files. The active file is selected.
Of any configuration file instance, only one can be the active file. After selecting a file, click on [Set Active] to make the selected file the active file.
Only one configuration file may be active at any moment.
Deleting configuration files is possible only in limited situations. A configuration file could at some stage be used in a forecast and must therefore
remain available for at least the length of the rolling barrel.
Exporting a configuration file allows the file to be saved to the file system. The filename will be set by the configuration manager. The filename will
follow the configuration file naming convention used in the file system:
Where:
After exporting, the configuration manager will start up the application that has been associated with the specific file. This association must be
configured in the configuration management configuration file.
Uploading a configuration (file) means that all modified and added configuration files are synchronized with the master controller database. An
essential aspect of uploading is that the configuration files are provided with a unique Master Controller ID. The local ID that has been set for the
configuration file cannot be guaranteed to be unique for the master controller, as multiple users may be changing a configuration.
During the period between downloading a configuration and the ensuing uploading of a modified configuration, in theory someone else could
have made a change to the same configuration. The configuration manager will not deal with this theoretical possibility. The procedure that is
used is that of optimistic locking, where the last changes that are made to a regional configuration are the changes that are stored.
The uploading of configuration files is carried out in a transaction, enabling the complete transaction to be rolled back if an error occurs.
The procedure to follow when a certain configuration file needs to be changed is simple. To do so, carry out the following steps:
02 Validation of a Configuration
Validation of a Configuration
Primary Validation
Direct Validation
Indirect Validation
Secondary Validation (internal dependencies validation)
Validation of a Configuration
The validation of a configuration is carried out on two levels:
The Configuration Manager will not import or upload any configuration which violates the validation rules that have been set in the Configuration
Manager configuration file.
Primary Validation
Primary validation of an XML configuration file means that the content of the XML file is in accordance with the xml schema for that type of
configuration file. There are two possibilities to carry out a primary validation: direct or indirect validation.
Any other configuration files that are not XML, i.e. HTML and binary files, cannot be validated.
Direct Validation
Direct validation involves a direct check of the configuration file against the schema as defined in the configuration file. The schema that is
referenced in the xml file may or may not refer to the latest schema. Future releases of DELFT-FEWS should however have schemas that are
backwards compatible. The latest schema is the schema that is posted on the Delft Hydraulics website.
Direct validation is the most robust validation method and is recommended to be used. The Configuration Manager has been configured to use
Direct Validation.
Indirect Validation
When indirect validation is used, the configuration file is read with the code that has been created based on the latest schema at the time the code
was compiled. The file is valid if it can be successfully read.
This validation method is less robust than the direct method and should only be used when direct access to the schemas is not available.
There are a number of distinct differences in the use of either direct or indirect validation:
Direct validation ensures that a configuration file is in accordance with the latest schema. To use this option the latest version of the
Delft-FEWS code is required. Direct validation requires access to the Delft Hydraulics web site. An alternative could be to have all
schemas available locally, but this requires that the configuration files are edited to reflect the change in schema location.
Indirect validation ensures that the configuration file will always be accepted by the system in use. No connection to the internet is
required.
Secondary Validation
The internal dependencies of a regional configuration follow the rules of a relational database. These are described in the Configuration Manager
configuration file.
When a violation is found, the Configuration Manager will provide an appropriate message indicating which violation has been found, together
with the file or files that have been found to cause the violation.
The validation of internal dependencies is only carried out on the set of Active configuration files.
03 Analysis of a Configuration
Analysis of a Configuration
Approach
Implementation
Matching of timeseries
Handling of cyclic references
Handling of special cases
Using the Configuration Manager Analysis Tool
Analysis of a Configuration
Configuration analysis provides the means to analyze a configuration in detail. The analysis uses the dependencies between three configuration
objects (workflows, Module Instances and timeSeriesSets) to provide a visual overview of the configuration.
Approach
The principle of the configuration analysis is quite simple. For each workflow in the configuration, all module instances that are used are shown in
the order in which they are used. For a selected module instance, all timeseries that are created by that module instance are then displayed as the
top level timeseries.
For each of the timeseries, in cascading order, the following questions are answered:
This procedure is followed recursively until, for example, an import module instance is encountered. Following this through allows broken
links and unexpected starts or ends to be easily found in a configuration.
Implementation
When the Analysis mode is selected, a database of all timeseries is created. This means that all module instances are analyzed for all the
timeseries that are created and that are required within each module instance. For a selected module instance the input and output timeseries are
matched in order to build the analysis tree.
The configuration files have been made such that a large amount of freedom is given to the user in setting up a configuration.
Matching of timeseries
Key Comment
locationId A locationId is unique. A timeseries may also be identified using a locationSetId, which is a collection of locationId's. A
locationSetId may consist of a large number of locationId's. When a locationSetId is used, the locationSetId is shown in the
tree and the underlying timeseries are shown in a table.
moduleInstanceId A timeseries will in most cases be assigned a moduleInstanceId that is equal to the moduleInstance that creates it.
However, this is not a rule. It may equally be possible that a timeseries is assigned a different moduleInstanceId. For each
timeseries it is therefore required that both moduleInstanceIds are used to identify the timeseries:
- the moduleInstanceId of the module that creates the timeseries. This moduleInstanceId is the same as the configuration
filename
- the moduleInstanceId that is assigned to the timeseries when it is created
The configuration files allow a certain degree of cyclic referencing. A cyclic reference is caused when a module creates a timeseries that matches
the input timeseries. In this case an infinite loop would be caused in the analysis processing. Note that for the actual configuration a cyclic
reference does not pose any problem whatsoever.
For certain modules a cyclic reference is expected and must be handled explicitly. This is the case for the interpolation module and for the
transformation module. In other cases when a cyclic reference is found no absolute inference can be made regarding the nature of the cyclic
reference, and further analysis must be stopped.
Transformation module
A typical example of apparent cyclic referencing in the transformation module configuration is given by the following example, in which a time
series is created by merging a number of time series, after which, in the same configuration, the merged time series is checked to ensure that
no values below a given value are found.
The handling of the time series is carried out in two transformation steps:
Step 1: merge the inputs using data hierarchy. This results in a merged time series.
Step 2: the merged time series is transformed using an arithmetic function, applied in two segments. The result is exactly the same as the
time series in step 1.
The above example illustrates that using the exact same timeseries as the resulting output timeseries in two separate transformation sets is
allowed. The same could also be achieved through two different module instances, but this would obviously lead to additional moduleInstanceIds.
Handling of this case is straightforward. When in a single transformation module multiple transformationSets are configured having the exact
same timeseries as a result, it may be safely assumed that the last timeseries is the principal result timeseries. The other timeseries may be
skipped in the analysis.
Interpolation module
A typical example of apparent cyclic references in the interpolation module is given by the following example, in which interpolation of a single
timeseries is applied three times. Each interpolation however is configured in a different manner for a special reason. The input and output
timeseries for each interpolation are however exactly the same.
Step 1: Interpolation of the burn-in period (required to create a smooth start-up for a hydrodynamic model). This is a simple linear
interpolation, over a fixed length in a fixed period. This period is usually longer than the gaps allowed in step 2.
Step 2: Ensure that any small gaps up to e.g. 2 hours, are interpolated using linear interpolation
Step 3: Ensure that any remaining gaps are filled in with a default value. This interpolation is required to prevent unexpected crashing of
the hydrodynamic model.
Handling of this case is straightforward. When in a single interpolation module multiple interpolationSets are configured having the exact same
timeseries as a result, it may be safely assumed that the last timeseries is the principal result timeseries. The other timeseries may be skipped in
the analysis.
There are a number of special cases, exceptions to a general rule, that must be handled correctly by the analysis:
general adapter: input - optional timeseries or import from filesystem in PI format; output - optional timeseries or export to filesystem
For each workflow in the configuration, all module instances that are used are shown in the order in which they are used. These are displayed in
the left pane of the window:
The Configuration Manager configuration allows certain workflows to be specifically excluded from the analysis. In some cases this is required if
the workflow would create cyclic references that cannot be resolved internally. In the above example, this is the case for the
database_Maintenance workflow.
For a selected module instance, all timeseries that are created by that module instance are then displayed as the top level timeseries:
In the right pane, Module Instances are shown with a green ball icon, while time series sets are shown with a yellow icon. A time series set may
consist of a single time series or possibly multiple time series. The details of the time series are shown in the table below the right hand pane:
In the above example the selected time series set consists of 4 time series.
Finding blind starts
Each result time series will be created through a module instance. This module instance in turn requires input data, either imported or created in
the system. For each result time series, all module instances that are used to create it can be followed through, analyzing the input data that is
used. The configuration manager will flag a blind start with an explicit error icon if a module instance input time series cannot be found.
Description: Functionality to automatically import and process regional configuration changes in locations, locationSets, location DBF and
attribute files
Contents
Contents
Overview
Configuration
Configuration Update Script ModuleConfigFile
PI Configuration Update Script
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference
Overview
To allow other programs to maintain the list of locations, locationSets etc., it is possible to import configuration changes through an
import module that imports update script files.
The configuration update works like a regular data import. There is a moduleInstance that imports PI update script files from an import directory.
The script file contains options for the following configuration updates:
imported, then the existing rating curve will be replaced.
importMapLayerFiles: Import dbf files that contain location attributes. The imported dbf files are put into the mapLayerFiles configuration
directory. This only works for dbf files for which an older version is already present in the configuration. If the dbf files to import would
make the configuration invalid, then an error is logged and none of the dbf files will be imported.
ConfigManager
This functionality only works for configurations that are distributed through the database via the Config Manager.
Configuration
[Link]
<configUpdateScriptConfig xmlns:xsi="[Link]" xmlns="[Link]"
    xsi:schemaLocation="[Link] [Link]">
    <versionIncrement>0.1</versionIncrement>
    <scriptDirectory>$IMPORT_FOLDER$/ConfigUpdate</scriptDirectory>
    <failedDirectory>$IMPORT_FAILED_FOLDER$/ConfigUpdate</failedDirectory>
</configUpdateScriptConfig>
versionIncrement = Number to increment configuration file version with. Note that the increment should be zero in case no version numbers are
used in the config files.
scriptDirectory = Location of script files to be imported.
failedDirectory = Files that could not be imported due to an error are copied to this directory.
backupDirectory = Successfully imported files are moved to this directory.
In the import directory the configuration update script files should be available. See the schema of the import moduleinstance at
pi_configupdatescript.xsd
PI_UpdateScript.xml
<configUpdateScript xmlns:xsi="[Link]" xmlns="[Link]"
    xsi:schemaLocation="[Link] [Link]" version="1.2">
<updateCommand>
<importMapLayerFiles>
<importPath/>
</importMapLayerFiles>
</updateCommand>
.... or
<updateCommand>
<addLocation>
<location id="loc2" name="loc2">
<description>description</description>
<shortName>short</shortName>
<toolTip>new location</toolTip>
<x>1</x>
<y>2</y>
<z>3</z>
</location>
</addLocation>
</updateCommand>
.... or
<updateCommand>
<addLocationSetItem locationSetId="Boezem_Poldergemaal_H.meting" locationId="loc2">
<idMapData idMapId="IdImportCAW" internalParameterId="[Link]" externalParameterId="P-new-ext" externalLocationId="C-new-ext"/>
<validationRuleData hardMin="-10" rateOfFall="-10" hardMax="150" parameterId="[Link]" rateOfRise="10"/>
</addLocationSetItem>
</updateCommand>
</configUpdateScript>
Note that the importPath is relative to the location of the PI script file.
Example [Link]
Known issues
None
None
Technical reference
Entry in moduleDescriptors: Specification of: ENTRY and DESCRIPTION in the SystemConfigFiles\[Link]
<moduleDescriptor id="ConfigUpdateScript">
<className>[Link]</className>
</moduleDescriptor>
13 Additional Modules
This chapter is not finished yet. Further content is needed about how to download a configuration from a master controller and about the
calibration module.
Introduction
In this chapter a number of additional modules are described. These modules are generally seen as a part of DELFT-FEWS. They can,
however, also be used independently of the DELFT-FEWS system, as they have been developed as modules that can be connected to
DELFT-FEWS using the same concept that is used to link third party modules to the system, i.e. through the General Adapter. Currently the
modules described include:
The chapter also includes a brief description of the Calibration Module. While this module is a part of DELFT-FEWS, and its use is described in
the User Guide, some of the concepts are described here, as well as specific conditions it poses on the configuration of DELFT-FEWS.
Contents
01 Flood Mapping Module
03 Automatic WorkflowRunner in SA mode
04 Bayesian Model Averaging (BMA)
05 Historic Forecast Performance Tool (HFPT) Adapter
profileFile
timeSeriesFile
asciiDemFile
asciiAxisFile
asciiMapSectionFile
geoReferenceFile
interpolationOptions
pcrScript
output
outputOption
asciiGrid
pcrGrid
filename
mapStackFileName
contour
filename
numberOfCountours
Running the flood map module
The role of the flood mapping module is to provide a projection of the results of a one-dimensional hydrodynamic module as a 2D flood surface
map. The result of the module can be displayed using the grid display, which is a part of DELFT-FEWS. The results can equally be exported as
standard GIS interchange files, using for example the ESRI Shape file and/or ASC grid file formats.
Setting up a flood mapping module within DELFT-FEWS is equivalent to creating a new model. As such, specific requirements are posed on data
required in setting up the module, as well as some steps to make this data available to the module in the correct way.
Data requirements
The interpolated flood map is derived on the basis of the results of a one-dimensional hydrodynamic model. Effectively the results of the model,
given at the water level calculation points, are geo-referenced and subsequently interpolated to form a water level surface. The primary data
requirements are therefore those that link the 1D model to a 2D location. In some cases a careful interpretation of how the 1D model is
represented in two dimensions must be given, and a good understanding of the assumptions made in establishing the 1D hydrodynamic model for
the reach in question is a prerequisite.
Important to note is the assumption made in 1D modelling that the water level calculated at each grid point in the model is valid at all locations on
the cross section. The georeferenced points available should therefore not only include one point per cross section (e.g. at the centre of the river),
but multiple points describing how the cross section crosses the floodplain and the main channel. Each of these points must carry the same
identification label, and will be assigned the same water level calculated in the 1D model (see the example in Figure 160). For flood storage
areas, where the 1D model calculates a single water level, the outline of this area should be represented by a suitable number of points (an
example is given in Figure 161).
Figure 160 Example of geo-referenced cross sections. Triangles indicate geo-referenced points. Empty circles (see the dotted selection line)
show points belonging to one cross section
Figure 161 Example of a geo-referenced flood storage basin. Triangles indicate geo-referenced points. Empty circles (see the dotted selection
line) show points belonging to a storage basin for which the 1D model provides one calculated water level.
River Axis
The flood mapping module establishes a final flood map on the basis of flood areas with a contiguous connection with the main river channel. A
shape file representing this river axis (normally a line) is required. Note that for storage basins with an embankment around the edge, this
contiguous connection to the main channel may be a problem. As a consequence the complete storage area should be included in the river axis
theme.
Figure 162 River axis map. These are indicated by the hatched surfaces.
Although not strictly used by the flood mapping module, TIFF layers of the reach at sufficient resolution (e.g. 1:10000 or 1:25000) are required to
check consistency of data and avoid problems such as positional errors.
Data preparation
Prior to application of the flood mapping module, the source data will need to be prepared/reformatted to make it suitable.
The resulting flood map will return missing values where the digital elevation model contains missing values. These are often seen in the main
channel in Laser Altimetry derived elevation models, and should be filled in prior to running the flood mapping module. Techniques such as
spatial interpolation or nearest neighbour filling can be used to remove these values. The resulting elevation model should always be checked
afterwards. Once complete, the digital elevation model should be saved as an ARC-INFO format ASCII grid file.
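For reference, an ARC-INFO (ESRI) ASCII grid file consists of a short header followed by the cell values. The sketch below is purely illustrative; the dimensions, coordinates and values are made up:
ncols         250
nrows         200
xllcorner     350000.0
yllcorner     420000.0
cellsize      25.0
NODATA_value  -9999
-9999 -9999 10.2 10.4 10.1 ...
-9999 10.1 10.3 10.5 10.2 ...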
Figure 163 Flood map sections. These are numbered consecutively, starting at 0.
River Axis map
The river axis map should also be saved as an ARC-INFO format ASCII grid file at the same resolution and extent as the digital elevation model.
If this theme is a line theme then it is good practice to buffer the line with a distance equal to the grid cell resolution prior to saving as a grid file.
The value of the grid cells is not important, but grid cells not in the river axis map should be saved as missing values.
Georeferenced points
The list of georeferenced points is saved to an XML file. The points for each of the sections in the flood map section coverage are defined in one
group. When creating a flood map
geoDatum
Coordinate system used in defining the points. For an enumeration of available coordinate systems see Appendix B.
geoReferenceData
Definition of a set of points falling in a particular section. For each of the sections (see above) the points to be considered when establishing the
flood map for that section should be defined in separate groups.
Attributes;
label : label of the points in this group. Each label must be associated with a label in the longitudinal profile/time series used to create the
flood map.
mapSectionId
Id of the flood map section. This should comply with the section Id's in the Flood Map Section theme. The sections given in this file are
those for which a floodmap is established. If the section id is not included in this file then a flood map will not be interpolated for that
section id.
point
Definition of a point falling within the current section and to be allocated a calculated level associated with the current label.
Attributes;
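The individual attributes are not reproduced here. As a rough sketch of how the elements described above fit together (the root element name and the point attribute names x and y are assumptions and should be verified against the flood map module schema):
<geoReferencePoints>
    <geoDatum>WGS 1984</geoDatum>
    <geoReferenceData label="CP_0040" mapSectionId="0">
        <point x="155000.0" y="463000.0"/>
        <point x="155025.0" y="463010.0"/>
    </geoReferenceData>
    <geoReferenceData label="CP_0041" mapSectionId="0">
        <point x="155250.0" y="463400.0"/>
    </geoReferenceData>
</geoReferencePoints>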
Once the data has been prepared, the flood map module itself can be configured. This is again configured through an XML file. The flood map
module can be applied in a number of different ways.
1. Input in the form of a set of time series, one for each calculation point in a river branch; output as a longitudinal profile time series.
2. Input in the form of a set of time series, one for each calculation point in a river branch; output as a flood map.
3. Input in the form of a longitudinal profile time series; output as a flood map.
When used to create flood maps, outputs can be defined to be returned in a number of ways;
1. As a time series of grids, showing distributed depth data for each time set,
2. As a single grid, showing the maximum distributed flood depth,
3. As a polygon of the maximum flood extent. This polygon can be formatted both as a Published Interface Polygon file, and as an ESRI
compatible Shape file.
Figure 165 Elements of the flood map module configuration
floodMapdirectories
rootDir
Root directory for the module. All other directories can be defined relative to this root dir.
workDir
outputDir
Output directory for resulting flood maps and Published Interface FloodMap Stack file. This is the directory from which the General Adapter
running the flood map should be configured to read the maps.
inputDir
Input directory for time series / longitudinal profile for which a flood map is to be calculated. This is the directory where the General Adapter
running the flood map should be configured to write the data.
pcrDir
Optional location for PCRaster engine used in deriving flood maps. Required when using PC Raster executable (DLL is in the FEWS Bin
directory).
floodMapSet
Root element for the definition of activities to be run for the flood map module.
geoDatum
Definition of the coordinate system used in flood mapping. See Appendix B for enumeration of available options.
longitudinalProfile
Root element to be used when requesting output from the flood map module as a longitudinal profile.
branchFile
Published Interface formatted file with the calculation points to be included in the profile. Labels (id's) at the locations should coincide with the
labels (id's) in the time series.
timeSeriesFile
Time series inputs. These should be given for each location to be considered in the profile. Note this can be used either to create a profile for use
in flood mapping in an ensuing step, or to create a profile for visualisation using the longitudinal profile display.
profileFile
Name of the output longitudinal profile. For each label where a match is found between the time series and the branch file, data is retained in this
output file.
floodExtentMap
Root element to be used when using the module to create a flood extent map.
input
Figure 167 Elements of the configuration of inputs from the flood map.
profileFile
Name of the longitudinal profile file (Published Interface XML format) if this is used as an input (may have been created in the previous step). The
labels in the profile file should coincide with the labels in the geoReference points file.
timeSeriesFile
Name of the time series file (Published Interface XML format) if this is used as an input. The labels in the time series file should coincide with the
labels in the geoReference points file.
asciiDemFile
Name of the digital elevation model. This must be in the Arc-info ASCII grid format.
asciiAxisFile
Name of the axis file. This must be in the Arc-info ASCII grid format.
asciiMapSectionFile
Name of the map sections file. This must be in the Arc-info ASCII grid format.
geoReferenceFile
Name of the published Interface XML file with the geo-referenced point data.
interpolationOptions
Options to be used in interpolation. See the interpolation module (Module Configuration) for details. The flood map module should be defined to
use bi-linear interpolation.
<interpolationOptions>
<interpolationOption>bilinear</interpolationOption>
<interpolationType>seriesgeneration</interpolationType>
<valueOption>normal</valueOption>
</interpolationOptions>
pcrScript
Name of the PC Raster (GIS) script to run when creating the flood maps. This item should be set as;
<pcrScript>
<pcrScriptFile>pcr_flood_clump.mod</pcrScriptFile>
</pcrScript>
<pcrScript>
<pcrScriptXMLFile>[Link]</pcrScriptXMLFile>
</pcrScript>
output
Root element for definition of the required output. Different output options may be selected. Multiple options may also be defined.
outputOption
Definition of an output block, requesting the flood map module to return the given output type. Enumeration of available options includes:
asciiGrid
Root element for requesting output as a time series of ASCII grid files.
pcrGrid
Root element for requesting output as a time series of PC Raster grid files.
filename
Filename of the output grid. Note that the time step number is appended, e.g. for time step 7 and filename "asc" this becomes "asc0000.001".
mapStackFileName
File name for the Published Interface XML format file used by the general adapter for importing the resulting flood map.
contour
Root element for requesting output as a time series of contours (polygon files). This option can only be used for creating a maximum
flood extent.
filename
Filename of the output polygon file. Note that if this is given with the suffix "xml" then this is a Published Interface formatted XML file. If it is
"shp" then the output will be an ESRI shape file.
numberOfCountours
The flood map module is run within DELFT-FEWS through the General Adapter. Details on the configuration options can be found in the
Module Configuration section. It is important to configure the import and output directories correctly to allow the module to work.
An example of the General Adapter configuration is given below. Note that a Java class is run (this is the flood map module) with the name of the
XML file configuring the module as an argument. The other items in the General Adapter configuration define the data to export to and import
from the Flood Map module.
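The original example is not reproduced here. As a minimal sketch of the execute activity only, assuming a hypothetical class name and configuration file name (the actual class name should be taken from the flood map module distribution):
<executeActivity>
    <command>
        <!-- hypothetical class name of the flood map module -->
        <className>nl.wldelft.fews.floodmap.FloodMapModule</className>
    </command>
    <arguments>
        <!-- XML file configuring the flood map module (hypothetical name) -->
        <argument>%ROOT_DIR%/floodMapConfig.xml</argument>
    </arguments>
    <timeOut>60000</timeOut>
</executeActivity>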
03 Automatic WorkflowRunner in SA mode
Workflowrunner
Workflows can automatically be started on stand alone systems by using the WorkflowRunner program. The WorkflowRunner will start the
given workflow either in the running region or start the region to run the workflow. The WorkflowRunner makes use of a socket interface started by
the stand alone system.
[Link]
In order to enable automatic workflow execution on a region, one has to configure the region to listen for workflow requests on a socket interface.
The socket interface can be configured in the piServicePortRange tag of the [Link] file. The next example shows a configured pi socket for port
8432.
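A minimal sketch of such an entry, consistent with the description below (start and end set to the same value to fix the port number):
<piServicePortRange start="8432" end="8432"/>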
A port number can only be configured once per operating system. Use different port numbers when running multiple regions on one machine.
Note that the start and end attribute of the piServicePortRange are set on the same port number to make the port number a fixed port number.
Run an automatic workflow from the command line using JPIF config
Workflow runs can be started from the command line. The easiest way to accomplish this is using a jpif configuration. In the bin directory
configure the '[Link]' and '[Link]' files. Configure the JPIF as follows
"[Link]
<Workflow id="Id">
<ip service="service" port="port" nr="nr">
\[optional system time in format: yyyy-mm-dd hh:mm:ss\]
]]></ip></Workflow>
The Workflow rows shown above are WorkflowRunner specific. Here is an example of a valid jpif configuration:
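(The following is a hypothetical instance of the Workflow element from the template above; the workflow id, service name and system time are placeholders only.)
<Workflow id="ImportTelemetry">
    <ip service="FewsPiService" port="8432" nr="1">
        2013-01-01 00:00:00
    </ip>
</Workflow>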
After starting the jpif configuration, one of the following situations can exist when the port numbers are correctly configured:
Case 1: the stand alone region is running and listening on the correct port number. In this case one should see that the workflow is executed in
the system log of the Explorer GUI.
Case 2: the stand alone region is not running. The region will be started up; the Explorer GUI appears and one should see that the workflow
is executed in the system log of the Explorer GUI.
Contents
Introduction
Approach within FEWS
BMA in FEWS
Introduction
Bayesian Model Averaging (BMA) is a standard statistical approach for post-processing ensemble forecasts from multiple competing models
(Leamer, 1978). The method has been widely used in the social and health sciences and was first applied to dynamic weather forecasting models
by Raftery et al. (2005). Details of the method can be found therein.
The basic principle of the BMA method is to generate an overall forecast probability distribution function (PDF) by taking a weighted average of
the individual model forecast PDFs. The weights represent the model performance, or more specifically, the probability that a model will produce
the correct forecast. In a dynamic model application, the weights are continuously updated by investigating the model performance over the most
recent training period. The variance of the overall forecast PDF is the result of two components. The first component is associated with the spread
between the model forecasts. The second component is the uncertainty of each individual model forecast. The magnitude of this latter component
is also determined over the training period.
See also the published paper 'Use of Bayesian Model Averaging to Determine Uncertainties in River Discharge and Water Level Forecasts'.
BMA in FEWS
BMA in FEWS.
The present version of BMA within FEWS uses an R package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging, the
"ensembleBMA" package. The package was developed by Chris Fraley, Adrian E. Raftery, J. McLean Sloughter and Tilmann Gneiting at the
University of Washington. This package is distributed under the General Public License (version >= 2).
The R package can be downloaded from the CRAN R project website or directly from the R package page. However, it is recommended that you
download version 3.0-3 of ensembleBMA from the link given below instead, since that version is tested and used within FEWS. Newer versions
may or may not work flawlessly within FEWS.
Versions
FEWS uses R version 2.7.0.
Package: ensembleBMA, Version: 3.0-3, Date: 2008-07-21
Supporting Package chron, Version: 2.3-24, Date: 2008-07-18
Please note: the ensembleBMA version used is an older version that is no longer supported by the original developers.
To run BMA in FEWS, first install the correct version of R on the computer where the BMA Module is running.
Copy the contents of the ensembleBMA [Link] and chron [Link] under the library directory of the R installation.
Please note: use only the package versions mentioned above for running the BMA Module in Delft-FEWS.
Schematic Diagram
Preprocessor
The BMA Module preprocessor prepares the input for the ensembleBMA R package, which uses CSV format input. The General
Adapter configuration of the preprocessor is shown below.
<executeActivity>
<command>
<className>[Link]</className>
</command>
<arguments>
<argument>%ROOT_DIR%</argument> <!-- root directory -->
<argument>piOutputTimeSeries/[Link]</argument> <!-- outputfile -->
<argument>%TIME0%</argument> <!-- Time0 -->
<argument>0</argument> <!-- Start of Lead time period in days -->
<argument>[Link]</argument> <!-- Parameter file - each column represents a row -->
<argument>piOutputTimeSeries/[Link]</argument> <!-- Number of (partly) complete Forecasts
used for calculating the training period -->
</arguments>
<timeOut>4000000</timeOut>
</executeActivity>
The above configuration has to be repeated for each lead time. Make sure that the name of the output file is changed accordingly.
BMA Module
The BMA Module is a script written in R which uses the ensembleBMA package briefly described above. The General
Adapter configuration for running the BMA Module is shown below.
<executeActivity>
<command>
<executable>$R_EXE$</executable>
</command>
<arguments>
<argument>--vanilla</argument>
<argument>%ROOT_DIR%/config/BMA_FEWS_Script.R</argument>
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- inputfile -->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- outputfile qauntile -->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- outputfile weights-->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- outputfile bias-->
<argument>0</argument> <!-- input lead time in days (not used) -->
<argument>%ROOT_DIR%/piOutputTimeSeries/[Link]</argument> <!-- inputfile Forecast Length -->
</arguments>
<timeOut>1000000000</timeOut>
<overrulingDiagnosticFile>%ROOT_DIR%/[Link]</overrulingDiagnosticFile>
</executeActivity>
The above configuration has to be repeated for each lead time. Make sure that the names of the input and output files are changed accordingly.
Please note: $R_EXE$ is an attribute which is defined in the [Link] file as "R_EXE=C:/Program Files/R/R-2.7.0/bin/[Link]".
--- Read Arguments
--- Check if files exists
--- Read Forecast Length file
--- Load Ensemble R
--- Read input data
-- Assign labels (hard coded - similar to parameter file) (R-Code - Make sure to update this line for
your model)
labels <-c("SBK_MaxLob_DWD_GME_Q.fs","SBK_MaxLob_DWD_LM_Q.fs",...............)
Please make sure that the line "labels <-c("SBK_MaxLob_DWD_GME_Q.fs","SBK_MaxLob_DWD_LM_Q.fs",...............)" is changed according to
the number of models used.
The output of each BMA Module run consists of 3 files, with extension ...
forecast-date, weight for model one, weight for model 2 , .... and so on .... , sigma
Postprocessor
BMA Module postprocesser reads the output of R model prepares the data which is to be later imported into FEWS database. The General
Adapater configuration of Postprocessor is shown as below.
<executeActivity>
<command>
<className>[Link]</className>
</command>
<arguments>
<argument>%ROOT_DIR%</argument> <!-- root directory -->
<argument>piOutputTimeSeries</argument> <!-- outputDirectory -->
<argument>%TIME0%</argument> <!-- Time0 -->
<argument>3</argument> <!-- max lead time in days -->
<argument>[Link]</argument> <!-- Parameter file - each column represents a row -->
</arguments>
<timeOut>4000000</timeOut>
<overrulingDiagnosticFile>%ROOT_DIR%/[Link]</overrulingDiagnosticFile>
</executeActivity>
The postprocessor uses the output of the BMA Module run (i.e. quantiles, weights and bias) and the input to generate a new forecast timeseries plus
quantile (10, 25, 75 and 90) timeseries.
The forecast timeseries are generated using the weights, sigma and bias correction.
FOR EACH models_i (skip missing forecasts)
END
BMA = BMA / sumweights
1 Introduction
This document ([Link]) describes the Historic Forecast Performance Tool (HFPT) adapter which was first developed under
Environment Agency R&D project SC080030 'risk based probabilistic flood forecasting' [1]. The original scientific name of the method is 'Quantile
Regression', which was subsequently renamed to HFPT. This report includes a description of the Historic Forecast Performance Tool adapter that
can be used within NFFS, the file formats for reading and writing of the quantiles, and the configuration of the Historic Forecast Performance Tool
adapter in NFFS. In addition, a limited background on the method is given. In Appendix A the off-line calibration module of the Historic
Forecast Performance Tool is described.
The migration of the prototype R&D code to the current version of the NFFS adapter consists of several steps:
Increase robustness of the module for operational purposes. This includes adding error handling and creation of log files, adding flags to module
output/result files, putting the module under subversion (SVN), and simplifying the configuration (removing unnecessary items).
Updating the SC080030 test configurations for case studies developed under the R&D project.
Documentation of the configuration of the adapter in NFFS.
[1]
[Link]
2 Role in NFFS
The role of the Historic Forecast Performance Tool is to provide a probability distribution of the water level forecasts (or flow) conditioned on the
deterministic water level forecast (or flow forecast). This can be one, a few or many percentiles or quantiles (including the median or any other
percentile/quantile, like 0.05, 0.10, 0.25, 0.50, 0.75, 0.95).
The Historic Forecast Performance Tool adapter is linked to NFFS by means of the general adapter (see Figure 2.1).
Figure 2.1 Schematic interaction between Delft-FEWS and the Historic Forecast Performance Tool adapter (see Werner et al., 2004, Weerts
et al., 2010)
3 Method description
The Historic Forecast Performance Tool (i.e. Quantile Regression) adapter as developed in R&D project SC080030 (see Weerts et al., 2011)
makes use of offline-derived quantiles (median, quartiles, percentiles, etc) of the probability density function of the forecast error at different lead
times (i.e. climatology of the forecast error at different lead times). The estimates of the quantiles of the forecast error are conditional on the
(deterministic) water level forecast (or flow forecast) and leadtime. In real-time, based on the water level forecast and leadtime, the moment of the
forecast error is looked up and added to the water level forecast (or flow forecast).
The Historic Forecast Performance Tool estimates the uncertainty due to all uncertainty sources affecting the forecast error. In NFFS (i.e.
Delft-FEWS), the Historic Forecast Performance Tool is implemented as a post-processor on a deterministic forecast (see Figure 3.1).
Although it is possible to estimate the QR relationships for each leadtime, this is in practice unfeasible. Therefore, an interpolation approach
(linear) between the QR relationships of different leadtimes (i.e. assuming that the change in error characteristic between leadtimes is linear) is
used. Depending on the response time of the catchments, the lead-time interval between the estimated QR relationships may vary (1-2 hours vs 3
hours). For example, for Todmorden an interval of 2 hours is used, while for the Upper Severn an interval of 3 hours is used.
column description
1 Record number
2 Hindcasted discharge or water level value at the given lead time (ordered in ascending order)
3 to (n+2) Quantile error, belonging to the hindcasted value, given in the same row
4 Functionality HFPT Adapter
4.1 Introduction
The Historic Forecast Performance Tool adapter is written in R (R Development Core Team, 2010) and executed by running an R script in the
general adapter (Weerts et al., 2010; Weerts et al., 2011). The R package can be downloaded from [Link]. The following R packages are required:
Hmisc
XML
zoo
quantreg
The [Link] can be run via the General Adapter using command line arguments.
4.2.1 [Link]
The R package contained in the ModuleDataSet files is exported and unzipped to the modules directory creating the directory
%REGION_HOME%/Modules/R-2.13.0. In the [Link] files the location of the [Link] must be defined as follows
R_EXE=%REGION_HOME%/Modules/R-2.13.0/bin/[Link]
The location of the HFPT adapter is assumed to be under %REGION_HOME%/Modules/HFPT and this is also assumed to be the
%ROOT_DIR%.
4.2.2 ModuleDataSets
The base package and the additional libraries are contained the directory named R-2.13.0 (+/-29Mb). This directory is stripped as much as
possible from unnessary items like html and pdf files. However, because of limitations on the size of the ModuleDataSet files in NFFS, due to the
use of weblogic (Boot, pers. comm.), the directory is split up into four ModuleDataSet files all smaller than 12 Mb. Resulting in four
ModuleDataSet Files
R_part1_Module.zip
R_part2_Module.zip
R_part3_Module.zip
R_part4_Module.zip
The Retrieve_Zipped_Configurations.xml workflow exports all ModuleDataSet files. For example, adding these lines to the
Retrieve_Zipped_Configurations.xml file
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>Midlands_US_HFPT_Modules</moduleInstanceId>
</activity>
would export the ModuleDataSets configured in the moduleInstance Midlands_US_HFPT_Modules. The same needs to be done for the four
ModuleDataSet files containing R.
An example of the General Adapter configuration that exports such a ModuleDataSet is given below:
<generalAdapterRun xmlns="[Link]" xmlns:xsi="[Link]"
    xsi:schemaLocation="[Link] [Link]">
<general>
<rootDir>%REGION_HOME%/Modules/</rootDir>
<workDir>%ROOT_DIR%</workDir>
<exportDir>%ROOT_DIR%</exportDir>
<exportDataSetDir>%ROOT_DIR%</exportDataSetDir>
<importDir>%ROOT_DIR%</importDir>
<dumpFileDir>$GA_DUMPFILEDIR$</dumpFileDir>
<dumpDir>%ROOT_DIR%</dumpDir>
<diagnosticFile>%ROOT_DIR%</diagnosticFile>
<convertDatum>false</convertDatum>
</general>
<activities>
<exportActivities>
<exportDataSetActivity>
<moduleInstanceId>Midlands_US_HFPT_Modules</moduleInstanceId>
</exportDataSetActivity>
</exportActivities>
</activities>
</generalAdapterRun>
<moduleInstanceDescriptor id="R_part1_Module">
<moduleId>GeneralAdapter</moduleId>
</moduleInstanceDescriptor>
<moduleInstanceDescriptor id="R_part2_Module">
<moduleId>GeneralAdapter</moduleId>
</moduleInstanceDescriptor>
<moduleInstanceDescriptor id="R_part3_Module">
<moduleId>GeneralAdapter</moduleId>
</moduleInstanceDescriptor>
<moduleInstanceDescriptor id="R_part4_Module">
<moduleId>GeneralAdapter</moduleId>
</moduleInstanceDescriptor>
<moduleInstanceDescriptor id="Midlands_US_HFPT_Modules">
<description>Retrieves Midlands_US_HFPT_Modules</description>
<moduleId>GeneralAdapter</moduleId>
</moduleInstanceDescriptor>
The following folder structure is necessary and contained in the ModuleDataSet file
config
QR_models
• locationId[1]
• locationId[2]
• locationId[3]
• locationId[n]
Work
• Forecast location
• Forecast leadtime
The folder location of the QR relationships for a specific location (2638 in this example) would be as follows:
%ROOT_DIR%\HFPT\QR_models\2638
where 2638 specifies the locationId. The QR relationships are contained in comma-separated text files. The file naming convention associated
with the QR relationships should be
%ROOT_DIR%\Modules\HFPT\QR_models\2638\QR_2638*_LT*.csv
The string _LT means 'Lead Time' and is used to find the lead time associated with the error model. The characters in between _LT and .csv indicate
the lead time in hours (for example QR_2638_LT03.csv).
4.3.1 Header
The T0 is used by the HFPT module and should be exported as an argument. To be able to do that, the time0Format should be defined in the general
section of the general adapter run. This enables the use of %TIME0% later in the general adapter run. Below an example of the general
section is given. This will look the same for each moduleInstance.
<general>
<rootDir>%REGION_HOME%/Modules/HFPT</rootDir>
<workDir>%ROOT_DIR%/Config</workDir>
<exportDir>%ROOT_DIR%/work</exportDir>
<importDir>%ROOT_DIR%/work</importDir>
<dumpFileDir>$GA_DUMPFILEDIR$</dumpFileDir>
<dumpDir>%ROOT_DIR%</dumpDir>
<diagnosticFile>%ROOT_DIR%/work/[Link]</diagnosticFile>
<time0Format>yyyy-MM-dd HH:mm:ss</time0Format>
</general>
The input files are exported by the general adapter in PI-timeseries format and contain the forecast water level (or flow) for the location. The
locationId of the forecasted time series is used to identify the error model directory.
The HFPT module is stateless. The relative viewperiod determines the length of the exported timeseries. This can be adjusted according to the
specific requirements of the region. If necessary the start can also be controlled by a dummy exportStateActivity (not shown).
Below an example configuration of the exportTimeSeriesActivity is given. Note that this example is for the Upper Severn (Midlands), which uses an
hourly timestep.
<exportTimeSeriesActivity>
<exportFile>[Link]</exportFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>Severn_Usev_FlowToLevel</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2638</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</exportTimeSeriesActivity>
After exporting the input file as [Link], the general adapter carries out the execute activities as shown below.
<executeActivities>
<executeActivity>
<command>
<executable>$R_EXE$</executable>
</command>
<arguments>
<argument>--vanilla</argument>
<argument>%ROOT_DIR%/config/QR_FEWS.R</argument>
<argument>%ROOT_DIR%/work/[Link]</argument>
<argument>%ROOT_DIR%/work/[Link]</argument>
<argument>%TIME0%</argument>
</arguments>
<timeOut>60000</timeOut>
</executeActivity>
</executeActivities>
This executeActivity produces the error estimates conditional on the forecast water level time series contained in [Link]. These results are
written in [Link]. The number of output timeseries depends on the number of quantiles specified in the csv files containing the error model.
Each timeseries gets a suffix in the parameterId based on the header of the error model file
<parameterId>[Link].Q5</parameterId>
<parameterId>[Link].Q25</parameterId>
Etc.
Flag="5": value is unreliable, extrapolated beyond the domain of the Quantile Regression relationships calibration.
The time series in the [Link] are imported during the import activities.
<importActivities>
<importTimeSeriesActivity>
<importFile>[Link]</importFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>QR_2638_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2638</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>QR_2638_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].05</parameterId>
<locationId>2638</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>QR_2638_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].25</parameterId>
<locationId>2638</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
Etc.
Etc.
</importTimeSeriesActivity>
</importActivities>
To be able to import the timeseries, the parameters output by the HFPT module adapter must be specified in
\Config\RegionConfigFiles\[Link] (containing the definitions of all parameters).
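A minimal sketch of such a parameter definition is given below; the group id, unit and parameter ids are placeholders only, and the exact element layout should be checked against the Parameters schema of the Delft-FEWS version in use:
<parameterGroup id="Level">
    <parameterType>instantaneous</parameterType>
    <unit>m</unit>
    <parameter id="H.simulated.Q5">
        <shortName>H.simulated.Q5</shortName>
    </parameter>
    <parameter id="H.simulated.Q25">
        <shortName>H.simulated.Q25</shortName>
    </parameter>
</parameterGroup>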
During the HFPT Module run a log file is created in the work directory. Below an example of the log file and a screenshot of the display in NFFS are given.
<Diag
xsi:schemaLocation="[Link] [Link]">
<line level="4" description="Reading data, locationId: 2074 startDate: 2008-01-04 [Link] endDate: 2008-01-07 [Link]"/>
<line level="4" description="working with location: 2074 System date/time: Thu Apr 14 [Link] 2011"/>
<line level="4" description="Forecast quantiles successfully written to PI-timeseries: F:\NFFS_UF\Midlands_SA\Modules\QR\work\[Link]"/>
</Diag>
Below an example of the configuration in the DisplayGroups is given. This will display the area between 5-95 (gray) and 25-75 (blue) in different colours.
The colours available are listed
<display name="Llanidloes">
<description>2072</description>
<subplot>
<timeSeriesSet>
<moduleInstanceSetId>DODO_Historical</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceSetId>DODO</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceSetId>DODO</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>external historical</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</subplot>
<subplot>
<area>
<color>gray50</color>
<opaquenessPercentage>50</opaquenessPercentage>
<timeSeriesSet>
<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].05</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].95</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</area>
<area>
<color>blue</color>
<opaquenessPercentage>50</opaquenessPercentage>
<timeSeriesSet>
<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].25</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].75</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</area>
<timeSeriesSet>
<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<relativeViewPeriod unit="hour" start="0" end="48"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>QR_2072_H_Forecast</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link].50</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceSetId>DODO</moduleInstanceSetId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>ImportTelemetry</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationId>2072</locationId>
<timeSeriesType>external historical</timeSeriesType>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</subplot>
</display>
5 References
Koenker, R.: Quantile Regression, Cambridge University Press., 2005.
Koenker, R.: Quantile regression in R: A vignette, [online] Available from: [Link] 2010.
Koenker, R. and Bassett, G.: Regression Quantiles, Econometrica, 46(1), 33-50, 1978.
Koenker, R. and Hallock, K. F.: Quantile Regression, The Journal of Economic Perspectives, 15(4), 143-156, 2001.
Weerts, A.H., J. Schellekens, F. Sperna Weiland, 2010. Real-time geospatial data handling and forecasting: Examples from DELFT-FEWS
forecasting platform/system, IEEE J. of Selected Topics in Applied Earth Observations and Remote Sensing, 3, 386-394, doi:
10.1109/JSTARS.2010.2046882.
Weerts, A.H., H.C. Winsemius, J.S. Verkade, 2011. Estimation of predictive hydrological uncertainty using quantile regression: Examples from
the National Flood Forecasting System (England and Wales), Hydrol. Earth Syst. Sci., 15, 255--265, doi:10.5194/hess-15-255-2011. Available
from: [Link]
Werner, M.G.F., Van Dijk, M. and Schellekens, J., 2004, DELFT-FEWS: An open shell flood forecasting system, In 6th international conference
on Hydroinformatics, Liong, Phoon and Babovic (Eds.), World Scientific Publishing Company, Singapore, 1205-1212.
In order to derive the error models, a long enough hindcast needs to be performed. For each lead time considered, an error model (i.e. relation
between forecast value, forecast error and leadtime) can be derived from such a hindcast. Background information can be found in Weerts et al.
(2011) also at [Link] or directly via
[Link]
A procedure has been written in R to support this. To use this procedure, first produce a long enough hindcast and make sure that observed
values are written to a CSV file as follows:
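The original example is not shown here. A hypothetical layout is sketched below; the exact formatting of the date column does not matter, since the dates are not used, only the number and ordering of the rows:
2010-01-01 00:00, 0.52
2010-01-01 01:00, 0.55
2010-01-01 02:00, 0.61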
The forecast values should be written in exactly the same manner. The dates are not used in the current version. Simply make sure that both files
contain the same number of lines and that the observed top value in time corresponds to the top value of the simulated values. If for instance QR
error relationships are derived for a lead time of 6 hours, and the first entry in the observations represents the date and time 2010-01-01 [Link]
then the first value in the paired simulation CSV file should contain the forecast value of the forecast with t0 = 2009-12-31 [Link] and lead time
6 hours. This setup is not very user friendly and we may change this in the future to use PI-timeseries with lead time information in the header
instead.
Once these files are known, the user can run the batch file 'QR_derive.bat' with the following 6 inputs as arguments (make sure R is in your path;
if not, the batch file will assume R is located in C:\program files\R\bin):
Observed values CSV file
Forecast values CSV file
Location name
Lead time
Quantiles
Missing value
A dataset for location VIKING1 has been delivered with the error derivation procedure. An example command line has been given below:
QR_derive "CSV/VIKING1_obs_01.csv" "CSV/VIKING1_mod_01.csv" "VIKING" "1" "0.05, 0.25, 0.5, 0.75, 0.95" "-990"
A forecast model run for the HBV model in the Rhine may be defined in modules called:
HBV_Rhine_Forecast.xml
HBV_Rhine_ForecastInterpolate.xml
HBV_Rhine_ForecastMergeInputs.xml
This clearly indicates the association between modules and brings structure to the configuration.
The convention is that data series sourced from external systems (telemetry, meteorological forecasts) are referred to as either External Historical
or External Forecasting. All data produced by the system itself is referred to as Simulated Historical or Simulated Forecasting.
All data from external sources transformed by DELFT-FEWS in such a way that the transformation is invariant remain External
Historical. This includes transformations such as rating curves, catchment averages etc.
Delft3D-FEWS adapter configuration manual
Models linked to Delft-Fews — An overview of the models linked via the Published Interface
Adapter Manuals
The general adapter allows FEWS to run external models (for example hydrological and hydrodynamic models) outside of FEWS. This page is the
repository for the latest third party adapter manuals and should be read in conjunction with the section of the configuration manual relating to the
general adapter.
Please be aware that this page is not available to external users due to copyright issues.
Contents
Contents
Introduction
Operating Forecasting Model
Operating HEC-RAS Model and FEWS Adapter
Download
Interface between FEWS and HEC-RAS
ID Mapping
Directory structure
Technical details about communication between HEC-RAS adapter and DELFT-FEWS system.
Description of the HEC-RAS data files
Configuring HEC-RAS adapter
Add global properties for hecras model and binaries
Overriding gate, levee breach settings
List of input and output variables which can be exchanged with the Delft-FEWS system and HEC-RAS adapter
Running model from FEWS
Introduction
The conceptual solution for the interface between HEC-RAS and FEWS has been illustrated in Figure 1. Two modes of working are identified that
each support a basic use case: running HEC-RAS in operational forecasting mode from FEWS, and calibration of the HEC-RAS model offline.
The technical implementations for both modes of working are quite different. For running HEC-RAS in operational forecasting mode from FEWS, a
software interface will be required that directly controls the model runs.
Calibration is considered as an activity that should be carried out offline from the forecasting system. This means that no direct control from
FEWS will be required but a user will need to be able to migrate model datasets (calibrated schematizations) from the HEC-RAS calibration
environment to the forecasting environment.
The present documentation describes the first mode of operation. For details about operating the model in calibration mode, please check the
standard HEC-RAS documentation.
Figure 1 Components used to run forecasts using HEC-RAS model in the FEWS/CHPS system
The HEC-RAS model provides the compute engine for running a hydraulic model schematization for a section of a river or a part of a river system.
The HEC-RAS Adapter forms the interface between the FEWS Forecasting Shell and the HEC-RAS model.
The HEC-RAS compute engine is, as its name suggests, the component that actually performs the HEC-RAS simulation. This simulation is
controlled from the FEWS Adapter, and all run time data such as initial and boundary conditions, and parameter settings are passed through the
adapter from and to the FEWS Forecasting Shell.
Download
Download of the model adapter is not available here: please e-mail Delft-FEWS Product Management for more information.
Configuration Manual: how to add a hecras model in [Link]
The FEWS Adapter for HEC-RAS forms the interface between the FEWS Forecasting Shell and the HEC-RAS model. The adapter accepts the
request from the Forecasting Shell to run HEC-RAS, and imports the required data provided by the Forecasting Shell.
This data shall be provided in a standardized XML interface format, the FEWS Published Interface. Once a HEC-RAS run has been completed,
relevant results are passed back to the Forecasting Shell in the form of the standardized XML interface format.
A schematic representation of the communication between the Forecasting Shell and the HEC-RAS model via the FEWS Adapter is shown in the
diagram below.
Figure 2 Data flows involved during run of HEC-RAS model FEWS adapter
The FEWS Adapter allows running of HEC-RAS by FEWS. The FEWS Adapter should be considered as a thin communication (software) layer on
top of the existing HEC-RAS engine. The adapter is tightly connected to the model engine. For longer term consistency, a FEWS adapter should
therefore preferably be maintained by the owner of the model code, in this case HEC. The FEWS Adapter for HEC-RAS shall be developed by
HEC or handed over to HEC upon completion.
Postprocessing: read the output time series from the RAS DSS and binary output files.
ID Mapping
The locations and parameters used in FEWS can be coupled to HEC-RAS DSS path names through ID mapping. The configuration files for
ID mapping should be created separately for each HEC-RAS model. Please consult 08 Mapping Id's flags and units for more information on how
to configure ID mapping in the FEWS system.
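As a minimal sketch (the root element is abbreviated here, and the internal/external id pairing is illustrative only, re-using ids that appear in the examples elsewhere on this page), an ID-mapping file could look like:
<idMap version="1.1">
    <!-- couples a FEWS location/parameter to the parts of a HEC-RAS DSS path name -->
    <map internalLocation="SIDM1ME" internalParameter="QINE"
         externalLocation="CT RIVER R1/334752.0" externalParameter="Flow Hydrograph"/>
</idMap>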
Directory structure
The data directories and configuration files that are required for operating the FEWS Adapter for HEC-RAS are shown below.
Note that only binary and configuration files relevant to the HEC-RAS adapter are included; in a real configuration many more files may be involved,
used by other modules of the FEWS system.
+---bin
| <FEWS binaries>
\---nerfc_sa
|
+---Config
| +---ColdStateFiles
| | HECRAS_CONNECTICTUT_UpdateStates [Link]....cold state files
| |
| +---IdMapFiles
| | [Link].......................... custom mappings for the HEC-RAS variables
and locations
| |
| +---ModuleConfigFiles
| | HECRAS_CONNECTICTUT_Forecast.xml............ main configuration file of the adapter
| |
| +---ModuleDataSetFiles
| | HECRAS_CONNECTICTUT_UpdateStates.xml.........zipped hecras files, transported to
Models directory
| |
| \---ModuleParFiles
| HECRAS_CONNECTICUT_Parameters............. configuration file which allows to
override some model and structure parameters
|
\---Models
\---hec/hecras
+---bin........................................ directory which contains all HEC-RAS
executables for Windows and Linux platforms
| [Link].......................... generates binary file containing detailed
model output
| dss_writer
| [Link]................... converts geometry files from GUI ASCII
format to binary
| geo_pre
| [Link]............................. performs steady flow simulations
| steady
| [Link]........................... performs unsteady flow simulations
| unsteady
| [Link]
| [Link]
| [Link]
| [Link].1
| libwldelft_native.so
| [Link]............. pre- and post-adapter, converts HEC-RAS
data files to/from FEWS-PI format
| [Link]............................. main library used by the adapter, reads
and writes HEC-RAS data files
| [Link]
| [Link]
| [Link]...................... the rest of the files below are FEWS
dependencies used by adapter
| [Link]
| Delft_FEWS_castor.jar
| Delft_FEWS_schemas.jar
| Delft_PI.jar
| Delft_PI_castor.jar
| Delft_Util.jar
| jaxp-api-1_3.jar
| [Link]
| jaxp-sax-1_3.jar
| jaxp-xalan-1_3.jar
| jaxp-xercesImpl-1_3.jar
| [Link]
| [Link]
| [Link]
| [Link]
| xerces-c_2_8.dll
| [Link]
| [Link]
|
\---connecticut
| run_info.xml.......................... a file generated by FEWS containing paths,
run options
|
+---input.................................. input directory of the adapter, input
FEWS-PI time series files
| [Link]
|
+---log.................................... log messages written by the hec-ras
adapter
| [Link]
|
+---output................................. contains HEC-RAS output converted from the
binary and dss output files
| [Link]
|
\---work................................... working directory of the adapters
ctfld2ras.b01
ctfld2ras.b02
ctfld2ras.b03
ctfld2ras.c02
ctfld2ras.f04
ctfld2ras.g02
ctfld2ras.p01
ctfld2ras.p02
ctfld2ras.p05
[Link]
ctfld2ras.r01
ctfld2ras.r02
ctfld2ras.r03
ctfld2ras.r05
ctfld2ras.u01
ctfld2ras.u02
ctfld2ras.x02
Technical details about communication between HEC-RAS adapter and DELFT-FEWS system.
Communication between the FEWS system and the pre-/post-adapter strictly follows the FEWS Published Interface format.
The current implementation of the HEC-RAS adapter ships with all files required to run it (even in stand-alone mode, without the DELFT-FEWS system). The
diagram below shows all dependencies on the FEWS libraries.
The adapter itself works only as a bridge between the [Link] library and the DELFT-FEWS system. [Link] provides a set of functions for reading
and writing all required HEC-RAS data files, including files used by the graphical user interface of the HEC-RAS model.
For more technical details about the functionality used by the adapter, see the [Link] and [Link] files in the attachment.
The current version of the HEC-RAS adapter is able to update all required HEC-RAS GUI files automatically when the model is started from
DELFT-FEWS. As a result, the user obtains a complete model input generated by DELFT-FEWS, which allows the
model input to be analyzed in detail using the HEC-RAS GUI.
Extension | Description | pre-adapter input | pre-adapter output | post-adapter input | post-adapter output
The HEC-RAS model adapter follows the standard way of integrating external models into the Delft-FEWS system by use of the General Adapter. For more
details about the configuration of the General Adapter please check 05 General Adapter Module.
A very important part of the configuration is defined under the <exportRunFileActivity> element. It contains the path to the RAS project file, the location
of the RAS binary files and the list of variables to be written into the output files. Additionally, the user may override the logging level of the adapter to DEBUG
in order to see more detailed output from the adapter. This is useful during configuration of the adapter, since the list of possible output variables that
the model can produce and the list of input variables that can be consumed by the adapter are also printed to the log file.
The list of output variables defined under the outputTimeSeriesParametersFilter item uses regular expressions. In most cases it is
a list of variable names delimited with the '|' character; for variables whose name can occur within other variable names
(e.g. FLOW and FLOW AT GATE) it is necessary to use ^ as a prefix and $ as a suffix of the variable name. For example:
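(The following line is illustrative only; it echoes the filter used in the full configuration example further below.)
<string value="^STAGE$|^FLOW$" key="outputTimeSeriesParametersFilter"/>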
<exportTimeSeriesActivity>
<exportFile>%ROOT_DIR%/input/[Link]</exportFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_Preprocessing_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>QINE</parameterId>
<locationId>SIDM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_Preprocessing_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>STID</parameterId>
<locationId>CASM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" end="0"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</exportTimeSeriesActivity>
<exportDataSetActivity>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
</exportDataSetActivity>
<exportParameterActivity>
<fileName>[Link]</fileName>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
</exportParameterActivity>
<exportRunFileActivity>
<exportFile>%ROOT_DIR%/run_info.xml</exportFile>
<properties>
<string value="%ROOT_DIR%/work/[Link]" key="hecRasProjectFile"/>
<string value="$HECRASBINDIR$" key="hecRasBinDirectory"/>
<string value="^STAGE$|^FLOW$" key="outputTimeSeriesParametersFilter"
/>
<string value="^STAGE$|Hydr Radius L" key=
"outputLongtitudionalProfileParametersFilter"/>
<string value="DEBUG" key="logLevel"/>
<string value="false" key="skipBinaryOutput"/>
<string value="LD_LIBRARY_PATH=$HECRASBINDIR$:$LD_LIBRARY_PATH"
key="hecRasEnvironment"/>
</properties>
</exportRunFileActivity>
</exportActivities>
<executeActivities>
<executeActivity>
<command>
<className>[Link]</className>
<binDir>$HECRASBINDIR$</binDir>
</command>
<arguments>
<argument>%ROOT_DIR%/run_info.xml</argument>
</arguments>
<timeOut>1500000</timeOut>
</executeActivity>
</executeActivities>
<importActivities>
<importStateActivity>
<stateConfigFile>%ROOT_DIR%/work/[Link]</stateConfigFile>
<synchLevel>20</synchLevel>
</importStateActivity>
<importTimeSeriesActivity>
<importFile>%ROOT_DIR%/output/[Link]</importFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SSTG</parameterId>
<locationId>AUGM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>HECRAS_KENNEBEC_UpdateStates</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>SQIN</parameterId>
<locationId>AUGM1ME</locationId>
<timeSeriesType>simulated historical</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</importTimeSeriesActivity>
</importActivities>
</activities>
</generalAdapterRun>
The HEC-RAS files (b01, prj, u01, x01, [Link] etc.) are saved in the /Config/ModuleDataSet directory. These are copied to the
/Model/hecras/<model>/work directory during the exportDataSet activity in the General Adapter file.
The $HECRASBINDIR$ property is defined in the [Link] at the same level as the Config and Models directories:
HECRASMODELDIR=%REGION_HOME%/Models/hec/hecras
HECRASBINDIR=$HECRASMODELDIR$/bin
In the current version of the HEC-RAS adapter the user may also override the computational interval of the model as well as structure parameters.
This can be done using a parameters file, which also needs to be referenced from the HEC-RAS module config file; see the <exportParameterActivity>
element in the general adapter configuration above.
The example below shows the list of parameters which are currently supported.
The name of the structure defined in HEC-RAS must exactly match the group id, and the location of the structure (station, river,
chainage) must be the same as the locationId.
<modifierType>HECRAS</modifierType>
<group id="default" name="hec-ras run parameters">
<parameter id="ComputationInterval">
<description>Computation interval in minutes. Does not change interval of output
data.</description>
<intValue>5</intValue>
</parameter>
</group>
<!-- Gate name and locationId should be equal to what is defined in the HEC-RAS gui -->
<group id="Gate #1" name="hec-ras gate parameters">
<locationId>CT River R1/18100</locationId>
<!--
Gate parameters depend on the mode selected in the RAS configuration files (gui),
<parameter id="RateClose">
<dblValue>0.05</dblValue>
</parameter>
<parameter id="MaxOpen">
<dblValue>20.0</dblValue>
</parameter>
<parameter id="MinOpen">
<dblValue>0.0</dblValue>
</parameter>
<parameter id="InitOpen">
<dblValue>3.0</dblValue>
</parameter>
<parameter id="ZClose">
<description/>
<dblValue>3.0</dblValue>
</parameter>
<parameter id="ReferenceWS">
<description>Depending on the ReferenceWSType parameter</description>
<stringValue>R1</stringValue>
</parameter>
<parameter id="referenceWSOpen">
<description>Reference elevation at which gate begins to open</description>
<dblValue>4.0</dblValue>
</parameter>
<parameter id="referenceWSClose">
<description>Reference elevation at which gate begins to close</description>
<dblValue>3.0</dblValue>
</parameter>
-->
<parameter id="stageDiffUS">
<description>Depends on the stageDiffUSType parameter</description>
<stringValue>Reach</stringValue>
</parameter>
<parameter id="stageDiffDSType">
<description>Downstream River, Reach, RiverStation or StorageArea location for stage
difference computation</description>
<stringValue>Reach</stringValue>
</parameter>
<parameter id="stageDiffDS">
<description>Depends on the stageDiffDSType parameter</description>
<stringValue>R1</stringValue>
</parameter>
<parameter id="stageDiffOpen">
<description>Stage difference at which gate begins to open</description>
<dblValue>0.1</dblValue>
</parameter>
<parameter id="stageDiffClose">
<description>Stage difference at which gate begins to close</description>
<dblValue>0.1</dblValue>
</parameter>
</group>
<parameter id="IsActive">
<description>true when breach is activated, otherwise model skips it during
computations</description>
<boolValue>false</boolValue>
</parameter>
<parameter id="IsWSStart">
<description>true if trigger for failure is WS elevation</description>
<boolValue>true</boolValue>
</parameter>
<parameter id="ThresholdWS">
<description>water surface elevation for breaching</description>
<dblValue>3.4028E38</dblValue>
</parameter>
<parameter id="ThresholdDuration">
<description>threshold time (hours) for breaching</description>
<dblValue>3.4028E38</dblValue>
</parameter>
<parameter id="StartDate">
<description>Start date for breaching (e.g. 01MAR2001)</description>
<stringValue/>
</parameter>
<parameter id="StartTime">
<description>Start time for breaching (e.g. 1630)</description>
<stringValue/>
</parameter>
<parameter id="CenterStation">
<description>Center of breach (XS station / location)</description>
<dblValue>8800.0</dblValue>
</parameter>
<parameter id="BottomWidth">
<description>Final bottom width</description>
<dblValue>500.0</dblValue>
</parameter>
<parameter id="BottomElevation">
<description>Final bottom elevation</description>
<dblValue>-10.0</dblValue>
</parameter>
<parameter id="LeftSideSlope">
<description>Left side slope</description>
<dblValue>2.0</dblValue>
</parameter>
<parameter id="RightSideSlope">
<description>Right side slope</description>
<dblValue>2.0</dblValue>
</parameter>
<parameter id="BreachTime">
<description>Full formation time (hours)</description>
<dblValue>1.0</dblValue>
</parameter>
<parameter id="WeirCoef">
<description>Breach weir coefficient</description>
<dblValue>2.6</dblValue>
</parameter>
<!-- parameters below are used only when IsPipe = true -->
<parameter id="IsPipe">
<description>true if piping failure, false if overtopping</description>
<boolValue>true</boolValue>
</parameter>
<parameter id="PipingCoefficient">
<description>Piping coefficient (default is .8)</description>
<dblValue>0.8</dblValue>
</parameter>
<parameter id="InitialPipingElevation">
<description>Initial piping elevation</description>
<dblValue>-0.5</dblValue>
</parameter>
</group>
List of input and output variables which can be exchanged with the Delft-FEWS system and HEC-RAS adapter
When the HEC-RAS adapter is configured properly and a forecast is performed from the Delft-FEWS system, a list of input and output variables will be
written into the standard log file of the system. The locations and variables are based on the active <region>.b01 file of the HEC-RAS model
configured in the GUI of HEC-RAS. The pre-adapter of HEC-RAS will provide a list of all possible input variables and locations in the following part
of the log file:
Locations and variables listed after the line 'Found input at locations:' can be configured in Delft-FEWS as part of the adapter input, e.g.
[Link] in this case may contain something like the lines below:
[Link]
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>CT RIVER R1/334752.0</locationId>
<parameterId>Flow Hydrograph</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>-999.0</missVal>
<stationName>Connecticut River at Thompsonville</stationName>
<units>cms</units>
</header>
<event value="14.98" time="[Link]" flag="0" date="2008-11-06"/>
<event value="14.705" time="[Link]" flag="0" date="2008-11-06"/>
<event value="14.43" time="[Link]" flag="0" date="2008-11-06"/>
<event value="14.155" time="[Link]" flag="0" date="2008-11-06"/>
<event value="13.88" time="[Link]" flag="0" date="2008-11-06"/>
<event value="13.605" time="[Link]" flag="0" date="2008-11-06"/>
...
</series>
Note that <parameterId> and <locationId> are exactly the same as the variables and locations listed in the log file.
In the same way, the list of all output variables and locations can be found in the post-adapter log output, for example:
Note that parameter names to be written into the output FEWS-PI file will contain only the short name of the parameter, e.g. 'Q Right' and not
'Flow in right overbank, (cfs)'.
The variables listed here will be written into the file specified as the "--output-binary-pi-file=<path>" argument of the post-adapter. An example of the
resulting FEWS-PI xml can be found below:
Example of the FEWS-PI containing binary output of the HEC-RAS model
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>CT River R1/334752.0</locationId>
<parameterId>W.S. Elev</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>[?]</units>
</header>
<event value="32.06013" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.06013" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.034" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.03394" time="[Link]" flag="0" date="2008-11-06"/>
...
<event value="32.03618" time="[Link]" flag="0" date="2008-11-07"/>
<event value="32.03598" time="[Link]" flag="0" date="2008-11-08"/>
</series>
<series>
<header>
<type>instantaneous</type>
<locationId>CT River R1/334752.0</locationId>
<parameterId>E.G. Elev</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>[?]</units>
</header>
<event value="32.06734" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.06734" time="[Link]" flag="0" date="2008-11-06"/>
<event value="32.056885" time="[Link]" flag="0" date="2008-11-06"/>
...
</series>
In addition to the variables available in the binary output of HEC-RAS, usually called <file>.O01, a DSS output is available. In most cases it
contains the FLOW and STAGE variables. An example of the FEWS-PI generated from the DSS file is given below:
Example of the FEWS-PI containing DSS output of the HEC-RAS model
<timeZone>0.0</timeZone>
<series>
<header>
<type>instantaneous</type>
<locationId>CT RIVER R1/0.00</locationId>
<parameterId>FLOW</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>CFS</units>
</header>
<event value="24.38823" time="[Link]" flag="0" date="2008-11-06"/>
<event value="-5.8442316" time="[Link]" flag="0" date="2008-11-06"/>
<event value="68.705124" time="[Link]" flag="0" date="2008-11-06"/>
<event value="391.09784" time="[Link]" flag="0" date="2008-11-06"/>
...
<event value="438.6425" time="[Link]" flag="0" date="2008-11-07"/>
<event value="-5259.6562" time="[Link]" flag="0" date="2008-11-08"/>
</series>
<series>
<header>
<type>instantaneous</type>
<locationId>CT RIVER R1/0.00</locationId>
<parameterId>STAGE</parameterId>
<timeStep unit="second" multiplier="3600"/>
<startDate date="2008-11-06" time="[Link]"/>
<endDate date="2008-11-08" time="[Link]"/>
<missVal>NaN</missVal>
<units>FEET</units>
</header>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
<event value="5.0" time="[Link]" flag="0" date="2008-11-06"/>
...
</series>
Check the Delft-FEWS User Guide on how to run the configured model from the Delft-FEWS system.
SYNHP Adapter
Introduction
In FEWS Extreme Discharges the SYNHP routing model was applied for the reach between Basel and Maxau. SYNHP is operated by the BfG,
the Landesamt für Wasserwirtschaft Rheinland-Pfalz and the Landesamt für Umweltschutz Baden-Württemberg (Lammersen et al. 2002,
Rheinland-Pfalz, 1993). This model describes the routing of the flood wave through a series of linear stores, and although the model exists for the
Rhine downstream of Maxau, it is applied here only to the reach between Basel and Maxau. FEWS Extreme Discharges is an old version of the
FEWS system and it used its own adapters. SYNHP will now be part of FEWS GRADE and therefore a new Module Adapter should be created in
order for FEWS to be able to communicate with SYNHP. The Module Adapter will be implemented in Java and will be run from the General
Adapter.
The FEWS communicates with all forecasting modules through adapters. These adapters may be applied at two levels:
At the first level through use of the General Adapter and a published interface. The forecasting module must then comply with this
published interface. Provision is made in the published interface for static data, dynamic time series data, dynamic time series of grid
data as well as passive handling of module initial state files, execution of one or more steps to run the module and elementary
diagnostics.
At the second level through a specific adapter developed for the particular forecasting modules. A library of these is available, including
models such as the SOBEK hydrodynamic model, the HBV Rainfall-Runoff model, the LISFLOOD Rainfall-runoff model etc.
The Adapters component thus comprises a General Adapter Module and, in principle, an unlimited number of Model Specific Adapter
Modules. The FEWS component layout with the location of the adapter used for the SYNHP model component can be seen in the picture
below.
Role in FEWS
The role of the SYNHP adapter is to allow SYNHP to be run by the General Adapter using the DELFT-FEWS Published Interface for module
input. The adapter also allows the output of SYNHP to be read by the General Adapter using the DELFT-FEWS Published Interface. The General
Adapter exports time series to the Published Interface (PI) file format. The module pre-processor converts this PI format to a specific ASCII format
to be read by SYNHP. The postprocessor then converts the output from SYNHP back to a PI file format. Figure 2 shows the position of
SYNHP and its adapter in the Delft-FEWS system.
Figure 2 The role of the SYNHP adapter in the Delft FEWS system
Functionality
The main functionality of the SYNHP adapter is to communicate with the SYNHP model. The SYNHP adapter converts the
Published Interface (PI) format, exported by the General Adapter, to an ASCII format specific to SYNHP. It also converts the output from
SYNHP back to PI format, which is then imported by the General Adapter.
Design
Introduction
The SYNHP module will be implemented like all other external modules, with a pre-processor, and a postprocessor. The pre- and post-processor
will convert the time series from the Delft FEWS PI-format to a standard ASCII file format. The conventions of the ASCII file format will be
described in the section 'input data'.
Input data
The SYNHP module adapter receives the required time series from the General Adapter. The module adapter converts the PI file format to an
ASCII file format. The input file for SYNHP should be named '[Link]'. The adapter must be able to link the received input data to the correct variable
of the SYNHP model. The General Adapter takes care of the ID mapping and parameter mapping for the time series to be stored in the local data store
within the NFFS.
Example:
DATUM TAG=01 MONAT=01 JAHR=1961 RUN=306768
Basel *01.01.1961
00.00 245
00.00 276
00.00 297
00.00 315
00.00 318
00.00 320
00.00 326
00.00 336
00.00 353
00.00 368
00.00 378
Output data
The calculated time series must be stored in the NFFS database. These time series must be sent back to the General adapter to be stored in the
local data store. The module adapter converts the ASCII file format (named '[Link]') to a PI file format. The module adapter must be able to link
the calculated time series to the correct output variable. The General Adapter takes care of the ID-mapping and Parameter mapping for the time
series to be stored in the local data store within the NFFS.
Example:
***WOPL-5.7/30.05. 1 *** BIN-0 : input-1.0 300907
1.01.1961 DT=0.20*60***** 1 : IST
BEZEICHNUNG
MAXAU
KILOMETRIERUNG
363.00
MODUL-NUMMERN
008
STATUS
2
VORGAENGER
#063
0 324.04
60 332.35
120 354.40
180 382.40
240 415.28
300 448.13
360 477.91
420 497.62
480 505.75
540 509.72
600 509.40
660 510.89
Configuration
The schema used for the configuration of the SYNHP Module Adapter is very straightforward. Some standard configuration options for an external
module should be available and therefore both the pre- and the post-adapter configuration files are similar to those of other external modules like SOBEK
and HBV.
The pre- and post-adapter are very similar in structure. The only difference is the specification of pre- or post- for the AdapterActivities. The general
section contains a reference to each folder within the SYNHP Module folder. It specifies which folder is the workDir, the importDir and the exportDir,
where the diagnostics file can be found and what number is used to indicate a missing value. An example of the pre-adapter configuration file is
given below.
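The original example file is not reproduced here. Purely as a hypothetical sketch of the structure described above (all element names, paths and values are assumptions and do not reflect the actual SYNHP adapter schema):
<synhpPreAdapter>
    <general>
        <workDir>%ROOT_DIR%/work</workDir>
        <importDir>%ROOT_DIR%/input</importDir>
        <exportDir>%ROOT_DIR%/export</exportDir>
        <diagnosticFile>%ROOT_DIR%/diag/diag.xml</diagnosticFile>
        <missingValue>-999.0</missingValue>
    </general>
    <preAdapterActivities>
        <dateFormat>dd.MM.yyyy</dateFormat>
        <inputFile>timeseries_pi.xml</inputFile>   <!-- assumed name of the PI file exported by the General Adapter -->
        <outputFile>synhp_input.txt</outputFile>   <!-- assumed name of the ASCII file to be read by SYNHP -->
        <mapping internalLocationId="Basel" internalParameterId="Q.obs" externalId="Basel"/>
    </preAdapterActivities>
</synhpPreAdapter>
The post-adapter configuration file would have the same structure, with postAdapterActivities instead of preAdapterActivities.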
The next section contains the activities; both pre- and post-adapter activities are part of this section. They consist of a dateFormat indicator, the
name of the input file (to be read by the adapter) and the name of the output file (to be created by the adapter). The dateFormat indicator
determines the date/time format used in the output file. The last part of this section contains the mapping of the time series to the correct
input or output series for/from the module. It converts FEWS internal location and parameter ids to the ids or columns used for the input to and
output from the SYNHP model. An example of the post-adapter configuration file is given below.
Developing a FEWS (Compliant) Adapter
This is an overview of some of the requirements for developing a Delft-FEWS compliant adapter. This list is by no means
exhaustive so before starting to develop an adapter please contact [Link]@[Link]
A key feature of DELFT-FEWS is its ability to run external modules to provide essential forecasting functionality. The General Adapter is the part
of the DELFT-FEWS system that implements this feature. It is responsible for the data exchange with these modules and for executing the
modules and their adapters. The Delft3D model adapter is an example of such a module run from the General Adapter (see also FEWS manual:
General Adapter).
In order to develop a model adapter for a FEWS application, it is important to have a clear understanding of the relation between the General
Adapter and a model adapter. This section summarizes some of the functionalities included in the FEWS General Adapter module, and their
relation to a model adapter.
The schematic interaction between the general adapter and an external module is shown in the figure below.
Figure 1: schematic interaction between the FEWS and an external module
1. The general adapter is that part of DELFT-FEWS which exports model input, imports model output data, and executes the pre-adapter,
module and post-adapter.
2. Export and import of model data is done in datafiles following the Published Interface (PI) XML format.
3. The preAdapter, Module and postAdapter together form the model adapter. The model adapter is initiated from the general adapter.
4. The preAdapter is that part of the model adapter which converts input data in PI XML format to native model input data.
5. The postAdapter is that part of the model adapter which converts native model output data to PI XML format to be imported by FEWS.
6. The Module is that part of the model adapter which starts a model simulation.
Essential in this division of tasks between model adapter and general adapter is that, from the vantage point of the general adapter, the model
adapter is a black box, and vice versa. Exchange of information between these components is based on the exchange of PI XML data files;
DELFT-FEWS does not have any knowledge about the modelling system (Delft3D in this case), whereas the modelling system does not have any
knowledge about DELFT-FEWS. Translation of data from one component to the other is done by the model adapter. Thus the model adapter is not
part of the Delft-FEWS system itself, but is essentially an external program initiated by DELFT-FEWS through the general adapter. In other
words, for the Delft-FEWS system a model adapter plus model is itself a model which reads PI XML files as input and writes PI XML files as output.
This external program (the model adapter) should provide all the functionality to i) convert specific PI XML data to specific native model input
files and model state, ii) initiate a model simulation and iii) convert model output data and end state to PI XML data readable by DELFT-FEWS. In
addition, the model adapter has the following tasks: iv) logging of errors during all steps by the model adapter and by the model itself, and v)
administration of the model state. For both these tasks pre-defined PI XML file formats exist that are readable by DELFT-FEWS.
The model adapter should be made configurable as far as possible, and its configuration must be done in a consistent way.
External modules and their adapters can be written in any programming language. However, in case Java is used, they have to implement the
interface ModuleAdapterRunnable. Modules and module adapters can only communicate with the General Adapter via the Published Interface.
The only two means to exchange information with the General Adapter are:
A diagnostic file written in the Published Interface format. If such a diagnostic file is available the General Adapter will read it and write
corresponding logs to the FEWS system.
The return code indicating the result of an execution.
Return code | Meaning
0 | successful execution
Modules and their adapters cannot use exception or event handling as a means to communicate with the General Adapter.
On graceful failure the GA expects a non-zero return code. The type of error and the accompanying message are stored in the diagnostics
file for the GA.
On non-graceful failure the stack trace will be written for the adapter (this applies to adapters written within FEWS only); otherwise a generic
message will be given that the adapter has crashed.
On successful execution the GA expects a zero return code.
On a non-graceful failure, or on a graceful failure that indicates a fatal error (see the definition of the diagnostics file below),
the GA will make a binary dump of the entire module configuration (a list of directories supplied by the module adapter
constructor), and the GA will stop processing the remaining instructions.
On all other occasions the messages in the diagnostics file are passed on to the event logging system
and further operation is continued.
Diagnostics file
The Published Interface documentation describes the (simple) xml file format. Per batch (pre-processing - module run - post-processing) one
diagnostics file is always expected. Each line/message in this file contains the actual text and the warning level attached to this text. The warning
levels in the diagnostics file will be interpreted by the General Adapter according to the following table:
0 | fatal | fatal error, complete module crash | Module Processor: division by zero
All levels higher than 3 are regarded as non-essential (debug) information. The warnings are recorded in the system, but no actions will be taken.
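A minimal sketch of such a diagnostics file, assuming the usual PI Diag layout with one line element per message (the level values follow the table above; the messages are illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<Diag>
    <!-- each line carries the message text and the warning level attached to it -->
    <line level="3" description="Pre-adapter: converted PI input files to native model input"/>
    <line level="0" description="Module Processor: division by zero"/>
</Diag>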
Updating PI State XML file within Pre and Post Model Adapters
This section briefly explains the method of using state files for a model in the FEWS environment.
Generally a model requires only the state at the start of the run and writes a state at the end of each run.
In FEWS, in an update run (sometimes referred to as a historical run), the initial state of the model is copied from the database to the appropriate
directory/file where the model expects it to be.
At the end of each run a state is copied back to the FEWS database to be used again for the next (update/forecast) model run.
<?xml version="1.0" encoding="UTF-8"?>
<State
xsi:schemaLocation="[Link]
[Link]
version="1.2" xmlns="[Link]
xmlns:xsi="[Link]
<stateId>warm</stateId>
<timeZone>0.0</timeZone>
<dateTime date="2008-02-10" time="[Link]"/>
<stateLoc type="file">
<readLocation>D:\Fews\App\po_sa\Modules\Topkapi\Trebbia\Fews_in\State\Topkapi_in.stt</readLocation>
<writeLocation>D:\Fews\App\po_sa\Modules\Topkapi\Trebbia\Fews_in\State\Topkapi_in.stt</writeLocation>
</stateLoc>
</State>
The model is then run, using this state file, for a given period. The state files are written back to the state directory, say the ".\Model\state" directory,
at the end of the simulation.
After the model run is completed, the Postprocessor copies the state file or files back to the zipped model state file (say: [Link]).
The Postprocessor not only copies back the state file but also updates the pi state file with the last state file and changes the <dateTime> field to the
appropriate date and time. For example, the date and time in the <dateTime> field should be changed to <dateTime date="2006-06-30" time="[Link]"/> and
the file name under <writeLocation> should be changed to
<writeLocation>D:\FewsPO\Modules\Topkapi\Reno\States\[Link]</writeLocation>
The General Adapter will then act on the new pi state file and store the correct state in the FEWS database.
Introduction
The ISIS/RS adapter for the EA FFS will be driven by an INI file created by ISIS. The purpose of the file is to identify the locations and unit types
of the inputs, outputs and controls of the ISIS model in a form that the adapter can read without knowledge of the ISIS DAT file format.
The file format has been developed with the following in mind:
To minimise coding and maximise the sharing of code between the adapter and FloodWorks, the parameters are the same as those in
the equivalent FloodWorks parameter file.
The element list is generic, and should allow adapters to be developed in future for other FloodWorks algorithms such as PDM and KW
with a minimum of effort. Other adapters would use the same code to read the INI file. This has resulted in a relatively verbose file. Most
of the elements in the file are actually size specifications, indicating the sizes of arrays of parameters.
The generic approach means there are several elements that will always take the same value in the ISIS adapter but which would have
different values in, say, a PDM adapter.
Sections
1. General
2. InputDimensions
3. Inputs
4. ControlDimensions
5. Controls
6. OutputDimensions
7. Outputs
8. StatesDimensions
9. RealParameterDimensions
10. RealParameters
11. IntegerParameterDimensions
12. IntegerParameters
13. CharacterParameterDimensions
14. CharacterParameters
The sections can be in any order.
Parameters
The full parameter list is below, followed by endnotes explaining some of the particular ISIS values. The following definitions will be useful:
Flow input series: QTBDY marked as being used as an input to the operational model
Stage input series: HTBDY marked as being used as an input to the operational model
Wind input series: component of a WIND unit marked as being used as an input to the operational model
Simple control: GAUGE, VERTICAL SLUICE, RADIAL SLUICE, ABSTRACTION, GATED WEIR, PUMP, or BLOCKAGE marked as being
used as an input to the operational model
Flow output series: node marked as providing flow output to the operational model
Stage output series: node marked as providing stage output to the operational model
General
ModelID | Model ID | The ID of this particular ISIS model, the name specified for the export (e.g. MyModel)
InputDimensions
Total | Total number of input series | Number of flow or stage input series + 2 * Number of wind input series
Size1d | Dimensions of 1d series | Number of flow or stage input series, 2 * Number of wind input series
Inputs
IDs | Location IDs for the input data streams | ISIS node label for each flow or stage input, Identifier for each wind component input stream (each separated by a comma and a space)
ControlDimensions
Total | Total number of control series | Number of simple controls + 5 * Number of breaches
Controls
IDs | Location IDs for the control data streams | Upstream ISIS node label for each simple control, Identifier for each breach component stream (each separated by a comma and a space)
OutputDimensions
Outputs
IDs | Location IDs for the output data streams | ISIS node label for each output (each separated by a comma and a space)
StatesDimensions
RealParameterDimensions
RealParameters
Values | Real parameters | Blank, Timeout per 1h of simulated time (s), Save interval (s)
IntegerParameterDimensions
Total | Total number of integer parameters | 2 + Number of flow or stage input series + Number of simple controls + Number of output series + 2 * Number of breaches
Size1d | Dimensions of 1d integer parameter arrays | Number of flow or stage input series, Number of simple controls, Number of output series
IntegerParameters
Values | Integer parameters | Version number (currently 4), ISIS label length (8 or 12), Unit type for each flow or stage input series, Unit type for each simple control series, Unit type for each output series, Unit type and component count for each breach
CharacterParameterDimensions
CharacterParameters
Example
Attached.
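The attached example is not reproduced here. Purely as a hypothetical illustration of how such an INI file might be laid out (the section names come from the list above; the key names, node labels and values are assumptions):
[General]
ModelID=MyModel
[InputDimensions]
Total=2
Size1d=2, 0
[Inputs]
IDs=NODE001, NODE002
[ControlDimensions]
Total=1
[Controls]
IDs=SLUICE01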
The present WIKI contains a manual for configuration of the Delft3D-FEWS adapter. This adapter provides the interface between the
Delft-FEWS forecasting shell and the Delft3D modelling package. It has the following main features:
1. The Delft3D-FEWS adapter supports the following packages of the Delft3D suite: FLOW, WAQ (including ECO, BLOOM, CHEM etc)
and PART.
2. The Delft3D-FEWS adapter provides all of the basic functionalities required to run the models in an operational system.
3. The Delft3D-FEWS adapter is setup to be fully compliant with the Delft-FEWS system and philosophy.
For a brief overview of some generic features of Delft3D and existing Delft3D-FEWS applications, see General.
For a step-by-step manual on how to set up the Delft3D-FEWS adapter for a FEWS application, see Adapter configuration.
Examples of a configured Delft3D-FEWS system are provided in Example configuration to serve as a guideline in setting up new
systems.
Best practices with regard to configuring Delft3D-FEWS systems are provided in Best practices.
For more information on Delft-FEWS, the reader is referred to Delft-FEWS WIKI. For more information on the Delft3D modelling package, the
reader is referred to Delft3D website.
Table of Contents
1. General
2. Adapter configuration
3. Example configuration
4. Best practices
Contact
The Delft3D-FEWS adapter was set up and tested to facilitate all "standard" modelling applications in Delft3D. In case of missing features or
bugs, however, please contact Daniel Twigt or Arjen Markus. Similarly, with questions concerning the contents of this WIKI, please contact
Daniel Twigt.
1. General
This section provides some generic information on Delft3D and Delft-FEWS. Also, examples of existing Delft3D-FEWS applications are provided
for the interested reader.
Delft3D
Delft3D is the main 3D modelling package of Deltares. The package consists of a number of modules, each of which has a specific purpose.
Available modules are:
Delft-FEWS
Delft-FEWS provides an open shell system for managing forecasting processes and/or handling time series data. Delft-FEWS incorporates a wide
range of general data handling utilities, while providing an open interface to any external (forecasting) model. The modular and highly configurable
nature of Delft-FEWS allows it to be used effectively for data storage and retrieval tasks, simple forecasting systems and in highly complex
systems utilising a full range of modelling techniques. Delft-FEWS can either be deployed in a stand-alone, manually driven environment, or in a
fully automated distributed client-server environment. For more information, see FEWS WIKI.
Delft3D-FEWS
Subject of the present WIKI is the Delft3D-FEWS adapter. This adapter provides the interface between Delft3D and Delft-FEWS, based on the
FEWS design philosophy. This implies that:
1. FEWS manages data streams and workflows to execute model simulations / forecasts.
2. Delft3D is used to perform model simulations / forecasts based on data provided by FEWS.
3. The adapter provides the interface between both; it converts data exported by FEWS to native model input files, manages model state
handling, executes model simulations, and converts model output data to the FEWS PI XML file format.
In this way, the Delft3D model adapter allows a Delft3D model to be embedded in an operational FEWS system.
Whereas FEWS was originally set up to facilitate 1D/2D operational runoff modelling, the system has also found its way into the 3D realm of open
waters and lakes. The combination of Delft-FEWS with the 3D Delft3D modelling package offers a range of new possibilities in this sense, for
example with regard to operational surge modelling in open waters and water quality modelling, where vertical variability may be essential.
Up to this date, a number of pilot projects have been performed at Deltares to exploit these benefits. A short summary of these projects is
provided below.
ADD INFORMATION
ADD INFORMATION
2. Adapter configuration
Introduction
This section of the WIKI contains all information required for configuring the Delft3D-FEWS adapter for a FEWS application.
1. An overview of the design philosophy based upon which the adapter was developed is provided in section Design Philosophy. This
section motivates the choice for the current approach, states the high-level assumptions upon which the adapter is based and describes
the high-level design choices.
2. Section Configuration workflow provides a step-by-step plan which the user is advised to follow when setting up a Delft3D-FEWS
system.
3. Section GeneralAdapter Configuration provides some useful background information on the relation between the Delft3D-FEWS
adapter (model adapter) and the FEWS General Adapter from which this model adapter is run.
4. Section XML configuration scheme provides background information on the XML configuration scheme used for the Delft3D-FEWS
adapter. This section elaborates the contents of this file and the way in which to configure it.
5. Section Template files provides background information on the template files used by the Delft3D-FEWS adapter. This section
elaborates the contents of these files and the way in which to configure them.
6. Section Naming conventions describes the various naming conventions used for the XML configuration scheme and the template files.
During configuration these naming conventions should be adhered to strictly.
Example files from a fully configured Delft3D-FEWS system are provided for reference and discussed in the separate section Example
configuration.
Best practices with regard setting up a Delft3D-FEWS system are provided in section Best practices.
Table of Contents
06 Adapter configuration - naming conventions
A number of high-level design choices have been made during development of this model adapter. These choices were made to:
Provide consistency between the adapters for the different Delft3D packages
Make the adapter fully compliant with "the FEWS standard"
Keep the effort required for adapter configuration to a minimum
Keep the adapter largely unaware of the different types of native model input file format, thus making it less prone to errors due to
changes in these formats.
For the Delft3D-FEWS model adapter user, these choices amount to the following practical issues, which should be taken into account during
adapter configuration:
1. The adapter assumes that a fully calibrated and set-up model is provided for configuration in the FEWS system. Additional changes to the
model schematisation may require additional changes in the configuration of the model adapter (though not necessarily).
2. The adapter works using templates for native model input files. In each of these template files, keywords serve as placeholders for
dynamic time series data to be obtained from FEWS. The model adapter subsequently replaces these placeholder keywords with the
appropriate data, exported from FEWS in PI XML format from the generalAdapter module. The template files have to be prepared during
configuration of the model adapter, following the pre-defined naming conventions described in this manual. By using this approach, the
adapter is (almost) independent of the structure of the Delft3D input files. Also, this approach is adequate for all Delft3D modules.
3. The model adapter is subdivided over three sub-modules. These sub-modules can also be run independently of one another, which is
relevant for specific Delft3D applications, like coupled Delft3D-Sobek models or coupled FLOW-WAQ simulations. The sub-modules are:
The pre-adapter, which prepares the native model input data based on data exported to PI XML by FEWS from the
generalAdapter.
The adapter, which executes the model simulation.
The post-adapter, which converts selected model output to the appropriate PI XML data types to be imported by FEWS.
4. The adapter and template files allow for combining dynamic data provided by FEWS with static data included in the template files. This is
relevant in case, for example, a significant number of constant discharges are included in a FLOW or WAQ model (can be up to 50+). In
that case, these constant discharges do not have to be included in (governed from) the FEWS configuration.
5. The adapter assumes that dynamic data is provided by FEWS at all times. Error checking of this data should be done in FEWS primarily
(based on available FEWS functionalities). If inappropriate data is provided by FEWS, the error checking done by the model adapter is
limited. Instead, the adapter will log and display error messages as provided by the model in such a situation.
6. The adapter applies a specific configuration file, used to define some adapter specific settings. The contents of this file is described in this
manual.
7. Delft3D-FEWS was originally set up to work with 1D and 2D models (river system and catchment modelling). The Published Interface
(PI) XML file format, used for data transfer between FEWS and model systems, was set up to accommodate this. This implies that the PI
XML format supports both 1D and 2D data types. For 3D data, however, no specific PI XML data type is available. This implies that in
FEWS, a 3D dataset should be seen as a stack of 2D or 1D grids and timeseries. The user should take this into account during
configuration by, for example, assigning each layer of a 3D grid a unique parameter/location combination in FEWS.
A key feature of DELFT-FEWS is its ability to run external modules to provide essential forecasting functionality. The General Adapter is the part
of the DELFT-FEWS system that implements this feature. It is responsible for the data exchange with these modules and for executing the
modules and their adapters. The Delft3D model adapter is an example of such a module run from the General Adapter (see also FEWS manual:
General Adapter).
In order to configure a Delft3D-FEWS application, it is important to have a clear understanding of the relation between the General Adapter and
the Delft3D model adapter. This section summarizes some of the functionalities included in the FEWS general adapter module, and their relation
to the Delft3D model adapter.
The schematic interaction between the general adapter and an external module (like the Delft3D model adapter) is shown in the figure below.
Figure 1: schematic interaction between the General Adapter and an external module
1. The general adapter is that part of DELFT-FEWS which exports model input, imports model output data, and executes the pre-adapter,
module and post-adapter.
2. Export and import of model data is done in datafiles following the Published Interface (PI) XML format.
3. The preAdapter, Module and postAdapter together form the model adapter. The model adapter is initiated from the general adapter.
4. The preAdapter is that part of the model adapter which converts input data in PI XML format to native model input data.
5. The postAdapter is that part of the model adapter which converts native model output data to PI XML format to be imported by FEWS.
6. The Module is that part of the model adapter which starts a model simulation.
Essential in this division of tasks between model adapter and general adapter is that, from the vantage point of the general adapter, the model
adapter is a black box, and vice versa. Exchange of information between these components is based on the exchange of PI XML data files;
DELFT-FEWS does not have any knowledge about the modelling system (Delft3D in this case), whereas the modelling system does not have any
knowledge about DELFT-FEWS. Translation of data from one component to the other is done by the model adapter. This model adapter is not
part of the Delft-FEWS system itself, but is essentially an external program initiated by DELFT-FEWS through the general adapter. This external
program (the model adapter) should provide all the functionality to i) convert specific PI XML data to specific native model input files, ii) initiate a
model simulation and iii) convert model output data to PI XML data readable by DELFT-FEWS. In addition, this model adapter has the following
tasks: iv) logging of errors during all steps by the model adapter and by the model itself, and v) administration of the model state. For both these
tasks pre-defined PI XML file formats exist that are readable by DELFT-FEWS.
Note that, while the Delft3D model adapter provides all these functionalities, as described in this document, an amount of system configuration is
required to get each particular Delft3D-FEWS system operational. This configuration (described in the remainder of this document) must be done
in a consistent way for the model adapter and the general adapter, based on the relationship outlined above.
The configuration process is represented as a number of workflows (or steps). The FEWS configurator should follow these steps and adhere to the conventions
therein in order to set up a Delft3D-FEWS system. Some specifics on naming conventions and template files are outlined in separate sections (see
sections Adapter configuration - naming conventions and Adapter configuration - template files).
1. Preparing the Delft3D model setup. Note: it is assumed that a fully set-up and calibrated model is provided for configuration in FEWS (see
section Design philosophy). This model is subsequently prepared for usage from FEWS in this step.
2. Preparing the model adapter XML scheme (hereinafter called [Link], but other names can be prescribed by the user).
3. Preparing the General Adapter XML scheme (hereinafter called [Link], but other names can be prescribed by the user).
These three steps are outlined below in more detail. Before doing so, however, the user should set up a directory structure which adheres to the
conventions listed below.
Note that, for reference, examples of each of the steps listed below are provided in section Example configuration.
Directory structure
To minimize the number of choices that have to be made by the user (thus reducing the possibility for mistakes), the Delft3D-FEWS adapter
expects a fixed set of directories and files. Some directories, however, can be changed by the user for practical purposes. This (mandatory)
directory structure is illustrated by figure 1.
The directory structure illustrated in figure 1 is elaborated in more detail in the table below. The conventions indicated in this table should be
adhered to by the user when setting up a Delft3D-FEWS application.
%rootdir%/input/ | Contains all the files with timeseries and map stacks exported by Delft-FEWS | Fixed
%rootdir%/input/map_<param>.xml | XML-file describing the map stacks with parameter <param> (see naming conventions in section Adapter configuration - naming conventions) | Fixed
%rootdir%/stateInput/ | Contains the initial conditions (state) from which the computation must start. Exported from [Link] | Fixed
%rootdir%/stateInput/[Link] | The XML-file describing the time of the state files | Fixed
%rootdir%/output/ | Contains all the files with timeseries and map stacks as output by the adapter (model results) and to be imported by Delft-FEWS | Fixed
%rootdir%/output/[Link] | XML-file with the modelled (scalar) timeseries, to be imported by Delft-FEWS | Fixed
%rootdir%/output/map_<param>.xml | XML-file describing the modelled map stacks with parameter <param> (see naming conventions in section Adapter configuration - naming conventions) | Fixed
%rootdir%/stateOutput/ | Contains the new state file produced by the model, to be imported by Delft-FEWS | Fixed
%rootdir%/stateOutput/[Link] | The XML-file describing the time of the new state files, to be imported by Delft-FEWS | Fixed
%rootdir%/logs/ | Contains the diagnostics file produced by the GeneralAdapter and by the model adapter | Fixed
%rootdir%/logs/[Link] | The XML-file containing diagnostic information as output by the GeneralAdapter and by the model adapter | Fixed
%rootdir%/<workdir> | Directory in which the model computation will be run. Specified by the user in [Link]. For example, %rootdir%/Flow or %rootdir%/<modelname> | Changeable
%rootdir%/<modeldir> | Directory in which the static (non-changing) model schematisation is stored, to be copied to <workdir> by the model adapter. Specified by the user in [Link]. For example, %rootdir%/FlowSchematisation or %rootdir%/<modelname>Schematisation | Changeable
Note that it is possible to include multiple <workdir>'s and <modeldir>'s in a %rootdir% folder (for example, FLOW and WAQ model). In this case,
all fixed folders will be shared by both models.
For a Delft3D model to be used with the Delft3D-FEWS adapter, a number of adaptations have to be made to particular model input files. This
process is illustrated by the workflow in figure 2. The different steps in this workflow are elaborated in more detail below.
Figure 2: workflow of required adaptations to Delft3D model for usage in Delft3D-FEWS application.
1a) The Delft3D-FEWS adapter works using template model input files. In these templates, placeholder keywords can be assigned, which
are replaced by dynamic data from FEWS by the model adapter. These placeholder keywords have to be included during configuration of
the Delft3D-FEWS application. Naming conventions for keywords and template files are described in sections Adapter configuration -
naming conventions and Adapter configuration - template files.
1b) In a similar fashion, the simulation time frame has to be updated by the model adapter for new model simulations. This is achieved by
including placeholder keywords for the model timeframe in the MDF (FLOW) or INP (WAQ, PART) files. Additionally, for FLOW
simulations applying gridded meteorological forcing, placeholder keywords have to be included for the spatially varying meteorological
fields in the MDF file. Naming conventions for keywords and template files are described in sections Adapter configuration - naming
conventions and Adapter configuration - template files .
1c) In case of WAQ or PART simulations where the hydrodynamics are obtained from a preceding FLOW simulation (in the form of
communication files), the WAQ template files should refer to the correct FLOW <workdir> (see directory structure) where the
communication files are stored.
1d) Once all template files are prepared, both these template files and the static model schematisation (grid files etc) have to be included
in the appropriate <modeldir> (see directory structure)
By default, the FEWS general adapter module is used to execute the Delft3D adapter, export the necessary input and state data for a model
simulation and import the necessary output and state data prepared by the model adapter. This is described in more detail in section Adapter
configuration - generalAdapter configuration . The general adapter module must be configured in the correct way to provide the required input to
the Delft3D adapter, execute the Delft3D adapter and import the output provided by this adapter. This process is illustrated by the workflow in
figure 3. The different steps in this workflow are elaborated in more detail below.
1a) In the <general> section of the [Link], the correct <rootDir> has to be specified (hereinafter specified as %rootdir%,
see also directory structure described above).
1b, 1c and 1d) The <exportDir>, <importDir> and <diagnosticFile> directories have to be set to the appropriate paths, based on the
%rootdir% specified at step 1a and the directory structure described above (respectively %rootdir%/input, %rootdir%/output and
%rootdir%/logs/[Link]).
2a) Under <exportActivities>, <exportStateActivity>, the <stateExportDir> should be set to %rootdir%/stateInput. The <stateConfigFile>
should be set to %rootdir%/stateInput/[Link]. See also the directory structure above.
3a) Under <executeActivities>, <executeActivity>, <command>, the pre-adapter, adapter and model adapter have to be executed with
the <className> option, referring to the appropriate Java class (include names!). Under <arguments>, the following execution arguments
are mandatory: 1) %rootdir%, and 2) the model adapter XML scheme ([Link]) (at which location?). A minimal configuration sketch covering steps 1a to 4b is given after this list.
4a) In <importActivities>, <importStateActivity>, the <stateConfigFile> should be set to %rootdir%/stateOutput/[Link] (see directory
structure described above).
4b) In <importActivities>, <importTimeSeriesActivity>, the appropriate timeseries as prepared by the model adapter based on mapping
relations described in the [Link] file have to be included. Note that in this case, no importIdMap is required. In
<importActivities>, <importMapStacksActivity>, the appropriate map stacks as prepared by the model adapter based on mapping
relations described in the [Link] file have to be included. Note that in this case, no importIdMap is required.
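The fragment below sketches how steps 1a to 4b could come together in a general adapter module instance. It is a minimal, non-authoritative sketch: the adapter class name, the file names marked as hypothetical and the use of a single execute activity (in practice separate activities for the pre-adapter, model and post-adapter may be configured) are assumptions, and the element order must follow the General Adapter schema.

<generalAdapterRun>
	<general>
		<rootDir>%REGION_HOME%/Modules/Delft3D</rootDir> <!-- %rootdir%, step 1a (example path) -->
		<exportDir>%ROOT_DIR%/input</exportDir> <!-- step 1b -->
		<importDir>%ROOT_DIR%/output</importDir> <!-- step 1c -->
		<diagnosticFile>%ROOT_DIR%/logs/diagnostics.xml</diagnosticFile> <!-- step 1d, hypothetical file name -->
	</general>
	<activities>
		<exportActivities>
			<exportStateActivity> <!-- step 2a -->
				<stateExportDir>%ROOT_DIR%/stateInput</stateExportDir>
				<stateConfigFile>%ROOT_DIR%/stateInput/states.xml</stateConfigFile> <!-- hypothetical file name -->
			</exportStateActivity>
		</exportActivities>
		<executeActivities>
			<executeActivity> <!-- step 3a -->
				<command>
					<className>delft3d.Adapter</className> <!-- hypothetical; use the documented adapter class names -->
				</command>
				<arguments>
					<argument>%ROOT_DIR%</argument>
					<argument>%ROOT_DIR%/adapterConfig.xml</argument> <!-- model adapter XML scheme, hypothetical name and location -->
				</arguments>
				<timeOut>3600000</timeOut>
			</executeActivity>
		</executeActivities>
		<importActivities>
			<importStateActivity> <!-- step 4a -->
				<stateConfigFile>%ROOT_DIR%/stateOutput/states.xml</stateConfigFile> <!-- hypothetical file name -->
			</importStateActivity>
			<importTimeSeriesActivity> <!-- step 4b, no importIdMap required -->
				<importFile>timeseries.xml</importFile> <!-- hypothetical file name -->
				<timeSeriesSets>
					<!-- time series sets matching the mapping in the adapter configuration file -->
				</timeSeriesSets>
			</importTimeSeriesActivity>
		</importActivities>
	</activities>
</generalAdapterRun>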
Figure 3: workflow of required configuration to model adapter scheme ([Link]).
In this section, the contents of the Delft3D-FEWS model adapter XML scheme are described, together with how to edit this file for
particular Delft3D-FEWS setups. While this is also briefly discussed in the section Configuration workflow, this section provides more details on
the various settings in this file.
For this manual, we will assume that the XML configuration file is named [Link] (in accordance with the XSD scheme). During
configuration, however, the user is free to rename this file if required.
As outlined above, the Delft3D-FEWS model adapter applies an XML configuration file in order to group configuration-dependent settings in a
practical way. More precisely, this configuration file is used to define model adapter settings not supported by the FEWS General Adapter module
(or settings requiring additional flexibility). During setup of this file, the goal was to prevent duplicate information in the model adapter XML
configuration file and the general adapter module, thus preventing possible conflicts and errors in the system configuration. As such, available
settings in the model adapter XML configuration are limited to those strictly required by the system.
Figure 1 below shows the XSD scheme for the Delft3D-FEWS model adapter XML configuration file. In the below text, the sections <general>,
<preAdapter> and <postAdapter> are explained in more detail.
Section <general>
Because Delft3D consists of several modules that can be used in different configurations, it is necessary to specify which module should be run
(the keyword module). This module (FLOW, WAQ, ECO, PART or WAVE), together with the string specified as the run-id
(keyword <runId>), determines which files will be used (see Table 1 below). With these four keywords the user should be able to define the characteristics of
most runs of Delft3D modules. For some particular cases, for example when running Delft3D-FLOW with RTC, additional input arguments are
required to execute the model. These arguments should be provided as additional input arguments when executing the model adapter from the
general adapter module. For more information, see section Configuration workflow.
The keyword <workDir> indicates in which directory the computational programs will start and the keyword <modelDir> should refer to the
directory containing the template files and the other (fixed) files that together make up the input for the computation. See also section
Configuration workflow about the relation between these directories and the overall directory structure of a Delft3D-FEWS application.
Keyword Settings
<runId> RunId of the template input files (<runId>.mdf, <[Link]> or <[Link]>)
Table 1: Configurable settings in <general> section of XML configuration file Delft3D-FEWS adapter.
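By way of illustration, a <general> section using these keywords could look as sketched below. The element form of the module keyword and the runId value are assumptions; the directories follow the example directory structure from the Configuration workflow section:

<general>
	<module>FLOW</module> <!-- FLOW, WAQ, ECO, PART or WAVE -->
	<runId>flow_model</runId> <!-- hypothetical run-id; must match the names of the template input files -->
	<workDir>%rootdir%/Flow</workDir>
	<modelDir>%rootdir%/FlowSchematisation</modelDir>
</general>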
Section <preAdapter>
The <preAdapter> section contains one keyword only, <steeringTimeSeriesName>, the name of the timeseries that is to be used to determine
the time frame of the simulation (found in the timeseries exported by Delft-FEWS).
The reason for this keyword is that Delft-FEWS determines the actual modelling timeframe based on user defined start and stop times in the
general adapter module and on the availability of state information within this timeframe. Because of the latter, the start time of a model simulation
is not necessarily fixed but may vary based on the availability of this state information (see also section State handling). The Delft3D model should
be able to cope with this by changing the simulation period accordingly. To achieve this, the model adapter will assess the user specified
<steeringTimeSeriesName> and will base its simulation period (start time and stop time) on the duration of this timeseries.
The name of the timeseries is to be formed in this way: 'external name of the parameter/external name of the location'. For instance: if the
external parameter name is 'H' and the location is 'Southern boundary', then the name for that time series is: 'H/Southern boundary'. See also the
section on Naming conventions.
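For illustration, a minimal <preAdapter> section following this convention could look as sketched below; the parameter and location names are the ones from the example above and are placeholders for the external names used in your model:

<preAdapter>
	<steeringTimeSeriesName>H/Southern boundary</steeringTimeSeriesName>
</preAdapter>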
Section <postAdapter>
The section <postAdapter> describes the actions to be taken after completion of the model run. Rather than blindly export all the results from the
model run to Delft-FEWS and let it pick up the timeseries and map stacks of interest, the adapter exports only those timeseries and map stacks
described in this section. This is preferable given the (possibly) significant file size of Delft3D output files.
To achieve this, the <postAdapter> section contains a mapping table relating Delft3D output to internal Delft-FEWS parameters and locations.
Based on these mapping relations, the postAdapter will convert native model output to PI XML timeseries and mapStacks to be imported by
FEWS during the <importActivities> of the general adapter module (see also section Configuration workflow).
Note that this mapping table works in a similar way to FEWS IdMaps, where external locations and parameters and internal locations and
parameters are related to each other. Since the mapping table in this configuration file already links model output to the appropriate internal
locations and parameters in FEWS, no specific IdMap is required in this case. It is essential, however, that these locations and parameters exist
in the given FEWS application.
The external locations and parameters as described in this mapping table are derived from the parameter names and the location names as seen
in Delft3D-GPP (the adapter uses the same library as Delft3D-GPP to read the model output files). This implies that the external locations and
parameters should be set based on naming conventions which are outlined in section Naming conventions.
In these templates, placeholder keywords have to be assigned, which are replaced by dynamic data from FEWS by the model adapter. These
placeholder keywords have to be included during configuration of the Delft3D-FEWS application. In the section below, these template files and
keywords are described per Delft3D module.
Delft3D-FLOW
Delft3D-FLOW (referred to as FLOW hereinafter) applies a wide range of attribute files for different types of input data. Essentially, all forcing data
is contained in these files, and the FLOW Master Definition File (MDF) refers to these. For FLOW, the Delft3D model adapter distinguishes
between the following types of files:
1. Files containing timeseries data (*.bct, *.bcc, *.bcb, *.dis, *.eva, *.tem, *.wnd)
2. Files containing gridded data (gridded meteorological forcing)
3. The master definition file (*.mdf)
In these files, keywords should be included in the following way:
Each timeseries which has to be updated by the Delft3D model adapter based on data provided by FEWS, has to be replaced by the following
keyword.
Keyword Description
FLOW_TIMESERIES Placeholder to fill in the timeseries in the so-called tim format (only the time and data including the 'number of records'
entry where applicable, not the header). The keyword should be followed by the names of all timeseries that should be
filled in there, separated by spaces (see example below). Note that naming of timeseries should be done in accordance
with naming conventions described in Naming conventions.
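An illustrative attribute file fragment is sketched below (a *.dis file is assumed; the external parameter and location names are hypothetical and must follow the conventions in section Naming conventions):

...
FLOW_TIMESERIES 'Discharge/River boundary' 'Salinity/River boundary' 'Temperature/River boundary'
...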
In this particular case, timeseries for discharge, salinity and temperature will be added by the model adapter for this location. Note that a fixed
reference time is assumed. The model adapter will subsequently determine the relative timeframe of the included timeseries with respect to this
reference time (as required by FLOW).
Files containing gridded data are built from scratch by the model adapter, based on mapStack data exported by the general adapter (see section
Configuration workflow). This can be achieved based on placeholder keywords in the MDF file (see next section).
Keyword Description
FLOW_TIME_START Start of the simulation (format in accordance with Delft3D-FLOW). This is actually the time in minutes since the reference time, found in the mdf-file.
FLOW_TIME_STOP End of the simulation (format in accordance with Delft3D-FLOW), also expressed in minutes since the reference time found in the mdf-file.
FLOW_TIME_RST Total simulation duration, applies to the output restart (state) file at the end of the model simulation. Note that if this keyword is omitted and a fixed interval is specified, the postAdapter will select the last restart file written by the model.
FLOW_MAPSTACK Placeholder for the name of the file that will hold the gridded forcing data (as found in the mapstack files exported by FEWS). It should be followed by the name of the parameter, for example, FLOW_MAPSTACK 'pressure'. In this case, XML mapStack data described by the file map_pressure.xml will be used to construct the input file [Link] for FLOW.
Example MDF file
...
Itdate= #2008-01-01#
Tunit = #M#
Tstart= FLOW_TIME_START
Tstop = FLOW_TIME_STOP
Dt = 10
...
Restid= #<runId>.rst#
...
Flmap = FLOW_TIME_START 60 FLOW_TIME_STOP
Flhis = FLOW_TIME_START 10 FLOW_TIME_STOP
Flpp = FLOW_TIME_START 0 FLOW_TIME_STOP
...
Flrst = FLOW_TIME_RST
...
Filwu = FLOW_MAPSTACK 'windu'
Filwv = FLOW_MAPSTACK 'windv'
Filwp = FLOW_MAPSTACK 'pressure'
Filwr = FLOW_MAPSTACK 'humidity'
Filwt = FLOW_MAPSTACK 'temperature'
Filwc = FLOW_MAPSTACK 'cloudiness'
Some additional items which have to be taken into account during preparation of a Delft3D model for usage by the Delft3D model adapter are:
1. The adapter assumes a fixed reference time is applied. This time (as indicated in the MDF template file) will be used to determine the
relative time frame for all timeseries in the attribute files. This also implies that the reference time indicated in the MDF file and in these
attribute files has to be identical.
2. The adapter assumes that astronomical tidal forcing data is provided with the original model (if applicable). This implies that tidal
components as prescribed in the *.bca and *.cor files will be used. These components are static forcing from the vantage point of FEWS.
3. The interval for map and timeseries output as specified in the MDF (Flmap and Flhis) should correspond with the interval of the PI XML
data imported by FEWS under <importActivities> in the general adapter.
4. The model adapter will check all attribute files found in the static data repository (<modelDir>, see section Configuration workflow) for the
abovementioned keywords, as it will the MDF file. Note that the name of the MDF file should match the <runId> as specified in the
model adapter configuration file (see section XML configuration scheme). The user is free in the naming of the attribute files.
5. It is assumed that the FLOW model starts from a spatially varying restart file at all times, whether this is a 'warm' state file or a 'cold'
initial state file. This file has a fixed name at all times. This implies that the model output state (restart file) is renamed to this fixed name
by the model adapter. During configuration, a cold state file in a format similar to a restart file must be provided.
Delft3D-WAQ
In contrast to Delft3D-FLOW, Delft3D-WAQ applies a single input file (*.inp), though additional files can be included in the *.inp file using the
INCLUDE statement. The simulation timeframe, timeseries data and gridded data are all specified in this file.
Keyword Description
WAQ_TIMESERIES Placeholder to fill in the timeseries in the WAQ /ECO format (only the time and data, not the header). The keyword
should be followed by the names of all timeseries that should be filled in there, separated by spaces (see example
below).
WAQ_MAPSTACK Placeholder for the name of the file that will hold the gridded forcing data (as found in the mapstack files exported by
FEWS). It should be followed by the name of the parameter, for example, WAQ_MAPSTACK 'windvel'. In this case, XML
mapStack data described by the file map_windvel.xml will be used to construct the input file [Link] for WAQ.
In addition to these keywords, it is important to note that the inp file should refer to the communication files as output by a preceding FLOW
simulation. In all likelihood, this FLOW simulation was run during an earlier phase of the FEWS workflow, in a <workDir> specified in the model
adapter configuration file (see sections Configuration workflow and XML configuration scheme). The communication file paths in the inp file
should point towards this <workDir>. Note that it is strongly advised to use relative paths in this case!
Example INP file (timeframe)
...
WAQ_TIME_START ; start time
WAQ_TIME_STOP ; stop time
0 ; constant timestep
0003000 ; time step
...
WAQ_TIME_START WAQ_TIME_STOP 0120000 ; monitoring
WAQ_TIME_START WAQ_TIME_STOP 0120000 ; map, dump
WAQ_TIME_START WAQ_TIME_STOP 0120000 ; history
...
...
-2 ; first area option
'..\<workDir FLOW>\[Link]' ; area file
;
-2 ; first flow option
'..\<workDir FLOW>\[Link]' ; flow file
;
...
...
TIME BLOCK
DATA
'Continuity' 'Salinity' 'DetC' 'DetN' 'DetP'
WAQ_TIMESERIES 's/bound-1' 'DetC/bound-1' 'DetN/bound-1' 'DetP/bound-1'
...
...
FUNCTIONS
'Wind'
LINEAR
DATA ;
WAQ_TIMESERIES 'windvel/bound-1'
...
...
SEG_FUNCTIONS
'Radsurf' ; name of segment function
ALL
WAQ_MAPSTACK 'sunshine'
...
...
'<runId>.res' ; initial conditions in binary file
'<runId>.res' ; binary file
...
Delft3D-PART
Delft3D-WAVE
3. Example configuration
PM
4. Best practices
MCRM: Rainfall-Runoff (EA, UK)
HEC-ResSim: Reservoir Simulation (USACE, USA)
RTC Tools: Real-Time Control, Model Predictive Control, Reservoir Simulation (Deltares, Netherlands)
Modflow
Modflow can be connected to Delft-FEWS using the Modflow adapter developed by Adam Taylor.
PCOverslag
PCOverslag can be connected to Delft-FEWS using the PCOverslagAdapter developed by Deltares.
The files needed to run the PCOverslagAdapter from Delft-FEWS can be found in the install artifacts [Link]. The following files
should be located in the bin directory in the PCOverslag Module location:
Adapters_PCOverslag.jar
[Link]
Delft_PI.jar
Delft_PI_castor.jar
Deflt_Util.jar
[Link]
[Link]
[Link]
[Link]
Input
Wave height
Wave direction
Wave period
Waterlevel
Output
Golf oploop (wave run-up)
Golf overslag (wave overtopping)
Golf oploop niveau (wave run-up level)
Golf overslag niveau (wave overtopping level)
Overslag debiet (overtopping discharge)
Below is an example of the general adapter configuration file, to be used from version Stable2011.02 onwards.
<filter>%WORK_DIR%/*.*</filter>
</purgeActivity>
</startUpActivities>
<exportActivities>
<exportTimeSeriesActivity>
<exportFile>[Link]</exportFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>Kopieer_Hydra_naar_Dijkvak</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>Dijkvak</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<relativeViewPeriod unit="hour" start="-6" end="12"/>
<readWriteMode>read only</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</exportTimeSeriesActivity>
<exportDataSetActivity>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
</exportDataSetActivity>
<exportRunFileActivity>
<description>This pi run file is passed as argument to PcOverslagAdapter</description>
<exportFile>%WORK_DIR%/[Link]</exportFile>
<properties>
<description>Specific configuration required for PcOverslagAdapter</description>
<string value="no" key="WITH_ITERATION"/>
<string value="%ROOT_DIR%/profiles" key="PROFILE_DIR"/>
</properties>
</exportRunFileActivity>
</exportActivities>
<executeActivities>
<executeActivity>
<description>PC Overslag Adapter</description>
<command>
<className>[Link]</className>
<binDir>%ROOT_DIR%/bin</binDir>
</command>
<arguments>
<argument>%WORK_DIR%/[Link]</argument>
</arguments>
<timeOut>300000</timeOut>
</executeActivity>
</executeActivities>
<importActivities>
<!-- Import PC Overslag results-->
<importTimeSeriesActivity>
<importFile>[Link]</importFile>
<timeSeriesSets>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
<timeSeriesSet>
<moduleInstanceId>PCOverslag_Voorspelling</moduleInstanceId>
<valueType>scalar</valueType>
<parameterId>[Link]</parameterId>
<locationSetId>DijkvakGolf</locationSetId>
<timeSeriesType>simulated forecasting</timeSeriesType>
<timeStep unit="hour" multiplier="1"/>
<readWriteMode>add originals</readWriteMode>
</timeSeriesSet>
</timeSeriesSets>
</importTimeSeriesActivity>
</importActivities>
</activities>
</generalAdapterRun>
Properties
WITH_ITERATION
With this option there is a choice between running the PCOverslag DLLs with or without iteration (default is no).
PROFILE_DIR
This is the path where the profile description files are located. The profile description files are ASCII files that describe the characteristics and
geometry of the profiles to be computed.
DAM 3
DAMHOOGTE 1.8
RICHTING 280
KRUINHOOGTE 3.16
VOORLAND 1
0.000 1.800 0.000
-61.440 -4.500 1.000
-49.500 1.470 1.000
-3.500 2.170 1.000
0.000 3.160 1.000
MEMO
profiel handmatig toegevoegd (MvR, 08/10/2007)
obv rapport 110303/OF2/249/000144/AM, profiel L601dv
hoogte havendam obv AHN_5
Locatie: 177000;539229
For more details on the PCOverslag application see the helpdeskwater pages and the PCOverslag programming guide (in Dutch).
RTC Tools
RTC Tools is a modelling package for Real-Time Control. It can be applied as a stand alone application in Delft-FEWS or linked to hydraulic
modelling packages via OpenMI.
Background
Areas of application
Integration into Delft-FEWS
Contact
Background
RTC (Real-Time Control) Tools originates from the integration of several project-specific reservoir simulation modules in flood forecasting systems
for Austria, Germany and Pakistan. Its original design in Java in 2007, also referred to as the Delft-FEWS reservoir module, aims at the simulation
of pool routing in reservoirs and reservoir systems including related reactive controllers and operating rules.
Support for more advanced model predictive controllers was introduced in 2008 and extended in 2009. This includes the implementation of a
kinematic wave model as an additional internal model for the predictive controllers, as well as the introduction of adjoint systems for selected
modeling components. The latter resulted in significant speed-ups of these controllers.
In 2010, the concept of triggers for switching controllers and operating rules on and off was introduced, enabling the simulation of more
sophisticated heuristic control schemes. Furthermore, the software was redesigned in C++ and enhanced by a C# OpenMI wrapper for integration
into modeling packages such as SOBEK or Delft3D (an ongoing activity in the first half of 2010).
Areas of application
Because of the need for internal modeling in Model Predictive Controllers, the tool includes a number of simple routing models. These also enable
its stand-alone use in forecasting systems. Furthermore, the OpenMI interface allows a user to couple the tool to a wide range of hydraulic
modeling packages.
The software pays special attention to state handling. This includes by definition all system states of triggers, controllers, operating rules and all
modeling components.
Check the Technical Reference for the integration of RTC Tools into Delft-FEWS
Contact
[Link]@[Link]
17 Launcher Configuration
Introduction
The Launcher application of FEWS requires two types of configuration file:
Contents
Launcher XML
Security XML
See [Link]
Launcher XML
Security XML
Introduction
Currently there are a number of mechanisms that allow external applications to exchange data with Delft-FEWS. Importing and exporting
data via a number of possible file formats is used in current operational systems.
Other, more interactive, methods have been developed and are described in this section. These mechanisms are not yet fully developed;
they have been set up as test cases, each with a particular goal in mind. The following sections give a description of the current status
of these projects, their strong points and their weaknesses.
Fews JDBC server provides a simple JDBC interface. The FEWS JDBC server uses an OC region configuration and initializes a synchronization
process with the MC to update the data. The client application sets up a connection to the JDBC driver and uses SQL queries to retrieve data.
Currently it is only possible to retrieve data from the FEWS system and not write data to the FEWS system.
Fews PI service provides a simple API using a SOAP framework called XFire. The SOAP framework is set up using an OC region configuration.
This OC also initializes a synchronization process with the MC to update the data. The client application uses XFire to retrieve a proxy of the
API. With the proxy instance the client application can retrieve data from the FEWS system and also write data to the FEWS system. The
exchange of data occurs using strings containing the content of FEWS PI files.
Fews Workflow runner service provides a simple API using a SOAP framework called XFire. The SOAP framework is set up by passing a
configuration file as an argument in the main method of the service runner class. Once the service is started by the runner class, the client
application uses XFire to retrieve a proxy of the API. With the proxy instance the client application can run single task workflows on the MC. When
running a workflow, the client must pass the input timeseries arrays as arguments to the proxy. The output timeseries produced by the
workflow run are written to output files configured in the configuration file.
Contents
Fews JDBC server
Fews PI service
Fews Workflow Runner service
JDBC vs. FewsPiService
Introduction
To be able to query timeseries directly using SQL statements, Delft-FEWS can be set up to act as a JDBC server. This can be done using an OC
configuration (which will log in and automatically synchronise data with the MC, thereby ensuring all data is constantly being updated), or by
running it stand-alone. In the latter case the system will only export what is in the local datastore at startup.
Locations
The locations table allows the client application to query the available FEWS locations.
Parameters
The parameters table allows the client application to query the available FEWS parameters.
Timeseries
The timeseries table allows the client application to query the available FEWS timeseries. The information shown in the TimeSeries table provided
by the JDBC server does not match the information of the FEWS TimeSeries table. The JDBC server provides a view of the data of a queried
timeseries.
ExTimeSeries
The extended timeseries table allows the client application to query the available FEWS timeseries. The information shown in the ExTimeSeries
table provided by the JDBC server is similar to the information presented in the FEWS TimeSeries table. The JDBC server provides a view of the
metadata of a queried timeseries.
TimeSeriesGraphs
The TimeSeriesGraphs table allows the client application to retrieve an image of a FEWS timeseries chart for the queried timeseries. The query
returns a byte array value containing the content of a BufferedImage.
Filters
Filters is set up as a view, because it does not represent a FEWS table. Instead, the Filters view represents the content of the
FEWS configuration file '[Link]'.
TimeSeriesStats
TimeSeriesStats is set up as a view, because it does not represent a FEWS table. Instead, the TimeSeriesStats
view shows the results of a statistical analysis performed on the timeseries returned by the query.
Windows
Step 1: Install an OC
Step 2: Delete the "[Link]" from the "OC" directory. When starting the application a new "[Link]" file will be
generated for logging.
Step 3: Make a new "<OC-Name>_JDBC.exe" and "<OC-Name>_JDBC.jpif" file in the \bin directory. The "<OC-Name>_JDBC.jpif" must contain
the following information.
..\jre
-mx512m
-cp
$JARS_PATH$
[Link]
<OC-Name>_JDBC
Step 4: Start the FewsJdbcServer by clicking on the <OC-Name>_JDBC.exe. The Server will start as an OC and synchronise its localDataStore
with the Central Database using the synchprofiles of an OC.
Step 5: Stop the FewsJdbcServer by killing the application using the System Monitor. In the attachments an exe is provided that opens a console
window. If this console window is stopped, the FEWS JDBC driver process is also stopped.
Step 6: Unzip the "JDBC service [Link]" to a directory at the same level as the bin and application directory, e.g. "service".
Step 7: In the file "run_installscript.bat", replace the BIN directory and the FEWS application name and directory.
Step 8: run the batch file "run_installscript.bat"
Step 9: go to the services window and define the correct properties for the just installed service, like
automatic startup
correct user settings in login tab
restart options after 5 minutes
Notice that the batch file calls install_JDBC_Service.bat, which contains a list of the *.jar files in the bin directory. If these filenames or the list have
changed, this list should be updated; otherwise, running the service may not be successful. Also make sure that your JAVA_HOME
environment variable has been set and refers to your JRE directory. This JRE directory should not contain space characters in its name. If it does,
make a copy of your JRE in a directory with a name without spaces and set the JAVA_HOME variable in run_installscript.bat to this new path.
Linux
Step 1: Install an OC
Step 2: Delete the "[Link]" from the "OC" directory. When starting the application a new "[Link]" file will be
generated for logging.
Step 3: Take the fews_jdbc.sh script file and place this one level higher than the \bin directory.
Step 4: Go to the directory where the ./fews_jdbc.sh script file is located and type ./fews_jdbc.sh <OC-Name>.
Step 5: Stop the FEWS JDBC service by typing exit in the console window where the JDBC startup script was executed. Another option is to kill
the process of the FEWS JDBC service.
Starting JDBC Service from FEWS Explorer
For debugging purposes it is possible to start the JDBC server from the stand-alone FEWS Explorer. With the F12 key you get a list of debug options.
Select "start embedded vjdbc server". The service will start and can be accessed from a database viewer.
Step 1: Install DbVisualizer on your PC. Make sure it is not installed in a folder with spaces, such as "Program Files". When there is a space in the
folder name, it will NOT work correctly. This is a DbVisualizer bug that cannot be solved by FEWS.
Step 2: Copy the files "[Link]" and "[Link]" to a folder on your computer. These are the drivers used by DBVisualizer. Also
this folder name should not contain any space characters (use the 8.3 format).
Step 3: Add a new JDBC driver to DbVisualizer:
Start DbVisualizer
Open the Tools menu and the Driver Manager
Create a new driver and give it the name "vjdbc". Load the two jar files in the "User Specified" tab.
Close the Driver Manager window.
The FEWS JDBC Server has been tested with the Easysoft JDBC-ODBC bridge, which can be purchased. This allows the user to access the JDBC
Server from other applications, like Microsoft Access, that only support ODBC. To use the JDBC driver with the ODBC-JDBC bridge, do the
following:
Make sure you add the [Link] and [Link] file to the classpath
The url is: jdbc:vjdbc:rmi://<host>:2000/VJdbc,FewsDataStore (under <host>, enter the machine where the fews jdbc application runs)
When the FEWS JDBC application runs you can test the connection using the Test button.
Example SQL queries
There are a number of SQL queries that can be used to retrieve data from the database. Only SELECT (read-only) statements are supported. Statements
must be formatted as:
SELECT [DISTINCT] <select_expr> FROM TABLE_NAME [WHERE <where_condition>] [ORDER BY COLUMN_NAME [ASC|DESC]]
For the Locations, Parameters and Filters tables the SQL query "Select * from <TableName>" is allowed. For the TimeSeries table this query will
return an error.
Note! When creating a query using the clause time BETWEEN '2007-03-17 [Link]' AND '2007-04-01 [Link]', it
is good to realise that the start time is used as system time for retrieving the timeseries data. This can be important when
retrieving 'external forecasting' data: an 'externalForecastTime' later than the start [Link] will result in no data being
returned.
SELECT id, locationid, parameterid FROM filters WHERE id = 'ImportSHEF' ORDER BY location
Return all locations from a specific filter
The Time series can be queried with or without the Filter ID. An example of a query without using the filter ID is:
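A minimal sketch of such a query is given below; the parameter and location ids are illustrative placeholders, and the exact time literal format is assumed.

SELECT * from TimeSeries
WHERE parameterId = 'H.obs'
AND locationId = 'Location1'
AND time BETWEEN '2008-12-19 00:00:00' AND '2008-12-23 00:00:00'

A similar query that does use the filter ID is: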
SELECT * from TimeSeries
WHERE filterId = 'ImportSHEF'
AND parameterId = 'FMAT'
AND locationId = 'DETO3IL'
AND time BETWEEN '2008-12-19 [Link]' AND '2008-12-23 [Link]'
AND Value BETWEEN '1.9' AND '2.0'
All values are in the configured time zone of the JDBC application.
Unreliable values will not be returned by the query; the complete time step of an unreliable value is missing from the
returned recordset.
The time series can be extracted from the database as a graph (binary object) through the TimeSeriesGraphs table. Queries with or without the
Filter ID can be used, similar to the time series table. An example of a query with the use of a filter ID is given below.
By default the graphs have a size of 300 (width) x 200 (height) pixels. In the SQL query the width and height can also be specified.
As of 201001 it is allowed to combine data from different locations and/or parameters into one graph by 'joining' them using OR operators.
Such a clause with OR operators must be put in between brackets.
As of 201001 it is also possible to optionally specify the time zone for the resulting graph; time clauses in the query remain to be specified in GMT.
SELECT *
FROM TimeSeriesgraphs
WHERE filterId = 'Ott_ruw'
AND parameterId = '[Link]'
AND (locationId = '10.H.59' OR locationId = '15.H.20')
AND time BETWEEN '2008-05-01 [Link]' AND '2008-05-01 [Link]'
AND height = 500 AND width = 750 AND timezone='GMT-1';
Example code
Here follows some example code of how client applications can set up a connection to a JDBC server hosted by a FEWS OC.
No special jars other than the ones provided by the JRE are required.
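A minimal sketch of such a client is given below, using plain JDBC. The driver class name (de.simplicit.vjdbc.VirtualDriver), the host and the query are assumptions and should be checked against the vjdbc driver delivered with FEWS; depending on the setup, the driver jar files mentioned above may need to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FewsJdbcClient {
    public static void main(String[] args) throws Exception {
        // Driver class name is an assumption; verify it against the vjdbc driver delivered with FEWS.
        Class.forName("de.simplicit.vjdbc.VirtualDriver");
        // URL format as documented above; replace localhost by the machine running the FEWS JDBC application.
        String url = "jdbc:vjdbc:rmi://localhost:2000/VJdbc,FewsDataStore";
        try (Connection connection = DriverManager.getConnection(url);
             Statement statement = connection.createStatement();
             // "Select * from <TableName>" is allowed for the Locations, Parameters and Filters tables.
             ResultSet result = statement.executeQuery("SELECT * FROM Locations")) {
            while (result.next()) {
                // Print the first column of every returned location record.
                System.out.println(result.getString(1));
            }
        }
    }
}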
Miscellaneous
By default the port number of the JDBC Server is 2000. It is possible to use a different port number when starting the application. In the
[Link] a property can be added like this:
JdbcServerPort=2078
Rolling Barrel
When the FEWS JDBC Server is started, the OC rolling barrel configuration will not be used. Instead the Rolling Barrel will run once a day at
02:00 GMT. After the FEWS Rolling Barrel, the compact Database script (only for MS ACCESS databases) will also be executed automatically.
FEWS stores timeseries with timestamps in GMT, without Daylight Saving (DLS) conversion.
JDBC client applications like DbVisualizer adopt timezone settings from the (local) operating system.
This means that data is converted (from FEWS GMT) to the local timezone. When DLS conversion is active, a query on data from the night that DLS
is switched (summer time to winter time, when the clock is set back an hour) results in 'double' timeseries records between 2:00 and 3:00 AM.
The JVM for the JDBC client (like DbVisualizer) can be started with an extra command-line option that forces the timezone setting for the JVM rather
than adopting it from the local OS. This command-line option looks like:
-[Link]=GMT
or
-[Link]=GMT+1
or
-[Link]=GMT-5
and so on...
When starting DbVisualizer's JVM with -[Link]=GMT, results are in GMT, without DLS conversion.
Known issues
[Link]: [Link]: no protocol....
This is an exception that occurs due to a bug in DBVisualizer. Check whether DBVisualizer OR the vjdbc drivers are located in directories
that contain spaces in their path. Move them to a directory path without spaces to solve this issue.
Fews PI service
Introduction
Fews PI Service API
Getter methods
System info
Identifiers
Content
Timeseries
Client datasets
Setter methods
Run methods
Conversion methods
Data management methods for client data sets
Installing a PI Service Client
PI Service configuration
Initializing PI Service in Explorer config
Installing a FEWS PI Service as a backend process
Example code
Setting up a connection in JAVA
Setting up a connection in C
Setting up method calls JAVA
Setting up method calls C
Appendix
FewsPiService WSDL
FewsPiService API
FewsPiServiceConfig XSD
Running Delft-FEWS in the background
Introduction
The Fews PI service data exchange uses XFire, a java SOAP framework. This framework allows a client application to obtain a proxy instance to
the FewsPiService API. With this API the client can retrieve data from an OC or write data to an OC. Before a client application can access the
FEWS system there is some configuration work that needs to be done.
Users looking to use XFire on a new project should use CXF instead. CXF is a continuation of the XFire project and is
considered XFire 2.0. It has many new features, a ton of bug fixes, and is now JAX-WS compliant. XFire will continue to be
maintained through bug fix releases, but most development will occur on CXF now. For more information see the XFire/Celtix
merge FAQ and the CXF website.
To use the Fews PI service from other programming languages, see Using the Fews PI service from C, which also contains an example of using
Tcl.
You can run the FEWS system in the background, if this is needed or desirable (see below).
Getter methods
System info
clientId: File name of the client configuration file located in the OC configuration directory 'PiClientConfigFiles'. This file is free format and
its content is only read by the client application. The only requirement is that the content is text based.
fileExtension: Extension of the client file.
returns: Text file containing the client configuration.
Get last time that OC is updated.
Identifiers
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleState element in the service configuration file.
returns: List of ColdStateGroup ids.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleState element in the service configuration file.
returns: Available warm state times for requested ModuleState.
Content
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Filters XML file containing Fews Filters.
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Locations XML file containing the locations available for passed filter id.
Get timeseries parameter information available for the passed 'filterId' argument.
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not provided then no id mapping will be done.
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_TimeSeriesParameters XML file containing the parameters available for passed filter id.
Retrieves rating curve content in PI rating curve format for selected rating curve identifiers.
Get log messages produced by the last run of a given task id.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleDataSet element in the service configuration file.
ensembleId: <currently not supported>
ensembleMemberIndex: <currently not supported>
returns: Binary content of the ModuleDataSet file for requested ModuleState.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleParameterSet element in the service configuration file.
ensembleId: <currently not supported>
ensembleMemberIndex: <currently not supported>
returns: String content of the ModuleParameterSet file for requested ModuleState.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a ModuleState element in the service configuration file.
stateTime: Time for which to retrieve warm state file. Time values can be obtained from method getAvailableStateTimes
ensembleId: <currently not supported>
ensembleMemberIndex: <currently not supported>
Timeseries
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a TimeSeries element in the service configuration file.
taskId: <id not required however can not be null>
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
locationIds: Subset of location IDs for which to retrieve timeseries.
ensembleId: Id of the ensemble, can be null.
ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
thresholdsVisible: (Optional) Option to add threshold values in the header if set to TRUE. Default is FALSE
returns: String content of a PiTimeseries XML file only containing header information.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a TimeSeries element in the service configuration file.
taskId: <id not required however can not be null>
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
locationIds: Subset of location IDs for which to retrieve timeseries.
ensembleId: Id of the ensemble, can be null.
ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
thresholdsVisible: (Optional) Option to add threshold values in the header if set to TRUE. Default is FALSE
returns: String content of a PiTimeseries XML file.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
id: Reference to the ID of a TimeSeries element in the service configuration file.
taskId: <id not required however can not be null>
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
locationIds: Subset of location IDs for which to retrieve timeseries.
ensembleId: Id of the ensemble, can be null.
ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
returns: Content of the binary file that can be exported together with the PITimeseries XML files.
Get the header information for requested timeseries using a filter id.
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
locationIds: Subset of location IDs for which to retrieve timeseries.
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
useDisplayUnits: (Optional) Option to export values using display units (TRUE) instead of database units (FALSE).
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Timeseries XML file only containing header information.
Get the timeseries data for requested timeseries using a filter id.
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
startTime: Start date/time of run - [Link] if the configured default is to be used
timeZero: Forecast time zero.
endTime: End date/time of run - [Link] if the configured default is to be used
filterId: Filter Id. Can be retrieved using the String getFilters(String piVersion) method.
locationIds: Subset of location IDs for which to retrieve timeseries.
parameterIds: Subset of parameter IDs for which to retrieve timeseries.
convertDatum: Option to convert values from relative to location height to absolute values (TRUE). If FALSE values remain relative.
useDisplayUnits: Option to export values using display units (TRUE) instead of database units (FALSE).
piVersion: (Optional) Pi Version for the return file. Defaults to the latest PI version.
returns: String content of a Pi_Timeseries XML file containing the requested timeseries.
Get the timeseries data for requested timeseries using a filter id.
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not provided then no id mapping will be done.
timeZero: Forecast time zero.
segmentId: identifier of the segment node
startTime: Start date/time of run - [Link] if the configured default is to be used
endTime: End date/time of run - [Link] if the configured default is to be used
thresholdsVisible: Option to include (TRUE) or exclude (FALSE) thresholds in the time series headers. Default is FALSE
returns: String content of a Pi_Timeseries XML file containing all timeseries including headers for this segment
Client datasets
Retrieves all identifiers of segment nodes holding one or more client datasets
Retrieves all identifiers of client datasets available for this segment node
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
nodeId: Segment node identifier
returns: Array of string identifiers referring to client datasets
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client dataset
returns: String description of dataset content
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client dataset
returns: Byte content of the requested client dataset
returns: Modification Time as Long (modified Julian date)
Setter methods
<not implemented>
<not implemented>
<not implemented>
Insert a timeseries.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
taskId: <id not required>
id: Reference to the ID of a TimeSeries element in the service configuration file.
piTimeSeriesXmlContent: Time Series content in the form of a Pi timeseries xml file.
ensembleId: Id of the ensemble
ensembleMemberIndex: Ensemble member index for this time series. NULL if this is not an ensemble.
clientId: File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an instance of
the FewsPiServiceConfig XSD.
taskId: <id not required>
id: Reference to the ID of a TimeSeries element in the service configuration file.
piTimeSeriesXmlContent: Time Series content in the form of a Pi timeseries xml file.
byteTimeSeriesContent: TimeSeries data content in the form of a byte array.
ensembleId: Id of the ensemble
ensembleMemberIndex: Ensemble member index for this time series. NULL if this is not an ensemble.
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD. If not
provided then no id mapping will be done.
piTimeSeriesXmlContent: Time Series content in the form of a Pi timeseries xml file. Timeseries must be
available in the FEWS Filters configuration file.
byteTimeSeriesContent: (Optional) TimeSeries data content in the form of a byte array.
convertDatum Option to convert the values to values that are relative to location height (TRUE). If FALSE then no conversion is
performed.
Run methods
Create a new task.
<not implemented>
Conversion methods
parameterId: Parameter identifier, used to trace down the associated base unit as well as the display unit
value: array of values to be converted
returns: Array of converted values (floats)
parameterId: Parameter identifier, used to trace down the associated base unit as well as the display unit
value: array of values to be converted
returns: Array of converted values (floats)
Converts stage value to discharge using a rating curve at a specific location valid for a particular moment in time
Converts discharge value to stage using a rating curve at a specific location valid for a particular moment in time
discharge: array of discharges to be converted to stage
returns: array of values representing stage
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client data set
description: (Optional) description of the client dataset content
dataSet: Byte object holding the client dataset
nodeId: Segment/nodeId which is associated to this client dataset
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client data set
description: (Optional) description of the client dataset content
dataSet: Byte object holding the client dataset
nodeId: Segment/nodeId which is associated to this client dataset
clientId: (Optional) File name of service configuration file located in the OC configuration directory 'PiServiceConfigFiles'. This file is an
instance of the FewsPiServiceConfig XSD.
id: Identifier of client data set
Initiates a database synchronization to upload the client data set to the Master Controller
Install an OC or SA.
Select an available port number on which the service will be listening.
Configure a FewsPiServiceConfig file for the OC/SA region.
PI Service configuration
The Pi Service configuration files are located in the directory 'PiServiceConfigFiles' of the region configuration. These files link the IDs known by
the client applications to FEWS data, such as TimeSeries, States, ModuleDataSets and ModuleParameterSets.
1. General: Contains general configuration, such as import and export locations. Id mapping information for mapping client
location/parameter ids to FEWS location/parameter ids.
2. TimeSeries: Contains the mapping of client timeseries ids to the FEWS timeseries sets. Also some extra export options.
3. ModuleDataSet: Contains the mapping of client moduleDataSet ids to the FEWS moduleInstance descriptor ids.
4. ModuleParameterSet: Contains the mapping of client moduleParameterSet ids to the FEWS moduleInstance descriptor ids.
5. ModuleState: Contains the mapping of client moduleState ids to the FEWS moduleInstance descriptor ids.
If a client application requires an application-specific configuration file then this file must be configured in the directory 'PiClientConfigFiles' of the
region configuration. This file is free format (text) and can be obtained from the API by calling getClientConfigFile.
The FEWS system does not automatically start up the PI service listener environment. This needs to be configured in the Explorer configuration
file located in the directory 'SystemConfigFiles' of the region configuration. To do this, the following line must be entered at the end of the Explorer
configuration file:
Where 'start' and 'end' represent the port number range within which the PI Service must find an available port. To fix the PI Service to a single
port enter the same number twice.
It is also possible to initialize the FEWS PI Service as a backend process. The backend process is the same as the FEWS PI Service Client
application, but then without the user interface. To run the service as a backend process the client needs to be installed first, as described above.
After this has been completed continue with the following installation procedures:
The backend FEWS PI Service can be started in Windows by installing a Windows service that can start and stop the FEWS PI Service. Unpack
the following archive containing the service installation files: NT_FewsEnvironmentShell_Service_Install.zip. Open a DOS command prompt in the
directory containing the unpacked archive. Run the install:
# Where 'work dir' is the directory containing the BIN, JRE and REGION dirs
# and 'region name' is the directory name of the region for which to install the service
Start and stop the new service using the Windows services application. The service will have the following name: FewsShell <region name>.
Alternatively, on Linux, use the [Link] file. This file should be placed in the same directory as the links to the JRE, BIN and REGION dirs.
start
This backend FEWS PI Service does not use the Explorer configuration, as described above, to obtain the listener port number. This is to make
sure that the client FEWS PI Services can run independently from the backend FEWS PI Services without port conflicts. Instead the backend
FEWS PI Service obtains its listener port from the [Link] files, using the entry:
PiServicePort=2001
Example code
Here follows some example code of how client applications can set up a connection to the PI Service of a running FEWS OC.
Before starting, the client will require the following library: [Link]. This library can be found in the bin directory of the FEWS system.
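As a minimal, non-authoritative sketch, an XFire client proxy could be obtained as shown below. It assumes that a FewsPiService interface is available on the classpath and that the service listens on port 8191 of the local machine; the endpoint URL and path are assumptions and must be adapted to the installation.

import org.codehaus.xfire.client.XFireProxyFactory;
import org.codehaus.xfire.service.Service;
import org.codehaus.xfire.service.binding.ObjectServiceFactory;

public class FewsPiClient {
    public static void main(String[] args) throws Exception {
        // Build a service model from the FewsPiService interface (assumed to be on the classpath).
        Service serviceModel = new ObjectServiceFactory().create(FewsPiService.class);
        // Create a client proxy pointing at the running FEWS OC; the URL and port are assumptions.
        FewsPiService service = (FewsPiService) new XFireProxyFactory()
                .create(serviceModel, "http://localhost:8191/FewsPiService");
        // The getter, setter and run methods described above can now be called through the proxy.
    }
}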
Setting up a connection in C
Before starting, the client will require the following libraries: [Link] and the libraries belonging to the TclSOAP package.
While there are C++ libraries to handle SOAP, we could not find a pure C library. At first sight the C++ libraries seem to make it necessary to
define classes and so on before you can start using them. With the Tcl approach the code remains quite simple. (We have not investigated
exactly what TclSOAP requires, but it can be downloaded from ActiveState.)
For the example [Link] file used refer to the attachment [Link]
#include <tcl.h>
Tcl_Interp *interp;
Tcl_FindExecutable(argv0);
interp = Tcl_CreateInterp();
if (Tcl_Init(interp) != TCL_OK) {
return TCL_ERROR;
}
Here follow some code examples of how to query the PI Service proxy from the client application. The configuration used to make these examples
can be found in the attached Example Configuration.
/**
 * With this test the system time of a running Fews Region (on port 8191) can be retrieved. If the
 * system time of the region is changed then the returned value here will match the change.
 */
public void testSystemTime() {
String clientId = "not required";
Date date = [Link](clientId);
/**
* Return the warm state times from the data base for the configured moduleState
id=RSNWELEV_DETO3I
* in the service configuration file = TestConfig.
*/
public void testGetWarmStateTimes() {
//name of service config file
String clientId = "TestConfig";
//id of moduleState element
String moduleId = "RSNWELEV_DETO3I";
/**
* Return the cold state ids from the data base for the configured moduleState
id=RSNWELEV_DETO3I
* in the service configuration file = TestConfig.
*/
public void testGetColdStateIds() {
//name of service config file
String clientId = "TestConfig";
//id of moduleState element
String moduleId = "RSNWELEV_DETO3I";
/**
 * Return the warm state file from the data base for the configured moduleState id=SNOW17_LSMO3U
 * in the service configuration file = TestConfig. Write file to location d:/temp/<moduleStateId>.zip
 */
public void testGetModuleStateBinary() throws IOException {
//name of service config file
String clientId = "TestConfig";
//id of moduleState element
String moduleId = "SNOW17_LSMO3U";
} finally {
[Link]();
}
}
/**
 * Returns the content of a client configuration file from the config directory PiClientConfigFiles.
 *
 * Only requirement is that a configuration file (any type) with the clientId as name and moduleId as extension
 * must exist in this config directory.
 */
public void testGetClientConfigurationFile(){
//Name of client file
String clientId = "MyClientConfigFile";
//extension of client file
String moduleId = "txt";
/**
* Retrieve the content of the locations xml. Write this to d:/temp/[Link]
* @throws IOException
*/
public void testGetLocations() throws IOException {
/**
* Return the member indices for the ensemble id = "ESP" from the timeseries table.
*
* This only works when there are ensemble members in the TimeSeries table.
*/
public void testGetEnsembleMemberIndices(){
/**
* Get the logentries for an Import TaskId.
*
* Check the LogEntries table to find a matching taskId.
*
* @throws IOException
*/
public void testGetLogInformation() throws IOException {
/**
 * Return the timeseries defined in the service configuration file under timeSeries element with id 'Reservoir'.
*
* Filter using parameter ids QIN and RQIN and location ids DETO3, GPRO3 and FOSO3
*
* Check timeseries table to look for existing times for these timeseries.
*
* @throws IOException
*/
public void testGetTimeSeries() throws IOException {
/**
* Upload a pi_diag log file to the region.
*
* Make sure to create an input file in corresponding directory.
*
* @throws IOException
*/
public void testInsertLogMessages() throws IOException {
/**
 * Upload a pi_timeseries file to the Region. The timeseries in this file must be linked to timeseries configured
 * in the PiServiceConfig file.
*
* Make sure to create an input file in the corresponding directory.
*
* @throws IOException
*/
public void testInsertTimeSeries() throws IOException {
String timeseriesText = [Link]("d:/temp/[Link]");
[Link]("Input timeseries: " + timeseriesText);
[Link]("TestConfig", null, "Reservoir", timeseriesText, null, -1);
}
/**
* Schedule a task run and wait for it to finish.
*
*/
public void testRunningTask(){
String taskId = [Link]("id not required");
Date date = [Link]("");
[Link]("Starting task with id " + taskId + " time=" + new
Date([Link]()));
String taskRunId = [Link]("id not required", taskId, "Santiam_Forecast",
date, date, date, null, null, "Test user", "PiWebservice taskrun");
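The fragments above omit the surrounding plumbing; the sketch below pulls the task-related calls together into a single hedged example. The workflow id, user id and description are taken from the fragment above, while the client id strings, the polling interval and the handling of the returned ids are illustrative assumptions.
import java.util.Date;

public class RunTaskExample {
    // 'piService' is a FewsPiService proxy obtained as in the connection example above.
    static void runForecast(FewsPiService piService) {
        // Obtain a new task id and use the current system time of the region as time zero.
        String taskId = piService.createTask("id not required");
        Date timeZero = piService.getSystemTime("id not required");
        // Schedule the workflow run; start and end times simply reuse time zero here,
        // no cold state is forced and no "what if" scenario is used.
        String taskRunId = piService.runTask("id not required", taskId, "Santiam_Forecast",
                timeZero, timeZero, timeZero, null, null, "Test user", "PiWebservice taskrun");
        // Poll every 10 seconds until the task has completed or been cancelled.
        // (The attached C example passes the returned task-run id here instead of the task id.)
        boolean finished = piService.waitForTask("id not required", taskId, 10000);
        System.out.println("Task run " + taskRunId + " finished: " + finished);
        // Retrieve the diagnostics produced for the task.
        System.out.println(piService.getLogMessages("id not required", taskId));
    }
}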
Appendix
FewsPiService WSDL
Example of the FewsPiService WSDL definition. This definition file is used by other applications to connect to the webservice.
[Link]
FewsPiService API
/* ================================================================
* Delft FEWS
* ================================================================
*
* Project Info: [Link]
* Project Lead: Karel Heynert ([Link]@[Link])
*
* (C) Copyright 2003, by WL | Delft Hydraulics
* P.O. Box 177
* 2600 MH Delft
* The Netherlands
* [Link]
*
* DELFT-FEWS is a sophisticated collection of modules designed
* for building a FEWS customised to the specific requirements
* of individual agencies. An open modelling approach allows users
* to add their own modules in an efficient way.
*
* ----------------------------------------------------------------
* [Link]
* ----------------------------------------------------------------
* (C) Copyright 2003, by WL | Delft Hydraulics
*
* Original Author: Erik de Rooij
* Contributor(s):
*
* Changes:
* --------
* 10-Sep-2007 : Version 1 ();
*
*
*/
package [Link];
import [Link];
/**
* TODO <code>ISSUES</code>
*<li>Optional Date arguments can not be set to NULL when not used.</li>
 *<li> When retrieving timeseries, what use is it to add the ensemble Id? The timeseries set already contains the ensemble id. </li>
 *<li> Some interface calls do not require the clientId. Should we remove this to simplify things?</li>
*/
public interface FewsPiService {
/**
 * @param clientId Id of web service client (obtained on command line when invoked)
 * @param fileExtension Case insensitive. Extension of the config file (e.g. xml, ini). One client can have multiple config files
 * with different file extensions.
* @return client config text file. Format of file must be known by client.
*/
String getClientConfigFile(String clientId, String fileExtension);
/**
* Create a new Task. Use return Task id when exporting data to the WebService.
* @param clientId Id of web service client (obtained on command line when invoked)
* @return Task id for the new task.
*/
String createTask(String clientId);
/**
* Return the current time zero of the system.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @return Time zero
*/
Date getSystemTime(String clientId);
/**
* TODO
*
* ID of Configured timezone for the webservice
* @param clientId
* @return
*/
String getTimeZoneId(String clientId);
/**
* Run a Single task. TaskId must be obtained using the method {@link
FewsPiService#createTask(String)}.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task Id
* @param workflowId Workflow Id
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
* @param coldStateId String identifying the cold state to use - NULL if a cold state start is not
forced
* @param scenarioId String identifying the "what if" scenario - NULL if not used
* @param userId Id of user running task.
* @param description Description
* @return Returns the TaskRun id
*/
String runTask(String clientId, String taskId, String workflowId, Date startTime, Date timeZero,
Date endTime, String coldStateId, String scenarioId, String userId, String description);
/**
* Request Ids of available cold states
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param id Id of the State module instance for which to retrieve the cold state ids.
* @return List of available cold state groups
*/
String[] getColdStateIds(String clientId, String id);
/**
* Request run status of task. This can be used to wait for a task to complete
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task Id
* @param waitMillis number of milli-seconds to wait between status requests
* @return boolean if task is complete or has been cancelled
*/
boolean waitForTask(String clientId, String taskId, int waitMillis);
/**
 * Cancel a running task.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task Id
*/
void cancelTask(String clientId, String taskId);
/**
* Retrieve the indices for the given ensemble id.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param ensembleId Id of the ensemble
* @return All valid indices for this ensemble.
*/
int[] getEnsembleMemberIndices(String clientId, String ensembleId);
/**
* Write timeseries associated to a specific task to webservice. The webservice will store this
information in the database.
*
* If the time series is an ensemble then each ensemble member needs to be submitted individually.
* using the <i>ensembleMemberIndex</i> argument. For deterministic time series the
<i>ensembleMemberIndex</i> is NULL
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid ensemble member index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is NULL the time series is written as data visible to all other processes in FEWS
* In case it is TaskId the time series will be used only by the task run with TaskId (e.g.
scenario time series)
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of the Pi timeseries xml content.
* @param piTimeSeriesXmlContent Time Series content in the form of a Pi timeseries xml file.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putTimeSeries(String clientId, String taskId, String id, String piTimeSeriesXmlContent,
String ensembleId, int ensembleMemberIndex);
/**
* Write timeseries. The webservice will store this information in the database.
*
* <p>
* For performance reasons it is possible to split the timeseries header information from the
timeseries data. The header information
* is stored in the <i>piTimeSeriesXmlContent</i> and the timeseries data is stored in the
<i>byteTimeSeriesContent</i>.
* <p>
* If the time series is an ensemble then each ensemble member needs to be submitted individually.
* using the <i>ensembleMemberIndex</i> argument. For deterministic time series the
<i>ensembleMemberIndex</i> is NULL
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid ensemble member index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is NULL the time series is written as data visible to all other processes in FEWS
* In case it is TaskId the time series will be used only by the task run with TaskId (e.g.
scenario time series)
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of the Pi timeseries xml content.
* @param piTimeSeriesXmlContent TimeSeries content in the form of a Pi timeseries xml file.
* @param byteTimeSeriesContent TimeSeries data content in the form of a byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putTimeSeriesBinary(String clientId, String taskId, String id, String piTimeSeriesXmlContent,
byte[] byteTimeSeriesContent, String ensembleId, int ensembleMemberIndex);
/**
 * Write information about the parameter set file to the webservice.
*
* The TaskId may be NULL or the requested task id.
 * In case it is NULL the parameters version will be updated
* In case it is TaskId the parameters will be used only by the task run with TaskId (e.g.
scenario run)
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of parameter set
* @param piParameterSetXmlContent Parameters content in the form of a Pi parameters xml file.
* @param validityStartTime Start time of parameter validity (NULL if not applicable)
* @param validityEndTime End time of parameter validity (NULL if not applicable)
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putModuleParameterSet(String clientId, String id, String taskId, String
piParameterSetXmlContent, Date validityStartTime, Date validityEndTime, String ensembleId, int
ensembleMemberIndex);
/**
* Write information about the dataset file for the given taskId back to the webservice. The
webservice will store this information and will add it to the
* task properties when the task is run.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param id Id of Module DataSet file.
* @param byteModuleDataSetContent Zipped module dataset file
* @param validityStartTime Start time of dataset validity (NULL if not applicable)
* @param validityEndTime End time of dataset validity (NULL if not applicable)
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putModuleDataSet(String clientId, String taskId, String id, byte[] byteModuleDataSetContent,
Date validityStartTime, Date validityEndTime, String ensembleId, int ensembleMemberIndex);
/**
* Write state information to webservice. The webservice will store this information in the
database.
*
* <p>
 * The state information consists of two separate parts. The <i>piStateXmlContent</i> containing
information
* about the state files. And the <i>byteStateContent</i> containing the actual state data for the
module.
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param piStateXmlContent Pi state xml file.
* @param byteStateFileName name of the state file data content byte array.
* @param byteStateContent State file data content in the form of a byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
void putState(String clientId, String taskId, String piStateXmlContent, String byteStateFileName,
byte[] byteStateContent, String ensembleId, int ensembleMemberIndex);
/**
* Put a log message
*
* @param clientId Id of web service client (obtained on command line when invoked)
* @param piDiagnosticsXmlContent Pi Diagnostics xml file.
*/
void putLogMessage(String clientId, String piDiagnosticsXmlContent);
/**
* Read module dataset information from webservice.
*
* <p>
*Default data set is returned.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the module data set .
* @return Module data set file as byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
byte[] getModuleDataSet(String clientId, String id, String ensembleId, int ensembleMemberIndex);
/**
* Read module parameter set information from webservice.
*
* <p>
*Default data parameterSet is returned.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id name of the binary module parameter set file.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
* @return Module parameter set PiParameters xml.
*/
String getModuleParameterSet(String clientId, String id, String ensembleId, int
ensembleMemberIndex);
/**
* Read all available state times for requested state file.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the module state .
* @return All available state times for this module state file.
*/
Date[] getAvailableStateTimes(String clientId, String id);
/**
* Read module state information from webservice.
*
* <p>
*Module state data file is returned for given time. Use method {@link
FewsPiService#getAvailableStateTimes(String, String)}
* to retrieve the available state times.
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the state .
* @param stateTime Time for which to retrieve a state file.
* @return Module state data file as byte array.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. NULL if this is not an
ensemble.
*/
byte[] getModuleStateBinary(String clientId, String id, Date stateTime, String ensembleId, int
ensembleMemberIndex);
/**
* Read the timeseries from the webservice. Returns a pi timeseries xml file containing the
timeseries information.
*
* <p>
* If the ensemble id has been configured for this timeseries then add the
<i>ensembleMemberIndex</i> as argument. Use NULL if not an ensemble
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is TaskId the time series is retrieved for the taskId only for simulated time series
 * In case it is NULL time series for the current forecast will be retrieved
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the time series string.
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
 * @param parameterIds Subset of parameters for which to retrieve timeseries.
* @param locationIds Subset of locations for which to retrieve timeseries.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
* @return PiTimeseries xml file content.
*/
String getTimeSeries(String clientId, String id, String taskId, Date startTime, Date timeZero,
Date endTime, String[] parameterIds, String[] locationIds, String ensembleId, int
ensembleMemberIndex);
/**
* Read the timeseries from the webservice. Returns a pi timeseries xml file
* containing the timeseries headers information. Retrieve the timeseries data using the method
* {@link FewsPiService#getTimeSeriesBytes(String, String, String, [Link], [Link],
[Link], String[], String[], String, int)}
*
* <p>
* If the ensemble id has been configured for this timeseries then add the
<i>ensembleMemberIndex</i> as argument. Otherwise
* this argument is skipped by the webservice.
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is TaskId the time series is retrieved for the taskId only for simulated time series
 * In case it is NULL time series for the current forecast will be retrieved
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the time series string.
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
 * @param parameterIds Subset of parameters for which to retrieve timeseries.
* @param locationIds Subset of locations for which to retrieve timeseries.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
* @return PiTimeseries xml file content.
*/
String getTimeSeriesHeaders(String clientId, String id, String taskId, Date startTime, Date
timeZero, Date endTime, String[] parameterIds, String[] locationIds, String ensembleId, int
ensembleMemberIndex);
/**
* Read the timeseries data from the webservice. Returns the data belonging to the
* timeseries that are retrieved when the method {@link FewsPiService#getTimeSeriesBytes(String,
String, String, [Link], [Link], [Link], String[], String[], String, int)}
* is called using the same arguments.
*
* <p>
* If the ensemble id has been configured for this timeseries then add the
<i>ensembleMemberIndex</i> as argument. Otherwise
* this argument is skipped by the webservice.
* Use {@link FewsPiService#getEnsembleMemberIndices(String, String)}
* to obtain valid index values.
*
* The TaskId may be NULL or the requested task id.
* In case it is TaskId the time series is retrieved for the taskId only for simulated time series
 * In case it is NULL time series for the current forecast will be retrieved
*
* @param clientId Id of webservice configuration that is to be queried
* @param id Id of the time series string.
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @param startTime start date/time of run - NULL if the configured default is to be used
* @param timeZero Forecast time zero.
* @param endTime end date/time of run - NULL if the configured default is to be used
 * @param parameterIds Subset of parameters for which to retrieve timeseries.
* @param locationIds Subset of locations for which to retrieve timeseries.
* @param ensembleId Id of the ensemble
* @param ensembleMemberIndex Ensemble member index for this time series. (Only if configured)
* @return PiTimeseries xml file content.
*/
byte[] getTimeSeriesBytes(String clientId, String id, String taskId, Date startTime, Date
timeZero, Date endTime, String[] parameterIds, String[] locationIds, String ensembleId, int
ensembleMemberIndex);
/**
* get a log message associated to a specified taskId
* @param clientId Id of web service client (obtained on command line when invoked)
* @param taskId Task id. Obtained using method {@link FewsPiService#createTask(String)}
* @return PiDiagnostics XML file content.
*/
String getLogMessages(String clientId, String taskId);
}
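As a usage illustration of the interface above, the hedged sketch below retrieves the 'Reservoir' time series from the example configuration, filtered on the parameter and location ids used in the test methods earlier in this section; the service configuration name 'TestConfig' is taken from those examples and everything else is illustrative.
import java.util.Date;

public class GetTimeSeriesExample {
    // 'piService' is a FewsPiService proxy obtained as in the connection example earlier.
    static String readReservoirSeries(FewsPiService piService, Date start, Date timeZero, Date end) {
        String[] parameterIds = {"QIN", "RQIN"};
        String[] locationIds = {"DETO3", "GPRO3", "FOSO3"};
        // taskId is NULL so time series for the current forecast are returned;
        // ensembleId is NULL and the member index is -1 because this is not an ensemble.
        return piService.getTimeSeries("TestConfig", "Reservoir", null,
                start, timeZero, end, parameterIds, locationIds, null, -1);
    }
}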
FewsPiServiceConfig XSD
Running Delft-FEWS in the background
When you use the PI service it is not necessary to have the main window displayed or anyone
operating the program at all. Under Windows, there is always a monitor, but on Linux machines
this may not be the case, especially with server machines.
To start Delft-FEWS in the background on Linux with no window present, use the following
recipe:
This way no window will be visible and no monitor will be needed. A small shell script
will take care of the details:
#
# Start the Xvfb server using screen "1" (to avoid issues with a possibly running X server)
Xvfb :1 &
#
# Set the DISPLAY environment variable so that FEWS will use the Xvfb server
#
export DISPLAY=:1.0
#
# Start FEWS (standalone or operator client) in the background
#
bin/[Link] REGION
Example C interface
When querying the FEWS database with a small C program like the one below, the result is a string - in simple cases, just the value or values you
wanted, in other cases the contents of an XML file, conforming to the FEWS published interface (PI).
The program below is a translation of most of the Java example. It simply prints the answer, but in an actual program you will need to parse the
XML content to extract the relevant information:
/*
* Sample program, using the wrappers
*/
#include <stdio.h>
/*
* Wrapper code is included
*/
#include "pi_wrapper.c"
/*
* Main program
*/
int main( int argc, char *argv[] ) {
char *result;
char *stateTimes;
char *firstTime;
char *textContents;
char *zipContents;
FILE *outf;
char *startTime;
char *systemTime;
char *endTime;
char *timeZero;
char *date;
char *taskId;
char *taskRunId;
int success;
printf( "State times: %s\n", getAvailableStateTimes( "TestConfig", "RSNWELEV_DETO3I" ) );
printf( "Cold states: %s\n", getColdStateIds( "TestConfig", "RSNWELEV_DETO3I" ) );
/* This test is excluded: there is no binary state file in the sample configuration */
#if 0
/* Extract the first time */
free( zipContents );
#endif
printf( "Locations:\n---begin---\n" );
printf( "%s", getLocations( "aa" ) );
printf( "---end---\n" );
#define TRUE 1
#define FALSE 0
printf( "Timeseries:\n---begin---\n" );
printf( "%s", getTimeSeries( "TestConfig", "Reservoir", NULL,
startTime, systemTime, endTime,
parameters, nParameters, locations, nLocations,
NULL, -1, FALSE ) );
printf( "---end---\n" );
/*
* Running a task ...
*/
printf( "Running Santiam_Forecast ...\n" );
taskRunId = runTask( "aa", taskId, "Santiam_Forecast", date, date, date, NULL, NULL,
"Test user", "PiWebservice taskrun" );
success = waitForTask( "aa", taskRunId, 120000); /* Wait for 120 seconds = 120000 ms */
TearDown();
}
Some notes
The program in the attachment uses several external libraries to take care of the actual connection. These libraries are: the Tcl library (version 8.4
or 8.5 should do) and the TclSOAP extension. More on this below.
Not all the services documented on the Wiki work for the sample configuration, as a local data store is not provided. This means that not
all services could be tested, but the main ones can be.
The C example produces the same output as the Tcl example, but there is a caveat when using the C interface: most wrapper functions
return a pointer to the Tcl result. Upon the next call to one of these wrapper functions, this pointer will be either invalidated ("dangling
pointer") or the contents of the memory it points to is changed. A solution would be to make a duplicate of the return value (see
getModuleStateBinary() for instance), but that puts the responsibility of cleaning up at the user's side.
Several C functions have extra arguments: getTimeSeries() takes two lists of IDs, the length of these lists is stored in an extra argument.
For returning the zipped contents of a module state file, I have also introduced an extra argument (C's strings are terminated with a null
byte, unless you use some count).
For convenience I have written a function that extracts an element from a space-separated list of words etc. This function contains a
memory leak, but that should not present a big problem.
Code
The C code for the wrapper functions and a Tcl example are available via the attachments.
The Tcl library and the TclSOAP extension can be downloaded from ActiveState ([Link]).
The TclSOAP extension uses the TclDOM 2.6 extension and Tcllib. All of these are available as open source.
Introduction
The Fews Workflow Runner service uses XFire, a java SOAP framework. This framework allows a client application to obtain a proxy instance to
the FewsWebServiceRunner API. With this API the client can run workflows on the MC from the client code. The timeseries produced by the
workflow run can be read by the client application. Before a client application can access the FEWS system there is some configuration work that
needs to be done.
Users looking to use XFire on a new project should use CXF instead. CXF is a continuation of the XFire project and is
considered XFire 2.0. It has many new features, a ton of bug fixes, and is now JAX-WS compliant! XFire will continue to be
maintained through bug fix releases, but most development will occur on CXF now. For more information see the XFire/Celtix
merge FAQ and the CXF website.
Runs a FEWS workflow on the MC.
clientId: A descriptive id used in logging and passed as user id in the taskProperties. Required
workflowId: A workflow id known by the MC configuration. Required
forecastStartDateTime: The start time of the forecast. If provided, a module state at or before the start time will be used. When not
specified the forecast will start at the last available warm state or will use a cold state when no warm state is available. WARNING!
Because XFire does not support nulls for date/times, pass new Date(0) instead of null. Optional
forecastDateTime0: The time for new saved states during this run, a time observed data is likely to be available for all stations. When
not specified the current time will be used. WARNING! Because XFire does not support nulls for date/times pass new Date(0) instead of
null.
forecastEndDateTime: The end time of the forecast. When not specified a default is used specified in the fews configuration.
WARNING! Because XFire does not support nulls for date/times pass new Date(0) instead of null. Optional.
inputTimeSeries: The input timeseries required by the workflow.
returns: The output timeseries produced by the workflow.
throws: An exception when something goes wrong.
The MC Service component is started up by the MC (TaskWebServiceRunner). The client application does not use this service directly. The only
configuration required for this component is that the following line is added to the MC configuration file: [Link].
The Workflow Runner service requires a configuration file that is an instance of the [Link].
port: This is the port number on which the FewsWebService will be hosted. This port must be accessible by the client application.
timeOutSeconds: This is the length of time that the FewsWebService will wait for the workflow to complete running.
inputPiTimeSeriesFile: This is the file from which the MC workflow run will read the input timeseries. When calling the FewsWebService
API the timeseries passed as argument will be written to this file. This file must therefore match the file configured in the MC workflow.
outputPiTimeSeriesFile [1..>: These are the files to which the MC workflow will write the output timeseries. When calling the
FewsWebService API the timeseries are read from the output files after the workflow run is completed. These timeseries are returned by
the call. These files must therefore match the files configured in the MC workflow.
mcTaskWebService: This contains information that allows the FewsWebService to connect to a specific running instance of the
McTaskWebService. Although this entry is marked as optional in the schema, it is required.
Starting on Windows
Step 2: Make a new [Link] and [Link] file in the \bin directory. The [Link] must contain the following
information.
/[Link]
Step 5: Stop the FewsWebServiceRunner by killing the application using the System Monitor.
Starting on Linux
825
Step 4: To stop the service type ./fews_webservice.sh stop
To make sure that the service keeps running there is also a 'watcher' script, which should be run as a cron job. This script checks whether the
fews webservice script is still running; if the service is not running, it is restarted.
Example code
Here are some examples of how a client application would instantiate a FewsWebService and fire off a workflow to the MC.
Before starting, the client requires the following library: [Link]. This library can be found in the bin directory of the FEWS system.
Setting up a connection
Running Workflows
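The original examples are provided as attachments; below is a hedged sketch of both steps. The client interface FewsWebService declared here, its runWorkflow method and the endpoint URL are hypothetical placeholders that mirror the argument list described in the introduction; in practice the real interface provided by the [Link] library should be used instead.
import java.util.Date;
import org.codehaus.xfire.client.XFireProxyFactory;
import org.codehaus.xfire.service.Service;
import org.codehaus.xfire.service.binding.ObjectServiceFactory;

public class WorkflowRunnerExample {
    // Hypothetical client interface mirroring the argument list described in the introduction.
    public interface FewsWebService {
        String[] runWorkflow(String clientId, String workflowId, Date forecastStartDateTime,
                Date forecastDateTime0, Date forecastEndDateTime, String[] inputTimeSeries);
    }

    public static void main(String[] args) throws Exception {
        // Setting up a connection: create an XFire proxy to the Workflow Runner service.
        // The URL is a placeholder; the port must match the 'port' element in the service configuration.
        Service model = new ObjectServiceFactory().create(FewsWebService.class);
        FewsWebService service = (FewsWebService) new XFireProxyFactory()
                .create(model, "http://localhost:8080/FewsWebService");

        // Running a workflow: XFire does not support null date/times, so new Date(0)
        // is passed for the optional start and end times, as noted in the introduction.
        String[] piInput = { "<TimeSeries/>" };  // placeholder PI timeseries content
        String[] output = service.runWorkflow("example-client", "MyWorkflow",
                new Date(0), new Date(), new Date(0), piInput);
        System.out.println("Received " + output.length + " output time series");
    }
}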
Appendix
WebService XSD
JDBC vs. FewsPiService
Currently the FEWS JDBC server and the FewsPiService co-exist within the FEWS system. They are both hosted by an instance of a FEWS
Operator Client. At present an Operator Client can host either an instance of the FEWS JDBC server or an instance of the FewsPiService, not
both at the same time.
simple access to Locations, Parameters and Timeseries
predefined graphs
simple timeseries statistics
access to the Filter configuration
Description: The runInLoopParallelProcessorCount entry in the global properties file indicates the number of cores Delft-FEWS may
use when running ensemble members in a loop
Remark(s): The speedup that may be obtained is highly dependent on the type of module you are running
Contents
Overview
Configuration
Tested modules
Sample input and output
Error and warning messages
Known issues
Related modules and documentation
Technical reference
Overview
Delft-FEWS can split ensemble workflows (that have the runInLoop element set to true) over multiple cores. Based on the available number of
cores, a number of queues is made, one for each core. When running the activity, the different ensemble members are added to the different
queues. An example of a workflow that can use this feature is shown below:
<activity>
<runIndependent>true</runIndependent>
<moduleInstanceId>MOGREPS_Spatial_Interpolation</moduleInstanceId>
<ensemble>
<ensembleId>MOGREPS</ensembleId>
<runInLoop>true</runInLoop>
</ensemble>
</activity>
Configuration
By default Delft-FEWS will only use one core and all tasks are run one after another. To enable the parallel running of ensemble members the
runInLoopParallelProcessorCount entry must be set in the global properties file. Here you either specify the number of cores to use or specify 100
to use all available cores.
Config Example
# to use all available cores/cpu's:
runInLoopParallelProcessorCount=100
For all internal Delft-FEWS modules that have been tested no changes are needed to the configuration. For external modules that are run using
the General adapter some changes may be needed to the configuration.
Tested modules
Module Remarks
pcrTransformation Test ok
Known issues
Running modules in parallel means you will use more memory
In some cases, the increase in speed may be very limited. Although it depends on the case at hand, the following simple rules may be used to
determine the expected increase in execution speed:
execution time of an individual module <= 1 sec: expected increase < 20%
execution time of an individual module > 1 sec and < 10 sec: expected increase between 20 and 50%
execution time of an individual module > 10 sec: expected increase > 50% and < 100%
The percentage given in the list above should be scaled using the number of cores used. The 100% in the example above is a two-fold increase
using two cores.
Other factors that influence this are the amount of data being retrieved and stored in the FEWS database in relation to the total execution time and
(in the case of an external module) the amount of data written to and read from the file system.
Technical reference
Appendices
A Colours Available in DELFT-FEWS
B Enumerations
alice blue alice blue eff7ff medium turquoise medium turquoise 48cccd
white white ffffff medium violet red medium violet red ca226b
antique white antique white f9e8d2 midnight blue midnight blue 151b54
antique white1 antique white1 feedd6 mint cream mint cream f5fff9
antique white2 antique white2 ebdbc5 misty rose misty rose fde1dd
antique white3 antique white3 c8b9a6 misty rose2 misty rose2 ead0cc
antique white4 antique white4 817468 misty rose3 misty rose3 c6afac
blanched almond blanched almond fee8c6 navajo white2 navajo white2 eac995
blue violet blue violet 7931df navajo white4 navajo white4 806a4b
cadet blue cadet blue 578693 old lace old lace fcf3e2
cadet blue1 cadet blue1 99f3ff olive drab olive drab 658017
cadet blue2 cadet blue2 8ee2ec olive drab1 olive drab1 c3fb17
cadet blue3 cadet blue3 77bfc7 olive drab2 olive drab2 b5e917
cadet blue4 cadet blue4 4c787e olive drab3 olive drab3 99c517
dark goldenrod dark goldenrod af7817 orange red2 orange red2 e43117
dark goldenrod1 dark goldenrod1 fbb117 orange red3 orange red3 c22817
dark goldenrod2 dark goldenrod2 e8a317 orange red4 orange red4 7e0517
dark goldenrod4 dark goldenrod4 7f5217 pale goldenrod pale goldenrod ede49e
dark green dark green 254117 pale green pale green 79d867
dark khaki dark khaki b7ad59 pale green1 pale green1 a0fc8d
dark olive green dark olive green 4a4117 pale green2 pale green2 94e981
dark olive green1 dark olive green1 ccfb5d pale green3 pale green3 7dc56c
dark olive green2 dark olive green2 bce954 pale green4 pale green4 4e7c41
dark olive green3 dark olive green3 a0c544 pale turquoise pale turquoise aeebec
dark olive green4 dark olive green4 667c26 pale turquoise1 pale turquoise1 bcfeff
dark orange dark orange f88017 pale turquoise2 pale turquoise2 adebec
dark orange1 dark orange1 f87217 pale turquoise3 pale turquoise3 92c7c7
dark orange2 dark orange2 e56717 pale turquoise4 pale turquoise4 5e7d7e
dark orange3 dark orange3 c35617 pale violet red pale violet red d16587
dark orange4 dark orange4 7e3117 pale violet red1 pale violet red1 f778a1
dark orchid dark orchid 7d1b7e pale violet red2 pale violet red2 e56e94
dark orchid1 dark orchid1 b041ff pale violet red3 pale violet red3 c25a7c
dark orchid2 dark orchid2 a23bec pale violet red4 pale violet red4 7e354d
dark orchid3 dark orchid3 8b31c7 papaya whip papaya whip feeccf
dark orchid4 dark orchid4 571b7e peach puff peach puff fcd5b0
dark salmon dark salmon e18b6b peach puff2 peach puff2 eac5a3
dark sea green dark sea green 8bb381 peach puff3 peach puff3 c6a688
dark sea green1 dark sea green1 c3fdb8 peach puff4 peach puff4 806752
dark sea green2 dark sea green2 b5eaaa pink pink faafbe
dark sea green3 dark sea green3 99c68e plum plum b93b8f
dark sea green4 dark sea green4 617c58 powder blue powder blue addce3
dark slate blue dark slate blue 2b3856 red red ff0000
dark slate gray dark slate gray 25383c rosy brown rosy brown b38481
dark slate gray1 dark slate gray1 9afeff rosy brown1 rosy brown1 fbbbb9
dark slate gray2 dark slate gray2 8eebec rosy brown2 rosy brown2 e8adaa
dark slate gray3 dark slate gray3 78c7c7 rosy brown3 rosy brown3 c5908e
dark slate gray4 dark slate gray4 4c7d7e rosy brown4 rosy brown4 7f5a58
dark turquoise dark turquoise 3b9c9c royal blue royal blue 2b60de
dark violet dark violet 842dce royal blue1 royal blue1 306eff
deep pink deep pink f52887 royal blue2 royal blue2 2b65ec
deep pink2 deep pink2 e4287c royal blue3 royal blue3 2554c7
deep pink3 deep pink3 c12267 royal blue4 royal blue4 15317e
deep pink4 deep pink4 7d053f sandy brown sandy brown ee9a4d
deep sky blue deep sky blue 3bb9ff sea green sea green 4e8975
deep sky blue2 deep sky blue2 38acec sea green1 sea green1 6afb92
deep sky blue3 deep sky blue3 3090c7 sea green2 sea green2 64e986
deep sky blue4 deep sky blue4 25587e sea green3 sea green3 54c571
dim gray dim gray 463e41 sea green4 sea green4 387c44
dodger blue2 dodger blue2 157dec sky blue sky blue 6698ff
dodger blue3 dodger blue3 1569c7 sky blue1 sky blue1 82caff
dodger blue4 dodger blue4 153e7e sky blue2 sky blue2 79baec
floral white floral white fff9ee sky blue4 sky blue4 41627e
forest green forest green 4e9258 slate blue slate blue 737ca1
ghost white ghost white f7f7ff slate blue1 slate blue1 7369ff
gray43 gray43 625d5d violet red4 violet red4 7d0541
gray82 gray82 cccccb cornsilk4 cornsilk4 817a68
lemon chiffon4 lemon chiffon4 827b60 maroon2 maroon2 e3319d
light goldenrod yellow light goldenrod yellow faf8cc plum1 plum1 f9b7ff
light sea green light sea green 3ea99f red2 red2 e41b17
light sky blue light sky blue 82cafa salmon1 salmon1 f88158
light sky blue2 light sky blue2 a0cfec salmon2 salmon2 e67451
light sky blue3 light sky blue3 87afc7 salmon3 salmon3 c36241
light sky blue4 light sky blue4 566d7e salmon4 salmon4 7e3817
light slate blue light slate blue 736aff seashell seashell fef3eb
light slate gray light slate gray 6d7b8d seashell2 seashell2 ebe2d9
light steel blue light steel blue 728fce seashell3 seashell3 c8bfb6
light steel blue1 light steel blue1 c6deff seashell4 seashell4 817873
light steel blue2 light steel blue2 b7ceec sienna1 sienna1 f87431
light steel blue3 light steel blue3 9aadc7 sienna2 sienna2 e66c2c
light steel blue4 light steel blue4 646d7e sienna3 sienna3 c35817
light yellow2 light yellow2 edebcb snow snow fff9fa
medium forest green medium forest green 347235 thistle3 thistle3 c6aec7
medium sea green medium sea green 306754 wheat4 wheat4 816f54
medium slate blue medium slate blue 5e5a80 yellow1 yellow1 fffc17
B Enumerations
A.1 GeoDatum
A.2 Time Zones
A.3 Units
A.4 Data quality flags
A.5 Synchronisation Levels
A.1 GeoDatum
DELFT-FEWS may use a number of national coordinate systems as geo-datum. These are referenced by all configurations requiring a definition of
geodatum.
All coordinates are handled internally as WGS 1984 (longitude-latitude). To add a new coordinate system to DELFT-FEWS, the transformation
between WGS 1984 and that system will need to be added as a Java class to DELFT-FEWS.
The user can also specify the UTM zone - this should be in the form UTM48N or UTM48S. The zones are shown below:
A.2 Time Zones
<timeZoneName>GMT+9</timeZoneName>
A.3 Units
DELFT-FEWS supports a list of units. Most of these are SI units.
Unit Description
m Metres
mm Millimetres
oC Degrees Centigrade
% Percentage
Bft Beaufort
- Dimensionless
A.4 Data quality flags
Quality flags are constructed on a philosophy of two qualifiers. The first describes the origin of the data and the second the quality.
+ Original: This entails the data value is the original value. It has not been amended by DELFT-FEWS
+ Completed: This entails the original value was missing and was replaced by a non-missing value.
+ Corrected: This entails the original value was replaced with another non-missing value.
Following this specification, the table below gives an overview of quality flag enumerations
Enumeration Description
0 Original/Reliable
The data value is the original value retrieved from an external source and it successfully passes all validation criteria set.
1 Corrected/Reliable
The original value was removed and corrected. Correction may be through interpolation or manual editing.
2 Completed/Reliable
Original value was missing. Value has been filled in through interpolation, transformation (e.g. stage discharge) or a model.
3 Original/Doubtful
Observed value retrieved from external data source. Value is valid, but marked as suspect due to soft validation limits being
exceeded.
4 Corrected/Doubtful
The original value was removed and corrected. However, the corrected value is doubtful due to validation limits.
5 Completed/Doubtful
Original value was missing. Value has been filled in as above, but resulting value is doubtful due to limits in
transformation/interpolation or input value used for transformation being doubtful.
6 Missing/Unreliable
Observed value retrieved from external data source. Value is invalid due to validation limits set. Value is removed
7 Corrected/Unreliable
The original value was removed and corrected. However, corrected value is unreliable
8 Completed/Unreliable
Original value was missing. Value has been filled in as above, but resulting value is unreliable,
9 Missing value in originally observed series. Note this is a special form of both Original/Unreliable and Original/Reliable.
Notes:
No difference is made between historic and forecast data. This is not considered a quality flag. The data model of DELFT-FEWS is
constructed such that this difference is inherent to the time series type definition.
A.5 Synchronisation Levels
Level Description Systems
1 Scalar time series imported from telemetry. NB the length of data synchronised will depend on the login-profile selected. Typically this will be data generated up to 7 days ago. all systems
2 All grid data from a forecast run (e.g. Flood Mapping results) all systems
3 Large volumes of scalar data such as CatAvg data (forecasts, actuals & NWP) all systems
4 Used for data imported infrequently such as Astronomical or Climatological data all systems
6 (small) Grid data imported from external forecast (synchronised to OC) all systems
7 Grid data imported from external forecast (synchronised to FSS & MC only, and not to OC) all systems
8 Performance indicator time series. These are time series that do not need to be synchronised with a short synchronisation interval or when a forecaster logs in with a minimum profile. all systems
11 Specific ModuleDataset files, which should be downloaded and activated directly after logging in and after each upload of a new version of the file (synch to OC). This is used in the Configuration manager when uploading the module dataset. most systems
16 (large) Grid data imported from external forecast (synch. to OC) NFFS: to distinguish between small (synchLevel 6) and large grids
WhatIfScenarioEditor
Scenario Editor
Description: Visual version of a what-if editor. Defines similar whatifs to the (old) WhatIfScenarioFilter
Why to Use? To provide graphical feedback on time series defined. To assign location properties to dummy locations
Where to Use?
Config Example
Outcome(s):
Remark(s):