
WATER RESOURCES SYSTEMS PLANNING AND DESIGN
(HEng 6025, IDEN 6022)

Adane Abebe (Dr.-Ing)
Hydrology and Water Resources
Course structure
• Hydrosystems Analysis
– Stochastic Hydrology
– Deterministic Modeling
• Water Resources Planning

Course outline
 Concept of systems and systems analysis
• Definition and types of systems
• Systems approach
• Systems analysis
 Water resources planning and management
• Scope of planning
• Planning approaches
• IWRM
 Hydrological data analysis
• Time series analysis
• Disaggregation modeling
• Markov models
 System techniques
• Simulation approaches
• Optimization approaches
 Optimization techniques
• Direct search methods
• Optimization using calculus
• Optimization using Lagrange multipliers
• Linear programming
• Non-linear programming
• Dynamic programming
• Multi-objective optimization
 Reservoir operation and management
• Reservoir sizing
• Reservoir operation
• Stochastic dynamic programming
 River basin modeling

References
• Global Water Partnership (2005). Monitoring and Evaluation Indicators for IWRM Strategies and Plans: Technical Brief 3. http://www.gwpforum.org.
• Heun, J.C. (2011). A Framework of Analysis for Water Resources Planning. UNESCO-IHE Lecture Notes. Delft, The Netherlands.
• Jain, S.K. and Singh, V.P. (2003). Water Resources Systems Planning and Management. Elsevier, The Netherlands.
• Loucks, D.P., van Beek, E., Stedinger, J.R., Dijkman, J.P.M. and Villars, M.T. (2005). Water Resources Systems Planning and Management: An Introduction to Methods, Models and Applications. UNESCO Publishing, Paris.
• Mays, L.W. and Tung, Y-K. (1992). Hydrosystems Engineering and Management. McGraw Hill, USA.
1.1 Concept of a System

System: Definition of a system (Dooge, 1973)


“any structure, device, scheme or procedure, real or abstract, that
interrelates in a given time reference, an input, cause, or stimulus, of
matter, energy, or information, and an output, effect or response, of
information, energy or matter.”

E.g. River basin system



System: A set of objects which interact in a regular, interdependent
manner; a collection of various factors arranged in an ordered form
with a purpose. E.g. a river basin system.
A system is characterized by:
– A system boundary: the rule that determines whether an element is
part of the system or of the environment
– A statement of input and output interactions with the environment
– A statement of the interrelationships between the various elements
of the system, called feedback
State of the system: Indicates the conditions or activity in the system
at a given time, e.g. water level in a reservoir, depth of flow.
System analysis: Arriving at management decisions based on the
systematic and efficient organization and analysis of information.

System Representation
Inputs I(t) pass through a transformation function with parameters b
and policies or controls a to produce outputs Q(t):
Q(t) = W(a, b) * I(t)
Some systems in WRPM: watershed, aquifer, development area,
detention basin.
• Mathematical model
• Typically a set of algebraic equations
• Derived from differential equations of
– Conservation of mass (e.g. continuity)
– Conservation of momentum (e.g. Manning)
– Conservation of energy (e.g. friction loss)
System characteristics: Linear–Nonlinear, Lumped–Distributed,
Steady-state–Transient, Deterministic–Stochastic
1.2 Classification of Systems

Linear and nonlinear systems: A linear system is one in which the
output is a constant ratio of the input. In a linear system the output
due to a combination of inputs equals the sum of the outputs from
each of the inputs individually, i.e. the principle of superposition.
Time-variant and time-invariant systems: In a time-invariant system
the input–output relationship does not depend on the time of
application of the input, i.e. the output is the same for the same input
at all times. E.g. the unit hydrograph.
Deterministic and stochastic systems: In a stochastic system the
input–output relationship is probabilistic.
Continuous and discrete systems: In a continuous system the changes
in the system take place continuously.

Linear time-invariant systems: the output is the discrete convolution
of the input with the system response function,

Qi = Σ(j=1..i) hj u(i−j+1)
1.3 Hydrologic System

• Hydrologic systems are distributed in time and space
• Systems are divided into subsystems for the purpose of solution
• Hydrologic system: a physical, sequential and dynamic system
• The input–output relationship can be expressed as:
y(t) = Φ[x(t)]
where x(t) and y(t) are the time functions of input and output
respectively; Φ[·] is the transfer function, the operation
performed to transform input into output.
For a catchment system, the input is water or energy of various
forms and the transfer function may be the unit hydrograph.

• Linear systems and basin response
– Continuity equation
– General storage equation
– Convolution equation
1.3 Water resource systems components

• Water resources management involves influencing and improving
the interaction of three interdependent subsystems:
• Natural river subsystem (NRS)
• Socio-economic subsystem (SES)
• Administrative and institutional subsystem (AIS) (Loucks et al., 2005)

Figure: Interactions among the subsystems and between them and
their environment

• Water resources system as an input-output system (Heun, 2011)


1.3 Water resource systems analysis

• Water resources problems are
– Complex, interconnected, and overlapping
– Involving water allocations, economic development, and
environmental preservation
• Systems analysis
– Breaks a complex system down into components and analyzes
the interactions between the components
– The central method used in water resources planning


• An optimal plan is selected through a systematic search and
evaluation of the various alternatives that meet the objectives and
constraints
• System analysis consists of five steps:
– Defining the problem
– Identifying the system and its elements
– Defining the objectives and constraints
– Identifying feasible alternatives that satisfy the constraints
– Identifying the most efficient alternative that best meets the
objectives
1.4 Advantages of the systems approach

• Focuses on definite goals and objectives
• Searches systematically for alternatives
• Provides modern technology to analyse the system scientifically
and objectively
• Forces the user to identify the known and the not readily known
elements of the system
• Regularly provides feedback from each step, thus providing
flexibility for correction and modification
• Deals with highly complex multi-objective, multi-constraint
problems
1.4 Disadvantages

• The systems approach is not suitable when there is a lack of proper
understanding of the water resources system or of its conflicting
objectives
• Most decisions are irreversible in nature and hence hazardous if the
different dimensions of the system (physical, socio-economic, etc.) are
not considered
• Practical difficulties due to the gap between theory and practice
• Transfer of technological advances to practical use requires
professionals with both theoretical background and practical experience
• Most water resources systems are complex, thereby demanding difficult
mathematical computations
• Unavailability of the software and/or data required
• Dealing with intangibles
2.1 Water Resources Planning

• Water Resources Planning (WRP) serves Water Resources


Management (WRM). WRP addresses the functioning of the Water
Resources System (WRS) in its integrated whole, taking into
consideration its social, economic and environmental functions.

• Planning (the formulation of development and management plans


and policies) is an important and often indispensable means to
support and improve operational management.

Common goals of WRPM


– Reducing the frequency and/or severity of the adverse
consequences of droughts, floods and excessive pollution
– Identification and evaluation of alternative measures that may
increase the available water supplies or hydropower, improve
recreation and/or navigation, and enhance the quality of water
and aquatic ecosystems
– Provide safe, reliable and affordable drinking water to people
without causing damage to environment
– Allocate scarce water resources among competing users in an
equitable manner
– Maximize net social and economic benefits from the operation
of a multipurpose dam while minimizing the environmental
damage
An example of a planning process to address water scarcity (figure)
2.2 Water Resources Planning

• Scope of planning endeavors
– Single-purpose plan
– Multi-purpose plan
– Master plan: a phased development plan, single- or multi-purpose,
for a certain geographic area over a specific time.
– Comprehensive or integrated plan: a multi-unit, multi-purpose and
multi-objective plan. It includes economic, financial, political,
social, and environmental objectives, and considers both structural
and non-structural (institutional) alternatives. It does not include
feasibility studies of individual projects.

• Scope of planning endeavors - in terms of areal extent

– International plan

– National plan

– Regional plan

– District plan

– River basin plan –hydrological boundary



Spatial scale for water resource planning


• Generally, the river basin is considered the most suitable spatial
scale for water resources planning (from a hydrological perspective)
• Within a basin, it is important to consider
– Upstream-downstream impacts
– Up-scaling and downscaling issues (e.g. from a farmer field to
cropping system to catchment to basin and vice versa)
– Administrative boundaries
• Outside of a basin, there are also many important considerations:
– National/country perspective
– Inter-basin transfers
– International context
– Virtual water
Spatial and temporal scales for water resource planning

Figure: Common spatial and temporal scales of models of various river
basin processes (Loucks et al., 2005)
2.3 Planning Approaches
• Top-down approach (command-and-control approach)
– Planning process typically dominated by professionals
– Very little stakeholder participation
– The approach assumes that one or more institutions have the ability
and authority to develop and implement the plan; in other words, that
they will oversee and manage the coordinated development and operation
of the basin's activities on the surface and ground waters of the basin.
– Widely practiced in the past century and still practiced in many
developing countries
– However, becoming less desirable and acceptable over time
• Bottom-up approach (grass-roots approach)
– Within the past two decades WRPM processes have increasingly
involved the active participation of interested stakeholders.
– Bottom-up planning must strive to achieve a common or 'shared'
vision of goals and priorities among all stakeholders.
• Typical analytical framework for water resources studies (Delft
Hydraulics) (figure)
2.3 Water Resources Planning
Need for integration in WRP
Interdependence calls for integration (GWP, 2000):
• Natural system (land and water, surface and ground waters,
quality and quantity, upstream and downstream, freshwater and
coastal water)
• Human system (water and the national economy, sectoral interests,
public–private, involving everybody)
Four dimensions of IWRM (Savenije and van der Zaag, 2008):
(1) water resources, (2) water users, (3) spatial scales, (4) temporal
scales
2.4 Water Resources Management

• Why the need for better water management?


Irrigation Development & the Aral Sea

Figure: Irrigated land area (10^6 ha) and river flow (km^3) in the
Aral Sea basin, 1930–2000, together with satellite images of the
shrinking Aral Sea, 1964–2009.
• Integrated Water Resources Management (IWRM)

Figure: The IWRM enabling environment at the intersection of
environmental, economic and social policy.
• Sustainable water resources management: Water resource
systems that are designed and managed to fully contribute to the
needs of society, now and in the indefinite future, while protecting
their cultural, ecological and hydrological integrity.

Sustainability principle to practice


• Broad guidance is available
• Difficult to translate guidance into operational concepts applied to
specific systems
• Requires
– Basin approach
– Considering externalities and economic efficiency
– Scaling up of processes
– Multidisciplinary approach with stakeholder input
2.4 River basin management

Figure: A river basin decision support system as a loop of measurement,
data, decision and implementation. Measurement networks collect
precipitation, temperature, humidity, streamflow, water quality,
groundwater, snow pack and evapotranspiration data; a processing and
archiving component (data base, data model, data display) feeds analysis
tools (rainfall/runoff, flooding, hydraulics, water pollution,
optimization, risk management, MCDM, operating rules, expert systems);
decision making then leads to implementation through infrastructure
control, institutional policies and incentives, warnings and alarms,
water allocation, dispute resolution and environmental flows.
3.1 Time Series Analysis
Topics: structure, spectral analysis, autoregressive models, moving
average models, ARMA models, partitioning of time series
Background
• If the properties of the process are unaffected by a change of time
origin, the process is called "strictly stationary", which means that
the joint probability distribution of any m observations made at
times t1, t2, …, tm is the same as that associated with m
observations made at times t1+k, t2+k, …, tm+k.
• It also follows from the definition of stationarity that a process
obtained by performing a linear operation on a stationary process is
also stationary.
• In particular, if Zt is a stationary process, then the first difference
∇Zt = Zt − Zt−1
and the higher differences ∇dZt are stationary.

Stochastic model, e.g. AR(1):
Xt − μ = φ1(Xt−1 − μ) + εt

Model family: AR(p), MA(q), ARMA(p,q), ARIMA(p,d,q)

Backward shift operator:
BZt = Zt−1, B²Zt = Zt−2, and in general BkZt = Zt−k

(Table: successive values Xt, Xt−1 and their differences Xt − Xt−1
for a series X1, …, Xn, illustrating differencing of a series.)

• A sequence of uncorrelated random variables at, at−1, at−2, … is
called a "white noise process". A linear filter model can be
represented by the following equation:
zt = μ + at + ψ1at−1 + ψ2at−2 + … = μ + ψ(B)at
where
μ is a parameter that determines the level of the process
ψ(B) is the transfer function of the filter:
ψ(B) = 1 + ψ1B + ψ2B² + … = Σ(j=0..∞) ψjB^j, with ψ0 = 1

White noise at → linear filter → zt

• The above equation can be written in terms of the deviation
z̃t = zt − μ:
z̃t = ψ(B)at
Squaring both sides and taking expected values yields
σz² = σa² Σ(j=0..∞) ψj²
The linear filter model can be put in another form in which the
current deviation z̃t = zt − μ is regressed on the past deviations
z̃t−1, z̃t−2, …

• Autoregressive (AR) processes
• An autoregressive process of order p, an AR(p) process, is
given by
Zt − μ = φ1(Zt−1 − μ) + φ2(Zt−2 − μ) + … + φp(Zt−p − μ) + at
or, using the backward shift operator:
φ(B)(Zt − μ) = at
where φ(B) = 1 − φ1B − φ2B² − … − φpB^p is the
autoregressive operator of order p.
φ(B)z̃t = at is equivalent to z̃t = ψ(B)at, where ψ(B) = φ⁻¹(B) = 1/φ(B).
The variance of the process is:
σz² = σa² / (1 − φ1ρ1 − φ2ρ2 − … − φpρp)

First order autoregressive (Markov) processes
• An autoregressive process of order 1, an AR(1) process or
Markov process, is given by
Zt − μ = φ1(Zt−1 − μ) + at
where μ is the mean level, φ1 is the AR parameter, and at is the
error term with zero mean and variance σa². at is assumed to be
independently and identically distributed:
E[at at+k] = σa² for k = 0, and 0 for k ≠ 0

• Using the backward shift operator B,
Zt − μ = φ1B(Zt − μ) + at
where BkZt = Zt−k.
The characteristic equation is given as
φ(B) = 1 − φ1B = 0
Thus for stationarity the parameter φ1 must satisfy
−1 < φ1 < 1
The autocorrelation function is ρk = φ1^k, k ≥ 0.
The variance of the process is
σz² = σa² / (1 − φ1²)

AR(1) process
Zt − μ = φ1(Zt−1 − μ) + at
Standard normal variate: εt = at/σa, with E[at] = 0
at = εtσa, and since σz² = σa²/(1 − φ1²), σa = σz√(1 − φ1²)
at = εtσz√(1 − φ1²)
Thus Zt − μ = φ1(Zt−1 − μ) + εtσz√(1 − φ1²)
Dividing through by σz, with ξt = (Zt − μ)/σz:
ξt = φ1ξt−1 + εt√(1 − φ1²)

Figure: Theoretical ACF for an AR(1) process with φ1 = 0.8
Figure: ACF and PACF of an AR(1) process

Second order autoregressive processes
• An autoregressive process of order 2, an AR(2) process, is given
by
Zt − μ = φ1(Zt−1 − μ) + φ2(Zt−2 − μ) + at
• The characteristic equation for the process is
φ(B) = 1 − φ1B − φ2B² = 0
• For the process to be stationary:
φ2 + φ1 < 1
φ2 − φ1 < 1
−1 < φ2 < 1

Second order autoregressive processes (cont'd)
The Yule-Walker equations can be solved for the parameters φ1 and φ2
to give
φ1 = ρ1(1 − ρ2) / (1 − ρ1²)
φ2 = (ρ2 − ρ1²) / (1 − ρ1²)
Using the stationarity conditions, the admissible values of ρ1 and ρ2
for a stationary AR(2) must lie in the region
−1 < ρ1 < 1
−1 < ρ2 < 1
ρ1² < (1 + ρ2)/2

Second order autoregressive processes (cont'd)
The variance of the process is
σz² = σa² / (1 − φ1ρ1 − φ2ρ2)
    = [(1 − φ2)/(1 + φ2)] · σa² / [(1 − φ2)² − φ1²]

Second order autoregressive processes (cont'd)
Example: A process is given by
Zt − μ = 1.2(Zt−1 − μ) − 0.5(Zt−2 − μ) + at
Is this process stationary? Determine the variance of Zt given the
variance of at as 1.0. Also obtain the first five serial correlation
coefficients.
Soln: φ1 = 1.2 and φ2 = −0.5 satisfy
φ2 + φ1 = 0.7 < 1, φ2 − φ1 = −1.7 < 1, −1 < φ2 < 1
so the process is stationary.
The variance of the process is
σz² = σa² / (1 − φ1ρ1 − φ2ρ2)
    = [(1 − φ2)/(1 + φ2)] · σa² / [(1 − φ2)² − φ1²] = 3.704
From ρ1 = φ1/(1 − φ2) = 0.8 and ρk = φ1ρk−1 + φ2ρk−2:
ρ2 = 0.46, ρ3 = 0.152, ρ4 = −0.048, ρ5 = −0.133
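The AR(2) stationarity check, Yule-Walker recursion for ρk, and variance formula can be sketched as follows; the function name and structure are illustrative, not part of the notes:

```python
def ar2_properties(phi1, phi2, var_a=1.0, nlags=5):
    """Stationarity, variance and ACF of an AR(2) process."""
    # Stationarity triangle: phi2+phi1 < 1, phi2-phi1 < 1, -1 < phi2 < 1
    stationary = (phi2 + phi1 < 1) and (phi2 - phi1 < 1) and (-1 < phi2 < 1)
    # ACF from rho_1 = phi1/(1-phi2), then rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}
    rho = [1.0, phi1 / (1 - phi2)]
    for _ in range(2, nlags + 1):
        rho.append(phi1 * rho[-1] + phi2 * rho[-2])
    # Variance: sigma_z^2 = sigma_a^2 / (1 - phi1*rho1 - phi2*rho2)
    var_z = var_a / (1 - phi1 * rho[1] - phi2 * rho[2])
    return stationary, var_z, rho[1:nlags + 1]

print(ar2_properties(1.2, -0.5))   # the example above
```

Running it for φ1 = 1.2, φ2 = −0.5 reproduces σz² ≈ 3.704 and ρ1 … ρ5 ≈ 0.8, 0.46, 0.152, −0.048, −0.133.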

• The sample autocorrelation
• For a given observed time series Z1, Z2, …, Zn the sample
ACF is defined as:
ρ̂k = γ̂k/γ̂0 = Σ(t=1..n−k) (Zt − Z̄)(Zt+k − Z̄) / Σ(t=1..n) (Zt − Z̄)²,  k = 0, 1, 2, …

Figure: Schematic of a correlogram with short and long memory



• Example: The Sample autocorrelation

• Consider the following ten values of a time series:



nk

 Z t  Z Z t  k  Z 
ˆ k  t 1
,
 Z Z
n
2
t
t 1
nk

 Z t  Z Z t  k  Z 
 t  k 1
 ˆ  k
 Z Z
n
2
t
t 1

• The Sample ACF is symmetric about the origin k=0.
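As a minimal sketch (not from the notes), the sample ACF formula translates directly into Python:

```python
def sample_acf(z, max_lag):
    """Sample ACF: rho_hat_k = sum_{t=1..n-k}(z_t - zbar)(z_{t+k} - zbar)
    divided by sum_{t=1..n}(z_t - zbar)^2."""
    n = len(z)
    zbar = sum(z) / n
    c0 = sum((x - zbar) ** 2 for x in z)   # lag-0 sum of squares
    return [sum((z[t] - zbar) * (z[t + k] - zbar) for t in range(n - k)) / c0
            for k in range(max_lag + 1)]

print(sample_acf([1.0, 2.0, 3.0, 4.0, 5.0], 2))
```

For the short illustrative series above the result is [1.0, 0.4, −0.1]; ρ̂0 = 1 by construction, as the formula requires.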



• The sample partial autocorrelation function: The partial
autocorrelation function (PACF) is an extension of autocorrelation
in which the dependence on the intermediate elements (those within
the lag) is removed. In other words, the partial autocorrelation is
similar to the autocorrelation, except that when calculating it the
autocorrelations with all the elements within the lag are partialled
out. It is computed recursively:
φ̂k+1,k+1 = [ρ̂k+1 − Σ(j=1..k) φ̂kj ρ̂k+1−j] / [1 − Σ(j=1..k) φ̂kj ρ̂j]
φ̂k+1,j = φ̂kj − φ̂k+1,k+1 φ̂k,k+1−j,  j = 1, …, k


• Sample partial autocorrelation function
φ̂11 = ρ̂1
φ̂22 = (ρ̂2 − ρ̂1²) / (1 − ρ̂1²)
φ̂21 = φ̂11 − φ̂22φ̂11
φ̂33 = (ρ̂3 − φ̂21ρ̂2 − φ̂22ρ̂1) / (1 − φ̂21ρ̂1 − φ̂22ρ̂2)
φ̂31 = φ̂21 − φ̂33φ̂22
φ̂32 = φ̂22 − φ̂33φ̂21
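The recursion above (the Durbin-Levinson scheme) can be sketched in Python; names are illustrative and rho is any ACF sequence with rho[0] = 1:

```python
def pacf_from_acf(rho):
    """PACF phi_kk for k = 1..len(rho)-1 via the recursive formulas:
    phi_{k+1,k+1} = (rho_{k+1} - sum phi_kj*rho_{k+1-j}) /
                    (1 - sum phi_kj*rho_j),
    phi_{k+1,j}   = phi_kj - phi_{k+1,k+1}*phi_{k,k+1-j}."""
    phi, pacf = {}, []
    for k in range(1, len(rho)):
        if k == 1:
            phikk = rho[1]                       # phi_11 = rho_1
        else:
            num = rho[k] - sum(phi[(k-1, j)] * rho[k-j] for j in range(1, k))
            den = 1.0 - sum(phi[(k-1, j)] * rho[j] for j in range(1, k))
            phikk = num / den
        phi[(k, k)] = phikk
        for j in range(1, k):
            phi[(k, j)] = phi[(k-1, j)] - phikk * phi[(k-1, k-j)]
        pacf.append(phikk)
    return pacf

# For an AR(1) ACF rho_k = 0.8^k the PACF cuts off after lag 1:
print(pacf_from_acf([1.0, 0.8, 0.64, 0.512]))
```

The cutoff of the PACF after lag p is exactly the identification signature of an AR(p) process used later in the notes.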

• Using data from example above.



• Important tools to identify an AR(p) process from an observed
time series are the ACF and the partial autocorrelation function
(PACF). Multiplying the AR(p) equation by (Zt−k − μ):
(Zt−k − μ)(Zt − μ) = φ1(Zt−k − μ)(Zt−1 − μ) + φ2(Zt−k − μ)(Zt−2 − μ)
+ … + φp(Zt−k − μ)(Zt−p − μ) + (Zt−k − μ)at
• By taking expectations of the terms,
γk = φ1γk−1 + φ2γk−2 + … + φpγk−p
• with k > 0. E[(Zt−k − μ)at] equals zero for k > 0, because Zt−k
only depends on the error process up to and including t − k
and is uncorrelated with at.

• The theoretical ACF is obtained by dividing the above eqn by
γ0:
ρk = φ1ρk−1 + φ2ρk−2 + … + φpρk−p
• Extending this eqn for k = 1, 2, …, p results in the set of Yule-
Walker equations:
ρ1 = φ1 + φ2ρ1 + … + φpρp−1
ρ2 = φ1ρ1 + φ2 + … + φpρp−2
⋮
ρp = φ1ρp−1 + φ2ρp−2 + … + φp

• which in matrix notation is ρ = Ppφ, with Pp the matrix of
autocorrelations ρ|i−j|.
• Now if φkj is the jth coefficient of an AR process of order k
(j = 1 … k), then φkk can be obtained by solving the Yule-Walker
system of order k for each k.

• The coefficient φkk is a function of lag k, which is called the
theoretical partial autocorrelation function (PACF). The
sample PACF is used in model identification; φkk is estimated
and plotted against k for k = 1, 2, …
• Example: The mean and standard deviation of the annual
flows of a certain river are 4.7 and 0.958 units respectively;
also the first serial correlation coefficient r1 = 0.324. Generate
three items of data (for times t, t+1 and t+2) using a Markov
model and the following independent standard normal variates:
0.87, −0.65, 1.15. Assume that ξt−1 = 0, that is xt−1 = 4.7.

• Soln: First multiply the nt values by sn = (1 − r1²)^0.5 = 0.946
• ξt = 0.324 × 0 + 0.946 × 0.87 = 0.823, xt = 0.958ξt + 4.7 = 5.49
• ξt+1 = 0.324 × 0.823 − 0.946 × 0.65 = −0.348,
xt+1 = 0.958ξt+1 + 4.7 = 4.37
• ξt+2 = −0.324 × 0.348 + 0.946 × 1.15 = 0.975,
xt+2 = 0.958ξt+2 + 4.7 = 5.63
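The Markov-model generation steps above can be sketched as a short routine (an illustrative implementation, not from the notes); note that carrying full precision through ξt+1 = −0.348 gives xt+1 = 4.37:

```python
import math

def generate_ar1(mean, sd, r1, variates, xi_prev=0.0):
    """Generate flows x_t = mean + sd*xi_t with the standardized AR(1)
    step xi_t = r1*xi_{t-1} + sqrt(1 - r1^2)*n_t."""
    s = math.sqrt(1.0 - r1 ** 2)      # = 0.946 for r1 = 0.324
    flows = []
    for nt in variates:
        xi = r1 * xi_prev + s * nt
        flows.append(mean + sd * xi)
        xi_prev = xi
    return flows

print([round(x, 2) for x in generate_ar1(4.7, 0.958, 0.324,
                                         [0.87, -0.65, 1.15])])
```

This reproduces the three generated flows 5.49, 4.37 and 5.63.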
• Example: Annual flows in a certain river basin are reduced to
unit mean, with an estimated standard deviation of 0.182. The first
and second serial correlation coefficients are 0.458 and −0.004.
It is found that a second order process provides a satisfactory
fit to the data. Generate three additional values using the
following independent standard normal variates: 1.352, −0.532,
0.789.

Solution
φ1 = r1(1 − r2)/(1 − r1²) = 0.458 × 1.004/(1 − 0.458²) = 0.582
φ2 = (r2 − r1²)/(1 − r1²) = (−0.004 − 0.458²)/(1 − 0.458²) = −0.271
R² = φ1r1 + φ2r2 = 0.582 × 0.458 + 0.271 × 0.004
√(1 − R²) = 0.856
ξt = 0.582 × 0 − 0.271 × 0 + 0.856 × 1.352 = 1.157,
xt = 0.182ξt + 1 = 1.211
ξt+1 = 0.582 × 1.157 − 0.271 × 0 − 0.856 × 0.532 = 0.218,
xt+1 = 0.182ξt+1 + 1 = 1.040
ξt+2 = 0.582 × 0.218 − 0.271 × 1.157 + 0.856 × 0.789 = 0.489,
xt+2 = 0.182ξt+2 + 1 = 1.089

Moving average (MA) processes
• The qth order moving average process, MA(q), is given by
Zt = μ + at − θ1at−1 − θ2at−2 − … − θqat−q
• Using the backward shift operator,
Zt − μ = θ(B)at, where θ(B) = 1 − θ1B − θ2B² − … − θqB^q
• The invertibility condition for an MA(q) process is that the roots
of the characteristic equation
θ(B) = 1 − θ1B − θ2B² − … − θqB^q = 0
lie outside the unit circle.
• Since the series ψ(B) = θ(B) = 1 − θ1B − θ2B² − … − θqB^q
is finite, no restrictions are needed on the parameters of the
moving-average process to ensure stationarity.

Moving average (MA) processes
• The autocorrelation function is
ρk = (−θk + θ1θk+1 + … + θq−kθq) / (1 + θ1² + θ2² + … + θq²),  k = 1, 2, …, q
ρk = 0,  k > q
• Thus, the autocorrelation function for a moving-average process of
order q has a cutoff after lag q.
• Unlike the Yule-Walker equations for an autoregressive process,
which are linear, these equations are nonlinear and therefore have to
be solved iteratively by a technique such as the Newton-Raphson
algorithm. The resulting parameter estimates are rough estimates at
the identification stage.

Moving average (MA) processes
• A first order moving average process, MA(1), is given by
Zt = μ + at − θ1at−1 = μ + (1 − θ1B)at
• Using the backward shift operator,
Zt − μ = θ(B)at
• where θ(B) = 1 − θ1B is the MA operator of order one. For the
process to be invertible, the parameter θ1 must satisfy |θ1| < 1.
However, the process is of course stationary for all values of θ1.
ρ1 = −θ1/(1 + θ1²), and ρk = 0 for k > 1
φkk = −θ1^k(1 − θ1²) / (1 − θ1^(2(k+1)))

Autoregressive moving average (ARMA) processes
• An autoregressive moving average ARMA(1,1) process is
given by
Zt − μ = φ1(Zt−1 − μ) + at − θ1at−1
• The ARMA(p,q) process is given by
φ(B)(Zt − μ) = θ(B)at
• where φ(B) and θ(B) are the AR(p) and the MA(q) operator,
respectively.
ρ1 = (1 − φ1θ1)(φ1 − θ1) / (1 + θ1² − 2φ1θ1)
ρ2 = φ1ρ1
ρk = φ1ρk−1,  k ≥ 2

Autoregressive moving average (ARMA) processes
• For an ARMA(1,1) fit to be stationary and invertible, the
autocorrelations must satisfy
|ρ2| < |ρ1|
ρ2 > ρ1(2ρ1 + 1) for ρ1 < 0
ρ2 > ρ1(2ρ1 − 1) for ρ1 > 0

Autoregressive moving average (ARMA(1,1)) processes
• The moment estimators for the ARMA(1,1) process are provided
below. From the variance of the ARMA(1,1) process:
σ̂a² = σz²(1 − φ̂1²) / (1 + θ̂1² − 2φ̂1θ̂1)
φ̂1 = r2/r1
θ̂1 = [−b ± √(b² − 4(r1 − φ̂1)²)] / [2(r1 − φ̂1)]
where b = 1 − 2φ̂1r1 + φ̂1²

Autoregressive moving average (ARMA(1,1)) processes
• Example: Obtain the first 4 serial correlation coefficients and
the variance of an ARMA(1,1) process represented by
Zt − μ = 0.75(Zt−1 − μ) + at − 0.5at−1
Assume σa² = 1. Check stationarity.
Soln: Since |φ1| = 0.75 < 1, the process is stationary.
ρ1 = (1 − 0.75 × 0.5)(0.75 − 0.5)/(1 + 0.5² − 2 × 0.75 × 0.5) = 0.3125
ρ2 = 0.75ρ1 = 0.2344
ρ3 = 0.1758
ρ4 = 0.1318
σz² = σa²(1 + θ1² − 2φ1θ1)/(1 − φ1²) = 0.5/0.4375 = 1.143
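The ARMA(1,1) formulas above can be checked numerically with a small sketch (illustrative names, not part of the notes):

```python
def arma11_acf(phi1, theta1, var_a=1.0, nlags=4):
    """ACF and variance of ARMA(1,1):
    rho_1 = (1 - phi1*theta1)(phi1 - theta1)/(1 + theta1^2 - 2*phi1*theta1),
    rho_k = phi1*rho_{k-1} for k >= 2,
    var_z = var_a*(1 + theta1^2 - 2*phi1*theta1)/(1 - phi1^2)."""
    denom = 1 + theta1 ** 2 - 2 * phi1 * theta1
    rho = [(1 - phi1 * theta1) * (phi1 - theta1) / denom]
    for _ in range(nlags - 1):
        rho.append(phi1 * rho[-1])
    var_z = var_a * denom / (1 - phi1 ** 2)
    return rho, var_z

rho, var_z = arma11_acf(0.75, 0.5)
print([round(r, 4) for r in rho], round(var_z, 3))
```

For φ1 = 0.75, θ1 = 0.5 this returns ρ = 0.3125, 0.2344, 0.1758, 0.1318 and σz² ≈ 1.143, matching the example.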

Nonstationary processes
Differencing
• Calculating differences allows a trend to be removed from a
series:
∇Zt = (Zt − μ) − (Zt−1 − μ) = Zt − Zt−1
∇Xt = Xt − Xt−1 = (1 − B)Xt
∇²Xt = ∇(∇Xt) = (1 − B)(1 − B)Xt = (1 − 2B + B²)Xt
     = Xt − 2Xt−1 + Xt−2
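The difference operator (1 − B)^d is a one-liner in practice; as a minimal sketch (illustrative, not from the notes):

```python
def difference(x, d=1):
    """Apply (1 - B)^d to a series: each pass computes x_t - x_{t-1}."""
    for _ in range(d):
        x = [b - a for a, b in zip(x, x[1:])]
    return x

# A linear trend is removed by one difference, a quadratic by two:
print(difference([1, 3, 6, 10, 15], d=1))   # first differences
print(difference([1, 3, 6, 10, 15], d=2))   # second differences
```

Each pass shortens the series by one value, and the second difference of the quadratic series above is constant, which is why differencing d times makes a polynomial trend of degree d stationary.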

Autoregressive integrated moving average (ARIMA) processes
• Basically, an ARIMA model is an ARMA model for stationary
differences:
φ(B)∇^d(Zt − μ) = θ(B)at
Seasonal integrated autoregressive moving average (SARIMA)
processes
• The general notation of a SARIMA (p,d,q)×(P,D,Q) model is
φ(B)Φ(B^s)∇^d∇s^D(Zt − μ) = θ(B)Θ(B^s)at

• Box and Jenkins (1976) distinguish three steps in model
construction:
• 1. Identification;
• 2. Estimation (calibration);
• 3. Diagnostic checking (verification).
• In the identification stage, one analyzes which stochastic model
is most representative of the observed time series.
Identification starts with a visual analysis of a time series
plot. A graph may indicate the presence of a seasonal
component or some other form of trend. It may be useful to
filter the series in order to obtain a smoother picture, which
reflects the mean level more clearly.

• Other tools in the identification are the sample ACF and the
sample PACF for univariate time series models. Plots of the
sample ACF and the sample PACF indicate the type of
stochastic process that can be assumed (AR, MA or ARMA)
and the order of this process.
• In the estimation stage the parameter values are estimated
using an optimization algorithm, based on a least squares
criterion or a maximum likelihood criterion.
• In the diagnostic checking (verification) stage it is checked
whether the model assumptions are satisfied. This is mainly
based on analysis of the residuals: the plot of residuals against
time, the residual ACF and the residual PACF.

Table: Properties of the autocorrelation and partial autocorrelation
functions of some time series models.

Typical autocorrelation and partial autocorrelation functions ρk and
φkk for various stationary models (Box and Jenkins, 1976).
3.3 Markov Models
Topics: time series analysis, disaggregation, Markov models
Markov chain analysis:


We have a set of states, {1,…,k}. The process starts in one of these
states and moves successively from one state to another. Each move is
called a step.
If the chain is currently in state si, then it moves to state sj at the next
step with a probability denoted by pij, and this probability does not
depend upon which states the chain was in before the current state. The
probabilities pij are called transition probabilities.
A Markov chain is a model in which the current value (at time t) of a
variable Y, taking values in {1,…,k}, is fully explained by the
value taken by the same variable at time t−1. This model is summarized
in a transition matrix C1 giving the probability distribution of Yt given
any possible value of Yt−1.
Example: Assume a day with P ≤ 2 mm is considered dry, wet otherwise.

Daily precipitation (mm): 1, 3, 5, 2, 4, 1, 1, 2, 6, 2, 7, 8, 4, 3
State:                    D, W, W, D, W, D, D, D, W, D, W, W, W, W

Counting the transitions from the 6 dry and 7 wet antecedent days gives
the transition probability matrix:

State    D      W
D       2/6    4/6
W       3/7    4/7
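The counting behind this example can be sketched in Python (a minimal illustration; the function name is our own):

```python
def transition_matrix(precip, wet_threshold=2.0):
    """Estimate dry/wet transition probabilities from a daily rainfall record.
    A day with P <= wet_threshold (mm) is 'D' (dry), otherwise 'W' (wet)."""
    states = ['D' if p <= wet_threshold else 'W' for p in precip]
    counts = {('D', 'D'): 0, ('D', 'W'): 0, ('W', 'D'): 0, ('W', 'W'): 0}
    for a, b in zip(states, states[1:]):        # successive day pairs
        counts[(a, b)] += 1
    probs = {}
    for origin in ('D', 'W'):
        total = counts[(origin, 'D')] + counts[(origin, 'W')]
        for dest in ('D', 'W'):
            probs[(origin, dest)] = counts[(origin, dest)] / total
    return probs

# The 14-day record from the example above
rain = [1, 3, 5, 2, 4, 1, 1, 2, 6, 2, 7, 8, 4, 3]
p = transition_matrix(rain)
```

Each row of the estimated matrix sums to one, as required of a transition probability matrix.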
Each row of C1 is a probability distribution summing to one. Since the
current value is fully determined by the knowledge of only one past
period, this model is said to be of order 1:

  Σ_{j=1}^{m} p_ij = 1,  for i = 1, 2, …, m

More generally, a Markov chain of order f, f ≥ 0, is a model in which
the current value is explained by all lags up to t−f. The transition
matrix is then of a larger size (for instance for f = 2 and K = 3).

The special case f = 0, in which the current value is independent of
the past, is called the independence model. The transition matrix then
becomes a single probability distribution: C0 = P(Yt = j) = (P1, …, Pk)
The relationship between Pt and Pt+1, the vector of unconditional
probabilities of reservoir states at time t+1, is given by

  Pt+1 = Q Pt

  P(Zt+1=i) = P(Zt+1=i | Zt=0)P(Zt=0) + P(Zt+1=i | Zt=1)P(Zt=1) + …
            + P(Zt+1=i | Zt=c)P(Zt=c)

A total of c+1 equations of this form can be written, one for each of
the c+1 elements of the vector Pt+1. Similarly,

  Pt+2 = Q Pt+1 = Q(Q Pt) = Q²Pt = Q⁽²⁾Pt

where Q⁽²⁾ denotes the two-step transition matrix of conditional
probabilities (with time homogeneity).

Let the initial state vector, P0, give the probability that the Markov
chain is in state i at time 0. Then the state vector Pn, giving the
probability that the chain is in state j after n transitions, can be
expressed as

  Pn = QⁿP0 = Q⁽ⁿ⁾P0

where P0 = [P1, P2, …, Pm]ᵀ, Pi is the probability of being in state i
at time 0, and Q⁽ⁿ⁾ = QQ…Q is the n-step transition matrix of
conditional probabilities.
Steady state probabilities

As n increases, the elements of the matrix Q⁽ⁿ⁾ approach, under the
ergodic condition, so-called steady-state values, with all columns
identical. Each column is equivalent to the vector Π, known as the
long-run or invariant distribution. This is the limiting vector of
unconditional probabilities.
When n is very large, Pn and Pn+1 each tend to Π, so Π satisfies
Π = QΠ together with ΣΠi = 1.
Example 1: Given a state space S={0,1}


This shows a fast approach to the steady state distribution:

The solutions are obtained from the simultaneous equations


Example 2:
Because the states do not communicate, and consequently Q⁽ⁿ⁾ = Q,
steady-state conditions cannot be reached. It is implied that inflows
equal the outflow (plus losses), a situation which negates the purpose
of a reservoir.
If all pairs of states communicate, the Markov chain is said to be
irreducible.
The chain described by the matrix is not ergodic, since transitions are
not possible from one state to any other state.
Example 3: The annual inflows Xt, t = 1, 2, 3, …, to a reservoir are
serially independent; Xt may be considered to be one of the following
volumetric units: 1, 2, 3, 4 and 5, where each unit is 5×10⁶ m³. The
inflow data, when converted to the above units, are approximately
normally distributed with a mean of 3 and a standard deviation of 1.
The reservoir capacity is 3 units, and the annual demand including
losses is 3 units. Also, the storage is, at any time, in one of the
following states:
State 0, empty, with storage not exceeding ½ unit.
State 1, one-third full, with storage between ½ unit and 1½ units.
State 2, two-thirds full, with storage exceeding 1½ units but not
exceeding 2½ units.
State 3, full, with storage exceeding 2½ units.
Thus the reservoir state space is S = {0, 1, 2, 3}.
The reservoir is commissioned on 1 January 1980.
Example 3:
(a) Determine the elements of the one-step annual transition matrix on
the assumption that the process is homogeneous in time.
(b) If the reservoir is initially full, what are the probabilities of the
various states on (i) 1 January 2080 and on (ii) 1 January 1982?
(c) If the reservoir is initially full, what is the probability that the
reservoir (i) will not be empty during the first 3 years of operation and
(ii) will be empty for the first time around 1 January 1983?
(d) What is the return period for a full reservoir?
Solution:
Xt ~ N(3, 1):

  f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

Reservoir capacity = 3 units; annual demand = 3 units; storage states
S0 (empty) to S3 (full) as defined above.
(a) From the tables of the normal distribution, the probabilities of the
five types of inflows are approximated as follows for t=0,1,2…

P(Xt=1)=0.061
P(Xt=2)=0.245
P(Xt=3)=0.388
P(Xt=4)=0.245
P(Xt=5)=0.061
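These class probabilities come from tables of the normal distribution. A sketch of the same computation, discretising X ~ N(3, 1) to the units 1..5 with the tails folded into the extreme classes, is below; note the slide's rounded values (0.061, 0.245, 0.388) differ slightly in the third decimal from the directly computed interval probabilities:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 3.0, 1.0
p = {}
p[1] = phi((1.5 - mu) / sigma)                 # X <= 1.5 (lower tail folded in)
p[5] = 1.0 - phi((4.5 - mu) / sigma)           # X > 4.5 (upper tail folded in)
for k in (2, 3, 4):                            # interior classes k - 0.5 < X <= k + 0.5
    p[k] = phi((k + 0.5 - mu) / sigma) - phi((k - 0.5 - mu) / sigma)
```

The five probabilities sum to one exactly, and the distribution is symmetric about the mean inflow of 3 units, as in the tabulated values.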
(a) The one-step transition probabilities q(i, j) = P(next state = i |
current state = j), i, j = 0, 1, 2, 3, are as follows.
q(0,0)=P(Xt≤3)=0.694
q(1,0)=P(Xt=4)=0.245
q(2,0)=P(Xt=5)=0.061
q(3,0)=P(Xt≥6)=0.000
1.000

q(0,1)=P(Xt≤2)=0.306
q(1,1)=P(Xt=3)=0.388
q(2,1)=P(Xt=4)=0.245
q(3,1)=P(Xt≥5)=0.061
(a) The one-step transition probabilities q(i, j), i, j = 0, 1, 2, 3
(continued).
q(0,2)=P(Xt≤1)=0.061
q(1,2)=P(Xt=2)=0.245
q(2,2)=P(Xt=3)=0.388
q(3,2)=P(Xt≥4)=0.306
1.000

q(0,3)=P(Xt<1)=0.000
q(1,3)=P(Xt=1)=0.061
q(2,3)=P(Xt=2)=0.245
q(3,3)=P(Xt≥3)=0.694
(a) Hence, the one-step transition matrix (column j = current state,
row i = next state) is given by:

        j=0    j=1    j=2    j=3
  Q = [ 0.694  0.306  0.061  0.000
        0.245  0.388  0.245  0.061
        0.061  0.245  0.388  0.245
        0.000  0.061  0.306  0.694 ]
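The q(i, j) values above assemble into a column-stochastic matrix. A quick numerical check of the column sums and of the long-run answers quoted in parts (b) and (d) below (0.273 for a full reservoir) can be sketched as:

```python
# Q[i][j] = P(next state = i | current state = j); each column sums to 1
Q = [
    [0.694, 0.306, 0.061, 0.000],
    [0.245, 0.388, 0.245, 0.061],
    [0.061, 0.245, 0.388, 0.245],
    [0.000, 0.061, 0.306, 0.694],
]

def step(Q, p):
    return [sum(Q[i][j] * p[j] for j in range(len(p))) for i in range(len(Q))]

# (b ii) reservoir initially full: P0 = [0, 0, 0, 1]^T, P2 = Q^2 P0
p0 = [0.0, 0.0, 0.0, 1.0]
p2 = step(Q, step(Q, p0))

# (b i) long-run distribution by power iteration (100 steps ~ year 2080)
pi = list(p0)
for _ in range(100):
    pi = step(Q, pi)
```

Power iteration is used here instead of solving Π = QΠ directly; both give the same invariant distribution for this ergodic chain.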
(b i) The following simultaneous equations are obtained by writing out
Π = QΠ. One of these equations, preferably a long one, is then replaced
by Π0 + Π1 + Π2 + Π3 = 1, and the solutions obtained are

  Π = [0.273, 0.227, 0.227, 0.273]ᵀ

which gives the probabilities of the various states after a long
period, such as 100 years.
(b ii) The initial vector of unconditional probabilities is
P0 = [0 0 0 1]ᵀ, and from the equation above, P2 = Q²P0.
Because the elements of the column vector P0 are zero except the last,
which is 1, the column vector P2 is equal to the last column of the
product QQ.
(c i) Given the initial vector of unconditional probabilities, in order
to determine the unconditional probabilities of first-time emptiness
after a period of t years of operation of the reservoir (t = 1, 2, 3, …),
the elements of the first column of the one-step transition matrix
Q are adjusted so that when the reservoir reaches an empty state
it remains empty thereafter. In this particular case an absorbing
state is set up by changing the first element to 1 and the other
elements in the first column to 0.
(c i) The adjusted matrix of transition probabilities is given by:

  Q' = [ 1.000  0.306  0.061  0.000
         0.000  0.388  0.245  0.061
         0.000  0.245  0.388  0.245
         0.000  0.061  0.306  0.694 ]
(c i) Applying Pt = (Q')ᵗP0 with P0 = [0 0 0 1]ᵀ, the first element of
the vector P3 is P3(0) = 0.090.

Therefore the probability of having storages greater than ½ unit during
the first 3 years of operation is 1 − 0.090 = 0.910.
(c ii) Because of the absorption state, P2(0) includes the
probabilities of first-time emptiness after 1 year and 2 years.
Likewise, P3(0) includes the probabilities of first-time
emptiness after 1, 2 and 3 years. Therefore the probability of an
empty reservoir for the first time at the end of the third year is
P3(0) − P2(0) = 0.056
(d) The steady state probability of a full reservoir from answers to b(i)
is 0.273. This is the long run probability of having a full reservoir
in any year. Hence, the return period of a full reservoir is
1/0.273=3.66 years (on average the reservoir will be full once in
3.66 years).
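Parts (c i) and (c ii) can be checked numerically by making state 0 absorbing, as described above (a sketch; Q' is assembled from the q(i, j) values given earlier):

```python
# Adjusted matrix: state 0 made absorbing (first column replaced by e0)
Qa = [
    [1.0, 0.306, 0.061, 0.000],
    [0.0, 0.388, 0.245, 0.061],
    [0.0, 0.245, 0.388, 0.245],
    [0.0, 0.061, 0.306, 0.694],
]

def step(Q, p):
    return [sum(Q[i][j] * p[j] for j in range(len(p))) for i in range(len(Q))]

p = [0.0, 0.0, 0.0, 1.0]          # reservoir full at commissioning
hist = [p]
for _ in range(3):                 # propagate 3 years
    p = step(Qa, p)
    hist.append(p)

prob_never_empty_3yr = 1.0 - hist[3][0]            # (c i): 1 - P3(0)
prob_first_empty_year3 = hist[3][0] - hist[2][0]   # (c ii): P3(0) - P2(0)
```

Because the empty state is absorbing, hist[t][0] accumulates all emptiness events up to year t, so differencing it recovers the first-passage probability.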
4.1 Modelling Techniques

• Popular operations research techniques include


– Optimization methods
– Simulation
– Game theory
– Queuing theory, etc.
• Among these, the popular ones in the water resources field are
optimization and simulation.
4.1 Optimization vs simulation
• Simulation models: Predict response to given design
• Optimization models: Identify optimal designs or policies
4.1 Optimization vs simulation
• Optimization models eliminate the worst solutions.
• Simulation tools evaluate the performance of various configurations of
the system, but they are not effective for choosing the best configuration.
• Simulation simply addresses 'what-if' scenarios – what may happen if
a particular scenario is assumed.
• Simulation is not feasible when there are too many alternatives for
decision variables, which demand an enormous computational effort.
• Optimization will determine the best decision; but the solution is often
based on many limiting assumptions.
• Full advantage of systems techniques: Optimization should be used to
define a relatively small number of good alternatives that can later be
tested, evaluated and improved by means of simulation.
4.2 Optimization

Basic components of an optimization problem :

– An objective function expresses the main aim of the model


which is either to be minimized or maximized.

– Set of unknowns or variables which control the value of the


objective function.

– Set of constraints that allow the unknowns to take on certain


values but exclude others.

Optimization problem is then to:


Find values of the variables that minimize or maximize the objective
function while satisfying the constraints.
4.2 Optimization

• Identify the best through evaluation from a number of possible


solutions
• Driving force in the optimization is the objective function (s)
• Optimal solution is the one which gives the best (either maximum or
minimum) solution under all assumptions and constraints
• An optimization model can be stated as:
Objective function: Maximize (or Minimize) f(X)
Subject to the constraints
gj(X) ≥ 0, j = 1,2,..,m
hj(X) = 0, j = m+1, m+2,.., p
X is the vector of decision variables; g(X) are the inequality
constraints; h(X) are the equality constraints.
4.2 Classification of optimization

Classification based on constraints


• Constrained optimization problems: Subject to one or more
constraints
• Unconstrained optimization problems: No constraints exist
4.2 Classification of optimization

Optimization problems can be classified as


(i) Linear programming: Objective function and all the
constraints are 'linear' functions of the design variables
(ii) Nonlinear programming : Any of the functions among the
objectives and constraint functions is nonlinear
(iii) Geometric programming : Objective function and
constraints are expressed as polynomials
(iv) Quadratic programming: Best behaved nonlinear
programming problem with a quadratic objective function and
linear constraints and is concave (for maximization problems)
4.2 Classification of optimization

Optimization problems can be classified as deterministic or


stochastic programming problems
(i) Deterministic programming problem: In a deterministic
system, for the same input, the system will produce the same
output always. In this type of problems all the design variables are
deterministic.
(ii) Stochastic programming problem: In this type of problem,
some or all the design variables are expressed probabilistically
(non-deterministic or stochastic).
4.2 Classification of optimization

Objective functions can be classified as single-objective and


multi-objective programming problems.
(i) Single-objective programming: There is only a single
objective function.
(ii) Multi-objective programming: A multi-objective
programming problem can be stated as follows:
Find X which maximizes/ minimizes
Subject to gj(X) ≤ 0 , j = 1, 2, . . . , m
where f1, f2, . . . fk denote the objective functions to be maximized/
minimized simultaneously
4.3 Simulation
• The simulation process duplicates the system's behaviour by designing a
model of the system and conducting experiments for a better
understanding of the system functioning in various probable scenarios
• It reproduces the response of the system to any future conditions
• Operating policies can be tested through simulation before
implementing them in actual situations
• A simulation model duplicates the system's operation with a defined
operational policy, parameters, time series of flows, demands, etc.
• Design parameters and the operation policy are evaluated through the
objective function or some reliability measures.
4.3 Steps in simulation
1. Problem definition: Define the goals of the study
2. System definition: Identify the WRS components. Identify the
performance measures to be analyzed.
3. Model design: Decide the model structure by determining the
variables describing the system, its interaction and various
parameters of structures.
4. Data Collection: Determine the type of data to be collected.
New/ Old data is collected/ gathered.
5. Validation: Test the model and apply the model to the problem
4.3 Classification of simulation
i. Physical (e.g. a scale model of a spillway)

ii. Analog (system of electrical components such as resistors or


capacitors arranged to act as an analog to the hydrological
components) or

iii. Mathematical (action of a system expressed as equations or logical
statements).

• Simulation models can be

i. Static (fixed parameters and operational policy) or

ii. Dynamic (takes into account the change in the parameters of the
system and the operational policy with time) in nature.
4.3 Classification of simulation
• Simulation models can be deterministic or stochastic

• Simulation models can be statistical or process oriented, or a mixture.

• Pure statistical models are based solely on data (field measurements).


Regressions and artificial neural networks are examples

• Pure process oriented models are based on knowledge of the


fundamental processes that are taking place. In this, calibration using
field data is required to estimate the parameter values in the process
relationships.

• Hybrid models incorporate some process relationships into regression


models or neural networks.
4.3 Operating Rules
• Allocate releases among purposes, reservoirs, and time intervals
• In operation (as opposed to design), certain system components are
fixed:
– Active and dead storage volume
– Power plant and stream channel capacities
– Reservoir head-capacity functions
– Levee heights and flood plain areas
– Monthly target outputs for irrigation, energy, water supply, etc
• Others are variable: Allocation of
– stored water among reservoirs
– stored and released water among purposes
– stored and released water among time intervals
4.3 Standard Operating Policy

• Reservoir operating policy – release as a function of storage volume
and inflow, Rt = Rt(St, Qt):

       St + Qt        if St + Qt < Dt          (release available water; deficits occur)
Rt =   Dt             if Dt ≤ St + Qt ≤ Dt + K (release demand; demand met)
       St + Qt − K    if St + Qt > Dt + K      (release demand; spill excess)

(Figure: release Rt versus available water St + Qt, rising along the 45°
line up to Dt, flat at Dt while the reservoir fills, then rising again
once St + Qt exceeds Dt + K.)
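The piecewise release rule above translates directly into code (a sketch; the function names are our own):

```python
def sop_release(S, Q, D, K):
    """Standard operating policy release.
    S: current storage, Q: inflow, D: demand (target release), K: capacity."""
    available = S + Q
    if available < D:           # not enough water: release everything, deficit
        return available
    if available <= D + K:      # meet demand exactly, store the rest
        return D
    return available - K        # reservoir would overfill: meet demand and spill

def next_storage(S, Q, D, K):
    """Continuity: S_{t+1} = S_t + Q_t - R_t."""
    return S + Q - sop_release(S, Q, D, K)
```

The three branches reproduce the three segments of the release curve: deficit, demand met, and spill.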
4.3 Hedging Rule

• Reduce releases in times of drought (hedging) to save water for
future releases in case of an extended period of low inflows.

(Figure: the hedged release curve lies below the demand D over part of
the range of available water, rejoining the standard operating policy
near St + Qt = Dt + K.)
4.3 Performance Evaluation

• How well will the system perform?


• Define performance criteria
– Indices related to the ability to meet targets and the
seriousness of missing targets
– Simulate the system to evaluate the criteria
– Interpret results
• Should design or policies be modified?
4.3 Performance Criteria - Reliability

• Reliability – frequency with which demand was satisfied
– Define a deficit as:

  Dt = XT − Xt   if Xt < XT
  Dt = 0         if Xt ≥ XT

• Then reliability is:

  Reliability = (# of times Dt = 0) / n

• where n is the total number of simulation periods
4.3 Performance Criteria - Resilience

• Resilience = probability that once the system is in a period of
deficit, the next period is not a deficit.
• How quickly does the system recover from failure?

  Resilience = (# of times Dt = 0 follows Dt > 0) / (# of times Dt > 0 occurred)
4.3 Performance Criteria - Vulnerability

• Vulnerability = average magnitude of deficits
• How bad are the consequences of failure?

  Vulnerability = Σ_{Dt>0} Dt / (# of times Dt > 0 occurred)


4.3 Simulate the System

(Figure: the reservoir operating policy – release Rt versus available
water St + Qt – and the allocation policy – allocations x1, x2, x3
versus total release R – are combined with a hydrologic time-series
model: an input series x with distribution g(x) drives the system
model, producing an output series y with distribution h(y).)
4.3 Uncertainty
• Deterministic process
– Inputs assumed known.
– Ignores variability.
– Assumes inputs are well represented by average values.
– Overestimates benefits and underestimates losses.
• Stochastic process
– Explicitly accounts for variability and uncertainty.
– Inputs are stochastic processes.
– The historic record is one realization of the process.
4.3 Simulate the System

(Figure: Monte Carlo use of the simulation model – from the
distributions of inputs FX(x), generate multiple input sequences;
simulate the system for each input sequence under the operating and
allocation policies; collect the multiple output sequences; and compute
statistics of the outputs h(y).)
4.3 System Simulation

(Flow chart: start with t = 0 and St = S0; for each period, read Qt
from file, compute the release Rt and the allocations Xit, i = 1,…,n,
from the operating and allocation policies, update the storage
St+1 = St + Qt − Rt, advance t, and repeat until done.)

• Create a network representation of the system
• Need inflows for each period for each node
• For each period:
– Perform mass balance calculations for each node
– Determine releases from reservoirs
– Allocate water to users
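For a single reservoir, the loop in the flow chart reduces to a few lines (a sketch under the standard operating policy of the previous slides; the function name is our own):

```python
def simulate(inflows, K, D, S0=0.0):
    """Mass-balance reservoir simulation under the standard operating policy.
    inflows: period inflows; K: active capacity; D: constant demand."""
    S = S0
    releases, storages = [], []
    for Q in inflows:
        avail = S + Q
        if avail < D:
            R = avail             # deficit: release all available water
        elif avail <= D + K:
            R = D                 # demand met, surplus stored
        else:
            R = avail - K         # reservoir full: demand met, excess spilled
        S = S + Q - R             # continuity: S_{t+1} = S_t + Q_t - R_t
        releases.append(R)
        storages.append(S)
    return releases, storages
```

The storage trajectory returned by the loop stays between empty and the capacity K by construction.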
4.3 Example

(Figure: irrigation demand growing over a 20-year horizon, from about
3.0 to 4.5 ×10⁷ m³ per year.)

• Using an unregulated river for irrigation
• Proposed reservoir; flow statistics available
– Capacity: K = 40 million m³ (active)
– Demand: D grows from 30 to 40 to 45 million m³
– Winter instream flow: 5 million m³ minimum
– 45-year historic flow record available
• Evaluate system performance for a 20-year period
• Simulate
– Two seasons/year: winter (1), summer (2)
– Continuity constraints
– Operating policy
4.3 The Simulation

• Simulate reservoir operation
– Perform 23 equally likely simulations
– Each simulation is 20 years long
– Each simulation uses a different sequence of inflows (realization)
– Seasonal Thomas-Fiering model:

  Q_{p,j+1} = Q̄_{j+1} + B_j (Q_{p,j} − Q̄_j) + t_j s_{j+1} (1 − r_j²)^0.5

  B_j = r_j (s_{j+1} / s_j)

where Q̄_j and s_j are the mean and standard deviation of flows in
season j, r_j is the lag-one correlation between seasons j and j+1, and
t_j is a standard normal random deviate.
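A seasonal Thomas-Fiering generator can be sketched as follows; the parameter values in the demonstration are made up, not the example's flow statistics:

```python
import random

def thomas_fiering(means, sds, r, years, seed=1):
    """Generate synthetic seasonal flows with the Thomas-Fiering model.
    means[j], sds[j]: mean / std dev of season j; r[j]: lag-one correlation
    from season j to season j+1 (cyclic over the year)."""
    rng = random.Random(seed)
    m = len(means)                       # number of seasons per year
    flows = [means[0]]                   # start at the first seasonal mean
    for k in range(1, years * m):
        j = (k - 1) % m                  # previous season index
        jn = k % m                       # current season index
        b = r[j] * sds[jn] / sds[j]
        t = rng.gauss(0.0, 1.0)          # standard normal deviate t_j
        q = (means[jn] + b * (flows[-1] - means[j])
             + t * sds[jn] * (1.0 - r[j] ** 2) ** 0.5)
        flows.append(max(q, 0.0))        # no negative flows
    return flows

# Hypothetical two-season parameters (winter, summer)
f = thomas_fiering([3.0, 1.8], [0.5, 0.3], [0.6, 0.6], years=2000)
```

Each call with a different seed gives one "equally likely" inflow realization; the sample seasonal means and standard deviations of a long trace reproduce the input statistics.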
4.3 Example – Realization 1 (Rmin = 0.5, K = 4)

Year | Winter: S1y Q1y S+Q R1y | Summer: S2y Q2y S+Q D2y R2y Deficit
1 0.000 4.740 4.740 0.740 4.000 1.805 5.805 3.000 3.000 0.000
2 2.805 2.918 5.723 1.723 4.000 1.499 5.499 3.200 3.200 0.000
3 2.299 2.747 5.045 1.045 4.000 1.548 5.548 3.400 3.400 0.000
4 2.148 2.819 4.966 0.966 4.000 1.753 5.753 3.600 3.600 0.000
5 2.153 3.871 6.023 2.023 4.000 2.229 6.229 3.800 3.800 0.000
6 2.429 3.585 6.015 2.015 4.000 2.235 6.235 4.000 4.000 0.000
7 2.235 4.736 6.971 2.971 4.000 2.984 6.984 4.100 4.100 0.000
8 2.884 3.275 6.159 2.159 4.000 2.212 6.212 4.200 4.200 0.000
9 2.012 3.188 5.200 1.200 4.000 2.666 6.666 4.300 4.300 0.000
10 2.366 3.401 5.767 1.767 4.000 1.240 5.240 4.300 4.300 0.000
11 0.940 3.811 4.750 0.750 4.000 2.371 6.371 4.400 4.400 0.000
12 1.971 3.435 5.407 1.407 4.000 2.421 6.421 4.400 4.400 0.000
13 2.021 2.460 4.481 0.500 3.981 1.317 5.298 4.400 4.400 0.000
14 0.898 2.377 3.275 0.500 2.775 1.896 4.671 4.400 4.400 0.000
15 0.271 3.692 3.963 0.500 3.463 1.831 5.293 4.500 4.500 0.000
16 0.793 3.302 4.095 0.500 3.595 1.300 4.895 4.500 4.500 0.000
17 0.395 2.548 2.944 0.500 2.444 2.047 4.491 4.500 4.491 -0.009
18 0.000 2.454 2.454 0.500 1.954 1.658 3.612 4.500 3.612 -0.888
19 0.000 3.139 3.139 0.500 2.639 2.768 5.407 4.500 4.500 0.000
20 0.907 2.910 3.816 0.500 3.316 1.445 4.762 4.500 4.500 0.000
Total deficit −0.897; number of failures 2; failure frequency 0.100
4.3 Simulation Results

Simulation  Total shortage  # of failures  Frequency of failure
 1            -1.031            2              0.100
 2           -10.050            8              0.400
 3            -0.516            1              0.050
 4            -0.184            1              0.050
 5            -1.159            2              0.100
 6           -10.747            8              0.400
 7            -4.627            6              0.300
 8            -1.134            4              0.200
 9            -1.446            4              0.200
10             0.000            0              0.000
11            -1.735            4              0.200
12            -3.384            5              0.250
13            -3.639            3              0.150
14             0.000            0              0.000
15            -0.067            1              0.050
16            -1.561            3              0.150
17            -3.586            6              0.300
18            -0.223            1              0.050
19            -1.347            1              0.050
20            -0.977            2              0.100
21            -4.758            5              0.250
22            -4.966            5              0.250
23            -3.641            4              0.200
Average       -2.643 (std. dev. 2.937)        0.165 (std. dev. 0.118)

(Figure: bar chart of the number of failures in each of the 23 simulations.)

Average failure frequency = 0.165
Average reliability = 1 − 0.165 = 0.835 = 83.5%
Actual failure frequency ∈ [0, 0.40]
Actual reliability ∈ [60%, 100%]
5.1 Optimization techniques

There are a variety of solution methods:
Analytical/numerical differentiation,
Linear Programming (LP),
Dynamic Programming (DP),
Non-Linear Programming (NLP),
Integer Programming (IP),
Goal Programming,
etc.
5.1 Optimization Problems

Example – Water Users
 Allocate release to users and provide instream flow
 Obtain benefits from allocation of xi, i = 1, 2, 3:

  Bi(xi) = ai·xi − bi·xi²,  i = 1, 2, 3

 Bi(xi) = benefit to user i from using amount of water xi

(Figure: the three concave benefit curves B1, B2, B3 plotted against
the allocation x.)
5.1 Optimization Problems

Example
• Decision variables: xi, i = 1, 2, 3
• Objective: maximize Σ_{i=1}^{3} (ai·xi − bi·xi²)
• Constraint: x1 + x2 + x3 ≤ Q − R, where Q is the available water and
R the required downstream (instream) flow

Note: if sufficient water is available, the allocations are independent
and equal to x1* = 3, x2* = 2.33, x3* = 8. How?

• Optimization model:
  maximize_x Σ_{i=1}^{3} (ai·xi − bi·xi²)
  subject to Σ_{i=1}^{3} xi ≤ Q − R
5.1 Optimization

Benefit Bi(xit); decision variables xit

Objective:
  Maximize Σ_{t=1}^{T} Σ_{i=1}^{3} Bi(xit)

Constraints:
  x1t + x2t + x3t ≤ Rt,    t = 1, 2, …
  St+1 = St + It − Rt,     t = 1, 2, …
  St ≤ K,                  t = 1, 2, …

Optimization model
5.1 Analytical Optimization

Function of a single variable, no constraints:

  minimize f(x),  scalar x, no constraints

• First-order condition for a local optimum: df(x)/dx = 0
  (the tangent is horizontal)
• Second-order condition for a local minimum: d²f(x)/dx² > 0 at x = x*
  (the curvature is upward)

(Figure: a convex function with its global minimum at x*, where the
tangent is horizontal.)
5.1 Analytical Optimization

Function of multiple variables, no constraints:

  minimize f(x),  vector x, no constraints

• First-order conditions for a local optimum:
  ∂f(x)/∂x1 = 0, …, ∂f(x)/∂xn = 0  at x = x*

• n simultaneous equations
5.1 Analytical Optimization
Function of multiple variables

Let f(x) be a function of n variables represented by the vector
x = (x1, x2, …, xn). Its Hessian matrix, H[f(x)], is the n×n matrix
whose (i, j) entry is the second partial derivative ∂²f(x)/∂xi∂xj:

  H[f(x)] = [ ∂²f/∂x1²    ∂²f/∂x1∂x2  …  ∂²f/∂x1∂xn
              ∂²f/∂x2∂x1  ∂²f/∂x2²    …  ∂²f/∂x2∂xn
               …           …          …   …
              ∂²f/∂xn∂x1  ∂²f/∂xn∂x2  …  ∂²f/∂xn²   ]

The eigenvalues λ of H[f(x)] are given by the roots of the
characteristic equation:

  |λI − H[f(x)]| = 0

where I is the identity matrix and λ is the vector of eigenvalues.
5.1 Analytical Optimization

Function of multiple variables

Convexity and concavity: if all eigenvalues of the Hessian matrix
are positive for all possible values of x, the function is strictly
convex and x0 is a global minimum.
If all eigenvalues are negative for all possible values of x, the
function is strictly concave and x0 is the global maximum.

Example: 1. f(x) = x1² + x2² − 4x1 − 2x2 + 5
         2. f(x) = −x1² − x2² + 4x1 + 8

Solution 1:
  ∂f/∂x1 = 2x1 − 4 = 0,  ∂f/∂x2 = 2x2 − 2 = 0

The Hessian matrix is
  H[f(x)] = [ 2  0
              0  2 ]

The eigenvalues solve |λI − H| = (λ − 2)² = 0, thus λ = 2 (twice).
The function has a global minimum at x = (2, 1).
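For the 2×2 case, the characteristic equation reduces to the quadratic λ² − tr(H)λ + det(H) = 0, which is easy to check numerically (a sketch; the function name is our own):

```python
def hessian_eigen_2x2(h11, h12, h21, h22):
    """Eigenvalues of a 2x2 Hessian from lambda^2 - trace*lambda + det = 0.
    Assumes a symmetric Hessian (h12 = h21), so the roots are real."""
    tr = h11 + h22
    det = h11 * h22 - h12 * h21
    disc = (tr * tr - 4.0 * det) ** 0.5
    return (tr - disc) / 2.0, (tr + disc) / 2.0

# Example 1: f(x) = x1^2 + x2^2 - 4x1 - 2x2 + 5, Hessian [[2, 0], [0, 2]]
l1, l2 = hessian_eigen_2x2(2, 0, 0, 2)
# Example 2: f(x) = -x1^2 - x2^2 + 4x1 + 8, Hessian [[-2, 0], [0, -2]]
m1, m2 = hessian_eigen_2x2(-2, 0, 0, -2)
```

Both eigenvalues of Example 1 are positive (strictly convex, global minimum); both eigenvalues of Example 2 are negative (strictly concave, global maximum).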
5.1 Analytical Optimization

Constrained optimization with equality constraints

1. Function f(x) of n variables with a single equality constraint

Maximize (or minimize) f(X),

subject to g(X) = 0,

The function f(x) and the equality constraint g(x) may or may not be
linear. We write the Lagrangean of the function f(x), denoted
Lf(x, λ), and apply the Lagrangean multiplier method:

  Lf(x, λ) = f(x) − λ·g(x),  where λ is the Lagrangean multiplier.

When g(x) = 0, optimizing Lf(x, λ) is the same as optimizing f(x). The
original problem is thus transformed to unconstrained optimization by
the introduction of the additional variable λ.
5.1 Analytical Optimization

Single constraint, multiple decision variables:

  minimize f(x),  vector x
  subject to h(x) = 0  (one constraint)

Lagrangian:
  L(x, λ) = f(x) − λ·h(x)

(Notice what happens to the h(x) term when x is a feasible vector: it
vanishes.)

First-order conditions (n + 1 equations):
  ∂f/∂xi − λ ∂h/∂xi = 0,  i = 1, …, n
  h(x) = 0
5.1 Analytical Optimization

Constrained optimization with equality constraints
1. Function f(x) of n variables with a single equality constraint

  Bi(xi) = ai·xi − bi·xi²,  i = 1, 2, 3
  a1 = 6, a2 = 7, a3 = 8;  b1 = 1.0, b2 = 1.5, b3 = 0.5

  L(x, λ) = Σ_{i=1}^{3} (ai·xi − bi·xi²) − λ (Σ_{i=1}^{3} xi − (Q − R))

  ∂L/∂x1 = a1 − 2b1·x1 − λ = 0
  ∂L/∂x2 = a2 − 2b2·x2 − λ = 0
  ∂L/∂x3 = a3 − 2b3·x3 − λ = 0
  ∂L/∂λ = −(x1 + x2 + x3 − (Q − R)) = 0
5.1 Analytical Optimization

Constrained optimization with equality constraints
1. Function f(x) of n variables with a single equality constraint

(Figure: the optimal allocations x1*, x2*, x3* marked on the three
benefit curves B1, B2, B3.)

  Q      x1    x2    x3    λ     R     Downstream flow
  5.00   0.18  0.45  2.36  5.64  2.00  2.00
  8.00   1.00  1.00  4.00  4.00  2.00  2.00
 10.00   1.55  1.36  5.09  2.91  2.00  2.00
 15.00   2.91  2.27  7.82  0.18  2.00  2.00
 16.00   3.00  2.33  8.00  0.00  2.00  2.67
 20.00   3.00  2.33  8.00  0.00  2.00  6.67
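The stationarity conditions give xi = (ai − λ)/(2bi); substituting into the binding constraint Σxi = Q − R yields λ in closed form, which reproduces the table above (a sketch; the function name is our own):

```python
def allocate(Q, R=2.0, a=(6.0, 7.0, 8.0), b=(1.0, 1.5, 0.5)):
    """Water allocation from the Lagrange-multiplier conditions
    x_i = (a_i - lam) / (2 b_i), with sum x_i = Q - R when binding."""
    W = Q - R                                    # water available for allocation
    # Unconstrained optima x_i = a_i / (2 b_i); if they fit, lambda = 0
    x_free = [ai / (2.0 * bi) for ai, bi in zip(a, b)]
    if sum(x_free) <= W:
        return x_free, 0.0
    # Binding constraint: solve sum (a_i - lam) / (2 b_i) = W for lam
    s1 = sum(ai / (2.0 * bi) for ai, bi in zip(a, b))
    s2 = sum(1.0 / (2.0 * bi) for bi in b)
    lam = (s1 - W) / s2
    x = [(ai - lam) / (2.0 * bi) for ai, bi in zip(a, b)]
    return x, lam
```

The multiplier λ is the marginal benefit of one more unit of water; it falls to zero once Q − R exceeds the unconstrained optimum 3 + 2.33 + 8 = 13.33, matching the last two rows of the table.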
5.1 Analytical Optimization

Constrained optimization with equality constraints
2. Function f(x) of n variables with m equality constraints

Maximize (or minimize) f(X),
subject to gp(X) = 0, p = 1, 2, …, m

The Lagrangean of the function f(x) in this case is

  Lf(x, λ) = f(x) − λ1·g1(x) − λ2·g2(x) − … − λm·gm(x),
  where λ = (λ1, λ2, …, λm)

A necessary condition for the function to have a maximum or minimum is
that the first partial derivatives of the function L equal zero:

  ∂L/∂xi = 0,  i = 1, 2, …, n
  ∂L/∂λp = 0,  p = 1, 2, …, m
5.1 Analytical Optimization

Constrained optimization with equality constraints
2. Function f(x) of n variables with m equality constraints

Necessary condition
The n + m equations are solved to get the solution (x0, λ0).
Let the second partial derivatives be

  Lij = ∂²L/∂xi∂xj  evaluated at x0,  i = 1, 2, …, n;  j = 1, 2, …, n
  gpi = ∂gp(x)/∂xi  evaluated at x0,  p = 1, 2, …, m
5.1 Analytical Optimization

Constrained optimization with equality constraints
2. Function f(x) of n variables with m equality constraints

Sufficiency condition: consider the determinant |D| given by

  |D| = | L11−μ  L12    …  L1n    g11  …  gm1 |
        | L21    L22−μ  …  L2n    g12  …  gm2 |
        |  …      …     …   …      …   …   …  |
        | Ln1    Ln2    …  Lnn−μ  g1n  …  gmn |
        | g11    g12    …  g1n    0    …  0   |
        |  …      …     …   …     …    …  …   |
        | gm1    gm2    …  gmn    0    …  0   |

Setting |D| = 0 gives a polynomial in μ of order (n − m), where n is
the number of variables and m is the number of equality constraints.
5.1 Analytical Optimization

Constrained optimization with equality constraints
2. Function f(x) of n variables with m equality constraints

Sufficiency condition:
If each root μ of the equation |D| = 0 is negative, the solution X0 is
a local maximum. If each root is positive, then X0 is a local minimum.
If all roots are negative and independent of X, then X0 is the global
maximum.
If all roots are positive and independent of X, then X0 is the global
minimum.
5.1 Analytical Optimization

Constrained optimization with equality constraints
2. Function f(x) of n variables with m equality constraints

Example
  Maximize f(x) = −x1² − x2²
  Subject to x1 + x2 = 4, or x1 + x2 − 4 = 0

Solution: g(x) = x1 + x2 − 4 = 0
The Lagrangean is
  Lf(x, λ) = −x1² − x2² − λ(x1 + x2 − 4)

At the stationary point,
  ∂L/∂x1 = −2x1 − λ = 0
  ∂L/∂x2 = −2x2 − λ = 0
  ∂L/∂λ = −(x1 + x2 − 4) = 0

These equations yield x1 = x2 = 2, λ = −4
5.1 Analytical Optimization

Constrained optimization with equality constraints
2. Function f(x) of n variables with m equality constraints

Example
Now we determine if this is a maximum.
  L11 = ∂²L/∂x1² = −2,  L12 = ∂²L/∂x1∂x2 = 0,
  L21 = ∂²L/∂x2∂x1 = 0,  L22 = ∂²L/∂x2² = −2,
  g11 = ∂g/∂x1 = 1,  g12 = ∂g/∂x2 = 1

  |D| = | −2−μ   0    1 |
        |  0    −2−μ  1 | = 0
        |  1     1    0 |

or 2μ + 4 = 0, giving μ = −2.
As the only root is negative, the stationary point x0 = (2, 2) is a
local maximum of f(x), and fmax(x) = −8.
5.1 Analytical Optimization

Constrained optimization with inequality constraints


1. Function f(x) with inequality constraints: an inequality constraint
can be converted to an equality constraint by introducing an additional
(slack) variable on the left-hand side of the constraint.

Thus a constraint g(x) ≤ 0 is converted to g(x) + s² = 0, where s² is a
non-negative variable. Similarly, a constraint g(x) ≥ 0 is converted to
g(x) − s² = 0.

The solution is found by the Lagrangean multiplier method, treating s
as an additional variable in each inequality constraint. When Lf(x) is
formed with either type of constraint, equating the partial derivative
with respect to s to zero gives

  λs = 0, meaning either λ = 0 or s = 0

 If λ > 0, then s = 0. Thus the corresponding constraint is an active
constraint.
 If s² > 0, then λ = 0. Thus the corresponding constraint is
redundant/inactive.
5.2 Analytical Optimization

Kuhn – Tucker (KT) Conditions


These conditions are necessary for a function f(x) to be a local maximum
or a local minimum. The conditions for a maximization problem are given
as:
Maximize f(x)
Subject to gj(x)≤ 0, j=1,2,…,m

The conditions are as follows:

  ∂f(x)/∂xi − Σ_{j=1}^{m} λj ∂gj(x)/∂xi = 0,  for i = 1, …, n
  λj·gj(x) = 0,  j = 1, …, m
  gj(x) ≤ 0,  j = 1, …, m
  and λj ≥ 0,  j = 1, …, m
In addition if f(x) is concave and the constraints form a convex set, these
conditions are sufficient for a global maximum.
5.2 Analytical Optimization

Kuhn–Tucker (KT) Conditions

A general problem may be one of maximization or minimization with
equality or inequality constraints of both ≥ and ≤ type.
Consider
Maximize/Minimize Z = f(x)
Subject to gi(x) ≤ 0, i = 1, 2, …, j
gi(x) ≥ 0, i = j+1, …, k
gi(x) = 0, i = k+1, …, m

Introduce variables si into the inequality constraints to make them equality
constraints. Let s denote the vector with elements si.
The Lagrangean is

L(X, S, λ) = f(x) - Σ(i=1 to j) λi[gi(x) + si²] - Σ(i=j+1 to k) λi[gi(x) - si²] - Σ(i=k+1 to m) λi gi(x)

where λi is the Lagrangean multiplier associated with constraint i.
5.2 Analytical Optimization

Necessary conditions for a maximum or minimum

The first partial derivatives of L(X, S, λ) with respect to each variable
in X, S and λ should be equal to zero. The solution for a stationary point
(x0, s0, λ0) is obtained by solving these simultaneous equations.

Sufficiency condition for a maximum
f(x) should be a concave function
gi(x) should be convex; λi ≥ 0, i = 1, 2, …, j
gi(x) should be concave; λi ≤ 0, i = j+1, …, k
gi(x) should be linear; λi unrestricted, i = k+1, …, m

Sufficiency condition for a minimum
f(x) should be a convex function
gi(x) should be convex; λi ≥ 0, i = 1, 2, …, j
gi(x) should be concave; λi ≤ 0, i = j+1, …, k
gi(x) should be linear; λi unrestricted, i = k+1, …, m
5.2 Linear Programming
• Linear Programming (LP) is the most widely used optimization technique

• The objective function and constraints are linear functions of non-negative
decision variables

• Thus, the conditions of LP problems are
1. The objective function must be a linear function of the decision variables
2. The constraints should be linear functions of the decision variables
3. All the decision variables must be nonnegative
5.2 Linear Programming

Maximize Z = 6x + 5y   (Objective Function)
subject to 2x - 3y ≤ 5   (1st Constraint)
x + 3y ≤ 11   (2nd Constraint)
4x + y ≤ 15   (3rd Constraint)
x, y ≥ 0   (Nonnegativity Condition)

This is in “general” form
5.2 Linear Programming

• The standard form of an LP problem must have the following three
characteristics:

1. The objective function should be of maximization type
2. All the constraints should be of equality type
3. All the decision variables should be nonnegative
5.2 Linear Programming

• General form:

Minimize Z = 3x1 + 5x2
subject to 2x1 + 3x2 ≤ 15
x1 + x2 ≤ 3
4x1 + x2 ≥ 2
x1 ≥ 0
x2 unrestricted

• Points violating the standard form of the LPP:
• The objective function is of minimization type
• The constraints are of inequality type
• The decision variable x2 is unrestricted and thus may also take negative values

How is a general form of an LPP transformed to the standard form?
5.2 Linear Programming: General form → Standard form
Transformation

1. Objective function
General: Minimize Z = 3x1 + 5x2
Standard: Maximize Z' = -Z = -3x1 - 5x2

2. First constraint
General: 2x1 + 3x2 ≤ 15
Standard: 2x1 + 3x2 + x3 = 15

3. Second constraint
General: x1 + x2 ≤ 3
Standard: x1 + x2 + x4 = 3

Variables x3 and x4 are known as slack variables
5.2 Linear Programming: General form → Standard form
Transformation

4. Third constraint
General: 4x1 + x2 ≥ 2
Standard: 4x1 + x2 - x5 = 2

Variable x5 is known as a surplus variable

5. Constraints for decision variables x1 and x2
General: x1 ≥ 0, x2 unrestricted
Standard: x1 ≥ 0, x2 = x2' - x2'' with x2', x2'' ≥ 0
5.2 Graphical method: LP
The graphical method can be used only with a two-variable problem. First
the feasible region for the constraint set should be mapped. The method is
illustrated below for an LP problem in two variables.

Example
Maximize Z = 6x + 5y
subject to 2x - 3y ≤ 5
x + 3y ≤ 11
4x + y ≤ 15
x, y ≥ 0
5.2 Graphical method: Step - 1

Plot all the constraints one by one on a graph paper.

5.2 Graphical method: Step - 2

Identify the common region of all the constraints.
This is known as the ‘feasible region’.
5.2 Graphical method: Step - 3

Plot the objective function assuming any constant k, i.e. 6x + 5y = k.

This is known as the ‘Z line’, which can be shifted perpendicularly by
changing the value of k.
5.2 Graphical method: Step - 4

Notice that the value of the objective function will be maximum when it
passes through the intersection of x + 3y = 11 and 4x + y = 15 (the
straight lines associated with the 2nd and 3rd constraints).

This is known as the ‘Optimal Point’.
5.2 Graphical method: Step - 5

Thus the optimal point of the present problem is
x* = 3.091
y* = 2.636

And the optimal solution is
Z* = 6x* + 5y* = 31.726
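Because a finite LP optimum lies at a vertex of the feasible region, the graphical solution above can be checked by brute force: intersect the constraint lines pairwise and keep the feasible intersection points. A minimal sketch (constraint signs as reconstructed in the example):

```python
# Brute-force check of the graphical solution for Maximize Z = 6x + 5y.
from itertools import combinations

# each constraint written as a*x + b*y <= c
cons = [(2, -3, 5), (1, 3, 11), (4, 1, 15), (-1, 0, 0), (0, -1, 0)]

def vertices(cons, eps=1e-9):
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue                      # parallel lines: no intersection
        x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + eps for a, b, c in cons):
            yield x, y                    # keep only feasible intersections

x, y = max(vertices(cons), key=lambda p: 6 * p[0] + 5 * p[1])
print(round(x, 3), round(y, 3), round(6 * x + 5 * y, 3))
# 3.091 2.636 31.727, matching the optimal point of Step 5
```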
5.2 Graphical method: LP

Exercise: Solve the following LP problem graphically, using Solver, and
using the primal simplex method manually.

Maximize Z = 8x + 11y
subject to 3x + y ≤ 7
x + 3y ≤ 8
x, y ≥ 0
5.2 Different cases of optimal solution

A linear programming problem may have
1. A unique, finite solution (example already discussed),
2. An unbounded solution,
3. Multiple (or an infinite number of) optimal solutions,
4. An infeasible solution, or
5. A unique feasible point.
5.2 Unbounded solution: Graphical representation

Situation: The feasible region is not bounded.

Solution: It is possible that the value of the objective function goes on
increasing without leaving the feasible region, i.e., an unbounded solution.
5.2 Multiple solutions: Graphical representation

Situation: The Z line is parallel to a side of the feasible region.

Solution: All the points lying on that side constitute optimal solutions.
5.2 Infeasible solution: Graphical representation

Situation: The set of constraints does not form a feasible region at all,
due to inconsistency in the constraints.

Solution: No feasible (and hence no optimal) solution exists.
5.2 Unique feasible point: Graphical representation

Situation: The feasible region consists of a single point.

Solution: There is no need for optimization as there is only one feasible
point.
Hydrologic DSS
Categories of Optimize
Analytical
5.2 Linear Programming Linear programs
Dynamic programs
Stochastic Dynamic

Assumptions in Linear Programming:
• Proportionality assumption
• Additivity assumption
• Divisibility assumption
• Deterministic assumption
5.2 Linear Programming

Example 1
A construction site requires a minimum of 10,000 yd³ of sand-and-gravel
mixture. The mixture must contain no less than 5,000 yd³ of sand and no
more than 6,000 yd³ of gravel. The material may be obtained from two sites:

Site   Delivery Cost ($/yd³)   % Sand   % Gravel
 1             5                 30        70
 2             7                 60        40

Formulate an LP model to minimize the total delivery cost.
5.2 Linear Programming

Demonstration of linear optimization in Solver of Microsoft Excel
(minimize delivery cost C = 5x1 + 7x2):

Objective Function: C = 63,333.33
x1 = 3,333.33
x2 = 6,666.67

Constraints:
x1 + x2       = 10,000.00   ≥ 10,000   (total mixture)
0.3x1 + 0.6x2 =  5,000.00   ≥ 5,000    (sand)
0.7x1 + 0.4x2 =  5,000.00   ≤ 6,000    (gravel)
x1 = 3,333.33 ≥ 0
x2 = 6,666.67 ≥ 0
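The Solver result can be checked with the same brute-force vertex enumeration idea used for the graphical method (a sketch; the constraint signs follow the table above):

```python
# Brute-force check of Example 1: minimize C = 5*x1 + 7*x2, where x1, x2
# are cubic yards hauled from Sites 1 and 2.
from itertools import combinations

# constraints written as a*x1 + b*x2 <= c
cons = [(-1.0, -1.0, -10000.0),   # x1 + x2 >= 10000        (total mixture)
        (-0.3, -0.6, -5000.0),    # 0.3*x1 + 0.6*x2 >= 5000 (sand)
        (0.7, 0.4, 6000.0),       # 0.7*x1 + 0.4*x2 <= 6000 (gravel)
        (-1.0, 0.0, 0.0),         # x1 >= 0
        (0.0, -1.0, 0.0)]         # x2 >= 0

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue
    x1 = (c1 * b2 - c2 * b1) / det        # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if all(a * x1 + b * x2 <= c + 1e-6 for a, b, c in cons):
        cost = 5 * x1 + 7 * x2
        if best is None or cost < best[0]:
            best = (cost, x1, x2)

print(best)
# (cost, x1, x2) ~ (63333.33, 3333.33, 6666.67), matching the Solver table
```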
5.2 Linear Programming

Example 2 : Consider a system composed of a manufacturing factory


and a waste treatment plant. The manufacturing plant produces finished
goods that sell for a unit price of $10 K. However, the finished goods
cost $3 K per unit to produce. In the manufacturing process two units of
waste are generated for each unit of goods produced. In addition to
deciding how many units of goods to produce, the plant manager must
also decide how much waste will be discharged without treatment so
that the total net benefit to the company can be maximized and the
water quality requirement of the water course is met.

The treatment plant has a maximum capacity of 10 units of waste with


80% waste removal efficiency at a treatment cost of $0.6 K per unit of
waste. There is also an effluent tax imposed on the waste discharged to
the receiving water body ($2 K for each unit of waste discharged). The
water pollution control authority has set an upper limit of 4 units on the
amount of waste any manufacturer can discharge. Formulate an LP
model for this problem.
5.2 Linear Programming

Schematic diagram of the manufacturing–waste treatment system:

The manufacturing factory produces X1 units of goods and generates 2X1
units of waste. Of this, X2 units are discharged directly to the
watercourse without treatment, and 2X1 - X2 units go to the waste
treatment plant, which discharges the untreated residual 0.2(2X1 - X2)
to the watercourse.
5.2 Linear Programming

Solution:
Let X1 be the units of finished goods to be produced and
X2 the units of waste discharged without treatment.
Sales of finished goods (in $K): 10X1
Cost of producing goods (in $K): 3X1
Cost of treating the waste (in $K) generated from the production
process: 0.6(2X1 - X2)
Effluent tax (in $K): 2[X2 + 0.2(2X1 - X2)]
The objective is to maximize the profit, which is
10X1 - {3X1 + 0.6(2X1 - X2) + 2[X2 + 0.2(2X1 - X2)]} = 5X1 - X2.
The objective is then expressed as:
Max X0 = 5X1 - X2

Subject to 2X1 - X2 ≤ 10   (treatment plant capacity)
0.4X1 + 0.8X2 ≤ 4          (discharge limit: X2 + 0.2(2X1 - X2) ≤ 4)
2X1 - X2 ≥ 0               (waste sent to treatment cannot be negative)
X1 ≥ 0 and X2 ≥ 0
5.2 Linear Programming

Soln:
Graphical method: Feasible space of the manufacturing waste
treatment plant example
5.2 Linear Programming

Simplex method

In general, m equations in n unknowns (n > m), including slack and


surplus variables
• It is possible to solve for m variables in terms of the other (n – m)
variables
• Basic solution: A basic solution is a solution obtained by setting
(n – m) variables to zero
• The m variables whose solution is sought by setting the remaining
(n – m) variables to zero are called the basic variables. The number
of basic variables is equal to the number of constraints.
• The (n – m) variables which are set to zero are called the non-
basic variables. It may so happen that some variables may take a
value of zero arising out of the solution of the m equations, but they
are not non-basic variables as they are not initialized to zero.
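These definitions can be illustrated by brute force on the standard-form manufacturing–waste treatment model used later in this section (m = 3 equations, n + m = 5 variables, so C(5,3) = 10 candidate bases). A sketch; `solve3` is a small Gauss–Jordan helper, not from the text:

```python
# Enumerate all basic solutions: set n - m = 2 variables to zero, solve the
# remaining 3x3 system; the non-negative ones are the basic feasible
# solutions, i.e. the corner points of the feasible region.
from itertools import combinations

A = [[2.0, -1.0, 1.0, 0.0, 0.0],     #  2*X1 -   X2 + S1           = 10
     [0.4, 0.8, 0.0, 1.0, 0.0],      # 0.4*X1 + 0.8*X2 + S2        = 4
     [-2.0, 1.0, 0.0, 0.0, 1.0]]     # -2*X1 +   X2           + S3 = 0
b = [10.0, 4.0, 0.0]

def solve3(M, rhs):
    """Gauss-Jordan elimination with partial pivoting; None if singular."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]

bfs = []
for basis in combinations(range(5), 3):          # choose 3 basic variables
    B = [[A[r][c] for c in basis] for r in range(3)]
    xb = solve3(B, b)
    if xb is not None and min(xb) >= -1e-9:      # basic AND feasible
        point = [0.0] * 5                        # non-basic variables = 0
        for c, v in zip(basis, xb):
            point[c] = v
        bfs.append(point)

for p in bfs:
    print([round(v, 2) for v in p])
# the optimal corner (X1, X2) = (6, 2) appears among the BFS
```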
5.2 Linear Programming

Simplex method

All basic solutions need not (and, in general, will not) be feasible.
• A basic solution which is also feasible is called a Basic Feasible
Solution (BFS).
• All the corner points of the feasible space are basic feasible
solutions.
The possible number of basic feasible solutions can be too large to be
enumerated completely.
• The goal is, starting with an initial basic feasible solution, to
generate better and better basic feasible solutions until the optimal
basic feasible solution is obtained.
Initial basic feasible solution: the basic feasible solution used as the
starting solution in the simplex method. This is the solution in which all
n decision variables are set to zero; the slack variables then obviously
yield an initial basic feasible solution.
5.2 Linear Programming

Simplex method
The general procedure is:
1. Express the given LP problem in the standard form, with equality
constraints and non-negative right-hand-side values.
2. Identify the starting solution and construct the simplex table.
3. Check for optimality of the current solution. The solution is optimal
if all the coefficients in the z-row are non-negative. Else, an iteration
is needed.
4. Identify the entering variable. This is the non-basic variable with
the most negative coefficient in the z-row.
5. Identify the departing variable. This is the basic variable of the
current solution in the row i for which the ratio bi/aij (with aij > 0)
is minimum, where j is the pivot column corresponding to the entering
variable.
6. Perform the row transformation (Gauss–Jordan transformation) and get
the new solution.
7. Repeat steps 3 to 6 until the optimal solution is obtained.
5.2 Linear Programming

Simplex method
Express the given LP problem of the manufacturing waste treatment
plant example in standard form

Max X0 = 5X1 - X2 + 0S1 + 0S2 + 0S3


Subject to
2X1 – X2 +S1=10
0.4X1 + 0.8X2 + S2=4
-2X1 + X2 + S3=0
All X and S are non-negative
5.2 Linear Programming

Simplex method
The standard form of the LP model can be written as in the table
X0 – 5X1 + X2 – 0S1 – 0S2 – 0S3=0
0 + 2X1 - X2 + S1+ 0S2 + 0S3=10
0 + 0.4X1 + 0.8X2 + 0S1 + S2 + 0S3=4
0 - 2X1 + X2 + 0S1 + 0S2 + S3=0

Basic X0 X1 X2 S1 S2 S3 Soln
X0 1 -5 1 0 0 0 0

S1 0 2 -1 1 0 0 10

S2 0 0.4 0.8 0 1 0 4

S3 0 -2 1 0 0 1 0
5.2 Linear Programming

Simplex method
The current non-basic variable to be made basic is called the
entering variable while the current basic variable to be made non-
basic is called the leaving variable.
For a maximization problem, an entering variable is selected, based
on the optimality condition, as the non-basic variable having the
most negative coefficient in the X0 equation (Z-row) of the simplex
tableau.
The one with the largest negative value is selected because it has the
greatest potential to improve the objective function value.
On the other hand the rule of selecting the entering variable for a
minimization problem is reversed, that is choose the non-basic
variable with the largest positive coefficient in the objective
function row of the simplex tableau.
5.2 Linear Programming

Simplex method
Next, one of the current basic variables must be chosen to become
non-basic. The selection of the leaving variable is governed by the
feasibility condition to ensure that only feasible solutions are
enumerated during the course of the iterations.
Identify the coefficients which are positive in the column of the
entering variable (the pivot column), excluding the objective function
row of the current solution. If all these coefficients are non-positive,
the problem has an unbounded solution.
Compute the ratio of the RHS (solution column) to the positive
coefficient under the pivot column for each such row.
Pick the row with least of these ratios and mark as the pivot row. The
basic variable in the current solution corresponding to this row in the
simplex table will be the leaving variable. The coefficient which is
common to the pivot row and the pivot column is the pivot
coefficient.
5.2 Linear Programming

Simplex method
The most negative coefficient in the z-row is -5, thus X1 is the entering
variable.
Among the ratios of the RHS (soln column) to the positive pivot-column
coefficients, the least ratio is 5, thus S1 is the leaving variable.
At the intersection of the pivot row and pivot column, the pivot
coefficient is 2.

Basic  X0   X1    X2    S1  S2  S3  Soln  Ratio
X0     1   -5     1     0   0   0    0    -
S1     0    2    -1     1   0   0   10    10/2 = 5   ← pivot row
S2     0    0.4   0.8   0   1   0    4    4/0.4 = 10
S3     0   -2     1     0   0   1    0    -
(pivot column: X1)
5.2 Linear Programming

Simplex method
Next, a row operation (Gauss–Jordan transformation) is applied to update
the variables in the basic and non-basic variable lists.
The new pivot row is obtained by dividing the elements of the old pivot
row by the pivot coefficient:

New pivot row = old pivot row / pivot coefficient

The rows other than the pivot row are transformed as

New row = old row - (pivot-column coefficient) × (new pivot row)

Similarly the new z-row is computed. This completes the first iteration.
This solution would be optimal if all the coefficients of the z-row were
non-negative; otherwise another iteration is performed until this
requirement is met.
5.2 Linear Programming

Simplex method
Row operation (or Gauss Jordan transformation)
The old pivot row = [0 2 -1 1 0 0 10]
The new pivot row = [0 1 -1/2 ½ 0 0 5]
The rows other than the pivot row are transformed as:
The old row 3 = [0 0.4 0.8 0 1 0 4]
New row 3= [0 0.4 0.8 0 1 0 4] – (0.4) [0 1 -1/2 ½ 0 0 5]
= [0 0 1 -0.2 1 0 2]
Results of iteration 1
Basic X0 X1 X2 S1 S2 S3 Soln
X0 1 0 -1.5 2.5 0 0 25

X1 0 1 -0.5 0.5 0 0 5
S2 0 0 1 -0.2 1 0 2

S3 0 0 0 1 0 1 10
5.2 Linear Programming

Simplex method
From the previous result of iteration the most negative coefficient in
the z-row is -1.5, thus the entering variable is X2.
The smallest positive ratio is 2, thus S2 is the leaving variable. The
pivot coefficient is 1. The new pivot row = [0 0 1 -0.2 1 0 2]
After the row operations, the simplex tableau is given below.
The feasible extreme point associated with this tableau is
(X1, X2)=(6,2) and the objective function value is 28. All coefficients
in the z-row are non-negative. Thus optimality achieved.
Results of iteration 2
Basic X0 X1 X2 S1 S2 S3 Soln
X0 1 0 0 2.2 1.5 0 28
X1 0 1 0 0.4 0.5 0 6
X2 0 0 1 -0.2 1 0 2
S3 0 0 0 1 0 1 10
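The tableau iterations above can be sketched in code. A minimal dense-tableau simplex for maximization with all constraints in ≤ form and non-negative right-hand sides (so the slacks form the initial basis), applied to the waste-treatment LP with the constraint 2X1 - X2 ≥ 0 rewritten as -2X1 + X2 ≤ 0. A sketch, not production code:

```python
# Tableau simplex for: Maximize c.x  s.t.  A x <= b, x >= 0, b >= 0.

def simplex(c, A, b, tol=1e-9):
    m, n = len(A), len(c)
    # rows = constraints with appended slack columns, last row = z-row
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])   # z-row: z - c.x = 0
    basis = list(range(n, n + m))                     # slacks are basic
    while True:
        # entering variable: most negative z-row coefficient
        j = min(range(n + m), key=lambda k: T[m][k])
        if T[m][j] >= -tol:
            break                                     # optimal reached
        # leaving variable: minimum ratio test over positive column entries
        ratios = [(T[i][-1] / T[i][j], i) for i in range(m) if T[i][j] > tol]
        if not ratios:
            raise ValueError("unbounded LP")
        _, r = min(ratios)
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]                # new pivot row
        for i in range(m + 1):                        # Gauss-Jordan step
            if i != r and abs(T[i][j]) > tol:
                f = T[i][j]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        basis[r] = j
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[m][-1]

# Max 5*X1 - X2  s.t.  2X1 - X2 <= 10, 0.4X1 + 0.8X2 <= 4, -2X1 + X2 <= 0
x, z = simplex([5.0, -1.0],
               [[2.0, -1.0], [0.4, 0.8], [-2.0, 1.0]],
               [10.0, 4.0, 0.0])
print(x, z)   # [6.0, 2.0] 28.0, matching the result of iteration 2
```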
5.2 Linear Programming

Dual Problem
Each LP problem (called the Primal in this context) is associated with a
counterpart known as the Dual LP problem.
Instead of the primal, solving the dual LP problem is sometimes easier.

Finding the Dual of an LP problem:

Primal                    Dual
Maximization              Minimization
Minimization              Maximization
ith variable              ith constraint
jth constraint            jth variable
For xi ≥ 0 in the primal, the ith dual constraint is of ≤ type if the
dual is a maximization, and of ≥ type if the dual is a minimization.
5.2 Linear Programming

Dual Problem

Finding the Dual of an LP problem:

Primal                                  Dual
ith variable unrestricted               ith constraint with = sign
jth constraint with = sign              jth variable unrestricted
RHS of jth constraint                   Cost coefficient of the jth variable
                                        in the objective function
Cost coefficient of the ith variable    RHS of the ith constraint
in the objective function
5.2 Linear Programming

Dual Problem

Finding Dual of a LP problem


Before finding its dual, all the constraints should be transformed to
‘less-than-equal-to’ or ‘equal-to’ type for maximization problem and
to ‘greater-than-equal-to’ or ‘equal-to’ type for minimization
problem.
It can be done by multiplying with -1 both sides of the constraints,
so that inequality sign gets reversed.
5.2 Linear Programming

Matrix form of the Simplex method

Expressing the standard form of the LP with equality constraints in
matrix form:

Maximize z = CX
subject to (A, I) X = b
X ≥ 0

where I is the m × m identity matrix,
X = (x1, x2, …, x_{n+m})^T,
C = (c1, c2, …, c_{n+m}),

    | a11  a12  …  a1n |        | b1 |
A = | a21  a22  …  a2n |,   b = | b2 |
    |  ⋮    ⋮        ⋮ |        |  ⋮ |
    | am1  am2  …  amn |        | bm |

The elements of b are non-negative in the primal.
Hydrologic DSS
Categories of Optimize
Analytical
5.2 Linear Programming Linear programs
Dynamic programs
Stochastic Dynamic

Matrix form of the Simplex method

Let the vector X be partitioned into X_I and X_II and the vector C be
partitioned into C_I and C_II:

| 1  -C_I  -C_II | | z    |   | 0 |
|                | | X_I  | = |   |
| 0    A     I   | | X_II |   | b |

B = basis of the solution
X_B = vector of the current basic variables
C_B = vector containing the elements of C associated with X_B
Thus the objective z = C_B X_B, and B X_B = b.

Basis   X_I                X_II             RHS
X_B     B⁻¹A               B⁻¹              B⁻¹b
z       C_B B⁻¹A - C_I     C_B B⁻¹ - C_II   C_B B⁻¹b
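The relations X_B = B⁻¹b and z = C_B X_B above can be verified at the optimal basis of the waste-treatment example (a sketch; `solve` is a small Gauss–Jordan helper introduced for this illustration):

```python
# At the optimal basis (X1, X2, S3), B collects the corresponding columns
# of (A, I); solving B * X_B = b then reproduces the simplex result.

def solve(M, rhs):
    """Solve the square system M x = rhs by Gauss-Jordan elimination."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[r][n] / M[r][r] for r in range(n)]

# columns of (A, I) for the basic variables X1, X2, S3
B = [[2.0, -1.0, 0.0],
     [0.4, 0.8, 0.0],
     [-2.0, 1.0, 1.0]]
b = [10.0, 4.0, 0.0]
C_B = [5.0, -1.0, 0.0]          # objective coefficients of X1, X2, S3

X_B = solve(B, b)               # X_B = B^-1 b
z = sum(c * x for c, x in zip(C_B, X_B))
print([round(v, 6) for v in X_B], z)   # [6.0, 2.0, 10.0] 28.0
```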
5.3 Dynamic Programming

Characteristics of a DP problem:
• Multistage decision problem / sequential decisions.
• The output from one stage is the input to the next stage.
• The problem is divided into stages, with a policy decision required at
each stage.
• Each stage has a number of possible states associated with it.
e.g. In the allocation problem, the amount of water available for
allocation at a stage defines the state at that stage.

Single stage (net benefit NB1): input S1 → Stage 1 → output S2, with
decision variable X1.

Serial multistage system (returns Rn, Rn-1, …, R2, R1):
Sn → Stage n → Sn-1 → Stage n-1 → Sn-2 → … → S1 → Stage 1 → S0
with decisions Xn, Xn-1, …, X2, X1 at the respective stages.
5.3 Dynamic Programming

Bellman’s principle of optimality: Given the current state of a system,
the optimal policy (sequence of decisions) for the remaining stages is
independent of the policy adopted in the previous stages.

f_n*(S_n) = max { R_n(x_n) + f_{n-1}*(S_n - x_n) }

Water allocation problem:
A total of 6 units of water is to be allocated optimally to three users:
User 1, User 2 and User 3.

Amount of water    Return from
allocated X        User 3, R3(X)   User 2, R2(X)   User 1, R1(X)
0                  0               0               0
1                  5               5               7
2                  8               6               12
3                  9               3               15
4                  8               -4              16
5                  5               -15             15
6                  0               -30             12
5.3 Dynamic Programming

Stage 1

f1*(S1) = max R1(x1),  0 ≤ x1 ≤ S1,  0 ≤ S1 ≤ Q

S1: amount of water available for allocation to User 1
X1: amount of water allocated to User 1
X1*: allocation to User 1 that results in the return f1*(S1)
f1*(S1): maximum return due to allocation of S1
5.3 Dynamic Programming

S1 X1 R1(x1) f1*(S1)=max(R1(x1)) X1*

0 0 0 0 0

1 0 0 7 1
1 7
2 0 0 12 2
1 7
2 12
3 0 0 15 3
1 7
2 12
3 15
4 0 0 16 4
1 7
2 12
3 15
4 16
5.3 Dynamic Programming

S1 X1 R1(x1) f1*(S1)=max(R1(x1)) X1*

5 0 0 16 4
1 7
2 12
3 15
4 16
5 15
6 0 0 16 4
1 7
2 12
3 15
4 16
5 15
6 12
5.3 Dynamic Programming

Stage 2

f2*(S2) = max { R2(x2) + f1*(S2 - x2) },  0 ≤ x2 ≤ S2,  0 ≤ S2 ≤ Q

S2: amount of water available for allocation to User 2 and User 1 together
X2: amount of water allocated to User 2
S2 - X2: amount of water available for allocation at stage 1 (User 1)
X2*: allocation to User 2 that results in the return f2*(S2)
f2*(S2): maximum return due to allocation of S2
5.3 Dynamic Programming

S2 x2 R2(x2) S2-x2 f1*(S2-x2) R2(x2)+f1*(S2-x2) f2*(S2) X2*

0 0 0 0 0 0 0 0
1 0 0 1 7 7 7 0
1 5 0 0 5
2 0 0 2 12 12 12 0,1
1 5 1 7 12
2 6 0 0 6
3 0 0 3 15 15 17 1
1 5 2 12 17
2 6 1 7 13
3 3 0 0 3
4 0 0 4 16 16 20 1
1 5 3 15 20
2 6 2 12 18
3 3 1 7 10
4 -4 0 0 -4
5.3 Dynamic Programming

S2 x2 R2(x2) S2-x2 f1*(S2-x2) R2(x2)+f1*(S2-x2) f2*(S2) X2*

5 0 0 5 16 16 21 1,2
1 5 4 16 21
2 6 3 15 21
3 3 2 12 15
4 -4 1 7 3
5 -15 0 0 -15
6 0 0 6 16 16 22 2
1 5 5 16 21
2 6 4 16 22
3 3 3 15 18
4 -4 2 12 8
5 -15 1 7 -8
6 -30 0 0 -30
5.3 Dynamic Programming

Stage 3

f3*(S3) = max { R3(x3) + f2*(S3 - x3) },  0 ≤ x3 ≤ S3,  S3 = Q

S3: amount of water available for allocation to User 1, User 2 and
User 3 together = 6 units
X3: amount of water allocated to User 3
S3 - X3: amount of water available for allocation at stage 2 (Users 1
and 2 together)
X3*: allocation to User 3 that results in the return f3*(S3)
f3*(S3): maximum return due to allocation of S3
5.3 Dynamic Programming

S3 x3 R3(x3) S3-x3 f2*(S3-x3) R3(x3)+f2*(S3-x3) f3*(S3) X3*

6 0 0 6 22 22 28 2
1 5 5 21 26
2 8 4 20 28
3 9 3 17 26
4 8 2 12 20
5 5 1 7 13
6 0 0 0 0

When the third stage is solved, all the three users are considered for
allocation, thus the total maximum return is
f3(6)=28
The allocations to the individual users are traced back
From the table for stage 3
X3*=2
5.3 Dynamic Programming

From this the water available for stage 2 is obtained as


S2=Q-x3=6-2=4
From the table for stage 2 with the value of S2 = 4,
X2* = 1
From this the amount of water available for allocation at stage 1 is
obtained as
S1 = S2 - X2* = 4 - 1 = 3
From the table for stage 1 with the value of S1 = 3,
X1* = 3
Thus the optimal allocations are

X1* = allocation to User 1 = 3 units
X2* = allocation to User 2 = 1 unit
X3* = allocation to User 3 = 2 units
Maximum return resulting from the allocations = 28
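The three-stage hand computation above can be sketched as a generic backward-recursion DP over discrete units of water (the helper `allocate` is illustrative, not from the text):

```python
# Backward DP for the water allocation problem: states are the units of
# water still available; f[n][S] is the best return from stages 1..n.

R1 = [0, 7, 12, 15, 16, 15, 12]      # returns R1(x), x = 0..6 (from table)
R2 = [0, 5, 6, 3, -4, -15, -30]
R3 = [0, 5, 8, 9, 8, 5, 0]
Q = 6                                 # total water available

def allocate(returns, Q):
    """returns[n] = return table of stage n+1; gives (best value, allocs)."""
    f = [[0] * (Q + 1)]                           # f_0(S) = 0
    best_x = []
    for R in returns:                             # stages 1, 2, ..., N
        fn, xn = [], []
        for S in range(Q + 1):
            # f_n(S) = max over 0 <= x <= S of R_n(x) + f_{n-1}(S - x)
            val, x = max((R[x] + f[-1][S - x], x) for x in range(S + 1))
            fn.append(val)
            xn.append(x)
        f.append(fn)
        best_x.append(xn)
    # trace back from the last stage with all Q units available
    alloc, S = [], Q
    for n in range(len(returns) - 1, -1, -1):
        x = best_x[n][S]
        alloc.append(x)
        S -= x
    alloc.reverse()                               # alloc[n] = x* of stage n+1
    return f[-1][Q], alloc

value, alloc = allocate([R1, R2, R3], Q)
print(value, alloc)   # 28 [3, 1, 2]: Users 1, 2, 3 get 3, 1, 2 units
```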
5.4 Multi-Objective Planning

Water resources planning is a complex and interdisciplinary problem in
which we may have to consider multiple objectives.
• Some objectives may conflict with each other; for example, a reservoir
project intended to satisfy irrigation, hydropower and recreation.
• The concept of non-inferior (or Pareto-optimal) solutions is basic to
the mathematical framework for multi-objective planning.
A non-inferior solution is one in which no increase in any objective is
possible without a simultaneous decrease in at least one of the other
objectives.
• There is no single optimal solution to a multi-objective problem.
• Determine the non-inferior set and select the best solution from it
(the best compromise solution, or the ‘perfect’ solution).
5.4 Multi-Objective Planning

Consider a problem in which two objectives Z1 and Z2 are to be maximized.
• Let both be functions of a single decision variable x.
(Figure: Z1 and Z2 plotted against x, with maxima at x1 and x2
respectively.)

• Solutions with x < x1 and x > x2 can be eliminated.
• The range x1 < x < x2 is the non-inferior range.
• In this range, it is not possible to increase the value of one
objective function without decreasing that of the other.
5.4 Multi-Objective Planning

Let X be a vector of decision variables, X = (x1, x2, x3, …, xn).
• Zj(X), j = 1, 2, …, p denote the p objectives, each of which is to be
maximized.
• The multi-objective problem is:

Maximize [Z1(x), Z2(x), …, Zp(x)]
subject to gi(x) ≤ bi, i = 1, 2, …, m

(Figure: objective space Z1–Z2 showing non-inferior solutions such as
(Z1(xa), Z2(xb)) on the Pareto frontier.)

Plan formulation:
• Generates a set of non-inferior solutions.
• Two common approaches for formulating an MOP problem:
  • Weighting method
  • Constraint method
5.4 Multi-Objective Planning

Weighting method:
• Attach weights to each objective:

Max Z = w1·Z1 + w2·Z2 + … + wp·Zp
s.t. gi(X) ≤ bi, i = 1, 2, …, m

where wj is a relative (non-negative) weight.

• The weights reflect the trade-off between pairs of objective functions.
• These weights are varied systematically and the model is solved for
each set to generate a set of technically efficient solutions.
• By varying the weights, a wide range of plans is obtained for further
analysis before the best one is selected.
5.4 Multi-Objective Planning

Constraint method:

• One objective is maximized with lower bounds on all the others.

Max Zj (X)
s.t.
gi(X) ≤ bi i = 1, 2, ……, m and
Zk(X) ≥ Lk for all k ≠ j

• Any set of feasible values of Lk resulting in a solution with binding


constraints gives an effective alternative.
• If the constrained method of formulation can be solved using LP, it
is particularly useful to conduct sensitivity analysis to infer the
implied tradeoffs for given right-hand side values of the binding
constraints.
5.4 Multi-Objective Planning

Example 5.4:

• The following two objectives are planned to be maximized.

Max Z1 (X)=5X1 – 4X2


Max Z2 (X)= - 2X1 + 8X2
s.t.
- X1 + X2 ≤ 6
X1 ≤ 12
X1 + X2 ≤ 16
X2 ≤ 8
X1 , X2 ≥ 0
5.4 Multi-Objective Planning

Example 5.4: Using weighting method

Max Z=w1{5X1 – 4X2 }+ w2{ - 2X1 + 8X2 }


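The weighting method for Example 5.4 can be sketched in code: sweep the weights, solve each single-objective LP, and collect the distinct optima, each a non-inferior extreme point. A sketch; since the problem has only two decision variables, each weighted LP is solved by brute-force vertex enumeration rather than a simplex routine:

```python
# Weighting method applied to Example 5.4.
from itertools import combinations

cons = [(-1, 1, 6), (1, 0, 12), (1, 1, 16), (0, 1, 8),
        (-1, 0, 0), (0, -1, 0)]          # a*X1 + b*X2 <= c

def vertices():
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue
        x = (c1 * b2 - c2 * b1) / det    # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield x, y

V = list(vertices())
noninferior = set()
for i in range(1, 20):                   # sweep w1 from 0.05 to 0.95
    w1 = i / 20
    w2 = 1 - w1
    # Z = w1*Z1 + w2*Z2 = w1*(5X1 - 4X2) + w2*(-2X1 + 8X2)
    x, y = max(V, key=lambda p: w1 * (5 * p[0] - 4 * p[1])
                              + w2 * (-2 * p[0] + 8 * p[1]))
    noninferior.add((round(x, 4), round(y, 4)))

print(sorted(noninferior))
# {(2, 8), (8, 8), (12, 4), (12, 0)}: non-inferior extreme points
```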
5.4 Multi-Objective Planning

Example 5.4:

• Using Constraint method the problem can be modified as a single


objective .

Max Z1 (X)=5X1 – 4X2


s.t.
- 2X1 + 8X2 ≥ L2
- X1 + X2 ≤ 6
X1 ≤ 12
X1 + X2 ≤ 16
X2 ≤ 8
X1 , X2 ≥ 0
5.4 Multi-Objective Planning
Example 5.4:

• Any optimal solution for an assumed value of L2 is a non-inferior
solution if the constraint with L2 on the right-hand side is binding.
By varying L2 we get different non-inferior solutions.
6.1 Reservoir Sizing

• To increase the firm yield of a stream, impoundments are built. Need


to develop the storage-yield relationship for a river
• The problem of reservoir sizing involves determination of the required
storage capacity of the reservoir for given inflows and demands in a
sequence of periods.
• The inflow sequence is assumed to repeat, i.e., if the inflow sequence is
a year, the inflow in a given period (within the year) is same in all
years.
• Simplified methods
– Mass curve (Rippl) method
– Sequent peak method
• More complex methods
– Optimization
6.1 Simplified Methods

• Mass curve (Rippl) method


– Graphical estimate of storage required to supply given yield
– Constructed by summing inflows over period of record and
plotting these versus time and comparing to demands

• Time interval includes “critical period”


– Time over which flows reached a minimum
– Causes the greatest drawdown of reservoir
6.1 Rippl method

The required capacity is the maximum cumulative deficit over any window
of the (doubled) record:

K = max Σ(t=i to j) (R_t - Q_t),  where 1 ≤ i ≤ j ≤ 2T
6.1 Rippl Method

Monthly inflows Q(t), cumulative inflows ΣQ(t) and cumulative releases
ΣR(t) for a constant demand R(t) = 9.3:

t    Q(t)  ΣQ(t)  ΣR(t)  |  t    Q(t)  ΣQ(t)  ΣR(t)
Oct  18    18     9.3    |  Apr  1     169    175.8
Nov  22    40     18.5   |  May  0     169    185.0
Dec  17    57     27.8   |  Jun  0     169    194.3
Jan  26    83     37.0   |  Jul  0     169    203.5
Feb  15    98     46.3   |  Aug  0     169    212.8
Mar  32    130    55.5   |  Sep  7     176    222.0
Apr  8     138    64.8   |  Oct  15    191    231.3
May  3     141    74.0   |  Nov  17    208    240.5
Jun  0     141    83.3   |  Dec  25    233    249.8
Jul  0     141    92.5   |  Jan  47    280    259.0
Aug  0     141    101.8  |  Feb  16    296    268.3
Sep  0     141    111.0  |  Mar  18    314    277.5
Oct  5     146    120.3  |  Apr  7     321    286.8
Nov  6     152    129.5  |  May  4     325    296.0
Dec  6     158    138.8  |  Jun  0     325    305.3
Jan  5     163    148.0  |  Jul  1     326    314.5
Feb  3     166    157.3  |  Aug  3     329    323.8
Mar  2     168    166.5  |  Sep  4     333    333.0

(Figure: mass curve — accumulated inflows ΣQ and accumulated releases ΣR
(the demand line) plotted against time; the largest vertical offset
needed to keep the demand line below the mass curve gives the required
capacity K.)
6.1 Sequent Peak Method

       | R_t - Q_t + K_{t-1}   if R_t - Q_t + K_{t-1} > 0
K_t  = |
       | 0                     if R_t - Q_t + K_{t-1} ≤ 0

t          R_t   Q_t   K_{t-1}  K_t
October    9.25  18    0.0      0.0
November   9.25  22    0.0      0.0
December   9.25  17    0.0      0.0
January    9.25  26    0.0      0.0
February   9.25  15    0.0      0.0
March      9.25  32    0.0      0.0
April      9.25  8     0.0      1.3
May        9.25  3     1.3      7.5
…          …     …     …        …
May        9.25  0     81.3     90.5
June       9.25  0     90.5     99.8
July       9.25  0     99.8     109.0
August     9.25  0     109.0    118.3
September  9.25  7     118.3    120.5
October    9.25  15    120.5    114.8
November   9.25  17    114.8    107.0
December   9.25  25    107.0    91.3
…          …     …     …        …
January    9.25  26    45.3     28.5
February   9.25  15    28.5     22.8
March      9.25  32    22.8     0.0

(Figure: inflow Q_t, release R_t and required storage K_t plotted over
the doubled record; the peak of K_t gives the capacity K = 120.5.)
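The sequent peak computation above can be reproduced in a few lines (inflows taken from the Rippl table, demand R = 9.25 per month, record repeated once as the method requires):

```python
# Sequent peak: K_t = max(0, R_t - Q_t + K_{t-1}) over two cycles of the
# record; the required capacity is the largest K_t encountered.

Q = [18, 22, 17, 26, 15, 32, 8, 3, 0, 0, 0, 0,       # water year 1 (Oct-Sep)
     5, 6, 6, 5, 3, 2, 1, 0, 0, 0, 0, 7,             # water year 2
     15, 17, 25, 47, 16, 18, 7, 4, 0, 1, 3, 4]       # water year 3
R = 9.25                                             # constant monthly demand

K, K_max = 0.0, 0.0
for q in Q * 2:                      # repeat the record once
    K = max(0.0, R - q + K)
    K_max = max(K_max, K)

print(K_max)   # 120.5, the required capacity from the table
```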
6.1 Minimum Capacity, Given Yield

Optimization, without including evaporation. Storage S_t, inflow Q_t,
release R_t, yield Y, capacity K.

Minimize K
subject to
S_{t+1} = S_t + Q_t - R_t,  t = 1, …, T;  T + 1 ≡ 1
S_t ≤ K,  t = 1, …, T
R_t ≥ Y,  t = 1, …, T

Given a yield Y (e.g. 1.0, 1.5, 2.0, 2.5, 3.0, 4.0), find the required
capacity K; repeating for several yields traces the storage–yield curve.
6.1 Minimum Capacity, Given Yield

Optimization Without including evaporation

The GAMS code for the model is shown below.


sets
  t Time (months) /1*15/;
Parameter Q(t) Inflow
  / 1 5.0,  2 7.0,  3 8.0,  4 4.0,  5 3.0,
    6 3.0,  7 2.0,  8 1.0,  9 3.0, 10 6.0,
   11 8.0, 12 9.0, 13 3.0, 14 4.0, 15 9.0 /;
Scalar Y Yield /5/;
Variables K Capacity;
Positive Variables S(t), Spill(t);
Equations Qbal(t), Cap(t);
Qbal(t).. S(t+1)$(ord(t) lt 15) + S('1')$(ord(t) eq 15)
          =e= S(t) + Q(t) - Spill(t) - Y;
Cap(t)..  S(t) =L= K;
Model Min_K /All/;
Solve Min_K USING LP MINIMIZING K;
file MinK /Min_K.txt/;
put MinK;
put 'Results from Min_K model', put //;
put 'Yield    ', put Y,   put /;
put 'Capacity ', put K.l, put //;
put 't   Inflow  Storage  Release  Yield', put /;
put '    Q(t)    S(t)     Spill(t) Y',    put /;
loop(t, put t.TL, put Q(t), put S.l(t), put Spill.l(t), put Y, put /);
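The LP's answer can be cross-checked without a solver: with a cyclic storage constraint, the minimum capacity for a constant draft equals the largest accumulated deficit of the sequent peak recursion. A quick sanity check, using the inflow series from the GAMS model above:

```python
# Sequent-peak equivalent of the Min_K LP (cyclic steady state).
inflows = [5, 7, 8, 4, 3, 3, 2, 1, 3, 6, 8, 9, 3, 4, 9]
yield_y = 5.0

def min_capacity(q, y):
    k, peak = 0.0, 0.0
    for flow in q + q:             # doubled record handles wrap-around
        k = max(0.0, y - flow + k) # accumulated deficit
        peak = max(peak, k)
    return peak

print(min_capacity(inflows, yield_y))  # 14.0
```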
6.1 Minimum Capacity, Given Yield

Optimization, including evaporation:

    S_{t+1} = S_t + Q_t − R_t − L_t

where
    L_t = losses from the reservoir
    A   = surface area of the reservoir
    e_t = average evaporation rate

Assuming a linear area–storage relationship, A = A_a·S + A_0,

    L_t = (A_a·S_t + A_0)·e_t

and, using the average storage over the period,

    L_t ≈ [A_a·(S_t + S_{t+1})/2 + A_0]·e_t
        = 0.5·A_a·e_t·S_t + 0.5·A_a·e_t·S_{t+1} + A_0·e_t
        = a_t·S_t + a_t·S_{t+1} + b_t

with a_t = 0.5·A_a·e_t and b_t = A_0·e_t.

[Figure: area (km²) versus storage volume (mln m³), with dead storage and
total storage marked; the fitted line has slope A_a = 0.007674 and
intercept A_0 = 160 km².]
6.1 Minimum Capacity, Given Yield

Including evaporation, substitute L_t into the balance:

    S_{t+1} = S_t + Q_t − R_t − L_t
    L_t = a_t·S_t + a_t·S_{t+1} + b_t

    S_{t+1} = S_t + Q_t − R_t − (a_t·S_t + a_t·S_{t+1} + b_t)

    (1 + a_t)·S_{t+1} = (1 − a_t)·S_t + Q_t − R_t − b_t

The model becomes:

    Minimize K
    subject to
        (1 + a_t)·S_{t+1} = (1 − a_t)·S_t + Q_t − R_t − b_t   for all t
        S_t ≤ K                                               for all t
        R_t ≥ D_t                                             for all t
        S_{T+1} = S_1
6.1 Maximum Yield Given Capacity

[Sketch: reservoir with storage S_t, excess release R_t and capacity K.]

Given the capacity K, find the maximum yield Y:

    Maximize Y
    subject to
        S_{t+1} = S_t + Q_t − R_t      t = 1, …, T;  T + 1 ≡ 1
        R_t ≥ Y                        t = 1, …, T
        S_t ≤ K                        t = 1, …, T

The GAMS code for the model is:

SETS t TIME (MONTHS) /1*9/;
PARAMETER Q(t) INFLOW
  / 1 1.0, 2 3.0, 3 3.0, 4 5.0, 5 8.0,
    6 6.0, 7 7.0, 8 2.0, 9 1.0 /;
SCALAR K CAPACITY /8/;
VARIABLES Y Yield;
POSITIVE VARIABLES
  S(t) STORAGE,
  R(t) Excess RELEASE;
EQUATIONS
  Eq1(t) FLOW BALANCE,
  Eq2(t) Release,
  Eq3(t) CAPACITY;
Eq1(t).. S(t+1)$(ord(t) lt 9) + S('1')$(ord(t) eq 9) =E= S(t) + Q(t) - R(t);
Eq2(t).. R(t) =G= Y;
Eq3(t).. S(t) =L= K;
MODEL Max_Y /ALL/;
SOLVE Max_Y USING LP MAXIMIZING Y;
* output
file Spill /Max_Y.txt/;
put Spill;
put ' Results from Max_Y model', put //;
* write yield and capacity
put ' Capacity ', put K,   put /;
put ' Yield    ', put Y.l, put //;
* write inflow, storage, release, and yield in each period
put ' Inflow Storage Excess-Release Yield ', put /;
put '  Q(t)   S(t)       R(t)        Y   ', put /;
loop(t, put Q(t), put S.l(t), put R.l(t), put Y.l, put /);
6.1 Maximum Yield Given Capacity

Result from running the model one time with capacity K = 8, giving
yield Y = 3.6:

    Capacity  8
    Yield     3.6

    Inflow  Storage  Release  Yield
     Q(t)    S(t)     R(t)      Y
      1       3.8      3.6     3.6
      3       1.2      3.6     3.6
      3       0.6      3.6     3.6
      5       0.0      5.0     3.6
      8       0.0      5.8     3.6
      6       2.2      3.6     3.6
      7       4.6      3.6     3.6
      2       8.0      3.6     3.6
      1       6.4      3.6     3.6

[Figure: Q(t), S(t), R(t) and Y plotted over periods 1–9.]
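The maximum yield can also be found without a solver by bisection on Y: a yield is feasible when the sequent-peak capacity it requires does not exceed K. A sketch using the same nine-month record:

```python
inflows = [1.0, 3.0, 3.0, 5.0, 8.0, 6.0, 7.0, 2.0, 1.0]
K = 8.0

def required_capacity(q, y):
    """Sequent-peak capacity needed to sustain a constant yield y."""
    k, peak = 0.0, 0.0
    for flow in q + q:             # cyclic record
        k = max(0.0, y - flow + k)
        peak = max(peak, k)
    return peak

lo, hi = 0.0, sum(inflows) / len(inflows)  # mean flow bounds the yield
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if required_capacity(inflows, mid) <= K:
        lo = mid                   # feasible: try a larger yield
    else:
        hi = mid

print(round(lo, 3))  # 3.6, matching the LP result
```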
6.2 Reservoir Operation

• A reservoir operating policy is a sequence of release decisions in


operational periods, specified as a function of the state of the
system.
• State of the system: storage at beginning of a period; inflow
during the period etc.
• Most common policy implemented in practice –Standard
Operating Policy (SOP)
6.2 Reservoir Operation
• Optimal reservoir operating policy using LP

• Given reservoir of known capacity K, a sequence of inflows, the


reservoir operation problem involves determining the sequence of
releases Rt that optimizes an objective function.
• In general the OF may be a function of the storage volume and/or the
release.
Maximize R
t
t

subject to
S t 1  S t  Qt  Rt  Et  Ot t
Rt  Dt t
St  K t
Rt  0 t
St  0 t
ST 1  S1
6.2 Reservoir Operation

The GAMS code for the model is shown below.


SCALAR K /19500/;
SCALAR S_min /5500/;
SCALAR beg_S /15000/;
SETS t /t1*t12/;
$include River1B_Q_Ave.inc
$include River1B_D.inc
$include River1B_Evap.inc
POSITIVE VARIABLES S(t), R(t);
S.UP(t) = K;
S.LO(t) = S_min;
VARIABLES obj;
EQUATIONS objective, balance(t);
objective..  obj =E= SUM(t, power((R(t)-D(t)),2));
balance(t).. (1+a(t))*S(t) =E= (1-a(t))*beg_S$(ord(t) EQ 1)
                             + (1-a(t))*S(t-1)$(ord(t) GT 1)
                             + Q(t) - R(t) - b(t);
MODEL Reservoir /ALL/;
SOLVE Reservoir USING NLP MINIMIZING obj;
FILE res /River1b.txt/;
PUT res;
PUT "     Storage  Input    Release  Demand"/;
PUT "     (mln.m3) (mln.m3) (mln.m3) (mln.m3)"/;
PUT "t0 ", beg_S/;
LOOP(t, PUT t.TL, S.L(t), Q(t), R.L(t), D(t)/;);
6.2 Reservoir Operation

(1) Inflow data, River1B_Q_Ave.inc:

PARAMETER Q(t) inflow (million m3)
* normal
/ t1   426, t2   399, t3   523, t4   875,
  t5  2026, t6  3626, t7  2841, t8  1469,
  t9   821, t10  600, t11  458, t12  413 /;

(2) Demand data, River1B_D.inc:

Parameter D(t) demand (million m3)
/ t1  1699.5, t2  1388.2, t3  1477.6, t4  1109.4,
  t5   594.6, t6   636.6, t7  1126.1, t8  1092.0,
  t9   510.8, t10  868.5, t11 1049.8, t12 1475.5 /;

(3) Evaporation data, River1B_Evap.inc:

Parameter a(t) evaporation coefficient
/ t1  0.000046044, t2  0.00007674,  t3  0.000180339, t4  0.000391374,
  t5  0.000602409, t6  0.000648453, t7  0.000656127, t8  0.000548691,
  t9  0.0003837,   t10 0.000145806, t11 0.000103599, t12 0.000053718 /;
Parameter b(t) evaporation coefficient
/ t1  1.92, t2  3.2,  t3  7.52, t4 16.32,
  t5 25.12, t6 27.04, t7 27.36, t8 22.88,
  t9 16.00, t10 6.08, t11 4.32, t12 2.24 /;
6.2 Reservoir Operation

The results from running the model using the average inflow
conditions are:
Storage Input Release Demand

t0 15000
t1 13723 426 1700 1700
t2 12729 399 1388 1388
t3 11762 523 1478 1478
t4 11502 875 1109 1109
t5 12894 2026 595 595
t6 15838 3626 637 637
t7 17503 2841 1126 1126
t8 17838 1469 1092 1092
t9 18119 821 511 511
t10 17839 600 869 869
t11 17239 458 1050 1050
t12 16172 413 1476 1476
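Because demand can be met in every month here (the objective Σ(R−D)² reaches zero), the storage column above can be reproduced by simulating the linearized balance with R_t = D_t — a sketch, not a replacement for the NLP:

```python
# Simulate (1+a_t)S_t = (1-a_t)S_{t-1} + Q_t - R_t - b_t with R_t = D_t.
Q = [426, 399, 523, 875, 2026, 3626, 2841, 1469, 821, 600, 458, 413]
D = [1699.5, 1388.2, 1477.6, 1109.4, 594.6, 636.6,
     1126.1, 1092.0, 510.8, 868.5, 1049.8, 1475.5]
a = [0.000046044, 0.00007674, 0.000180339, 0.000391374,
     0.000602409, 0.000648453, 0.000656127, 0.000548691,
     0.0003837, 0.000145806, 0.000103599, 0.000053718]
b = [1.92, 3.2, 7.52, 16.32, 25.12, 27.04,
     27.36, 22.88, 16.00, 6.08, 4.32, 2.24]

S, storages = 15000.0, []
for q, d, at, bt in zip(Q, D, a, b):
    S = ((1 - at) * S + q - d - bt) / (1 + at)  # balance with R = D
    storages.append(S)

print([round(s) for s in storages])
# [13723, 12729, 11762, 11502, 12894, 15838, 17503,
#  17838, 18119, 17839, 17239, 16172]
```

The simulated storages match the model output table month by month, and every value stays within the bounds S_min = 5500 and K = 19500.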
6.2 Reservoir Operation

Typical Rule Curves


Flood Rule Curve: If the reservoir e.g. uses a bottom gate with a maximum capacity Qb,
If S > FRC and S - FRC < Qb then Q = D + (S - FRC) and S = FRC
If S > FRC and S - FRC > Qb then Q = D + Qb and S = S – Qb
(in this case the storage S exceeds the FRC, where D is the target draft and Q is the actual release).
Conservation Rule Curve: If S < CRC, then Q = r*D and S is recomputed with Q = r*D
Dead Storage Curve: If S < DSC, then Q = S + D - DSC and S = DSC
(Q needs to be corrected if it appears to be negative due to evaporation)
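The rules above can be sketched as a single release function; the numeric values in the example call are hypothetical, while FRC, CRC, DSC, Qb and the hedging factor r follow the definitions in the text:

```python
def rule_curve_release(S, D, FRC, CRC, DSC, Qb, r):
    """Release Q and corrected storage for one period, following the
    flood, dead-storage and conservation rules (a sketch)."""
    if S > FRC:                    # flood rule
        if S - FRC <= Qb:          # surcharge fits through the bottom gate
            return D + (S - FRC), FRC
        return D + Qb, S - Qb      # gate capacity Qb limits the release
    if S < DSC:                    # dead storage rule
        Q = max(0.0, S + D - DSC)  # corrected if negative (evaporation)
        return Q, DSC
    if S < CRC:                    # conservation (hedging) rule
        return r * D, S            # storage recomputed with Q = r*D
    return D, S                    # normal operation: meet target draft D

# e.g. storage above the flood rule curve, gate-limited:
print(rule_curve_release(2600, 100, 2500, 1500, 500, 50, 0.5))  # (150, 2550)
```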
[Figure: typical rule curves — full supply level (FSL), flood rule curve,
conservation rule curve and dead storage curve, storage in MCM plotted
against month (Jan–Dec).]
6.2 Reservoir Operation

Typical Rule Curves

Example Use of Reservoir Rule Curve


6.2 Reservoir Operation

Typical Simulated Energy Target curve


6.3 Stochastic Dynamic Programs

The SDP optimization process derives the optimum operating strategy of the
reservoir from Bellman's backward recursive relationship:

    f_j^n(S_j) = opt over X_j of [ B(S_j, S_{j+1}, I_j) + Σ_q P_{p,q}^j · f_{j+1}^{n−1}(S_{j+1}) ]

where
B(Sj, Sj+1, Ij) = cost or contribution of the decision Xj given state Sj at the
initial stage,
fj+1n-1 = accumulated suboptimal cost (or contribution) by optimal operation
of the reservoir over the last n-1 stages,
Ij = inflow during period j,
Pp,qj =transition probabilities of inflows (defined previously),
Sj =system state at stage j,
Sj+1=t(Sj,Xj) = state transformation equation,
j = stage, and
Xj = decision taken at stage j.
6.3 Stochastic Dynamic Programs

Storage Discretization

• Classical scheme:       ΔS = Capacity / SDN
• Savaranskiy's scheme:   ΔS = Capacity / (SDN − 2)
• Moran's scheme:         ΔS = Capacity / (SDN − 1)
FIGURE Discrete representation of storage in 1000 m3 (for capacity equal to 16.8
million m3 and SDN = 8). (From Karamouz and Vasiliadis, 1992.)
6.3 Stochastic Dynamic Programs

Flow diagram for the


stochastic dynamic
programming model
6.4 Stochastic Dynamic Programs

Inflow and storage states at different times

    State index   time t   time t+1
    inflow Q        i         j
    storage S       k         l

    State value   time t   time t+1
    inflow Q       Q_it     Q_j,t+1
    storage S      S_kt     S_l,t+1
6.4 Stochastic Dynamic Programs

• According to the storage continuity, it can be expressed as


Rkilt = Skt + Qit – Eklt – Sl,t+1

where Rkilt is the reservoir release corresponding to the initial reservoir


storage Skt, the final reservoir storage Sl,t+1, and the evaporation loss Eklt.

• The loss Eklt, depends on the initial and final reservoir storages, Skt and
Sl,t+1

• Since the inflow Q is a random variable, the reservoir storage and the
release are also random variables

• The system performance measure depends on the state of the system


defined by the storage class intervals k and l, and the inflow class
interval i for the period t.
6.4 Stochastic Dynamic Programs

• System performance measure for a period t is denoted as Bkilt which


corresponds to an initial storage state k, inflow state i, and final storage
state l in period t.
• The system performance measures can be:
– For example: Amount of hydropower generated when a release of
Rkilt is made from the reservoir, and the reservoir storages (which
determine the head available for power generation) at the beginning
and end of the period are respectively Skt and Sl,t+1.
• Following backward recursion, the computations are assumed to start at
the last period T of a distant year in the future and proceed backwards
6.4 Stochastic Dynamic Programs

• Each time period denotes a stage in the dynamic programming i.e., n = 1


when t=T; n=2 when t = T - 1, etc.
• The index t takes values from T to 1, and the index n progressively
increases with the stages in the SDP.

    Time periods:  t = T−1   t = T    t = 1   t = 2    …   t = T−1   t = T
    Stages:        n = T+2   n = T+1  n = T   n = T−1  …   n = 2     n = 1

• Let fnt (k, i) denote the maximum expected value of the system
performance measure up to the end of the last period T (i.e. for periods t,
t + 1, ..., T), when n stages are remaining, and the time period
corresponds to t.
6.4 Stochastic Dynamic Programs

• Let the time period be monthly


• The index t takes values from 12 to 1, and the index n progressively
increases with the stages in the SDP.

    Time periods:  t = 11   t = 12   t = 1    t = 2    …   t = 11   t = 12
    Stages:        n = 14   n = 13   n = 12   n = 11   …   n = 2    n = 1
6.4 Stochastic Dynamic Programs

• Let the time period be biannual.


• The index t takes values from 2 to 1, and the index n progressively
increases with the stages in the SDP.

    Time periods:  t = 1   t = 2   t = 1   t = 2   t = 1   t = 2
    Stages:        n = 6   n = 5   n = 4   n = 3   n = 2   n = 1
6.4 Stochastic Dynamic Programs

• With only one stage remaining (i.e. n = 1 and t = T),

    f_1^T(k, i) = Max over feasible l of [ B_kil^T ]    for all (k, i)

• For a given k and i, only those values of l are feasible that result in a
non-negative value of release, R_kilt.
• Since this is the last period in the computation, the performance measure
B_kilt is determined with certainty for the known values of k, i and l.
• When we move to the next stage (n = 2, t = T − 1), the maximum value of
the expected performance of the system is written as

    f_2^{T−1}(k, i) = Max over feasible l of [ B_kil^{T−1} + Σ_j P_ij^{T−1} · f_1^T(l, j) ]    for all (k, i)
6.4 Stochastic Dynamic Programs

• When the computations are carried out for stage 2, period T - 1, the
inflow during the period is known.
• Inflow during the succeeding period T is also needed since we are
interested in obtaining the maximum expected system performance up
to the end of the last period T.
• Since this is not known with certainty, the expected value of the system
performance is got by using the inflow transition probability PijT-1 for
the period T - 1.
• The term within the summation denotes the maximized expected value
of the system performance up to the end of the last period T, when the
inflow state during the period T - 1 is i.
6.4 Stochastic Dynamic Programs

• The search for the optimum value of the performance is made over
the end-of-the period storage l.
• Since f1T(k, i) is already determined in stage 1, for all values of k and
i, f2T-1(k, i) given by above equation may be determined.
• The term {feasible l}, indicates that the search is made only over
those end-of-the-period storages which result in a non-negative
release Rkilt or satisfy any other constraints.
• The relationship may be generalized for any stage n and period t as

    f_n^t(k, i) = Max over feasible l of [ B_kil^t + Σ_j P_ij^t · f_{n−1}^{t+1}(l, j) ]    for all (k, i)
6.4 Stochastic Dynamic Programs

• Solving the equation recursively will yield a steady state policy within a
few annual cycles, if the inflow transition probabilities Pijt are assumed
to remain the same every year, which implies that the reservoir inflows
constitute a stationary stochastic process.
• In general, the steady state is reached when the expected annual system
performance, [f_{n+T}^t(k, i) − f_n^t(k, i)], remains constant for all
values of k, i, and t.
• When the steady state is reached, the optimal end-of-the-period storage
class intervals, l, are defined for given k and i for every period t in the
year.
• This defines the optimal steady state policy and is denoted by l*(k, i, t).
6.4 Stochastic Dynamic Programs

• Example
• Obtain steady state policy for the following data, when the objective is
to minimize the expected value of the sum of the square of deviations
of release and storage from their respective targets, over a year with
two periods. Neglect evaporation loss. If the release is greater than the
release target, the deviation is set to zero. Target storage, Ts = 30;
target release, Tr = 30; B_kilt = (R_kilt − Tr)² + (S_kt − Ts)²
Inflow transition probabilities:

    For t = 1: p_ij^1             For t = 2: p_ij^2
          j = 1   j = 2                 j = 1   j = 2
    i=1    0.5     0.5            i=1    0.4     0.6
    i=2    0.3     0.7            i=2    0.8     0.2

Inflow and storage class values:

    For period 1              For period 2
    i  Q_it    k  S_kt        i  Q_it    k  S_kt
    1   15     1   30         1   35     1   20
    2   25     2   40         2   45     2   30
6.4 Stochastic Dynamic Programs

• Solution: the system performance measure B_kilt:
  Target storage, Ts = 30; target release, Tr = 30;
  B_kilt = (R_kilt − Tr)² + (S_kt − Ts)²
  The B_kilt values for all k, i, l, and t are tabulated below.
6.4 Stochastic Dynamic Programs

• Solution: B_kilt values for all k, i, l, and t, with

    R_kilt = S_kt + Q_it − E_kilt − S_l,t+1

For period 1:
    k  S_kt  i  Q_it  l  S_l,t+1  E_kilt  R_kilt  (S_kt−Ts)²  (R_kilt−Tr)²  B_kilt
    1   30   1   15   1    20       0       25        0           25           25
    1   30   1   15   2    30       0       15        0          225          225
    1   30   2   25   1    20       0       35        0            0            0
    1   30   2   25   2    30       0       25        0           25           25
    2   40   1   15   1    20       0       35      100            0          100
    2   40   1   15   2    30       0       25      100           25          125
    2   40   2   25   1    20       0       45      100            0          100
    2   40   2   25   2    30       0       35      100            0          100

For period 2:
    k  S_kt  i  Q_it  l  S_l,t+1  E_kilt  R_kilt  (S_kt−Ts)²  (R_kilt−Tr)²  B_kilt
    1   20   1   35   1    30       0       25      100           25          125
    1   20   1   35   2    40       0       15      100          225          325
    1   20   2   45   1    30       0       35      100            0          100
    1   20   2   45   2    40       0       25      100           25          125
    2   30   1   35   1    30       0       35        0            0            0
    2   30   1   35   2    40       0       25        0           25           25
    2   30   2   45   1    30       0       45        0            0            0
    2   30   2   45   2    40       0       35        0            0            0
6.4 Stochastic Dynamic Programs

• n = 1, t = 2:

    f_1^2(k, i) = Min over feasible l of [ B_kil^2 ]    for all (k, i)

    k  i   B_kil^2 (l=1)  B_kil^2 (l=2)   f_1^2(k,i)   l*
    1  1       125            325            125        1
    1  2       100            125            100        1
    2  1         0             25              0        1
    2  2         0              0              0       1, 2

• n = 2, t = 1:

    f_2^1(k, i) = Min over feasible l of [ B_kil^1 + Σ_j p_ij^1 · f_1^2(l, j) ]    for all (k, i)

    k = 1, i = 1, l = 1:   25 + 0.5·125 + 0.5·100 = 137.5
    k = 1, i = 1, l = 2:  225 + 0.5·0   + 0.5·0   = 225
    k = 1, i = 2, l = 1:    0 + 0.3·125 + 0.7·100 = 107.5
    k = 1, i = 2, l = 2:   25 + 0.3·0   + 0.7·0   = 25
    k = 2, i = 1, l = 1:  100 + 0.5·125 + 0.5·100 = 212.5

    k  i   value (l=1)   value (l=2)   f_2^1(k,i)   l*
    1  1     137.5           225          137.5      1
    1  2     107.5            25           25        2
    2  1     212.5           125          125        2
    2  2     207.5           100          100        2
6.4 Stochastic Dynamic Programs

• n = 3, t = 2:

    f_3^2(k, i) = Min over feasible l of [ B_kil^2 + Σ_j p_ij^2 · f_2^1(l, j) ]    for all (k, i)

    k = 1, i = 1, l = 1:  125 + 0.4·137.5 + 0.6·25  = 195
    k = 1, i = 1, l = 2:  325 + 0.4·125 + 0.6·100   = 435

    k  i   value (l=1)   value (l=2)   f_3^2(k,i)   l*
    1  1      195            435           195       1
    1  2      215            245           215       1
    2  1       70            135            70       1
    2  2      115            120           115       1

• n = 4, t = 1:

    f_4^1(k, i) = Min over feasible l of [ B_kil^1 + Σ_j p_ij^1 · f_3^2(l, j) ]    for all (k, i)

    k  i   value (l=1)   value (l=2)   f_4^1(k,i)   l*
    1  1      230           317.5          230        1
    1  2      209           126.5         126.5       2
    2  1      305           217.5         217.5       2
    2  2      309           201.5         201.5       2

• n = 5, t = 2:

    f_5^2(k, i) = Min over feasible l of [ B_kil^2 + Σ_j p_ij^2 · f_4^1(l, j) ]    for all (k, i)

    k  i   value (l=1)   value (l=2)   f_5^2(k,i)   l*
    1  1     292.9          532.9         292.9       1
    1  2     309.3          339.3         309.3       1
    2  1     167.9          232.9         167.9       1
    2  2     209.3          214.3         209.3       1
6.4 Stochastic Dynamic Programs

• n = 6, t = 1:  f_6^1(k, i) = Min over feasible l of [ B_kil^1 + Σ_j p_ij^1 · f_5^2(l, j) ]

    k  i   value (l=1)   value (l=2)   f_6^1(k,i)   l*
    1  1     326.1          413.6         326.1       1
    1  2     304.3          221.8         221.8       2
    2  1     401.1          313.6         313.6       2
    2  2     404.3          296.8         296.8       2

• n = 7, t = 2:  f_7^2(k, i) = Min over feasible l of [ B_kil^2 + Σ_j p_ij^2 · f_6^1(l, j) ]

    k  i   value (l=1)   value (l=2)   f_7^2(k,i)   l*
    1  1     388.5          628.5         388.5       1
    1  2     405.2          435.2         405.2       1
    2  1     263.5          328.5         263.5       1
    2  2     305.2          310.2         305.2       1

• n = 8, t = 1:  f_8^1(k, i) = Min over feasible l of [ B_kil^1 + Σ_j p_ij^1 · f_7^2(l, j) ]

    k  i   value (l=1)   value (l=2)   f_8^1(k,i)   l*
    1  1     421.9          509.4         421.9       1
    1  2     400.2          317.7         317.7       2
    2  1     496.9          409.4         409.4       2
    2  2     500.2          392.7         392.7       2
6.4 Stochastic Dynamic Programs

• The computations are terminated after this stage because it is


verified that the annual system performance measure remains
constant.
f81 (1,1) – f61 (1,1) = 421.9 – 326.1 = 95.8
    n = 6, t = 1                  n = 8, t = 1
    k  i   f_6^1(k,i)   l*        k  i   f_8^1(k,i)   l*
    1  1    326.10       1        1  1    421.91       1
    1  2    221.88       2        1  2    317.75       2
    2  1    313.60       2        2  1    409.41       2
    2  2    296.88       2        2  2    392.75       2

    Steady state policy for period 1    Steady state policy for period 2
    k  i  l*                            k  i  l*
    1  1  1                             1  1  1
    1  2  2                             1  2  1
    2  1  2                             2  1  1
    2  2  2                             2  2  1
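The whole backward recursion of this example fits in a short Python sketch (data and targets as given above; an illustration, not part of the original notes):

```python
# Backward recursion for the two-period SDP example: minimize expected
# squared deviations of release and storage from their targets (the
# release deviation counts as zero when the release exceeds its target).
Ts, Tr = 30, 30
Q = {1: {1: 15, 2: 25}, 2: {1: 35, 2: 45}}   # inflow class values Q_it
S = {1: {1: 30, 2: 40}, 2: {1: 20, 2: 30}}   # storage class values S_kt
P = {1: {1: {1: 0.5, 2: 0.5}, 2: {1: 0.3, 2: 0.7}},   # p_ij^t
     2: {1: {1: 0.4, 2: 0.6}, 2: {1: 0.8, 2: 0.2}}}

def B(t, k, i, l):
    """System performance B_kilt (evaporation neglected)."""
    t_next = 2 if t == 1 else 1
    R = S[t][k] + Q[t][i] - S[t_next][l]     # release from continuity
    dev_r = 0 if R > Tr else (R - Tr) ** 2
    return dev_r + (S[t][k] - Ts) ** 2

f = {(k, i): 0.0 for k in (1, 2) for i in (1, 2)}  # zero terminal value
t, policy = 2, {}
for n in range(1, 9):            # stages n = 1..8, periods t = 2,1,2,1,...
    new_f, policy = {}, {}
    for k in (1, 2):
        for i in (1, 2):
            vals = {l: B(t, k, i, l)
                       + sum(P[t][i][j] * f[(l, j)] for j in (1, 2))
                    for l in (1, 2)}
            best = min(vals, key=vals.get)
            new_f[(k, i)], policy[(k, i)] = vals[best], best
    f, t = new_f, (2 if t == 1 else 1)

print(f[(1, 1)])  # f_8^1(1,1) ≈ 421.9, matching the table
print(policy)     # steady-state l* for period 1
```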
7.1 River Basin Modelling

• Hydrological models (HMs) are simplified representations of a complex
system (physical, analog or mathematical).
• They simulate selected, but not all, characteristics of the system.
• A mathematical model represents the system by a set of equations
expressing the relationships between system variables and parameters.
7.1 Hydrologic modeling

Background on Hydrological Models:

What is a Model?
• Analog models: use electricity, e.g. Ohm's law as an analogue of
Darcy's law
• Physical models: constructed in the laboratory
• Numerical/mathematical models: the phenomenon is described by a set of
mathematical equations.
7.1 Hydrologic modeling

• Two categories of engineering problems:
(a) forecasting, or the estimation of when some hydrological event
will occur;
(b) frequency prediction, or the estimation of how often an event will
occur.
• Case (a) arises most directly in the operation of hydrological controls.
• Case (b) is associated, typically, with the design rather than the
operation of such controls.
7.1 Hydrologic modeling

Classification of hydrological models:

• Process-description based:
  – Physically based
  – Black box
  – Conceptual
• Spatial-scale based:
  – Lumped: by catchment size (large, medium, small)
  – Distributed: by method of disaggregation (sub-catchment, grid, HRU)
• Time-scale based:
  – Variable time step: event based, continuous
  – Fixed time step: intra-daily, daily, weekly/monthly
7.1 Hydrologic modeling

Classification by method of solution:

• Numerical: finite difference, finite element, boundary element,
boundary-fitted coordinate, mixed
• Analog
• Analytical
7.1 Hydrologic modeling

• Hydrologic modelling — process description:
  • Black box (systems) type models: based on a systems-theoretical
    approach (linear, time-invariant)
  • Conceptual type models: attempt to represent major/certain
    components of the hydrological cycle
  • Physically based models: the parameters of the model provide a
    sound description of the hydrological processes
7.3 Conceptual models

Conceptual type models

• Generally composed of a number of simplified, interconnected conceptual
elements.
• The elements are used to represent the significant or dominant
constituent hydrologic processes, in the light of our conceptual
understanding of these processes.
• Each conceptual element simulates the effect of one or more of the
constituent processes through empirical and assumed functions which are,
hopefully, physically realistic, or at least physically plausible.
• Conceptual rainfall-runoff models were introduced in the 1960s, in an
attempt at a more physical representation of the hydrological processes.
7.3 Conceptual models

Conceptual tank model

• A simple conceptual rainfall-runoff model developed in Japan


(see Sugawara, 1978)
• Physical processes are represented in an artificial way as a
collection of vertical series of tanks.
• In this model of runoff mechanism, the upper part of the first
tank takes care of direct runoff, while the lower part of the
same tank represents the interflow, and the second tank
corresponds to the base or groundwater flow.
• By varying the number, position and size of outlets of each
tank, different types of model configurations may be
constructed.
7.3 Conceptual models

Conceptual tank model

• A simple conceptual rainfall-runoff model developed in Japan


(see Sugawara, 1978)
• There are many tank models proposed by different authors.
• Here, a simpler model (called SIRT, the Simplified IHE Rainfall-runoff
Tank model), consisting of two tanks with double storage and triple
outflow, is used.
• The model has 8 unknown parameters that have to be identified through
the process of calibration.
• These parameters can be divided according to the tank: tank A: k1, k2,
k3, d1, d2 and s1; tank B: k4 and s2.
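A minimal Python sketch of such a two-tank structure follows. The outlet arrangement (direct runoff above height d1, interflow above d2, percolation k3 to tank B, baseflow k4) is assumed from the description above, and the parameter values in the example call are hypothetical; s1 and s2 would be the initial storages of tanks A and B:

```python
def tank_step(sa, sb, rain, k1, k2, k3, k4, d1, d2):
    """One time step of a two-tank rainfall-runoff sketch: tank A gives
    direct runoff (above d1) and interflow (above d2) and percolates to
    tank B, which gives the base (groundwater) flow."""
    sa += rain
    q_direct = k1 * max(0.0, sa - d1)  # upper outlet: direct runoff
    q_inter = k2 * max(0.0, sa - d2)   # lower outlet: interflow
    perc = k3 * sa                     # percolation to tank B
    sa -= q_direct + q_inter + perc
    sb += perc
    q_base = k4 * sb                   # baseflow outlet of tank B
    sb -= q_base
    return sa, sb, q_direct + q_inter + q_base

# mass balance check over one step (hypothetical parameter values):
sa, sb, q = tank_step(10.0, 5.0, 20.0, 0.2, 0.1, 0.05, 0.1, 15.0, 5.0)
print(abs((10 + 5 + 20) - (sa + sb + q)) < 1e-9)  # True
```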
7.3 Conceptual models

Conceptual tank model


7.3 Conceptual models

Conceptual tank model

Lower and upper boundary possibilities on a certain catchment


7.3 Conceptual models

Conceptual SMAR model


7.3 Conceptual models

Conceptual SMAR - Soil Moisture Accounting and Routing


Parameter  Description                                                    Range
Z          The combined water storage depth capacity of the layers (mm)  25 – 125
T          A parameter (less than unity) that converts the given         0.5 – 1
           evaporation series to the model-estimated potential
           evaporation series
C          The evaporation decay parameter, facilitating lower           0.5 – 1
           evaporation rates from the deeper soil moisture storage
           layers
H          The generated 'direct runoff' coefficient                     0 – 1
Y          The maximum infiltration capacity depth (mm/time step)        10 – 100
N          The shape parameter of the Nash gamma function 'surface       1 – 10
           runoff' routing element; a routing parameter
nK         The scale (lag) parameter of the Nash gamma function          1 – 10
           'surface runoff' routing element; a routing parameter
G          The weighting parameter, determining the amount of            0 – 1
           generated 'groundwater' used as input to the 'groundwater'
           routing element
Kg         The storage coefficient of the 'groundwater' (linear          1 – 200
           reservoir) routing element; a routing parameter
7.4 Physically based models

Physically based models

• Based on complex laws of physics.


• Generally expressed as systems of non-linear partial differential
equations.
• Mainly introduced to provide a scientifically sound basis for addressing
many of the pivotal hydrological and environmental problems, such as
land-use change due to human influence, and the hazards of pollution and
toxic waste disposal.
• These problems cannot be addressed adequately by traditional models.
• The physical nature of the parameters of these models would allow their
estimation from field measurements and also their use in the case of
ungauged catchments.
• The physically-based distributed models attempt to describe the
internal mechanisms governing the overall catchment response
based on our understanding of the physics of the constituent
hydrological processes.
7.5 Approach to Hydrologic modeling

Typical modelling protocol (flow chart):

1. Define purpose
2. Develop the conceptual model (supported by field data)
3. Code selection — if no existing code is suitable, develop one:
   numerical formulation → computer program → code verification
4. Model construction (supported by field data)
5. Set performance criteria
6. Calibration (comparison with field data)
7. Validation
8. Simulation
9. Presentation of results
10. Post audit (against new field data)
7.5 Approach to Hydrologic modeling

Calibration of parameters
7.5 Approach to Hydrologic modeling

Calibration and model parameters


TECHNIQUES OF MODEL FITTING
Model parameters in a conceptual quasi physical model may be
fitted by either of the following methods or by a combination of
these.
• Direct measurement
• Trial and error
• Automatic optimisation
• Combination method
7.5 Approach to Hydrologic modeling

Automatic parameter optimisation


• Search through many different combinations of parameter
values
• To initiate the procedure, starting values are assigned,
• the simulation program is run and the objective function value is
evaluated
• Incremental changes are then made to one or more parameters
• the simulation program is run again and the objective function is
re-evaluated
• most appropriate direction of search for the next trial is decided.
• The search continues in subsequent trials until a stopping (i.e.
termination) criterion is fulfilled.
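The loop described above can be sketched as a crude coordinate search; the simulation model and objective in the example are hypothetical placeholders for a real rainfall-runoff model:

```python
def calibrate(simulate, observed, params, step=0.1, max_iter=200, tol=1e-6):
    """Trial-and-error search: perturb one parameter at a time, keep
    improvements, halve the step when no move helps, and stop on the
    iteration limit or the step-size tolerance."""
    def sse(p):                      # objective: sum of squared errors
        return sum((s - o) ** 2 for s, o in zip(simulate(p), observed))
    best = sse(params)
    for _ in range(max_iter):        # termination: maximum iterations
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):   # incremental change
                trial = list(params)
                trial[i] += delta
                val = sse(trial)
                if val < best - tol:      # objective improved
                    params, best, improved = trial, val, True
        if not improved:
            step *= 0.5              # refine the search step
            if step < tol:           # termination: step tolerance
                break
    return params, best

# toy example: recover the slope of a one-parameter linear "model"
model = lambda p: [p[0] * x for x in (1.0, 2.0, 3.0)]
obs = [0.7, 1.4, 2.1]
p, err = calibrate(model, obs, [0.0])
print(round(p[0], 3))  # 0.7
```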
7.5 Approach to Hydrologic modeling

Terminating criteria in an optimising algorithm

Criteria to make an optimising algorithm stop are essential in any
optimising routine. These can be:
• the maximum number of iterations provided for
• a tolerance limit for convergence of the objective function
• a tolerance limit for convergence of the size of the iterating
parameter step.
When any one of the above criteria is met, the optimisation program is
made to stop.
7.5 Approach to Hydrologic modeling

Equifinality and the GLUE procedure

GLUE- Generalized Likelihood Uncertainty Estimation


• Different sets of parameter values may produce equally likely
simulation results- equifinality principle
• There is not always a single optimal, or unique parameter set
• Starting with a prior estimate of a particular parameter set and
assuming it to be the true set
• Assuming further that the sensitivity of different parameters is
equal
• Bayesian procedure is used to update the parameter set to a set of
likelihood values
7.5 Approach to Hydrologic modeling

Sensitivity Analysis
7.7 Application of Hydrologic models

Application of Some Hydrological models

• ArcSWAT
• SMAR
• HECGEOHMS/HecHMS
• Tank model
• Topmodel
7.7 Application of Hydrologic models

ARC SWAT model


7.7 Application of Hydrologic models

Nile Basin: Change in precipitation due to climate change


GAMS

GAMS = General Algebraic Modeling System

Detailed Information in:

GAMS Guide and Tutorials


Brooke, A., D. Kendrick, A. Meeraus, and R. Raman
(2006). GAMS Language Guide. Gams Development
Corporation. Washington D.C.

GAMS website
www.gams.com
GAMS

Start GAMS by selecting:


Start  All Programs GAMS GAMSIDE
GAMS

Choose from the GAMSIDE:


File  Project New project
GAMS

Write a GAMS model and solve the following nonlinear


program using GAMS

𝑀𝑎𝑥 𝑍 = 𝑥1 + 2𝑥3 + 𝑥2 𝑥3 − (𝑥12 + 𝑥22 + 𝑥32 )


GAMS

Enter GAMS Code


𝑀𝑎𝑥 𝑍 = 𝑥1 + 2𝑥3 + 𝑥2 𝑥3 − (𝑥12 + 𝑥22 + 𝑥32 )

VARIABLES Z, X1, X2, X3;


EQUATIONS F ;
F.. Z =E= X1+2*X3+X2*X3-X1*X1-X2*X2-X3*X3 ;
MODEL HW41 /ALL/;
SOLVE HW41 USING NLP MAXIMIZING Z;
FILE res /Hydro1.txt/;
PUT res;
put "Solution X1 = ", put X1.L, put /;
put " X2 = ", put X2.L, put /;
put " X3 = ", put X3.L, put /;
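The solver's answer can be checked by hand: setting the gradient of Z to zero gives x1 = 1/2, x2 = 2/3, x3 = 4/3 and Z = 19/12. Since Z is concave, a short gradient-ascent sketch in Python converges to the same point (the step size is an assumption chosen for this quadratic):

```python
# Max Z = x1 + 2*x3 + x2*x3 - (x1^2 + x2^2 + x3^2); Z is concave, so
# simple gradient ascent converges to the unique stationary point.
def grad(x1, x2, x3):
    return (1 - 2 * x1, x3 - 2 * x2, 2 + x2 - 2 * x3)

x = [0.0, 0.0, 0.0]
for _ in range(2000):
    g = grad(*x)
    x = [xi + 0.1 * gi for xi, gi in zip(x, g)]

Z = x[0] + 2 * x[2] + x[1] * x[2] - sum(xi ** 2 for xi in x)
print([round(v, 4) for v in x], round(Z, 4))
# [0.5, 0.6667, 1.3333] 1.5833
```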
GAMS

Example: the GAMS code to solve the waste treatment plant problem is

* Declare variables
Variables Z;
Positive variables X1, X2;
* Declare equations
Equations obj, C1, C2, C3;
* Define equations
obj.. Z =e= 5*X1 - X2;
C1..  2*X1 - X2 =l= 10;
C2..  0.4*X1 + 0.8*X2 =l= 4;
C3..  2*X1 - X2 =g= 0;
Model WSTPLNT /All/;
Solve WSTPLNT USING LP MAXIMIZING Z;
Display X1.l, X2.l, Z.l;

Relational operators:
=E= (equality)
=G= (greater than or equal to)
=L= (less than or equal to)
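For a two-variable LP the optimum can be verified by enumerating the intersections of constraint pairs — a brute-force cross-check of the GAMS result, not a general LP method:

```python
from itertools import combinations

# constraints written as a*x1 + b*x2 <= c
cons = [
    (2.0, -1.0, 10.0),  # plant capacity: 2x1 - x2 <= 10
    (0.4, 0.8, 4.0),    # discharge limit: 0.4x1 + 0.8x2 <= 4
    (-2.0, 1.0, 0.0),   # 2x1 - x2 >= 0, negated
    (-1.0, 0.0, 0.0),   # x1 >= 0
    (0.0, -1.0, 0.0),   # x2 >= 0
]

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                    # parallel pair, no vertex
    x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 5 * x1 - x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

print(best)  # ≈ (28.0, 6.0, 2.0): Z = 28 at X1 = 6, X2 = 2
```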
GAMS

Example: the GAMS code to solve the waste treatment plant problem,
writing the results to a file, is

Variables Z;
Positive variables X1, X2;
Equations obj, C1, C2, C3;
obj.. Z =e= 5*X1 - X2;
C1..  2*X1 - X2 =l= 10;
C2..  0.4*X1 + 0.8*X2 =l= 4;
C3..  2*X1 - X2 =g= 0;
Model WSTPLNT /All/;
Solve WSTPLNT USING LP MAXIMIZING Z;
file ans /treat1.txt/;
put ans;
put ' Results of optimization of treatment plant ', put //;
put ' Goods         ', put X1.l, put /;
put ' Waste         ', put X2.l, put /;
put ' Optimum value ', put Z.l,  put //;
GAMS
Example: the GAMS code to solve the allocation problem is

SETS i /1, 2, 3/;
SCALAR r RELEASE /10.0/;
PARAMETER
  a(i) /1 6.0,  2 7.0,  3 8.0/
  b(i) /1 -1.0, 2 -1.5, 3 -0.5/;
VARIABLES obj OBJECTIVE;
POSITIVE VARIABLES x(i) USE, s DOWNSTREAM FLOW;
s.lo = 2.0;
EQUATIONS objective, cap;
objective.. obj =E= SUM(i, a(i)*x(i) + b(i)*x(i)**2);
cap..       sum(i, x(i)) + s - r =E= 0.0;
MODEL user /ALL/;
SOLVE user USING NLP MAXIMIZING obj;
FILE res /WaterUser.txt/;
PUT res;
PUT 'Release         ', PUT r,     PUT /;
PUT 'Downstream flow ', PUT s.l,   PUT /;
PUT 'Objective       ', PUT obj.l, PUT //;
PUT 'i    x(i) ', PUT /;
loop((i), PUT i.TL, PUT x.l(i), PUT /);
PUT //, 'd(obj)/dr = ', PUT cap.m, PUT //;

The result for a release of R = 10 is:

Release          10.00
Downstream flow   2.00
Objective        41.41
i    x(i)
1    1.55
2    1.36
3    5.09
d(obj)/dr = 2.91
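The NLP result can be reproduced in closed form: at the optimum all users' marginal benefits a_i + 2·b_i·x_i equal the shadow price of water (reported by GAMS as cap.m), and the minimum downstream flow of 2.0 is binding. A Python check:

```python
# Equal-marginal-benefit allocation: maximize sum a_i*x_i + b_i*x_i^2
# subject to sum(x) = release - downstream minimum.
a = [6.0, 7.0, 8.0]
b = [-1.0, -1.5, -0.5]
avail = 10.0 - 2.0                 # release minus 2.0 downstream minimum

# x_i = (lam - a_i)/(2*b_i); substituting into sum(x) = avail gives lam:
lam = (avail + sum(ai / (2 * bi) for ai, bi in zip(a, b))) \
      / sum(1.0 / (2 * bi) for bi in b)
x = [(lam - ai) / (2 * bi) for ai, bi in zip(a, b)]
obj = sum(ai * xi + bi * xi ** 2 for ai, bi, xi in zip(a, b, x))

print([round(v, 2) for v in x], round(obj, 2), round(lam, 2))
# [1.55, 1.36, 5.09] 41.41 2.91
```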
GAMS

Example: The alternative GAMS code to solve the waste treatment plant problem is

sets C constraints /PLANTCAP, DISCLIMIT, MAXWASTE/
     A amounts     /GOODS, WASTE/;
Parameter CC(A) coefficients of objective function
  /GOODS  5
   WASTE -1/;
Parameter RHS(C) right-hand side of constraints
  /PLANTCAP  10
   DISCLIMIT  4
   MAXWASTE   0/;
TABLE COEFF(C,A) coefficients in constraints
            GOODS  WASTE
PLANTCAP      2     -1
DISCLIMIT    0.4    0.8
MAXWASTE      2     -1  ;
Variables X(A) decision variables
          MAXBENEFIT;
Positive Variable X;
Equations C1 plant capacity constraint
          C2 waste discharge limit
          C3 no more waste constraint
          PROFIT profit of company;
C1..     sum(A, COEFF('PLANTCAP',A)*X(A))  =L= RHS('PLANTCAP');
C2..     sum(A, COEFF('DISCLIMIT',A)*X(A)) =L= RHS('DISCLIMIT');
C3..     sum(A, COEFF('MAXWASTE',A)*X(A))  =G= RHS('MAXWASTE');
PROFIT.. sum(A, CC(A)*X(A)) =E= MAXBENEFIT;
Model WSTPLNT /All/;
Solve WSTPLNT USING LP MAXIMIZING MAXBENEFIT;
DISPLAY X.L, X.M;
5.1 Analytical Optimization
(Categories of optimization: analytical, linear programs, dynamic programs, stochastic dynamic programming)

Example:
Minimize f(x) = x1² + x2² - 4x1 - 4x2 + 8
Subject to  x1 + 2x2 - 4 ≤ 0
            2x1 + x2 ≤ 5

Solution:
∂f/∂x1 = 2x1 - 4,   ∂²f/∂x1² = 2
∂f/∂x2 = 2x2 - 4,   ∂²f/∂x2² = 2
∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = 0

Hf(x) = | 2  0 |
        | 0  2 |

|λI - H| = | λ-2   0  | = 0   gives   λ1 = 2, λ2 = 2.
           |  0   λ-2 |

Since both eigenvalues are positive, f(x) is strictly convex.
Therefore the function -f(x) is concave and can be maximized.
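The eigenvalue test above is easy to confirm numerically — a small check, not part of the original notes:

```python
# The Hessian of f(x) = x1^2 + x2^2 - 4x1 - 4x2 + 8 is constant;
# positive eigenvalues confirm strict convexity.
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
eigvals = np.linalg.eigvalsh(H)
print(eigvals)                 # [2. 2.] -> strictly convex
```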
Example (cont'd):
First convert the problem to the form
  Maximize F(x)
  Subject to g(x) ≤ 0

The original problem is rewritten as
  Maximize -f(x) = -x1² - x2² + 4x1 + 4x2 - 8
  Subject to x1 + 2x2 - 4 ≤ 0   or   x1 + 2x2 - 4 + s1² = 0
             2x1 + x2 - 5 ≤ 0   or   2x1 + x2 - 5 + s2² = 0

L = -x1² - x2² + 4x1 + 4x2 - 8 - λ1(x1 + 2x2 - 4 + s1²) - λ2(2x1 + x2 - 5 + s2²)

∂L/∂x1 = -2x1 + 4 - λ1 - 2λ2 = 0
∂L/∂x2 = -2x2 + 4 - 2λ1 - λ2 = 0
∂L/∂s1 = -2λ1 s1 = 0,   i.e. either λ1 or s1 is zero
∂L/∂s2 = -2λ2 s2 = 0,   i.e. either λ2 or s2 is zero
∂L/∂λ1 = -(x1 + 2x2 - 4 + s1²) = 0
∂L/∂λ2 = -(2x1 + x2 - 5 + s2²) = 0

(i) Assuming λ2 = 0, s1 = 0: x1 = 8/5, x2 = 6/5, λ1 = 4/5 > 0 and s2² = 3/5 > 0.

Here the conditions for a maximum are satisfied. No violations.

(ii) Assume λ1 = 0 and λ2 = 0. Then the simultaneous equations give x1 = x2 = 2,
s1² = -2 (not possible) and s2² = -1 (not possible). This is not a solution to the
problem. Similarly:

(iii) Assume λ1 = 0 and s2 = 0. The equations to be solved are:

-2x1 + 4 - 2λ2 = 0
-2x2 + 4 - λ2 = 0
x1 + 2x2 + s1² = 4
2x1 + x2 = 5

These equations give x1 = 8/5, x2 = 9/5, s1² = -6/5 (not possible) and λ2 = 2/5 > 0.

This is not a solution.



(iv) Assume s1 = 0, s2 = 0. Then

-2x1 + 4 - λ1 - 2λ2 = 0
-2x2 + 4 - 2λ1 - λ2 = 0
x1 + 2x2 = 4
2x1 + x2 = 5

These equations yield λ1 = 4/3 > 0, λ2 = -2/3 (negative). As λ2 < 0, this is not a
solution for a maximum.

Hence solution (i), i.e. x1 = 8/5, x2 = 6/5, is the only solution to the problem.
Thus -f(x) has a maximum of -0.8 at (8/5, 6/5), or f(x) has a minimum of 0.8 at
x = (8/5, 6/5).
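The Kuhn-Tucker solution can be verified numerically with SciPy — a sketch, not in the original notes (SciPy's `ineq` constraints are of the form g(x) >= 0, so both constraints are rearranged accordingly):

```python
# Verify that the constrained minimum of f is 0.8 at (8/5, 6/5).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 - 4*x[0] - 4*x[1] + 8
cons = [{"type": "ineq", "fun": lambda x: 4 - x[0] - 2*x[1]},   # x1 + 2x2 <= 4
        {"type": "ineq", "fun": lambda x: 5 - 2*x[0] - x[1]}]   # 2x1 + x2 <= 5

res = minimize(f, x0=[0.0, 0.0], constraints=cons)
print(res.x, res.fun)          # ~[1.6, 1.2] and 0.8
```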
7.2 System type models
(introduction, hydrologic models, approaches, application)

Artificial neural networks

• ANNs are an attempt at modeling the information processing
capabilities of nervous systems.
• They can be used for either prediction or classification
problems.
• The most popular design for ANNs is the so-called multilayer
feed-forward network.
• Such networks have an input layer, an output layer, and one or
more hidden layers.
• A layer is usually a group of neurons having the same pattern
of connection pathways to the other neurons of adjacent layers.
Each neuron in a particular layer has connection pathways to all
the neurons in the next adjacent layer, but none to those of the
same layer.

Artificial neural networks (cont'd)

[Figure: a 3-2-1 feed-forward network. Input nodes X1, X2, X3 connect to
hidden nodes h1, h2 through weights w11, w21, w12, w22, w13, w23; the hidden
nodes connect to the output node Φ through weights wΦ1, wΦ2; bias nodes feed
the hidden and output layers through weights w10, w20 and wΦ0.]

Artificial neural networks (cont'd)

In rainfall-runoff modelling, the input neurons can be rainfall,
upstream observed flow, evaporation, or output from a simple
model to consider storage.

The weights are applied via transfer functions, commonly the
logistic function, the arc tangent function, and the linear function.

y_out = f( SUM_{i=1..M} wi yi + w0 ) = 1 / [1 + exp(-(SUM_{i=1..M} wi yi + w0))]

where f() denotes the transfer function, wi is the input connection pathway weight, M is the
total number of inputs (which usually equals the number of neurons in the preceding
layer), and w0 is the neuron threshold (or bias), i.e. a base-line value independent of the
input.

The transfer function can also be the tanh function, given as:

y = (e^z - e^(-z)) / (e^z + e^(-z))
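A single neuron with the logistic transfer function can be sketched in a few lines. This helper is our own illustration, not code from the notes:

```python
import math

def neuron_output(inputs, weights, bias):
    """Logistic transfer: 1 / (1 + exp(-(sum(w_i*y_i) + w0)))."""
    x = sum(w*y for w, y in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-x))

# with input pattern (0, 1), weights (0.6, 0.5) and zero bias, the weighted
# sum is 0.5 and the logistic output is about 0.6225
out = neuron_output([0.0, 1.0], [0.6, 0.5], 0.0)
print(round(out, 4))
```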

Artificial neural networks (cont'd)

• The logistic transfer function is bounded in the range [0,1]. The weights
wi, the threshold w0 and σ of different neurons can be interpreted
as parameters of the selected network configuration.
• It is often recommended that the inputs to an ANN be
normalized before training.
• One of the most popular methods for determining the weights of
ANN models is the so-called back propagation method.

In the GFMFS:
A single hidden layer is used. If l is the total number of neurons in
the input layer and m is the total number of neurons in the hidden
layer, then the total number of weights to be estimated for the neural
network model is (l+1)m + (m+1).
The simplex method is used for automatic optimisation of weights in the
neural network model.

Artificial neural networks (cont'd)

In the 3 - 2 - 1 architecture above, the outputs of the two hidden nodes
are computed using a nonlinear transfer function, the logistic function
(a form of sigmoid function), as:

h1_out = f( SUM_{i=1..3} w1i Xi + w10 ) = 1 / [1 + exp(-(SUM_{i=1..3} w1i Xi + w10))]

h2_out = f( SUM_{i=1..3} w2i Xi + w20 ) = 1 / [1 + exp(-(SUM_{i=1..3} w2i Xi + w20))]

Artificial neural networks (cont'd)

The output node in the single output layer is computed using the
nonlinear logistic transfer function as:

out = f( SUM_{i=1..2} wΦi hi + wΦ0 ) = 1 / [1 + exp(-(SUM_{i=1..2} wΦi hi + wΦ0))]

Given an ANN structure, the back propagation method starts out
with random draws on the weights near zero. Then the initial
observation of the training data set is run through the network.
Using the error, the connection and bias weights are "updated" by
a fraction of the output error.

It is necessary to avoid overtraining of ANN models.

Artificial neural networks – Back propagation

Example: Train the following 3-layer network (two inputs, two hidden
neurons, one output) by the BP method. Assume the learning rate to be
0.25. The input pattern is (0, 1) and the target output is 0.

Solution: To begin with, the weight values are set to random values:
0.6, 0.4, 0.5, -0.2 for weight matrix 1, and 0.3 and 0.8 for weight
matrix 2. The input signals are set to the neurons I1 and I2 of the input
layer, which just pass the signals to the hidden layer.

Now consider the working of the hidden layer.

Input of hidden neuron H1:  0 * 0.6 + 1 * 0.5 = 0.5
Output of hidden neuron H1: 1 / [1 + exp(-0.5)] = 0.6225
Input of hidden neuron H2:  0 * 0.4 + 1 * (-0.2) = -0.2
Output of hidden neuron H2: 1 / [1 + exp(+0.2)] = 0.4502

The signal now reaches the output layer.

Input of output neuron O1:  0.6225 * 0.3 + 0.4502 * 0.8 = 0.5469
Output of output neuron O1: 1 / [1 + exp(-0.5469)] = 0.6334

Since the target output is 0,
Error at the output neuron = 0 - 0.6334 = -0.6334

Solution (cont'd):

To modify the weights, we first calculate

δ = Error * (∂Out/∂x) = (Target - Out) * Out * (1.0 - Out)

Change in weight:
ΔW_pq,k = η δ_q,k Out_p,j
and W_pq,k(n+1) = W_pq,k(n) + ΔW_pq,k

First, change the weights in weight matrix 2:

δ = (-0.6334) * 0.6334 * (1 - 0.6334) = -0.1470
ΔW11,3 = 0.25 * (-0.1470) * 0.6225 = -0.0229
ΔW21,3 = 0.25 * (-0.1470) * 0.4502 = -0.0165

New value of weight 1: 0.3 + (-0.0229) = 0.2771
New value of weight 2: 0.8 + (-0.0165) = 0.7835

Solution (cont'd):

Now consider the weights in weight matrix 1:

Change in weight 1: 0.25 * (-0.6334) * 0 * 0.6225 * (1 - 0.6225) = 0
Change in weight 2: 0.25 * (-0.6334) * 0 * 0.4502 * (1 - 0.4502) = 0
Change in weight 3: 0.25 * (-0.6334) * 1 * 0.6225 * (1 - 0.6225) = -0.0372
Change in weight 4: 0.25 * (-0.6334) * 1 * 0.4502 * (1 - 0.4502) = -0.0392

Hence,
New value of weight 1: 0.6 + 0 = 0.6 (not changed)
New value of weight 2: 0.4 + 0 = 0.4 (not changed)
New value of weight 3: 0.5 + (-0.0372) = 0.4628
New value of weight 4: -0.2 + (-0.0392) = -0.2392
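The worked example can be reproduced in a few lines of Python. This is our own sketch of the slide's forward pass and delta-rule update for weight matrix 2, so the numbers match up to the slide's rounding:

```python
import math

logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
eta = 0.25
x = [0.0, 1.0]                      # input pattern, target output 0
w1 = [[0.6, 0.5], [0.4, -0.2]]      # [weights into H1], [weights into H2]
w2 = [0.3, 0.8]                     # hidden -> output weights

# forward pass
h1 = logistic(x[0]*w1[0][0] + x[1]*w1[0][1])      # ~0.6225
h2 = logistic(x[0]*w1[1][0] + x[1]*w1[1][1])      # ~0.4502
out = logistic(h1*w2[0] + h2*w2[1])               # ~0.6334
error = 0.0 - out                                 # ~-0.6334

# delta rule for weight matrix 2
delta = error * out * (1.0 - out)                 # ~-0.1470
w2_new = [w2[0] + eta*delta*h1, w2[1] + eta*delta*h2]
print(round(out, 4), [round(w, 4) for w in w2_new])
```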
[Table fragment — example ANN inputs for the Eastern Nile: average total
monthly precipitation; average daily maximum temperature.]
1.3 Simplified systems diagram for a River basin

[Figure: schematic of a river basin system. Precipitation, runoff and other
sources feed river reaches and reservoirs, which serve instream uses and
downstream requirements. A distribution system supplies municipal &
industrial and agricultural demand sites (with treatment, consumptive use
and evapotranspiration); groundwater pumping draws on the aquifer, and
drainage collection, treatment & disposal return flows to the system.]
3.1 Time Series Analysis
(structure, spectral, autoregressive, moving average, ARMA)

Harmonic representation of seasonal parameters

If Mt is any periodic parameter, its representation by harmonics is
given as:

Mt = A0 + SUM_{j=1..m} [ Aj cos(2πjt/ω) + Bj sin(2πjt/ω) ]

A0 = (1/ω) SUM_{t=1..ω} Mt
Aj = (2/ω) SUM_{t=1..ω} Mt cos(2πjt/ω)
Bj = (2/ω) SUM_{t=1..ω} Mt sin(2πjt/ω)

where ω is the number of seasons in a year.

The maximum number of harmonics which can be fitted to the data is ω/2.
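The coefficient formulas above can be sketched directly for ω = 12 monthly means (the data here are hypothetical, purely for illustration):

```python
import math

M = [10, 12, 18, 25, 34, 40, 42, 38, 30, 22, 15, 11]   # hypothetical monthly means
omega = len(M)                                          # 12 seasons in a year

A0 = sum(M) / omega
def coeffs(j):
    Aj = (2/omega)*sum(M[t-1]*math.cos(2*math.pi*j*t/omega) for t in range(1, omega+1))
    Bj = (2/omega)*sum(M[t-1]*math.sin(2*math.pi*j*t/omega) for t in range(1, omega+1))
    return Aj, Bj

A1, B1 = coeffs(1)
var_h1 = (A1**2 + B1**2) / 2                 # variance explained by harmonic 1
total_var = sum((m - A0)**2 for m in M) / omega
print(round(A0, 2), round(var_h1/total_var, 2))
```

The ratio printed at the end is the fraction of the seasonal variance explained by the first harmonic, anticipating the variance-ratio criterion discussed next.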

Harmonic representation of seasonal parameters (cont'd)

For each harmonic, compute the variance explained by it:

var(hj) = Cj²/2 = (Aj² + Bj²)/2

Then compute the ratio of the variance explained by the jth harmonic to the
original variance.
5.1 Optimization techniques

Direct Search methods:

Simplex/ Nelder and Mead method


Hooke and Jeeves method
Rosenbrock method
Particle swarm optimization
Evolutionary algorithms
etc

Simplex method:

The simplex represents a geometric figure formed by a set of


n+1 points in n-dimensional space. Thus in two dimensions the
simplex is a triangle and in three dimensions it is a tetrahedron.

The basic idea is to compare the


values of the objective function at
the n + 1 vertices of a general
simplex and move this simplex
gradually towards the optimum
point during the iterative process.
Simplex method:

B – Best point        G – Good point
W – Worst point       M – Mid point
R – Reflected point   E – Expansion point

M = (B + G)/2

R = M + (M – W) = 2M – W        (Reflection)

E = R + (R – M) = 2R – M = 3M – 2W        (Expansion)
Simplex method:

C – Contraction point
C1 = (W + M)/2   or   C2 = (M + R)/2        (Contraction)

S – Shrinkage point
S = (W + B)/2        (Shrinkage)

Simplex method:

Table: Logical decisions for the Nelder Mead algorithm


Simplex method:

Example: Use the Nelder-Mead algorithm to find the minimum of
f(x,y) = x² - 4x + y² - y - xy. Start with the vertices

V1 = (0,0), V2 = (1.2,0) and V3 = (0,0.8)

The function f(x,y) takes on the values
f(0,0) = 0,  f(1.2,0) = -3.36,  f(0,0.8) = -0.16

Thus B = (1.2,0), G = (0,0.8), W = (0,0).
The vertex W will be replaced. The points M and R are

M = (0.6,0.4)    R = (2*0.6 - 0, 2*0.4 - 0) = (1.2,0.8)

Then f(R) = f(1.2,0.8) = -4.48. Since f(R) < f(B), expansion to E is
considered.
Simplex method:

E = 2R - M = 2(1.2,0.8) - (0.6,0.4) = (1.8,1.2)
f(E) = f(1.8,1.2) = -5.88 < f(B)

The new triangle has vertices at
V1 = (1.8,1.2), V2 = (1.2,0) and V3 = (0,0.8)

The process continues and generates a sequence of
triangles that converges on the solution point (3,2).
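The reflection and expansion steps of the example can be scripted directly, and SciPy's built-in Nelder-Mead confirms convergence to (3, 2) — a sketch, not part of the original notes:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda p: p[0]**2 - 4*p[0] + p[1]**2 - p[1] - p[0]*p[1]

# order the starting vertices: best, good, worst
V = sorted([np.array([0.0, 0.0]), np.array([1.2, 0.0]), np.array([0.0, 0.8])], key=f)
B, G, W = V

M = (B + G) / 2          # midpoint of the best side -> (0.6, 0.4)
R = 2*M - W              # reflection -> (1.2, 0.8), f(R) = -4.48
E = 2*R - M              # expansion -> (1.8, 1.2), f(E) = -5.88

res = minimize(f, [0.0, 0.0], method="Nelder-Mead")
print(R, E, np.round(res.x, 3))   # full run converges near (3, 2)
```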
Simplex method:

Figure: The sequence of triangles converging to the point


(3,2) in the Nelder Mead method

Rosenbrock method:

A further development of Hooke and Jeeves method.


The coordinate system is rotated in each stage of
minimization.
The first axis is oriented towards the locally estimated
direction of the valley and all the other axes are made
orthogonal and normal to the first one.

General procedure: For minimization of the case of a


function of n variables.
First we select a set of initial step lengths l1, l2, ..., ln
to be taken along the search directions s1, s2, ..., sn
respectively.

Rosenbrock method:
General procedure: The procedure for the jth stage is:

1. The set of directions s1(j), s2(j), ..., sn(j) and the base point are known
at the beginning of the jth stage.
A step of length l1 is taken in the direction s1(j) from the known
base point. If the step is successful, l1 is multiplied by a, the
new point is retained, and a success is recorded. If the step is
a failure, l1 is multiplied by -b and a failure is recorded. The
values of a and b are usually 3 and 0.5 respectively.

2. Continue the search sequentially along the directions
s1(j), s2(j), ..., sn(j), s1(j), s2(j), ..., until at least one step has been
successful and one step has failed in each of the n directions.
Rosenbrock method:
3. Compute the new set of directions s1(j+1), s2(j+1), ..., sn(j+1) for
use in the next, (j+1)th, stage of minimization.
First compute a set of independent directions p1, p2, ..., pn as

p = [p1, p2, ..., pn]
                                 | Λ1              |
                                 | Λ2  Λ2          |
  = [s1(j), s2(j), ..., sn(j)] × | Λ3  Λ3  Λ3      |
                                 | ...             |
                                 | Λn  Λn  Λn  Λn  |

where Λk is the algebraic sum of all the successful step
lengths in the corresponding direction sk(j).
p1 represents the vector joining the starting point and the final
point after the sequence of searches in the jth stage.
Rosenbrock method:
These linearly independent vectors p1, p2, ..., pn can be
used to generate a new set of orthogonal directions by means of the
Gram-Schmidt orthogonalization procedure:
(a) compute the matrix p
(b) set Q1 = p1 and s1(j+1) = Q1/||Q1||
(c) compute

Qi+1 = pi+1 - SUM_{m=1..i} (pi+1' sm(j+1)) sm(j+1),    i = 1, 2, ..., n-1

si(j+1) = Qi/||Qi||,    i = 1, 2, ..., n

4. Take the best point observed in the present (jth) stage
as the base point for the next stage, set the new iteration
number as j+1 and repeat the procedure from step 1.
5. Assume convergence when |Λi| ≤ ε is satisfied for all i,
where ε is a specified small number.
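Step 3's Gram-Schmidt orthogonalization is compact in NumPy. The helper below is our own sketch; note that the worked example that follows carries rounded intermediate values, so exact arithmetic differs slightly in the third decimal:

```python
import numpy as np

def gram_schmidt(P):
    """Orthonormalize the columns of P (classical Gram-Schmidt)."""
    S = []
    for p in P.T:
        q = p - sum((p @ s) * s for s in S)   # strip components along earlier directions
        S.append(q / np.linalg.norm(q))
    return np.column_stack(S)

# p1 = (-0.4, 0.8)', p2 = (0, 0.8)' as in stage 1 of the example
P = np.array([[-0.4, 0.0],
              [0.8, 0.8]])
S = gram_schmidt(P)
print(np.round(S, 3))       # columns s1, s2 are orthonormal
```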
Rosenbrock method:

Example:
Minimize f(x1, x2) = x1 - x2 + 2x1² + 2x1x2 + x2² starting from
the base point xB = (0, 0)'. Take the initial step lengths as l1 = l2 = 0.8,
the minimum permissible step length ε = 0.15, a = 3 and b = 0.5.

Solution:
Stage or iteration 1
1. The search directions are taken as s1(1) = (1, 0)' and s2(1) = (0, 1)'.
Solution (cont'd):
Stage or iteration 1
By taking a step of length l1 in the direction s1(1), we obtain

x = xB + l1 s1(1) = (0, 0)' + 0.8 (1, 0)' = (0.8, 0)'

As fB = f(xB) = 0 and f = f(x) = 2.08 > fB, this step is a failure
and the new l1 is taken as -(0.5)(0.8) = -0.4.

2. Next we take a step of length l2 along s2(1) from xB. Thus

x = xB + l2 s2(1) = (0, 0)' + 0.8 (0, 1)' = (0, 0.8)'

and f = -0.16 < fB. This is a success and hence the new value of
l2 = (3)(0.8) = 2.4 and the new base point is xB = (0, 0.8)'.
Solution (cont'd):
Next we take a step of length l1 = -0.4 along s1(1). Thus

x = xB + l1 s1(1) = (0, 0.8)' - 0.4 (1, 0)' = (-0.4, 0.8)'

and f = f(x) = -0.88 < fB. Hence this step is a success and we thus
obtain the new base point as xB = (-0.4, 0.8)'.
From xB we take a step of length l2 = 2.4 along s2(1) and obtain

x = xB + l2 s2(1) = (-0.4, 0.8)' + 2.4 (0, 1)' = (-0.4, 3.2)'

and f = f(x) = +4.4 > fB, and hence this is a failure.
Now we have observed at least one success and one failure in
both directions. This completes one iteration/stage.
Solution (cont'd):
3. We now calculate the new set of search directions s1(2) and s2(2):

p = [p1 p2] = [s1(1) s2(1)] | Λ1      |
                            | Λ2  Λ2  |

where Λ1 = -0.4 and Λ2 = 0.8. Then

p = | 1  0 | | -0.4  0   |  =  | -0.4  0   |
    | 0  1 | |  0.8  0.8 |     |  0.8  0.8 |

p1 = (-0.4, 0.8)'   and   p2 = (0, 0.8)'

The search directions s1(2) and s2(2) are given by

s1(2) = p1/||p1|| = (1/(0.4² + 0.8²)^(1/2)) (-0.4, 0.8)' = (-0.448, 0.896)'
Solution (cont'd):

s2(2) = Q2/||Q2||

where

Q2 = p2 - (p2' s1(2)) s1(2) = (0, 0.8)' - [(0)(-0.448) + (0.8)(0.896)] (-0.448, 0.896)'
   = (0.321, 0.153)'

s2(2) = (1/(0.321² + 0.153²)^(1/2)) (0.321, 0.153)' = (0.902, 0.430)'

4. Take the new base point as xB = (-0.4, 0.8)' with fB = -0.88,
set the iteration number as j = 2 and go to step 1.
Solution: Stage or iteration 2
(The original step lengths have to be used now.)

1. A step of length l1 = 0.8 is taken from xB along s1(2) to obtain

x = xB + l1 s1(2) = (-0.4, 0.8)' + 0.8 (-0.448, 0.896)' = (-0.758, 1.516)'

and f = f(x) = -1.125. Since f < fB, this step is a success and the new
l1 is taken as (3)(0.8) = 2.4.

2. (i) Next we take xB = (-0.758, 1.516)', fB = -1.125, and take a step of
size l2 = 0.8 along s2(2). This gives

x = xB + l2 s2(2) = (-0.758, 1.516)' + 0.8 (0.902, 0.430)' = (-0.036, 1.860)'

and f = f(x) = 1.421. As f > fB, this is a failure and hence the
new value of l2 = (-0.5)(0.8) = -0.4.
(ii) Next we take a step of size l1 = 2.4 from xB along s1(2). This gives

x = xB + l1 s1(2) = (-0.758, 1.516)' + 2.4 (-0.448, 0.896)' = (-1.833, 3.666)'

and f = f(x) = 1.221. As f > fB, this is a failure and hence the new
value of l1 = (-0.5)(2.4) = -1.2. Since the previous search along
s2(2) was a failure, we take the new point as

x = xB + l2 s2(2) = (-0.758, 1.516)' - 0.4 (0.902, 0.430)' = (-1.140, 1.366)'

with f = f(x) = -1.156. Since f < fB, this step is a success and hence
we take the new base point as xB = (-1.140, 1.366)' with fB = -1.156.

In this stage we have observed at least one failure and one success in
each direction.
Solution (cont'd):
3. We now calculate the new set of search directions s1(3) and s2(3):

p = [p1 p2] = [s1(2) s2(2)] | Λ1      |
                            | Λ2  Λ2  |

where Λ1 = 0.8 and Λ2 = -0.4. Then

p = | -0.448  0.902 | |  0.8   0   |  =  | -0.7192  -0.3608 |
    |  0.896  0.430 | | -0.4  -0.4 |     |  0.5448  -0.1720 |

p1 = (-0.7192, 0.5448)'   and   p2 = (-0.3608, -0.1720)'

The search directions s1(3) and s2(3) are given by

s1(3) = p1/||p1|| = (1/((0.7192)² + (0.5448)²)^(1/2)) (-0.7192, 0.5448)' = (-0.797, 0.604)'
Solution (cont'd):

s2(3) = Q2/||Q2||

where

Q2 = p2 - (p2' s1(3)) s1(3)
   = (-0.3608, -0.1720)' - [(-0.3608)(-0.797) + (-0.1720)(0.604)] (-0.797, 0.604)'
   = (-0.2141, -0.2830)'

s2(3) = (1/((0.2141)² + (0.2830)²)^(1/2)) (-0.2141, -0.2830)' = (-0.604, -0.797)'

Thus we go to the next stage (j = 3) by taking the new base point as
xB = (-1.140, 1.366)' with fB = -1.156,

s1(3) = (-0.797, 0.604)'   and   s2(3) = (-0.604, -0.797)'
3.2 Disaggregation modeling
(time series analysis, disaggregation, Markov model)

Disaggregation modeling is a process by which time series are


generated dependent on a time series already available.

Statistical properties are preserved at both key and subseries levels,


and the relationships between the two levels are maintained.

Two basic forms: temporal domain and spatial domain.

Examples of temporal key/subseries levels:
Key            Subseries
Annual         Semi-annual
Semi-annual    Biweekly
Monthly        Biweekly
Annual         Monthly

[Figure: spatial disaggregation — flow at outlet O disaggregated to
substations A, B and C.]

Single Site Temporal Disaggregation Models

Only a single level of disaggregation is provided, but all forms can be
cascaded or staged into a multilayered approach.

Y = AX + Bε                    Basic model (Valencia and Schaake)
Y = AX + Bε + CZ               Extended model (Mejia and Rouselle)
Yτ = Aτ X + Bτ ε + Cτ Yτ-1     Condensed model (Lane)

Properties of disaggregation Models


•Preservation of expected values
•Preservation of additivity
•Preservation of covariances and variances
Parameter Estimation for the Basic Temporal Disaggregation Model

Â = S_YX S_XX^(-1)

B̂ B̂' = S_YY - S_YX S_XX^(-1) S_XY

Any solution for B which will produce BB' = D is a valid solution.
If B is assumed to be a lower triangular matrix, a unique solution can be
obtained by the square root method when D is a positive definite matrix,
or by a method proposed by Lane when D is at least a positive
semidefinite matrix.

S_XX = (1/(N-1)) SUM_{v=1..N} (Xv Xv')      S_YX = (1/(N-1)) SUM_{v=1..N} (Yv Xv')
Parameter Estimation for the Basic Temporal Disaggregation Model (cont'd)

If B is assumed to be a lower triangular matrix, and D is a positive
definite matrix, then the nonzero elements of B may be determined by

b_i1 = d_i1 / b_11,   for j = 1,  i = 1, ..., n   (so b_11 = (d_11)^(1/2))

b_jj = [ d_jj - SUM_{k=1..j-1} (b_jk)² ]^(1/2),   for j = 2, ..., n

b_ij = [ d_ij - SUM_{k=1..j-1} b_ik b_jk ] / b_jj,   for j = 2, ..., n-1,  i = j+1, ..., n

where b_ij are the elements of B, d_ij are the elements of D,
and n is the size of the matrices B and D.
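The square-root recursion above is exactly a Cholesky factorization, so it can be cross-checked against numpy.linalg.cholesky — a sketch using a 2×2 positive definite D (the values are the upper-left block of the example B̂B̂' matrix appearing later in the notes):

```python
import numpy as np

D = np.array([[64.29, 15.50],
              [15.50, 21.10]])

B = np.linalg.cholesky(D)          # lower triangular with B @ B.T = D
print(np.round(B, 3))              # b11 = sqrt(d11), b21 = d21/b11, ...
```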
Parameter Estimation for the Basic Temporal Disaggregation Model (cont'd)

If B is assumed to be a lower triangular matrix, and D is a positive
definite or a positive semidefinite matrix, then the elements of
B may be determined by

b_kj = 0   for all k < j

b_ki = 0   for all k >= i,   when  d_ii - SUM_{j<i} (b_ij)² = 0

and

b_ki = [ d_ki - SUM_{j<i} b_ij b_kj ] / [ d_ii - SUM_{j<i} (b_ij)² ]^(1/2)
           for all k >= i,   when  d_ii - SUM_{j<i} (b_ij)² > 0
Parameter Estimation for the Extended Temporal Disaggregation Model

Â = (S_YX - S_YZ S_ZZ^(-1) S_ZX) (S_XX - S_XZ S_ZZ^(-1) S_ZX)^(-1)

Ĉ = (S_YZ - Â S_XZ) S_ZZ^(-1)

B̂ B̂' = S_YY - Â S_XX Â' - Â S_XZ Ĉ' - Ĉ S_ZX Â' - Ĉ S_ZZ Ĉ'
or equivalently   B̂ B̂' = S_YY - Â S_XY - Ĉ S_ZY
Parameter Estimation for the Condensed Temporal Disaggregation Model

Â_τ = [S_YX(τ,τ) - S_YY(τ,τ-1) S_YY^(-1)(τ-1,τ-1) S_YX(τ-1,τ)]
      × [S_XX(τ,τ) - S_XY(τ,τ-1) S_YY^(-1)(τ-1,τ-1) S_YX(τ-1,τ)]^(-1)

Ĉ_τ = [S_YY(τ,τ-1) - Â_τ S_XY(τ,τ-1)] S_YY^(-1)(τ-1,τ-1)

B̂_τ B̂_τ' = S_YY(τ,τ) - Â_τ S_XY(τ,τ) - Ĉ_τ S_YY(τ-1,τ)
Parameter Estimation for the Spatial Disaggregation Model

The spatial disaggregation model has the same form as the
extended model, but it is applied in a spatial rather than a temporal
sense.

Â = [S_YX - S_YY(1) S_YY^(-1) S_XY'(1)] [S_XX - S_XY(1) S_YY^(-1) S_XY'(1)]^(-1)

Ĉ = [S_YY(1) - Â S_XY(1)] S_YY^(-1)

B̂ B̂' = S_YY - Â S_XY - Ĉ S_YY'(1)
Example of Disaggregation Modeling

Four time series of monthly streamflow will be analyzed. Annual
data are first generated for a key station. These data are then
disaggregated into annual data at three substations, and then the
annual data at the three substations are disaggregated into
monthly data.

[Figure: outlet O with substations A, B and C.]

Example of Disaggregation Modeling

Data used:
Annual 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940

A 183.1 234.4 251.2 156.2 160.4 176.6 278.5 345.7 321.6 248.8 219.7 201.1 215.9 213.6 186.1

B 158.1 220.3 233.6 134.7 134.8 152 240.2 303.7 304.5 233.2 207.3 174.3 192.3 183.2 153.5

C 126.1 184.6 227.1 131.7 132.1 108.5 188.1 264.6 275.5 223.5 207.1 142.3 190.5 170.3 110.6

Outlet 467.3 639.3 711.9 422.6 427.3 437.1 706.8 914 901.6 705.5 634.1 517.7 598.7 567.1 450.2

May

A 24.7 7.9 21.5 11.3 18.2 21.9 32.2 8.4 47.7 14.5 15.1 12 9.7 14.1 22.8

B 23.3 7.1 20.9 11.9 19.1 21.8 34 7.4 46.9 15.7 15.9 11.3 10.3 15 20.2

C 22.2 12 22.4 11.5 20.5 13.6 28.9 10.4 39.3 17.8 16.6 11.8 13.3 14.8 15.4

Outlet 70.2 27 64.8 34.7 57.8 57.3 95.1 26.2 133.9 48 47.6 35.1 33.3 43.9 58.4

June

A 30.3 19.1 21 15.3 36.9 26.1 32.4 18.1 48.2 15 34.9 22.4 34.7 29.2 33.4

B 29.2 22.4 22.1 16 34.5 24.3 34.4 20 47.6 16.8 38.1 21 33 30.1 30.3

C 19.8 25.4 22.5 15.4 36.7 24.6 35.9 19.5 29.2 19.7 42.7 21.4 28.9 26.7 20

Outlet 79.3 66.9 65.6 46.7 108.1 75 102.7 57.6 125 51.5 115.7 64.8 96.6 86 83.7
Example of Disaggregation Modeling (cont'd)

Soln: The data are assumed to be sufficiently close to normal;
otherwise a transformation is important. The annual values will be
generated using a lag-one linear autoregressive model:

yt = φ1 y(t-1) + σ (1 - φ1²)^(1/2) εt

where φ1 is the autoregressive parameter,
σ is the standard deviation,
yt is the annual value at time t minus the mean, and
εt is the normal (mean zero, variance one) random term.

Using the sample estimates φ1 = r = 0.568 and σ = s = 160.9, the AR(1)
model becomes

yt = 0.568 y(t-1) + 132.4 εt
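The fitted AR(1) model can be used to generate synthetic annual departures; the sketch below (not part of the notes) checks that a long synthetic record reproduces the target standard deviation and lag-one correlation:

```python
import numpy as np

rng = np.random.default_rng(42)    # fixed seed for reproducibility
phi, sigma = 0.568, 160.9
n = 5000

y = np.zeros(n)                    # departures from the annual mean
noise = sigma * np.sqrt(1 - phi**2) * rng.standard_normal(n)   # the 132.4*eps term
for t in range(1, n):
    y[t] = phi*y[t-1] + noise[t]

r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
print(round(y.std(), 1), round(r1, 3))   # should be near 160.9 and 0.568
```

Adding the key-station mean back to y gives the synthetic annual flows that are then disaggregated in the following steps.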
Example of Disaggregation Modeling (cont'd)

Soln:
The spatial disaggregation of the key station annual data into
annual data at the three substations is done using the Lane model.
Calculate the sample moments required for the parameter estimation:

        | 3101  3002  2701 |             | 1844  1609   861.5 |
S_YY =  | 3002  2977  2762 |   S_YY(1) = | 1967  1760  1038   |
        | 2701  2762  2877 |             | 2168  2037  1416   |

                            | 8803 |
S_XX = 25880       S_YX =   | 8741 |     S_XY(1) = [ 5980  5407  3316 ]
                            | 8340 |
Example of Disaggregation Modeling (cont'd)

Soln:
The spatial disaggregation parameters are now calculated as:

     | 0.3532 |
Â =  | 0.3372 |
     | 0.3096 |

     |  0.5112  -0.6510   0.03759 |            |  64.29   15.50  -79.79 |
Ĉ =  |  0.1707  -0.1734  -0.02162 |   B̂ B̂' =   |  15.50   21.10  -36.60 |
     | -0.6819   0.8244  -0.01597 |            | -79.79  -36.60  116.4  |

     |  8.018   0.      0. |
B̂ =  |  1.933   4.167   0. |
     | -9.951  -4.167   0. |
Soln (cont'd):
The condensed seasonal disaggregation model is used to
disaggregate the annual data at the three substations to monthly
data. The procedure is demonstrated for month 6.

             | 37.56  38.15  33.47 |               | 174.3  171.9  174.0 |
S_YX(6,6) =  | 91.86  96.61  95.29 |   S_XY(6,5) = | 175.0  173.7  182.2 |
             | 28.21  37.76  63.73 |               | 111.3  119.1  163.7 |

             | 67.49  67.13  48.38 |          | 3101  3002  2701 |
S_YY(6,5) =  | 62.46  63.01  47.61 |   S_XX = | 3002  2977  2762 |
             | 18.92  23.44  21.12 |          | 2701  2762  2877 |

             | 110.7  109.3  76.57 |               | 87.56  79.81  45.08 |
S_YY(5,5) =  | 109.3  109.4  77.23 |   S_YY(6,6) = | 79.81  76.27  47.87 |
             | 76.57  77.23  60.76 |               | 45.08  47.87  58.20 |
Soln (cont'd):
The parameter estimates are obtained as:

     | 0.2213   -0.4229    0.1768  |        |  1.416  -0.8238  0.2171 |
Â =  | 0.1226   -0.2549    0.1286  |   Ĉ =  | 0.5396  -0.1149  0.3167 |
     | 0.05735   0.07423  -0.02786 |        | -3.133   2.961   0.5494 |

         | 38.70  36.61  31.47 |        | 6.221  0.     0.    |
B̂ B̂' =  | 36.61  35.84  31.64 |   B̂ =  | 5.884  1.103  0.    |
         | 31.47  31.64  37.07 |        | 5.059  1.697  2.934 |

Once all parameters have been estimated, synthetic data may be
generated.