Part I: Full-Waveform Inversion
Christian Boehm
Inverse problem:
F (m) = d
m model parameters
d data
F forward model (e.g. elastic wave equation)
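In practice the data are noisy and F(m) = d cannot be matched exactly; a common way to pose the inverse problem, assumed here only as an illustration, is as a least-squares misfit minimization,
min_m j(m) := 1/2 ‖F(m) - d‖^2,
which connects the inverse problem to the optimization problems discussed below.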
1 Optimality Conditions
min_{x ∈ R^n} f(x)   s.t.   x ∈ X,
with
x: optimization variable,
f: R^n → R: cost function (objective),
X ⊆ R^n: admissible set.
[Figure: surface plot of a cost function f(x) with a local minimum and the global minimum marked.]
First-order expansion around a local minimum x̄ (for any direction d and small t > 0):
f(x̄ + t d) = f(x̄) + t ∇f(x̄)^T d + o(t).
Since f(x̄ + t d) ≥ f(x̄) for every d and all sufficiently small t, it follows that ∇f(x̄)^T d ≥ 0 for every d. Hence,
∇f(x̄) = 0.
gradient = direction of steepest ascent, i.e.,
negative gradient = direction of steepest descent.
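A minimal sketch in Python (toy quadratic objective and step size are my own choices, not from the slides) of following the negative gradient until the stationarity condition ∇f(x̄) = 0 is approximately reached:

import numpy as np

def f(x):
    # toy quadratic objective (hypothetical example)
    return 0.5 * x[0]**2 + 2.0 * x[1]**2

def grad_f(x):
    return np.array([x[0], 4.0 * x[1]])

x = np.array([3.0, -2.0])
step = 0.1
for _ in range(200):
    x = x - step * grad_f(x)   # step along the direction of steepest descent

print(x, np.linalg.norm(grad_f(x)))   # gradient norm is ~0 at the stationary point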
Unconstrained problem:
min_{x ∈ R^n} f(x)
If f(x) is convex:
Every stationary point is a global solution, i.e., there are no maxima or saddle points, and the first-order optimality condition is sufficient for x̄ to be a global solution.
If f(x) is non-convex:
Stationary points may also be maxima or saddle points, and many local solutions x̄ with different values f(x̄) may exist.
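A small illustration (1-D toy function of my own choice) of the non-convex case: gradient descent started from different points converges to different stationary points with different objective values:

import numpy as np

def f(x):
    # non-convex toy function with two local minima (hypothetical example)
    return x**4 - 3.0 * x**2 + x

def grad_f(x):
    return 4.0 * x**3 - 6.0 * x + 1.0

for x0 in (-2.0, 2.0):
    x = x0
    for _ in range(500):
        x -= 0.01 * grad_f(x)
    print(f"start {x0:+.1f} -> x_bar = {x:+.4f}, f(x_bar) = {f(x):+.4f}")

The two runs end in different local solutions, roughly x̄ ≈ -1.30 and x̄ ≈ 1.13, with different values of f.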
min_{x ∈ R^n} f(x)   s.t.   x ∈ X
Additional assumption: X is a convex set.
min_{x ∈ R^n} f(x)   s.t.   x ∈ X,
X := {x ∈ R^n : g_i(x) ≤ 0, i = 1, ..., m,   h_i(x) = 0, i = 1, ..., p}.
Example (full-waveform inversion): pointwise bounds on the velocity models,
v_p^min(ξ) ≤ v_p(ξ) ≤ v_p^max(ξ),   v_s^min(ξ) ≤ v_s(ξ) ≤ v_s^max(ξ).
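In a discretized FWI model these box constraints act componentwise on the velocity arrays. A minimal sketch (array names and bound values are my own assumptions) that projects a model back into the admissible set:

import numpy as np

# discretized P- and S-wave velocity models (hypothetical values, km/s)
vp = np.array([5.2, 6.1, 8.4, 3.9])
vs = np.array([3.1, 3.6, 4.9, 2.0])

# pointwise lower/upper bounds defining the admissible set
vp_min, vp_max = 4.0, 8.0
vs_min, vs_max = 2.5, 4.5

# projection onto the box constraints = componentwise clipping
vp_feasible = np.clip(vp, vp_min, vp_max)
vs_feasible = np.clip(vs, vs_min, vs_max)

print(vp_feasible)   # values outside [4.0, 8.0] are clipped to the bounds
print(vs_feasible)   # values outside [2.5, 4.5] are clipped to the bounds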
[Figure: contour plots of f over (x1, x2) with an equality-constrained feasible set X; the gradients ∇h(x) and -∇f(x) are drawn at several feasible points x and at the solution x̄.]
Observation: at the solution x̄, -∇f(x̄) is a multiple of the constraint gradient ∇h(x̄), and
h(x̄) = 0.
Equality Constraints
Assumptions:
x̄ is a local solution of the equality-constrained problem,
constraint qualification: ∇h_1(x̄), ..., ∇h_p(x̄) are linearly independent.
Then there exists µ̄ ∈ R^p such that
∇f(x̄) + ∇h(x̄)µ̄ = 0,
h(x̄) = 0.
Equivalently, with the Lagrangian L(x, µ) := f(x) + µ^T h(x):
∇_x L(x̄, µ̄) = 0,
∇_µ L(x̄, µ̄) = 0.
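A numerical sanity check (the specific f and h are toy choices of mine, not from the slides), using NumPy/SciPy: solve the system ∇_x L = 0, ∇_µ L = 0 for a small equality-constrained problem and observe that -∇f(x̄) is a multiple of ∇h(x̄):

import numpy as np
from scipy.optimize import fsolve

# toy problem: min x1 + x2  s.t.  h(x) = x1^2 + x2^2 - 2 = 0
def grad_f(x):
    return np.array([1.0, 1.0])

def h(x):
    return x[0]**2 + x[1]**2 - 2.0

def grad_h(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def lagrange_system(z):
    x, mu = z[:2], z[2]
    # stationarity of L(x, mu) = f(x) + mu * h(x) in x, plus feasibility
    return np.concatenate([grad_f(x) + mu * grad_h(x), [h(x)]])

sol = fsolve(lagrange_system, np.array([-1.2, -0.8, 1.0]))
x_bar, mu_bar = sol[:2], sol[2]
print(x_bar, mu_bar)   # expected: x_bar ~ (-1, -1), mu_bar ~ 0.5

At the computed solution, -∇f(x̄) = µ̄ ∇h(x̄), which is exactly the geometric picture from the figure above.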
Example (failure of the constraint qualification):
x̄ = (0, -1)^T,   ∇f(x̄) = (-2, 0)^T,   ∇h(x̄) = (0, 0)^T.
Since ∇h(x̄) = 0, the constraint qualification is violated, and no multiplier µ̄ with ∇f(x̄) + ∇h(x̄)µ̄ = 0 exists because ∇f(x̄) ≠ 0.
Optimality conditions:
∇f(x̄) + ∇h(x̄)µ̄ = 0,
h(x̄) = 0.
In the FWI setting (reduced formulation) the stationarity condition reads 0 = ∇j(m) = ∇_m L(u(m), m, u†), with the forward wavefield u(m) and the adjoint wavefield u†.
Example:
X = {x ∈ R^2 : ‖x‖ = 2,  -x_2 ≤ 0}
[Figure: contour plots of f over (x1, x2) with the feasible set X (upper half of the circle of radius 2); the gradients ∇g(x), ∇h(x) and -∇f(x) are drawn at several feasible points and at the solution x̄.]
Karush-Kuhn-Tucker conditions:
There exist (x̄, λ̄, µ̄) such that
∇f(x̄) + ∇g(x̄)λ̄ + ∇h(x̄)µ̄ = 0,
h(x̄) = 0,
g(x̄) ≤ 0,   λ̄ ≥ 0,   λ̄_i g_i(x̄) = 0,  i = 1, ..., m.
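A worked check of these conditions for the example above, with a hypothetical objective f(x) = x1 + x2 (the slides do not specify f) and the norm constraint rewritten as h(x) = x1^2 + x2^2 - 4 = 0; the candidate point x̄ = (-2, 0) and the multipliers are computed by hand and then verified numerically:

import numpy as np

# constraints from the example: h(x) = x1^2 + x2^2 - 4 = 0,  g(x) = -x2 <= 0
def grad_f(x):            # f(x) = x1 + x2 (assumed objective)
    return np.array([1.0, 1.0])

def g(x):
    return -x[1]

def grad_g(x):
    return np.array([0.0, -1.0])

def h(x):
    return x[0]**2 + x[1]**2 - 4.0

def grad_h(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

x_bar = np.array([-2.0, 0.0])    # candidate solution
lam, mu = 1.0, 0.25              # multipliers found by hand

print(grad_f(x_bar) + lam * grad_g(x_bar) + mu * grad_h(x_bar))  # stationarity: ~[0, 0]
print(h(x_bar), g(x_bar))        # feasibility: 0 and 0 (inequality is active)
print(lam >= 0, lam * g(x_bar))  # dual feasibility and complementarity: True, 0.0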
min_{x ∈ R^n} f(x)   s.t.   x ∈ X
Special case (box constraints):
min_{x ∈ R^n} f(x)   s.t.   x_l ≤ x ≤ x_u
Then the projection onto X is given componentwise by
P_X(x)_i = x_i^l   if x_i ≤ x_i^l,
P_X(x)_i = x_i     if x_i^l < x_i < x_i^u,
P_X(x)_i = x_i^u   if x_i ≥ x_i^u.
Optimality conditions:
x̄ ∈ X and, for each component i,
∇f(x̄)_i ≥ 0  if x̄_i = x_i^l,
∇f(x̄)_i = 0  if x_i^l < x̄_i < x_i^u,
∇f(x̄)_i ≤ 0  if x̄_i = x_i^u.
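A minimal sketch (toy objective and bounds of my own choice) of the projection P_X for box constraints and of checking optimality via the projected gradient:

import numpy as np

x_l = np.array([0.0, 0.0, 0.0])   # lower bounds
x_u = np.array([1.0, 1.0, 1.0])   # upper bounds

def project(x):
    # componentwise projection P_X onto the box [x_l, x_u]
    return np.clip(x, x_l, x_u)

def grad_f(x):
    # gradient of the toy objective f(x) = 0.5 * ||x - c||^2, with c outside the box
    c = np.array([2.0, 0.5, -1.0])
    return x - c

# projected-gradient iteration: x_{k+1} = P_X(x_k - step * grad f(x_k))
x = np.array([0.5, 0.5, 0.5])
for _ in range(100):
    x = project(x - 0.5 * grad_f(x))

print(x)                            # expected: approximately (1.0, 0.5, 0.0)
print(project(x - grad_f(x)) - x)   # ~0 exactly when the optimality conditions above hold

At the first and third components the iterate sits on a bound with the correct sign of the gradient, and at the interior second component the gradient vanishes, matching the componentwise conditions.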
Further topics:
constraint qualifications
non-smooth functions, i.e., when derivatives are not available
optimization problems in function space
globalization strategies (e.g. line search or trust-region methods), see the sketch below
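As an illustration of a globalization strategy, a minimal backtracking (Armijo) line-search sketch (toy objective and parameter values are my own assumptions):

import numpy as np

def f(x):
    return x[0]**2 + 10.0 * x[1]**2          # toy objective

def grad_f(x):
    return np.array([2.0 * x[0], 20.0 * x[1]])

def armijo_step(x, d, c1=1e-4, shrink=0.5, t=1.0):
    # backtrack until the sufficient-decrease (Armijo) condition holds
    while f(x + t * d) > f(x) + c1 * t * grad_f(x) @ d:
        t *= shrink
    return t

x = np.array([5.0, 1.0])
for _ in range(200):
    d = -grad_f(x)                  # steepest-descent direction
    x = x + armijo_step(x, d) * d

print(x, f(x))                      # close to the minimizer (0, 0) with f ~ 0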