Inverse Problems in Exploration Seismology
M. D. Sacchi
Department of Physics
Institute for Geophysical Research
University of Alberta
Contact:
M.D.Sacchi
sacchi@phys.ualberta.ca
Notes:
www-geo.phys.ualberta.ca/~sacchi/slides.pdf
• Inverse Problems in Geophysics
• Reflection Seismology
• Introduction to Inverse Problems
• Inverse Problems in Reflection Seismology
Geophysics
• In Geophysics, old and new physical theories
are used to understand processes that occur in
the Earth’s interior (e.g., mantle convection,
the geodynamo, fluid flow in porous media) and
to estimate physical properties that cannot be
measured in situ (e.g., density, velocity,
conductivity, porosity)
What makes Geophysics different from other
geosciences?
How do we convert indirect measurements
into material properties?
• Brute force approach
• Inverse Theory
• A combination of Signal Processing and
Inversion Techniques
Another aspect that makes Geophysics unique is
the problem of non-uniqueness
Toy Problem:
m1 + m2 = d
Dealing with non-uniqueness
• Physical constraints
  Velocities are positive
  Causality
• Non-informative priors
  Bayesian inference, MaxEnt priors
• Results from lab experiments
• Common sense = Talk to the geologists; they can
  understand the complexity involved in defining
  an Earth model.
Reflection Seismology involves solving problems
in the following research areas:
Wave Propagation Phenomena: we deal
with waves
Digital Signal Processing (DSP): large
amounts of data in digital format, Fast
Transforms are needed to map data to
different domains
Computational Physics/Applied Math:
large systems of equations, FDM, FEM,
Monte Carlo methods, etc
High Performance Computing (HPC):
large data sets (>Terabytes), large systems of
equations need to be solved again and again
and again
Inverse Problems: to estimate an Earth
model
Statistics: source estimation problems, to
deal with signals in low SNR environments,
risk analysis, etc
The Seismic Experiment
[Figure: a source S and receivers R along the surface coordinate x, above a layered velocity model v(z) (layers v1, v2, v3, v4) in depth z.]
The Seismic Experiment
[Figure: a source S and a receiver R over a two-layer model (v1, v2); the recorded section in x is interpreted in terms of the subsurface.]
Seismic Data Processing
Correct irregularities in source-receiver
positioning
Compensate for amplitude attenuation
Filter deterministic and stochastic noise
Guarantee source-receiver consistency
Reduce the data volume/improve the SNR
Enhance resolution
Provide an “interpretable” image of the
subsurface
Reduce the data volume/improve the SNR
Data(r, s, t) → Image(x, z)
Forward and Inverse Problems
Forward Problem:
Fm = d
Inverse Problem: Given F and d, we want to
estimate the distribution of physical properties m.
In other words:
1 - Given the data
2 - Given the physical equations
3 - Given details about the experiment →
What is the distribution of physical properties?
In general, the forward and inverse problems can
be indicated as follows:

F : M → D

F^{-1} : D → M
Linear and non-linear problems
The function F can be linear or non-linear.
If F is linear, then given m1 ∈ M and m2 ∈ M
and scalars α and β,

F(α m1 + β m2) = α F(m1) + β F(m2).
Fredholm Integral equations of the first
kind
d_j = ∫_a^b g_j(x) m(x) dx ,   j = 1, 2, 3, …, N

N: number of observations

Example (convolution):

s(t_j) = ∫ r(τ) w(t_j − τ) dτ
Questions:
Is there a solution?
How do we construct the solution?
Is the solution unique?
Can we characterize the degree of
non-uniqueness?
To answer the above questions, we first need to
consider what kind of data we have at hand:
a- Infinite amount of accurate data
b- Finite number of accurate data
c- Finite number of inaccurate data (the real
life scenario!)
Linear Inverse Problems: The Discrete case
We will consider the discrete linear inverse
problem with accurate data. Let us assume that
we have a finite amount of data d_j, j = 1, …, N;
the unknown model after discretization using
layers or cells is given by the vector m with
elements m_i, i = 1, …, M. The data as a function
of the model parameters are given by

d = Gm
Minimum norm solution
In this case we will try to retrieve a model with
minimum norm. The problem is posed as follows:
Find model m̂ that minimizes the following cost
function
J = mT m (1)
d = Gm (2)
J 0 = mT m + λT (d − Gm) (3)
with one Lagrange multiplier λ_i per data constraint, i = 1, …, N.
Setting the derivative of J 0 with respect to the
unknowns m and λ to zero gives rise to the
following system of equations:
2m − G^T λ = 0 (4)
Gm − d = 0. (5)
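Eliminating λ from equations (4)-(5) gives the minimum norm solution m̂ = G^T (G G^T)^{-1} d. A NumPy sketch on a small, hypothetical underdetermined system (the G and d below are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical underdetermined system: 2 equations, 4 unknowns
G = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 1.]])
d = np.array([1., 2.])

# Minimum norm solution: m_hat = G^T (G G^T)^{-1} d
m_hat = G.T @ np.linalg.solve(G @ G.T, d)

# The data are fit exactly; among all exact-fit models this is the
# one with the smallest Euclidean norm (it equals pinv(G) @ d).
assert np.allclose(G @ m_hat, d)
```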
Example
Suppose that some data dj is the result of the
following experiment
d_j = d(r_j) = ∫_0^L e^{−α(x−r_j)²} m(x) dx .   (7)

with observation points r_j = j·L/(N−1), j = 0, …, N−1.
After discretization this becomes

d = Gm   (9)
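A NumPy sketch of how this Gaussian kernel could be discretized into G (the grid sizes and α below are illustrative choices, not the values used for Figure 1):

```python
import numpy as np

# Discretize d_j = int_0^L exp(-alpha (x - r_j)^2) m(x) dx on a regular grid
N, M, L, alpha = 20, 100, 1.0, 40.0            # assumed experiment sizes
x = np.linspace(0.0, L, M)                     # model grid
dx = x[1] - x[0]
r = np.arange(N) * L / (N - 1)                 # observation points r_j
G = np.exp(-alpha * (r[:, None] - x[None, :])**2) * dx   # N x M kernel

m = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)  # a blocky test model
d = G @ m                                      # synthetic data d = G m
```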
In our example we will assume a true solution
given by the profile m portrayed in Figure 1a.
The corresponding data are displayed in Figure
1b. For this toy problem I have chosen the
following parameters:
Script 1: toy1.m
[Figure 1: (a) true model m versus x; (b) data d = Gm versus r_j; (c) minimum norm solution; (d) predicted data.]
The MATLAB solution for the minimum norm
solution is given by:
Script 2: mns.m
function [m_est,d_pred] = min_norm_sol(G,d);
%
% Given a discrete kernel G and the data d, computes the
% minimum norm solution of the inverse problem d = Gm.
% m_est:  estimated solution (minimum norm solution)
% d_pred: predicted data
m_est  = G'*((G*G')\d);
d_pred = G*m_est;
Let’s see how I made Fig. 1 using MATLAB
Script 3: makefigs.m
Weighted Minimum norm solution
As we have observed the minimum norm solution
is too oscillatory. To alleviate this problem we
introduce a weighting function into the model
norm. Let us first define a weighted minimum
norm solution
m̂ = Q G^T (G Q G^T)^{-1} d   (11)

where Q = (W^T W)^{-1} and W is a derivative (roughening) operator.
In Figs. 2 and 3, I computed the minimum
weighted norm solution using W = D1 and W = D2,
that is, the first- and second-derivative
operators, respectively.
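A sketch of equation (11) in NumPy, with D1 built the same way as the MATLAB convmtx construction (the small system below is illustrative, not the slides' experiment):

```python
import numpy as np

def weighted_min_norm(G, d, W):
    """Minimum weighted norm solution m = Q G^T (G Q G^T)^-1 d, Q = (W^T W)^-1."""
    Q = np.linalg.inv(W.T @ W)
    return Q @ G.T @ np.linalg.solve(G @ Q @ G.T, d)

M = 30
D1 = np.eye(M) - np.diag(np.ones(M - 1), -1)            # first-derivative operator
G = np.vstack([np.ones(M), np.arange(M, dtype=float)])  # toy 2 x M kernel
d = np.array([3.0, 40.0])

m_hat = weighted_min_norm(G, d, D1)
assert np.allclose(G @ m_hat, d)   # the data are still fit exactly
```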
[Figure 2: (a) true model m; (b) data d = Gm; (c) minimum weighted norm model (D1); (d) predicted data.]
[Figure 3: (a) true model m; (b) data d = Gm; (c) minimum weighted norm model (D2); (d) predicted data.]
This is the MATLAB script that I used to
compute the solution with first-order
derivative smoothing:

Script 3: toy1.m
Similarly, you can use second-order derivatives:

Script 4: toy1.m

W = convmtx([1,-2,1],M);
D2 = W(1:M,1:M);
Q = inv(D2'*D2);
m_est = Q*G'*((G*Q*G')\d);
The derivative operators D1 and D2 behave like
high-pass filters. If you are not convinced,
compute the amplitude spectrum of a signal/model
before and after applying one of these operators.

Question: we want to find smooth solutions, so
why are we using high-pass operators?
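As a quick check that D1 is high-pass, here is its frequency response |1 − e^{−iω}| = 2|sin(ω/2)| evaluated numerically (a small NumPy sketch):

```python
import numpy as np

# Frequency response of the first-difference filter [1, -1]
w = np.linspace(0.0, np.pi, 256)
H = np.abs(1.0 - np.exp(-1j * w))

assert np.allclose(H, 2.0 * np.abs(np.sin(w / 2.0)))  # analytic form
assert H[0] < 1e-12            # DC (the smoothest component) is annihilated
assert np.all(np.diff(H) > 0)  # gain grows with frequency: a high-pass
```

The answer to the question: the weighting Q = (W^T W)^{-1} is the inverse of a high-pass, i.e. a low-pass (smoothing) operator, which is why a high-pass W yields smooth models.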
Linear Inverse Problems: The Discrete
case with non-accurate data
In this section we will explore the solution of
the discrete inverse problem in situations where
the data are contaminated with errors.
Attempting to exactly fit the data is not a good
idea. In fact, rather than attempting an exact
fit for each observation, we will try to fit the
data with a certain degree of tolerance.

In the previous section we worked with the
exact-data problem and defined the minimum norm
solution subject to the constraint Gm − d = 0.
With noisy data we only require

Gm − d ≈ 0
A model like the above can also be written as
Gm = d + e
The minimum norm solution in this case becomes
the solution of the following optimization
problem:
Minimize J = m^T m
Subject to ||Gm − d||_2^2 = ε
Please notice that in the previous equation we
have used the l2 norm as a measure of distance
for the errors in the data; we will see that this
also implies that the errors are considered to be
distributed according to the normal law
(Gaussian errors). Before continuing with the
analysis, we recall that
||e||22 = eT e .
The solution is now obtained by minimizing J′
with respect to the unknown m. This requires
some algebra, and I will give you the final
solution:

dJ′/dm = (G^T G + µI) m − G^T d = 0

⇒  m = (G^T G + µI)^{-1} G^T d .
A useful identity: (G^T G + µI)^{-1} G^T = G^T (G G^T + µI)^{-1} .
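The identity is easy to verify numerically; a quick NumPy sketch (the random G and µ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((5, 12))   # any rectangular matrix
mu = 0.3

# (G^T G + mu I)^{-1} G^T  ==  G^T (G G^T + mu I)^{-1}
lhs = np.linalg.solve(G.T @ G + mu * np.eye(12), G.T)
rhs = G.T @ np.linalg.inv(G @ G.T + mu * np.eye(5))
assert np.allclose(lhs, rhs)
```

The right-hand side inverts an N × N matrix instead of M × M, which matters when the model is much larger than the data.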
About µ
The importance of µ can be seen from the cost
function

J′ = ||Gm − d||_2^2 + µ m^T m ;

µ controls the balance between fitting the data
and keeping the model norm small.
The parameter µ receives different names
according to the scientific background of the user:
1. Statisticians: Hyper-parameter
2. Mathematicians: Regularization parameter,
Penalty parameter
3. Engineers: Damping term, Damping factor,
Stabilization parameter
4. Signal Processing: Ridge regression
parameter, Trade-off parameter
Trade-off diagrams
A trade-off diagram is a display of the model
norm versus the misfit for different values of µ.
Say we have computed a solution m(µ) for a value
µ; this gives the pair

Model norm: J(µ) = m(µ)^T m(µ)
Misfit: ||G m(µ) − d||_2^2
Example
We use the toy example given by Script 1 to
examine the influence of the trade-off parameter
on the solution of our inverse problem. But
first, I will add noise to the synthetic data
generated by the script (toy1.m). I basically
add a couple of lines to the code:
Script 5
The Least Squares Minimum Norm solution is
obtained with the following script
Script 6

% Damped LS Solution
%
I = eye(M);
m_est = (G'*G+mu*I)\(G'*d);
Inversion of the noisy data using the least
squares minimum norm solution for µ = 0.05 and
µ = 5 is portrayed in Figures 4 and 5,
respectively. Notice that a small value of µ
leads to data over-fitting. In other words, we
are attempting to reproduce the noisy
observations as if they were accurate
observations. This is not a good idea, since
over-fitting can force the creation of
non-physical features in the solution (highly
oscillatory solutions).
[Figure 4: µ = 0.05. (a) true model m; (b) observed data; (c) LS minimum norm model; (d) observed and predicted data.]
[Figure 5: µ = 5. (a) true model m; (b) observed data; (c) LS minimum norm model; (d) observed and predicted data.]
Script 7: trade.m
for k=1:Nmu;
  mu(k) = mu_min + (mu_max-mu_min)*(k-1)/(Nmu-1);
  mu(k) = 10^mu(k);
  m_est = G'*((G*G'+mu(k)*I)\d);  % Compute min norm LS solution
  dp = G*m_est;                   % Compute estimated data
  Misfit(k) = (dp-d)'*(dp-d);
  Model_Norm(k) = m_est'*m_est;
end;
% Fancy plot
figure(1); clf;
plot(Model_Norm,Misfit,'*'); hold on;
plot(Model_Norm,Misfit);
for k=1:Nmu
  say = strcat(' \mu=',num2str(mu(k)));
  text(Model_Norm(k),Misfit(k),say);
end;
xlabel('Model Norm');
ylabel('Misfit'); grid
[Trade-off diagram: misfit versus model norm for µ running from 0.5 (small misfit, large model norm: over-fitting) to 100 (large misfit, small model norm: under-fitting).]
Smoothing in the presence of noise
In this case we minimize
J′ = ||Gm − d||_2^2 + µ||Wm||_2^2, whose gradient
gives

dJ′/dm = (G^T G + µ W^T W) m − G^T d = 0 ,

or

m = (G^T G + µ W^T W)^{-1} G^T d
Script 8: mwnls.m
% Trade-off parameter
%
mu = 5.;
%
% First derivative operator
%
W = convmtx([1,-1],M);
W1 = W(1:M,1:M);
m_est = (G'*G+mu*W1'*W1)\(G'*d);
dp = G*m_est;
Model_Norm = m_est'*m_est;
Misfit = (d-dp)'*(d-dp);
[Figure: µ = 5 with first-derivative weighting. (a) true model m; (b) observed data; (c) LS minimum weighted norm model; (d) observed and predicted data.]
Edge Preserving Regularization (EPR)
Smoothing tends to blur edges. How can we
preserve edges in our inverted models?

Minimize

Φ(m) = Σ_i ln(1 + (D1 m / δ)_i²)
The EP solution involves an iterative
(reweighted least squares) solution of a system
of normal equations whose weights are recomputed
from the current model at each iteration.
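A sketch of one common way to set up that iteration (iteratively reweighted least squares); the half-quadratic weighting below is an assumption consistent with the Cauchy-type penalty Φ, not code from the slides:

```python
import numpy as np

def epr_irls(G, d, mu=0.01, delta=0.1, n_iter=10):
    """Edge-preserving regularization by IRLS: the derivative penalty is
    reweighted so that large jumps (edges) are penalized only weakly."""
    M = G.shape[1]
    D1 = np.eye(M) - np.diag(np.ones(M - 1), -1)   # first-derivative operator
    m = np.zeros(M)
    for _ in range(n_iter):
        q = 1.0 / (1.0 + (D1 @ m / delta) ** 2)    # small weight across an edge
        A = G.T @ G + mu * D1.T @ (q[:, None] * D1)
        m = np.linalg.solve(A, G.T @ d)
    return m
```

With G = I and a blocky d, the iteration keeps a step sharp where a fixed quadratic smoother would round it off.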
[Figure: (a) true phantom; (b) data d = Km.]
[Figure: reconstructions with (a) first-order smoothing and (b) second-order smoothing.]
Large sparse system of equations
Sparse matrices
A sparse matrix is a matrix where most elements
are equal to zero.
G = ⎛ 0  1   0  2 ⎞
    ⎝ 0  0  −1  0 ⎠
Matrix-vector multiplication in sparse
format
We would like to evaluate the following
matrix-vector multiplication:
d = Gm
fullmult.m
Let’s see how many operations have been used to
multiply G times m:

Number of sums = N*M
Number of products = N*M
If G is sparse and we use the sparse format
storage, the matrix-vector multiplication can be
written as:
sparsemult.m
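Since sparsemult.m is not reproduced here, the following Python sketch shows the idea using coordinate (COO) storage: only the nonzero entries are stored and touched.

```python
import numpy as np

def sparse_matvec(rows, cols, vals, m, n_rows):
    """d = G m with G stored in coordinate (COO) format:
    nonzero k lives at (rows[k], cols[k]) with value vals[k]."""
    d = np.zeros(n_rows)
    for r, c, v in zip(rows, cols, vals):
        d[r] += v * m[c]          # one multiply-add per nonzero only
    return d

# The 2x4 example matrix above: nonzeros (0,1)=1, (0,3)=2, (1,2)=-1
rows, cols, vals = [0, 0, 1], [1, 3, 2], [1.0, 2.0, -1.0]
m = np.array([1.0, 2.0, 3.0, 4.0])
d = sparse_matvec(rows, cols, vals, m, 2)
# d = [1*2 + 2*4, -1*3] = [10, -3]
```

The cost is proportional to the number of nonzeros, not to N*M.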
Conjugate Gradients
The CG method is used to find the minimum of
the system
J′ = ||Ax − y||²

where A is an N × M matrix. CG uses the
following iterative scheme: choose an initial
solution x0 and compute s0 = y − A x0,
r0 = p0 = A^T (y − A x0), and q0 = A p0; then
iterate (see testcg.m below).
Theoretically, the minimum is found after M
iterations (steps). In general, we will stop
after a number of iterations M′ < M and be happy
with an approximate solution. It is clear that
all the computational cost of minimizing J′
using CG is in the lines I marked ∗ and ∗∗.
How do we use the CG algorithm to minimize the
cost function J = ||Gm − d||² + µ||m||²? If we
choose A and y as follows

A = ⎛    G    ⎞   (15)
    ⎝ √µ I_M  ⎠

y = ⎛  d  ⎞   (16)
    ⎝ 0_M ⎠

then ||Am − y||² = J, and the CG scheme applies
directly.
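The equivalence is easy to check numerically; a NumPy sketch with a random, illustrative G and d:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((8, 5))
d = rng.standard_normal(8)
mu = 0.5

# Augmented system: minimizing ||A m - y||^2 is the damped LS problem
A = np.vstack([G, np.sqrt(mu) * np.eye(5)])
y = np.concatenate([d, np.zeros(5)])

m_aug = np.linalg.lstsq(A, y, rcond=None)[0]                 # LS on (A, y)
m_dls = np.linalg.solve(G.T @ G + mu * np.eye(5), G.T @ d)   # damped normal eqs
assert np.allclose(m_aug, m_dls)
```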
testcg.m

% Initialization
x = zeros(M,1);
s = y - A*x;
p = A'*s;
r = p;
q = A*p;
old = r'*r;
max_iter = 10;
% CG loop
for k = 1:max_iter
  alpha = (r'*r)/(q'*q);
  x = x + alpha*p;
  s = s - alpha*q;
  r = A'*s;        % (*)  matrix-vector product with A'
  new = r'*r;
  beta = new/old;
  old = new;
  p = r + beta*p;
  q = A*p;         % (**) matrix-vector product with A
end
An excellent tutorial for CG methods:
Jonathan Richard Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain.
Imaging vs. inversion
Imaging - Where?
Inversion - Where and what?
Imaging and Inversion using the distorted Born approximation

∇²u(x, s, ω) + (ω²/c(x)²) u(x, s, ω) = −δ(x − s) W(ω)

1/c(x)² = 1/c₀(x)² + f(x)
∇²u(x, s, ω) + (ω²/c₀(x)²) u(x, s, ω) = −δ(x − s) W(ω) − ω² f(x) u(x, s, ω)

∇²G(x, s, ω) + (ω²/c₀(x)²) G(x, s, ω) = −δ(x − s)

Lippmann-Schwinger equation:

u(r, s, ω) = W(ω) G(r, s, ω) + ω² ∫ G(r, x, ω) f(x) u(x, s, ω) d³x
u(r, s, ω) = W(ω) G(r, s, ω) + ω² ∫ G(r, x, ω) f(x) u(x, s, ω) d³x

In the (distorted) Born approximation the total field inside the integral is replaced by the background field, so the scattered field is

u_sc(r, s, ω) = ω² W(ω) ∫ G(r, x, ω) f(x) G(x, s, ω) d³x

or, in operator form,

u = B f
• c₀ = constant - First Born Approximation
• c₀ = c₀(x) - Distorted Born Approximation
• Validity of the approximation requires
c(x) ≈ c₀(x)
• G(x, y, ω) can be written explicitly only for
very simple models
• In arbitrary backgrounds we can use the
first-order asymptotic approximation
(geometrical optics)
Forward/adjoint pairs:

u = B f
f̃ = B′ u

where ⟨u, B f⟩ = ⟨B′ u, f⟩

f̃(x) = ∫∫∫ W(ω)* ω² G*(x, r, ω) u(r, s, ω) G*(s, x, ω) dω d²r d²s
Imaging with the adjoint operator: since u = B f and f̃ = B′ u,

f̃ = B′ B f

f̃(x) = ∫ K(x, x′) f(x′) d³x′ ,   with K(x, x′) ≈ δ(x − x′)
Forward-adjoint operators with WKBJ Green functions
(solution in the wavenumber domain, with c₀ = c₀(z))

u(ky, kh, ω) = ∫ [ ω² f(ky, z) e^{ i ∫₀^z kz(z′) dz′ } ] / [ 4 (krz(0) ksz(0) krz(z) ksz(z))^{1/2} ] dz ,

f̃(ky, z) = ∫∫ [ ω² u(ky, kh, ω) e^{ −i ∫₀^z kz(z′) dz′ } ] / [ 4 (krz(0) ksz(0) krz(z) ksz(z))^{1/2} ] dω d²kh ,

with

kz(z) = (ω/c(z)) √(1 − |ky + kh|² c(z)²/(4ω²)) + (ω/c(z)) √(1 − |ky − kh|² c(z)²/(4ω²))
Recursive implementation (downward continuation)

u(ky, kh, z, ω) = u(ky, kh, ω) e^{ −i ∫₀^z kz(z′) dz′ }
              ≈ u(ky, kh, ω) e^{ −i Σ_{z′=0}^{z} kz(z′) Δz }
Laterally variant background velocity c₀ = c₀(x, z) (splitting¹)

ΔS = 2/c₀(z) − 1/c₀(y + h, z) − 1/c₀(y − h, z)

c₀(z): mean velocity at depth z
c₀(x, z): background velocity

The algorithm recursively downward continues the
data with c₀(z) in (kh, ky); the split-step
correction is applied in (y, h).

¹ Feit and Fleck, 1978, Light propagation in graded-index fibers, Appl. Opt., 17.
The Algorithm
• F is a 2D or 4D Fourier transform
• Algorithm complexity ≈ two FFTs per depth
step
Imaging and Inversion

Imaging:   f̃ = B′ u
Inversion: f_inv = B† u
Inversion (numerical solution) - CG
Method
• CG: Conjugate Gradients to solve large linear
systems of equations (in this case linear
operators)
• Easy to implement if you have B and B′
• No need to iterate forever
CG Scheme

To solve the problem ||Bx − y||², with initial
solution x₀:

Set initial values: r = y − B x₀ , g = B′ r , s = g , γ = ||g||²
For i = 1:imax
    v = B s            ! Modeling
    δ = ||v||²
    α = γ/δ
    x = x + α s
    r = r − α v
    g = B′ r           ! Imaging
    γ′ = ||g||² ; β = γ′/γ ; γ = γ′
    s = g + β s
Enddo
A note for code developers: operators
= subroutines/functions

B = A₁ A₂ … A_n   ⇒   B′ = A_n′ … A₂′ A₁′

⟨u, B f⟩ = ⟨B′ u, f⟩
SEG/EAGE Sub Salt Model - Velocity
SEG/EAGE Sub Salt Model - Zero offset
data
SEG/EAGE Sub Salt Model -
Born+Split-step Imaging / adjoint
operator
Edge Preserving Regularization
J = ||u_sc − B f||² + β_x Σ_k R[(D_x f)_k] + β_z Σ_k R[(D_z f)_k] ,

R(x) = ln(1 + x²)

with spatial derivative operators D_x and D_z.
[Figure: (A) velocity model, depth (m) versus offset (m), 1900-2500 m/s color scale; (B) data, time (s) versus offset (m); (C) and (D) inverted models on the same depth/offset axes.]
From Inverse Filters to LS
Migration
Mauricio Sacchi
Linear systems and Invariance - Discrete case
• Convolution sum

y_n = Σ_k h_{n−k} x_k = h_n * x_n

• Signals are time series or vectors
Discrete convolution
• Formula
y_n = Σ_k h_{n−k} x_k = h_n * x_n

• Finite-length signals

x_k , k = 0, …, NX − 1
y_k , k = 0, …, NY − 1
h_k , k = 0, …, NH − 1

• How do we do the convolution with finite-length signals?
  – Computer code
  – Matrix times vector
  – Polynomial multiplication
  – DFT
Discrete convolution
% Initialize output
y(1:NX + NH - 1) = 0
% Do convolution sum
for i = 1:NX
  for j = 1:NH
    y(i + j - 1) = y(i + j - 1) + x(i)*h(j)
  end
end
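A Python version of the same double loop, checked against a library convolution:

```python
import numpy as np

def conv(x, h):
    """Direct convolution sum, mirroring the double loop above (0-based)."""
    y = np.zeros(len(x) + len(h) - 1)
    for i in range(len(x)):
        for j in range(len(h)):
            y[i + j] += x[i] * h[j]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
assert np.allclose(conv(x, h), np.convolve(x, h))   # -> [1, 1, 1, -3]
```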
Discrete convolution
Example:

x = [x₀, x₁, x₂, x₃, x₄] , NX = 5
h = [h₀, h₁, h₂] , NH = 3 ,   y_n = Σ_k h_{n−k} x_k = h_n * x_n

y₀ = x₀h₀
y₁ = x₁h₀ + x₀h₁
y₂ = x₂h₀ + x₁h₁ + x₀h₂
y₃ = x₃h₀ + x₂h₁ + x₁h₂
y₄ = x₄h₀ + x₃h₁ + x₂h₂
y₅ = x₄h₁ + x₃h₂
y₆ = x₄h₂

or, in matrix form,

⎛ y₀ ⎞   ⎛ x₀  0   0  ⎞
⎜ y₁ ⎟   ⎜ x₁  x₀  0  ⎟
⎜ y₂ ⎟   ⎜ x₂  x₁  x₀ ⎟ ⎛ h₀ ⎞
⎜ y₃ ⎟ = ⎜ x₃  x₂  x₁ ⎟ ⎜ h₁ ⎟
⎜ y₄ ⎟   ⎜ x₄  x₃  x₂ ⎟ ⎝ h₂ ⎠
⎜ y₅ ⎟   ⎜ 0   x₄  x₃ ⎟
⎝ y₆ ⎠   ⎝ 0   0   x₄ ⎠
And finally: the LS filter design method…

• Inverse filtering using series expansion results in long filters with slow convergence in some cases (a = 0.99)
• Truncation errors could deteriorate the performance of the filter
• An alternative way of designing filters involves using the method of LS (inversion): given x = (1, a)
• We have posed the problem of designing an inverse filter as an inverse problem.
Inversion of a dipole

Find a filter of length 3 that inverts the dipole (1, a):

⎛ 1  0  0 ⎞ ⎛ h₀ ⎞   ⎛ 1 ⎞
⎜ a  1  0 ⎟ ⎜ h₁ ⎟ ≈ ⎜ 0 ⎟   (dipole matrix W, desired output g)
⎜ 0  a  1 ⎟ ⎝ h₂ ⎠   ⎜ 0 ⎟
⎝ 0  0  a ⎠          ⎝ 0 ⎠

LS filter: h = (WᵀW)⁻¹ Wᵀ g
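A NumPy sketch of this LS design for a sample dipole (a = 0.5 is an illustrative choice, not from the slides):

```python
import numpy as np

a = 0.5
W = np.array([[1.0, 0.0, 0.0],      # convolution matrix of the dipole (1, a)
              [a,   1.0, 0.0],
              [0.0, a,   1.0],
              [0.0, 0.0, a  ]])
g = np.array([1.0, 0.0, 0.0, 0.0])  # desired output: a spike at lag 0

h = np.linalg.solve(W.T @ W, W.T @ g)   # LS inverse filter of length 3

# The actual output W @ h approximates the spike; the residual shrinks
# as the filter length grows.
assert np.linalg.norm(W @ h - g) < 0.13
```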
Inversion of a wavelet

Find a filter of length 3 that inverts the wavelet (w₀, …, w₄):

⎛ w₀  0   0  ⎞          ⎛ 1 ⎞
⎜ w₁  w₀  0  ⎟ ⎛ h₀ ⎞   ⎜ 0 ⎟
⎜ w₂  w₁  w₀ ⎟ ⎜ h₁ ⎟ ≈ ⎜ 0 ⎟   (wavelet matrix W, desired output g)
⎜ w₃  w₂  w₁ ⎟ ⎝ h₂ ⎠   ⎜ 0 ⎟
⎜ w₄  w₃  w₂ ⎟          ⎜ 0 ⎟
⎜ 0   w₄  w₃ ⎟          ⎜ 0 ⎟
⎝ 0   0   w₄ ⎠          ⎝ 0 ⎠

e = ||W h − g||₂²
∇e = 0 = WᵀW h − Wᵀ g
h = (WᵀW)⁻¹ Wᵀ g
ĝ = W h   (ĝ_k = w_k * h_k)
Trade-off parameter

W h ≈ g
∇e = 0 = WᵀW h − Wᵀ g + µ h
h = (WᵀW + µI)⁻¹ Wᵀ g
ĝ = W h   (ĝ_k = w_k * h_k)

R = WᵀW = ⎛ R₀  R₁  R₂ ⎞
          ⎜ R₁  R₀  R₁ ⎟
          ⎝ R₂  R₁  R₀ ⎠

R_k: autocorrelation coefficient

µ = R₀ · P/100 ,   P: % of pre-whitening
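A NumPy sketch of the pre-whitened design (the wavelet and P below are illustrative assumptions):

```python
import numpy as np

w = np.array([1.0, 0.5, 0.2])              # assumed wavelet
nf = 3                                     # filter length
# Autocorrelation coefficients R_0, R_1, R_2 of the wavelet
R = np.array([np.dot(w, w),
              np.dot(w[1:], w[:-1]),
              np.dot(w[2:], w[:-2])])

P = 10.0                                   # percent of pre-whitening
mu = R[0] * P / 100.0

# Toeplitz normal matrix W^T W with the damped diagonal
T = np.array([[R[0], R[1], R[2]],
              [R[1], R[0], R[1]],
              [R[2], R[1], R[0]]]) + mu * np.eye(nf)

v = np.array([w[0], 0.0, 0.0])             # W^T g for a spike at lag 0
h = np.linalg.solve(T, v)                  # pre-whitened spiking filter
```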
Ideal and Achievable Inversion of a Wavelet:
Pre-whitening
• Pre-whitening P = 0%

w_k: wavelet   h_k: filter   ĝ_k: actual output

[Figure: time domain; frequency domain; normalized amplitude spectrum.]
• P = 10%: w_k: wavelet, h_k: filter, ĝ_k: actual output
Ideal and Achievable Inversion of a Wavelet:
Pre-whitening
• P = 20%: w_k: wavelet, h_k: filter, ĝ_k: actual output
[Trade-off diagram: misfit versus model norm for µ = 1, 0.1, 0.01 — larger µ gives more stability but less resolution; smaller µ gives more resolution but less stability.]
How good is the output?

• Signal-to-interference ratio (SIR), shown for P = 0% and P = 20% (ĝ_k: actual output):

SIR = max(|ĝ_k|) / Σ_k |ĝ_k|
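As a sketch, SIR is one line of NumPy; a perfect spike gives SIR = 1, and energy leaking into side lobes lowers it:

```python
import numpy as np

def sir(g_hat):
    """Signal-to-interference ratio: peak amplitude over total amplitude."""
    g = np.abs(np.asarray(g_hat, dtype=float))
    return g.max() / g.sum()

assert sir([0.0, 1.0, 0.0]) == 1.0    # ideal spiking output
assert sir([0.5, 1.0, 0.5]) == 0.5    # half the output is interference
```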
• The previous analysis was OK for minimum-phase wavelets - we tried to convert a front-loaded signal into another front-loaded signal (spike)
• For non-minimum-phase wavelets we introduce an extra degree of freedom (lag)

⎛ w₀  0   0  ⎞          ⎛ 0 ⎞
⎜ w₁  w₀  0  ⎟ ⎛ h₀ ⎞   ⎜ 0 ⎟
⎜ w₂  w₁  w₀ ⎟ ⎜ h₁ ⎟ ≈ ⎜ 1 ⎟   In this example Lag = 3
⎜ w₃  w₂  w₁ ⎟ ⎝ h₂ ⎠   ⎜ 0 ⎟
⎜ w₄  w₃  w₂ ⎟          ⎜ 0 ⎟
⎜ 0   w₄  w₃ ⎟          ⎜ 0 ⎟
⎝ 0   0   w₄ ⎠          ⎝ 0 ⎠
LS Inversion of non-minimum phase wavelets
Where does the wavelet come from?
• Finding the inverse filter requires the matrix R = WᵀW. This is the autocorrelation matrix of the wavelet:

R = ⎛ R₀  R₁  R₂ ⎞
    ⎜ R₁  R₀  R₁ ⎟
    ⎝ R₂  R₁  R₀ ⎠

• If the reflectivity is assumed to be white, then the autocorrelation of the trace is equal (within a scale factor) to the autocorrelation of the wavelet: R_Trace = c R_Wavelet
• In short: since you cannot measure the wavelet or its autocorrelation, we replace the autocorrelation coefficients of the wavelet by those of the trace and then hope that the white-reflectivity assumption holds!
Why is that?

Convolution model: s_k = w_k * r_k
Color?

Convolution model: s_k = w_k * r_k

Rosa, A. L. R., and T. J. Ulrych, 1991, Processing via spectral modeling: Geophysics, 56, 1244-1251.
Haffner, S., and S. Cheadle, 1999, Colored reflectivity compensation for increased vertical resolution: 69th Annual International Meeting, SEG, Expanded Abstracts, 1307-1310.
Saggaf, M. M., and E. A. Robinson, 2000, A unified framework for the deconvolution of traces of nonwhite reflectivity: Geophysics, 65, 1660-1676.
• We often compute deconvolution operators from the data (we bypass the wavelet)
• This is what we call the Wiener filter or spiking filter
• Inversion is done with a fast algorithm that exploits the fact that the autocorrelation matrix has a special structure (Toeplitz matrix, Levinson recursion)

If I start with the wavelet:  h = (WᵀW + µI)⁻¹ Wᵀ g ,  where v = Wᵀ g = Wᵀ (1, 0, …, 0)ᵀ = (w₀, 0, …, 0)ᵀ
If I start with the trace:    h = (R_Trace + µI)⁻¹ v ,  with v = (1, 0, …, 0)ᵀ

ĝ = W h   (ĝ_k = w_k * h_k)
r̂_k = h_k * s_k   (estimated reflectivity)

The filter will not have the right scale factor… not a problem!
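SciPy exposes a Levinson-type Toeplitz solver, so the fast path can be sketched and checked against a dense solve (the trace below is an illustrative stand-in):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

s = np.array([1.0, 0.4, -0.2, 0.1, 0.05])     # assumed trace
nf = 3                                         # filter length
# Autocorrelation of the trace (lags 0 .. nf-1) stands in for the wavelet's
R = np.correlate(s, s, mode='full')[len(s) - 1:len(s) - 1 + nf]

col = R.copy()
col[0] += 0.01 * R[0]                          # 1% pre-whitening
v = np.zeros(nf); v[0] = 1.0                   # desired spike at lag 0

h_fast = solve_toeplitz(col, v)                # Levinson-type O(nf^2) solve
h_dense = np.linalg.solve(toeplitz(col), v)    # dense O(nf^3) reference
assert np.allclose(h_fast, h_dense)
```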
Practical aspects of deconvolution: noise

s_k = w_k * r_k + n_k

r̂_k = h_k * s_k = h_k * (w_k * r_k + n_k)
    = h_k * w_k * r_k + h_k * n_k
    =   ĝ_k * r_k     +   b_k

1 - ĝ_k * r_k is the actual output; 2 - b_k is filtered/amplified noise (noise after filtering)

• 1-2 cannot be simultaneously achieved
• The pre-whitening is included in the design of h and is the parameter that controls the 1-2 trade-off
[Figures: deconvolution examples in the presence of noise.]
s_k = w_k * r_k + n_k
r̂_k = h_k * s_k = h_k * (w_k * r_k + n_k) = h_k * w_k * r_k + h_k * n_k

In the frequency domain:

S(ω) = W(ω) R(ω) + N(ω)
|R̂(ω)| = |H(ω)|·|W(ω)|·|R(ω)| + |H(ω)|·|N(ω)|
To Reproduce the examples
• http://www-geo.phys.ualberta.ca/saig/SeismicLab/SeismicLab/
Non-Uniqueness - a simple problem

Simple example:

m₁ + a m₂ ≈ d

[Figure: the line m₁ + a m₂ = d in the (m₁, m₂) plane — every point on it fits the data.]
Smallest solution   [figure in the (m₁, m₂) plane]
Flattest solution   [figure in the (m₁, m₂) plane]
Sparse solution     [figure in the (m₁, m₂) plane]
Deconvolution
- deblurring
- equalization
Seismic Deconvolution
[Figure: source S and receiver r over reflectors 2 and 3, in depth or time.]
Seismic Deconvolution
[Figure: reflectivity r(t) with events 1-4, wavelet w(t), and the resulting seismogram d(t) versus time.]
Seismic Deconvolution

d = W r + n

J = ||W r − d||₂²
∇J = W′W r − W′d = 0

If c I ≈ W′W, then r̂ ≈ c⁻¹ W′ d — some sort of “backprojection”.
Seismic Deconvolution — quadratic regularization

d = W r + n

Interpretation: the model norm is a measure of “bad” features, and µ is the trade-off parameter.
J = ||W r − d||₂² + µ ||L r||₂²

[Trade-off diagram: ||W r − d||₂² versus model norm for µ from 10⁻⁶ (unstable/overfitting) to 10⁴ (low resolution/underfitting).]
Seismic Deconvolution — quadratic regularization

Typical regularizers: J = ||W r − d||₂² + µ ||L r||₂²

Sir Harold Jeffreys (quoted by Moritz, 1980; taken from Tarantola, 1987)
Seismic Deconvolution: trying to squeeze resolution via mathematical tricks…

Super-resolution via non-quadratic regularization:

R(r) = Σ_i |r_i|                  L1 norm              non-quadratic norm
R(r) = Σ_i |r_i|²                 L2 norm              quadratic norm
R(r) = Σ_i ln(1 + r_i²/σ_c²)      Cauchy regularizer   non-quadratic norm

- Claerbout, J. F., and F. Muir, 1973, Robust modeling with erratic data: Geophysics, 38, 826-844.
- Taylor, H. L., S. C. Banks, and J. F. McCoy, 1979, Deconvolution with the L-one norm: Geophysics, 44, 39-52.
- Oldenburg, D. W., T. Scheuer, and S. Levy, 1983, Recovery of the acoustic impedance from reflection seismograms: Geophysics, 48, 1318-1337.
Cauchy Norm Deconvolution

S(r) = Σ_i ln(1 + r_i²/σ_c²)
The Cauchy solution is obtained iteratively (reweighted least squares):

Set r (iter = 0 solution)
For iter = 1:iter_max
    Q_ii = 1 / (σ_c² (1 + r_i²/σ_c²))   (weights from the previous iterate)
    r = (W′W + µ Q(r))⁻¹ W′ d
End
The regularizers seen so far:

R(r) = Σ_i |r_i|
R(r) = Σ_i |r_i|²
R(r) = Σ_i ln(1 + r_i²/σ_c²)

R(r) = Σ_i F(r_i)   (general form)
Seismic Deconvolution — back to the original problem: estimation of r(t)

[Figure: wavelet w(t) and seismogram d(t) versus time.]
Seismic Deconvolution

[Figures: back projection, quadratic regularization (QR, L = I), and non-quadratic (NQR, Cauchy) solutions versus time, compared with the true reflectivity.]
Conventional Deconv
% Cauchy deconvolution (IRLS); r holds the estimated reflectivity
R = W'*W;
iter_max = 10;
r = zeros(N,1);
for k=1:iter_max;
  Q = diag(1./(sc^2+r.^2));
  Matrix = R + mu*Q;
  r = Matrix\(W'*s);
end;
Adaptive version — recompute the scale parameter inside the loop:

for k=1:iter_max;
  sc = 0.01*max(abs(r));
  % ... same update of Q and r as above ...
end;
High frequency imaging methods…

Other methods exist - they all attempt to retrieve a sparse reflectivity sequence:
Atomic Decomposition / Matching Pursuit / Basis Pursuit (like an L1)

Chopra, S., J. Castagna, and O. Portniaguine, 2006, Seismic resolution and thin-bed reflectivity inversion: Recorder, 31, 19-25. (www.cseg.ca)
Portniaguine, O., and J. P. Castagna, 2004, Inverse spectral decomposition: 74th Annual International Meeting, SEG, Expanded Abstracts, 1786-1789.
Reflectivity as a function of P-impedance:

r_k = (I_{k+1} − I_k) / (I_{k+1} + I_k)

ξ_k = ½ log(I_k / I₀) ≈ Σ_{j=0}^{k−1} r_j   (small-contrast approximation)
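A quick NumPy check of the small-contrast approximation (the impedance profile is an illustrative assumption):

```python
import numpy as np

I = np.array([2000.0, 2200.0, 2100.0, 2600.0])   # assumed P-impedance profile
r = (I[1:] - I[:-1]) / (I[1:] + I[:-1])          # reflectivity r_k

# xi_k = 0.5*log(I_k / I_0) is approximately the running sum of r
xi = 0.5 * np.log(I[1:] / I[0])
assert np.allclose(xi, np.cumsum(r), atol=0.01)
```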
Cauchy Norm Deconvolution with impedance constraints

[Figures: true reflectivity and trace; with β = 0 the constraints are not honored; with β = 0.25 the constraints are honored.]
Cauchy Norm Deconvolution:
with impedance constraints - Algorithm
beta = 0.25;
iter_max = 10;
R = W'*W + beta*C'*C;
r = zeros(N,1);
for k=1:iter_max;
  Q = diag(1./(sc^2+r.^2));
  Matrix = R + mu*Q;
  r = Matrix\(W'*s + beta*C'*psi);
  sc = 0.01*max(abs(r));
end;
AVO
Estimation of rock
properties & HCI
Example: Multi-channel Seismic Deconvolution
Sensing a reflector

[Figure: wavelet w(t) incident at angle α on an interface at x between media (v−, ρ−) above and (v+, ρ+) below; recorded data d(t, α, x).]
[Figures: estimated AVO attributes Â(t, x) and B̂(t, x) — quadratic regularization (L = I) versus the Cauchy solution; interpretation/AVA analysis of zones I, II, III identifies a gas-bearing sand lens.]

Wang, J., H. Kuehl, and M. D. Sacchi, 2005, High-resolution wave-equation AVA imaging: Algorithm and tests with a data set from the Western Canadian Sedimentary Basin: Geophysics, 70, 891-899.
Imaging - large scale problems

[Figure: source s and receiver/geophone g at the surface; midpoint m, half-offset h, subsurface position x, aperture angle α.]

We want the variation of the reflectivity at each subsurface position: r(α, x, z). A 2D image is in fact a 3D volume; a 3D image is a 5D volume.

m: midpoint
h: offset

Wang, Kuehl and Sacchi, Geophysics 2005.
A m = d + n

m = m(x, y, z, p) ,   p = sin(α)/v(x, y, z)

g = [g_x, g_y] ,   s = [s_x, s_y]
Migration/Inversion Hierarchy…

A r = d   (modeling)
Regularized Imaging
Remarks:
• We don’t have A; we only have a code that knows how to apply A to m and another that knows how to apply A′ to d
• A and A′ are built to pass the dot-product test
• Importance of the sampling matrix W for reducing footprints (Kuehl, 2002, PhD Dissertation)
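The dot-product test checks ⟨u, A m⟩ = ⟨A′ u, m⟩ for random vectors; in this sketch a matrix stands in for the operator code:

```python
import numpy as np

# Dot-product test: for a linear operator A and its adjoint A',
# <u, A m> must equal <A' u, m> for any m and u.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 4))     # stand-in for the operator code
m = rng.standard_normal(4)
u = rng.standard_normal(6)

assert np.isclose(u @ (A @ m), (A.T @ u) @ m)
```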
[Figures: migration versus inversion results; migration with quadratic (QR) and non-quadratic (NQR) regularization.]
[Figure: synthetic trace comparison — red: synthetic, blue: inverted; panels A (synthetic) and B (inverted).]