Today
What is UQ?
polynomial approximation
parameter estimation
Uncertainty quantification
Uncertainty quantification
Numerical simulations rely on models of the real world; these models carry uncertainties:

In input variables: $Y = F(X + \varepsilon)$
In output variables: $Y = F(X) + \varepsilon$
Models are approximate
Forward problems
In the forward propagation of uncertainty, we have a known model $F$ for a system of interest. We model its input $X$ as a random variable and wish to understand the output random variable

$$Y = F(X).$$

This is also related to sensitivity analysis (how random variations in $X$ influence the variation in $Y$).
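The simplest approach to forward propagation is plain Monte Carlo: sample the input, push each sample through the model, and look at the statistics of the output. A minimal sketch (the model F and the input distribution below are illustrative assumptions, not taken from this notebook):

import numpy as np

# Forward propagation of uncertainty by Monte Carlo (illustrative sketch)
def F(x):
    return np.sin(3 * x) + 0.5 * x**2    # hypothetical forward model

rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, scale=0.3, size=100_000)   # uncertain input X
Y = F(X)                                           # output random variable Y = F(X)

print("E[Y]  ≈", Y.mean())
print("Var Y ≈", Y.var())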
Inverse problems
In inverse problems, $F$ is a forward model, but $Y$ is observed data, and we want to recover the input $X$.

Inverse problems are typically ill-posed in the usual sense, so we need expert knowledge (a prior) about what a good solution $X$ might be.

The Bayesian perspective then becomes the method of choice, but it requires the representation of high-dimensional distributions:
$$p(X \mid Y) = \frac{p(Y \mid X)\, p(X)}{p(Y)}.$$
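As a toy illustration of Bayesian inversion (my own sketch, not from the notebook: the forward model, noise level, and prior are all assumptions), the posterior of a one-dimensional parameter can be evaluated on a grid:

import numpy as np

def F(x):
    return x**3                          # hypothetical forward model

y_obs, sigma = 2.0, 0.5                  # observed data and noise level (assumed)
x = np.linspace(-3, 3, 1001)             # grid over the unknown input X

likelihood = np.exp(-0.5 * ((y_obs - F(x)) / sigma) ** 2)   # p(Y | X)
prior = np.exp(-0.5 * x**2)                                 # standard normal p(X)
posterior = likelihood * prior
posterior /= np.trapz(posterior, x)      # normalizing divides by p(Y)

print("posterior mean ≈", np.trapz(x * posterior, x))

In higher dimensions such a grid is hopeless, which is exactly why compact representations of distributions matter here.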
Chebyshev polynomials of the first kind: $(a, b) = (-1, 1)$, $w(x) = (1 - x^2)^{-1/2}$.
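The notebook defines the helpers p_cheb and system_mat elsewhere; a minimal sketch of what they might look like, assuming p_cheb(i, x) evaluates $T_i(x)$ and system_mat(nodes, n, poly) stacks the collocation matrix $A_{ki} = p_i(x_k)$:

import numpy as np

def p_cheb(i, x):
    # Chebyshev polynomial T_i(x) via T_{i+1}(x) = 2x T_i(x) - T_{i-1}(x)
    x = np.asarray(x, dtype=float)
    t_prev, t = np.ones_like(x), x
    if i == 0:
        return t_prev
    for _ in range(i - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

def system_mat(nodes, n, poly):
    # collocation matrix A[k, i] = p_i(nodes[k]), i = 0, ..., n-1
    return np.column_stack([poly(i, nodes) for i in range(n)])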
plt.plot(*data)   # one curve per polynomial, matching the legend below
plt.legend(["power = {}".format(i) for i in range(len(data))]);

plt.plot(x, complex_func(x));   # the target function we are going to approximate
Now let's approximate the function with polynomials, taking different maximal powers $n$ and the corresponding number of node points:
$$f(x) \approx \hat{f}(x) = \sum_{i=0}^{n} \alpha_i\, p_i(x)$$
In [5]: n = 6
        M = n
        nodes = np.linspace(-1, 1, M)      # interpolation nodes
        RH = complex_func(nodes)           # right-hand side: function values at the nodes
        A = system_mat(nodes, n, p_cheb)   # collocation matrix
        if n == M:
            alpha = np.linalg.solve(A, RH)      # square system: interpolation
        else:
            alpha = np.linalg.lstsq(A, RH)[0]   # overdetermined: least squares
        print("α = {}".format(alpha))
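calc_approximant is another notebook helper whose definition is not reproduced above; a minimal sketch, assuming it simply evaluates the expansion $\hat f(x) = \sum_i \alpha_i p_i(x)$:

def calc_approximant(poly, alpha, x):
    # evaluate sum_i alpha[i] * p_i(x) at the points x (assumed behavior)
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for i, a in enumerate(alpha):
        y += a * poly(i, x)
    return y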
In [8]: y = complex_func(x)
        approx_y = calc_approximant(p_cheb, alpha, x)
        plt.plot(x, y, x, approx_y, nodes, RH, 'ro');   # function, approximant, and nodes

ε = 1.2986238408482182
If we take another set of polynomials, the result of the approximation will be the same (the coefficients α will differ, of course).
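This is easy to verify: the interpolant through a fixed set of nodes is unique, whatever basis we express it in. A quick check with numpy.polynomial (my own sketch, independent of the notebook's helpers):

import numpy as np
from numpy.polynomial import chebyshev as C, polynomial as P

f = lambda x: np.exp(x) * np.sin(5 * x)    # any smooth test function
nodes = np.linspace(-1, 1, 8)

cheb = C.Chebyshev.fit(nodes, f(nodes), deg=7)    # Chebyshev basis
mono = P.Polynomial.fit(nodes, f(nodes), deg=7)   # monomial basis

xx = np.linspace(-1, 1, 100)
print(np.max(np.abs(cheb(xx) - mono(xx))))        # agreement up to round-off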
# solve for the expansion coefficients
nodes = make_nodes(n, distrib)()
RH = f(nodes)
A = system_mat(nodes, n, p_herm)
alpha = np.linalg.solve(A, RH)

# calculate values
x = np.linspace(-1, 1, 2**10)
y = f(x)
approx_y = calc_approximant(p_herm, alpha, x)

# plot
plt.figure(figsize=(14, 6.5))
plt.plot(x, y, x, approx_y, nodes, RH, 'ro')
plt.show()

# calculate the error
epsilon_cheb = np.linalg.norm(y - approx_y, np.inf)
print("ε = {}".format(epsilon_cheb))
In [13]: interact(plot_approx,
                  f=fixed(complex_func),
                  n=widgets.IntSlider(min=1, max=15, step=1, value=4,
                                      continuous_update=False),
                  # the two option lists below are clipped in the source;
                  # their tails are reconstructed guesses
                  distrib=widgets.ToggleButtons(options=['Uniform', 'Chebyshev roots']),
                  poly=widgets.ToggleButtons(options=['Chebyshev polynomials',
                                                      'Hermite polynomials']),
                  );
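make_nodes is also defined elsewhere in the notebook; a plausible sketch, assuming it returns a zero-argument factory matching the two distrib options:

def make_nodes(n, distrib):
    # hypothetical reconstruction: return a callable producing n nodes
    def uniform():
        return np.linspace(-1, 1, n)
    def cheb_roots():
        # roots of the Chebyshev polynomial T_n, clustered near the endpoints
        k = np.arange(n)
        return np.cos(np.pi * (2 * k + 1) / (2 * n))
    return cheb_roots if distrib == 'Chebyshev roots' else uniform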
Random input
mean value
variance
risk estimation
Expand the function in a series of orthogonal polynomials:

$$f(x) = \sum_{i=0}^{\infty} \alpha_i\, p_i(x).$$
If the set of orthogonal polynomials $\{p_n\}$ has the same weight function as the density $\rho$, and the first polynomial is constant, $p_0(x) = h_0$, then $\mathbb{E}f = \alpha_0 h_0$. Usually $h_0 = 1$, and we get the simple relation

$$\mathbb{E}f = \alpha_0.$$
For the variance, $\operatorname{Var} f = \mathbb{E}(f - \mathbb{E}f)^2$, so note that the summation begins with $i = 1$. Assuming we can interchange the sum and the integral,

$$\operatorname{Var} f = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} \int_a^b \alpha_i p_i(\tau)\,\alpha_j p_j(\tau)\,\rho(\tau)\,d\tau = \sum_{i=1}^{\infty} \alpha_i^2 h_i.$$
The formula is very simple if all the coefficients $\{h_i\}$ are equal to 1:

$$\operatorname{Var} f = \sum_{i=1}^{\infty} \alpha_i^2.$$
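Once the coefficients α are known, both moments are read off directly (a sketch under the assumption $h_i = 1$, i.e. an orthonormal basis; alpha is the coefficient vector from the solves above):

mean_pc = alpha[0]                 # E f = alpha_0
var_pc = np.sum(alpha[1:] ** 2)    # Var f = sum_{i >= 1} alpha_i^2
print("mean = {}, variance = {}".format(mean_pc, var_pc))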
Let us check the formulas for the mean and the variance by computing them with the Monte Carlo method.
Normal distribution
big_x = np.random.randn(int(1e6))    # 10^6 standard normal samples
big_y = complex_func(big_x / scale)  # propagate them through the function
mean = np.mean(big_y)
var = np.std(big_y)**2
print("mean = {}, variance = {}".format(mean, var))
n = 15
M = n
if n == M:
    alpha = np.linalg.solve(A, RH)
else:
    # weight the rows by the Gaussian factor exp(-x^2/2) so the
    # least-squares fit emphasizes the high-density region
    W = np.diag(np.exp(-nodes**2 * 0.5))
    alpha = np.linalg.lstsq(W.dot(A), W.dot(RH))[0]
So the method based on the polynomial expansion is more accurate than Monte Carlo.
In [17]: ex = 2
         x = np.linspace(-scale - ex, scale + ex, 10000)
         y = complex_func(x / scale)
         approx_y = calc_approximant(p_herm, alpha, x)
         plt.plot(x, y, x, approx_y, nodes, RH, 'ro');
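p_herm is the Hermite analogue of p_cheb, likewise defined elsewhere in the notebook. A sketch via the probabilists' recurrence $He_{i+1}(x) = x\,He_i(x) - i\,He_{i-1}(x)$ (the choice of the probabilists' rather than the physicists' convention is an assumption):

def p_herm(i, x):
    # probabilists' Hermite polynomial He_i(x) (assumed convention)
    x = np.asarray(x, dtype=float)
    h_prev, h = np.ones_like(x), x
    if i == 0:
        return h_prev
    for k in range(1, i):
        h_prev, h = h, x * h - k * h_prev
    return h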
Uniform distribution
In [20]: n = 15
         M = n
         nodes = np.linspace(-1, 1, M)
         RH = complex_func(nodes)
         A = system_mat(nodes, n, p_legendre)   # Legendre basis for the uniform density
         if n == M:
             alpha = np.linalg.solve(A, RH)
         else:
             alpha = np.linalg.lstsq(A, RH)[0]
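p_legendre would follow the same pattern via Bonnet's recurrence $(i+1)\,P_{i+1}(x) = (2i+1)\,x\,P_i(x) - i\,P_{i-1}(x)$; a sketch of what it might look like:

def p_legendre(i, x):
    # Legendre polynomial P_i(x) via Bonnet's recurrence (sketch)
    x = np.asarray(x, dtype=float)
    p_prev, p = np.ones_like(x), x
    if i == 0:
        return p_prev
    for k in range(1, i):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

The Legendre polynomials are orthogonal with respect to the uniform density on $(-1, 1)$, so the same moment identities $\mathbb{E}f = \alpha_0$ and $\operatorname{Var} f = \sum_i \alpha_i^2 h_i$ apply.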