
From the initial state, or from the state at the end of the preceding calculation, one

calculates the stress field resulting from an increment of deformation.

In AXIS modeling
DX: corresponds to radial displacement
DY: corresponds to longitudinal displacement

Differences between analytical and numerical results come mostly from the modeling choices:


+ Time step considered
+ Points of interest (Gauss points or ELNO/NOEUD)
+ Grid (or mesh) of elements
+ Material behavior model
+ Computation method (convergence criteria; convergence methodology (NEWTON, etc.))

Pr. Dasnor notes:


1/ Choose the number of samples: e.g., M = 12
2/ Generate M random values of the given properties {ξ_i}
3/ Using an interpolation methodology (Kriging or another), obtain the values {ξ_i}
in all other elements (a minimal sketch follows this list).
4/ Run a simulation with ASTER using these values.
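A minimal Python sketch of steps 1-3, using an RBF interpolator as a stand-in for Kriging; the sample coordinates, element centroids, M = 12 and the lognormal parameters are illustrative placeholders, not values from the actual study.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Step 1: choose the number of samples
M = 12

# Step 2: generate M random values {xi_i} of the property (e.g. a modulus, in Pa)
sample_xy = rng.uniform(0.0, 50.0, size=(M, 2))                   # hypothetical sample locations
xi_samples = rng.lognormal(mean=np.log(50e6), sigma=0.3, size=M)  # hypothetical property values

# Step 3: interpolate {xi_i} onto all element centroids
# (Kriging or another spatial interpolator could replace the RBF used here)
centroids = rng.uniform(0.0, 50.0, size=(500, 2))                 # placeholder element centroids
xi_elements = RBFInterpolator(sample_xy, xi_samples)(centroids)

# Step 4 would write xi_elements into the material definition of each element
# group and launch the ASTER simulation.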

Step 1: Check the results of the existing calculations → at which time E1 becomes E2 / E3.
Step 2: One simulation with K elements (K chosen with respect to d = k × tunnel radius; beyond d the variability of the parameter characteristics can be neglected).
Step 3:
+ N1 simulations (core simulations): 100
+ Make a first evaluation: obtain → b1
+ … other (10) simulations
+ … evaluation with 110 → b2. Compare b1 and b2 (a sketch of this check follows).
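A hedged sketch of the step-3 bookkeeping: run_simulation() below is only a placeholder for one ASTER evaluation returning the scalar quantity of interest; b1 and b2 are the estimates before and after the extra runs.

import numpy as np

rng = np.random.default_rng(1)

def run_simulation():
    # placeholder: in the real study this would launch one Code_Aster run
    return rng.normal(loc=1.0, scale=0.1)

core = np.array([run_simulation() for _ in range(100)])   # N1 = 100 core simulations
b1 = core.mean()                                          # first evaluation -> b1

extra = np.array([run_simulation() for _ in range(10)])   # 10 additional simulations
b2 = np.concatenate([core, extra]).mean()                 # evaluation with 110 runs -> b2

print(f"b1 = {b1:.4f}, b2 = {b2:.4f}, |b1 - b2| = {abs(b1 - b2):.2e}")
# If |b1 - b2| falls below a chosen tolerance, the estimate is considered
# converged; otherwise further simulations are added.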

Highlights:
a/ Reduce the number of simulations
b/ Reduce the computation time (for each simulation)
c/ COV error → exact?
FEM

# Keyword settings of the non-linear solution (excerpt of a Code_Aster command,
# presumably STAT_NON_LINE):
CONVERGENCE=_F(RESI_GLOB_RELA=1.E-05,    # relative global residual criterion
               ITER_GLOB_MAXI=200,       # maximum number of global Newton iterations
               ARRET='OUI',              # stop the computation if not converged
               ITER_GLOB_ELAS=25,),
COMPORTEMENT=_F(GROUP_MA=('SOL', ),      # behaviour assigned to the 'SOL' mesh group
                RELATION='ELAS',         # linear elastic constitutive law
                ITER_INTE_MAXI=20,
                RESI_INTE_RELA=1.E-06,
                ITER_INTE_PAS=0,
                RESI_CPLAN_RELA=1.E-06,
                PARM_THETA=1.0,
                SYME_MATR_TANG='OUI',
                ITER_CPLAN_MAXI=1,
                DEFORMATION='PETIT',     # small-strain kinematics
                PARM_ALPHA=1.0,),
NEWTON=_F(MATRICE='TANGENTE',            # tangent stiffness matrix
          REAC_ITER=1,                   # reassembled at every iteration
          REAC_INCR=1,                   # and at every increment
          REAC_ITER_ELAS=0,
          MATR_RIGI_SYME='NON',),
SOLVEUR=_F(RENUM='METIS',
           STOP_SINGULIER='OUI',
           ELIM_LAGR='NON',
           NPREC=8,
           METHODE='MULT_FRONT',),       # multifrontal direct solver
METHODE='NEWTON',                        # global resolution method
ARCHIVAGE=_F(PRECISION=1.E-06,
             CRITERE='RELATIF',),

Time of computation: 0.000000000000e+00 - Level of cutting: 1

---------------------------------------------------------------------
|  NEWTON   |     RESIDU     |     RESIDU     |      OPTION       |
| ITERATION |    RELATIF     |     ABSOLU     |    ASSEMBLAGE     |
|           | RESI_GLOB_RELA | RESI_GLOB_MAXI |                   |
---------------------------------------------------------------------
|     0     |  1.55349E-11   |  8.52448E-03   |     TANGENTE      |
---------------------------------------------------------------------

Convergence criterion(s) reached.
The residual of type RESI_GLOB_RELA is worth 1.553494197183e-11 at node N7232, degree of freedom DX.
The last decade has seen a rapid increase in the use of…
and the analysis of spatial data is an important component of this
development.

(https://blog.dominodatalab.com/fitting-gaussian-process-models-python/)

We can describe a Gaussian process as a distribution over functions. Just as a multivariate normal
distribution is completely specified by a mean vector and a covariance matrix, a GP is fully specified by
a mean function and a covariance function:

p(x) ∼ GP(m(x), k(x, x′))

For example, one specification of a GP might be:

m(x) = 0
k(x, x′) = θ₁ exp(−(θ₂/2)(x − x′)²)

Here, the covariance function is a squared exponential, for which values of x and x′ that are close
together result in values of k closer to one, while those that are far apart return values closer to zero.
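As a concrete illustration, the sketch below evaluates this squared-exponential covariance on a 1-D grid; the θ values are arbitrary choices, not fitted parameters.

import numpy as np

def sq_exp_cov(x, xp, theta1=1.0, theta2=10.0):
    # k(x, x') = theta1 * exp(-theta2/2 * (x - x')^2), evaluated pairwise
    d2 = (x[:, None] - xp[None, :]) ** 2
    return theta1 * np.exp(-0.5 * theta2 * d2)

x = np.linspace(0.0, 1.0, 5)
K = sq_exp_cov(x, x)
print(np.round(K, 3))   # close to 1 near the diagonal, decaying towards 0 with distance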

Building models with Gaussians


What if we chose to use Gaussian distributions to model our data?

There would not seem to be any gain in doing this, because normal distributions are not particularly
flexible distributions in and of themselves. However, adopting a set of Gaussians (a multivariate
normal vector) confers a number of advantages. First, the marginal distribution of any subset of
elements from a multivariate normal distribution is also normal:

p(x, y) = \mathcal{N}\left( \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}, \begin{bmatrix} \Sigma_x & \Sigma_{xy} \\ \Sigma_{xy}^T & \Sigma_y \end{bmatrix} \right)

p(x) = ∫ p(x, y) dy = N(μ_x, Σ_x)

Also, conditional distributions of a subset of the elements of a multivariate normal distribution
(conditional on the remaining elements) are normal too:

p(x | y) = N( μ_x + Σ_xy Σ_y⁻¹ (y − μ_y), Σ_x − Σ_xy Σ_y⁻¹ Σ_xyᵀ )
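A small numerical check of both facts with NumPy; the mean vector and covariance matrix below are made up to keep the example self-contained.

import numpy as np

mu = np.array([0.0, 1.0])            # [mu_x, mu_y]
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])       # [[Sigma_x, Sigma_xy], [Sigma_xy^T, Sigma_y]]

# Marginal of x: simply drop the y entries
mu_x, Sigma_x = mu[0], Sigma[0, 0]

# Conditional of x given an observed y, using the formula above
y_obs = 2.0
mu_x_given_y = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (y_obs - mu[1])
Sigma_x_given_y = Sigma[0, 0] - Sigma[0, 1] / Sigma[1, 1] * Sigma[1, 0]

print(mu_x, Sigma_x)                  # parameters of N(mu_x, Sigma_x)
print(mu_x_given_y, Sigma_x_given_y)  # parameters of the conditional normal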
RBF: https://en.wikipedia.org/wiki/Radial_basis_function_kernel
In machine learning, the (Gaussian) radial basis function kernel, or RBF kernel, is a popular kernel
function used in support vector machine classification.
The RBF kernel on two samples x and x′, represented as feature vectors in some input space, is defined
as[2]

K(x, x′) = exp( −‖x − x′‖² / (2σ²) )

https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html#sphx-glr-download-auto-examples-svm-plot-rbf-parameters-py

Parameters of the RBF Kernel


When training an SVM with the Radial Basis Function (RBF) kernel, two parameters must be
considered: C and gamma. The parameter C, common to all SVM kernels, trades off misclassification
of training examples against simplicity of the decision surface. A low C makes the decision surface
smooth, while a high C aims at classifying all training examples correctly. gamma defines how much
influence a single training example has. The larger gamma is, the closer other examples must be to be
affected.
https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html#sphx-glr-auto-examples-svm-plot-rbf-parameters-py
This example illustrates the effect of the parameters gamma and C of the Radial Basis Function (RBF)
kernel SVM.
Intuitively, the gamma parameter defines how far the influence of a single training example reaches,
with low values meaning ‘far’ and high values meaning ‘close’. The gamma parameters can be seen as
the inverse of the radius of influence of samples selected by the model as support vectors.
The C parameter trades off correct classification of training examples against maximization of the
decision function’s margin. For larger values of C, a smaller margin will be accepted if the decision
function is better at classifying all training points correctly. A lower C will encourage a larger margin,
therefore a simpler decision function, at the cost of training accuracy. In other words, C behaves as a
regularization parameter in the SVM.
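A minimal sketch of these two knobs on a toy scikit-learn dataset; the (C, gamma) pairs are arbitrary illustrative values.

from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Compare cross-validated accuracy for a few (C, gamma) combinations
for C, gamma in [(0.1, 0.1), (1.0, 1.0), (100.0, 10.0)]:
    clf = SVC(kernel='rbf', C=C, gamma=gamma)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"C={C:>6}, gamma={gamma:>5}: CV accuracy = {score:.3f}")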
The first plot is a visualization of the decision function for a variety of parameter values on a simplified
classification problem involving only 2 input features and 2 possible target classes (binary
classification). Note that this kind of plot is not possible to do for problems with more features or target
classes.
The second plot is a heatmap of the classifier's cross-validation accuracy as a function of C and
gamma. For this example we explore a relatively large grid for illustration purposes. In practice, a
logarithmic grid from 10^-3 to 10^3 is usually sufficient. If the best parameters lie on the boundaries
of the grid, it can be extended in that direction in a subsequent search.
Note that the heat map plot has a special colorbar with a midpoint value close to the score values of the
best performing models so as to make it easy to tell them apart in the blink of an eye.
The behavior of the model is very sensitive to the gamma parameter. If gamma is too large, the radius
of the area of influence of the support vectors only includes the support vector itself and no amount of
regularization with C will be able to prevent overfitting.

When gamma is very small, the model is too constrained and cannot capture the complexity or “shape”
of the data. The region of influence of any selected support vector would include the whole training set.
The resulting model will behave similarly to a linear model with a set of hyperplanes that separate the
centers of high density of any pair of two classes.
For intermediate values, we can see on the second plot that good models can be found on a diagonal of
C and gamma. Smooth models (lower gamma values) can be made more complex by increasing the
importance of classifying each point correctly (larger C values) hence the diagonal of good performing
models.
Finally one can also observe that for some intermediate values of gamma we get equally performing
models when C becomes very large: it is not necessary to regularize by enforcing a larger margin. The
radius of the RBF kernel alone acts as a good structural regularizer. In practice though it might still be
interesting to simplify the decision function with a lower value of C so as to favor models that use less
memory and that are faster to predict.
We should also note that small differences in scores result from the random splits of the cross-
validation procedure. Those spurious variations can be smoothed out by increasing the number of CV
iterations n_splits at the expense of compute time. Increasing the number of C_range and
gamma_range steps will increase the resolution of the hyper-parameter heat map.
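A compact sketch of such a search, following the logarithmic-grid advice above; the dataset (iris), the grid bounds and n_splits are illustrative choices, not a recommendation.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

C_range = np.logspace(-3, 3, 7)         # logarithmic grid for C
gamma_range = np.logspace(-3, 3, 7)     # logarithmic grid for gamma
param_grid = {"C": C_range, "gamma": gamma_range}

cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
grid = GridSearchCV(SVC(kernel="rbf"), param_grid=param_grid, cv=cv)
grid.fit(X, y)

print("best parameters:", grid.best_params_)
print("best CV score: %.3f" % grid.best_score_)
# grid.cv_results_["mean_test_score"].reshape(len(C_range), len(gamma_range))
# is the matrix that the heat map in the linked example visualizes.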

The RBF kernel is a stationary kernel. It is also known as the "squared exponential" kernel. It is
parameterized by a length-scale parameter l > 0, which can either be a scalar (isotropic variant of the
kernel) or a vector with the same number of dimensions as the inputs (anisotropic variant of the
kernel). The kernel is given by:

k(x_i, x_j) = exp(-1/2 * d(x_i / length_scale, x_j / length_scale)^2)


This kernel is infinitely differentiable, which implies that GPs with this kernel as covariance function
have mean square derivatives of all orders, and are thus very smooth. The prior and posterior of a GP
resulting from an RBF kernel are shown in the following figure:
https://app.dominodatalab.com/u/fonnesbeck/gp_showdown/view/GP+Showdown.ipynb
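A short sketch with scikit-learn's RBF kernel, roughly in the spirit of that figure; the training data are synthetic placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 5.0, size=(8, 1))
y_train = np.sin(X_train).ravel() + 0.05 * rng.normal(size=8)

kernel = 1.0 * RBF(length_scale=1.0)          # isotropic: scalar length scale
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-3)

X_test = np.linspace(0.0, 5.0, 50).reshape(-1, 1)
prior_samples = gp.sample_y(X_test, n_samples=3)       # draws from the GP prior

gp.fit(X_train, y_train)
posterior_samples = gp.sample_y(X_test, n_samples=3)   # draws from the GP posterior
mean, std = gp.predict(X_test, return_std=True)
print("learned kernel:", gp.kernel_)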

Python users are incredibly lucky to have so many options for constructing and fitting non-parametric
regression and classification models. I've demonstrated the simplicity with which a GP model can be fit
to continuous-valued data using scikit-learn, and how to extend such models to more general
forms and more sophisticated fitting algorithms using either GPflow or PyMC3. Given the prevalence
of non-linear relationships among variables in so many settings, Gaussian processes should be present
in any applied statistician's toolkit. I often find myself, rather than building stand-alone GP models,
including them as components in a larger hierarchical model, in order to adequately account for non-
linear confounding variables such as age effects in biostatistical applications, or for function
approximation in reinforcement learning tasks.
This post is far from a complete survey of software tools for fitting Gaussian processes in Python. I
chose these three because of my own familiarity with them, and because they occupy different sweet
spots in the tradeoff between automation and flexibility. You can readily implement such models using
GPy, Stan, Edward and George, to name just a few of the more popular packages. I encourage you to
try a few of them to get an idea of which fits in to your data science workflow best.

scipy.interpolate.Rbf
class scipy.interpolate.Rbf(*args)

A class for radial basis function approximation/interpolation of n-dimensional scattered data.
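A small usage sketch of this class on synthetic scattered 2-D data (note that recent SciPy versions recommend RBFInterpolator as its replacement):

import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
y = rng.uniform(-1.0, 1.0, 40)
z = np.exp(-(x**2 + y**2))                     # values observed at scattered points

rbfi = Rbf(x, y, z, function='multiquadric')   # build the interpolant
XI, YI = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
ZI = rbfi(XI, YI)                              # evaluate on a regular grid
print(np.round(ZI, 3))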

You might also like