Contents
1 General
2 Error
3 Elementary and special functions
4 Numerical linear algebra
4.1 Basic concepts
4.2 Solving systems of linear equations
4.3 Eigenvalue algorithms
4.4 Other concepts and algorithms
5 Interpolation and approximation
5.1 Polynomial interpolation
5.2 Spline interpolation
5.3 Trigonometric interpolation
5.4 Other interpolants
5.5 Approximation theory
5.6 Miscellaneous
6 Finding roots of nonlinear equations
7 Optimization
7.1 Basic concepts
7.2 Linear programming
7.3 Convex optimization
7.4 Nonlinear programming
7.5 Optimal control and infinite-dimensional optimization
7.6 Uncertainty and randomness
7.7 Theoretical aspects
7.8 Applications
7.9 Miscellaneous
8 Numerical quadrature (integration)
9 Numerical methods for ordinary differential equations
10 Numerical methods for partial differential equations
10.1 Finite difference methods
10.2 Finite element methods
10.3 Other methods
10.4 Techniques for improving these methods
10.5 Grids and meshes
10.6 Analysis
11 Monte Carlo method
12 Applications
13 Software
General
Iterative method
Rate of convergence — the speed at which a convergent sequence approaches its limit
Order of accuracy — rate at which numerical solution of differential equation converges to exact
solution
Series acceleration — methods to accelerate the speed of convergence of a series
Aitken's delta-squared process — most useful for linearly converging sequences
Minimum polynomial extrapolation — for vector sequences
Richardson extrapolation
Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
Van Wijngaarden transformation — for accelerating the convergence of an alternating series
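To make the series-acceleration idea concrete, here is a minimal Python sketch of Aitken's delta-squared process applied to the slowly converging Leibniz series for π/4 (the helper name is illustrative):

```python
import math

def aitken(seq):
    """Aitken's delta-squared process: accelerate a linearly convergent sequence."""
    out = []
    for n in range(len(seq) - 2):
        x0, x1, x2 = seq[n], seq[n + 1], seq[n + 2]
        denom = x2 - 2 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / denom)
    return out

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4 converge slowly.
partial, s = [], 0.0
for k in range(20):
    s += (-1) ** k / (2 * k + 1)
    partial.append(s)
accelerated = aitken(partial)
```

The accelerated sequence gets within about 10⁻⁴ of π/4 using the same twenty terms.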
Abramowitz and Stegun — book containing formulas and tables of many special functions
Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
Curse of dimensionality
Local convergence and global convergence — whether you need a good initial guess to get convergence
Superconvergence
Discretization
Difference quotient
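Difference quotients and order of accuracy can be illustrated in a few lines: the forward quotient is first-order accurate in the step size h, the central quotient second-order (a sketch, with illustrative names):

```python
import math

def forward_diff(f, x, h):
    # first-order accurate difference quotient: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # second-order accurate central difference quotient
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)                     # derivative of sin at x = 1
err_forward = abs(forward_diff(math.sin, 1.0, 1e-4) - exact)
err_central = abs(central_diff(math.sin, 1.0, 1e-4) - exact)
```

With h = 10⁻⁴ the central quotient is several orders of magnitude more accurate.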
Complexity:
Computational complexity of mathematical operations
Smoothed analysis — measuring the expected performance of algorithms under slight random
perturbations of worst-case inputs
Symbolic-numeric computation — combination of symbolic and numeric methods
Cultural and historical aspects:
History of numerical solution of differential equations using computers
Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick
Trefethen in 2002
International Workshops on Lattice QCD and Numerical Analysis
Timeline of numerical analysis after 1945
General classes of methods:
Collocation method — discretizes a continuous equation by requiring it only to hold at certain
points
Level set method
Level set (data structures) — data structures for representing level sets
Sinc numerical methods — methods based on the sinc function, sinc(x) = sin(x) / x
ABS methods
Error
Error analysis
Approximation
Approximation error
Condition number
Discretization error
Floating point number
Guard digit — extra precision introduced during a computation to reduce round-off error
Truncation — rounding a floating-point number by discarding all digits after a certain digit
Round-off error
Numeric precision in Microsoft Excel
Arbitrary-precision arithmetic
Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the
unknown number between them
Interval contractor — maps interval to subinterval which still contains the unknown exact answer
Interval propagation — contracting interval domains without removing any value consistent with the
constraints
See also: Interval boundary element method, Interval finite element
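A toy sketch of interval arithmetic as described above (a real implementation would also round outward so the enclosure survives floating-point rounding; the class name is illustrative):

```python
class Interval:
    """Toy interval arithmetic: each value is enclosed by [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        # the product interval is bounded by the extreme endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
```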
Loss of significance
Numerical error
Numerical stability
Error propagation:
Propagation of uncertainty
List of uncertainty propagation software
Significance arithmetic
Residual (numerical analysis)
Relative change and difference — the relative difference between x and y is |x − y| / max(|x|, |y|)
Significant figures
False precision — giving more significant figures than appropriate
Truncation error — error committed by doing only a finite number of steps
Well-posed problem
Affine arithmetic
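Loss of significance, the most common of these error sources, can be shown with the quadratic formula: the textbook form cancels catastrophically for the small root when b² ≫ 4ac, while a standard rearrangement avoids it (a sketch; function names are illustrative):

```python
import math

def roots_naive(a, b, c):
    # textbook quadratic formula; the small root suffers catastrophic cancellation
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # compute the large-magnitude root first, recover the small one from c/q
    d = math.sqrt(b * b - 4 * a * c)
    q = -(b + math.copysign(d, b)) / 2
    return q / a, c / q

# x^2 + 1e8 x + 1 has roots near -1e8 and -1e-8
small_naive = roots_naive(1.0, 1e8, 1.0)[0]
small_stable = roots_stable(1.0, 1e8, 1.0)[1]
```

The stable version recovers the small root to full precision; the naive version loses most of its digits.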
Basic concepts
Types of matrices appearing in numerical analysis:
Sparse matrix
Band matrix
Bidiagonal matrix
Tridiagonal matrix
Pentadiagonal matrix
Skyline matrix
Circulant matrix
Triangular matrix
Diagonally dominant matrix
Block matrix — matrix composed of smaller matrices
Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to
handle)
Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly,
equal eigenvalues
Convergent matrix – square matrix whose successive powers approach the zero matrix
SAXPY — the operation z = ax + y where a is a scalar and x, y and z vectors
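The SAXPY operation is simple enough to state directly (a pure-Python sketch; BLAS libraries provide optimized versions):

```python
def saxpy(a, x, y):
    """z = a*x + y for a scalar a and equal-length vectors x, y."""
    return [a * xi + yi for xi, yi in zip(x, y)]
```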
Algorithms for matrix multiplication:
Strassen algorithm
Coppersmith–Winograd algorithm
Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d
grid
Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication
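Freivalds' algorithm is short enough to sketch: instead of recomputing A·B in O(n³), it checks A(Br) = Cr for random 0/1 vectors r, at O(n²) per trial (a sketch with illustrative names):

```python
import random

def freivalds(A, B, C, trials=64):
    """Randomized check of whether A @ B == C for square list-of-lists matrices.
    One-sided error: "False" is certain; "True" is wrong with prob. <= 2**-trials."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False            # witness found: products certainly differ
    return True
```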
Matrix decompositions:
LU decomposition — lower triangular times upper triangular
QR decomposition — orthogonal matrix times triangular matrix
RRQR factorization — rank-revealing QR factorization, can be used to compute rank of a
matrix
Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
Decompositions by similarity:
Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
Jordan normal form — bidiagonal matrix of a certain form; generalizes the
eigendecomposition
Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable
matrix
Schur decomposition — similarity transform bringing the matrix to a triangular matrix
Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix
Matrix splitting – expressing a given matrix as a sum or difference of matrices
Gaussian elimination
Row echelon form — matrix in which all entries below a nonzero entry are zero
Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has
integer entries
Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices
LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
Crout matrix decomposition
LU reduction — a special parallelized version of a LU decomposition algorithm
Block LU decomposition
Cholesky decomposition — for solving a system with a positive definite matrix
Minimum degree algorithm
Symbolic Cholesky decomposition
Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
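The tridiagonal matrix algorithm above (often called the Thomas algorithm) fits in a dozen lines: a forward elimination sweep followed by back substitution (a sketch; it omits pivoting, so the matrix should be diagonally dominant):

```python
def thomas(a, b, c, d):
    """Tridiagonal solver: a = sub-, b = main, c = super-diagonal, d = rhs.
    a[0] and c[-1] are ignored; no pivoting is performed."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Unlike full Gaussian elimination's O(n³), this runs in O(n).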
Direct methods for sparse matrices:
Frontal solver — used in finite element methods
Nested dissection — for symmetric matrices, based on graph partitioning
Levinson recursion — for Toeplitz matrices
SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
Cyclic reduction — eliminate even or odd rows or columns, repeat
Iterative methods:
Jacobi method
Gauss–Seidel method
Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
Backfitting algorithm — iterative procedure used to fit a generalized additive model, often
equivalent to Gauss–Seidel
Modified Richardson iteration
Conjugate gradient method (CG) — assumes that the matrix is positive definite
Derivation of the conjugate gradient method
Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
Biconjugate gradient method (BiCG)
Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better
convergence
Conjugate residual method — similar to CG but only assumes that the matrix is symmetric
Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
Chebyshev iteration — avoids inner products but needs bounds on the spectrum
Stone's method (SIP – Strongly Implicit Procedure) — uses an incomplete LU decomposition
Kaczmarz method
Preconditioner
Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
Incomplete LU factorization — sparse approximation to the LU factorization
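Of the iterative methods above, the Jacobi method is the simplest to sketch: each sweep solves every equation for its own unknown using the previous iterate (illustrative code, not a library API):

```python
def jacobi(A, b, sweeps=100):
    """Jacobi iteration for A x = b; converges when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Updating components in place instead of from the previous sweep gives the Gauss–Seidel method.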
Underdetermined and overdetermined systems (systems that have no or more than one solution):
Numerical computation of null space — find all solutions of an underdetermined system
Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined
systems) or smallest residual
Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as
possible)
Eigenvalue algorithms
Power iteration
Inverse iteration
Rayleigh quotient iteration
Arnoldi iteration — based on Krylov subspaces
Lanczos algorithm — Arnoldi iteration specialized for symmetric (Hermitian) matrices
QR algorithm
Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
Jacobi rotation — the building block, almost a Givens rotation
Jacobi method for complex Hermitian matrices
Divide-and-conquer eigenvalue algorithm
Folded spectrum method
LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
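Power iteration, the first method listed, is a few lines: repeatedly multiply by the matrix and rescale, then estimate the eigenvalue with a Rayleigh quotient (a sketch with illustrative names):

```python
def power_iteration(A, steps=100):
    """Power iteration: dominant eigenpair of a square matrix (largest |eigenvalue|)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(wi) for wi in w)       # rescale to avoid overflow
        v = [wi / norm for wi in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(vi * ai for vi, ai in zip(v, Av)) / sum(vi * vi for vi in v)
    return lam, v                             # Rayleigh-quotient eigenvalue estimate
```

Inverse iteration is the same loop applied to (A − σI)⁻¹, which converges to the eigenvalue nearest the shift σ.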
Orthogonalization algorithms:
Gram–Schmidt process
Householder transformation
Householder operator — analogue of Householder transformation for general inner product
spaces
Givens rotation
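A sketch of the Gram–Schmidt process in its modified form, which subtracts projections one at a time and is numerically more stable than the classical version (names are illustrative):

```python
def gram_schmidt(vectors):
    """Modified Gram-Schmidt orthonormalization of a list of vectors."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                       # subtract projections sequentially
            c = dot(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:                      # drop (numerically) dependent vectors
            basis.append([wi / norm for wi in w])
    return basis
```

Householder transformations and Givens rotations achieve the same orthogonalization with better stability for full QR factorizations.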
Krylov subspace
Block matrix pseudoinverse
Bidiagonalization
Cuthill–McKee algorithm — permutes rows/columns in sparse matrix to yield a narrow band matrix
In-place matrix transposition — computing the transpose of a matrix without using much additional
storage
Pivot element — entry in a matrix on which the algorithm concentrates
Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
Polynomial interpolation
Linear interpolation
Runge's phenomenon
Vandermonde matrix
Chebyshev polynomials
Chebyshev nodes
Lebesgue constant (interpolation)
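Chebyshev nodes have a closed form worth stating: they are projections of equispaced points on a circle, clustering near the interval's endpoints, which keeps the Lebesgue constant small and tames Runge's phenomenon (a sketch; the function name is illustrative):

```python
import math

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """n Chebyshev nodes on [a, b]: roots of the degree-n Chebyshev polynomial,
    mapped from [-1, 1] to [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]
```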
Different forms for the interpolant:
Newton polynomial
Divided differences
Neville's algorithm — for evaluating the interpolant; based on the Newton form
Lagrange polynomial
Bernstein polynomial — especially useful for approximation
Brahmagupta's interpolation formula — seventh-century formula for quadratic interpolation
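Of the forms above, the Lagrange form is the most direct to evaluate (a sketch; for repeated evaluation the barycentric variant is preferred):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange form of the polynomial interpolating (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi                       # build the i-th Lagrange basis polynomial
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```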
Extensions to multiple dimensions:
Bilinear interpolation
Trilinear interpolation
Bicubic interpolation
Tricubic interpolation
Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
Hermite interpolation
Birkhoff interpolation
Abel–Goncharov interpolation
Spline interpolation
Spline interpolation — interpolation by piecewise polynomials
Trigonometric interpolation
Other interpolants
Simple rational approximation
Polynomial and rational function modeling — comparison of polynomial and rational interpolation
Wavelet
Continuous wavelet
Transfer matrix
See also: List of functional analysis topics, List of wavelet-related transforms
Inverse distance weighting
Radial basis function (RBF) — a function of the form f(x) = φ(|x − x₀|)
Polyharmonic spline — a commonly used radial basis function
Thin plate spline — a specific polyharmonic spline: r² log r
Hierarchical RBF
Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant
Catmull–Clark subdivision surface
Doo–Sabin subdivision surface
Loop subdivision surface
Slerp (spherical linear interpolation) — interpolation between two points on a sphere
Generalized quaternion interpolation — generalizes slerp for interpolation between more than two
quaternions
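Slerp has a simple closed form, sin((1−t)θ)/sin θ · p + sin(tθ)/sin θ · q, where θ is the angle between the unit vectors p and q (a sketch; names are illustrative):

```python
import math

def slerp(p, q, t):
    """Spherical linear interpolation between unit vectors p and q, t in [0, 1]."""
    cos_theta = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(cos_theta)
    if theta < 1e-12:                 # nearly parallel: fall back to p
        return list(p)
    s = math.sin(theta)
    w_p = math.sin((1 - t) * theta) / s
    w_q = math.sin(t * theta) / s
    return [w_p * a + w_q * b for a, b in zip(p, q)]
```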
Irrational base discrete weighted transform
Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound
Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
Multivariate interpolation — the function being interpolated depends on more than one variable
Barnes interpolation — method for two-dimensional functions using Gaussians; common in meteorology
Coons surface — combination of linear interpolation and bilinear interpolation
Lanczos resampling — based on convolution with a sinc function
Natural neighbor interpolation
Nearest neighbor value interpolation
PDE surface
Transfinite interpolation — constructs function on planar domain given its values on the boundary
Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered
observations
Methods based on polynomials are listed under Polynomial interpolation
Approximation theory
Approximation theory
Orders of approximation
Lebesgue's lemma
Curve fitting
Vector field reconstruction
Modulus of continuity — measures smoothness of a function
Least squares (function approximation) — minimizes the error in the L2-norm
Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)
Equioscillation theorem — characterizes the best approximation in the L∞-norm
Unisolvent point set — function from given function space is determined uniquely by values on such a set
of points
Stone–Weierstrass theorem — continuous functions can be approximated uniformly by polynomials, or
certain other function spaces
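Least squares in the L2-norm reduces, for a straight-line fit, to a 2×2 system of normal equations that can be solved in closed form (a sketch; the function name is illustrative):

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a*x + b via the 2x2 normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # nonzero whenever the xs are not all equal
    a = (n * sxy - sx * sy) / det    # slope
    b = (sxx * sy - sx * sxy) / det  # intercept
    return a, b
```

For higher degrees or ill-conditioned data, QR-based solvers are preferred over explicit normal equations.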
Approximation by polynomials:
Linear approximation
Bernstein polynomial — basis of polynomials useful for approximating a function
Bernstein's constant — error when approximating |x| by a polynomial
Remez algorithm — for constructing the best polynomial approximation in the L∞-norm
Bernstein's inequality (mathematical analysis) — bound on maximum of derivative of polynomial in
unit disk
Mergelyan's theorem — generalization of Stone–Weierstrass theorem for polynomials
Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some
coefficients have to be zero
Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple
dimensions
Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure
Favard's theorem — polynomials satisfying suitable 3-term recurrence relations are orthogonal
polynomials
Approximation by Fourier series / trigonometric polynomials:
Jackson's inequality — upper bound for best approximation by a trigonometric polynomial
Bernstein's theorem (approximation theory) — a converse to Jackson's inequality
Fejér's theorem — Cesàro means of partial sums of Fourier series converge uniformly for
continuous periodic functions
Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of
Fourier coefficients
Different approximations:
Moving least squares
Padé approximant
Padé table — table of Padé approximants
Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational
functions on a set of Lebesgue measure zero
Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval
Szász–Mirakjan–Kantorovich operator
Baskakov operator — generalize Bernstein polynomials, Szász–Mirakyan operators, and Lupas
operators
Favard operator — approximation by sums of Gaussians
Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
Constructive function theory — field that studies connection between degree of approximation and
smoothness
Universal differential equation — differential–algebraic equation whose solutions can approximate any
continuous function
Fekete problem — find N points on a sphere that minimize some kind of energy
Carleman's condition — condition guaranteeing that a measure is uniquely determined by its moments
Krein's condition — condition that exponential sums are dense in weighted L2 space
Lethargy theorem — about distance of points in a metric space from members of a sequence of
subspaces
Wirtinger's representation and projection theorem
Journals:
Constructive Approximation
Journal of Approximation Theory
Miscellaneous
Extrapolation
Linear predictive analysis — linear extrapolation
Unisolvent functions — functions for which the interpolation problem has a unique solution
Regression analysis
Isotonic regression
Curve-fitting compaction
Interpolation (computer graphics)
Finding roots of nonlinear equations
General methods:
Bisection method — simple and robust; linear convergence
Lehmer–Schur algorithm — variant for complex functions
Fixed-point iteration
Newton's method — based on linear approximation around the current iterate; quadratic
convergence
Kantorovich theorem — gives a region around solution such that Newton's method
converges
Newton fractal — indicates which initial condition converges to which root under Newton
iteration
Quasi-Newton method — uses an approximation of the Jacobian:
Broyden's method — uses a rank-one update for the Jacobian
Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one
update of the Jacobian
Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix
remains positive definite
BFGS method — rank-two update of the Jacobian in which the matrix remains
positive definite
Limited-memory BFGS method — truncated, matrix-free variant of BFGS method
suitable for large problems
Steffensen's method — uses divided differences instead of the derivative
Secant method — based on linear interpolation at last two iterates
False position method — secant method with ideas from the bisection method
Muller's method — based on quadratic interpolation at last three iterates
Sidi's generalized secant method — higher-order variants of secant method
Inverse quadratic interpolation — similar to Muller's method, but interpolates the inverse
Brent's method — combines bisection method, secant method and inverse quadratic interpolation
Ridders' method — fits a linear function times an exponential to last two iterates and their midpoint
Halley's method — uses f, f' and f''; achieves cubic convergence
Householder's method — uses first d derivatives to achieve order d + 1; generalizes Newton's and
Halley's method
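Newton's method, the workhorse of this list, fits in a few lines: linearize at the current iterate and solve for the next one (a sketch with illustrative names):

```python
def newton(f, fprime, x0, tol=1e-12, max_steps=50):
    """Newton's method: quadratic convergence near a simple root,
    but it needs a good starting point and a derivative."""
    x = x0
    for _ in range(max_steps):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

Replacing fprime with a divided difference of the last two iterates gives the secant method.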
Methods for polynomials:
Aberth method
Bairstow's method
Durand–Kerner method
Graeffe's method
Jenkins–Traub algorithm — fast, reliable, and widely used
Laguerre's method
Splitting circle method
Analysis:
Wilkinson's polynomial
Numerical continuation — tracking a root as one parameter in the equation changes
Piecewise linear continuation
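For contrast with Newton's fast local convergence, the bisection method listed above trades speed for a guarantee (a sketch; the function name is illustrative):

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Bisection: repeatedly halve a sign-changing bracket [a, b].
    Only linear convergence, but guaranteed for continuous f."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:     # sign change in left half: keep [a, m]
            b = m
        else:                  # otherwise keep [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)
```

Hybrid methods such as Brent's use bisection as a safety net around faster interpolation steps.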
Optimization
Mathematical optimization — algorithms for finding maxima or minima of a given function
Basic concepts
Active set
Candidate solution
Constraint (mathematics)
Binary constraint — a constraint that involves exactly two variables
Corner solution
Global optimum and Local optimum
Maxima and minima
Slack variable
Continuous optimization
Discrete optimization
Linear programming
Linear programming (also treats integer programming) — objective function and constraints are linear
Convex optimization
Convex optimization
Quadratic programming
Linear least squares (mathematics)
Total least squares
Frank–Wolfe algorithm
Sequential minimal optimization — breaks up large QP problems into a series of smallest possible
QP problems
Bilinear program
Basis pursuit — minimize L1-norm of vector subject to linear constraints
Basis pursuit denoising (BPDN) — regularized version of basis pursuit
In-crowd algorithm — algorithm for solving basis pursuit denoising
Linear matrix inequality
Conic optimization
Semidefinite programming
Second-order cone programming
Sum-of-squares optimization
Quadratic programming (see above)
Bregman method — row-action method for strictly convex optimization problems
Subgradient method — extension of steepest descent for problems with a nondifferentiable objective
function
Nonlinear programming
Nonlinear programming — the most general optimization problem in the usual framework
Optimal control
Infinite-dimensional optimization
Semi-infinite programming — infinite number of variables and finite number of constraints, or the other way around
Shape optimization, Topology optimization — optimization over a set of regions
Topological derivative — derivative with respect to a change in the shape
Generalized semi-infinite programming — finite number of variables, infinite number of constraints
Theoretical aspects
Convex analysis — study of convex functions, i.e. f such that f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) for t ∈ [0,1]
Pseudoconvex function — function f such that ∇f · (y − x) ≥ 0 implies f(y) ≥ f(x)
Quasiconvex function — function f such that f(tx + (1 − t)y) ≤ max(f(x), f(y)) for t ∈ [0,1]
Subderivative
Geodesic convexity — convexity for functions defined on a Riemannian manifold
Duality (optimization)
Weak duality — dual solution gives a bound on the primal solution
Strong duality — primal and dual solutions are equivalent
Shadow price
Dual cone and polar cone
Duality gap — difference between primal and dual solution
Fenchel's duality theorem — relates minimization problems with maximization problems of convex
conjugates
Perturbation function — any function which relates to primal and dual problems
Slater's condition — sufficient condition for strong duality to hold in a convex optimization problem
Total dual integrality — concept of duality for integer linear programming
Wolfe duality — for when objective function and constraints are differentiable
Farkas' lemma
Karush–Kuhn–Tucker conditions (KKT) — first-order necessary conditions for a solution to be optimal; also sufficient for convex problems
Fritz John conditions — variant of KKT conditions
Lagrange multiplier
Lagrange multipliers on Banach spaces
Semi-continuity
Complementarity theory — study of problems with constraints of the form ⟨u, v⟩ = 0
Mixed complementarity problem
Mixed linear complementarity problem
Lemke's algorithm — method for solving (mixed) linear complementarity problems
Danskin's theorem — used in the analysis of minimax problems
Maximum theorem — the maximum and maximizer are continuous as function of parameters, under some
conditions
No free lunch in search and optimization
Relaxation (approximation) — approximating a given problem by an easier problem by relaxing some
constraints
Lagrangian relaxation
Linear programming relaxation — ignoring the integrality constraints in a linear programming
problem
Self-concordant function
Reduced cost — cost for increasing a variable by a small amount
Hardness of approximation — computational complexity of getting an approximate solution
Applications
In geometry:
Geometric median — the point minimizing the sum of distances to a given set of points
Chebyshev center — the centre of the smallest ball containing a given set of points
In statistics:
Iterated conditional modes — maximizing joint probability of Markov random field
Response surface methodology — used in the design of experiments
Automatic label placement
Compressed sensing — reconstruct a signal from knowledge that it is sparse or compressible
Cutting stock problem
Demand optimization
Destination dispatch — an optimization technique for dispatching elevators
Energy minimization
Entropy maximization
Highly optimized tolerance
Hyperparameter optimization
Inventory control problem
Newsvendor model
Extended newsvendor model
Linear programming decoding
Linear search problem — find a point on a line by moving along the line
Low-rank approximation — find best approximation, constraint is that rank of some matrix is smaller than
a given number
Meta-optimization — optimization of the parameters in an optimization method
Multidisciplinary design optimization
Paper bag problem
Process optimization
Recursive economics — individuals make a series of two-period optimization decisions over time
Stigler diet
Space allocation problem
Stress majorization
Trajectory optimization
Transportation theory
Wing-shape optimization
Miscellaneous
Combinatorial optimization
Dynamic programming
Bellman equation
Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation
Backward induction — solving dynamic programming problems by reasoning backwards in time
Optimal stopping — choosing the optimal time to take a particular action
Odds algorithm
Robbins' problem
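The Bellman equation and backward induction listed above can be sketched on a toy problem: a three-state chain where each non-terminal state chooses between "stay" (reward 0) and "advance" (reward 1), with discount factor γ (illustrative code, not from any library):

```python
def value_iteration(gamma=0.9, sweeps=200):
    """Solve a Bellman optimality equation on a toy 3-state chain by value iteration.
    States 0 and 1 choose between staying (reward 0) and advancing (reward 1);
    state 2 is absorbing with value 0."""
    V = [0.0, 0.0, 0.0]
    for _ in range(sweeps):
        V_new = V[:]
        for s in (0, 1):
            stay = 0.0 + gamma * V[s]
            advance = 1.0 + gamma * V[s + 1]
            V_new[s] = max(stay, advance)   # Bellman optimality update
        V = V_new
    return V
```

Here the fixed point is V = [1 + γ, 1, 0]: advancing is always optimal.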
Global optimization:
BRST algorithm
MCS algorithm
Multi-objective optimization — there are multiple conflicting objectives
Benson's algorithm — for linear vector optimization problems
Bilevel program — problem in which one problem is embedded in another
Optimal substructure
Dykstra's projection algorithm — finds a point in intersection of two convex sets
Algorithmic concepts:
Barrier function
Penalty method
Trust region
Test functions for optimization:
Rosenbrock function — two-dimensional function with a banana-shaped valley
Himmelblau's function — two-dimensional with four local minima, defined by f(x, y) = (x² + y − 11)² + (x + y² − 7)²
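Both test functions are easy to code up for exercising an optimizer (a sketch using their standard definitions):

```python
def rosenbrock(x, y):
    # banana-shaped valley; global minimum value 0 at (1, 1)
    return (1.0 - x) ** 2 + 100.0 * (y - x * x) ** 2

def himmelblau(x, y):
    # four local minima, all with value 0; (3, 2) is one of them
    return (x * x + y - 11.0) ** 2 + (x + y * y - 7.0) ** 2
```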
Finite element methods
Finite element method in structural mechanics — a physical approach to finite element methods
Galerkin method — a finite element method in which the residual is orthogonal to the finite element space
Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not
continuous
Rayleigh–Ritz method — a finite element method based on variational principles
Spectral element method — high-order finite element methods
hp-FEM — variant in which both the size and the order of the elements are automatically adapted
Examples of finite elements:
Bilinear quadrilateral element — also known as the Q4 element
Constant strain triangle element (CST) — also known as the T3 element
Barsoum elements
Direct stiffness method — a particular implementation of the finite element method, often used in
structural analysis
Trefftz method
Finite element updating
Extended finite element method — puts functions tailored to the problem in the approximation space
Functionally graded elements — elements for describing functionally graded materials
Superelement — particular grouping of finite elements, employed as a single element
Interval finite element method — combination of finite elements with interval arithmetic
Discrete exterior calculus — discrete form of the exterior calculus of differential geometry
Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations
Céa's lemma — the solution in the finite-element space is an almost-best approximation of the true solution in that space
Patch test (finite elements) — simple test for the quality of a finite element
MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at
Brunel University
NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering
analysis
Multiphase topology optimisation — technique based on finite elements for determining optimal
composition of a mixture
Interval finite element
Applied element method — for simulation of cracks and structural collapse
Wood–Armer method — structural analysis method based on finite elements used to design
reinforcement for concrete slabs
Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools
Stiffness matrix — finite-dimensional analogue of differential operator
Combination with meshfree methods:
Weakened weak form — form of a PDE that is weaker than the standard weak form
G space — functional space used in formulating the weakened weak form
Smoothed finite element method
List of finite element software packages
Other methods
Analysis
Applications
Computational physics
Computational electromagnetics
Computational fluid dynamics (CFD)
Large eddy simulation
Smoothed-particle hydrodynamics
Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple
emitter types
Stochastic Eulerian Lagrangian method — uses Eulerian description for fluids and
Lagrangian for structures
Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids
Climate model
Numerical weather prediction
Geodesic grid
Celestial mechanics
Numerical model of the Solar System
Dynamic Design Analysis Method (DDAM) — for evaluating effect of underwater explosions on
equipment
Computational chemistry
Cell lists
Coupled cluster
Density functional theory
DIIS — direct inversion in (or of) the iterative subspace
Computational sociology
Computational statistics
Software
For software, see the list of numerical analysis software.