Reliable Methods for Computer Simulation
Pekka Neittaanmäki and Sergey R. Repin



Pekka Neittaanmäki and Sergey Repin

Jyväskylä–Saint-Petersburg, October 2003

Recent decades have seen very rapid progress in the development of numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, whose main goal is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems:

(a) generate a sequence of approximations that converges to a solution,

(b) verify the accuracy of these approximations.

A computer code for such a method must consist of two corresponding blocks: a solver and a checker.
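The solver/checker structure can be sketched as follows. This is a hypothetical illustration (not code from the book): the solver generates a sequence of approximations with a guaranteed error bound, and the checker accepts a result only after verifying that bound. All names here (`solver`, `checker`, `reliable_solve`) are illustrative.

```python
# Hypothetical sketch of the solver/checker structure described above:
# the solver generates approximations, the checker certifies their accuracy.

def solver(f, a, b):
    """Yield successive bisection approximations to a root of f in [a, b],
    together with a guaranteed error bound (the half-width of the bracket)."""
    while True:
        mid = 0.5 * (a + b)
        yield mid, 0.5 * (b - a)          # approximation and guaranteed bound
        if f(a) * f(mid) <= 0:            # root remains in [a, mid]
            b = mid
        else:                             # root remains in [mid, b]
            a = mid

def checker(error_bound, tol):
    """Accept an approximation only if its guaranteed error bound meets tol."""
    return error_bound <= tol

def reliable_solve(f, a, b, tol=1e-10):
    for x, bound in solver(f, a, b):
        if checker(bound, tol):
            return x

root = reliable_solve(lambda x: x * x - 2.0, 0.0, 2.0)
print(abs(root - 2.0 ** 0.5) < 1e-9)  # the certified bound holds
```

Bisection is chosen only because its error bound is trivially guaranteed; the point is the separation of approximation from verification, not the particular solver.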

An intensive investigation of problem (a) (developing correct and efficient numerical methods) started in the middle of the 20th century. At present, there is a vast amount of literature devoted to this subject. For this reason, in Chapter 2 we recall only some principal results; readers can find a detailed exposition of these questions in the literature cited. This chapter also collects the mathematical background used in subsequent parts.

In this book, we are chiefly concerned with problem (b) and try to present the main approaches developed for a posteriori error estimation in various problems.

The material of the book is organized as follows. Chapter 3 is devoted to an analysis of two-sided a posteriori error estimates for iteration methods. Here, the derivation of estimates is based upon the Banach fixed point theorem. In Chapter 4, we outline the methods of a posteriori error estimation used for finite element approximations.

Subsequent parts of the book focus on a posteriori error estimates of the functional type (also called Duality Error Majorants) for differential equations. They provide computable bounds of errors for all types of conforming approximations. Originally, these estimates were derived by methods of duality theory in the modern calculus of variations. For the convenience of readers, we expose all the necessary mathematical background in Chapter 5. A special but important case of linear elliptic problems is considered in Chapter 6. Here, we present the respective theory and also expose a series of numerical results intended to demonstrate the performance of Duality Error Majorants. Chapter 7 is concerned with nonlinear variational problems. It gives a general approach to a posteriori error estimation for variational problems with uniformly convex functionals. In Chapter 8, we apply this theory to an important class of variational problems arising in the theory of variational inequalities.

The authors try to retain a rigorous mathematical style; however, proofs are constructive whenever possible, and additional mathematical knowledge is presented when necessary. The book contains a number of new mathematical results and surveys a posteriori error estimation methods that have been developed very recently.

Some parts of the text have been used in lectures at the State Technical University of St. Petersburg, the University of Jyväskylä, and the University of Houston. We hope that the book will be useful for advanced specialists, as well as for students and PhD students specializing in applied mathematics and scientific computing.

Chapter 1


1.1 Sources of errors affecting the reliability of numerical solutions

In mathematical modeling, a physical object (or process) is analyzed by means of a certain mathematical model. Let U be a physical value that characterizes this process and u be the respective value obtained from the mathematical model. Then the quantity

    ε₁ = |U − u|

is the error of the mathematical model. Here, |⋅| is understood in a broad sense. In a particular problem, it may denote the absolute value of the difference or a norm in a suitable functional space. For example, deformations of solid bodies are often described by linear elasticity theory. In this case, U is a vector-valued function of the displacements that arise in a body under the action of given forces, u is its counterpart given by the elasticity model, and ε₁ represents the error of the above mathematical model.

In the great majority of mathematical models arising in physics, mechanics, biology, and other sciences one cannot directly obtain exact (analytic) solutions. The reason for this is that adequate models of complicated processes usually lead to systems of partial differential equations or to coupled systems that may also include algebraic relations, integral equations, and additional conditions. An exact solution of such a mathematical problem is usually understood in a rather abstract sense, as an element of a certain functional space. Properties of such a solution may be investigated by purely mathematical methods; however, quantitative analysis inevitably leads to the necessity of approximating u by a sequence of simple (e.g., piecewise polynomial) functions. Thus, a sequence of approximate problems associated with the original one arises. Let uₕ denote a solution of such an approximate problem defined on a mesh of characteristic size h. Then, the difference between u and uₕ gives the approximation error

    ε₂ = |u − uₕ|

The quantity ε₂ corresponds to the errors that arise when differential equations are approximated by systems of algebraic equations (e.g., in the finite element or Ritz–Galerkin methods).

However, finite-dimensional problems are also solved approximately, so that instead of uₕ one actually obtains some computed approximation ūₕ. The quantity

    ε₃ = |uₕ − ūₕ|

shows the error of the numerical algorithm performed on a concrete computer. This error includes

• roundoff errors,

• errors arising in iteration processes forcibly stopped by some stopping criteria,

• errors caused by possible defects in computer codes.

It is worth noting that the detection of errors of the latter type may be very difficult and often causes serious trouble in the development of computer codes. Therefore, a method able to verify the numerical results can significantly increase the reliability of a computer program.
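The second error source above (an iteration forcibly stopped by a stopping criterion) can be made concrete with a small hypothetical illustration, not taken from the book: the gap between the returned iterate and the limit of the iteration is exactly a contribution to the algorithmic error ε₃.

```python
import math

# Hypothetical illustration: an iteration stopped by a tolerance criterion
# leaves a residual algorithmic error of the kind described above.

def approx_sqrt(a, tol=1e-6):
    """Heron's iteration for sqrt(a), forcibly stopped by a stopping criterion."""
    x = a
    while abs(x * x - a) > tol:       # stopping criterion, not exactness
        x = 0.5 * (x + a / x)
    return x

x = approx_sqrt(2.0)
algorithmic_error = abs(x - math.sqrt(2.0))
print(0.0 < algorithmic_error < 1e-6)   # small but nonzero
```

The error is invisible to a user who only sees the returned value, which is why a checker that bounds it is valuable.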

All that we have said above is schematically presented in Fig. 1.1.1. We see that in place of the desired U we obtain the computed approximation ūₕ. The difference between them meets the principal inequality

    |U − ūₕ| ≤ ε₁ + ε₂ + ε₃,

so that the three errors ε₁, ε₂, and ε₃ together provide a bound for the deviation of the computed approximation from U.

Figure 1.1.1 Errors arising in the mathematical modeling.

Another principal inequality is

    ε₁ ≤ |U − ūₕ| + ε₂ + ε₃,

which makes it possible to judge the quality of the mathematical model itself, provided that |U − ūₕ| can be measured and ε₂ and ε₃ are controlled.

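Both principal inequalities follow from the triangle inequality applied to the error chain U − ūₕ = (U − u) + (u − uₕ) + (uₕ − ūₕ); a sketch of the derivation, using the notation introduced above:

```latex
% Triangle inequality applied to the chain
% U - \bar{u}_h = (U - u) + (u - u_h) + (u_h - \bar{u}_h):
\begin{align*}
|U - \bar{u}_h| &\le |U - u| + |u - u_h| + |u_h - \bar{u}_h|
  = \varepsilon_1 + \varepsilon_2 + \varepsilon_3, \\
\varepsilon_1 = |U - u| &\le |U - \bar{u}_h| + |u_h - \bar{u}_h| + |u - u_h|
  = |U - \bar{u}_h| + \varepsilon_3 + \varepsilon_2.
\end{align*}
```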
Thus, the two major problems of mathematical modeling, namely,

• reliable computer simulation,

• numerical verification of mathematical models,

require efficient methods for estimating the errors ε₂ and ε₃. Below, we focus our attention on this problem.

1.2 The main approaches to error estimation

Many practitioners consider the errors ε₂ and ε₃ too small to be taken into account. In general, such a concept does not lead to reliable numerical results (see, e.g., [58], Chapter 11, where it is discussed that approximate solutions of complicated boundary-value problems are often mesh-dependent, so that restructuring of a mesh may lead to a different result). Strictly speaking, approximate solutions presented without an error control can be viewed only as preliminary ones.

Nowadays, there are two main approaches to error estimation associated with a priori and a posteriori error estimates.

1.2.1 A priori error estimates

These estimates are intended to estimate the behavior of errors a priori, i.e., before computations. For example, consider a boundary-value problem in the abstract form Au = f. Assume that it is approximated by a discrete problem Aₕuₕ = fₕ posed on a mesh of characteristic size h. Let D denote the set of given data associated with our boundary-value problem, i.e., D includes properties of the domain, the coefficients, and the boundary conditions. Assume that one has found a functional M depending on Dₕ, h, and the exact solution u such that

    ||u − uₕ|| ≤ M(Dₕ, h, u),    (1.2.1)

where ||⋅|| is a suitable (e.g., energy) norm. If uₕ tends to u as h → 0, then a consistent a priori error majorant M must meet the condition

    M(Dₕ, h, u) → 0 as h → 0.    (1.2.2)

If, in addition,

    M(Dₕ, h, u) ≤ C(D, u) h^β,    (1.2.3)

where C(D, u) is a positive constant and β > 0, then we say that a rate convergence estimate is obtained.

It is worth noting that the right-hand side of (1.2.1) includes the unknown exact solution u. For this reason, a priori error estimates have mainly a theoretical meaning: they justify the asymptotic behavior of an error. This justification is certainly of profound importance for all numerical methods: it shows that the approximations used are correct in principle.

The estimate (1.2.3) establishes a qualified convergence estimate. As a rule, it can be derived only if the exact solution u possesses additional regularity properties and belongs to a certain subset of the natural energy space.

At present, the theory of a priori convergence estimates is well studied (see, e.g., [53], [204]).
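The meaning of a rate estimate of the form (1.2.3) can be illustrated numerically. The following sketch, a hypothetical illustration not taken from the book, measures the observed rate β of a central finite-difference approximation of a derivative, for which classical theory predicts β = 2.

```python
import math

# Hypothetical illustration of a rate estimate M <= C * h^beta:
# the central difference (f(x+h) - f(x-h)) / (2h) approximates f'(x)
# with an error of order h^2, i.e., beta = 2.

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

f, df = math.sin, math.cos
x = 1.0
hs = [0.1, 0.05, 0.025, 0.0125]
errors = [abs(central_diff(f, x, h) - df(x)) for h in hs]

# Observed rate beta from successive mesh halvings: err(h) ~ C * h^beta
# implies log2(err(h) / err(h/2)) ~ beta.
rates = [math.log(errors[i] / errors[i + 1]) / math.log(2.0)
         for i in range(len(errors) - 1)]
print(all(abs(r - 2.0) < 0.1 for r in rates))  # observed rates close to beta = 2
```

The same halving experiment is a standard sanity check for any discretization whose theoretical convergence rate is known.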

1.2.2 A posteriori error estimates

A posteriori error estimates use computed solutions instead of the exact ones. Thus, an a posteriori counterpart of the estimate (1.2.1) is

    ||u − uₕ|| ≤ M(Dₕ, uₕ),

in which the majorant M depends on the approximate solution uₕ rather than on the unknown exact solution u.
In this book, we consider a posteriori error estimates for iteration and finite element methods and analyze a new class of estimates (a posteriori error estimates of the functional type) that can be applied for various variational problems.

A posteriori error estimates for iteration methods follow from the classical Banach fixed point theorem. They are considered in Chapter 3.
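For a contraction T with factor q < 1, the fixed point theorem yields the well-known computable bound |x* − xₙ| ≤ q/(1 − q) |xₙ − xₙ₋₁|. The sketch below, a hypothetical illustration in the spirit of Chapter 3 and not code from the book, checks this bound for a simple scalar contraction.

```python
import math

# A posteriori bound from the Banach fixed point theorem:
# |x* - x_n| <= q / (1 - q) * |x_n - x_{n-1}|  for a contraction with factor q.

def iterate_with_bound(T, x0, q, steps):
    """Run x_{n+1} = T(x_n) and return the last iterate with its a posteriori bound."""
    x_prev, x = x0, T(x0)
    for _ in range(steps - 1):
        x_prev, x = x, T(x)
    bound = q / (1.0 - q) * abs(x - x_prev)
    return x, bound

# T(x) = cos(x) is a contraction on an interval containing its fixed point,
# with Lipschitz constant q = sin(1) < 1 there.
T = math.cos
q = math.sin(1.0)
x, bound = iterate_with_bound(T, 0.5, q, steps=30)

# Reference fixed point obtained by many more iterations.
x_star = 0.5
for _ in range(1000):
    x_star = T(x_star)

print(abs(x - x_star) <= bound)  # the a posteriori bound indeed holds
```

Unlike an a priori estimate, the bound is evaluated from computed quantities only (two consecutive iterates and q), which is exactly the property a checker needs.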

Historically, a posteriori error estimates for finite element approximations arose as a necessary tool for mesh-adaptive methods, in which a new (refined) mesh is constructed by analyzing an approximate solution computed on a coarse mesh. These estimates were mainly used as error indicators intended to detect subdomains with excessively high errors. Two widely used types of such estimators are based either on evaluating the negative norm of the residual or on analyzing the difference between the computed solution and its post-processed image, usually obtained by various gradient averaging methods. The first method takes its origin from the papers of I. Babuška and W. C. Rheinboldt [15,16]. Later it was investigated and extended by many authors (see, e.g., the books [3,18,213]). Residual type error estimates have the following common