
Analytical and Numerical Methods for Volterra Equations

SIAM Studies in Applied Mathematics

JOHN A. NOHEL, Managing Editor

This series of monographs focuses on mathematics and its applications to problems of current concern to industry, government, and society. These monographs will be of interest to applied mathematicians, numerical analysts, statisticians, engineers, and scientists who have an active need to learn useful methodology for problem solving. The first six titles in this series are: Lie-Bäcklund Transformations in Applications, by Robert L. Anderson and Nail H. Ibragimov; Methods and Applications of Interval Analysis, by Ramon E. Moore; Ill-Posed Problems for Integrodifferential Equations in Mechanics and Electromagnetic Theory, by Frederick Bloom; Solitons and the Inverse Scattering Transform, by Mark J. Ablowitz and Harvey Segur; Fourier Analysis of Numerical Approximations of Hyperbolic Equations, by Robert Vichnevetsky and John B. Bowles; and Numerical Solution of Elliptic Problems, by Garrett Birkhoff and Robert E. Lynch.

Peter Linz

Analytical and Numerical Methods for Volterra Equations

SIAM

Philadelphia/1985

Copyright © 1985 by Society for Industrial and Applied Mathematics. All rights reserved.
Library of Congress Catalog Card Number: 84-51968
ISBN: 0-89871-198-3

To Susan


Contents

Preface

Part A: Theory

Chapter 1. Introduction
1.1. Classification of Volterra equations
1.2. Connection between Volterra equations and initial value problems
Notes on Chapter 1

Chapter 2. Some Applications of Volterra Equations
2.1. History-dependent problems
2.2. Applications in systems theory
2.3. Problems in heat conduction and diffusion
2.4. Some problems in experimental inference
Notes on Chapter 2

Chapter 3. Linear Volterra Equations of the Second Kind
3.1. Successive approximations for Lipschitz continuous kernels
3.2. Existence and uniqueness for more general kernels
3.3. The resolvent kernel
3.4. Some qualitative properties of the solution
3.5. Equations with unbounded kernels
3.6. Systems of equations
3.7. Integrodifferential equations
Notes on Chapter 3

Chapter 4. Nonlinear Equations of the Second Kind
4.1. Existence and uniqueness of the solution
4.2. Unbounded kernels and systems of equations
4.3. Properties of the solution
4.4. The resolvent equation
Notes on Chapter 4

Chapter 5. Equations of the First Kind
5.1. Equations with smooth kernels
5.2. Abel equations
Notes on Chapter 5

Chapter 6. Convolution Equations
6.1. Some simple kernels
6.2. Laplace transforms
6.3. Solution methods using Laplace transforms
6.4. The asymptotic behavior of the solution for some special equations
Notes on Chapter 6

Part B: Numerical Methods

Chapter 7. The Numerical Solution of Equations of the Second Kind
7.1. A simple numerical procedure
7.2. Error analysis: convergence of the approximate solution
7.3. Methods based on more accurate numerical integration
7.4. Error estimates and numerical stability
7.5. Another view of stability
7.6. Block-by-block methods
7.7. Some numerical examples
7.8. Explicit Runge-Kutta methods
7.9. A summary of related ideas and methods
Notes on Chapter 7

Chapter 8. Product Integration Methods for Equations of the Second Kind
8.1. Product integration
8.2. A simple method for a specific example
8.3. A method based on the product trapezoidal rule
8.4. A block-by-block method based on quadratic interpolation
8.5. A convergence proof for product integration methods
Notes on Chapter 8

Chapter 9. Equations of the First Kind with Differentiable Kernels
9.1. Application of simple integration rules
9.2. Error analysis for simple approximation methods
9.3. Difficulties with higher order methods
9.4. Block-by-block methods
9.5. Use of the differentiated form
9.6. Nonlinear equations
9.7. Some practical considerations
Notes on Chapter 9

Chapter 10. Equations of the Abel Type
10.1. Solving a simple Abel equation
10.2. The midpoint and trapezoidal methods for general Abel equations
10.3. Block-by-block methods
10.4. Some remarks on error analysis
10.5. Solving Abel equations in the presence of experimental errors
Notes on Chapter 10

Chapter 11. Integrodifferential Equations
11.1. A simple numerical method
11.2. Linear multistep methods
11.3. Block-by-block methods
11.4. Numerical stability
11.5. Other types of integrodifferential and functional equations
Notes on Chapter 11

Chapter 12. Some Computer Programs
12.1. The trapezoidal method for systems of the second kind
12.2. The midpoint method for systems of the first kind
12.3. The product trapezoidal method for a system of the second kind
12.4. The product midpoint method for a system of the first kind
Notes on Chapter 12

Chapter 13. Case Studies
13.1. An example from polymer rheology
13.2. Estimating errors in the approximation
13.3. Solving an equation of the first kind in the presence of large data errors
Notes on Chapter 13

References
Supplementary Bibliography
Index

Preface

Starting with the work of Abel in the 1820's, analysts have had a continuing interest in integral equations. The names of many modern mathematicians, notably Cauchy, Fredholm, Hilbert, Volterra, and others, are associated with this topic. Integral equation techniques are well known to classical analysts, and many elegant and powerful results were developed by them. In mathematical modeling, however, the traditional emphasis has been on differential equations. Analytical and numerical techniques for differential equations are widely known by many who are essentially unfamiliar with integral equation techniques; amongst applied mathematicians, engineers, and numerical analysts a working knowledge of integral equations is less common. Numerical analysts have followed suit: a brief glance at standard numerical analysis texts will confirm that numerical methods for integral equations are rarely mentioned.

Recently, this situation has begun to change, and the study of numerical methods for integral equations has become a topic of considerable interest. There are basically two reasons for this interest. In some cases, integral equations are the natural mathematical model for representing a physically interesting situation, as in the work of Abel on tautochrone curves. As mathematical models become more realistic, differential equations can no longer represent the physically essential properties, as in problems with delayed action, and integral terms have to be introduced; integral equations may then become unavoidable. The second, and perhaps more common, reason is that integral operators, transforms, and equations are convenient tools for studying differential equations. In some cases it is known that integral equations provide a convenient and practically useful alternative to differential equations. This has led to the currently active field of boundary integral equation methods, which has attracted many engineers. Our aim in this book is to present one aspect of this recent activity in integral equations, namely, methods for the solution of Volterra equations.

Arising primarily in connection with history-dependent problems, such equations have been studied since the work of Volterra on population dynamics. Some analytical techniques for studying the properties of the solution are known. These are important not only for gaining insight into the qualitative behavior of the solution, but are also essential in the design of effective numerical methods. The major points of these analytical methods are presented in the first part of the book. Since there are few known analytical methods leading to closed-form solutions, however, our emphasis will be on numerical techniques, and the second part of the book is devoted entirely to numerical methods.

Much of the work on the numerical solution of Volterra equations was carried out in the twenty-year period between 1960 and 1980. Before 1960, only sporadic attempts were made to develop numerical algorithms. A systematic attack on the whole problem started with the work of Pouzet in about 1960. Although later research greatly extended and improved Pouzet's work, its merit as a catalyst is considerable. By 1980, most of the major principles were well understood and the study had attained some maturity. The list of references at the end of the book shows this, with most entries dating from this period. Still, there is a considerable continuing interest in this field, although now special problems rather than a general theory have become the focus of attention. Extensions to partial integrodifferential equations or other functional equations, as well as questions of numerical stability, are the concern of most of the very recent papers. The supplementary bibliography, dealing with work published after 1982, reflects this current interest.

The audience for which this book is intended is a practical one with an immediate need to solve real-world problems. With this in mind, I have chosen the simplest possible setting for the discussion: the space of real functions of real variables. As a result, a good grasp of calculus is sufficient for understanding almost all of the given material. No attempt has been made to extend the results to more general function spaces, although this is certainly possible. In many cases (here as well as in other areas), generalizations of known theorems are often technically tedious but conceptually straightforward; I prefer to leave this to the interested reader. I have used examples (sometimes very simple) where I felt that the general discussion could be illuminated by a specific instance. I have also included a number of exercises with a two-fold purpose. First, they can serve as a test of the reader's understanding; they are usually simple enough to give no trouble to those who have firmly grasped the preceding discussion. The second purpose of the exercises is to allow me to present extensions without having to give proofs or other lengthy discussions.

My interest in and understanding of this field developed over a long period of time, starting with my Ph.D. dissertation in 1968. During the intervening years I have talked about this topic with most of the researchers active in the field. My thanks go to all of them for their contributions and many stimulating discussions.

Finally, I would like to acknowledge a debt which goes well beyond the usual one incurred by authors. It is almost certain that without the continued interest and help of Ben Noble this book would never have been written. Ben introduced me to Volterra equations when, as a new graduate student in 1965, I was searching for an appropriate thesis topic. His advice and direction were largely responsible for my initial contributions to the field. The writing of this book was started by a suggestion from him and was originally conceived as a joint effort. When Ben had to withdraw from the project later, he graciously consented to let me use the already significant amount of work he had done. While in the end I wrote all of the book, its underlying structure as well as some of the specific approaches were very much influenced by this early work. Thus, although his name does not appear on the title page of the book, the contribution of Ben Noble to this work is considerable. I am also indebted to the reviewers of the manuscript, amongst them C. T. H. Baker, J. M. Bownds, and A. Gerasoulis, for comments which were valuable in the preparation of the final version.

PETER LINZ
Davis, California
July 1984


PART A: THEORY .


Chapter 1
Introduction

An integral equation is an equation in which the unknown, generally a function of one or more variables, occurs under an integral sign. This rather general definition allows for many different specific forms, and in practice many distinct types arise.

1.1. Classification of Volterra equations. In the classical theory of integral equations one distinguishes between Fredholm equations and Volterra equations. In a Fredholm equation the region of integration is fixed, whereas in a Volterra equation the region is variable. The distinction between Fredholm and Volterra equations is analogous to the distinction between boundary and initial value problems in ordinary differential equations. Volterra equations can be considered a generalization of initial value problems, and they frequently occur in connection with time-dependent or evolutionary systems. However, this is not always the case, as is shown by examples in later chapters.

An equation of the form

    f(t) = g(t) + ∫_a^t K(t, s, f(s)) ds    (1.1)

is a Volterra equation of the second kind. Here the unknown is f(s); the right-hand side g(t) and the kernel K(t, s, u) are assumed to be known, usually depending in some simple fashion on the independent variables.

Some integral equations are called singular, although the usage of this term varies. Some authors call an equation singular if the integrals cannot be interpreted in the usual way (i.e., in the Riemann or Lebesgue sense), but must be considered as principal value integrals. Others use the term singular to denote any equation in which some of the integrands are unbounded; equations which have unbounded but integrable integrands are then called weakly singular. We will not be interested in principal value equations here and will use the adjective singular in its second sense.

Equation (1.1) is one of several forms in which a Volterra equation can be written.
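Although numerical methods are the subject of Part B, a small computational sketch may help fix the second-kind form. The following is an illustration of my own, not from the text: the test equation, step size, and function names are arbitrary choices. It applies the trapezoidal scheme analyzed in Chapter 7 to f(t) = 1 + ∫_0^t f(s) ds, whose exact solution is e^t.

```python
import math

def solve_second_kind(g, k, T, n):
    """Trapezoidal solver for f(t) = g(t) + int_0^t k(t, s) f(s) ds."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    f = [g(t[0])]                       # the integral vanishes at t = 0
    for i in range(1, n + 1):
        # trapezoidal sum over the already-computed history f[0..i-1]
        acc = 0.5 * k(t[i], t[0]) * f[0]
        for j in range(1, i):
            acc += k(t[i], t[j]) * f[j]
        # the unknown f[i] enters with weight (h/2) k(t_i, t_i); solve for it
        f.append((g(t[i]) + h * acc) / (1.0 - 0.5 * h * k(t[i], t[i])))
    return t, f

# Test problem: f(t) = 1 + int_0^t f(s) ds, exact solution f(t) = e^t.
t, f = solve_second_kind(lambda x: 1.0, lambda x, s: 1.0, T=1.0, n=200)
```

At each step the integral over the known history is approximated by the trapezoidal rule and the single new unknown value is solved for algebraically; this step-by-step character is exactly what distinguishes Volterra from Fredholm computations.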

More generally, one might consider forms in which the limits of integration are themselves functions of t, but we will limit our attention to the more common form (1.1). For notational simplicity we can, without loss of generality, choose the range of the independent variable so that the lower limit is zero, and consider only the equation

    f(t) = g(t) + ∫_0^t K(t, s, f(s)) ds.    (1.3)

In our subsequent discussion, whenever the domain of the equation is unspecified, we will assume it to be 0 ≤ t ≤ T < ∞. For our purposes we assume that T is finite; in numerical computations it is necessary in any case to use a finite T. In many practical applications the behavior of the solution on the whole real axis is of interest. In this situation the limiting behavior of the solution is usually found from its behavior for large, but finite, T.

Of special interest is the linear case, in which

    K(t, s, u) = k(t, s) u.

Linearity somewhat simplifies the treatment of the equation, although when the nonlinearity is suitably restricted it introduces few essential complications.

There are many applications where the kernel of the equation is unbounded, that is, where the equation is (in our terminology) singular. Where possible, we will write the kernels of such equations as

    K(t, s, f(s)) = p(t, s) H(t, s, f(s)),

where p(t, s) represents the singular part; H is chosen so that H(t, s, f(s)) is bounded.

A fundamentally different kind of equation is the Volterra equation of the first kind

    ∫_0^t K(t, s, f(s)) ds = g(t).

Although formally one can often reduce such an equation to one of the second kind (e.g., by differentiation), we will see in subsequent discussions that equations of the first kind present some serious practical difficulties.

Historically, one of the earliest integral equations to be studied was Abel's equation

    ∫_0^t f(s) / √(t − s) ds = g(t).
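The reduction of a first-kind equation to the second kind by differentiation can be made concrete in a small sketch of my own (the kernel e^{t−s} and the data are arbitrary choices, not from the book). Since d/dt ∫_0^t e^{t−s} f(s) ds = f(t) + ∫_0^t e^{t−s} f(s) ds, the first-kind problem ∫_0^t e^{t−s} f(s) ds = e^t − 1, whose solution is f ≡ 1, becomes a second-kind equation with right-hand side e^t and kernel −e^{t−s}:

```python
import math

def solve_second_kind(g, k, T, n):
    """Trapezoidal solver for f(t) = g(t) + int_0^t k(t, s) f(s) ds."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    f = [g(t[0])]
    for i in range(1, n + 1):
        acc = 0.5 * k(t[i], t[0]) * f[0]
        for j in range(1, i):
            acc += k(t[i], t[j]) * f[j]
        f.append((g(t[i]) + h * acc) / (1.0 - 0.5 * h * k(t[i], t[i])))
    return t, f

# First-kind equation: int_0^t e^(t-s) f(s) ds = e^t - 1, so f = 1 exactly.
# Differentiating once gives the second-kind equation
#     f(t) = e^t - int_0^t e^(t-s) f(s) ds.
t, f = solve_second_kind(lambda x: math.exp(x),
                         lambda x, s: -math.exp(x - s), T=1.0, n=200)
```

The computed values stay close to the exact solution f ≡ 1; the practical difficulties alluded to above appear when g is known only approximately, since differentiation amplifies data errors.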

Abel's equation is an example of a singular equation of the first kind. Nowadays it is fairly common practice to call the equation

    ∫_0^t h(t, s) p(t, s) f(s) ds = g(t),

with h(t, s) bounded and p(t, s) unbounded (but restricted to guarantee existence and uniqueness of the solution), a generalized Abel equation.

Formally, one can immediately extend the classification to systems of equations by interpreting f, K and g as vectors. Thus (1.3) becomes

    f(t) = g(t) + ∫_0^t K(t, s, f(s)) ds,    (1.8)

where

    f(t) = (f_1(t), …, f_n(t))^T,    g(t) = (g_1(t), …, g_n(t))^T,

and

    K(t, s, u) = (K_1(t, s, u), …, K_n(t, s, u))^T.

Equation (1.8) is then a system of the second kind. We will use this observation in later chapters.

Volterra integrodifferential equations involve derivatives of the unknown as well as integral terms. The presence of both derivatives and integrals allows for a profusion of different forms, but there does not exist any commonly used convention for classifying them. Fortunately, most of the equations arising in practice have a fairly simple form and can usually be reduced to integral equations. For the moment, we only illustrate this with some examples.

Example 1.1. Consider the integrodifferential equation

    f'(t) = G(t, f(t)) + ∫_0^t K(t, s, f(s)) ds,    f(0) = f_0.    (1.11)

Introducing the new function

    φ(t) = f'(t),

we get

    f(t) = f_0 + ∫_0^t φ(s) ds,
    φ(t) = G(t, f(t)) + ∫_0^t K(t, s, f(s)) ds,

so that (1.11) is equivalent to a system of the second kind for the pair (f, φ).

Example 1.2. For certain types of linear integrodifferential equations, the reduction can be made directly by integration. Consider, for instance, the linear equation

    f'(t) = a(t) f(t) + ∫_0^t k(t, s) f(s) ds + g(t),

with f(0) = f_0. Integrating this we get

    f(t) = f_0 + ∫_0^t a(s) f(s) ds + ∫_0^t ∫_0^u k(u, s) f(s) ds du + ∫_0^t g(s) ds.    (1.15)

We can then interchange the order of integration in (1.15), using elementary calculus arguments (Fig. 1.1), to obtain

    f(t) = G(t) + ∫_0^t [ a(s) + ∫_s^t k(u, s) du ] f(s) ds,

where

    G(t) = f_0 + ∫_0^t g(s) ds.

[Fig. 1.1. The triangular region of integration used in interchanging the order of integration.]
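The interchange-of-order reduction of Example 1.2 can be checked numerically on a toy case of my own choosing: for f'(t) = ∫_0^t f(s) ds with f(0) = 1 (that is, a = 0, k = 1, g = 0), the reduced second-kind equation is f(t) = 1 + ∫_0^t (t − s) f(s) ds, and since this is equivalent to f'' = f with f(0) = 1, f'(0) = 0, the exact solution is cosh t. The discretization below is again the trapezoidal scheme of Chapter 7.

```python
import math

def solve_second_kind(g, k, T, n):
    """Trapezoidal solver for f(t) = g(t) + int_0^t k(t, s) f(s) ds."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    f = [g(t[0])]
    for i in range(1, n + 1):
        acc = 0.5 * k(t[i], t[0]) * f[0]
        for j in range(1, i):
            acc += k(t[i], t[j]) * f[j]
        f.append((g(t[i]) + h * acc) / (1.0 - 0.5 * h * k(t[i], t[i])))
    return t, f

# Reduced form of f'(t) = int_0^t f(s) ds, f(0) = 1:
#     f(t) = 1 + int_0^t (t - s) f(s) ds,  exact solution cosh(t).
t, f = solve_second_kind(lambda x: 1.0, lambda x, s: x - s, T=1.0, n=200)
```

Note that the diagonal kernel value k(t, t) = 0 here, so the implicit step is trivial; the computed f(1) agrees with cosh(1) to several decimal places.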

Such arguments can also be used to reduce integrodifferential equations of higher order to integral equations.

Exercise 1.1. Reduce the equation (1.19) to a system of Volterra equations.

Exercise 1.2. Use direct integration to reduce (1.19) to a single Volterra equation.

1.2. Connection between Volterra equations and initial value problems. Integral equations are used extensively in the study of the properties of differential equations. The most elementary observation is that the differential equation

    y'(t) = F(t, y(t)),    y(0) = y_0,    (1.20)

can be converted by integration into the Volterra equation

    y(t) = y_0 + ∫_0^t F(s, y(s)) ds.    (1.21)

This is often the starting point for the exploration of the qualitative properties of the solution of (1.20), its existence and uniqueness. Our concern here is not the conversion of differential to integral equations, but the converse. When the kernel of the integral equation has certain special properties, it is possible to find an equivalent system of ordinary differential equations. In extremely simple examples, this gives a way for finding closed form solutions to Volterra equations.

As the most elementary case consider

    f(t) = g(t) + ∫_0^t e^{t−s} f(s) ds.    (1.22)

Differentiating both sides, we get

    f'(t) = g'(t) + f(t) + ∫_0^t e^{t−s} f(s) ds.

Substituting for the integral term from (1.22) gives

    f'(t) = g'(t) − g(t) + 2f(t).

From (1.22), the initial value for this differential equation is

    f(0) = g(0).

This example is a special case of a linear equation with kernel

    k(t, s) = P(t) Q(s),

with P(t) ≠ 0 in [0, T]. If we substitute this form into (1.3), we have the equation

    f(t) = g(t) + P(t) ∫_0^t Q(s) f(s) ds.

If we divide by P(t) and introduce the variable u(t) = f(t)/P(t), the equation becomes

    u(t) = g(t)/P(t) + ∫_0^t P(s) Q(s) u(s) ds.

Differentiation then yields the equation

    u'(t) = d/dt [ g(t)/P(t) ] + P(t) Q(t) u(t),    (1.26)

with

    u(0) = g(0)/P(0).

The elementary integrating factor method immediately gives the solution of (1.26) as

    u(t) = (g(0)/P(0)) exp( ∫_0^t P(τ) Q(τ) dτ ) + ∫_0^t exp( ∫_s^t P(τ) Q(τ) dτ ) d/ds [ g(s)/P(s) ] ds.

This can further be simplified by integration by parts to

    u(t) = g(t)/P(t) + ∫_0^t P(s) Q(s) exp( ∫_s^t P(τ) Q(τ) dτ ) (g(s)/P(s)) ds,

and the solution

    f(t) = g(t) + P(t) ∫_0^t Q(s) exp( ∫_s^t P(τ) Q(τ) dτ ) g(s) ds.

This result can be generalized to kernels of the form

    k(t, s) = Σ_{i=1}^n P_i(t) Q_i(s),

which are usually referred to as degenerate or finite-rank kernels.

THEOREM 1.1. Let

    k(t, s) = Σ_{i=1}^n P_i(t) Q_i(s).

Assume that the P_i(t), the Q_i(t) and a given function g(t) are continuous in [0, T]. Then the linear equation

    f(t) = g(t) + ∫_0^t k(t, s) f(s) ds    (1.29)

has a solution

    f(t) = g(t) + Σ_{i=1}^n P_i(t) y_i(t),    (1.30)

where the y_i(t) are the solution of the system

    y_i'(t) = Q_i(t) [ g(t) + Σ_{j=1}^n P_j(t) y_j(t) ],    i = 1, 2, …, n,    (1.31)

subject to the initial conditions

    y_i(0) = 0,    i = 1, 2, …, n.    (1.32)

Proof. Let y_i(t) be the solution of (1.31) subject to the initial conditions (1.32). Because of the assumptions made, the y_i(t) exist and are continuous. Then, integrating (1.31) and using (1.32) gives

    y_i(t) = ∫_0^t Q_i(s) [ g(s) + Σ_{j=1}^n P_j(s) y_j(s) ] ds.

Now define u(t) as

    u(t) = g(t) + Σ_{i=1}^n P_i(t) y_i(t).

Then

    u(t) = g(t) + Σ_{i=1}^n P_i(t) ∫_0^t Q_i(s) u(s) ds = g(t) + ∫_0^t k(t, s) u(s) ds,

showing that u(t) satisfies (1.29). Since u(t) has the form (1.30), the proof is completed.

In Chapter 3 it will be shown that, under the stated assumptions, equation (1.29) has a unique solution. Therefore the solution of the system (1.31), together with (1.30), gives the unique solution of a linear Volterra equation of the second kind with a degenerate kernel. The result of Theorem 1.1 can be extended to certain nonlinear equations, as is indicated by the following exercise.

Exercise 1.3. Show that, under the appropriate assumptions, the equation

    f(t) = g(t) + Σ_{i=1}^n P_i(t) ∫_0^t Q_i(s) G(f(s)) ds

has a solution

    f(t) = g(t) + Σ_{i=1}^n P_i(t) y_i(t),

with the y_i(t) satisfying

    y_i'(t) = Q_i(t) G( g(t) + Σ_{j=1}^n P_j(t) y_j(t) ),    y_i(0) = 0.

Notes on Chapter 1. There are many texts on integral equations, but most treat Volterra equations only briefly. To some extent this is because Volterra equations can be considered a special case of Fredholm equations: a Volterra equation can be written as a Fredholm equation if we set k(t, s) = 0 for s > t. The classical Fredholm theory therefore also applies to Volterra equations, but loses much of its power because the kernel is not symmetric. A direct study of Volterra equations yields many results which cannot be obtained with the Fredholm theory. Some books which make more than a passing reference to Volterra equations are Cochran [68], Corduneanu [70], Davis [75], Kowalewski [155], Mikhlin [187], Pogorzelski [207], Tricomi [229], Volterra [232], and

Yosida [247]. Tsalyuk [230] contains many references on analytical and numerical work, with special emphasis on the Russian literature.

The equivalence between Volterra equations with degenerate kernels and systems of differential equations is mentioned by Cochran [68], who attributes the observation to Goursat. On the whole, the reduction seems to have been ignored until it was used by Bownds and his coworkers to construct methods for the approximate solution of Volterra equations (see for example [34], [35]). A more detailed discussion can be found in [30].
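The construction of Theorem 1.1 suggests a practical solution method: integrate the differential system (1.31)-(1.32) with any standard ODE solver and recover f from (1.30). A sketch of mine with the simplest possible degenerate kernel (n = 1, P = Q = g = 1; all choices arbitrary), for which (1.29) reads f(t) = 1 + ∫_0^t f(s) ds with exact solution e^t:

```python
import math

# For P = Q = g = 1, the system (1.31)-(1.32) is the single ODE
#     y'(t) = 1 + y(t),  y(0) = 0,
# and (1.30) recovers the solution as f(t) = 1 + y(t).

def rk4(deriv, y0, T, n):
    """Classical fourth-order Runge-Kutta integration of y' = deriv(t, y)."""
    h, y, t = T / n, y0, 0.0
    for _ in range(n):
        k1 = deriv(t, y)
        k2 = deriv(t + h / 2, y + h / 2 * k1)
        k3 = deriv(t + h / 2, y + h / 2 * k2)
        k4 = deriv(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

y = rk4(lambda t, y: 1.0 + y, 0.0, T=1.0, n=100)   # y(1) = e - 1 analytically
f1 = 1.0 + y                                        # f(1) from (1.30)
```

This is essentially the idea exploited by Bownds and his coworkers: once the kernel is (approximately) degenerate, the whole apparatus of ODE software becomes available.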


Chapter 2
Some Applications of Volterra Equations

Before setting out on a detailed investigation of Volterra equations, we briefly look at some actual applications where such equations arise. Our aim here will be to develop some understanding of the usefulness of Volterra integral equations without undue attention to detail. Consequently, the discussion will be somewhat intuitive and at times simplified, with mathematical arguments that are quite informal. Nevertheless, the examples given are representative of a variety of more complicated applications.

Volterra equations arise most naturally in certain types of time-dependent problems whose behavior at time t depends not only on the state at that time, but also on the states at previous times. Such models are sometimes called history dependent, or systems with memory. The solution of the differential equation

    y'(t) = F(t, y(t))

is completely determined for t > t_0 if y(t_0) is known. This is true for any t_0; information prior to t = t_0 is irrelevant to the solution after t_0. There are, however, situations where knowledge of the current state alone is not enough, and where it is necessary to know how the state y(t_0) is arrived at in order to predict the future. If the history dependence can be represented by a term

    ∫_{t_0}^t K(t, s, y(s)) ds,

then the modeling equation is of Volterra type. Some elementary examples of such systems are discussed in §§ 2.1 and 2.2.

Integral equations also find their use in applications where the more obvious model is a differential equation. There may be several advantages to reducing (when possible) a differential equation to an integral equation. From a theoretical point of

view, integral operators are more easily dealt with than differential operators, and properties of the solution may be more readily inferred from the integral form. The simplest and best known example, already mentioned in § 1.2, is the reduction of the ordinary differential equation (1.20) to the integral equation (1.21). There may also be some practical advantages. In some cases the integral equation reduces the dimensionality: certain partial differential equations in two variables can be shown to be equivalent to integral equations in one variable, thereby considerably simplifying the numerical computations. Several examples of this are given in § 2.3.

Finally, integral equations arise in some situations where experimental observations yield not the variable of interest but rather some integrals thereof. To compute the actual variable then requires the solution of integral equations. Section 2.4 contains some examples of this type.

2.1. History-dependent problems. One of the best known examples of this type is the so-called renewal equation. Consider a component of some machine which is subject to failure as time passes. In general, the failure time is a random variable characterized by a probability density p(t) such that, in the small interval (t, t + Δt), the probability of failure of a component which is new at t' is

    p(t − t') Δt.

If every component eventually fails, then p must satisfy

    ∫_0^∞ p(t) dt = 1.

Assume now that as soon as the component fails it is replaced by a new one. This new component will be replaced when it fails, and so on. Of practical interest is the renewal density h(t), which measures the probability for the need of a replacement. It is defined so that the probability that a renewal has to be made in the interval (t, t + Δt) is given by h(t) Δt. The probability for a needed replacement is the sum of (a) the probability that the first failure occurs in (t, t + Δt), and (b) the probability that a renewal was made at time t', followed by another failure after t − t' time units. If all contributions are added and the limit Δt → 0 taken, we get the equation

    h(t) = p(t) + ∫_0^t p(t − t') h(t') dt'.

This is the renewal equation. It is a Volterra equation of the second kind of a particularly simple form.
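A quick discretization of the renewal equation (a sketch of my own; the trapezoidal weights and the exponential density are arbitrary choices) confirms a classical fact: for the exponential failure density p(t) = λ e^{−λt}, the renewal density is the constant h(t) = λ, so replacements are needed at a steady rate.

```python
import math

def renewal_density(p, T, n):
    """Trapezoidal discretization of h(t) = p(t) + int_0^t p(t-s) h(s) ds."""
    dt = T / n
    t = [i * dt for i in range(n + 1)]
    h = [p(0.0)]                        # h(0) = p(0), the integral vanishes
    for i in range(1, n + 1):
        acc = 0.5 * p(t[i] - t[0]) * h[0]
        for j in range(1, i):
            acc += p(t[i] - t[j]) * h[j]
        # the new value h[i] appears with weight (dt/2) p(0); solve for it
        h.append((p(t[i]) + dt * acc) / (1.0 - 0.5 * dt * p(0.0)))
    return t, h

lam = 2.0
t, h = renewal_density(lambda x: lam * math.exp(-lam * x), T=3.0, n=600)
```

One can substitute h(t) = λ into the equation to verify the limit analytically: λ e^{−λt} + λ(1 − e^{−λt}) = λ.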

The study of Volterra equations originated with the work of V. Volterra on population dynamics. The classical, elementary population equation is

    N'(t) = a N(t).    (2.5)

Here N(t) stands for the number of individuals of a population alive at time t. Equation (2.5) states the assumption that the rate of change of the population, that is, the number of births and deaths at time t, depends only on the number of individuals alive at time t. The constant a represents the difference between the birth and death rates. Thus, if a > 0 the population grows exponentially; if a < 0 it will die out at an exponential rate.

The simple assumptions leading to equation (2.5) are rarely realistic, and more complicated models have to be constructed to account for observed phenomena. For example, the environment in which the population exists may change due to factors such as pollution or exhaustion of the food supply. The state of the environment at time t in turn depends on the past history of the population, since this population contributes to the change in the environment. In such a case, it becomes plausible to use, instead of a constant a, a variable growth factor incorporating a history-dependent term. Equation (2.5) then becomes the Volterra integrodifferential equation

    N'(t) = [ a − ∫_0^t k(t − s) N(s) ds ] N(t).    (2.7)

The equation actually used by Volterra in the original study was not (2.7), but an equation of the form

    N'(t) = [ a − a_1 N(t) − ∫_0^t k(t − s) N(s) ds ] N(t).

The additional term −a_1 N²(t) was introduced to account for the competition between individuals in the population. This term, which has a negative coefficient, tends to inhibit the growth of the population.
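A crude Euler simulation of the population model (entirely my own sketch; all parameters are arbitrary) gives a feel for these equations. With the hereditary kernel switched off (k = 0) the model reduces to the logistic equation, whose solution approaches the known limit a/a_1, which provides a simple correctness check; a nonzero k would inhibit growth further.

```python
def simulate(a, a1, k, T, n):
    """Euler stepping of N'(t) = N(t)(a - a1*N(t) - int_0^t k(t-s)N(s)ds).
    The memory integral is approximated by a rectangle sum."""
    dt = T / n
    N = [0.1]                            # initial population (arbitrary)
    for i in range(n):
        hist = dt * sum(k(dt * (i - j)) * N[j] for j in range(i + 1))
        N.append(N[-1] + dt * N[-1] * (a - a1 * N[-1] - hist))
    return N

# With k = 0 this is the logistic model: N(t) -> a/a1 = 1 as t grows.
N = simulate(a=1.0, a1=1.0, k=lambda x: 0.0, T=20.0, n=2000)
```

Note the cost pattern typical of Volterra computations: the memory term makes each step depend on the entire computed history, so a naive simulation is O(n²).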

2.2. Applications in systems theory. The term system is used rather generally to denote a physical or otherwise observable entity operating according to certain principles. Basically, we think of a system as a process which transforms a given input into a proper response or output. The concept is quite broad and includes a great variety of complex phenomena. Some simple examples are mechanical systems, electric circuits, and computer systems.

From the mathematical point of view a system can be characterized by three elements: a linear space X of all possible inputs, a linear space Y of outputs, and a transformation T which maps elements of X into Y. Formally, a system can then be described by the equation

    g = Tf.    (2.9)

Schematically, such a system is depicted in Fig. 2.1.

[Fig. 2.1. Schema of a simple system.]

This description is clearly so general and all-inclusive that little of interest can be done with it. To proceed we must be more specific. Systems can be classified as linear or nonlinear. A linear system is one in which the superposition principle holds; in other words, T satisfies

    T(αf_1 + βf_2) = α T(f_1) + β T(f_2)

for all elements of X. Linear systems are considerably more tractable than nonlinear ones. Although the setting is still quite general, we can now show that for many such systems, particularly those for which Volterra equations are important, T can be represented as an integral operator of the Volterra type.

First of all, we choose as X and Y the space of functions of a single real variable t. We interpret t as "time"; in many applications the independent variable is indeed the elapsed time measured from some chosen reference time t = 0. We shall therefore consider the system (2.9) in this setting. To see how T can be represented, we introduce the unit-impulse function δ(t − t_1). This impulse function has value one during a small time period of length Δ centered at t_1; everywhere else it is zero. If an input A δ(t − t_1) is put through the system then, because of the linearity, the response must be proportional to A and we can write

    response = A m(t, t_1),

where m(t, t_1) is the unit-response function, which characterizes the response of the system at time t to a unit impulse applied at time t_1. Furthermore, in time-dependent systems one expects that the response cannot be felt before the signal is applied. The general principle of causality can be expressed by requiring that

    m(t, τ) = 0 for t < τ.    (2.13)

Consider now what happens when a given input f(t) is applied during time 0 ≤ t ≤ A. If we pick N large enough and let Δ = A/N, t_i = (i − ½)Δ, then f(t) can be closely approximated by

    f(t) ≈ Σ_{i=1}^N f(t_i) δ(t − t_i),

so that

    g(t) ≈ Σ_{i=1}^N f(t_i) m(t, t_i).

If we now take the limit Δ → 0, N → ∞ then, provided the limits exist, we see that

    g(t) = ∫_0^A m(t, τ) f(τ) dτ = ∫_0^t m(t, τ) f(τ) dτ,

where the second step uses the causality condition (2.13). These somewhat intuitive considerations then make it plausible that a general linear causal system can be described by the Volterra integral equation

    g(t) = ∫_0^t m(t, τ) f(τ) dτ,

where f(t) is the input to the system and g(t) the resulting output.

A further simplification can be achieved by considering systems whose underlying structure does not change with time. In such a system we expect that the response at time t to a signal applied at time τ depends only on the elapsed time t − τ between the input signal and the output response. In this case m(t, τ) = m(t − τ). This is called a linear time-invariant causal system and is described by an equation of the form

    g(t) = ∫_0^t m(t − τ) f(τ) dτ.    (2.15)

An equation such as this, for which the kernel depends only on the difference t − τ, is called a convolution equation. The treatment of convolution equations is often much easier than that of the general case.
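The input-output relation (2.15) is just a convolution and is easy to evaluate numerically. The sketch below is my own (trapezoidal rule, arbitrary test data): with m ≡ 1 the system is a pure integrator, so a constant input f ≡ 1 must produce the response g(t) = t.

```python
def response(m, f, T, n):
    """Trapezoidal evaluation of g(t) = int_0^t m(t - tau) f(tau) dtau."""
    dt = T / n
    t = [i * dt for i in range(n + 1)]
    g = [0.0]                            # no signal has accumulated at t = 0
    for i in range(1, n + 1):
        acc = 0.5 * (m(t[i]) * f(0.0) + m(0.0) * f(t[i]))
        for j in range(1, i):
            acc += m(t[i] - t[j]) * f(t[j])
        g.append(dt * acc)
    return t, g

# Integrator test: m = 1, f = 1  =>  g(t) = t.
t, g = response(lambda x: 1.0, lambda x: 1.0, T=2.0, n=100)
```

For long time intervals this direct evaluation costs O(n²); the convolution structure is what makes Laplace-transform methods (Chapter 6) and FFT-based evaluation attractive for such kernels.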

Starting with these quite general observations, we can now identify some subclasses of problems in system theory. In system identification the properties of the system are assumed to be unknown and are to be inferred from observations; that is, the unit-response function m is to be determined from a known correspondence between an input f(t) and an output g(t). If in (2.15) the transformation of variables σ = t − τ is made, then the equation becomes

    g(t) = ∫_0^t m(σ) f(t − σ) dσ,

which is a Volterra equation of the first kind for the unknown m(σ).

A type of system of considerable practical interest is generated by connecting two simpler systems. Feedback systems are constructed by "feeding back" the output, suitably transformed, to the system. A schematic representation of a simple feedback system is given in Fig. 2.2.

[Fig. 2.2. Schema of a feedback system.]

The time-invariant case is particularly simple. If we restrict ourselves to linear, time-invariant systems, then the situation depicted in Fig. 2.2 can be described by the set of equations

    u(t) = f(t) − ∫_0^t m_2(t − τ) g(τ) dτ,    (2.17)

    g(t) = ∫_0^t m_1(t − τ) u(τ) dτ,    (2.18)

where m_1(t) and m_2(t) are the respective unit-response functions for T_1 and T_2. If the main interest is in the response g(t), the unknown u(t) can be eliminated by substituting (2.17) into (2.18), giving

    g(t) = ∫_0^t m_1(t − τ) f(τ) dτ − ∫_0^t m_1(t − τ) ∫_0^τ m_2(τ − s) g(s) ds dτ.

Interchanging orders of integration, we obtain

the equation for the response of the feedback system,

    g(t) = ∫_0^t m_1(t − τ) f(τ) dτ − ∫_0^t k(t, τ) g(τ) dτ,    (2.20)

where

    k(t, τ) = ∫_τ^t m_1(t − s) m_2(s − τ) ds.    (2.21)

If the properties of the two simpler systems, that is, their unit-response functions m_1(t) and m_2(t), and the input f are known, then (2.20) is a Volterra equation of the second kind for g(t). In fact, k(t, τ) defined by (2.21) is a function of t − τ only, so that the equation is of convolution type.

Exercise 2.1. Show that k(t, τ) defined by (2.21) is a function only of t − τ.
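Exercise 2.1 can also be checked numerically (a sketch of mine; the choices of m_1, m_2 and the quadrature are arbitrary): evaluating k(t, τ) for two pairs with the same difference t − τ gives the same value, exactly as the substitution σ = s − τ in (2.21) predicts.

```python
import math

def k(t, tau, m1, m2, n=2000):
    """Trapezoidal evaluation of k(t,tau) = int_tau^t m1(t-s) m2(s-tau) ds."""
    h = (t - tau) / n
    acc = 0.5 * (m1(t - tau) * m2(0.0) + m1(0.0) * m2(t - tau))
    for j in range(1, n):
        s = tau + j * h
        acc += m1(t - s) * m2(s - tau)
    return h * acc

m1 = lambda x: math.exp(-x)
m2 = lambda x: math.cos(x)
a = k(1.5, 0.5, m1, m2)   # t - tau = 1 in both cases
b = k(3.0, 2.0, m1, m2)
```

For this particular pair the value can also be found in closed form, k = (cos 1 + sin 1)/2 − 1/(2e) ≈ 0.5070, which the quadrature reproduces.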

we obtain Integrating the left-hand side by parts twice.31) subject to the initial condition (2. from (2.28).26).32) has an elementary solution Applying now the inversion formula (2. yields or Also. If we apply the Fourier cosine transformation to equation (2.20 CHAPTER 2 subject to the conditions with the further assumption that.30) and the prescribed conditions at infinity. we have The inner integral can be explicitly evaluated as . and using (2. for all t.29) The differential equation (2.
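The elementary solution of the transformed heat equation leads, after inversion, to the familiar Gaussian heat kernel. As a sanity check (the test point is an arbitrary choice of our own), one can verify by finite differences that u(x, t) = exp(−x²/4t)/√(4πt) satisfies uₜ = uₓₓ:

```python
import math

# Finite-difference check that the heat kernel
#   u(x, t) = exp(-x^2 / (4 t)) / sqrt(4 pi t)
# satisfies the heat equation u_t = u_xx.  The evaluation point
# (x, t) = (0.3, 0.5) is an arbitrary illustrative choice.

def u(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

x, t, d = 0.3, 0.5, 1e-4
u_t = (u(x, t + d) - u(x, t - d)) / (2.0 * d)          # central difference in t
u_xx = (u(x + d, t) - 2.0 * u(x, t) + u(x - d, t)) / (d * d)  # second difference in x
residual = abs(u_t - u_xx)
```

The residual combines O(d²) truncation and rounding error and comes out many orders of magnitude below the size of the derivatives themselves.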

38).37) expresses the power production as a function of the temperature. Usually the gradient depends on the surface temperature or concentration. The problem becomes more interesting when we complicate it. the behavior of the solution on the boundary is governed by an integral equation in a single variable.34) becomes If we now use U(t) for u(0. (2. can be described by the rather complicated set of integropartial differential equations for —oo<x<o°.30) gives the gradient of u (and hence the rate of transfer across the boundary) as a fixed function of time. so that we must replace g(f) with some function G(u(0. we get a nonlinear Volterra equation of the second kind with an unbounded kernel Thus. Additional conditions will be taken as The second of these equations. t>0. t) and put x = 0 in (2. is simply a diffusion equation with an added source term due to the power generated by the reactor. The relation between the temperature of the reactor T(x. .SOME APPLICATIONS OF VOLTERRA EQUATIONS 21 so that So far everything is standard and well known. t).35). Equation (2. Another example of this type occurs in nuclear reactor dynamics. and the power produced u(t). t). In a physical situation where u represents temperature or concentration this is not very realistic. t) and (2. The condition (2.

22 CHAPTER 2 Applying the full Fourier transform to (2. we get which. we obtain This can be written in the more explicit form where and .27) to give Substituting this into (2.37) and exchanging order of integration. using integration by parts and the condition at infinity. t) satisfies with initial condition This equation has the simple solution which can be inverted by (2. the Fourier transform f(<o.38). becomes Therefore.

Equation (2.43) is a simple Volterra integrodifferential equation for a function of one variable.

2.4. Some problems in experimental inference. Since Volterra equations are generalizations of initial value problems for ordinary differential equations, it is not surprising that most applications occur in connection with time-dependent systems. This was the case in all examples in the previous sections of this chapter. There are, however, instances where Volterra equations occur in the description of completely static phenomena. One class of such applications involves the inference of quantities which are not directly observable, but which can be measured through certain "averaged" data. Consider, for example, a three-dimensional radiating body B in which the emission coefficient f (that is, the amount of radiation emitted by an infinitesimal volume element) varies with x, y, and z. In such situations f(x, y, z) is often unknown and has to be inferred from external measurements. Typically, the intensity of radiation is measured at various points outside B with the hope that, if enough measurements are taken, f(x, y, z) can be computed. In general this leads to Fredholm equations of the first kind, but when radial symmetries are present these may reduce to equations of the Abel type. Assume that B is a circular cylinder and consider the cross-section with the plane z = 0 (Fig. 2.3). FIG. 2.3. External measurement of radiation of a cylindrical object.

2. A slightly more complicated situation arises when B not only emits radiation. 0). y) we measure the radiation in the x-direction. Radiation originating at any point inside B is attenuated when traveling through B.48) becomes the generalized Abel equation where As a second example we take a situation which often arises in stereology and related fields. that is. then A transformation of variables then gives the equation which is an equation of the Abel type for the unknown -R(r).4). we take the coefficient of attenuation p as constant throughout B. for simplicity. y) = f ( x . y) travels a distance Vl — y 2 — x before leaving B. where r = Vx2 + y2. which is the radius of that portion of a particle lying within the slice of thickness D (Fig. then (2.44) must be replaced by since radiation originating at (x. Spherical particles of various sizes are embedded in some material. y).24 CHAPTER 2 At the point P = (c. but also absorbs it. Then clearly where h(x. Generally. y. . what can be measured is the apparent radius. If. The distribution of particle size is unknown and is to be inferred by taking slices through the medium and observing the particles within the slices. (2. denoting it by g(y). in general. After the appropriate change of variables. this attenuation is exponential with the distance traveled. If we further assume radial symmetry for h(x.
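The Abel-type operators arising here have the form (Af)(t) = ∫₀ᵗ f(s)(t − s)⁻¹ᐟ² ds. A short numerical sketch (the test functions are our own illustrative choices): the substitution s = t − u² removes the endpoint singularity, after which an ordinary trapezoidal rule applies,

```python
import math

# Forward Abel-type operator  (Af)(t) = integral_0^t f(s) / sqrt(t - s) ds.
# The substitution s = t - u^2 removes the singularity:
#   (Af)(t) = 2 * integral_0^{sqrt(t)} f(t - u^2) du,
# which is smooth and can be handled by the trapezoidal rule.

def abel_forward(f, t, n=400):
    if t == 0.0:
        return 0.0
    h = math.sqrt(t) / n
    total = 0.5 * (f(t) + f(0.0))
    for i in range(1, n):
        u = i * h
        total += f(t - u * u)
    return 2.0 * h * total

# Known pairs for checking (both are standard textbook identities):
#   f = 1   gives (Af)(t) = 2 sqrt(t)
#   f = s   gives (Af)(t) = (4/3) t^{3/2}
val1 = abel_forward(lambda s: 1.0, 0.25)   # should be 2*sqrt(0.25) = 1
val2 = abel_forward(lambda s: s, 0.25)     # should be (4/3)*0.125 = 1/6
```

This forward map is what the measurement process realizes; the inverse problem of recovering f from g is the Abel inversion discussed in the text.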

51). assume that the radii are all in the range 0 «s x «s R.SOME APPLICATIONS OF VOLTERRA EQUATIONS 25 FIG. the first being particles whose center is in the slice and whose radius is between x and x + dx. so that is the probability that a particle has radius between x and x + dx. For the sake of simplicity. If a particle of radius r has its center at a distance y from the surface of the slice. The apparent radius is between x and x + dx only if its actual radius is between r and r + dr. then its apparent radius is x. 2. the number of particles with apparent radius between x and x + dx is computed from two parts. Let p denote the density of particle centers and assume that it is constant throughout the material.4. Apparent diameters of spherical particles embedded in a slice. If the center of a particle of radius x lies in the slice. where the relation between x and r is given by (2. In particular so that the contribution from particles at y is proportional to . then its apparent radius is Thus. The total from all such particles in the slice is The second contribution comes from particles whose centers are located at a distance y outside the slice. assume that the slice has unit surface area. We can then make the following simple arguments. Lastly. The probability density for particle radii will be denoted by /(x).

This is to be summed over all possible y, that is, 0 ≤ y ≤ √(R² − x²) and −√(R² − x²) ≤ y ≤ 0, giving a total contribution of On changing the integration with respect to y to one with respect to r, this becomes If we now let G(x) dx denote the observed number of particles with apparent radius between x and x + dx, then combining (2.52) and (2.53) we see that This is a singular Volterra equation of the second kind. In some cases only the surface of the slice can be examined, that is, D = 0. In this case (2.54) becomes an equation of the Abel type The practical solution of the integral equations arising in this connection is complicated by the fact that the quantities observed are subject to experimental errors. The effect of these uncertainties on the inferred solution must be carefully considered. This is a point to which we will return in later chapters.

Notes on Chapter 2. There are many books and papers dealing with Volterra equations arising in practical applications. We give here only a few references which are representative of what exists and which give the reader an introduction to a vast literature. The original work of Volterra can be found in [231]; Davis [75] is a more accessible reference. Miller [190] and Noble [196] give elementary discussions of applications at much the same level as this chapter, showing the essence of the subject while omitting some details. The books by Bellman [25] and Bellman and Cooke [26] contain a wealth of material. Because of its importance, much work has been done on renewal equations. The original work on renewal equations was done by Lotka [173] and Feller [101]; some newer material is in [226] and [235]. More recent work dealing with population growth can be found in [41] and [221].

A simple discussion of the relation between Volterra equations and system modeling is given in [105], while [71] presents a much more extensive and theoretical approach. Mann and Wolf [179] discuss the heat conduction problem, while a discussion of the equations of nuclear reactor dynamics can be found in [159]. Edels, Hearne and Young [88], Minerbo and Levy [191], and Nestor and Olsen [193] discuss the Abel equation arising from the radiating sphere. There are many papers dealing with applications in stereology and related questions, for example [7], [8], [9], [86], [119], [120], [136], [143], [204], [216]. In addition to the applications mentioned in this chapter, Volterra equations occur in areas such as viscoelasticity [73], superfluidity [160], study of epidemics [234], and damped vibrations [161].


Chapter 3

Linear Volterra Equations of the Second Kind

The simplest and most completely understood Volterra equation is the linear equation of the second kind. In §§ 3.1 and 3.2 we look at the questions of existence and uniqueness of the solution and its formal representation by the use of resolvent kernels. Section 3.3 is devoted to the study of some qualitative properties of the solution, including comparison theorems, its behavior as t → ∞, and its sensitivity to perturbations. In §§ 3.4 to 3.6 some of the results are extended to equations with unbounded kernels, systems of equations, and integrodifferential equations.

3.1. Existence and uniqueness of the solution. The classical approach to proving the existence and uniqueness of the solution of (3.1) is the method of successive approximation, also called the Picard method. This consists of the simple iteration

f_n(t) = g(t) + ∫₀ᵗ k(t, s) f_{n−1}(s) ds,

with f₀(t) = g(t). For ease of manipulation it is convenient to introduce

φ_n(t) = f_n(t) − f_{n−1}(t),

with φ₀(t) = g(t).

from (3. it holds for all n. Hence we can interchange the order of integration and summation in the following expression to obtain . we see that Also. then the integral equation (3. namely that k(t.4) Since (3.6) is true for n — 1. The first theorem uses this iteration to prove the existence and uniqueness of the solution under quite restrictive conditions. then from (3. s) and g(f) are continuous. Choose G and K such that We first prove by induction that If we assume that (3.3). The series (3.6) is obviously true for n = 0. If k(t.1) possesses a unique continuous solution for O^t^T. s) is continuous in O^s^t^T and g(t) is continuous in 0=SfsST.30 CHAPTER 3 On subtracting from (3.1.5) converges and we can write We now show that this f(t) satisfies equation (3. This bound makes it obvious that the sequence fn(t) in (3. THEOREM 3.2) the same equation with n replaced by n — 1. Proof.7) is uniformly convergent since the terms (pt(t) are dominated by G(KT)l/i\.1).
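The successive approximation argument translates directly into a computation. The following sketch (the model problem g ≡ 1, k ≡ 1, with exact solution eᵗ, is an illustrative choice of our own) iterates the Picard map on a grid, with the integral replaced by the trapezoidal rule:

```python
import numpy as np

# Successive approximation (Picard iteration) for the linear Volterra
# equation of the second kind,
#   f(t) = g(t) + integral_0^t k(t, s) f(s) ds,
# discretized with the trapezoidal rule.  The model problem g = 1,
# k = 1 is an illustrative assumption; its exact solution is e^t.

def picard(g, k, T, n=200, sweeps=30):
    t = np.linspace(0.0, T, n + 1)
    f = g(t).astype(float)                       # f_0 = g
    for _ in range(sweeps):
        new = g(t).astype(float)
        for i in range(1, n + 1):
            s = t[: i + 1]
            y = k(t[i], s) * f[: i + 1]
            # trapezoidal rule for integral_0^{t_i} k(t_i, s) f(s) ds
            new[i] += float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2.0)
        f = new
    return t, f

t, f = picard(lambda t: np.ones_like(t), lambda ti, s: np.ones_like(s), T=1.0)
err = float(np.max(np.abs(f - np.exp(t))))
```

Just as the theory predicts, the iterates converge rapidly (the error after n sweeps is bounded by a term like (KT)ⁿ/n!), and the limit agrees with eᵗ up to the quadrature error.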

Exercise 3. since it is the limit of a uniformly convergent sequence of continuous functions.8) and repeating the step shows that for any n. It is not difficult to show that any discontinuous solution of (3. the right-hand side is arbitrarily small. Therefore f(t) is continuous. For large enough n. To show that f(t) is the only continuous solution.1. For c^O this solution is discontinuous at t = 0.1 the .1) must. so that we must have and there is only one continuous solution. there may be solutions which are not continuous. Prove that any integrable solution of (3. suppose there exists another continuous solution f(t) of (3. Prove that under the conditions of Theorem 3.1) must be nonintegrable and this is true of (3. The kernel s* s is continuous.LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 31 This proves that f(t) defined by (3. but it can be verified that is a solution of (3.1).10). under the conditions of Theorem 3.1. Exercise 3.7) satisfies equation (3. One well-known example is the equation which has one obvious solution f(t) = 0. Then Since /(() and f(t) are both continuous. Each of the <Pi(t) is clearly continuous.2. be continuous.1). Let p(t) and q(t) be continuous on O^fssT and such that Q^p(t)^q(t)^t.9) for any c. there exists a constant B such that Substituting this into (3. Note that although there is only one continuous solution.

Tj. (iii) k(t. (iv) there exist points 0 = T0<T1<T2<. However. T] Then (3. The Picard iteration.1) has a unique continuous solution for O^f^T. T2]. (ii) for every continuous function h and all 0 < TI *£ T2 ^ t the integrals and are continuous functions of t.1 is a simple case of a contraction mapping argument. and so on. The approach used in the next result can be called the method of continuation.1 under less restrictive conditions.1) (i) g(t) is continuous in O^f^T. Assume that in (3. equation (3. For example. we will not pursue this. T].. It is possible to repeat what has been done here in a more general setting and thereby obtain the result of Theorem 3. we eventually cover the whole interval [0. s) is absolutely integrable with respect to s for all Q^t^T.2. T3]. The conditions needed for the next theorem do arise in some cases of interest not covered by Theorem 3. Instead. since for our purposes here such general results are not needed. . the same kind of argument can be made for square-integrable kernels. (v) for every t in [0. We first establish existence and uniqueness in some interval [0. The proof of Theorem 3.1. THEOREM 3. is not restricted to continuous kernels and functions. for all i and all T4 =s£f*£T i+1 where a is independent of t and i. then show that this solution can be continued to successive intervals [T!. but can also be carried out in general function spaces. Under suitable conditions.32 CHAPTER 3 equation has a unique continuous solution. [T2. let us look at another way of proving existence and uniqueness.2).-<TN = T such that.

the solution is unique. To see that f(t) is the only continuous solution. Tx]. respectively. by (3. we see that the sequence (3. We write the equation as with where. This can be applied several times to yield Since a < 1.12). Then so that which implies Since a < 1 this can be true only if f(t) = f(t). TJ.7) is continuous. the <pn(t) are continuous functions and hence f(t) defined by (3. in (3. Also. Define /n(r) and <pn(t) as in (3. Having established existence and uniqueness in [0.14) is .3). Consider first the interval [0.LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 33 Proof.2) and (3.5) is dominated by and converges uniformly. But (3. Then so that. assume there exists another continuous solution f(t). because of (3. that is.11). f(s) is the solution obtained in the first step. we now proceed to the next interval [Tl5 T2].15).

The kernel satisfies the conditions of Theorem 3. Thus. The equation has two continuous solutions.2 are automatically satisfied.2. Theorem 3.14). For example. say tf/(t).2. This argument can be repeated and since there is only a finite number of subintervals in [0.14) has a unique continuous solution in [Tl5 T2]. so that all assumptions of the theorem are satisfied for (3. T].2. (3.1. it is not necessary that k(t. we thereby construct the unique continuous solution in [0.1. From (3. Something like the additional conditions is needed as can be seen from the next example. We can therefore apply the basic step again. On the other hand. T2]. the trivial solution f(t) = 0 as well as the . as shown by the following example.2. It is however not sufficient that k(s.34 CHAPTER 3 just a Volterra equation with origin shifted from 0 to Tx. because of condition (ii). s) be continuous for (3. Note that. this theorem can be generalized by repeating the above arguments in other function spaces. Example 3. Of greater interest are certain unbounded kernels.14) and condition (v) we see that But obviously so that the continuation from f(t) to i/r(f) is continuous and we have a unique continuous solution in [0.2 allow some cases not covered by Theorem 3.1 imply that assumptions (i) to (v) in Theorem 3.1. so that some types of discontinuous kernels can be analyzed by Theorem 3. since the conditions of Theorem 3. We leave it as an exercise to show that this claim is true. G(f) is continuous. Example 3. the conditions of Theorem 3.2 is more general than Theorem 3. Again. so that the equation has a unique continuous solution for all continuous g. T]. t) only satisfy assumptions (ii) and (iii) of Theorem 3.11) to hold.

s) and g(f) are continuous.2.5. the computation of (p n (f) by (3. but not conditions (iv) and (v) of Theorem 3.2 to show that the equation with has a unique continuous solution for all 0«sT<oo. Exercise 3.2. then the order of integration can be interchanged and we get where .2.4. We have If k(t.5. Exercise 3. in particular. The resolvent kernel.6. Exercise 3. Find the actual solution of the equation in Exercise 3. Show that the kernel satisfies conditions (ii) and (iii).LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 35 solution f(t) = t. Exercise 3. Show that the kernels and satisfy the conditions (ii) to (v) of Theorem 3. Use Theorem 3. The kernel (f2-s2)~1/2 satisfies conditions (ii) and (iii). 3.2. Let us take another look at the method of successive approximation.4). but not the last two conditions of Theorem 3.3.

In . From (3. s) is the resolvent kernel for k(t.36 CHAPTER 3 A similar result holds for the other <pn(t) and it follows immediately by induction that where with kt(t. s) and g(f) are continuous.3.1) is given by Proof. s) is continuous and then This follows immediately by induction on (3. The function T(f. If fc(f. The fcn are called the iterated kernels. s).17). then the unique continuous solution of (3.5) we then have where If k(t. s) = k(t. Therefore is uniformly convergent for 0«£s ^ t ^ T. THEOREM 3. s).

1. The resolvent kernel itself can be expressed as the solution of an integral equation.7.4. then k is said to be a difference kernel. Under the assumptions of Theorem 3. Using (3.7).3 the resolvent kernel Y(t. we see that Exercise 3. a simplification occurs that makes (3. that is.21). THEOREM 3. Sometimes it is convenient to write (3. Use this to show that the resolvent kernel also satisfies the equation Equation (3. . DEFINITION 3.23) involves an unknown function of two variables and is not generally useful for the actual computation of the resolvent kernel.LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 37 the order of integration and summation can be interchanged. s) satisfies the equation Proof. First show that the iterated kernels defined in (3.1) is a function of t-s only.23) computationally useful. Therefore where the last step follows from (3.17) also satisfy for all 1 =^p =s£ n -1.23) in somewhat different forms. however. If the kernel of (3.17) and (3. In one important case.

Even when an explicit solution of (3.1) is given by where the resolvent kernel R(t) is the solution of Proof. . we see that so that /(f) defined by (3.1) when fe is a difference kernel.26).38 CHAPTER 3 THEOREM 3.26). it may be possible to obtain some qualitative information about it. 3. the resolvent kernel is itself the solution of an equation with a difference kernel. If k is a difference kernel. If in the inner integral we substitute s — r = u. This observation will be useful when we take a closer look at such equations in Chapter 6. by (3. For equations with difference kernels then. Let R(t) be the unique continuous solution of (3.25) is the solution of (3. and k(t) and g(t) are continuous.3.25). then.5. then the unique continuous solution of (3.25). Some qualitative properties of the solution. Consider then f(t) defined by (3.1) cannot be found. Thus Applying now (3. Then where the last step involves the usual interchange of order of integration. Results on the smoothness of the solution.
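For a difference kernel the resolvent R(t) itself solves a convolution equation, and once R is known the solution is given by f(t) = g(t) + ∫₀ᵗ R(t − s) g(s) ds. A numerical sketch (the kernel k ≡ 1 is an illustrative choice, for which R(t) = eᵗ, and with g ≡ 1 the solution of f = 1 + ∫₀ᵗ f(s) ds is also eᵗ):

```python
import math

# Trapezoidal computation of the resolvent R(t) for a difference kernel,
#   R(t) = k(t) + integral_0^t k(t - s) R(s) ds,
# followed by the solution formula f(t) = g(t) + integral_0^t R(t-s) g(s) ds.
# The kernel k = 1 is an illustrative assumption; then R(t) = e^t.

def resolvent(k, T, n=200):
    h = T / n
    R = [k(0.0)]
    for i in range(1, n + 1):
        ti = i * h
        acc = k(ti) + h * (0.5 * k(ti) * R[0]
                           + sum(k(ti - j * h) * R[j] for j in range(1, i)))
        # the diagonal term enters implicitly with weight h/2
        R.append(acc / (1.0 - 0.5 * h * k(0.0)))
    return R

n, T = 200, 1.0
h = T / n
R = resolvent(lambda u: 1.0, T, n)
err_R = max(abs(R[i] - math.exp(i * h)) for i in range(n + 1))
# f(1) = g(1) + integral_0^1 R(1 - s) g(s) ds with g = 1, by the trapezoidal rule
f1 = 1.0 + h * (0.5 * (R[n] + R[0]) + sum(R[n - j] for j in range(1, n)))
```

The computed resolvent matches eᵗ to O(h²), and inserting it into the solution formula reproduces f(1) = e.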

Exercise 3. For equations with difference kernels. If we assume further differentiability for g(f) and k(t. and (d/dt) k(t. Complete the proof of Theorem 3. The extension of this argument is obvious. T].LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 39 upper and lower bounds. of the solution.6.. If we allow ourselves to impose sufficiently stringent conditions.. we conclude that.8.6. In fact. Exercise 3. the differentiability. If (i) g(f) is p times continuously differentiable in [0.9. however. and its asymptotic behavior are of practical importance. that is. carry on the process outlined above. THEOREM 3.27) are continuous. then (3. p. much more work has been done. T]. The rather stringent conditions of Theorem 3. One concern of particular importance in the numerical solution is the smoothness.. s) is continuous in 0*s s =s= t ^ T for all j = 0.1) the function g is continuously differentiate on [0.1) is p times continuously differentiable in [0.1. if all derivatives needed in (3. T]. it follows from this that /'(*) is also continuous. Proof. s) is continuous. k(t. then the solution of (3. We leave the details as exercise. then f"(t) is continuous. These results are intended as a demonstration of the arguments rather than as a presentation of a coherent picture. (ii) (ff/dt') k(t. s) is bounded. T] for all q ^ O and r ^ O such that r + q^p-1. In this section we summarize a few results which can be obtained by elementary means. If in (3. s). for general kernels the situation seems quite complicated and no comprehensive theory exists. Inductively. the analysis is quite trivial. Some of this will be outlined in Chapter 6. we can differentiate once more to get where we use the notation Since f'(t) is continuous.. (iii) (d q /df q ) kr(t) is continuous in [0. here we consider only the general case.6 can be .1) has a continuous solution and the equation can be differentiated to give Since /(t) is continuous.

Assume that the kernel k(t.7. which is the required result. . it is clear that F(f)-|/(0|>0 for all t^T. s) in (3. . . s) satisfying and such that the integral equation has a continuous solution F(t) for Proof. p is a continuous function of t.30) gives Since F(0)-|/(0)|>0 and K(t. Apply this new result to investigate the smoothness of the solution of the equation The next set of theorems deals with bounds on the solution of (3. .40 CHAPTER 3 relaxed in many ways. From (3. Assume also that there exist functions G(t) and K(t.1) and its behavior as *—»<». The first is a simple comparison theorem useful for both theoretical and practical purposes. Show that Theorem 3. 1.1) is absolutely integrable with respect to s for all O ^ r ^ T and that the equation has a continuous solution.1) Then Subtracting this from (3. .6 continues to hold if condition (ii) is replaced by: (ii') for every continuous function h(t) and all j = 0.s) is positive. THEOREM 3.

3. Let f(t) be an approximate solution to (3.30) can be solved.30) becomes which has the solution so that Example 3.3.4.1) such that Then /(f)-/(f) satisfies If |r(t)|<e.3 and 3. Let g(f) and k(t.3 that The bounds established with the arguments in Examples 3.LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 41 The use of this theorem requires that we find a K(t. We illustrate the application of this result with a few examples. s) be bounded by Then (3.27). Sometimes much better results can be achieved if we use a K(t. Consider the equation Since the equation .5. Example 3.4 are often very crude. s) simple enough so that (3. Example 3. s) be bounded as in Example 3. then we see from the result of Example 3. s) of the form and find the solution using the explicit form (1. Let g(t) and k(t.
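The comparison theorem can be checked numerically. In the sketch below (the kernel k(t, s) = sin(ts) and g ≡ 1 are illustrative choices of our own, with the bounds G = K = 1), the solution of the comparison equation F = 1 + ∫₀ᵗ F(s) ds is F(t) = eᵗ, and the computed solution is indeed dominated by it:

```python
import math

# Step-by-step trapezoidal solution of
#   f(t) = g(t) + integral_0^t k(t, s) f(s) ds,
# used to illustrate the comparison theorem: with |g| <= 1 and
# |k(t, s)| = |sin(ts)| <= 1, the solution must satisfy |f(t)| <= e^t.
# The particular kernel and g are illustrative assumptions.

def solve_second_kind(g, k, T, n=100):
    h = T / n
    t = [j * h for j in range(n + 1)]
    f = [g(t[0])]
    for i in range(1, n + 1):
        acc = g(t[i]) + h * (0.5 * k(t[i], t[0]) * f[0]
                             + sum(k(t[i], t[j]) * f[j] for j in range(1, i)))
        # the diagonal term k(t_i, t_i) f_i enters implicitly
        f.append(acc / (1.0 - 0.5 * h * k(t[i], t[i])))
    return t, f

t, f = solve_second_kind(lambda u: 1.0, lambda ti, s: math.sin(ti * s), 1.0)
dominated = all(abs(f[i]) <= math.exp(t[i]) + 1e-9 for i in range(len(t)))
```

Since |sin(ts)| ≤ sin 1 < 1 on the triangle of integration, the computed solution stays below the comparison solution with a comfortable margin.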

29) replaced by (3.35) is satisfied and (3. Exercise 3. respectively.42 CHAPTER 3 can be solved by (1.36) is not as special as may appear. We will consider the special equation Taking g(f) = 1 simplifies matters by eliminating difficulties due to varying g.33) and (3.7 to hold under these weakened conditions.28) and (3. Use Example 3. (3. They yield bounds as well as information on the asymptotic behavior of the solution.36). if we assume that then the theorem holds with (3.34) respectively.33) and (3.7 does not hold if (3.36). divide by g(0.36) is sufficiently smooth to guarantee a differentiate solution and permit all differentiations used in the proof of the following theorem. The next two theorems are concerned with kernels with certain sign constraints.34). but no additional requirements are made. . some further restrictions have to be imposed on the kernel.28) and (3. Prove that Theorem 3.11.10. Exercise 3.2 to show that Theorem 3. First. For example. Actually.29) are replaced by (3. At least for g(f) that do not change sign in [0.28) and (3. T]. equation (3.33) and (3.29) cannot immediately be relaxed to and For Theorem 3.27) to give we see immediately that The inequalities (3.1) can be transformed to (3.28) and (3.7 holds if (3.29) are replaced by (3. then introduce the new variable u(t} = f(t)/g(t) and the kernel This gives the form (3. We will assume that the kernel in (3.34).

If the conditions of Theorem 3. THEOREM 3.38) and the fact that f(t)^Q for t*^t*.9. The right part then follows easily from (3. From (3. Then and . hence f'(t*) cannot be positive. If in (3.36). Then Because of (3.36) Since we are assuming that k and its derivatives are bounded.LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 43 THEOREM 3. and if in addition and lim(^00/(r) exists. This contradiction proves the left half of the inequality (3. Assume that We can then pick r large enough so that f(t)^ a/2 for all t > T.36) satisfies Proof.36). Assume now that t* is the smallest positive number such that f(f*) = 0.41) is positive. we have for all Q^s^t^T. However. the right-hand side of (3.8. we have that f(0)= 1 and /'(O) ==£().39). then the solution of (3.8 hold. f(t] changes from positive to negative. then Proof. at t*.

nor do there seem to be any simple additional conditions. Show that if the conditions of Theorem 3. contradicting the fact that/(f) 2*0. Example 3. while /(f) = 1 for f ^1.37) and (3. But the solution of is clearly less than 1 in 0 < f < l .36) the characterization of the asymptotic behavior of the solution is difficult and contains a number of open problems. Unfortunately. We will reconsider some of these questions in Chapter 6 for difference kernels.12. This can be seen from the following example.42). Exercise 3.9 is left open. . Exercise 3.13. Show that the solution of satisfies and The question of the existence of limt_w/(0 in Theorem 3.38) are not sufficient to guarantee this.37) and (3. Even for the simplified equation (3.44) are all satisfied. the last term on the right goes to —«> as f -*• oo. Consider Then conditions (3. such as perhaps that would do the trick. conditions (3.8 hold and if then limt^oo/(0 cannot be zero. and (3. This matter could be settled easily if one could show that f(t) decreased monotonically.6.44 CHAPTER 3 By (3.38).

Then so that We now apply the result of Example 3. s). s) be continuous and bounded by Let f(t) be the solution of Then f(t) satisfies where f(t) is the solution of (3. Let g(t). consider the behavior of the solution of (3. s) and g(f).4 with to yield (3. From the results of Example 3.1) with respect to changes in g and k. that is. small perturbations in the parameters cause a small corresponding change in the solution.47) for the effect of a perturbation shows that (3. and the size of the perturbations are measured by their norms. The order of magnitude estimate (3. with a similar definition for the norm of the perturbation in the kernel. k(t.46). a problem is said to be well-posed if it has a unique solution which depends continuously on the parameters. THEOREM 3.3 we have immediately that Now substitute f(t) into (3. Proof.1). Ag(t). Ak(f. A .1). Here the parameters are k(t. By definition.10. for example.LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 45 As a final result.1) is well-posed.

The inequality (3. the sensitivity of the solution to a small change may be considerable.46 CHAPTER 3 unique continuous solution of (3. THEOREM 3. Some of the results just presented can be generalized to systems of equations with only minor changes in notation. But it does indicate that equations for which the kernel has large magnitude can be troublesome computationally. the steps involved in proving results for systems are often formally the same as those for single equations. If the system (3. If g(f) and k(t. the linear system can be written as Using the vector norm the induced matrix norm is With this notation. Therefore. s) are continuous in O ^ s ^ f ^ T (meaning that all components are continuous). THEOREM 3. using norms in place of absolute values. (3. Proof.11. since (3.46) is an inequality.1. Theorem 3. 3. In a similar vein.4.7.49) possesses a unique continuous solution .1) is wellposed. This is of course not necessarily so.47) and indicates that if K is large. s)||.46) shows its continuous dependence on ||Ag|| and ||Afc(f.46) gives a little more information than (3.48) has a unique continuous solution for 0 *£ f =£ T. we can get analogues of the comparison result. Repeat the steps in the proof of Theorem 3. and (3.1) is known to exist. Using the usual vector-matrix notation.12. Systems of equations. then the system (3.
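The vector-matrix notation of this section carries over directly to computation: the scalar trapezoidal scheme becomes a matrix recursion with a small linear solve for the implicit diagonal term. A sketch (the constant 2×2 kernel K = [[0, 1], [1, 0]] and g = (1, 0)ᵀ are illustrative choices of our own; the exact solution is f(t) = (cosh t, sinh t)ᵀ):

```python
import numpy as np

# Trapezoidal solution of a linear Volterra system
#   f(t) = g(t) + integral_0^t K(t, s) f(s) ds
# in vector-matrix form.  The kernel K = [[0, 1], [1, 0]] and
# g = (1, 0)^T are illustrative assumptions; differentiating shows
# f1' = f2, f2' = f1, so the exact solution is (cosh t, sinh t)^T.

def solve_system(g, K, T, n=200):
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    d = len(g(0.0))
    f = np.zeros((n + 1, d))
    f[0] = g(0.0)
    I = np.eye(d)
    for i in range(1, n + 1):
        acc = g(t[i]) + h * (0.5 * K(t[i], t[0]) @ f[0]
                             + sum(K(t[i], t[j]) @ f[j] for j in range(1, i)))
        # implicit diagonal term: (I - h/2 K(t_i, t_i)) f_i = acc
        f[i] = np.linalg.solve(I - 0.5 * h * K(t[i], t[i]), acc)
    return t, f

Kmat = np.array([[0.0, 1.0], [1.0, 0.0]])
t, f = solve_system(lambda u: np.array([1.0, 0.0]), lambda ti, s: Kmat, 1.0)
err = float(np.max(np.abs(f - np.stack([np.cosh(t), np.sinh(t)], axis=1))))
```

As the text remarks, the steps are formally the same as in the scalar case, with norms in place of absolute values; the code differs from the scalar solver only in the final linear solve.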

as for example in (2. the result converges to a continuous function which is the solution of (3.7.56). Yet. some iterated kernel will be bounded. This argument requires. Equations with unbounded kernels. To see this. Extend Theorem 3. Exercise 3.10 to systems of equations. To relax this condition is not a simple matter. however.12. 3. equations with unbounded kernels are of practical interest.11 and 3. in particular. kernels containing factors (t-s)~l/2 and (t2—s2)~1/2 arise in applications.15. if g(0 is continuous. which then is used to show that the method of successive approximation converges. s)f(s) is absolutely integrable and if with continuous K and G.36). Similar arguments can be made for k(t. then where F(t) is the continuous solution of Proof. Exercise 3. In the previous sections of this chapter it was frequently assumed that k(t. compute the iterated kernel The second and higher iterated kernels are therefore bounded. Repeat the steps in the proof of Theorem 3. A simple example is the equation The method of successive substitutions can still be carried out and. While equations with unbounded kernels are on the whole not as well understood as those with smooth kernels. A more general approach is through the continuation arguments given in . s) was bounded (often also continuous or smoother).LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 47 1(0 in OsSfsST. such that k(f. Eventually. some of the previous results do carry over. with 0 < a < l . s) = (t-s)~~a. using norms instead of absolute values. that we can manipulate the kernels to bound the iterated kernels.14.5. Carry out the details in the proofs of Theorems 3.

48 CHAPTER 3 Theorem 3. Tj and define cpn(f) as before. We sketch only the first step.2.13. Corresponding to Theorem 3. Assume that in (3.2. THEOREM 3. When the kernel is unbounded (or has some other irregular behavior) it is often convenient to rewrite (3. As we indicated there. s) is continuous in Q^s^t^T. Proof. Then .57) has a unique continuous solution in 0 =£ t ^ T. (v) there exist points 0 = T0 < T\ < T2 < • • • < TN = T such that with t ^ Tf where (vi) for every t^Q Then (3. s) represents the part with the nonsmooth behavior. (iv) p(t. Consider the interval [0.1) as where p(t. (ii) k(t. Proceed as in Theorem 3. the theorem can be applied to certain types of unbounded kernels.2 we then have the following result. s) is absolutely integrable with respect to s for all 0«£f «*T. (iii) for each continuous function h and all Q^rl^r2^t the integrals and are continuous functions of t.57) (i) g(t) is continuous in Q^t^T. This illbehaved part is usually quite simple which helps in the analysis.
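Unbounded kernels such as (t − s)⁻¹ᐟ² can be handled numerically by product integration: the smooth factor is approximated piecewise, while the moments of the singular factor are integrated exactly. A sketch (the test pair g(t) = 1 − 2√t with exact solution f ≡ 1 is our own choice; because the true solution is constant and the singular moments are exact, the scheme reproduces it to rounding error):

```python
import math

# Product-integration scheme for the weakly singular equation
#   f(t) = g(t) + integral_0^t (t - s)^{-1/2} f(s) ds.
# On each subinterval f is replaced by the average of its endpoint
# values, while the singular factor is integrated exactly:
#   integral_{t_j}^{t_{j+1}} (t_i - s)^{-1/2} ds
#       = 2 * (sqrt(t_i - t_j) - sqrt(t_i - t_{j+1})).
# The test pair g(t) = 1 - 2 sqrt(t), f = 1 is an illustrative assumption.

def solve_weakly_singular(g, T, n=200):
    h = T / n
    t = [j * h for j in range(n + 1)]
    f = [g(0.0)]
    for i in range(1, n + 1):
        acc = g(t[i])
        for j in range(i):
            w = 2.0 * (math.sqrt(t[i] - t[j]) - math.sqrt(t[i] - t[j + 1]))
            if j + 1 < i:
                acc += w * 0.5 * (f[j] + f[j + 1])
            else:
                acc += w * 0.5 * f[j]          # f_i is still unknown here
        # the f_i half of the last subinterval gives the factor 1 - sqrt(h)
        f.append(acc / (1.0 - math.sqrt(h)))
    return t, f

t, f = solve_weakly_singular(lambda u: 1.0 - 2.0 * math.sqrt(u), 1.0)
err = max(abs(v - 1.0) for v in f)
```

The weights telescope to 2√tᵢ, so the scheme integrates the singular factor without any loss of accuracy at the endpoint.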

h(t).LINEAR VOLTERRA EQUATIONS OF THE SECOND KIND 49 and. Rather than develop a general theory.14. s). Exercise 3. can be used for the analysis of a variety of integrodifferential equations. k2(t. This solution can then be shown to be continuable to the other intervals as in Theorem 3. let us take a more specific case. TJ.6. Exercise 3. s) is a scalar function satisfying the conditions given in Theorem 3.13 to systems of equations where p(t. Let f(t) and z(i) be the solution of the system . Extend the results of Theorem 3. The extension to equations of higher order is obvious.16. Show that the equation has a unique continuous solution. Consider the -equation with assuming that g(t). This leads to the conclusion that the equation has a unique continuous solution in [0.13.2.61) has a unique continuously differentiate solution in Proof.1. Complete the arguments in the proof of Theorem 3. described in § 1. by condition (v) with a<l. THEOREM 3. 3.17. Exercise 3. Integrodifferential equations.18. k^(t. The reduction of integrodifferential equations to a system of integral equations. s) are all continuous in Then (3.13.

Then, according to Theorem 3.11, the system (3.62), (3.63) has a continuous solution. Since z(t) is continuous, we see from (3.63) that f(t) is not only continuous, but also continuously differentiable, with … . Therefore, f(t) satisfies (3.61). Since every solution of (3.61) satisfies a system of the form (3.62), (3.63), and the latter has a unique continuous solution, the integrodifferential equation (3.61) must have a unique continuously differentiable solution. Other integrodifferential equations can be treated in a similar fashion.

Exercise 3.19. Show that the equation … , with g(t), k_1(t, s), k_2(t, s), k_3(t, s) all continuous, has a unique twice continuously differentiable solution.
Exercise 3.20. Show that the equation … , with g(t) continuous, has a unique continuously differentiable solution.

Notes on Chapter 3. The use of Picard iteration and resolvent kernels is standard and can be found in most texts on integral equations. The method of continuation, although more powerful than Picard iteration in demonstrating existence and uniqueness, seems to be less popular. The setting used here is the space of continuous functions; for more abstract approaches see Corduneanu [71] and Miller [190]. Some elementary qualitative results, going beyond what is given here, can be found in [24]. A treatment of equations with nonsmooth kernels can be found in Davis [74]; for unbounded kernels see [106], [125], and [208]. As Theorem 3.6 shows, smoothness results for equations with differentiable kernels are quite trivial. This matter is considerably more complicated for important unbounded kernels such as k(t, s) = (t − s)^{−α}. For an investigation of this case, see [189].

Chapter 4
Nonlinear Equations of the Second Kind

In this chapter we turn our attention to the nonlinear Volterra equation of the second kind

f(t) = g(t) + ∫_0^t K(t, s, f(s)) ds.   (4.1)

We investigate the solution of this equation under various assumptions on the kernel. In § 4.1 the existence and uniqueness results for linear equations are extended under the assumption that K(t, s, u) satisfies a simple Lipschitz condition with respect to the third argument. While the analysis here is easy, the conditions are restrictive. In § 4.2 we consider some problems that arise in connection with kernels which are not Lipschitz continuous. Some qualitative properties of the solution are investigated in § 4.3. A brief discussion of equations with unbounded kernels and nonlinear systems is given in § 4.4. A discussion of an application of the resolvent kernel concept for nonlinear equations concludes the chapter.

4.1. Successive approximations for Lipschitz continuous kernels. The method of successive approximations as applied to nonlinear equations is a direct generalization of the method developed in Chapter 3 for the linear case. Successive iterates are defined by

f_n(t) = g(t) + ∫_0^t K(t, s, f_{n−1}(s)) ds,   (4.2)

with f_0(t) = g(t). As in the linear case, we subtract from (4.2) a similar equation with n replaced by n − 1.

To make the analysis as simple as possible we assume that K(t, s, u) satisfies a Lipschitz condition of the form

|K(t, s, y) − K(t, s, z)| ≤ L |y − z|,   (4.4)

where L is independent of t, s, y, and z. Introducing as before φ_n(t) = f_n(t) − f_{n−1}(t), with φ_0(t) = g(t), we see that … and … . A theorem and proof for the nonlinear case, analogous to Theorem 3.1, is immediate.

THEOREM 4.1. Assume that in (4.1) the functions g(t) and K(t, s, u) are continuous in 0 ≤ s ≤ t ≤ T and −∞ < u < ∞, and that furthermore the kernel satisfies a Lipschitz condition of the form (4.4). Then (4.1) has a unique continuous solution for all finite T.

Proof. From (4.7) it follows, as before, that … , where … . Therefore … exists and is a continuous function. To prove that f(t) defined by (4.10) satisfies the original equation, set … . From (4.2)

so that … . Applying the Lipschitz condition (4.4) gives … , where … . But … , so that, by taking n large enough, the right-hand side of (4.13) can be made as small as desired. It follows that the function f(t) defined by (4.10) satisfies … and is therefore a solution of (4.1).

To show uniqueness, we assume the existence of another continuous solution f̃(t). Then … , from which it follows that … . Since |f(t) − f̃(t)| must be bounded, say by B, (4.15) implies that … . By repeating the argument we are led to … for any n, and hence conclude that f(t) = f̃(t).

Exercise 4.1. Show that the equation … has a unique continuous solution for all t.

4.2. Existence and uniqueness for more general kernels. The use of the Lipschitz condition (4.4) simplifies the analysis but makes the results of limited use. In many cases (4.4) is not satisfied, so that we need to examine more general nonlinear behavior. To develop some intuition, let us consider some specific cases, for example, the equation

f(t) = 1 + ∫_0^t f²(s) ds.   (4.17)

In order to use the arguments in Theorem 4.1, we would need a constant L such that … , that is, … . This is clearly impossible for arbitrary y and z, so (4.17) does not satisfy a condition of the form (4.4). However, we note from the structure of the proof of Theorem 4.1 that the Lipschitz condition is applied only to bound the term … . If we knew that |f_n(t)| ≤ B for all n, we could use L = 2B, and the proof would go through as before. Since we expect that, at least in some neighborhood of the origin, the iterates f_n(t) remain bounded, we will be able to prove existence and uniqueness in some region 0 ≤ t ≤ T, with T > 0, if we can show this. The actual solution of (4.17) can be found by differentiating and solving the resulting differential equation, which gives

f(t) = 1/(1 − t).
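The local character of this example is easy to observe numerically. The sketch below (Python is used purely for illustration and is not part of the text; it assumes the kernel f² and the blow-up solution 1/(1 − t) discussed above) runs the Picard iteration (4.2) on a subinterval of [0, 1):

```python
import numpy as np

def picard_iterates(n_iter=25, T=0.5, n=2001):
    # Iterate f_{k+1}(t) = 1 + \int_0^t f_k(s)^2 ds on [0, T], with T < 1,
    # since the exact solution 1/(1 - t) blows up at t = 1.
    t = np.linspace(0.0, T, n)
    h = t[1] - t[0]
    f = np.ones(n)                                   # f_0(t) = g(t) = 1
    for _ in range(n_iter):
        integrand = f ** 2
        # cumulative trapezoidal rule for the Volterra integral
        integral = np.concatenate(
            ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * h / 2)))
        f = 1.0 + integral
    return t, f

t, f = picard_iterates()
err = float(np.max(np.abs(f - 1.0 / (1.0 - t))))     # compare with 1/(1 - t)
```

On [0, 0.5] the iterates settle quickly; on intervals approaching t = 1 they grow without bound, in line with the purely local existence result.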

A solution exists in 0 ≤ t < 1, but not in any larger interval; that is, existence and uniqueness holds only locally near t = 0. In the general case, we can obtain similar results if we can show that there exists some interval 0 ≤ t ≤ δ such that the successive iterates remain bounded for t ≤ δ.

THEOREM 4.2. Consider (4.1) with g(t) continuous in 0 ≤ t ≤ T and the kernel K(t, s, u) continuous in all variables. Assume that there exist constants α, β, L such that
(i) α < g(t) < β, 0 ≤ t ≤ T;
(ii) for all 0 ≤ s ≤ t ≤ T and α < u < β, … ;
(iii) for all 0 ≤ s ≤ t ≤ T and α < y < β, α < z < β, the kernel satisfies the Lipschitz condition … .
Then there exists a δ > 0 such that (4.1) has a unique continuous solution in 0 ≤ t ≤ δ.

Proof. Because of (i) and (ii), we can find a constant C such that … . Next, we choose a positive number d such that … . This can always be done because of the strict inequalities in (i). Let … . Defining f_n and φ_n as in (4.2) and (4.5), we can now show by induction that for n = 1, 2, …, … . Assume that (4.24)–(4.26) hold for 1, 2, …, n − 1. Then


where the last step follows because f_{n−1} and f_{n−2} satisfy (4.24), so that the Lipschitz condition (4.20) can be applied. Thus

and (4.25) is satisfied. Also,

and (4.26) holds for |f_n(t) − g(t)|. Finally, since for 0 ≤ t ≤ δ,

(4.22) and (4.26) imply that

For n = 1,

so that (4.24) and (4.26) are satisfied. Also, … , and (4.25) holds for n = 1. This completes the inductive argument proving (4.24)–(4.26). The conclusion from this part of the argument is that, in 0 ≤ t ≤ δ, … for all n. This bounds the iterates, and the rest of the proof is then completely analogous to that of Theorem 4.1. The details are left to the reader.

Exercise 4.2. Complete the argument to prove Theorem 4.2.
Exercise 4.3. Show that for (4.17) all conditions of Theorem 4.2 are satisfied. Find a value of δ for which a continuous solution can be guaranteed to exist.
Exercise 4.4. Investigate the existence and uniqueness of the solution of … near t = 0.


Theorem 4.2 is sufficiently general to guarantee the local existence and uniqueness of the solution under a wide variety of circumstances.

Example 4.1. Let … , where k(t, s) is continuous for 0 ≤ s ≤ t ≤ T and h(u) is continuously differentiable in some interval α = g(0) − c ≤ u ≤ g(0) + c = β. Then Theorem 4.2 holds. To see this, note that

so that … , where … . We can therefore use

to satisfy (4.20). While Theorem 4.2 can be used to prove the existence and uniqueness of the solution near t = 0, it may yield fairly small values of δ, especially if rough estimates are used.

Example 4.2. Consider the solution of

in 0 ≤ t ≤ 1. If we pick α = 1/2, β = 2, then conditions (i) and (ii) of Theorem 4.2 are satisfied. Also, for 0 ≤ t ≤ 1,

so that (4.21) is satisfied with C = e. With 1/2 < y < 2 and 1/2 < z < 2,

so that L in (4.20) can be taken to be 4e. Therefore, we must find a d > 0 such that … . A value of d = 0.07 satisfies this, and we conclude that (4.33) has a unique solution in 0 ≤ t ≤ 0.07.


Exercise 4.5. Find a value for δ such that the equation

has a unique continuous solution in … .

4.3. Properties of the solution. The qualitative results developed for the linear case in § 3.3 can be extended to nonlinear equations if some additional assumptions are made. The smoothness result, Theorem 3.6, is easily extended to the nonlinear case.

THEOREM 4.3. If g(t) is p times continuously differentiable in [0, T] and K(t, s, u) is p times continuously differentiable with respect to all three arguments in 0 ≤ s ≤ t ≤ T, −∞ < u < ∞, then the solution of (4.1) is p times continuously differentiable.

Proof. The proof follows immediately from successive differentiation of (4.1).

To extend the comparison results and the theorems on the asymptotic behavior of the solution to nonlinear equations is rather difficult. The problem lies in the fact that not only do we need a characterization of the kernel (e.g., negative, increasing, etc.), but we must also say something about the type of nonlinearity. Some progress can be made if certain monotonicity properties are assumed. This is illustrated in the next set of theorems.

THEOREM 4.4. Assume that the conditions of Theorem 4.1 hold. Let f(t) be the continuous solution of (4.1) and let F(t) be another continuous function satisfying

where G(t) and H(t, s, u) are continuous in all arguments, and the following conditions hold:
(i) |g(t)| < G(t), 0 ≤ t ≤ T;
(ii) for all functions z_1(t), z_2(t) such that … , the inequality … holds for all … .
Then

Proof. Because of the continuity requirements, |f(0)| < F(0). Now assume that (4.37) is not satisfied. Then there must exist some τ ≤ T such that


|f(τ)| = F(τ). From (4.1) and (4.34)

Putting t = τ then gives

But G(τ) > |g(τ)|. Also, for 0 ≤ t ≤ τ, continuity implies that F(t) ≥ |f(t)|. Therefore (4.36) shows that both parts of the right-hand side of (4.38) are positive. This contradiction proves (4.37).

The assumption of strict inequality |g(t)| < G(t) was made mainly to simplify the arguments. It can be removed without much difficulty.

Exercise 4.6. Show that if (i) |g(t)| ≤ G(t), (ii) F(t) depends continuously on G(t), (iii) the other conditions of Theorem 4.4 are satisfied, then …

Example 4.3. Consider the equation

If we choose G(t) = 1 and … , then the conditions of Theorem 4.4 (with Exercise 4.6) are satisfied. Since the equation

obviously has a positive solution, we can omit the absolute value sign to get

This equation can be solved easily by differentiation to give

Consequently, the solution f(t) of (4.39) satisfies … .

The Lipschitz condition (4.4) is sufficient for the nonlinear problem to be well-posed.

THEOREM 4.5. Let the conditions of Theorem 4.1 hold, and let f(t) be the continuous solution of (4.1). Let f̃(t) be an approximate solution to equation (4.1) such that … , and let R = max |r(t)|. Then … .

Proof. Subtracting (4.1) from (4.40) and using the Lipschitz condition (4.4), we get … . The result (4.41) then follows by comparing this equation with … .

THEOREM 4.6. Assume that the conditions of Theorem 4.1 are satisfied. Let f(t) be the solution of (4.1) and f̃(t) be the solution of … , where Δg and ΔK are bounded for all 0 ≤ s ≤ t ≤ T, −∞ < f < ∞, and K and ΔK satisfy a Lipschitz condition of the form (4.4). Then … .

Exercise 4.7. Show that … .

To get analogues of Theorems 3.8 and 3.9 we again take a simplified equation … (4.44). As in the discussion preceding Theorem 3.8, the more general case can normally be reduced to (4.44).

THEOREM 4.7. Assume that the kernel satisfies the following

T]. THEOREM 4. Assume that the conditions of Theorem 4.45) Therefore (4. The crucial assumption is that (4.6 are satisfied. Since /(O) = 1 and f(t) is a continuous function. Then. Furthermore. thereby proving (4.7. But (4. If the solution were to change sign somewhere in [0. s. from (4. .NONLINEAR EQUATIONS OF THE SECOND KIND 61 conditions: (i) for all u and (ii) for allu^Q and 0«£ s ^ t ^ T. Proof.9.44) From assumption (ii) while from (4.48) becomes This contradiction shows that f(t) cannot change sign in [0. the function K(t.49) holds for every F(t) bounded away from zero. Then if the limit exists. assume that for every a > 0 we have for every F(t) satisfying a^F(r)^l. there would be points ^ and t2 such that /(fj) = 0 and f(t) <0 for t: < t ^ t2. u) is a nondecreasing function of t.47). The arguments are essentially the same as in Theorem 3. T].46) then implies that f(t)^l 1 for 0 ^ t «£ t0. Then Proof. there is some interval 0^t*sf 0 for which /(f)>0.

8.4. s.8.13. Assume that (i) g(0 is continuous (i. (ii) K(r. Then (4. u) is a continuous function in (iii) the Lipschitz condition is satisfied for O^s^t^T and all y and z. but we will not pursue this matter here. Show that the equation has a unique continuous solution for all OsSf<°° satisfying and 4.52) makes it possible to generate the same inqualities. For less restrictive conditions similar results can be obtained at the expense of more complicated arguments. Proceed as in the proof of Theorem 3.13 with K replaced by L and K(t.e. (iv) p(t. every component is continuous).1. s.8. s)h(s). Consider the system of equations \ where we use the notation in § 4.. s. Unbounded kernels and systems of equations. Exercise 4.7. Exercise 4. THEOREM 4.9. The Lipschitz condition (4. Consider the equation where (i) g(f) is continuous in (ii) K(t. The extension of the results in the foregoing sections is a relatively simple matter if we assume the stringent conditions of Theorem 4.10. THEOREM 4.62 CHAPTER 4 Exercise 4. Complete the proof of Theorem 4. h(s)) instead of k(t. u) is a continuous function for . Complete the proof of Theorem 4.51) has a unique continuous solution in O^t^T. s) satisfies conditions (iii)-(vi) of Theorem 3.1. Proof.9.

55) has a unique continuous solution.54).50).9 is in the study of integrodiflferential equations.53) has a unique continuous solution in Proof. Use the method of successive substitutions in Theorem 4. Example 4. K. Consider the integrodifferential equation where H and K are continuous functions satisfying the Lipschitz conditions Integrating (4.NONLINEAR EQUATIONS OF THE SECOND KIND 63 (iii) the kernel satisfies the Lipschitz condition where the norm is as defined in (3. The resolvent equation. we see that Therefore.4.1.60) with the vectors K. Complete the proof of Theorem 4. Then (4.54) is satisfied with and (4. One immediate application of Theorem 4. we obtain the system Making the connection between H. replacing absolute values with norms.59) and (4. 4.5. z in (4. (4. f and h in (4.11. y. there is no immediate analogue of Theorem 3. For nonlinear equations.55) and using /(O) = a.9. Exercise 4. All arguments hold as before.3. expressing the solution in terms of a .

we can write where Using now (3.7 gives . T] and T(t.64 CHAPTER 4 resolvent kernel. THEOREM 4.61) has a solution. s) be the resolvent kernel for k(t. s) as given in (3.22).21). a resolvent equation can be used to establish a connection between certain nonlinear equations and related linear forms.10. Consider the nonlinear equation Let F(f.61) has a unique continuous solution on [0. Let F(t) be defined by Assume that all functions involved are continuous and such that (4. s) is a continuous function. Then f(t) satisfies Proof. Since we are assuming that (4. Nevertheless. we have Applying the results of Exercise 3.

if the equation can be considered a small deviation from the linear case.62) and (4.62) and (4. When the nonlinearity in (4. s) need only be.63). For details see Tricomi [229]. the proof of the theorem is completed. and substituting. Equations (4. then it is seen easily that so that (4.63) are sometimes called the variation of constants formulas.65) gives the first-order perturbation effect of the nonlinearity. Notes on Chapter 4.1.63) gives Taking fQ(t) = F(f). The use of the Lipschitz condition in Theorem 4. A very extensive discussion and many results on nonlinear equations.5. Sato [218] also discusses existence and uniqueness and gives some comparison theorems. square integrable. Substituting this into (4. we have If H is Lipschitz continuous in its second argument. that is. is simple and is used in most texts. it is not too difficult to relax the uniform condition (4. although restrictive. say.4) to where a(t.NONLINEAR EQUATIONS OF THE SECOND KIND 65 Since this is the same as (4. including the variation of constants method. Actually. Consider the equation where e is a small constant. . are in the book by Miller [190]. Example 4. A treatment of local existence and uniqueness for non-Lipschitz kernels is given in Corduneanu [70].61) is weak. variation of constants in conjunction with the method of successive approximations can yield some elementary perturbation results.

This page intentionally left blank .

1) with respect to t. s) and dk(t. we obtain If k(t. t) does not vanish in 0=^f =^T. Assume that (i) k(t.1) can then be answered by considering (5. tion Equations with smooth kernels. there are two types of equations of the first kind which commonly occur in practical applications: (1) the kernel is a well-behaved (i. otherwise no solution to (5. Consequently we will need to restrict our attention to special cases.1. and (2) the kernel is unbounded at s = t. dividing (5. In the first section we consider equations with smooth kernels with emphasis on the linear case. the theory of integral equations of the first kind is much less extensive than the theory for equations of the second kind. The question of the existence and uniqueness of the solution of (5. continuous.1) can be converted into an equation of the second kind by differentiation. t) yields a standard Volterra equation of the second kind. If we differentiate (5. In the second section we consider Abel equations in various forms.e. t) does not vanish anywhere in (iii) g(0) = 0. then clearly it is necessary that g(0) = 0.. In general. (iv) g(t) and g'(t) are continuous in 67 . THEOREM 5. s)/dt are continuous in (ii) k(t.2). then.2) by k(t.1) can exist. We consider first the linear equa- If the kernel and the solution are to be bounded.Chapter J Equations of the First Kind Broadly speaking. differentiable) function of all arguments. Abel's equation is an example in the second category. 5.1. (5. Formally.

(b) Show that if g(f) is continuous and g'(f) bounded with possible simple jump discontinuities.2) can be written as Integrating this.1) must satisfy (5. Consider. Exercise 5. shows that f(t) satisfies (5. Also. but not necessarily continuous. and using the assumption g(0) = 0. The condition that k(t.s)/ds are continuous in O ^ s ^ f ^ T .1 still holds. This solution is identical with the continuous solution of (5. But (5.1) has a unique bounded. An alternative method for reducing (5.2) is equivalent to the equation where Thus. equation (5. T] is essential. (a) Show that if condition (i) in Theorem 5.1) can have only one continuous solution. then the conclusion of Theorem 5.1) This form indicates that Theorem 5.2).1 are satisfied and there exists a unique continuous f(t) satisfying (5.68 CHAPTER 5 Then (5. Proof.1). solution. all conditions of Theorem 3.1 is replaced by (i) k(t. i) does not vanish anywhere in 0=Sf ssT.1 can be rewritten using somewhat different conditions on k(t. for (5. since any solution of (5.3) and (5. Since k(t. We will leave this as an exercise.1.2).1) to an integral equation is to integrate by parts.1) has a unique continuous solution. s) and d(t. then equation (5.3). s) and g(f). for example.2). With we obtain from (5. t) may not vanish at any point in [0. equation (5. By direct substitution it . which violates the condition at the single point t = 0.

(ii) the function 8K(t. a brief comment on the nonlinear case i is in order.7) gives which is not a standard equation of the second kind. The equation obviously has no solution. s) and g(f) are sufficiently differentiable. What conditions must g(t) satisfy for (5. The situation here is complicated by the fact that no solution may exist even for some rather simple examples. Differentiation of (5. t) = 0 for all 0«s t^ T.7) into an equation of the second kind. u)/dt are continuous for all M<o°. u) and BK(t. s. Assume that (i) K(t. so that the equation has an infinite number of continuous solutions. However. Exercise 5. then (5. THEOREM 5.1) is equivalent to an equation of the second kind. s.2. s. Show that if k(t. even though the kernel is constant with respect to r and s. and (iii) the nonlinear equation has a unique solution x for every y and all . and if k(t.EQUATIONS OF THE FIRST KIND 69 is easily verified that satisfies (5. A set of conditions sufficient to guarantee a unique continuous solution is given in the following theorem.1) to have a unique continuous solution under these circumstances? Most of our subsequent discussion will be limited to linear equations. and Lipschitz-continuous with respect to the third argument. The difficulty also appears when we try to convert (5.6) for any constant c. u)ldt satisfies a Lipschitz condition of the form for all y and z.2.

We will only sketch the required steps. then conclude that this solution satisfies (5. Exercise 5. and Q^t^T.10) to (5. Thus.9) has a unique solution.2. equation (5. Conditions (i). Extend Theorem 5.2.9) with some arbitrarily chosen continuous function f0(s).2 can be relaxed in various ways. Because of condition (iii). showing that it has a unique continuous solution. u)/dt and g'(f) satisfy conditions similar to those in Theorem 4.7) and that it is the only solution of (5.12). From here we proceed as in Theorem 3. We consider the differentiated form (5. showing that there exists a continuous solution in [T1} 2Tj]. Then (5. fn(i) is continuous. We proceed as in Theorem 5. (v) g(0) = 0. (ii) and (vi) of Theorem 5. Complete all steps in the proof of Theorem 5.1. t)^0 in the linear case.7). we get Using condition (iv) again.8). Condition (iii) is needed to eliminate such obvious counterexamples as (5. we have If we restrict t to the interval [0. the existence and uniqueness of the solution follows by the usual contraction mapping arguments.2.9).9). Tj] such that LTJO < 1. Consider the method of successive approximations applied to (5.12) defines a unique sequence of continuous functions. s. Because of condition (iv) and the various assumptions on K and g. for every given continuous / n _i(f). The only thing new is to show that the nonstandard equation of the second kind (5. Applying now (5.12) uniquely defines fn(t). (5. .2 to the case where dK(t. and so on. (vi) g(f) and g'(t) are continuous in 0=^f«*T.7) has a unique continuous solution.2. Exercise 5.4. Condition (iv) plays the same role as does k(t.70 CHAPTER 5 (iv) there exists a constant 0 > 0 such that for all y and z. We want to show that the sequence fn(t) converges to a continuous function which is the solution of (5. To do so we can use the method of continuation as in Theorem 3. Proof.3.

1) is not well-posed. It is a common observation that problems which are not well-posed can be troublesome numerically. The solution f(t) depends. We will return to this point in Chapter 9. Qualitative properties of the solution of (5. can be established. we define then we can make a small change in g(f) which involves a large change in g'(t). Similarly. that is. Let us first take the case n = ^. for any continuous <p(i). In particular. amongst other things.2) or (5. Abel equations. analogous to those in § 3. for example.3.EQUATIONS OF THE FIRST KIND 71 Exercise 5. multiply both sides by (x — 0 1/2 and . We begin with the simple Abel equation As we will see.2.10. practical situations often lead to kernels in which the singularity is of the form (f-s)"14. 5. the perturbation result given in Theorem 3. Fortunately.5. Why this is so is easily seen from (5. but whose derivative is of order unity. Not everything goes through trivially. One way of expressing this observation is to say that (5. this equation has an explicit solution.5) and various theorems. small changes in the kernel can have a significant effect on the solution as well. Show that the equation has a unique continuous solution for all t^Q. If we measure the size of a function by the maximum norm. by adding to g(f) the function whose magnitude is of order e. Volterra equations of the first kind are no exception. does not hold for equations of the first kind. and consider In order to solve this equation. O<JUL<!.1) can be studied using either of the equivalent forms (5.2). The general case of equations of the first kind with unbounded kernels is difficult and very little work has been done on it. on the derivative of g(f).

we need two standard results.72 integrate from 0 to x. < 1. . Prove that the interchange of order of integration in (5.6. from (5. Then CHAPTER 5 Interchanging orders of integration on the left. First. on differentiating. we have The inner integral on the left can be evaluated explicitly Consequently. Here F denotes the gamma function.14) for general /x. The second result relates to the interchange of integrals. In general.22) is justified. /3. s) is a continuous function in 0=Sss*fs£x and a. y are constants such that then Exercise 5. for 0 «£ /u. To justify these manipulations and at the same time solve (5.17) or. it can be shown that if <p(t.

24) and (5.25) then follow immediately.14). after a few manipulations. If g(t) is continuous in 0<f «£T and where C^O and «<JUL. we find.14) has the solution This solution is continuous in 0 < t ^ T. Multiply (5.24) into the left-hand side of (5. and satisfies as f —> 0. then (5.14) has a solution in the class of functions just stated.21) then From (5. Substituting (5. this solution is unique in the class of functions of the form f(t) = f 3 F(f). with |3>-1 and F(s) continuous. we can apply (5. The results (5. Suppose first that (5. THEOREM 5. .14). Then Since /(s) is assumed to be of the form s3F(s). Proof. where |3>-1 and F(t) is continuous. We still must show that a solution in the stated class exists.23) as x—»0.26) can be differentiated everywhere except at x = 0. Furthermore.EQUATIONS OF THE FIRST KIND 73 We can now find a closed form solution of (5.3. Hence (5. that it does indeed satisfy the equation.22) to interchange order of integration and get By (5.14) by (x-t)^ 1 and integrate from 0 to x. This can be done by verification.

28) is given by (c) Show that the solution of (5. Show that f(t) given by (5.9. in (5.14).14). and 0=s/u. . Exercise 5. by putting the form we get Referring back to (5. Exercise 5. differentiable g(f). (5.74 CHAPTER 5 Exercise 5. we set equation reduces to the From this. Show that. then Abel equations appear in the literature in various forms which are simply related to (5.26).28) are easily found.<l. For example. and (5.14) becomes If.29) is given by Because the solution of the simple Abel equation involves differentiation of improper integrals. for t>Q.14).7.24) the solutions to (5.27) is given by (b) Show that the solution of (5.24) satisfies (5.8. (a) Show that the solution of (5. Hint: Show first that if G(t) is continuous and y>0. making the transformation of variables equation (5. the formulas given in the following exercise are frequently useful.27).

4. Show that if k(t. Then every solution of (5. s ) is continuous in O^s^t^T. then h(t. t) does not vanish for any t. we obtain where The change of variables t=s+(x-s)u the gives the from (5. .39) is continuous on O^s «=x ^ T.37) by (x-tY from 0 to x.10.39).37) in the class of functions t^F(t]. Exercise 5.22). then l and integrate Interchanging order of integration using (5. and that g(f) satisfies the condition (5. we consider the generalized Abel equation under certain smoothness assumptions on k(t.3. with |3>-1 and F(t) continuous. s) is differentiable.11. satisfies the equation where Proof. Exercise 5. As in Theorem 5.23) of Theorem 5. s) and g(t). show that h(x. THEOREM 5. we multiply (5. If k(t. s) in (5.EQUATIONS OF THE FIRST KIND 75 Next. Assume that k ( t . s) is also differentiate.3. f ) ^ 0 for O ^ f ^ T . Show that h(x.

Unfortunately.14) depends on g'(f) so that Abel's equation is not well-posed.76 CHAPTER 5 Exercise 5. as shown in Theorem 5. For the generalized Abel equation. analytical manipulations can be made which reduce the equations to simpler form. Show that. Tricomi [229] and Kowalewski [155] give brief accounts similar to the present chapter. This is essentially the only technique for studying the properties of equations of the first kind.4.11 are satisfied. of the general Abel equation is in [14].12. including smoothness results. A more thorough study. from (5. we have shown that for equations of the Abel type. but a reduction to an equation of the first kind with smooth kernel can be made. Most texts on integral equations dismiss Volterra equations of the first kind with smooth kernels with the comment that they can be reduced to equations of the second kind by differentiation. In the case of the simple Abel equation. To summarize. Consequently. the integrals involved are complicated and can usually not be evaluated analytically. numerical methods are usually required. Formulas for integrals involving factors of the form (r —«)""• and (t22 -tA s ) can be found in Gradshteyn and Ryzhik [121]. Abel equations have received a little more attention.37) has a unique continuous solution.33) that the solution of (5.24) can be found. This feature plays a significant role in numerical methods. These will be discussed in Chapter 10. no explicit inversion is known. Notes on Chapter 5. then (5. We also note. closed form solutions such as (5.9 and Exercise 5. .24) and (5. if the conditions in Exercise 5.

6) that this equation has a 77 . Some simple kernels.Chapter 6 Convolution Equations The applications in Chapter 2 show that many integral equations arising in practice have difference kernels of the form Equations with difference kernels are referred to as convolution equations. but they will not be considered here. The linear convolution equation of the second kind is Nonlinear convolution equations. Similar definitions hold for convolution equations of the first kind and integrodifferential equations. The situation is particularly simple when k(t — s) is a polynomial in (t — s). such as are also of interest. there exist many special results for convolution equations which cannot be extended to the general case.1.1. 6. Example 6. Because of the simplicity of the kernel. In this chapter we study some of the techniques for investigating the solution of convolution equations. The reduction to a differential equation is then straightforward and often yields an elementary solution. Find a solution of It is an elementary observation (see Theorem 3.

8). we get from (6.2) with that is. More generally for p5=l.5) it follows that so the solution of (6.11). Prove (6.6) is which indeed satisfies (6. Exercise 6. There is a close relation between differential operators and Volterra operators with kernels of the form (t — s}p.2. indicating that d2/dt2 is the left inverse of the Volterra operator with kernel (t — s). Show that Consider now (6. Exercise 6. Hence and From (6.4). For example. q^p. the equation If we assume that g(f) is sufficiently differentiable. A generalization of this example is immediate.78 CHAPTER 6 twice continuously differentiable solution. using .1.4) and (6.

If g(f) is continuous on [0. then a complete solution to (6.3.1. Differentiating once more gives the (n + l)st order ordinary differential equation with constant coefficients From (6.CONVOLUTION EQUATIONS 79 (6.11) into a system of differential equations.9). After simplifying a little we get the following result.1 to convert (6.14) show that If g(n+1) is sufficiently simple so that a particular solution to (6.12)-(6.15) can be found. T].8) and (6.11) is given by .10) is degenerate.11) can be obtained using elementary results on differential equations with constant coefficients. Find the solution of the equation Since a kernel of the form (6. Exercise 6. then the solution to (6. one can also use Theorem 1.11) we get while (6. THEOREM 6.

Proof. Theorem 3.1 guarantees that (6.11) has a unique continuous solution f(t). Equation (6.11) can also be written in an expanded form; define z_j(t) accordingly. Then, differentiating and using (6.16), (6.17), and (6.20), the z_j(t) satisfy (6.18) as well as the initial conditions (6.19), and are therefore the solution to that system. Equation (6.21) is then the desired result. Another equation immediately reducible to a system of differential equations is one whose kernel is a linear combination of exponentials, where the exponents are assumed to be all distinct. THEOREM 6.2. The solution of the equation

(6.25) is given by the formula above, where the y_i are the solution of the system of differential equations with the indicated initial conditions. Proof. This is an immediate consequence of Theorem 1.1. Obtaining a single differential equation with constant coefficients for (6.25) is a little more complicated; a closely related differential equation can be obtained with a slightly altered approach. THEOREM 6.3. The solution of (6.25) is given by the representation above, where the F_i satisfy the indicated system. Proof. We begin by defining, as in Theorem 6.2,

auxiliary functions as before. Then, and in general for j = 1, 2, ..., n, similar relations hold. Consider now the linear combination (6.32) and pick the c_j such that (6.33). If all the exponents are distinct and nonzero, then (6.33) has a solution, since the matrix of the system is essentially a Vandermonde matrix. Substituting this into (6.32) gives an equation of order n with constant coefficients for the solution f(t). In general, a closed form solution to either (6.15) or (6.27) will be possible only if g is sufficiently simple. One result which sometimes helps in this is the observation made in Chapter 3 that for difference kernels the resolvent kernel itself is the solution of a convolution equation; in particular,

R(t) is given by the corresponding convolution equation. Suppose now that k(t) is of polynomial form (6.10); combining this with the preceding observation, we get a simple result for the resolvent kernel. THEOREM 6.4. If k(t) is of polynomial form (6.10), then its resolvent kernel R(t) is the solution of a homogeneous linear differential equation with constant coefficients, with the corresponding initial conditions. Proof. The equation for R(t) is of the form (6.11) with g(t) given by (6.35). Then g^(n+1)(t) = 0, and the result follows on using (6.15) with this specific form of g(t); the initial conditions are those for (6.15). Closely related to the resolvent kernel R(t) is the so-called differential resolvent S(t), defined as the solution of the equation (6.38). By a simple change of variable, we see that S(t) also satisfies a closely related equation. But from (6.38) it follows that S(0) = 1; consequently,

since this is exactly the same equation as (6.36), it follows that the two functions coincide.

6.2. Laplace transforms. One of the primary tools for the solution and analysis of linear convolution equations is the Laplace transform. In this section we present, generally without proof, those aspects of Laplace transform theory most useful in the study of integral equations. Further details, as well as the proofs, can be found in books on the Laplace transform. For convenience we will use the usual notation for the convolution of two functions,

    (f_1 * f_2)(t) = ∫_0^t f_1(t - s) f_2(s) ds.

The Laplace transform of a given function f(t) will be denoted by f* or L(f); it is defined as

    f*(w) = ∫_0^∞ e^{-wt} f(t) dt,

where w is a complex number. This definition holds provided the integral in (6.42) is convergent, which is generally so for all w in the half-plane Re(w) >= w_0, for some w_0. In the half-plane of convergence f*(w) is an analytic function defined by (6.42); to the left of Re(w) = w_0, the Laplace transform f*(w) is defined by analytic continuation. The Laplace transform operator L has an inverse L^{-1}, given by (6.43), which is such that L^{-1}(L(f)) = f. The real number a in (6.43) has to be such that all singularities of f*(w) lie to the left of the path of integration. The usefulness of Laplace transforms in the solution of convolution equations arises primarily from the following standard result. THEOREM 6.5. Let f_1 and f_2 be two functions which are absolutely integrable over some interval [0, T] and which are bounded in every finite subinterval not including the origin. If, furthermore, L(f_1) and L(f_2) are absolutely convergent for w >= w_0, then L(f_1 * f_2) = L(f_1) L(f_2). In other words, the Laplace transform of a convolution is the product of the individual transforms.

Proof. This is a standard result which can be found in any book on the Laplace transform, for example [239]. Consider now the linear convolution equation of the second kind (6.46), which can be written in shorthand notation as f = g + k * f. Applying the Laplace transform to this, noting that L is a linear operator, and using Theorem 6.5, we have f* = g* + k* f*. Solving this equation for f* and applying the inverse Laplace transform gives the solution (6.49). While (6.49) gives the solution of (6.46) in terms of definite integrals, its practical use for finding the solution of integral equations is limited by the fact that the inverse Laplace transform is known only for some rather simple functions. Numerically, (6.49) is also not easy to use, since it involves integration in the complex plane; as we shall see, there are easier ways to solve Volterra equations numerically. Still, in some simple cases, (6.49) can be used to obtain a solution to (6.46). The main advantage of the Laplace transform method is that it allows us to obtain some qualitative properties of the solution in a simple and elegant way, as will be demonstrated in subsequent sections of this chapter. Example 6.2. Find the solution of the equation given in the text. From a table of Laplace transforms we find the required transforms; the solution then follows by inversion.
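The transform manipulation above can be carried out symbolically. A small sketch using sympy follows; the concrete data k(t) = 1, g(t) = 1 are our choice for illustration, not the book's Example 6.2, whose displayed equation is not reproduced here.

```python
import sympy as sp

t, w = sp.symbols('t w', positive=True)

# Assumed simple data: solve f(t) = 1 + \int_0^t f(s) ds, i.e. g = 1, k = 1.
g_star = sp.laplace_transform(sp.Integer(1), t, w, noconds=True)   # 1/w
k_star = sp.laplace_transform(sp.Integer(1), t, w, noconds=True)   # 1/w
f_star = sp.simplify(g_star / (1 - k_star))                        # f* = g*/(1 - k*)
f = sp.inverse_laplace_transform(f_star, w, t)
print(f_star)
print(f)   # exp(t); sympy may attach a Heaviside(t) factor
```

For this data the solution is f(t) = e^t, which is easily verified by substituting back into the original equation.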

Exercise 6.4. Use the Laplace transform method to solve the equation given in the text. Exercise 6.5. Show that, for arbitrary g(t), the solution of the indicated equation is given by the stated formula. Exercise 6.6. Use the Laplace transform method to solve the equation of the first kind given in the text.

6.3. Solution methods using Laplace transforms. Equation (6.49) can be used to obtain a solution only in very special cases for which g(t) and k(t) are quite simple. A more general expression involves only the Laplace transform of k; to derive it we use the fact that for convolution equations the resolvent satisfies a convolution equation. The resolvent kernel satisfies equation (6.35), which can be put in convolution form. Taking Laplace transforms and solving for R* gives an explicit expression, and substituting this into (3.25) gives the solution. In cases where g is complicated but k is relatively simple, this can be useful. Another closed form solution, useful for simple k(t) and general g(t), uses the differential resolvent. THEOREM 6.6. Let u(t) be the solution of the equation

(6.53). Then the solution of (6.54) is given by (6.55). This conclusion holds for all k(t) and g(t) for which the formal manipulations in the following argument are justified. Proof. Take Laplace transforms of (6.53) and (6.54); we find that the two transforms are related, and by Theorem 6.5 the relation can be inverted. We now use another elementary result on Laplace transforms, namely that, for sufficiently smooth functions y, the transform of a derivative can be expressed in terms of the transform of y. Applying this to (6.56) with y = g and p = 1, and then applying L^{-1}, gives the desired result (6.55). As indicated by Exercise 6.6, the Laplace transform method can also be used on equations of the first kind. Again, some manipulations are required to find general forms. We illustrate this by deriving the solution of the simple Abel equation

    ∫_0^t f(s) / sqrt(t - s) ds = g(t)

by this technique.
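The resulting Abel inversion can be spot-checked numerically. For g(t) = t the inversion gives f(t) = (2/pi) sqrt(t); this concrete pair is our assumption for illustration. Substituting s = t sin^2(u) removes the square-root singularity from the integral:

```python
import math

# Check that f(s) = (2/pi) sqrt(s) solves \int_0^t f(s)/sqrt(t-s) ds = t.
# With s = t sin^2(u):
#   \int_0^t f(s)/sqrt(t-s) ds = \int_0^{pi/2} 2 sqrt(t) sin(u) f(t sin^2(u)) du

def abel_lhs(f, t, n=20000):
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h                       # midpoint rule
        total += 2.0 * math.sqrt(t) * math.sin(u) * f(t * math.sin(u) ** 2)
    return total * h

f = lambda s: 2.0 * math.sqrt(s) / math.pi
print(abel_lhs(f, 1.0))   # close to g(1) = 1
```

The substitution used here is also how such weakly singular integrals are handled in practice, since a direct quadrature against the 1/sqrt(t - s) kernel converges slowly.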

Note that the kernel, although not bounded, satisfies the conditions of Theorem 6.5. Applying L to this equation, and noting the known transform of t^{-1/2}, we obtain (6.58). An immediate application of L^{-1} does not yield anything very useful; instead we introduce a new variable z(t) such that (6.59). From (6.58) this is the same as a product of transforms, and by Theorem 6.5 this can now be inverted to give (6.60). Finally, differentiating (6.60), we arrive, as expected, at the solution of the Abel equation. As a final example, consider the integrodifferential equation with constant coefficients of the form (6.62).

First, applying L to (6.62) and using (6.57), we obtain an algebraic equation for the transform, so that the solution of (6.62) follows by inversion. The manipulations in these examples are purely formal, and one needs to demonstrate their validity by showing that all necessary conditions are satisfied. Such a priori justification requires some analysis of the solution of the integral equations using the results of Chapter 3. Perhaps a more practical procedure is to use the Laplace transform method to arrive at a possible solution, then substitute it into the original equation to verify it.

6.4. The asymptotic behavior of the solution for some special equations. While the Laplace transform method can give explicit solutions for special integral equations, its usefulness becomes more apparent when we consider some qualitative properties of the solution. The results developed in Chapter 3 can of course be applied to convolution equations to get some elementary results. For example, Theorem 3.8 becomes: THEOREM 6.7. If the kernel k(t) in the equation (6.64) satisfies the stated bound, then the solution of (6.64) satisfies the corresponding estimate. Proof. This is an immediate consequence of Theorem 3.8. To obtain more far-reaching results the Laplace transform method is often useful. We will consider here only some linear convolution equations with certain special, but important, forms. First, consider (6.68). This equation is of some significance: when g = k, its solution is the resolvent kernel R(t), and, as already pointed out, it arises in practice as the renewal equation

discussed in Chapter 2. If (6.68) is a renewal equation, then k(t) represents a probability density and must satisfy (6.69) and (6.70). In many cases of interest this condition is fulfilled; consequently there is considerable interest in this form. We will therefore consider (6.68) subject to conditions (6.69) and (6.70). Taking Laplace transforms of (6.68) gives (6.71). Let us assume now that the right-hand side of (6.71) can be written as a rational function of w; we factor the denominator. Let us ignore the possibility of multiple roots (they introduce no essential complications) and order the roots by decreasing real part. From the definition of the Laplace transform and from (6.69) and (6.70) it follows that k*(0) = 1 and that |k*(w)| < 1 for all w with Re(w) > 0, or with Re(w) = 0 and Im(w) != 0. From this we conclude that beta_0 = 0 and that Re(beta_i) < 0 for i = 1, 2, ..., m. If (6.72) holds, and if we use a partial fraction decomposition to write (6.73), then c_0 can be obtained by

taking limits in (6.71) and using (6.72). If we now apply L^{-1} to (6.71) and use (6.74), we get an explicit representation of f(t). Since Re(beta_i) < 0 for i = 1, 2, ..., m, all terms but the first decay. The conclusion then is that, if f(t) is the solution of (6.68) subject to conditions (6.69) and (6.70), then, for large t, f(t) approaches a constant c_0 as t -> infinity. More precise and more extensive results can be obtained using various known properties of the Laplace transform. A set of closely related results, the so-called Tauberian theorems, are particularly useful. One of the simplest of these is the following. THEOREM 6.8. If f(t) >= 0 and its transform behaves like c/w^p near w = 0, with positive c and p, then f(t) has the corresponding power growth for large t. Proof. See Bellman and Cooke [26, p. 240]. To see the use of this theorem, consider the equation (6.80).
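The renewal-limit conclusion can be illustrated numerically. With k(t) = e^{-t} (a probability density with mean m = 1) and g(t) = e^{-t} -- our choice of data, not the book's -- the transform computation gives f*(w) = g*/(1 - k*) = 1/w, so the solution of f = g + k * f is identically 1, which is also the limiting value 1/m:

```python
import numpy as np

# Renewal equation f(t) = g(t) + \int_0^t k(t - s) f(s) ds with the assumed
# data k(t) = exp(-t), g(t) = exp(-t); exact solution f(t) = 1 = 1/m.

def solve_renewal(T=20.0, h=0.01):
    n = int(round(T / h))
    t = np.arange(n + 1) * h
    f = np.zeros(n + 1)
    f[0] = 1.0                                   # f(0) = g(0)
    for i in range(1, n + 1):
        k = np.exp(-(t[i] - t[:i + 1]))          # k(t_i - s_j), j = 0..i
        conv = h * (0.5 * k[0] * f[0] + np.dot(k[1:i], f[1:i]))
        f[i] = (np.exp(-t[i]) + conv) / (1.0 - 0.5 * h * k[i])  # trapezoidal step
    return f

f = solve_renewal()
print(f[-1])   # stays near the renewal limit 1
```

The computed solution hugs the constant limit, as the partial fraction analysis predicts; for other densities k the approach to 1/m is only asymptotic.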

Applying L to (6.80), subject to (6.81)-(6.83), gives an expression for f*(w); it follows that f*(w) satisfies the conditions of Theorem 6.8 with c = 1/m and p = 2, and we can conclude that f(t) grows linearly for large t, with the indicated coefficient. Exercise 6.7. If f(t) is the solution of the analogous equation in which p is a positive integer and k(t) satisfies conditions (6.81)-(6.83), show the corresponding behavior for large t.

Notes on Chapter 6. Because of their importance in applications, convolution equations have received a great deal of attention. The books by Bellman [25] and Bellman and Cooke [26] contain a wealth of material on the linear case. Various Tauberian theorems, including Theorem 6.8, are given in Bellman and Cooke [26]. The results on Laplace transforms quoted in this chapter can be found in standard texts, such as Widder [239]. The reduction of convolution equations to systems of differential equations, and its consequent use in obtaining approximate solutions, is studied in [33]. Nonlinear convolution equations have also been studied extensively; some representative papers on this subject are [40], [66], [105], [162], and [234].

PART B: NUMERICAL METHODS


Chapter 7

The Numerical Solution of Equations of the Second Kind

Since few of the Volterra equations encountered in practice can be solved explicitly, it is often necessary to resort to numerical techniques. There are many alternatives available as we shall see; we will concentrate here on the underlying ideas on which these methods are based. In order to bring out the essential ideas clearly and to avoid unnecessary complications, we use the simplest setting and consider the equation

f(t) = g(t) + ∫_0^t K(t, s, f(s)) ds,   (7.1)

under the conditions: (i) g(t) is a continuous function for 0 <= t <= T; (ii) the kernel K(t, s, y) is continuous for 0 <= s <= t <= T; (iii) the kernel satisfies the Lipschitz condition

    |K(t, s, y_1) - K(t, s, y_2)| <= L |y_1 - y_2|

for all 0 <= s <= t <= T and all y_1, y_2. As shown in Chapter 4, these conditions are sufficient to guarantee that (7.1) has a unique continuous solution. The analysis of the numerical methods will utilize these assumptions and, strictly speaking, holds only when they are satisfied. The algorithms can usually be applied to other equations as well, although not all conclusions are necessarily valid. For certain types of kernels it may actually be necessary to modify the procedures, as we shall see in the next chapter. We begin our discussion in this chapter by presenting a rather simple and intuitively reasonable method based on the trapezoidal integration rule. Methods based on more accurate integration methods are investigated next. The results suggest the possibility of using various standard numerical integration techniques for the approximate solution of Volterra equations.


To justify this, a general convergence theorem is presented in § 7.3. In §§ 7.4 and 7.5 we investigate the question of error estimates and numerical stability. A class of methods, closely related to the implicit Runge-Kutta methods for ordinary differential equations, is presented in § 7.6. A detailed numerical example in § 7.7 gives some practical evidence for the effectiveness of these methods. In § 7.8 we show how explicit Runge-Kutta methods, popular for the solution of ordinary differential equations, can be generalized to Volterra equations. Finally, in § 7.9 we present a short summary of various ideas that have been proposed but not treated in detail here.

7.1. A simple numerical procedure. Intuitively, we can compute an approximate solution to (7.1) by the following simple process. Suppose that for a given stepsize h > 0 we know the solution at the points t_i = ih, i = 0, 1, ..., n - 1. An approximation to f(t_n) can then be computed by replacing the integral on the right side of (7.1) by a numerical integration rule using values of the integrand at t_i, i = 0, 1, ..., n, and solving the resulting equation for the approximation to f(t_n). Since f(0) = g(0), the approximate solution can be computed in this step-by-step fashion. The particulars of the algorithm depend primarily on the integration rule we choose; to start, let us consider the very simple procedure one obtains by use of the (composite) trapezoidal rule. If we let F_n denote the approximate value of f(t_n), we can compute F_n by

with F_0 = g(0). The unknown F_n is defined by (7.3) implicitly, but for sufficiently small h the equation has a unique solution. In the linear case we can of course solve it directly for F_n; in the nonlinear case we would normally use some iterative technique to solve for F_n to within a desired accuracy. A formal error analysis of this procedure will be given later. For the moment let us proceed in a more empirical fashion and see what results we obtain for a simple test example. Example 7.1. The equation

has a known exact solution. The errors in the approximate solution by the trapezoidal method are shown in Table 7.1 for several stepsizes.
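The stepping scheme just described can be sketched as follows. The displayed equation of Example 7.1 is not reproduced above, so the sketch uses a stand-in test problem f(t) = 1 - ∫_0^t f(s) ds, with exact solution e^{-t}; formula (7.3) is taken in its standard trapezoidal form.

```python
import math

# Trapezoidal method for f(t) = g(t) + \int_0^t K(t, s, f(s)) ds:
#   F_n = g(t_n) + h [ K(t_n,t_0,F_0)/2 + sum_{i=1}^{n-1} K(t_n,t_i,F_i)
#                      + K(t_n,t_n,F_n)/2 ],
# solved for F_n at each step by fixed-point iteration (which converges
# for small h by the Lipschitz condition).

def trapezoidal_volterra(g, K, T, h):
    n_steps = int(round(T / h))
    t = [i * h for i in range(n_steps + 1)]
    F = [g(0.0)]
    for n in range(1, n_steps + 1):
        s = 0.5 * K(t[n], t[0], F[0]) + sum(K(t[n], t[i], F[i]) for i in range(1, n))
        Fn = F[-1]                      # start iteration from the previous value
        for _ in range(50):
            Fn = g(t[n]) + h * (s + 0.5 * K(t[n], t[n], Fn))
        F.append(Fn)
    return t, F

# Stand-in test problem: f(t) = 1 - \int_0^t f(s) ds, exact solution exp(-t).
t, F = trapezoidal_volterra(lambda x: 1.0, lambda tt, ss, y: -y, 1.0, 0.05)
err = max(abs(Fi - math.exp(-ti)) for ti, Fi in zip(t, F))
print(err)   # O(h^2): well below 1e-3 for h = 0.05
```

For a linear kernel the implicit equation could instead be solved directly for F_n, as the text notes; the fixed-point loop is kept here to match the general nonlinear case.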

[Table 7.1. Observed errors F_i - f(t_i) for Example 7.1 at t = 0.1, 0.2, ..., 1.0, for stepsizes h = 0.1, 0.05, and 0.025; the entries are of magnitude roughly 10^-4 to 10^-6 and decrease with h.]

From these results we can observe that the method apparently converges, that is, the approximate solution becomes more accurate as h decreases. Furthermore, we note that the error is approximately proportional to h^2. This O(h^2) dependence of the error is a reflection of the second order convergence of the trapezoidal method. Exercise 7.1. Use the trapezoidal method with h = 0.1, 0.05, 0.025 to compute an approximate solution to the equation given in the text. Compare with the exact solution to show the O(h^2) dependence of the error.

7.2. Methods based on more accurate numerical integration. The trapezoidal method is quite simple, but the results are of relatively low accuracy. This leads us to suspect that, if the integral in (7.1) is approximated by a numerical integration rule having a certain order of accuracy, then the approximate solution computed this way has the same order of accuracy. We shall see shortly that this conjecture is essentially true. If the conjecture is true and the accuracy of the approximate solution depends on the accuracy of the numerical integration, then for more accurate methods better integration rules must be used.
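The O(h^2) behavior noted above can be reproduced with a quick experiment. Since Example 7.1 itself is not reproduced here, the stand-in problem f(t) = 1 - ∫_0^t f(s) ds (exact solution e^{-t}) is used; for this linear kernel the implicit trapezoidal equation is solved directly at each step.

```python
import math

def max_error(h, T=1.0):
    """Max trapezoidal-method error for the test problem f(t) = 1 - (integral of f)."""
    n = int(round(T / h))
    F = [1.0]
    for k in range(1, n + 1):
        s = 0.5 * F[0] + sum(F[1:k])            # trapezoidal sum with K = -y
        F.append((1.0 - h * s) / (1.0 + 0.5 * h))
    return max(abs(F[i] - math.exp(-i * h)) for i in range(n + 1))

e1, e2, e3 = max_error(0.1), max_error(0.05), max_error(0.025)
print(e1 / e2, e2 / e3)   # ratios near 4 indicate second-order convergence
```

Halving h cuts the maximum error by a factor close to 4, which is the numerical signature of O(h^2) convergence.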

We assume therefore that we have an integration rule of the form (7.5), where phi(t) is any continuous integrand, h denotes the constant stepsize for the integration, and t_i = ih. The w_{ni} are called the integration weights. Because of the stepwise way of computing the solution, we must rely largely on integration rules with constant stepsize.(1) Within the restriction of constant stepsize we have the choice of using either Newton-Cotes or Gregory type integration rules. Using such a rule to replace the integral in (7.1), we are led to consider the numerical method (7.7). If, as is generally the case, the weights w_{ni} are uniformly bounded, then the equation has a unique solution for all sufficiently small h. Some restrictions and practical complications arise immediately. First we note that (7.7) holds only for n >= r, where r is some fixed integer; this reflects the fact that higher order integration rules require a minimum number of points. The values of F_1, F_2, ..., F_{r-1} can therefore not be obtained by (7.7) and must be computed some other way; F_0 is of course always taken as F_0 = g(0). How these starting values can be found will be discussed later, but apart from this (7.7) represents a viable computational procedure. For example, the fourth order Gregory formula has the weights (7.8) (for n >= 5). Thus (7.7) with these weights and r = 5 will be called the fourth order Gregory method for solving the Volterra equation. The value r = 5 arises because at least six points are required for (7.8) to hold; the values of F_1, F_2, F_3, F_4 need to be given to start. This does not affect the subsequent discussion in any significant way. Exercise 7.2. Find the weights for the sixth order Gregory formula. How many starting values are needed in (7.7)?

(1) It is of course possible to vary the stepsize during the computation, for example by some adaptive approach.

The use of the Newton-Cotes type formulas introduces a slight complication, since these rules involve some restriction on the number of points. For example, Simpson's rule (that is, the repeated Simpson's rule) can be applied only when n is even; for odd n some adjustment has to be made. One way is to apply the so-called three-eighths rule over four adjacent points and Simpson's rule over the rest of the interval. If the three-eighths rule is used on the points t_0, t_1, t_2, t_3, one gets the weights (for n >= 2) given above for n even and n odd. We will call this Simpson's method 1. If we use the three-eighths rule at the upper end instead, that is, on the points t_{n-3}, t_{n-2}, t_{n-1}, t_n, we get the weights shown: for n even, as in Simpson's method 1, and for n odd the mirrored pattern. We will refer to this second form as Simpson's method 2. In these formulas, as well as in subsequent discussions, delta_{ij} denotes the Kronecker delta. Both forms of Simpson's method require starting values F_0 and F_1. While intuitively there seems to be little difference between the two methods, we shall see later that Simpson's method 2 is preferable for computational purposes.
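Under our reading of the (partly garbled) weight formulas above, the three rules can be written out as follows; each reproduces the integral of a cubic exactly, which gives a quick consistency check.

```python
from fractions import Fraction

def gregory4_weights(n):
    """Fourth order Gregory weights (n >= 5): 3/8, 7/6, 23/24 at each end, 1 inside."""
    w = [Fraction(1)] * (n + 1)
    for i, c in zip((0, 1, 2), (Fraction(3, 8), Fraction(7, 6), Fraction(23, 24))):
        w[i] = w[n - i] = c
    return w

def simpson1_weights(n):
    """Simpson's method 1: for odd n, the 3/8 rule over t_0..t_3."""
    if n % 2 == 0:
        w = [Fraction(4, 3) if i % 2 else Fraction(2, 3) for i in range(n + 1)]
        w[0] = w[n] = Fraction(1, 3)
        return w
    w = [Fraction(3, 8), Fraction(9, 8), Fraction(9, 8), Fraction(3, 8)] + [Fraction(0)] * (n - 3)
    if n > 3:                                    # repeated Simpson over t_3..t_n
        for i in range(3, n + 1):
            if i in (3, n):
                w[i] += Fraction(1, 3)
            else:
                w[i] += Fraction(4, 3) if (i - 3) % 2 else Fraction(2, 3)
    return w

def simpson2_weights(n):
    """Simpson's method 2: the 3/8 rule moved to the upper end t_{n-3}..t_n."""
    return simpson1_weights(n)[::-1]

# check: with h = 1, each rule integrates s^3 over [0, 7] exactly (7^4/4)
for wfun in (gregory4_weights, simpson1_weights, simpson2_weights):
    q = sum(wi * i ** 3 for i, wi in enumerate(wfun(7)))
    print(wfun.__name__, q)   # 2401/4 in every case
```

Note that for even n the two Simpson variants coincide; they differ only in where the three-eighths patch is placed for odd n, which is exactly what will matter for stability later.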

Methods based on higher order Newton-Cotes formulas can be constructed in a similar way, but it becomes necessary to combine several different rules; the resulting complication limits the usefulness of this type of method. Exercise 7.3. Derive the weights for both forms of Simpson's method. Exercise 7.4. Investigate the construction of a method of the form (7.7) based on Newton-Cotes formulas of order at least five. This intuitive approach leads us to a number of different plausible methods for the approximate solution of (7.1). To justify them and to predict their relative usefulness requires a detailed error analysis.

7.3. Error analysis: convergence of the approximate solution. The methods discussed in the previous section determine an approximate solution only at the points t_i; thus we can talk only about the error at these points. In practice this is usually adequate. If it is necessary to produce an approximate solution in the form of a continuous function, one has several choices: one can either construct an interpolating polynomial using the computed values of the solution at the t_i, or one can use special methods yielding continuous functions directly. Algorithms for this latter approach differ somewhat in the practical details from the methods discussed here, but the underlying principles are very much the same. Let us therefore consider the set of values e_n = F_n - f(t_n), which we will call the discretization error. We are interested in the behavior of this discretization error as a function of the stepsize h. For any particular value of h the interval [0, T] is divided into N parts in such a way that Nh = T. This implies that h can take on only certain admissible values, namely such that h <= T and T/h is an integer. In the limit, h -> 0 and N goes to infinity in such a way that Nh = T; since this particular way of approaching the limit occurs frequently in subsequent discussions, we will use for simplicity the notation lim_h for it. DEFINITION 7.1. A method of the form (7.7) is said to be a convergent approximation method (for some equation or class of equations) if lim_h max_n |e_n| = 0. DEFINITION 7.2. If there exists a number M < infinity, independent of h, such that, for all admissible h,

max_n |e_n| <= M h^p, and if p is the largest number for which such an inequality holds, then p is called the order of convergence of the method. The order of convergence is, not surprisingly, closely connected with the accuracy of the numerical integration employed. DEFINITION 7.3. Let f be the solution of (7.1). Then the function delta(h, t_n), the amount by which the numerical integration rule fails to reproduce the integral in (7.1) for the exact solution, is the local consistency error for (7.1). The local consistency error is a measure of the accuracy with which, in the context of a given equation, the numerical integration rule represents the integral. DEFINITION 7.4. Let C be a class of equations of the form (7.1). If for every equation in C the local consistency error tends to zero in the limit, then the approximation method (7.7) is said to be consistent with (7.1) for the class of equations C. If for every equation in C there exists a constant c (independent of h, but generally dependent on K and f) such that the local consistency error is bounded by c h^p, then the method is said to be consistent of order p in C. Before proceeding with the statement and proof of a convergence theorem, we need the following basic result. THEOREM 7.1. Let the sequence xi_0, xi_1, ... satisfy the stated recursive inequality; then the stated exponential bound holds. Proof. The result is easily established with an inductive argument; we omit the details. From (7.17) it follows that, with A = hK, the bound takes an exponential form. Exercise 7.5. Prove Theorem 7.1.

The main convergence theorem follows by a simple application of this preliminary result. THEOREM 7.2. Consider the approximate solution of (7.1) by (7.7), and assume that (i) the solution f(t) of (7.1) and the kernel K(t, s, y) are such that the approximation method is consistent of order p with (7.1); (ii) the weights satisfy a uniform bound W; (iii) the starting errors F_i - f(t_i), i = 0, 1, ..., r - 1, go to zero as h -> 0. Then the method is a convergent approximation method. Proof. Putting t = t_n in (7.1) and subtracting from (7.7), we obtain an equation for the errors. Using the Lipschitz condition (7.2) and assumption (ii), and choosing h < 1/LW, we can apply Theorem 7.1 for n = r, r + 1, .... Since r is fixed, (7.18) then gives a bound of the form (7.19). Since by assumption both the starting errors and the local consistency error go to zero in the limit, it follows that lim_h max_n |e_n| = 0, and the proof is complete. If there are no starting errors, the bound shows that the order of convergence is at least p. One further conclusion can be drawn from (7.19): the effect of the starting errors is attenuated by a factor of h, showing that these methods are

relatively insensitive to the starting errors. In fact, we need only a mild condition on the starting errors to achieve an order of convergence p for a method whose order of consistency is p. The main use of (7.19) is in the qualitative information it carries: it establishes the convergence of a class of approximation methods and shows the dependence of the discretization error on the consistency error and the starting errors. Exercise 7.6. Show that if f and K are sufficiently smooth (that is, sufficiently differentiable), then the approximation method (7.3) is convergent with order 2. Exercise 7.7. Show that both Simpson's method 1 and Simpson's method 2 are consistent of order 4 for sufficiently smooth f and K; consequently, in the absence of starting errors, both methods have order of convergence 4.

7.4. Error estimates and numerical stability. While the order of convergence of a method is useful in establishing its potential efficiency, one frequently wants to have a better idea of how the error can be expected to behave. To use (7.19) to establish usable bounds on the error in the approximate solution is usually not very profitable. To compute the right-hand side of (7.19) we need to bound the consistency error; this requires some knowledge of the properties of the unknown solution f(t), and it is consequently rather difficult to carry through. Furthermore, even if |e_n| can be so bounded, the results may be quite pessimistic, since (7.19) allows for a rather large error even when h is quite small; the actual error may be much smaller. Of course, (7.19) is only a bound, but one needs to consider whether there are some cases when the bound is realistic. If we consider (7.19) for fixed h, we see that the bound is essentially proportional to e^{WLt}. If the actual solution grows at a comparable rate, then there is no problem, since the relative error will remain small. If, however, the solution grows more slowly than the error, then the method may have poor accuracy even though it is of high order; if WLt_n is much larger than one, the bound may be several orders of magnitude larger than the actual error. This phenomenon, which we call numerical instability, is well known in the solution of ordinary differential equations. As we shall see, this type of instability also occurs with Volterra equations. A closely related problem is the question of the growth of the error: it is of some practical concern to know whether the errors will vary smoothly from point to point or whether they oscillate more or less unpredictably within the bounds given by (7.19). To answer this question we need to obtain an error estimate, not just a bound.
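The pessimism of such bounds is easy to exhibit. On the decaying test problem f(t) = 1 - ∫_0^t f(s) ds (exact solution e^{-t}; our stand-in, with Lipschitz constant L = 1), the actual trapezoidal error decays with the solution while a bound of the schematic form C e^{WLt} grows exponentially:

```python
import math

# Trapezoidal solution of f(t) = 1 - \int_0^t f(s) ds on [0, 10].
h, T = 0.05, 10.0
n = int(round(T / h))
F = [1.0]
for k in range(1, n + 1):
    s = 0.5 * F[0] + sum(F[1:k])
    F.append((1.0 - h * s) / (1.0 + 0.5 * h))

L, W = 1.0, 1.0                    # Lipschitz constant and weight bound
C = h * h / 12.0                   # stand-in consistency-error constant (assumed)
for i in (40, 120, 200):
    t = i * h
    actual = abs(F[i] - math.exp(-t))
    bound = C * math.exp(W * L * t)    # schematic form of a bound like (7.19)
    print(f"t={t:5.2f}  actual={actual:.2e}  bound={bound:.2e}")
```

By t = 10 the schematic bound exceeds the observed error by many orders of magnitude, which is exactly the situation described in the text.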

In numerical analysis the term "stability" is used with several different meanings; corresponding to these different definitions one obtains alternative ways of analyzing numerical stability. Some authors make stability essentially synonymous with convergence: with an appropriate definition of the term one can show that, rather generally, consistency and stability imply convergence. This is not the definition we want to use here, since all reasonable methods for Volterra integral equations of the second kind are convergent if they are consistent. What we are concerned with here are cases where, for fixed h, the error can grow rapidly with t, while the solution itself does not grow very fast at all. We therefore use the term numerical instability for such a phenomenon. Actually, this description is quite imprecise, and the phrase "the error can grow rapidly" can be defined rigorously in several ways. The method used in this section is the most general, but requires some detailed manipulations; a somewhat less technical approach will be described in § 7.5. To simplify the arguments we consider only the linear case K(t, s, f(s)) = k(t, s)f(s); the nonlinear case can be treated in much the same manner, yielding similar conclusions at the expense of additional work. With the assumed linearity, the equation defining the error becomes (7.21). The starting errors e_0 = eta_0, e_1 = eta_1, ..., e_{r-1} = eta_{r-1} will be determined by the starting method used. The overall error arises from two sources: one contribution coming from the propagation of the starting errors and a second one due to the consistency error. These two components behave quite differently, so that for further discussion it is convenient to separate them. Let e_n^c and e_n^s denote the respective solutions of (7.22) and (7.23). We call e_n^c the accumulated consistency error and e_n^s the accumulated starting error. Their sum is the discretization error and is a solution of (7.21).

We now define more precisely what is meant by saying that the error varies smoothly. For each admissible h, let xi_0(h), xi_1(h), ..., xi_N(h) be a sequence of numbers. We say that this set of sequences has an expansion of order p if there exists a continuous function x(t) such that, for n = 0, ..., N, xi_n(h) = h^p x(t_n) + o(h^p). In particular, we say that the local consistency error has an expansion of order p (for a given problem) if there exists a continuous function Q(t) such that (7.26) holds. If the discretization error has an expansion, then it varies smoothly from point to point in the sense that the scaled quantity e_n/h^p tends to a limit which is a continuous function. The existence of such an expansion is closely related to a similar expansion for the numerical integration formula used. THEOREM 7.3. Consider the linear equation (7.28). If the local consistency error has an expansion of the form (7.26) with order q > p, then the accumulated consistency error also has an expansion, of order P = min(q, p + 1), whose leading coefficient e(t) is the solution of (7.29), provided that all functions involved in (7.28) are sufficiently smooth so that the method in question is consistent for (7.28) with order at least one. Proof. Subtract (7.22) and (7.26), where delta(h, t) is the local consistency error for (7.28).

Then, from (7.29) and (7.30), and using the fact that the weights represent a numerical integration rule, we get the required form; since by assumption delta(h, t) is at least O(h), it follows from Theorem 7.1 that the remaining terms are of higher order, which was to be proved. Thus, for sufficiently smooth f(t) and k(t, s), the accumulated consistency error behaves smoothly. It is also a straightforward exercise to show that for the methods suggested in § 7.2 (and indeed for all standard numerical integration rules) the local consistency error has an expansion. Exercise 7.8. Show that, for sufficiently smooth f(t) and k(t, s), the trapezoidal method described in § 7.1 has a local consistency error which has an expansion with p = 2 and q = 4. The accumulated starting error, however, does not always behave as nicely, and e_n^s may, in general, exhibit significant oscillations. Whether or not this occurs seems to depend on the pattern in the weights w_{ni}; more specifically, it is governed by the following concept. DEFINITION 7.6. A method of type (7.7) is said to have a repetition factor p if p is the smallest integer such that w_{n,i} = w_{n-p,i} for i = 0, 1, 2, ..., n - q, where q is an integer independent of n. According to this definition the trapezoidal method, the fourth order Gregory method, and Simpson's method 2 have a repetition factor of 1, while Simpson's method 1 has repetition factor 2. THEOREM 7.4. Consider the numerical method (7.7) under the assumptions: (i) the method has repetition factor 1; (ii) the method is convergent with at least order 1 for equations (7.35) below; (iii) the starting errors eta_i = F_i - f(t_i) can be written in the expanded form assumed above. Then the accumulated starting error e_n^s has an asymptotic expansion of the form

EQUATIONS OF THE SECOND KIND 107

where the C_j, j = 0, 1, ..., r − 1, are independent of n, the V_j's are constants whose value will be exhibited in the proof, and e(t) is the solution of the associated equation.

Proof. Since the method has repetition factor 1, the weights w_nj are independent of n, provided n ≥ r + q. We can therefore write w_nj = V_j. From (7.23) we see that e_n^s is just a linear combination of the τ_i. Let us write the assumed expansion, then substitute this into (7.35), exchange orders of summation, and compare the resulting equation with (7.37). Equating coefficients, we see that the terms match, and (7.34) follows.

We see then that for methods having a repetition factor of 1 both the accumulated consistency error and the accumulated starting error have an asymptotic expansion, and therefore the total error behaves in a systematic way. For methods with repetition factor larger than one this is not necessarily so, and significant oscillations may appear.

Next, let us consider the question of numerical stability, starting with a precise definition of the term. To provide some motivation, consider what happens when the right-hand side of (7.1) is perturbed by an amount Δg(t). This will change the solution of the equation to a perturbed solution f̃(t), given by (7.38). The difference between f̃ and f then satisfies

108 CHAPTER 7

Thus, except for a difference in the nonhomogeneous term, the change in the solution satisfies an integral equation of the same form as the original equation. Since there are invariably some inaccuracies present, we must always expect an error whose growth is governed by (7.39). If the error can have a component which grows at a faster rate, we have a case of numerical instability. More precisely, we can make the following definition.

DEFINITION 7.7. A method is said to be numerically stable with respect to an error component e_n if that error has an expansion of the form (7.40), with e(t) satisfying an equation of the form (7.41) for some Q(t). If all errors are governed by such an equation, we call the corresponding method numerically stable.

Theorem 7.3 implies that all standard methods are stable with respect to the accumulated consistency error. Theorem 7.4 shows the same thing for the accumulated starting errors, provided the method has repetition factor 1. According to this definition and our previous results, the trapezoidal method, the fourth order Gregory method and Simpson's method 2 are stable with respect to the accumulated consistency error as well as the accumulated starting error. Simpson's method 1 is stable with respect to e_n^c, but not necessarily with respect to the accumulated starting error.

The remaining question is the stability with respect to the accumulated starting error of methods having a repetition factor larger than one. The theoretical situation here is somewhat complicated, but the general result is that such methods have a tendency to show unstable error growth. Rather than trying to provide a general analysis, we will consider a specific case by looking at Simpson's method 1 in some detail. Writing out (7.23) for Simpson's method 1 and rearranging, we have
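For a linear equation the motivating claim above can be observed directly: if the computed solutions with right-hand sides g and g + Δg are subtracted, the difference coincides, to rounding error, with the solution of the same discrete equation whose nonhomogeneous term is Δg. The following is a hedged sketch; the solver, the kernel, and the perturbation are illustrative assumptions, not data from the text.

```python
import math

def trapezoidal_volterra(g, k, T, N):
    """Trapezoidal method for the linear equation f(t) = g(t) + int_0^t k(t,s) f(s) ds."""
    h = T / N
    t = [i * h for i in range(N + 1)]
    F = [g(0.0)]
    for n in range(1, N + 1):
        s = 0.5 * k(t[n], t[0]) * F[0] + sum(k(t[n], t[i]) * F[i] for i in range(1, n))
        F.append((g(t[n]) + h * s) / (1.0 - 0.5 * h * k(t[n], t[n])))
    return F

k  = lambda t, s: -1.0
g  = lambda t: 1.0
dg = lambda t: 1e-3 * math.cos(t)          # the perturbation Delta g

F  = trapezoidal_volterra(g, k, 1.0, 20)                        # unperturbed
Fp = trapezoidal_volterra(lambda t: g(t) + dg(t), k, 1.0, 20)   # perturbed
D  = trapezoidal_volterra(dg, k, 1.0, 20)   # same kernel, nonhomogeneous term Delta g
# By linearity, Fp - F solves the same discrete equation with right-hand
# side Delta g, so it agrees with D up to rounding error.
```

The design point being illustrated: the perturbation propagates through exactly the same kernel as the solution itself, which is why an error governed by (7.39) is the best one can hope for.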

EQUATIONS OF THE SECOND KIND 109

where the terms are as indicated. Since each set of summed terms in (7.43) represents an approximation to an integral (but, because of the integration rule chosen, one taken with stepsize 2h), it appears that the expansion for e_n^s, if it exists at all, will be given by a set of coupled integral equations. It is not hard to see that the appropriate system is (7.46)-(7.47). Once we have guessed this form, we can then proceed rigorously.

THEOREM 7.5. Let X_n and Y_n denote the approximate solutions of (7.46) and (7.47) obtained by using the rectangular integration rules with stepsize 2h. If x(t) and y(t) satisfy (7.46) and (7.47), then (7.50) and (7.51) hold.

Proof. We outline the steps only. We can see a little more clearly what (7.46) and (7.47) imply if we introduce scaled variables. By scaling and rearranging (7.42), we see that it differs from (7.52) only by terms of order h. A similar result holds for (7.43), and (7.50) and (7.51) follow.

Exercise 7.9. Carry out the details of the proof of Theorem 7.5.

110 CHAPTER 7

Therefore, if we introduce new variables z1 and z2 and rearrange the equations, we see that the system takes a decoupled form, where A1, A2, B1, B2 are functions whose exact form is immaterial for the argument. The first component z1 satisfies an equation of the form (7.41), but z2 does not. Consequently, the method is not necessarily stable. How the solution of the equation behaves depends on g(t); to show that the method can be unstable, we need to construct a case for which z2 can grow much faster than the actual solution. Take the simple case of a constant kernel. Using the results of §1.2, for g(t) = 1 the solution is an exponential which, for λ > 0, decreases exponentially. When λ > 0, however, the component z2 of the accumulated starting error can grow exponentially. The conclusion is then that Simpson's method 1 can show unstable error growth; in practice this method should be avoided.

It is possible to make a more complete analysis for methods with repetition factor higher than one, but this provides little further insight. All evidence is that such methods are not numerically stable and should be avoided.

7.5. Another view of stability. The discussion in the previous section focused on the dominant error term: a method is called numerically stable if the dominant error term satisfies an integral equation having the same kernel as the original equation. Certainly, if h is very small this dominant term closely represents the actual error, but it is not easy to say exactly what is meant by "very small". What one would really like to know is how the error behaves for practical values of h, say h = 0.1 or 0.05, since for large t the accumulated starting error may become so large as to obscure the nature of the solution altogether. This problem is too difficult to be solved in general, so that some simplifying assumptions have to be made. One way to proceed is to take a very simple equation for which a complete analysis can be made. Let us consider the

EQUATIONS OF THE SECOND KIND 111

equation (7.57), with λ > 0, which has the solution f(t) = e^(−λt). Since for λ > 0 the solution decays exponentially, it is reasonable to require that the approximate solution should have the same behavior. We can make this more precise with the following definition.

DEFINITION 7.8. Let F_n(h) denote the approximate solution of (7.57) computed by (7.7). Then the method (7.7) is said to be stable in the interval (0, h0) if, for all fixed h with 0 < h < h0 and arbitrary starting values F_i, the computed values F_n(h) tend to zero as n → ∞.

The reason for defining stability with respect to the simple test equation (7.57) is that it now becomes possible to analyze fully the resulting discrete equations and thereby find the intervals of stability.

As a first example, let us take the trapezoidal method (7.3) with g(t) = 1 and k(t, s) = −λ. Taking the difference between (7.3) for n and (7.3) for n − 1 we get, dropping the argument h in F_n(h),

    F_n(1 + hλ/2) = F_(n−1)(1 − hλ/2),

or

    F_n = ((1 − hλ/2)/(1 + hλ/2)) F_(n−1).

Therefore, if λ > 0 and h > 0, it follows that F_n → 0 for all possible starting values F_0, so that the trapezoidal method is stable in (0, ∞), an ideal situation which is often called A-stability.

Next, consider the fourth order Gregory method. Again, differencing the approximation formula gives a simple linear difference equation with constant coefficients, and its solution can be written as

    F_n = c1 ρ1^n + c2 ρ2^n + c3 ρ3^n,

where the c_i are constants determined by the starting values and the ρ_i
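The difference relation just derived can be simulated directly. The sketch below is an illustration, not the book's code: it applies the trapezoidal method to the test equation f(t) = 1 − λ∫₀ᵗ f(s) ds with a deliberately large value of hλ, and the computed values still decay in magnitude, as A-stability predicts.

```python
lam, h, N = 50.0, 0.5, 40          # deliberately large h*lam (here h*lam = 25)
F = [1.0]                          # F_0 = g(0) = 1 for the test equation
S = 0.5 * F[0]                     # weighted history sum F_0/2 + F_1 + ... + F_{n-1}
for n in range(1, N + 1):
    # trapezoidal rule: F_n = 1 - lam*h*(S + F_n/2), solved for F_n
    Fn = (1.0 - lam * h * S) / (1.0 + 0.5 * h * lam)
    S += Fn
    F.append(Fn)
# Consecutive values satisfy F_n = ((1 - h*lam/2)/(1 + h*lam/2)) F_{n-1},
# so |F_n| decreases for every h > 0, lam > 0 (A-stability), though the
# sign oscillates once h*lam > 2.
```

Even though h is far too large for any accuracy here, the qualitative behavior (exponential decay of the true solution e^(−λt)) is reproduced rather than amplified.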

112 CHAPTER 7

are the roots of the polynomial (7.61). Now, if max(|ρ1|, |ρ2|, |ρ3|) < 1, then we have again that F_n → 0 for arbitrary starting values, and the method is stable. Thus we see that the question of stability has been reduced to finding the roots of certain characteristic polynomials.

Let us look at (7.61) in a little more detail. For h = 0, the three roots are obviously 0, 0, 1. Since the roots of a polynomial are continuous functions of its coefficients, the roots of (7.61) for small h must be either near 0 or near 1. We need be concerned only with roots near 1. When we write the root in perturbed form and substitute in (7.61), expanding and throwing away higher order terms (that is, terms of order 2hε and higher), we find that for small positive h all roots of (7.61) are inside the unit circle. Therefore the fourth order Gregory method is stable in some nonempty interval (0, h0).

In practice, we ask for the largest interval in which the method is stable. This requires a detailed investigation, usually by numerical techniques, of the location of the roots as a function of h. When this is done, we find that the fourth order Gregory method is stable in the interval (0, 2/λ). Simpson's method 1, on the other hand, is not stable for any h > 0.

Exercise 7.10. Investigate the stability regions for Simpson's method 1 and Simpson's method 2. Show that Simpson's method 1 is not stable for any h > 0, and that Simpson's method 2 is stable in (0, 3/λ).

A similar analysis can be made for any method of the form (7.7). The approximate solution usually reduces to a linear difference equation with constant coefficients, and hence its behavior is governed by the roots of the characteristic polynomial. If all of the roots are inside the unit circle in the complex plane, the method is stable; it is then a simple computational matter to find the interval of stability.

The use of Definition 7.8 to define stability has many advantages. In practice, however, it is common to consider not only real
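For the trapezoidal method the characteristic polynomial is linear, so its single root can be written down at once and the "simple computational matter" reduces to scanning h and checking that the root stays inside the unit circle. (For the Gregory and Simpson polynomials one would insert the appropriate cubic coefficients, which are not reproduced here; the sketch below is only an illustration of the procedure.)

```python
def trapezoidal_root(h, lam):
    # Root of the characteristic polynomial of the trapezoidal method
    # applied to the test equation f(t) = 1 - lam * int_0^t f(s) ds.
    return (1.0 - 0.5 * h * lam) / (1.0 + 0.5 * h * lam)

def stable(h, lam):
    # Stability requires all characteristic roots inside the unit circle.
    return abs(trapezoidal_root(h, lam)) < 1.0

lam = 4.0
# scan h = 0.01, 0.02, ..., 20.0; every stepsize tested is stable,
# consistent with the A-stability of the trapezoidal method
print(all(stable(0.01 * j, lam) for j in range(1, 2001)))
```

For a cubic characteristic polynomial the same scan would call a numerical root-finder at each h and report the largest h for which all root magnitudes stay below one.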

EQUATIONS OF THE SECOND KIND 113

values of λ, but to take λ in the complex plane with Re(λ) > 0. This allows for solutions with an oscillating component. Numerical root-finding methods can then be used to find the regions of stability in the complex plane. This has been done for many of the common methods.

It has, however, been argued that (7.57) is too simple and that the conclusions derived from it may be misleading. As a more representative example one might take (7.62), with λ > 0 and μ > 0. This equation satisfies the differential equation (7.63) and therefore has an exponentially decaying solution for λ > 0. Again, consider as an example the trapezoidal method (7.3). When applied to (7.62), we obtain after some manipulation (7.64). The solution of (7.64) is then

    F_n = c1 ρ1^n + c2 ρ2^n,

where ρ1 and ρ2 are the roots of (7.65). The analysis of F_n is now more complicated, but still manageable. It is no longer easy to see just how the roots behave as functions of λ and μ. However, for small λ we find that if h²μ > 4, then one of the roots of (7.65) will have magnitude greater than one, implying unstable error growth. Thus, if (7.62) is used as the test equation, the trapezoidal method is not stable for all values of h (that is, not A-stable).

If we compare the view of stability in this section with that of the previous section, we see that neither approach is completely satisfactory. The results of §7.4 apply only as h → 0 and hence give no information on the interval

114 CHAPTER 7

of stability. This difficulty is overcome by using something like Definition 7.8, but at the expense of restricting the analysis to a simple test case. This in turn raises the question to what extent (7.57) and (7.62) represent the general case; as our discussion has shown, the form of the test case can affect the conclusions. The answer to this question is not known at the present. Nevertheless, stability analysis does give some insight by characterizing some methods as stable and others as unstable. Unstable methods, such as Simpson's method 1, although convergent, may show some rapid error growth and should therefore be avoided. As long as we are looking only for this kind of qualitative information, it is quite immaterial which definition of stability we use.

7.6. Block-by-block methods. Except for some relatively low order methods, the scheme (7.7) requires one or more starting values which must be found in some other way. The method which we now describe not only gives starting values but provides a convenient and efficient way for solving the equation over the whole interval. The so-called block-by-block methods are a generalization of the well-known implicit Runge-Kutta methods for ordinary differential equations, and one finds the latter term also used in connection with integral equations.

The idea behind the block-by-block methods is quite general, but is most easily understood by considering a specific case. Let us assume that we have decided to use Simpson's rule as the numerical integration formula. If we knew F1, then we could simply compute F2 by Simpson's rule. To obtain a value for F1, we introduce another point t_(1/2) = h/2 and the corresponding value F_(1/2), then use Simpson's rule with F0, F_(1/2) and F1. To deal with the unknown value F_(1/2) we approximate it by quadratic interpolation, that is, we replace F_(1/2) by

    F_(1/2) = (3F0 + 6F1 − F2)/8,

so that we can compute F1 by

EQUATIONS OF THE SECOND KIND 115

Equations (7.66) and (7.68) are a pair of simultaneous equations for F1 and F2. For sufficiently small h there exists a unique solution which can be obtained by the method of successive substitution or by a more efficient procedure, such as Newton's method.

The general process should now be clear. Going through the same steps, we compute the approximate solution, for m = 0, 1, 2, ..., by (7.69) and (7.70), where the weights and interpolation coefficients are as indicated. At each step (7.69) and (7.70) have to be solved simultaneously for the unknowns F_(2m+1) and F_(2m+2), so that we obtain a block of unknowns at a time. This explains the origin of the term block-by-block method.

Let us briefly consider the question of convergence and stability for block-by-block methods. The analysis parallels the development in §§ 7.3 to 7.5, with some notational complications. Since there is not much new here, we merely sketch the arguments. For simplification we consider only the linear case.
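A hedged sketch of the scheme just described, for the linear case, follows. The code and the test equation f(t) = 1 − ∫₀ᵗ f(s) ds (exact solution e^(−t)) are illustrative reconstructions, not the text's own program: each block solves a 2×2 linear system for F_(2m+1) and F_(2m+2), with the half-point value interpolated as (3F_(2m) + 6F_(2m+1) − F_(2m+2))/8.

```python
import math

def block_by_block(g, k, T, M):
    """Simpson-based block-by-block method for f(t) = g(t) + int_0^t k(t,s) f(s) ds,
    linear case.  M blocks of width 2h, stepsize h."""
    h = T / (2 * M)
    t = [i * h for i in range(2 * M + 1)]
    F = [g(0.0)]
    for m in range(M):
        n = 2 * m
        ta, tb = t[n] + h, t[n] + 2 * h          # the two new points of the block
        def hist(tt):
            # repeated Simpson rule for the history integral over [0, t_n]
            if n == 0:
                return 0.0
            s = k(tt, t[0]) * F[0] + k(tt, t[n]) * F[n]
            for i in range(1, n):
                s += (4 if i % 2 else 2) * k(tt, t[i]) * F[i]
            return h / 3.0 * s
        Ha, Hb = hist(ta), hist(tb)
        kh = k(ta, t[n] + 0.5 * h)
        # Unknowns x = F_{2m+1}, y = F_{2m+2}; half-point value (3F_n + 6x - y)/8.
        # Row 1: Simpson over [t_n, t_n+h] with nodes t_n, t_n+h/2, t_n+h.
        a11 = 1.0 - h / 6.0 * (3.0 * kh + k(ta, ta))
        a12 = h / 12.0 * kh
        b1 = g(ta) + Ha + h / 6.0 * (k(ta, t[n]) + 1.5 * kh) * F[n]
        # Row 2: Simpson over [t_n, t_n+2h] with nodes t_n, t_n+h, t_n+2h.
        a21 = -4.0 * h / 3.0 * k(tb, ta)
        a22 = 1.0 - h / 3.0 * k(tb, tb)
        b2 = g(tb) + Hb + h / 3.0 * k(tb, t[n]) * F[n]
        det = a11 * a22 - a12 * a21
        x = (b1 * a22 - a12 * b2) / det
        y = (a11 * b2 - a21 * b1) / det
        F.extend([x, y])
    return t, F
```

Solving a small linear system per block is the discrete analogue of the simultaneous equations (7.66) and (7.68); for a nonlinear kernel the same structure would be handled by successive substitution or Newton's method.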

116 CHAPTER 7

Going through the usual procedure, we find that the errors satisfy the equations (7.71) and (7.72), where the expressions for c_i and d_i are obvious and R_(2m+1), R_(2m+2) denote the remainder terms. An elementary analysis then shows that for sufficiently smooth k and f the terms R_(2m+1) and R_(2m+2) are of order h⁴, so that, applying Theorem 7.1, we find that the method has order of convergence four.

Exercise 7.11. Show that for sufficiently smooth k and f the remainder terms are of order h⁴. Use this to show that (7.73) holds.

Exercise 7.12. Find the expressions for c_i and d_i in (7.71) and (7.72).

To obtain error expansions we consider (7.71) and (7.72). Taking into consideration the results of Exercise 7.12, we are led

EQUATIONS OF THE SECOND KIND 117

to consider the system (7.74) and (7.75). When (7.74) and (7.75) are discretized by the rectangular rule and compared with (7.71) and (7.72), it is found that (7.78) and (7.79) hold. Since x(t) and y(t) are generally not the same, there does not exist an asymptotic expansion for the error at all points; there is, however, a separate expansion for the even- and odd-numbered points. As shown by (7.78) and (7.79), this indicates that there exist oscillations of the same order of magnitude as the error itself. This may be considered a minor drawback of the method, although it is easily eliminated by retaining, say, only the results at the even-numbered points.

In spite of the lack of a unique expansion for the error, the method can be considered stable with respect to the consistency error. (There are, of course, no starting errors.) To see this, subtract (7.74) from (7.75). When the result is substituted into (7.74), the equation for x(t) becomes one which satisfies the stability criterion (7.41), so that the method can be considered stable according to Definition 7.7. A similar argument can be made for y(t). It is also known that if Definition 7.8 is used as the stability criterion, then the method is stable for all values of h, that is, it is A-stable.

Exercise 7.13. Prove the results (7.78) and (7.79).

118 CHAPTER 7

The general procedure for constructing block-by-block methods should be easy to infer from what we have said, but becomes even more obvious if the derivation of (7.69) and (7.70) is motivated as follows. In each interval [t_(2i), t_(2i+2)], approximate f(t) by a quadratic interpolating polynomial P2(t) on the points t_(2i), t_(2i+1) and t_(2i+2). Replace f(t) by this piecewise quadratic and integrate using Simpson's rule. The resulting numerical integration rule, when used to integrate between t_0 and t_(2m+1), then between t_0 and t_(2m+2), and expressed in terms of the values of f(t_i), gives the approximating formulas (7.69) and (7.70). By using more points in each block, higher degree interpolating polynomials, and more accurate integration rules, one can construct block-by-block methods of arbitrarily high order.

7.7. Some numerical examples. The behavior predicted by the analysis in the previous sections is readily demonstrated with some simple examples. While the equation used here is quite simple, the results are nevertheless typical of what one can expect in more complicated (but well-behaved) cases. In addition to substantiating the theoretical predictions, such numerical experiments serve to give some additional insight into the relative effectiveness of the methods.

Example 7.2. In this section we summarize some numerical results on the equation (7.83) considered in Example 7.1. Approximate solutions to (7.83) were computed using various fourth order methods. In each case, any required starting values were taken from the exact solution f(t) = 1; since the problem is linear, the unknowns at each step were determined by solving a linear equation. The observed errors are shown in Tables 7.2-7.5.

These results indicate that all four methods have about the same accuracy, a situation which seems to hold true for many examples. Since all of the methods are fourth order, we expect a reduction of the error by a factor of 1/16 as the stepsize is halved. This expectation is closely realized for the fourth order Gregory method, but less closely for the other three methods. An explanation for this observation can be found by looking at the column h = 0.1. (This is the only column where all computed results are listed.) For the fourth order Gregory method the error behaves quite smoothly in the sense that its graph is a smooth curve, but the errors for the other three methods do not behave quite as nicely. For the two Simpson's methods the oscillations are due to the effect of the O(h⁵) terms, which in this case are still significant. Certainly for the fourth order block-by-block method such oscillations are to be expected, since the errors on odd- and even-numbered points have a different expansion. To understand why such higher order oscillations do not affect the fourth order Gregory method we must return to

TABLE 7.2
Errors in the solution of (7.83) by Simpson's method 1
(t = 0.1, 0.2, ..., 1.0; h = 0.1, 0.05, 0.025).

TABLE 7.3
Errors in the solution of (7.83) by the fourth order Gregory method
(t = 0.1, 0.2, ..., 1.0; h = 0.1, 0.05, 0.025).

TABLE 7.4
Errors in the solution of (7.83) by Simpson's method 2
(t = 0.1, 0.2, ..., 1.0; h = 0.1, 0.05, 0.025).

TABLE 7.5
Errors in the solution of (7.83) by the fourth order block-by-block method
(t = 0.1, 0.2, ..., 1.0; h = 0.1, 0.05, 0.025).

the error expansions. For the fourth order Gregory method (and also the trapezoidal rule), the local consistency error has an expansion consisting of several terms (depending on the smoothness of k and f). Working along the lines indicated, it can be shown that if the local consistency error has an expansion with several terms, then (ignoring a few technical restrictions) the accumulated consistency error also has an expansion with several terms. Hence the higher order terms in e_n^c also have a systematic behavior. For the two Simpson's methods, the use of the three-eighths rule at every second step limits the expansion to one term. Hence we can expect small oscillations due to the O(h⁵) term, and these higher order error components are still noticeable for the stepsizes used here.

Example 7.3. To exhibit the instability of Simpson's method 1, equation (7.83) was integrated with h = 0.2 up to t = 7.0, using the four methods as in Example 7.2. The errors for large t are listed in Table 7.6. As can be seen from Table 7.6, Simpson's method 1 shows some noticeable oscillations; the errors in the stable methods are significantly smaller.

The expected instability in Simpson's method 1 is not apparent from Table 7.2. Actually, it is present and becomes more pronounced as the solution is computed over a larger interval. The objection could be raised that, since exact starting values were used, Simpson's method 1 should show no instability; after all, the method is unstable only with respect to the starting errors! Such an argument ignores a few fine points. First, there are round-off errors, as in all computations, and at each step the round-off error acts like a starting error for the next step. Second, the higher order terms in the consistency error can also contribute to instability.

TABLE 7.6
Errors in the solution of (7.83) for large t
(t = 6.2, 6.4, 6.6, 6.8, 7.0; columns: 4th order Gregory, Simpson 1, Simpson 2, 4th order block-by-block).

122 CHAPTER 7

The propagation of round-off error is governed by the same equation as that for the starting error. Even though small, it can, as in this example, grow and eventually become large.

7.8. Explicit Runge-Kutta methods. In the solution of ordinary differential equations, a class of algorithms called explicit Runge-Kutta methods is popular in practice and has been studied in great depth. A similar idea can be used for integral equations; actually, since there is even more freedom here, the class of methods so obtained is very large.

To develop such methods, one first writes down a general form for computing the solution at t_n + h in terms of the solution at previous points, then matches as many terms as possible by performing a Taylor expansion in h on the exact and approximate solutions. To present a fairly comprehensive approach, we introduce the intermediate values F_nq for q = 1, 2, ..., p, defined by (7.84). The value F_np is taken as an approximation to f(t_n + h); also F_np = F_(n+1,0). Using φ_n(0) = F_00 = g(0), this equation can be used to determine the sequence of values F_00, F_01, ..., F_0p, F_10, ..., F_np. The function φ_n(t) is chosen so that it is an approximation to the lag term, the integral over [0, t_n], while the parameters A_qs, d_qs and c_s are determined by a Taylor expansion. Since in general the set of equations is underdetermined, many different methods are possible; in fact, (7.84) is so general that only a few restricted forms have been investigated in any detail. By taking one particular choice of the parameters we get the so-called Beltyukov methods; by setting another we obtain a set of formulas sometimes called Pouzet-type. For p = 4, a particular set of parameters for a Pouzet-type method is
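The mechanics common to all such step-by-step schemes — a lag term approximated by quadrature over the computed history, plus a local step — can be seen in the simplest possible member of the family. The sketch below is only a first order rectangular analogue (not one of the Pouzet or Beltyukov tableaux) and the test problem is an illustrative assumption.

```python
import math

def euler_volterra(g, k, T, N):
    """Simplest explicit step-by-step analogue for f(t) = g(t) + int_0^t k(t,s,f(s)) ds:
    the lag term is replaced by a left rectangular sum over the computed
    history (first order only)."""
    h = T / N
    F = [g(0.0)]
    for n in range(1, N + 1):
        lag = h * sum(k(n * h, i * h, F[i]) for i in range(n))
        F.append(g(n * h) + lag)
    return F

# test problem f(t) = 1 - int_0^t f(s) ds, exact solution exp(-t)
for N in (10, 20, 40):
    F = euler_volterra(lambda t: 1.0, lambda t, s, f: -f, 1.0, N)
    print(N, F[-1] - math.exp(-1.0))   # error halves as h is halved (order 1)
```

A genuine explicit Runge-Kutta method improves on this by adding intermediate stages within each step and matching Taylor coefficients, exactly as described above.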

EQUATIONS OF THE SECOND KIND 123

The determination of the coefficients by Taylor expansion is a tedious matter. In many instances, such as for certain Pouzet-type formulas, there is an easier and more intuitive way of deriving the formulas. We can consider F_nq as an approximation to f(t_n + θ_q h). The first term on the right-hand side of (7.84) is then an approximation to the lag term, while the second term represents the integral over the current step. Once we have chosen a set of θ_s with 0 ≤ θ_s ≤ 1, we can use standard quadrature rules to get the parameters A_qs.

For example, take p = 3 and the intermediate points as in (7.88) and (7.89), with the numerical integration formulas (7.90)-(7.92). Using (7.90)-(7.92) to replace the integrals in (7.88) and (7.89), we are led to the formulas (7.93)-(7.95). The indicated order of convergence is three, an observation which is easily proved.

In Table 7.7, equation (7.83) was solved using several values of h.

124 CHAPTER 7

TABLE 7.7
Observed error in the solution of equation (7.83) by the explicit Runge-Kutta method (7.93)-(7.95)
(t = 0.2, 0.4, 0.6, 0.8, 1.0, for several values of h).

Convergence and stability analyses for explicit Runge-Kutta methods are very similar to what has been presented in preceding sections of this chapter. The selection of the parameters assures local consistency and establishes its order; Theorem 7.2 can then be used to prove convergence. The repetition factor depends on the choice of the parameters, specifically on the numerical integration rule used to approximate (7.88). For (7.93)-(7.95), looking at F_n3 = F_(n+1) only, we see that it has a repetition factor of one and is therefore stable if the definition of §7.4 is used. If the approach of §7.5 is preferred, an application of the approximating formulas to the test equation (7.57) gives the characteristic polynomial from which the stability region can be computed. Thus, apart from technical considerations, the analysis of the explicit Runge-Kutta methods raises no new questions.

Whether explicit Runge-Kutta methods are widely useful in practice is an open question. The situation here is not quite analogous to the differential equations case. For differential equations the work required to solve nonlinear implicit formulas is often the main part of the work, so that explicit Runge-Kutta methods are reasonably competitive with implicit schemes, such as certain multistep methods. For integral equations, most of the work involves the repeated approximation of integrals, and the additional work to solve the nonlinear equations is relatively unimportant. Thus, implicit methods with their greater accuracy and better stability properties seem preferable. On the whole, explicit Runge-Kutta methods appear to be less efficient than either the step-by-step methods of §7.2 or the implicit Runge-Kutta methods, requiring more work for the same accuracy. It is of course possible that explicit Runge-Kutta methods are the most suitable in special circumstances, but no definitive study on this has yet been carried out.

7.9. A summary of related ideas and methods. The basic framework established in this chapter allows for a great many variations. A considerable

EQUATIONS OF THE SECOND KIND 125

amount of work has been done on this topic and it is not our intention to review all of it here. Much of the work is, in any case, in the nature of variations on a general theme. In this section we survey some of the ideas which have received attention; details can be found in the literature.

There is, as previously pointed out, a close connection between Volterra integral equations and initial value problems for ordinary differential equations. It is therefore not surprising that numerical methods for Volterra equations usually turn out to be generalizations of corresponding methods for differential equations, and many of the ideas developed in connection with differential equations have been extended to integral equations. One example is the definition of stability given in Definition 7.8, which is completely analogous to the stability definition usually used in differential equations.

Consider, for example, the simple Volterra equation (7.96), which is equivalent to the differential equation (7.98). Apply now the fourth order Gregory method to (7.96) and difference the resulting equation to get (7.97). But this is just a standard linear multistep method for ordinary differential equations applied to (7.98).

Another concept which can be generalized is the so-called predictor-corrector method which, as previously pointed out, is used to overcome the implicit nature of certain multistep methods and which, using either Gregory or Newton-Cotes integration, will reduce to a linear multistep method for ordinary differential equations. To see how this can be done for integral equations, write (7.1) as (7.101), with the terms defined by (7.102). Then clearly
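This reduction can be checked numerically. For the test equation f(t) = 1 − λ∫₀ᵗ f(s) ds, which is equivalent to f′ = −λf, f(0) = 1, differencing the trapezoidal quadrature for the integral equation produces exactly the trapezoidal one-step rule for the differential equation. The sketch below (an illustration with hypothetical parameter values) verifies that the two computed sequences coincide.

```python
lam, h, N = 2.0, 0.1, 10

# Trapezoidal method applied to the integral equation f(t) = 1 - lam*int_0^t f(s) ds.
F = [1.0]
S = 0.5 * F[0]                 # weighted history sum F_0/2 + F_1 + ... + F_{n-1}
for n in range(1, N + 1):
    Fn = (1.0 - lam * h * S) / (1.0 + 0.5 * h * lam)
    S += Fn
    F.append(Fn)

# Trapezoidal rule applied to the equivalent initial value problem y' = -lam*y, y(0) = 1.
y = [1.0]
r = (1.0 - 0.5 * h * lam) / (1.0 + 0.5 * h * lam)
for n in range(N):
    y.append(y[-1] * r)

# The two sequences are algebraically identical: differencing the quadrature
# sums for consecutive n eliminates the history and leaves y_n = r * y_{n-1}.
```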

126 CHAPTER 7

If we know values F_0, F_1, ..., F_n, then G_n(t_(n+1)) can be approximated by a numerical integration using F_i instead of f(t_i). To approximate Φ_n(t_(n+1)) we can use quadratures which involve values outside the interval [t_n, t_(n+1)]. For example, one might use the formula (7.103); this integration rule can be derived by approximating φ(t) by a cubic interpolating polynomial on the points t_n, t_(n−1), t_(n−2), t_(n−3), then integrating. If we apply (7.103) to (7.101), we are led to an approximation to F_(n+1) of the form (7.104), where G̃_(n+1) is an approximation to G_n(t_(n+1)). Equation (7.104) defines F*_(n+1) explicitly in terms of F_0, F_1, ..., F_n and is a predictor for F_(n+1). To obtain the corrected and final value for F_(n+1), we employ another integration rule, now involving also φ(t_(n+1)), generally of the form (7.106); the corrected value is then computed by (7.107), using the predicted value F*_(n+1) in place of f(t_(n+1)). Equations (7.104) and (7.107) are then a predictor-corrector scheme for the solution of (7.1). The scheme is very similar to the corresponding formulas for differential equations. How competitive in general such a method is with the more usual approaches is an unresolved question.

Another approach to solving Volterra equations of the second kind is based on an observation made in Chapter 1: for degenerate kernels the integral equation can be reduced to a system of differential equations. When the kernel is simple enough to be closely approximated by a degenerate kernel with a few functions, this can be very efficient.
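A hedged sketch of the degenerate kernel reduction follows. The kernel k(t, s) = e^(t−s) = e^t · e^(−s), the right-hand side g = 1, and the classical RK4 integrator are illustrative choices, not the text's: with k(t, s) = a(t)b(s) the equation becomes f(t) = g(t) + a(t)z(t), where z′ = b(t)f(t) and z(0) = 0. For this particular choice the exact solution of f(t) = 1 + ∫₀ᵗ e^(t−s) f(s) ds is f(t) = (1 + e^(2t))/2.

```python
import math

# Degenerate kernel k(t,s) = a(t)*b(s); here a(t) = e^t, b(s) = e^(-s), g(t) = 1.
a = lambda t: math.exp(t)
b = lambda t: math.exp(-t)
g = lambda t: 1.0

def zprime(t, z):
    # z' = b(t) * f(t) with f(t) = g(t) + a(t) * z(t)
    return b(t) * (g(t) + a(t) * z)

def solve(T, N):
    """Integrate the equivalent ODE system with classical RK4 and return f(T)."""
    h, t, z = T / N, 0.0, 0.0
    for _ in range(N):
        k1 = zprime(t, z)
        k2 = zprime(t + h / 2, z + h / 2 * k1)
        k3 = zprime(t + h / 2, z + h / 2 * k2)
        k4 = zprime(t + h, z + h * k3)
        z += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return g(t) + a(t) * z

err = abs(solve(1.0, 50) - (1 + math.exp(2.0)) / 2)
print("error at t = 1:", err)
```

Since only one ordinary differential equation per kernel component has to be integrated, any standard high order ODE code can be reused, which is the efficiency advantage mentioned above.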

EQUATIONS OF THE SECOND KIND 127

Notes on Chapter 7. Some early results on the approximate solution of Volterra equations can be found in [104], [107], [111], [116]. Texts on elementary numerical analysis rarely mention integral equations; an exception is an early book by Collatz [69], which discusses the use of standard numerical integration rules on integral equations. A more up-to-date discussion can be found in Churchhouse [67]. There are also few books specifically devoted to the numerical solution of integral equations. By far the most complete treatment is given by Baker in [17]. Beyond this, there are a number of collections of papers and conference proceedings dealing with numerical methods for integral equations [2], [11], [20], [22], [51], [57], [76], [77], [84], [96], [97]. A complete bibliography of work up to 1971 is given in [197]; later surveys are by Tsalyuk [230] and Brunner [59].

A concerted effort to develop a theory started only with the work of Pouzet [210], [211], [212]. Pouzet's primary interest lay in explicit Runge-Kutta methods, and his work resulted in a variety of algorithms. The development of the general approach used here began with the work of Kobayashi [152] and Linz [163]. The question of numerical stability was first raised in [152]. The concept of a repetition factor was introduced and exploited in [163], and in most subsequent work on Volterra integral equations. A later analysis by Noble [198] clarified and extended the original idea; the analysis given in §7.4 was adapted from [198]. Conditions required for the existence of error expansions with several terms were studied by Hock ([126], [127]). Investigations of stability as defined in §7.5 are given in [18], [132], [140]. The stability question continues to be of interest; recent advances include the introduction of an asymptotic repetition factor [245] and the use of more complicated test equations [174]. The fact that some formulas for Volterra equations reduce to linear multistep methods for ordinary differential equations is useful in the study of their stability; details on this can be found in [244].

Block-by-block methods seem to have been suggested first by Young [249]. The idea was developed in [63], and analyzed in [83] and [164]. Special starting methods are investigated in a number of papers (for example, [58], [62]). Garey [108] studied predictor-corrector type methods. Brunner, in a series of papers, explores the closely related idea of piecewise polynomial approximation and collocation ([42], [199]). Additional methods, based on piecewise polynomial and spline functions, are studied in [91], [92], [93], [133], [136], [157], [178], [181], [185], [186], [195], [196], [205], [243]. Beyond this, there are a variety of other methods which, in one way or another, combine and modify the ideas outlined. All of these seem to have desirable features in some instances, but a comparative study is lacking.

[224]. [98]. using computers for algebraic manipulations. [156]. slightly different forms of the equation are used. and Brunner [60]. [34]. [223]. [35]. For example. The use of degenerate kernel approximations in the numerical solution of Volterra equations is due primarily to Bownds and his co-workers ([31].128 CHAPTER 7 In addition to the work of Pouzet. [99]. see [61]. [242]. [241]. [203]. Occasionally. Later studies were carried out by Garey [112]. [36]. Baker and his co-workers ([16]. [209]. [175]. [150]. For a related idea. [21]). were made by Goldfine [118] and Stoutemeyer [225]. [246]). [19]. There are many other papers which discuss various minor modifications of the ideas contained in this chapter. [215]. . van der Houwen ([131]. Beltyukov [28] gives some early work. [202]. Somewhat more unusual suggestions. [151]. [134]). [117]. [141]. see [27]. [238]. there has been considerable interest in explicit Runge-Kutta methods and various modifications.

Chapter 8

Product Integration Methods for Equations of the Second Kind

The methods in Chapter 7 were developed under the assumption that all functions involved in the equations were well-behaved; in particular, we assumed that K(t, s, f) and g(t) were continuous, and in some cases the analysis also required additional smoothness for these functions. In many practical situations such conditions are not satisfied. If the kernel or some of its lower derivatives are unbounded, new methods are needed. As we shall see, the methods previously described can be adapted to these more complicated conditions if we use more powerful numerical integration techniques. One such technique is product integration, which is briefly described in §8.1. Using the product integration method, the extension of the standard techniques is conceptually quite simple. Unfortunately, technical complications arise which make the analysis somewhat tedious; as a result, a complete generalization of the results in the previous chapter has not been carried out, although it is fairly clear that, given sufficient patience, this can be done. In the interest of clarity and conciseness we will make our treatment somewhat more descriptive and less rigorous than that in Chapter 7, although we present a complete discussion where this is possible without undue complication. The equation considered here is (8.1), where we assume that (i) g(t) and K(t, s, f) are continuous in the region of interest, and (ii) the well-behaved part of the kernel satisfies a Lipschitz condition of the form

Since K(t, s, f) is assumed to be smooth, some restrictions must be put on p(t, s); in particular, all of the singular properties of the full kernel must be included in the term p(t, s). We will use here the conditions of Theorem 4.8, so that a unique continuous solution exists. After outlining the product integration technique in §8.1, we begin in §8.2 with a rather simple numerical method which we call Euler's method, in analogy with the corresponding method in ordinary differential equations. Because of its simplicity, this method has rather low accuracy, but the situation can be improved by the use of certain extrapolation techniques. In §8.3 we introduce a more accurate method based on the product integration analogue of the trapezoidal method; here some practical difficulties appear which are absent in the case of continuous kernels. In §8.4 we show, with a specific example, how one can generalize the previously introduced block-by-block methods. In the concluding section of this chapter a theoretical justification for all of these methods is presented by establishing a general convergence theorem.

8.1. Product integration. One of the most powerful ways to deal with poorly behaved integrands is product integration. To understand the motivation behind product integration, let us briefly review the way in which the usual interpolatory integration rules are constructed. The standard numerical integration rules, such as the trapezoidal and Simpson's methods, are constructed under the assumption that the integrand is at least bounded; when this is not the case such methods may not work. Even if the integrand is continuous, a great deal of accuracy is lost if higher derivatives fail to exist. Special methods are needed to handle such cases efficiently. To evaluate an integral numerically, we first replace the integrand φ by some approximation φ̃, then compute the integral of φ̃. The approximation φ̃ has to be chosen in such a way that the integral can be evaluated explicitly; usually φ̃ will be some piecewise polynomial approximation to φ. For example, if φ̃ is a piecewise linear approximation to φ obtained by interpolation on the points ti, then (8.4) is the (composite) trapezoidal rule; if we take as φ̃ a piecewise quadratic interpolation on equidistant points, we obtain Simpson's rule. A similar approach is taken to construct product integration rules, but instead of approximating the whole integrand by a piecewise polynomial, we only approximate its well-behaved part.
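The construction can be sketched for one assumed singular factor, p(s) = (t - s)^(-1/2); this particular choice and the helper name below are illustrative, not the book's notation. Approximating ψ piecewise linearly and integrating p times each linear piece exactly yields the product trapezoidal weights from the first two moments of p over each subinterval.

```python
import math

def product_trap_weights(t, n):
    """Weights w[j] so that sum_j w[j]*psi(s_j) equals the integral of
    (t - s)^(-1/2) * psi(s) over [0, t] exactly whenever psi is piecewise
    linear on the grid s_j = j*h, h = t/n."""
    h = t / n
    s = [j * h for j in range(n + 1)]
    w = [0.0] * (n + 1)
    for j in range(n):
        a, b = s[j], s[j + 1]
        # zeroth and first moments of the singular factor over [a, b]
        m0 = 2.0 * (math.sqrt(t - a) - math.sqrt(t - b))
        m1 = t * m0 - (2.0 / 3.0) * ((t - a) ** 1.5 - (t - b) ** 1.5)
        w[j] += (b * m0 - m1) / h       # coefficient of psi(a)
        w[j + 1] += (m1 - a * m0) / h   # coefficient of psi(b)
    return s, w
```

Because the weights integrate p against piecewise linear functions exactly, the rule reproduces the integrals of (1 - s)^(-1/2) and of s(1 - s)^(-1/2) over [0, 1] — namely 2 and 4/3 — to rounding error.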

We write the integral as a product of two factors, where whatever singularities or poor behavior the integrand has are included in p(t), while ψ(t) is assumed to be continuous and generally smooth. We then approximate ψ by a function ψ̃ and compute the integral of p times ψ̃. Again, the type of approximation must be chosen so that the integrals in (8.5) and (8.6) can be evaluated. Thus, if we use piecewise polynomial approximations, then we must be able to evaluate (either explicitly or by an efficient numerical technique) the moments of p against polynomials over each subinterval; for most problems of practical interest, such as p(t) = t^(-1/2), this can be done. From (8.6) we immediately get a bound in terms of the approximation error |ψ(t) - ψ̃(t)|, so that error bounds and orders of convergence for product integration follow from standard results of approximation theory. For example, for a piecewise linear approximation to a smooth function ψ, |ψ(t) - ψ̃(t)| = O(h²), so that the product trapezoidal rule is second order. By the same reasoning the product Simpson's rule, which is based on a piecewise quadratic approximation, is at least third order. We stress the words "at least," since occasionally one does achieve a higher order. This is well known for the standard Simpson's rule, where an O(h³) approximation of the integrand yields an integration method whose order is four. For the product Simpson's rule the situation is not as favorable and the order of convergence is not necessarily four. A detailed analysis for some typical values of p(t), although occasionally tedious, is elementary algebra; typically it shows an order of convergence between three and four. In subsequent sections of this chapter the reader will find several cases where the details have been worked out. The explicit form of a product integration formula depends, of course, on the method of approximation we use to construct ψ̃. Once a form has been chosen the rest is elementary; here we only describe a general scheme which includes both the product trapezoidal and product Simpson's rules. We subdivide the interval [a, b] into n equal subintervals of width h. In each subinterval we introduce a further subdivision, with points ti = ti0 ≤ ti1 < ... < tim ≤ ti+1. The function ψ


will be approximated by a polynomial of degree m in each subinterval [ti, ti+1], using a Lagrange-type interpolation on the points ti0, ti1, ..., tim. From elementary interpolation theory we know that this piecewise polynomial can be explicitly represented as

where the lij are the fundamental polynomials defined as

Inserting this into (8.6), we have immediately that

where

If the integrals in (8.7) are known, then the weights wij can be found explicitly. 8.2. A simple method for a specific example. As a simple example of (8.1) we take a case which is of some practical importance. To apply product integration we approximate the integrand by a piecewise constant function agreeing with the integrand at the left endpoint of each subinterval. This leads, of course, to the product integration analogue of the rather crude rectangular method in ordinary numerical integration. In the context of the integral equation we get the approximation

Using this in (8.1), satisfying the resulting equation at t1, t2, ..., and denoting by Fn the approximation to f(tn), we obtain

where

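Written out, the scheme just derived can be sketched for a model Abel equation of the second kind. The example is assumed for illustration (it is not Example 8.1): for f(t) = 1 - the integral of (t - s)^(-1/2) f(s) over [0, t], a standard Laplace-transform computation gives the closed-form solution f(t) = exp(pi t) erfc(sqrt(pi t)), and the left-endpoint product weights wnj = 2(sqrt(tn - tj) - sqrt(tn - tj+1)) are the only part specific to the square-root singularity.

```python
import math

def euler_product(T, n):
    """Product Euler method for f(t) = 1 - int_0^t (t-s)^(-1/2) f(s) ds.
    The smooth part of the integrand is frozen at the left endpoint of each
    subinterval, while the singular factor is integrated exactly."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    F = [1.0]                              # F_0 = g(0)
    for i in range(1, n + 1):
        acc = 0.0
        for j in range(i):
            w = 2.0 * (math.sqrt(t[i] - t[j]) - math.sqrt(t[i] - t[j + 1]))
            acc += w * F[j]
        F.append(1.0 - acc)                # explicit: F_i does not appear on the right
    return t, F
```

As the discussion below indicates, the error of this scheme is only O(h), so its accuracy is modest unless extrapolation is applied.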

Starting with F0 = g(0), successive values for Fn can be immediately computed from (8.15). The product integration rule, being based on a piecewise constant approximation of the integrand, has, according to the discussion of the preceding section, an error of magnitude O(h). If this order of magnitude is preserved in the solution of the integral equation (and we will see below that this is generally true), then we can expect that the method, which is usually called Euler's method, will have low accuracy for reasonably sized h. Nevertheless, it is of some practical interest because of the simplicity of the weights wnj and the ease with which the resulting system (8.15) can be solved. As we shall shortly see, higher order methods come with certain complications which are absent here. The situation is of course familiar to anyone with practical experience: often the simpler methods are easy to use but yield poor results, while the more accurate methods involve some nontrivial technical complications. One way around this dilemma is to use certain extrapolation techniques by which quite accurate answers can be obtained using only results from the inaccurate methods. One such technique is the so-called Richardson's extrapolation. The scope of this process is extensive, but we will discuss it only in the context of the problem at hand. Suppose we know not only that (8.17) is true, but that the error has the form e(t)h plus higher order terms, where e(t) is a function independent of h. By computing an approximate solution with two different stepsizes we can then eliminate the O(h) term from the error and thus obtain a more accurate answer. To make this a little clearer, let Y(t, h) denote the approximation to f(t) using stepsize h. With t as one of the points of subdivision, we have

Repeating the computation with stepsize h/2 gives

From these two equations we see that

Thus, the "extrapolated" value 2Y(t, h/2) - Y(t, h) has an accuracy of order h^p, with p > 1.
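The mechanism is easy to demonstrate on an assumed toy problem (not one of the book's examples): the first-order approximation Y(t, h) = (1 + h)^(t/h) to f(t) = e^t — the solution of f(t) = 1 + the integral of f over [0, t] — has an error expansion e(t)h + O(h²), so the combination 2Y(t, h/2) - Y(t, h) removes the leading term.

```python
def Y(t, h):
    """First-order (Euler-type) approximation to f(t) = e^t, the solution of
    f(t) = 1 + int_0^t f(s) ds; its error has the form e(t)*h + O(h^2)."""
    y = 1.0
    for _ in range(round(t / h)):
        y *= 1.0 + h
    return y

def richardson(t, h):
    # the O(h) error terms of Y(t, h/2) and Y(t, h) cancel in this combination
    return 2.0 * Y(t, h / 2) - Y(t, h)
```

With h = 0.01 the extrapolated value is more than an order of magnitude closer to e than the plain approximation, exactly as the O(h²) prediction suggests.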


In order to apply Richardson's extrapolation, one needs to know the exact order of the dominant error term. If this is not known, one can use a technique due to Aitken to get around the difficulty. Suppose we know only that with q unknown. Then, recomputing the solution with stepsize h/2 and h/4, we have

Ignoring the O(hp) terms and eliminating f(t) and e(t) from (8.22)-(8.24), we find that

Having estimated q we can now apply Richardson's extrapolation to obtain

Some numerical results demonstrating the effectiveness of these extrapolation techniques are given in Example 8.1. The main advantage of Euler's method lies in its simplicity, but if extrapolation is used the method can yield reasonable accuracy with little computational work. Thus, in situations where very high accuracy is not

TABLE 8.1. Results for Example 8.1 using Euler's method and extrapolation: the computed values for h = 0.2, 0.1, 0.05 at t = 0.1, 0.2, ..., 1.0, the extrapolated values using (8.21) with h = 0.1, 0.05, the extrapolated values using (8.25) and (8.26), and the correct answers.


needed, this approach may be preferable to the more accurate but also more complicated methods to be described next. Another advantage of extrapolation algorithms is that a rough estimate of the error can be obtained as part of the process. Example 8.1. The equation

has the solution shown below. Numerical results for several stepsizes and the values obtained by extrapolation are shown in Table 8.1. 8.3. A method based on the product trapezoidal rule. To construct higher order methods directly, it is necessary to use more accurate numerical integration rules. The next step is the product trapezoidal method, which is constructed by approximating K(t, s, f(s)) by piecewise linear functions, in particular,

This leads to the integration formula

where

The numerical method for solving (8.1) is then

with F0 = g(t0).
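A sketch of this product trapezoidal scheme for the model Abel equation f(t) = 1 - the integral of (t - s)^(-1/2) f(s) over [0, t] (an assumed illustration; its solution exp(pi t) erfc(sqrt(pi t)) is a standard Laplace-transform result). Since the kernel enters linearly, the implicit equation for the newest value can be solved directly.

```python
import math

def product_trapezoidal(T, n):
    """Product trapezoidal method for f(t) = 1 - int_0^t (t-s)^(-1/2) f(s) ds.
    The smooth part of the integrand is approximated piecewise linearly; the
    weights come from the moments of the singular factor on each subinterval.
    Because K(t, s, f) = -f is linear, F_i is obtained by a single division."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    F = [1.0]                                # F_0 = g(0)
    for i in range(1, n + 1):
        w = [0.0] * (i + 1)
        for j in range(i):
            a, b = t[j], t[j + 1]
            m0 = 2.0 * (math.sqrt(t[i] - a) - math.sqrt(t[i] - b))
            m1 = t[i] * m0 - (2.0 / 3.0) * ((t[i] - a) ** 1.5 - (t[i] - b) ** 1.5)
            w[j] += (b * m0 - m1) / h
            w[j + 1] += (m1 - a * m0) / h
        rhs = 1.0 - sum(w[j] * F[j] for j in range(i))
        F.append(rhs / (1.0 + w[i]))         # solve F_i + w_ii*F_i = rhs
    return t, F
```

Compared with the Euler-type scheme, the error here is markedly smaller at the same stepsize, reflecting the higher order of the underlying product rule.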

If we know F0, F1, ..., Fn-1, then (8.31) is an implicit equation defining Fn, so that in principle we can solve for the approximate values in the usual stepwise process. If (8.1) is linear, then we can solve for Fn directly. In the nonlinear case we must be a little careful. The problem of finding Fn for nonlinear equations also arises in the case of bounded kernels, and we remarked there that for small enough h we can always solve for Fn by successive substitution. For the case of bounded kernels we can take the diagonal weight so that βnn = h/2, and this iteration converges for sufficiently small h. For the case of unbounded kernels the situation may be much less favorable: for example, for a square-root singularity we have βnn = O(√h), so that convergence is now governed by a factor which must be less than one, often requiring an unreasonably small h. For nonlinear problems it is therefore advisable to use a standard rootfinding method, such as the secant method or the Newton-Raphson method, to solve (8.31).

8.4. A block-by-block method based on quadratic interpolation. A way of constructing highly accurate numerical methods is now fairly obvious: use product integration formulas based on approximating integrands by polynomials of degree two, three, etc. In this way it is in principle easy to construct formulas analogous to those in §7.2 based on the Newton-Cotes quadratures. Of course we still have the problems encountered before: several different formulas have to be combined (leading possibly to numerical instability), and starting values are required which have to be obtained in some other way. As we saw, the block-by-block methods are convenient because they eliminate these difficulties. The situation is the same here, and we now describe the analogous product integration procedure. Again, we use a specific method to illustrate the general concept. Using a piecewise quadratic interpolation polynomial, we proceed exactly as in the derivation of (7.69) and (7.70), using product integration instead of regular integration. After some manipulations, one obtains methods of a general form in which expressions for the weights wnj are readily found; this leads to the approximating system
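At each step of such a scheme one scalar equation of the form Fn = c + βnn·K(Fn) must be solved, where c collects all already-known terms. A hedged sketch with a hypothetical kernel function K (not from the text): successive substitution requires |βnn·K'| < 1, while Newton-Raphson does not.

```python
import math

def solve_implicit(c, beta, K, dK, tol=1e-12, maxit=50):
    """Solve x = c + beta*K(x) by Newton-Raphson on r(x) = x - c - beta*K(x).
    Unlike successive substitution, convergence does not hinge on beta*K'
    being a contraction, which matters when beta_nn = O(sqrt(h)) is not small."""
    x = c                                   # predictor: drop the implicit term
    for _ in range(maxit):
        step = (x - c - beta * K(x)) / (1.0 - beta * dK(x))
        x -= step
        if abs(step) < tol:
            break
    return x
```

For example, with K = sin the fixed point of x = 1 + 0.5 sin x is located in a handful of iterations.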

where the functions α, β, γ are defined in terms of the product integration weights. Starting with F0 = f(0), we solve (8.33) and (8.34) successively for blocks of values (F1, F2), (F3, F4), ..., using a root-finding technique such as the Newton-Raphson method in the nonlinear case. Since the integration rule is based on quadratic interpolation, its accuracy is at least O(h³). The order of convergence is the same as the order of accuracy of the product integration rule (this will be proved in the next section). Example 8.2. The solution of the equation presented in Example 8.1, computed by applying (8.33) and (8.34) for various values of h, is given in Table 8.2. We see from these results that halving the stepsize reduces the error by about a factor of 10, which indicates an order of convergence of about 3.3. An intuitive explanation for this somewhat unexpected noninteger order can be given as follows. If there were no singularity in the integrand, the method would simply be Simpson's rule with accuracy O(h⁴). Here, near s = t the integration has accuracy O(h³), while away from this point the accuracy is O(h⁴), and we expect the overall error to have

order between three and four. For p(t, s) = 1/√(t - s) one can show that the order of accuracy of the quadratic product integration is 3.5. This obviously intuitive reasoning can be made precise.

TABLE 8.2. Results for Example 8.1 computed by the quadratic block-by-block method: the approximations for h = 0.1 and h = 0.05, the true answers at s = 0.1, 0.2, ..., 1.0, and the maximum errors (times 10⁷), which indicate an order of convergence between three and four.

8.5. A convergence proof for product integration methods. We now generalize the convergence proof of Theorem 7.2 to product integration methods. For simplicity, let us consider approximation methods of the form (8.40), so that, given starting values F0, F1, ..., Fr-1, we compute Fn successively, one at a time. As in §7.6, we define the consistency error by comparison with the exact solution. We assume that K(t, s, f(s)) satisfies the conditions of Theorem 4.5; since the kernel satisfies a Lipschitz condition, this is sufficient to guarantee the existence of a unique continuous solution for sufficiently small h. Subtracting (8.40) from (8.1) yields an equation for the error. At this point we cannot apply Theorem 7.1 directly since we can no

longer claim that wnj = O(h), as required in the proof of Theorem 7.2. To see this, consider the example discussed in §8.2, where (8.16) indicates that wn,n-1 = O(√h). To proceed, we establish a somewhat complicated subsidiary result which plays essentially the same role here as did Theorem 7.1 for the case of bounded kernels. THEOREM 8.1. Suppose that the error sequence satisfies the stated recursive bound with B > 0. (a) If (8.44) and (8.45) are satisfied, then the bound (8.46) holds. (b) If there exist integers 0 = j0 < j1 < ... < jm < jm+1 such that the stated condition holds, with 0 ≤ r < j1 and jm ≤ n < jm+1, then the corresponding bound holds. Proof. Consider the sequence pn satisfying the associated recurrence. Then clearly pn is a nondecreasing function of n, and a simple inductive argument then shows that, for ν = 0, ..., m and n = r, r+1, ..., the claimed estimate holds, so that (8.46) follows.


To prove part (b), let

Then, from (8.44) and (8.47),

from which we get immediately

From (8.46) we have

and we can now apply Theorem 7.1 to get

The inequality (8.48) then follows obviously. This result paves the way for the convergence theorem for product integration methods. THEOREM 8.2. Assume that (i) the conditions of Theorem 4.5 are satisfied, (ii) the starting values F0, ..., Fr-1 satisfy

(iii)

(iv) the interval [0, T] can be divided into a finite number of subintervals [0 = z0, z1], [z1, z2], ..., [zm-1, zm = T], such that if jv denotes the largest integer less than or equal to zv/h and wnj = 0 for j > n, the weights wnj satisfy the condition

This subdivision must be independent of h. Then (a) for sufficiently small h (8.40) defines a unique sequence Fr, Fr+1, ...,


(b)

Thus, if the product integration method is consistent and if the starting errors go to zero as h → 0, then the method converges. Proof. Assumption (iii) implies that L|wnn| can be made as small as desired by choosing a sufficiently small h. Thus, claim (a) follows from the contraction mapping theorem. For part (b), apply Theorem 8.1 to (8.43). Assumption (iv) defines the integers jv needed in (8.47); to get (8.52) we identify the bound B with the consistency error δ(h, tn). Conditions (iii) and (iv) on the weights wnj in the above theorem are easily verifiable and hold for cases of practical interest; in fact they are closely related to, and generally implied by, the conditions of Theorem 4.8. The proof, which for the sake of simplicity was carried out only for the step-by-step methods, can obviously be extended to cover the block-by-block methods. All we need do is to group the values in each block into a vector Fn, then repeat the steps of the argument using norms instead of absolute values. Exercise 8.1. Extend Theorem 8.2 to the block-by-block method described in §8.4. Notes on Chapter 8. The use of product integration for approximating integrals with unbounded integrands seems to have been first suggested by Young in [248]. While the idea is relatively simple, the error analysis does involve some complications; for details, see [1] and [78]. A number of early papers ([135], [200], [233], [249]) suggest product integration for singular integral equations. Makinson and Young [176] introduced block-by-block methods in this connection. These various original ideas were later worked on and extended in [81], [82], [95], [109], [110], [158], [167]. Degenerate kernel approximations for singular equations are considered in [37] and [38]. An equation with a singularity of a type different from that discussed here is discussed in [145].


Chapter 9

Equations of the First Kind with Differentiable Kernels

As was shown in Chapter 5, equations of the first kind with smooth kernels are equivalent to equations of the second kind. Where the kernels have explicit forms which can be differentiated, one can use either (5.2) or (5.5) with the appropriate numerical method for equations of the second kind. But often explicit expressions for the kernels are not known, so that some modifications have to be made. A considerable amount of work has been done on the so-called direct methods, that is, methods which are based on discretizing the original equation (5.1) rather than a differentiated form. This will be our main concern in this chapter, although some comments on the use of the differentiated forms will be made. One of the unexpected results on direct methods is that not all convergent numerical integration rules lead to convergent methods for the integral equation. While some of the simpler rules such as the midpoint and trapezoidal methods perform adequately, some of the well-known and more accurate rules do not. The midpoint method produces a fairly simple and convenient direct method, but the equally simple trapezoidal method is less satisfactory. The reasons for this are discussed in §9.2. Problems relating to the construction of higher order direct methods are outlined in §§9.3 and 9.4, while in §9.5 we present some brief remarks on the solution of the equation using finite difference approximations to the derivatives. Another topic, briefly considered in §9.6, is the extension of the methods to nonlinear equations. One difficulty with all equations of the first kind is that they are ill-posed, as was discussed in Chapter 5. This raises the possibility of a high sensitivity of the solution to small perturbations; this is investigated briefly in §9.7. To avoid some rather unimportant technical details, we will assume that the kernel k(t, s) and the right-hand side g(t) have unlimited differentiability.


9.1. Application of simple integration rules. For simplicity, we will consider primarily the linear case (9.1). In principle, there is no difficulty in extending the results to be given to nonlinear equations. Using the usual subdivision of [0, T] by the points ti = ih, the simplest approximation can be obtained by using the rectangular integration rule, which gives the discrete analogue (9.3) of (9.1). Given values F0, F1, ..., Fn-2, this can be solved for Fn-1. Since a basic assumption is that k(t, t) ≠ 0, we know that, for sufficiently small h, k(tn, tn-1) ≠ 0 and (9.4) will always have a solution. The resulting method is again called Euler's method. Euler's method is simple but, as always, suffers from having low accuracy. As a next step we can try to use the midpoint integration rule (9.5), where ti+1/2 = ti + h/2. The corresponding approximation method for (9.1) determines the solution Fi+1/2, which approximates f(ti+1/2). On balance, this is perhaps the most satisfactory method for solving equations of the first kind, as we will shortly see, provided some suitable conditions, such as the ones in Theorem 5.2, are satisfied. For the moment, let us consider one more simple integration rule, namely the trapezoidal method. This leads

to the scheme (9.8) for the approximate solution. The trapezoidal method needs a starting value F0; from (5.2) we see that we may take F0 = g'(0)/k(0, 0).

9.2. Error analysis for simple approximation methods. As we shall see, the error analysis of methods for equations of the first kind presents some difficulties not present for equations of the second kind. It is not possible to claim that any convergent numerical integration rule leads to a convergent approximation scheme; consequently we must take into consideration the specifics of the integration rule. We begin by analyzing the simple methods discussed in the previous section. Consider the midpoint method (9.6). Corresponding to (9.6) we write an equation using f(ti+1/2) instead of Fi+1/2; on subtracting (9.10) from (9.6) we obtain the equation (9.12) for the errors. In order to obtain a bound on the error ei+1/2 from (9.12), we difference the equation; that is, we subtract from (9.12) the same equation with n replaced by n - 1. This is a standard approach with equations of the first kind: it is, of course, just the discrete analogue of converting an equation of the first kind into the more easily treated second kind form by differentiation. By this we obtain (9.14). From this, it is an easy matter to bound ei+1/2 using the following result.
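A sketch of the midpoint scheme just described: collocating at tm+1 makes the newest midpoint value the only unknown, so each step costs a single division. The test kernel and right-hand side below are assumed for illustration (they are chosen so that the exact solution is f(t) = t, since the integral of e^(t-s)·s over [0, t] equals e^t - 1 - t).

```python
import math

def midpoint_first_kind(k, g, T, n):
    """Midpoint method for the first-kind equation
    int_0^t k(t, s) f(s) ds = g(t): collocate at t_{m+1} = (m+1)*h,
    where F[m] approximates f at the midpoint (m + 1/2)*h."""
    h = T / n
    mids = [(i + 0.5) * h for i in range(n)]
    F = []
    for m in range(n):
        t = (m + 1) * h
        acc = sum(h * k(t, mids[i]) * F[i] for i in range(m))
        F.append((g(t) - acc) / (h * k(t, mids[m])))
    return mids, F
```

The observed accuracy is second order, consistent with the error analysis that follows; extrapolation can raise it further.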

THEOREM 9.1. Assume that there exist constants c1, c2, and c3 such that the stated bounds on the coefficients of (9.14) hold for all h > 0. Then the errors ei+1/2 determined by (9.14) satisfy the bound of the theorem. Proof. Divide (9.14) by h·k(tn, tn-1/2), then apply Theorem 7.1. Condition (9.15) holds if k(t, t) is bounded away from zero, while (9.17) follows from the assumption that k(t, s) is differentiable with respect to t. Exercise 9.1. Carry out a complete proof for Theorem 9.1. Exercise 9.2. Establish an explicit error bound for |ei+1/2|. To determine when (9.16) holds we must analyze the midpoint integration rule in some detail. From elementary results of numerical integration we know that for sufficiently smooth φ(t) the midpoint rule has an error expansion in even powers of h. Consequently, if (∂³/∂t∂s²) k(t, s)f(s) is bounded, then (9.20) shows that

the consistency error is O(h²). Actually, we can say a little more: if (∂³/∂s³) k(t, s)f(s) is bounded, then the summed term in (9.19) is just a rectangular approximation to an integral, and the O(h²) term in (9.21) is the first term in an expansion. We can now write (9.12) in a form which suggests the next result. THEOREM 9.2. If k(t, s) and g(t) are sufficiently smooth, then the relation between the solution f(t) of (9.1) and its approximation Fn by (9.6) is given by (9.22), where e(t) is the solution of (9.23). Proof. Apply the steps leading from (9.14) to (9.21). Discretize (9.23) by the midpoint method, calling the resulting approximation en; difference, and apply Theorem 9.1 to show that the remaining error is of higher order. Putting together (9.25) and (9.26) completes the proof of (9.22). The result (9.22) immediately suggests the use of Richardson's extrapolation to improve the accuracy of the midpoint method. A slight change has to be made to the usual process, since halving the stepsize will result in a

different set of points at which approximations to f(t) are computed. The difficulty can be overcome by dividing the stepsize by three. If we let Y(t, h) denote the approximate value of f(t) computed with stepsize h, then the O(h²) term can be eliminated from Y(t, h) and Y(t, h/3); the extrapolated values YE(t) so obtained therefore have fourth order accuracy. Example 9.1. The equation (9.29) has the exact solution f(t) = t. Results from the midpoint method and extrapolated values are shown in Table 9.1.

TABLE 9.1. Solution of (9.29) by the midpoint method and extrapolation: midpoint values for two stepsizes, the extrapolated values, and the exact solution f(t) = t, at the midpoints ti+1/2.

In a similar fashion one can establish convergence theorems and error estimates for Euler's method, but we will not pursue this as the method is of little interest. Exercise 9.3. Show that Euler's method converges with order one. For the trapezoidal method the convergence proof is somewhat lengthy; the arguments are somewhat technical and we refer the reader to the literature (for example, Linz [163]). What can be shown is that, under the appropriate assumptions on k(t, s) and g(t), the following holds. THEOREM 9.3. If k(t, s) and g(t) are sufficiently smooth, then the solution computed by (9.8) and (9.9) satisfies an error relation in which e(t) is the solution of (9.23) and v(t) is some other function (of no immediate interest). Exercise 9.4. Prove Theorem 9.3. The conclusion which can be drawn from this is that the error in the trapezoidal rule has an oscillating component of magnitude O(h²). While it is a fact that the solution obtained via the trapezoidal method has an accuracy of order of magnitude O(h²), the results are not immediately suitable for extrapolation; actually, the oscillating component limits the usefulness of the method altogether. Another way of looking at the difference between the midpoint and the trapezoidal methods is to consider their stability. As in Chapter 7 we can use here one of two possible approaches. The first of these involves computing the dominant term in the error expansion; for numerical stability one requires that this term satisfy an integral equation having the same kernel as the original equation. We have already done so for the midpoint method, and (9.22) and (9.23) show that it is numerically stable. For the trapezoidal method we would have to make a detailed investigation of the function v(t) in (9.31). Instead of doing this, we will consider the second definition of numerical stability; it is not only easier but also more instructive, and we now take this path. Here one uses a simple equation whose answer is known and whose approximation can be completely analyzed. A little thought convinces us that the appropriate equation of the first kind is

the equation (9.32) with kernel k(t, s) = 1 + A(t - s) and right-hand side g(t) = t, since after differentiating we obtain exactly the equation used in §7.5. Equation (9.32) has the solution f(t) = e^(-At), so with A > 0 and fixed stepsize h we expect that, for a stable method, the computed values decay in the same way. Consider first the midpoint method. Putting k(t, s) = 1 + A(t - s) and g(t) = t into (9.6), we get the discrete equations; differencing this once yields a simpler relation, and after a further differencing we obtain a short recurrence. Thus (9.34) is satisfied for all A > 0, showing that the method is A-stable. For the trapezoidal method with the above k(t, s) and g(t), differencing (9.8) once gives an intermediate relation, and differencing once more yields a two-step recurrence whose characteristic polynomial has a root of magnitude

greater than one for A, h > 0, so that the trapezoidal method must be considered numerically unstable. To summarize what we have found so far: the midpoint method is a simple and effective method for solving Volterra equations of the first kind; it has good stability properties, and its accuracy can be improved by extrapolation. The trapezoidal method, although sometimes suggested in earlier references, is numerically unstable. Since in general its accuracy is lower than that of the midpoint method, there seems to be no reason at all to use it. 9.4. Difficulties with higher order methods. It is tempting to try to construct higher order methods for the solution of equations of the first kind by using some higher order Gregory or Newton-Cotes rules, as was done in §7.2 for equations of the second kind. Somewhat surprisingly, this does not work. Take for example the fourth order Gregory method and apply it to the simple equation (9.41). The approximating solution is then given by (9.42), where the weights wni are as defined in Chapter 7. By differencing (9.42) we obtain a linear difference equation whose solution is (9.44), where c1, c2, c3, c4 depend on the starting values, and ρ1, ρ2, ρ3, ρ4 are the roots of the characteristic polynomial (9.45). If a small initial perturbation to (9.43) is to be propagated stably, then the characteristic equation must satisfy the classical condition well known from ordinary differential equations: the roots of (9.45) must be inside or on the unit circle in the complex plane, and any root of modulus unity must be simple. A simple computation, however, will show that (9.45) has a root near ρ = -2, making this method nonconvergent and hence practically useless.
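The contrast between the two simple methods can be observed numerically on the test equation with kernel 1 + A(t - s) and g(t) = t, whose exact solution is f(t) = e^(-At). The sketch below (with arbitrarily chosen A and h, so that Ah = 0.5) runs both schemes: the midpoint values stay bounded and decay, while the trapezoidal values develop an exponentially growing oscillation.

```python
import math

A, T, n = 10.0, 3.0, 60                 # Ah = 0.5; values chosen for illustration
h = T / n
k = lambda t, s: 1.0 + A * (t - s)      # test kernel
g = lambda t: t                         # exact solution: f(t) = exp(-A t)

# midpoint method: solve for the newest midpoint value at each step
mid = []
for m in range(n):
    t = (m + 1) * h
    acc = sum(h * k(t, (i + 0.5) * h) * mid[i] for i in range(m))
    mid.append((g(t) - acc) / (h * k(t, (m + 0.5) * h)))

# trapezoidal method, started with F_0 = g'(0)/k(0, 0) = 1
trap = [1.0]
for m in range(1, n + 1):
    t = m * h
    acc = 0.5 * h * k(t, 0.0) * trap[0] \
        + sum(h * k(t, i * h) * trap[i] for i in range(1, m))
    trap.append((g(t) - acc) / (0.5 * h * k(t, t)))
```

Inspecting the two sequences shows the midpoint values tracking e^(-At) while the trapezoidal values alternate in sign with rapidly growing magnitude, in line with the characteristic-root analysis above.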

This difficulty is not limited to the fourth order Gregory method; it also occurs for the other higher order Gregory methods as well as the Newton-Cotes formulas, such as Simpson's rule and the three-eighths rule. In fact, it is known that none of the standard numerical integration schemes yield convergent methods for Volterra equations of the first kind. This is one of the relatively rare instances in numerical analysis where a quite plausible approach yields a useless method.

The observation that the standard numerical integration methods are unsatisfactory has generated some interest in nonstandard rules especially designed so that they will work for equations of the first kind. The derivation of useful formulas of this type is quite lengthy, so we will sketch this topic only in broad outline. The trick is to relax the accuracy requirement on the integration rules. Suppose that we use the three-eighths rule in composite form to approximate an integral over [0, t_n]. As given, this can be done only when n = 3m + 1; for n = 3m + 2 and n = 3m + 3 some adjustment has to be made, in a way similar to the use of Simpson's rule in Chapter 7. If we did this, however, the result would be a combination of standard Newton-Cotes formulas and hence nonconvergent. Let us instead use the scheme depicted in Fig. 9.1, where the parameters α_1, α_2, α_3, α_4, α_5 and β_1, β_2, β_3, β_4, β_5, β_6 are for the moment undetermined.

FIG. 9.1. Weights for the modified three-eighths rule.

One way to choose the α's and β's is to make the rules exact for polynomials up to a certain degree. The first rule involves five parameters and could be made exact for polynomials up to degree four, while the β's could be selected to integrate polynomials of degree five exactly. If we require only that both rules are accurate for polynomials of degree three, we will be left with three undetermined parameters, which we can then try to choose so that the resulting method is convergent. It takes a great deal of work to show that this can actually be done and to find a set of acceptable α's and β's.

We will not pursue this further but simply claim that an acceptable set of values for α_1, ..., α_5 and β_1, ..., β_6 can be found (see the notes at the end of this chapter for the source of the numerical values, which are given to five decimal places) and that these parameters give a fourth order convergent method.

A somewhat different motivation for the investigation of higher order methods comes from reconsidering the midpoint method (9.6). Differencing (9.6) applied to the simple case k(t, s) = 1, we see that the method, when solved for F_{n-1/2}, reduces to a numerical differentiation formula. Since the midpoint method is very satisfactory, one can conjecture that it is desirable for a general method to reduce to a numerical differentiation formula when used with k(t, s) = 1. For a general backward differentiation formula we might then try to choose the method so that F_n turns out to be given by the corresponding differentiation rule. Ignoring complications arising from possible starting values, the F_n are determined by (9.48); if we look at this as a matrix equation, then the weight matrix W must be the inverse of a triangular matrix A determined by the differentiation formula.

In this way it is possible to compute the weights w_{ni} for any chosen differentiation formula of the form (9.48). It can be shown that most backward differentiation formulas result in convergent methods for integral equations of the first kind. Unfortunately, these methods lead to formulas which are not intuitively obvious, are hard to derive, and are even harder to analyze. Furthermore, they suffer from the disadvantage of requiring starting values. Many of these problems can be overcome by resorting to block-by-block methods.

9.4. Block-by-block methods. As we saw in the previous section, it is possible to construct higher order methods through the use of nonstandard integration rules. The details, which are not trivial, can be found in the literature (cf. Taylor [228]). It turns out that the adaptation of the block-by-block methods described in § 7.6 to equations of the first kind is a relatively easy matter. As in § 7.6, we describe the approach using a specific method which illustrates all the essential features of the general case. Each interval [t_n, t_{n+1}] is subdivided into three parts by the points t_{n1} and t_{n2}. The approximate solution will be computed at the points t_{n1}, t_{n2} and t_{n+1}, and we will use the symbols F_{n1}, F_{n2} and F_{n+1} to denote these approximations. To obtain these unknown values we replace the integral by a numerical integration using these points and satisfy the equation at t_{n1}, t_{n2} and t_{n+1}. For example, we may use the integration formula (9.53), which takes an analogous form when scaled to the smaller intervals. We want to apply these formulas to (9.1), so that φ(s) will be identified with k(t, s)f(s). Since we will have approximations to f(t) only at t_{n1}, t_{n2} and t_{n+1}, everything must be expressed in terms of these values; we therefore apply to f(t) a quadratic interpolation.

Using these approximations in the original equation and satisfying the result at t_{n1}, t_{n2} and t_{n+1}, we are led to the system (9.58)-(9.60). These three simultaneous equations are solved for n = 1, 2, ... to yield the block of three unknowns F_{n1}, F_{n2} and F_{n+1}. A proof that the method is of third order is straightforward; we sketch the required steps below. The numerical results in Example 9.2 show the effectiveness of the method.

Example 9.2. Consider the test equation (9.61), which has exact solution f(t) = t. The errors generated by the use of equations (9.58)-(9.60) for this case are shown in Table 9.2. The results suggest that the order of convergence of the method is three.

TABLE 9.2. Error in the approximation of (9.61) by the method of (9.58)-(9.60), at t = 0.2, 0.4, ..., 2.0. (The tabulated errors are of magnitude 10^-3 for h = 0.2, 10^-4 for h = 0.1, and 10^-5 for h = 0.05, consistent with third order convergence.)

THEOREM 9.4. Provided k(t, s) and g(t) are sufficiently smooth, the approximation defined by equations (9.58)-(9.60) converges to the true solution of (9.1) with order three.

Proof. Let e_{n1} = F_{n1} - f(t_{n1}), e_{n2} = F_{n2} - f(t_{n2}), and e_{n+1} = F_{n+1} - f(t_{n+1}). Proceeding in the usual manner, we satisfy the original equation at t_{n1}, t_{n2} and t_{n+1} and subtract the result from (9.58)-(9.60). This gives the error equations (9.62)-(9.64), where the Δ_{ni} are the errors due to numerical integration and interpolation. If we now introduce vectors collecting the three errors of each block, then (9.62)-(9.64) can be written in matrix form, where the A_n and K_{ni} are 3 x 3 matrices of obvious form. From this system we now subtract (9.64) with n replaced by n - 1; the result can be written as (9.65), where the forms of the vectors p_n and the matrices ΔK_{ni} are easily written down. To complete the proof we must now verify the following claims:

(a) For sufficiently small h < h_0 the matrices A_n have inverses which are bounded independently of h.

(b) For sufficiently smooth k(s, t) the matrices ΔK_{ni} remain bounded independently of h.

(c) For sufficiently smooth k(t, s) and g(t) the local errors Δ_{ni} are of the appropriate order.

The verification of these claims is elementary and we will leave it as an exercise. We can then multiply (9.65) by the inverse of A_n and apply Theorem 7.4; this shows that the errors e_{ni} go to zero with the claimed third order, completing the proof.

Exercise. Verify claims (a)-(c) in Theorem 9.4; then establish the resulting error bound explicitly.

The idea underlying the above algorithm is easily generalized. The integration method (9.53) may seem somewhat artificial; a more immediate approach would use values at both endpoints of the interval, that is, t_{n1} = t_n and t_{n2} = t_{n+1}. This can be done, and another class of methods is obtained in this way, but the analysis then has to be modified since the above arguments break down. (One of the equations reduces to 0 = 0 and has to be replaced by a requirement of continuity of the solution at t_n.) It is known that such methods are convergent, but they have a tendency to be numerically unstable, and the points t_{ni} have to be selected carefully to avoid this. This potential instability should come as no surprise if we note that the trapezoidal method is a special example of this type with ν = 2.

A generalization of the block-by-block methods can be made by using piecewise polynomial approximations to f(t). We introduce intermediate points t_{n1}, ..., t_{nν} in each interval; then, using appropriate numerical integration and interpolation, we can write down a system of simultaneous equations for F_{n1}, F_{n2}, ..., F_{nν} by satisfying the resulting equations at points t_{ni} in [t_n, t_{n+1}]. The coefficients of the approximating polynomial are determined by collocation. Convergence can then be established by repeating the proof of Theorem 9.4 in this more general context. Not only are methods of this type convergent, it can also be shown that they are numerically stable (although the e_{ni} have different expansions, resulting in small oscillations within each interval).

Setting up such methods involves the evaluation of moment integrals, either analytically or numerically. It can be shown that, if these integrals are evaluated by certain specific numerical methods, one obtains the block-by-block methods described above. In connection with these methods it should be remarked that the most successful ones are those for which t_{n1} > t_n, so that one imposes no continuity conditions on the solution at t_n. It seems that the more smoothness one requires of the solution, the worse the algorithm becomes: using very smooth approximations such as cubic splines can lead to nonconvergent algorithms unless special care is taken, although it is not easy to put this observation on a rigorous basis.

9.5. Use of the differentiated form. If it is possible to perform analytically the differentiations for converting (9.1) into an equation of the second kind, then it is certainly appropriate to use any of the methods described in Chapter 7 to solve equations of the first kind. Even if the analytic differentiation cannot be performed, one can use numerical differentiation formulas to obtain an approximate solution; provided the numerical differentiation is done carefully, the accuracy of the result will be affected only by a small amount. (For a fuller discussion, see § 9.7.) The differentiated form of (9.1) is

k(t, t)f(t) + ∫_0^t [∂k(t, s)/∂t] f(s) ds = g'(t).

Let us use the notation G(t) = g'(t)/k(t, t) and D(t, s) = [∂k(t, s)/∂t]/k(t, t). Then (9.69) becomes the standard equation of the second kind

f(t) = G(t) - ∫_0^t D(t, s)f(s) ds.

If we now pick some method from Chapter 7 to compute an approximate solution of this equation, the results of that chapter apply directly.

Assuming no starting errors, the error will then satisfy a bound of order h^p, where p is the order of the method chosen. If G(t) and D(t, s) are given exactly, nothing more has to be said. But if G(t) and D(t, s) can be obtained only approximately, then the effect of this error on the accuracy of the final results has to be considered. Let us denote by D_{ni} an approximation to D(t_n, t_i) and by G_n an approximation to G(t_n), and let F̃_n be the solution computed by (9.75), the scheme obtained from (9.73) by replacing G and D with these approximations.

THEOREM 9.5. Suppose the approximations satisfy |G_n - G(t_n)| ≤ η_1 and |D_{ni} - D(t_n, t_i)| ≤ η_2 for all n and i, and that F̃_i = F_i for i = 0, 1, ..., r - 1. Then the difference between the two computed solutions satisfies the bound (9.79).

Proof. Set the difference of the two solutions and subtract (9.75) from (9.73). This gives a difference inequality in which D = max |D_{ni}| appears; since by assumption the data perturbations are bounded by η_1 and η_2, the result (9.79) follows by applying Theorem 7.1.

Exercise. Improve the order of magnitude estimate (9.79) by establishing an explicit bound for |F_n - F̃_n| in terms of η_1, η_2 and max |F_i|.

The not unexpected conclusion is that, in order to preserve the order of convergence p of the method, the numerical differentiation formulas for approximating ∂k(t, s)/∂t and g'(t) should also have order of accuracy p. Due to the necessity of computing numerical derivatives, the formulas are more complicated and the computations somewhat more time consuming than for the direct methods. Not much has been said about this approach in the literature on the numerical solution of Volterra integral equations of the first kind; the reason is probably that it offers little apparent advantage over the direct methods.

9.6. Nonlinear equations. Although we have concentrated on the linear case, the methods described can be extended to nonlinear problems. For a nonlinear equation of the first kind, equations (9.58)-(9.60) become the system (9.81)-(9.83). In principle, these three simultaneous nonlinear equations can be solved for the unknowns F_{n1}, F_{n2} and F_{n+1} by standard numerical techniques, such as Newton's method. The only difficulty is to show that the system (9.81)-(9.83) has a unique solution for sufficiently small h. This is a somewhat tedious matter which can be attacked using known results for nonlinear equations, in particular the Kantorovich theorem, which can be found in books on advanced numerical analysis. The conclusion is that, if the conditions of Theorem 5.2 are satisfied, then the system (9.81)-(9.83) does in fact have a unique solution. A direct approach of this kind, via a block-by-block method, is in general easier than first converting the nonlinear problem to an equation of the second kind.
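The nonlinear stepwise computation can be sketched in its simplest form: a midpoint-type rule combined with a scalar Newton iteration at each step (the kernel K(t, s, u) = u^3, the test data, and the starting guess below are hypothetical choices for illustration, not taken from the text):

```python
def solve_nonlinear_first_kind(K, dKdu, g, T, n, newton_steps=20):
    """Midpoint-type scheme for int_0^t K(t, s, f(s)) ds = g(t):
    at each collocation point t_m solve the scalar nonlinear equation
    h*K(t_m, t_{m-1/2}, F) = g(t_m) - h*sum_{i<m-1} K(...) by Newton's method."""
    h = T / n
    F = []
    for m in range(1, n + 1):
        t = m * h
        rhs = g(t) - h * sum(K(t, (i + 0.5) * h, F[i]) for i in range(m - 1))
        u = F[-1] if F else 0.5  # starting guess (would need care in general)
        s_mid = (m - 0.5) * h
        for _ in range(newton_steps):
            r = h * K(t, s_mid, u) - rhs          # residual of the scalar equation
            u -= r / (h * dKdu(t, s_mid, u))      # Newton update
        F.append(u)
    return F

# Test kernel K(t,s,u) = u**3 with f(t) = 1, hence g(t) = t.
F = solve_nonlinear_first_kind(lambda t, s, u: u**3,
                               lambda t, s, u: 3 * u * u,
                               lambda t: t, 1.0, 25)
err = max(abs(v - 1.0) for v in F)
```

Using the previous value as the Newton starting guess is the natural choice here, since the solution varies slowly from one collocation point to the next.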

9.7. Some practical considerations. We now briefly sum up our views on the question of method selection in practice. Although there is a variety of methods for the numerical solution of Volterra integral equations of the first kind, any given practical problem may impose limitations or requirements which make one or another of these methods more suitable than the rest. If we wish to tackle the first kind equation (9.1) directly, the midpoint method is quite convenient and, together with extrapolation, can give good accuracy. The trapezoidal method seems less satisfactory, since it is numerically unstable. When high accuracy is required, the block-by-block methods may be most suitable. If we can explicitly perform the differentiations required to get (9.69), the best method is likely to be the numerical solution of the equivalent equation of the second kind. If explicit differentiation cannot be performed, but ∂k(t, s)/∂t and g'(t) can be tabulated to adequate accuracy, it may still be reasonable to solve (9.69) instead of (9.1). Methods based on (9.73) have received little attention, since they require numerical differentiation; when high order differences are used, this becomes cumbersome near the ends of the interval of integration. Some of the more recently developed methods, such as those based on piecewise polynomial or spline approximations or step-by-step methods using nonstandard quadratures, look promising. They appear to be somewhat unwieldy, however, and no systematic study has been done to show that the increased complexity is compensated by additional efficiency; it is not clear whether such methods have any advantage at all.

There is one point not dealt with so far that merits some attention. In proving convergence for the various methods we have ignored the possible presence of errors other than those arising from the replacement of the integral by a sum. Additional errors are, however, usually present: there is of course always some round-off error whenever numerical computations are performed, and if equation (9.1) arises from an experimental situation, then observational errors will affect the solution. Therefore we must consider the effect of small perturbations on the computed solution. One reason why this point needs to be of concern is that in some sense (9.1) is not a well-posed problem. Without making a precise definition of this term, we can say that an equation is well-posed if a small change in the parameters of the problem can cause only a small change in the solution. Now it is possible to perturb g(t) in (9.1) by a small amount in such a way as to change g'(t) by an arbitrarily large amount. Thus, because of the equivalence between (9.1) and (9.69), a small change in g can cause a very large change in f, and the problem is not well-posed.

Problems which are not well-posed are generally difficult to solve numerically, since small errors can be magnified significantly; the solution of such an equation calls for special methods, usually involving some form of regularizing or smoothing of the solution. The classical case of an ill-posed problem is the Fredholm integral equation of the first kind, and several authors have recognized the difficulty there and proposed the use of some classical regularization techniques. The problem is not nearly as serious for Volterra equations as it is for the Fredholm case, however: while error magnification does occur, it is possible to keep it more or less under control.

To understand this a little better, let us take the midpoint method as an example; this simplifies the analysis and illustrates an effect common to the other methods. Reconsider (9.6), perturbing the right-hand side by an amount η_n. The corresponding solution F̃_{n-1/2} is then determined by (9.84).

THEOREM 9.6. Let F_{n-1/2} and F̃_{n-1/2} denote the respective solutions of (9.6) and (9.84). If the perturbations satisfy |η_n| ≤ η and if the conditions of Theorem 9.2 are satisfied, then, for h ≤ h_0, the difference |F_{n-1/2} - F̃_{n-1/2}| satisfies the bound (9.86), which is proportional to η/h.

Proof. Subtract (9.6) from (9.84); then difference the resulting equation and divide by |k(t_n, t_{n-1/2})|. Applying Theorem 7.1 and inequality (7.18) then completes the argument.
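The worst-case behavior in this perturbation bound is easy to reproduce numerically. In the sketch below (our own illustration with k ≡ 1 and g(t) = t^2/2, so that f(t) = t), the data are perturbed by η_n = (-1)^n η and the two midpoint solutions are compared; the maximum difference comes out essentially at 2η/h:

```python
# Worst-case perturbation experiment for the midpoint method with k = 1:
# perturb g(t_n) by eta_n = (-1)^n * eta and compare the computed solutions.
h, n, eta = 0.01, 100, 1e-6
g = lambda t: 0.5 * t * t          # data for exact solution f(t) = t

def midpoint(gvals):
    F = []
    for m in range(1, n + 1):
        F.append(gvals[m] / h - sum(F))   # k = 1: h * sum_i F_i = g(t_m)
    return F

clean = midpoint([g(m * h) for m in range(n + 1)])
noisy = midpoint([g(m * h) + ((-1) ** m) * eta for m in range(n + 1)])
change = max(abs(a - b) for a, b in zip(clean, noisy))
```

Differencing the perturbed scheme shows why: consecutive perturbations of opposite sign combine to 2η, and the division by h then produces the magnification factor 2/h.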

By taking the special case k(s, t) = 1 and η_n = (-1)^n η, we see that the bound (9.86) can in fact be attained; that is, the error is magnified by a factor 2/h. This error magnification is inherent in the problem and is not due to any defect of the midpoint method: any other algorithm we might try leads to similar (or perhaps worse) results. For example, if we approach the problem via (9.69), then numerical differentiation will magnify the errors by a factor of O(1/h). Thus, whatever method we use, there will be error magnification; fortunately it is of limited extent and generally manageable.

From (9.86) we can draw some general conclusions. On a typical computer the error due to round-off might be of order 10^-8 or 10^-9, and practical limitations make it unlikely that one would use a stepsize much smaller than 10^-3, so that we should be able to obtain an accuracy of at least 10^-5. Unless very high accuracy is required, round-off error is therefore not likely to present a serious problem; the numerical examples usually show an accuracy several orders of magnitude below the machine accuracy, and round-off is therefore not significant. This perhaps also explains why the difficulty has largely been ignored by most writers on this subject.

The effect of experimental errors may be more serious. If an error is about 1% of the true value, then for h = 10^-2 the effect on the solution may be of the same order of magnitude as the solution itself, completely invalidating the results. In this situation considerable care must be taken. One precaution would be to smooth all data which is likely to be contaminated with experimental errors; whether this by itself is sufficient is debatable. So far, this aspect of Volterra equations of the first kind has received little attention, and what the best methods are when large experimental errors are present appears to be an open problem.

Notes on Chapter 9. Although some of the older references [69], [87] make brief mention of numerical methods, no systematic study was made until Jones [146] studied the trapezoidal method for convolution equations. The start of the development of a general theory began with the work of Kobayashi [153] and Linz [163]; the trapezoidal method is analyzed in detail in [153], while in [163], [165] the midpoint method is developed. The fact that many of the higher order integration rules lead to nonconvergent methods was pointed out in [163]; later, Gladwin and Jeltsch [114] showed that this is true for all standard interpolatory integration rules. No higher order direct methods were known until de Hoog and Weiss [79], [80] produced an analysis for block-by-block methods. Brunner [43], [49], [52], [53], [55], [56] analyzed closely related collocation methods. A study of spline approximations can be found in [94]. The use of product integration in connection with some simple methods is explored in [3] and [168].

A thorough analysis of nonstandard modified Newton-Cotes formulas is given in [128], [129], [130]; the numbers quoted in § 9.3 were taken from this reference. Taylor [228] studied the relation between methods for first kind equations and numerical differentiation; further investigations of this and related ideas are in [12], [113], [115], [145], [183]. The difficulty arising from the presence of large errors and their effect on standard direct methods is pointed out in [169]. Some authors [87], [219] also recognize this problem and suggest the use of classical regularization. However, the work of Radziuk [214] on least squares methods gives an indication that regularization may sometimes be unnecessary; Radziuk's empirical observations are given some theoretical support in [170].

Chapter 10

Equations of the Abel Type

A logical continuation of the development of the previous chapters would be to consider equations of the first kind with nondifferentiable or unbounded kernels. The completely general case is quite difficult, and virtually nothing has been done on it. Fortunately, practical situations almost always lead to cases in which the singularity is of the form (t - s)^{-μ}, 0 < μ < 1. We therefore consider the generalized Abel equation (10.1), where 0 < μ < 1 and k(t, s) is a smooth function. To assure the existence of a solution, we make the assumption that k(t, t) ≠ 0 for all values of t. Of considerable practical interest is the simple Abel equation, for which k(t, s) = 1.

The explicit inversion formula discussed in Chapter 5 permits a straightforward numerical approach for the simple Abel equation, using product integration; this is discussed in § 10.1. For the more general case (10.1) an immediate discretization, similar to the techniques used in Chapter 7, is readily made. While the formulas are easily derived, their analysis has proved to be a difficult matter: a number of isolated, often technically difficult, results have been obtained, but of all the numerical techniques described in this book, this is the least understood. In §§ 10.2 to 10.4 we discuss some of the methods and provide an introduction to the less complicated aspects of the analysis.

Abel equations arise most frequently in reconstruction and inference problems, as indicated by the examples in § 2.4. Thus the right-hand side of (10.1) may be known only to within an experimental inaccuracy. Since Abel's equation is somewhat ill-posed, this matter must be considered carefully; § 10.5 is devoted to some of the practical aspects of solving Abel's equation.

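Before turning to the details, it may help to see the basic product-integration idea in code. The sketch below (our own illustration, not the book's program) treats the kernel (t - s)^{-μ} by freezing f at its panel midpoint value and integrating the singular factor exactly; for the test we take μ = 1/2 and g(t) = 2√t, whose exact solution f ≡ 1 the piecewise constant approximation reproduces exactly:

```python
import math

def abel_midpoint(g, T, n, mu=0.5):
    """Product midpoint method for int_0^t (t-s)^(-mu) f(s) ds = g(t):
    f is taken piecewise constant on each panel and the singular factor
    is integrated exactly, giving the weights
    w_{ni} = ((t_n - t_i)^(1-mu) - (t_n - t_{i+1})^(1-mu)) / (1 - mu)."""
    h = T / n
    F = []
    for m in range(1, n + 1):
        t = m * h
        w = [((t - i * h) ** (1 - mu) - (t - (i + 1) * h) ** (1 - mu)) / (1 - mu)
             for i in range(m)]
        acc = sum(w[i] * F[i] for i in range(m - 1))
        F.append((g(t) - acc) / w[m - 1])
    return F

# f(t) = 1 gives g(t) = 2*sqrt(t) for mu = 1/2.
F = abel_midpoint(lambda t: 2.0 * math.sqrt(t), 1.0, 50)
err = max(abs(v - 1.0) for v in F)
```

For nonconstant solutions the weights are no longer matched exactly and a genuine discretization error appears, with the slower convergence discussed later in this chapter.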
10.1. Solving a simple Abel equation. As shown in Chapter 5, the simple Abel equation has the solution (10.3), or, with g(0) = 0, the alternative form (10.4). Both alternatives involve numerical differentiation, a process which may cause error magnification. Apart from this, the approximate solution of (10.3) and (10.4) by numerical differentiation and product integration is an elementary matter: one can either differentiate g numerically first and then evaluate the resulting integral by product integration, or alternately use product integration first to evaluate the integral and then differentiate the result numerically. For example, one might use the differentiation formula (10.5), which for φ in C^(3)[t_{2i}, t_{2i+2}] and t_{2i} ≤ t ≤ t_{2i+2} is an approximation with error O(h^2). This is straightforward, but some caution has to be used: as is obvious from (10.5), a perturbation in φ(t) of order ε can cause an error in the approximate derivative of order ε/h. The simple approach is therefore useful only when an error of magnitude O(ε/h) is within acceptable limits.

Equations (5.38) and (5.39) show that the general Abel equation (10.1) can be converted into an integral equation with a bounded and smooth kernel. However, since the kernel in (5.39) is rather complicated, there seems to be little practical usefulness in this observation. A more direct approach, using a discretization of (10.1) and product integration, is much easier.

10.2. The midpoint and trapezoidal methods for general Abel equations.

If for t_i ≤ s ≤ t_{i+1} we approximate k(t, s)f(s) by the constant function k(t, t_{i+1/2})f(t_{i+1/2}) and use product integration to evaluate the integral, we get the approximation method (10.8), with weights obtained by integrating the singular factor exactly over each subinterval. This is the analogue of the midpoint method (9.6). Since by assumption k(t, t) ≠ 0, equation (10.8) has a solution for sufficiently small h, and the solution can be computed step by step. The midpoint method for Abel's equation is simple, but its behavior is no longer as satisfactory as it is for equations with smooth kernels.

Example 10.1. Consider the trial equation (10.9), whose exact solution is f(t) = t. Numerical solutions using the midpoint method with several values of h are shown in Table 10.1. To estimate the order of convergence and perhaps improve the accuracy of the solution, Aitken's method, described in Chapter 8, was tried. Allowing for the fact that in the midpoint method the stepsize has to be reduced by a factor of 3 to allow extrapolation, the order of convergence p can be estimated, with Y(t, h) denoting the approximation at t using stepsize h, by

p ≈ log[(Y(t, h) - Y(t, h/3)) / (Y(t, h/3) - Y(t, h/9))] / log 3.

TABLE 10.1. Approximations for (10.9) by the midpoint method, for stepsizes h = 1/10, 1/30 and 1/90, with estimated orders of convergence p and extrapolated values YE. (The estimated orders range from about 1.35 to 1.67.)

Once p is estimated, an extrapolated value YE is computed from (10.12) and (10.13) in the usual Aitken fashion. The computed values of p in Table 10.1 strongly suggest that the method has (at least for this example) an order of convergence 3/2, and the extrapolated values have improved accuracy.

To develop more accurate methods requires a better approximation to k(t, s)f(s). Actually, we have already discussed this in Chapter 8 in connection with equations of the second kind, for example in § 8.3 on the product trapezoidal method; all we need to do is to adapt the relevant formulas to Abel equations, taking into account that we are dealing with equations of the first kind. Working along these lines, we find that an appropriate approximating equation for (10.1) is (10.10), where the α's and β's are given by (8.29) and (8.30). This method requires a starting value F_0. One possible way to get this is to consider (10.1) in the limit as t approaches zero: differentiating (5.38) and letting x → 0, we obtain the value of f(0) in terms of the data. If f(0) is bounded, we simply set F_0 = f(0) and rearrange (10.10) so that it can be used to obtain successive values of F_n. Suppose, on the other hand, that the behavior of g near t = 0 is such that, as indicated in the proof of the corresponding existence theorem in Chapter 5, the solution is unbounded at t = 0. In this case different techniques must be used: we can introduce a new function φ, obtained by multiplying f by a suitable power of t, such that φ(t) is bounded.

The modified equation for the bounded function φ can then be treated by product integration. Working along the lines indicated above, we can find any number of alternative formulas by using various product integration rules; this is fairly straightforward and we will not pursue it.

Example 10.2. Consider equation (10.15). Approximate values computed by (10.10) for several stepsizes are shown in Table 10.2. The observed error indicates a convergence of order two.

TABLE 10.2. Approximate solution of (10.15) by the method (10.10), for h = 0.2, 0.1 and 0.05, compared with the exact solution. (The tabulated exact values include 0.500000, 0.555556, 0.625000 and 0.833333.)

10.3. Block-by-block methods. The idea behind the block-by-block methods can also be used to obtain higher order self-starting methods. The method described in § 8.4 can immediately be adapted to the generalized Abel equation, making use of the fact that our equation is linear. If in (8.33) and (8.34) we omit the term on the left-hand side and rewrite the rest, we obtain an approximation defined by (10.17).

In (10.17), A is a 2 x 2 matrix whose elements, as well as the values on the right-hand side, are formed from the weights w_j, α_j and γ defined in (8.35)-(8.39), with p(t, s) = (t - s)^{-μ}.

TABLE 10.3. Observed errors for the solution of (10.24) by method (10.17), for h = 0.1 and h = 0.05, at t = 0.1, 0.2, ..., 1.0.

Example 10.3. The method of (10.17) was used on equation (10.24), whose solution is known in closed form. The observed errors for various stepsizes are shown in Table 10.3.

10.4. Some remarks on error analysis. While it is easy to think of any number of plausible methods for the approximate solution of Abel equations, considerable difficulty arises when we try to analyze the results. The techniques used in Chapter 9 for equations with smooth kernels are not easily modified for this case. The difficulty arises because the weights w_{ni} no longer follow a simple pattern, so that differencing does not produce an equation to which the basic Theorem 7.1 can be applied. The recurrence relations which do arise must be analyzed using methods which are extremely complicated. Several approaches have been tried, but a clear-cut preferred choice has not yet emerged. We will briefly outline here a method (without rigorously justifying it) which seems to hold some promise.

To avoid as many technical difficulties as possible, let us take the simplest case, namely equation (10.1) with k(t, s) = 1 and μ = 1/2, solved by the product midpoint method of the previous sections. By the usual arguments one can deduce that the error e_{n-1/2} = F_{n-1/2} - f(t_{n-1/2}) satisfies a recurrence in which δ(h, t), the error caused by the use of the product midpoint approximation, appears as the driving term. From this we obtain the basic error recurrence (10.30).

The hard question is to obtain a bound on the solution of this recurrence relation. Since there is no factor h in front of the summed term, establishing such a bound requires a detailed investigation of the weights w_{ni}. First, notice that w_{ni} is a function only of n - i. We can therefore write (10.30) in the form (10.31), with coefficients a_i, given by (10.33), that depend only on the index difference; in matrix notation this becomes (10.34), and the analysis then hinges on being able to find a bound for the inverse of the matrix A. The matrix A has a rather special form and is called a semicirculant matrix: not only is it triangular, but its coefficients depend only on the difference between the row and column numbers. There exist some classical results which give information on the inverses of such matrices. If we let B = A^{-1}, then B is also a semicirculant matrix, whose entries b_i can be obtained by a step-by-step formula.

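The step-by-step computation of the b_i is just the usual recursion for the coefficients of a reciprocal power series, and it is easy to check numerically that the resulting semicirculant matrix is indeed the inverse. In the sketch below, the sample coefficients a_i = √(i+1) − √i, reminiscent of the μ = 1/2 product weights, are our own choice for the test:

```python
import math

def reciprocal_coeffs(a, n):
    """Coefficients b_0..b_{n-1} of the reciprocal formal power series:
    (sum a_i x^i)(sum b_i x^i) = 1, via the step-by-step convolution
    formula b_0 = 1/a_0, b_m = -(1/a_0) * sum_{i=1}^m a_i b_{m-i}."""
    b = [1.0 / a[0]]
    for m in range(1, n):
        b.append(-sum(a[i] * b[m - i] for i in range(1, m + 1)) / a[0])
    return b

n = 6
a = [math.sqrt(i + 1) - math.sqrt(i) for i in range(n)]  # sample coefficients
b = reciprocal_coeffs(a, n)

# Build the lower triangular semicirculant matrices and multiply them.
A = [[a[r - c] if r >= c else 0.0 for c in range(n)] for r in range(n)]
B = [[b[r - c] if r >= c else 0.0 for c in range(n)] for r in range(n)]
P = [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]
dev = max(abs(P[r][c] - (1.0 if r == c else 0.0))
          for r in range(n) for c in range(n))
```

Because the product of two semicirculant matrices corresponds to the product of their generating series, the check reduces to the convolution identity used in the recursion.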
If we now consider the two infinite series formed from the coefficients a_i and b_i, then, at least formally, the relation B = A^{-1} is equivalent to (10.37); to see this, expand (10.37) and equate powers of x. To proceed we must use a theorem dealing with the reciprocals of certain power series.

THEOREM 10.1. Let the series Σ p_n x^n, with p_0 = 1 and p_n > 0, be convergent for |x| < 1, and let the p_n satisfy an appropriate convexity condition. Then the coefficients of the reciprocal series are, after the leading term, all of one sign, and their sum is bounded.

Proof. A proof of this result can be found in Hardy [122, p. 68].

To apply Theorem 10.1 to (10.40) we must show that the a_i defined by (10.33) satisfy its hypotheses; this is a relatively easy matter. The theorem then guarantees that b_n ≥ 0 and provides a bound on the sum of the b_n. From (10.34) and the fact that B = A^{-1} we then obtain a bound on the solution of the error recurrence.

The product midpoint method is based on a piecewise constant approximation, so it follows easily from the discussion of § 8.1 that δ(h, t) is at least O(h); consequently the bound just obtained guarantees, via (10.44), the convergence of the method. The bound is, however, somewhat pessimistic, as it does not demonstrate the observed O(h^{3/2}) convergence; a deeper analysis is necessary to show this, or to justify extrapolation. It is a fairly straightforward task, using arguments similar to those just given, to treat general kernels. To proceed beyond this point and study the various more accurate methods is complicated and involves a myriad of technical details. The question of the solution of Abel equations has been studied extensively and many authors have obtained limited special results, but no complete theory has yet been established, and our understanding of these matters is still quite incomplete. For references see the notes at the end of this chapter.

10.5. Solving Abel equations in the presence of experimental errors. As was remarked before, equations of the Abel type occur most commonly in connection with the analysis of certain experimental measurements, so we must consider the effect of fairly large perturbations on the right-hand side. Again, to get some simple insight, take the simple case k(t, s) = 1 and μ = 1/2. If in (10.8) we perturb g(t) by an amount η, then, by arguments similar to those used in § 10.4, the change in F_{n-1/2} will be of order η/√h. While the situation here is not as bad as for equations with smooth kernels (where the dependence is of order η/h), it is still sufficiently severe to warrant careful attention when η is significant.

A satisfactory procedure is first to smooth g(t) by some method such as least squares or Fourier approximation; product integration is then used to evaluate the integral, and finally the derivative is computed by using a stable numerical differentiation method. The question of how to approximate the derivative of a function in the presence of uncertainties has received some attention, and reasonably good methods are available. The conclusion is that in applications the form (10.3) is preferable to (10.4).

Another observation is sometimes helpful. It may not be f(t) itself which is of interest, but some physically meaningful quantities expressed as linear functionals of f. In the second example in § 2.4, f(r) represents the probability density for the radius of the spherical particles, and one value of obvious practical interest is the expected volume of the particles. If we assume that f(r) = 0 for r > 1, then the expected volume is given by (10.45).
The question of the solution of the simple Abel equation has been studied extensively. expressed as linear functionals of /. to get some simple insight. s) = l. To proceed beyond this point to study various more accurate methods is complicated and involves a myriad of technical details.44) guarantees the convergence of the method.1 that 8(h. Another observation is sometimes helpful.8) we perturb g(^) by an amount TJ. and p = 1. It follows therefore easily from the discussion of § 8. take the simple case of fi=5. so that (10. Many authors have obtained limited special results. For references see the notes at the end of this chapter. A satisfactory procedure is first to smooth g(f) by some methods such as least squares or Fourier approximation.

32) and substituting into (10. For Abel equations in the presence of errors. For the general Abel equation. [222]. if this is all that is needed.4 was suggested by Eggermont in [89]. . are discussed in [29]. [9]. [133]. [39]. the midpoint method is studied in Weiss and Anderssen [236]. [32]. Also. we note that the computation of V through (10. was carried out by Anderssen and co-workers. [10]. [143].4) with various ways of approximating g'(f) are given in [102].48) is completely well-posed. [191].55) for f(r) using (5. The final expression is We find that we need only a simple integration to get V. Eggermont [90] and Weiss [237] contain results on the trapezoidal methods. [138]. while Balasubramanian et al. their effect will tend to cancel out altogether. The error analysis used in § 10. using equation (10. Other questions. [142]. including the use of stable differentiation methods and the computation of functionals. A thorough study of the simple Abel equation. [44]. Stable numerical differentiation schemes are considered in [5] and [6]. [23] use a least-squares method. [47]. Solution methods for the simple Abel equation. Some of their results are summarized in [4]. [46]. [7]. [193].EQUATIONS OF THE ABEL TYPE 175 Solving (2. In fact. [206]. including higher order methods. interchange the order of integration and carry out the ^-integration explicitly. [54]. [154]. while Atkinson [13]. Baev and Glasko [15] suggest regularization. Notes on Chapter 10. if the errors in g(f) are random and without bias. it is completely unnecessary to solve the integral equation.45) gives Integration by parts then yields To simplify further. [137].


A quite simple method is analyzed in detail in § 11. Equation (11. if we can assume that the simple Lipschitz conditions hold for O^ssstssT and all u. Block-byblock methods are also suitable for this case and are discussed in § 11. has a unique continuously differentiable solution.1. In § 11.2 we discuss the class of linear multistep methods and show how the classical Dahlquist theory carries over to integrodifferential equations. v. In this chapter we consider in detail only one case.1) bear a great deal of similarity to the results for initial value problems in ordinary differential equations. then (11. the equation with initial condition The classical Volterra population equation discussed in Chapter 2 is of this form.3.2).Chapter 1 1 Integrodifferential Equations A variety of integrodifferential equations occur in practical applications. 177 . As indicated in Example 4. In §§11.4 we outline some of the numerical methods which have been studied.1) can be considered a generalization of the ordinary differential equation Consequently the numerical methods for (11.4. subject to the condition (11.1).1 to 11. w.

Section 11.4 considers the question of numerical stability.

11.1. A simple numerical method. For the development of numerical methods it is convenient to rewrite (11.1) as

We then integrate (11.7) from t_{n-1} to t_n to give

Replacing this integral and the integral in the definition of z(t) by a numerical integration rule yields a formula for computing the approximate solution F_n. For example, if we use the trapezoidal rule for both integrals, we get

where

Equations (11.10) and (11.11) can be solved successively for F_1, F_2, .... In the nonlinear case F_n is defined implicitly, but as usual, a unique solution exists for sufficiently small h.

Example 11.1. The equation

whose exact solution is f(t) = t, was solved using (11.10) and (11.11) with various values of h. At each step, the nonlinear equation was solved by successive substitution, starting with
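The scheme (11.10)-(11.11) can be sketched in a few lines. The test problem below is made up for illustration, since (11.12) is not reproduced in this copy: f'(t) = 1 - t^3/3 + int_0^t s f(s) ds, f(0) = 0, has the exact solution f(t) = t. The implicit equation at each step is solved by successive substitution, as in Example 11.1:

```python
def ide_trapezoidal(H, K, f0, T, h, tol=1e-12, maxit=100):
    """Trapezoidal method for  f'(t) = H(t, f(t), z(t)),
    z(t) = int_0^t K(t, s, f(s)) ds, f(0) = f0.  The implicit value F_n is
    found by successive substitution, started from an Euler predictor."""
    N = int(round(T / h))
    t = [i * h for i in range(N + 1)]
    F = [f0] * (N + 1)

    def z(m, Fm):
        # trapezoidal rule for z(t_m), using the trial value Fm at s = t_m
        if m == 0:
            return 0.0
        vals = [K(t[m], t[i], F[i]) for i in range(m)] + [K(t[m], t[m], Fm)]
        return h * (0.5 * (vals[0] + vals[-1]) + sum(vals[1:-1]))

    for n in range(1, N + 1):
        Hprev = H(t[n - 1], F[n - 1], z(n - 1, F[n - 1]))
        Fn = F[n - 1] + h * Hprev              # predictor
        for _ in range(maxit):
            Fnew = F[n - 1] + 0.5 * h * (Hprev + H(t[n], Fn, z(n, Fn)))
            if abs(Fnew - Fn) < tol:
                Fn = Fnew
                break
            Fn = Fnew
        F[n] = Fn
    return t, F
```

Halving h and comparing the errors at t = 1 exhibits the second order convergence reported in Table 11.1.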

and iterating until

The last value of F_n^{(i)} was then taken as a sufficiently close approximation to F_n. The results are shown in Table 11.1. The apparent order of convergence is two, which is not surprising because of the use of the trapezoidal method.

TABLE 11.1
Solution of (11.12) by the trapezoidal method (t = 0.2, 0.4, 0.6, 0.8, 1.0; h = 0.2, 0.1, 0.05).

For the error analysis of such methods, we need another basic result on sequences generated by sums.

THEOREM 11.1. Suppose that the sequence {ξ_n} satisfies

with A > 1, positive B and C, and ξ_0 = 0. Then

Proof. For n = 1 we have |ξ_1| ≤ C, so that (11.14) is satisfied. Assume that (11.14) is satisfied for i = 1, ..., n − 1. Then

Hence by induction the inequality holds for all n.

With this result, the error analysis for (11.10) is straightforward. We sketch it briefly, omitting some details.

THEOREM 11.2. Assume that conditions (11.2)-(11.4) are satisfied, and if in addition H and K are twice continuously differentiable with respect to all arguments,

10) and (11. We set as usual en=Fn—f(tn). we get .3)-(11.15) and (11. we have Substituting this inequality into (11. we have where 5n is the integration error Now.16) so that.9) from (11. and use the further shorthand notation where the double primes on the summation sign indicate that the first and last terms are to be halved. Proof.1) with order two. subtracting (11.17).10). Then.5). from (11.11) converges to the true solution of (11. applying the Lipschitz conditions (11.180 CHAPTER 11 then the approximate solution defined by (11.

We will not further pursue the question of convergence of this method since it is a special case of a general theory which we describe next. if we integrate (11.INTEGRODIFFERENTIAL EQUATIONS 181 For sufficiently small h this implies that with Applying Theorem 11. .1) from ^2 to ^ and replace the result by Simpson's rule we obtain with The weights wni are determined by whatever numerical integration rule we choose. For example. Higher order methods can be constructed along similar lines.1 it follows that It is now easily shown that so that For sufficiently smooth H and K the use of the trapezoidal rule implies that which completes the proof. Normally one would choose it to have the same order of accuracy as Simpson's rule.

A multistep method of the form (11. Because the theory we are about to describe closely parallels the corresponding theory for ordinary differential equations we use terminology and notation customary in this field. then (11.26)-(11. with the Dahlquist stability theory as one of the major results..10) becomes while (11. We first introduce two polynomials p(x) and tr(x) defined by then use the following definitions.22) reduces to Both of these are simple examples of the so-called multistep methods. (11.26)-(11.2. and to completely determine the solution we then need r + k starting values. In this instance the approximate methods described so far reduce to well-known methods for solving ordinary differential equations.25) is an instance of the Milne-Simpson method.1. When H is independent of z. As a general form for the multistep methods for integrodifferential equations we will take where These equations will generally be valid only for n = r.1) reduces to an ordinary differential equation. then (11.28) satisfies . A great deal of work has been done in connection with multistep methods. DEFINITION 11. If these are available. linear multistep methods. it is possible to extend much of this theory to integrodifferential equations... As we now indicate.28) can be solved for Fn+k at each step (provided that h is sufficiently small).24) is of the Adams-Moulton type while (11..182 CHAPTER 11 11. Thus (11. r + 1.

1. THEOREM 11. A multistep method satisfies the stability condition if all of the roots of the polynomial p(x) lie in or on the circle |x| = 1 in the complex plane.26) satisfies the consistency condition of Definition 11. and if the starting errors tend to zero as h —» 0. Proof. The concept of consistency. If we apply this idea to the integrodifferential equation. Using these definitions we can then extend the classical Dahlquist theorem for ordinary differential equations to integrodifferential equations. In spite of the different appearance. that is.3. then the method is convergent. For integrodifferential equations there are a few additional details which will not be reproduced here. for example in [166]. If a multistep method of the form (11. The reader can easily verify that the two methods above satisfy the consistency and stability conditions. From Definition 7. The arguments follow closely those for differential equations found in most books on the numerical solution of differential equations. DEFINITION 11.2. the two concepts are based on the same idea.2.1 and the stability condition of Definition 11. Explicit arguments can be found in several references on this topic. introduced in Chapter 7. is closely related to the consistency condition of Definition 11. we are led to defining the consistency error as . and roots of modulus one are simple.3 we see that consistency is defined by subtracting the original equation from that obtained by using a numerical integration rule in place of the integral.INTEGRODIFFERENTIAL EQUATIONS 183 the consistency conditions if (iii) the weights wni are uniformly bounded. and are such that for every continuous function f ( t ) .
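The stability condition of Definition 11.2 can be checked mechanically. The sketch below is an addition (the helper names and the Durand-Kerner root finder are not from the text): it tests that all roots of ρ(x) lie in the closed unit disk and that roots of modulus one are simple. The Adams-Moulton ρ(x) = x² − x and the Milne-Simpson ρ(x) = x² − 1 both pass, while a polynomial with a root outside the unit circle fails:

```python
def poly_roots(coeffs, iters=200):
    """All roots of a polynomial by Durand-Kerner iteration.
    coeffs[0] is the leading coefficient; the polynomial is made monic first."""
    n = len(coeffs) - 1
    c = [a / coeffs[0] for a in coeffs]
    roots = [complex(0.4, 0.9) ** k for k in range(n)]   # standard starting points
    for _ in range(iters):
        for i in range(n):
            val = 0j                       # Horner evaluation at roots[i]
            for a in c:
                val = val * roots[i] + a
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= val / denom
    return roots

def satisfies_root_condition(rho, tol=1e-6):
    """Definition 11.2: all roots of rho(x) in |x| <= 1, and roots of
    modulus one must be simple."""
    roots = poly_roots(rho)
    for i, r in enumerate(roots):
        if abs(r) > 1.0 + tol:
            return False
        if abs(abs(r) - 1.0) <= tol:
            for j, s in enumerate(roots):
                if j != i and abs(r - s) <= 1e-4:
                    return False
    return True
```

Coefficients are listed from the highest power down, so ρ(x) = x² − 1 is [1, 0, −1].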

k) in (11. that is. Condition (11. the order of consistency and the order of convergence are essentially the same. with some additional complications.26) can be defined via the consistency error 8(h.4.32) are sufficient to make these terms vanish as h—»0. If we say that the method is consistent of order p. For details. we have that if (11. Then there exist constants ct and c2. We then want to show that consistency in the sense of Definition 11.3.35) shows that the first two consistency conditions (11. such that Proof.2 and is consistent. As expected. Let 17(h) be the absolute sum of the starting errors. k+jj.33) are satisfied then The order of consistency of the multistep (11.31) and (11.31M11.184 CHAPTER 11 for some t in [k. .34) are identically zero for all t. so that the equation can be rewritten as A Taylor expansion of the first group of terms in (11. Thus.1 implies that The terms in the last curly bracket in (11. Again the arguments are similar to the differential equations case. independent of n and h.26) satisfies the stability condition of Definition 11. consult the reference cited in Theorem 11. THEOREM 11.33) implies that the second group of terms will also tend to zero in the limit. Assume that the multistep method (11.34).

if for some p > l we integrate (11. Its order of convergence is three.INTEGRODIFFERENTIAL EQUATIONS 185 However. We can choose This method is a combination of the third order Adams-Moulton method with the third order Gregory integration formula. The block-by-block methods for integral equations can be modified to work for integrodifferential equations. We simply combine a multistep method for ordinary differential equations. 11. It requires five starting values F 0 .32) and the stability condition refer only to the discretization of the derivative and are identical with the conditions imposed on multistep methods for ordinary differential equations. even though the consistency condition introduced here is equivalent to the one encountered before.. To construct block-by-block methods we start from the integrated form of (11.33) assures that the numerical integration used to replace the integral is convergent.2. This works because the consistency conditions (11. The stability condition of Definition 11. with an appropriate numerical integration rule.4.. A method which does not satisfy the stability condition of Definition 11. On the other hand.31) and (11.. Block-by-block methods. then Block-by-block methods are then constructed by appropriate combinations of numerical integration and interpolation..2 is nonconvergent.2 has little to do with the question of numerical stability discussed in § 7. may still be numerically unstable. This yields a class of methods which can be used either to provide starting values for the multistep methods or simply as techniques for solving the whole problem. the same cannot be said for the stability condition.1). We again present the method . We will take a brief look at the numerical stability problem for integrodifferential equations in § 11.1) from ^ to k+p. although convergent. so the question of numerical stability is irrelevant. In particular. a method which does satisfy the stability condition.1). Example 11. 
such as an Adams-Moulton formula.3. The consistency condition (11. The results developed here make it easy to construct multistep methods for equations of the form (11.4. F4.

obtaining a block of two values at each step.4. F 4 .2. F3. as is easily proved.. are then computed by applying Simpson's rule to (11.186 CHAPTER 11 based on Simpson's rule and quadratic interpolation as an example. seems to be preferred by writers on integrodifferential equations and we will follow this tradition. Here tn+l/2 .1}...38) and replacing the integral by an appropriate Simpson's rule we have The Z. .tn + h/2 and {w(} is the set of Simpson's rule weights {1. It is also easy to see that one can generalize the above idea to construct methods of arbitrarily high order. Using p = l and p = 2 in (11. This method has fourth order accuracy.40)-(11. Starting with F0 = /0 we can solve these for Fl5 F2. A suitable equation for the analysis is .4. completely solvable case. To investigate the numerical stability of a method we have two alternatives: we can carry out an asymptotic analysis to determine the behavior of the dominant error term...4.44) constitute an implicit set of equations for F2n+i and F2n+2.4.. or we can apply the method to a simple. The latter approach.4. being somewhat simpler..39). Numerical stability.2. 11. The extraneous values F2n+1/2 and Z2n+i/2 are approximated by quadratic interpolation as Equations (11.

with λ > 0, μ > 0. The solution of this equation is easily seen to be of the form

so that if λ > 0, μ > 0, f(t) → 0 as t → ∞. A numerical method will then be called stable if, when applied to (11.45) with positive λ, μ, and fixed h, it yields an approximate solution F_n for which

The values of h for which (11.47) holds are the stability region. If (11.47) holds for every h > 0, the method is A-stable.

Let us apply this criterion to the trapezoidal method (11.10) and (11.11). For equation (11.46), the approximating scheme gives

where the double prime denotes that the first and last terms in the sum are halved. The solution to this equation becomes more obvious if we subtract from it the same equation with n replaced by n − 1. Doing this, cancelling terms, and rearranging, we get

Therefore the solution to (11.48) is

where ρ_1 and ρ_2 are the roots of the characteristic polynomial

Solving this quadratic equation gives the roots ρ_1 and ρ_2 as

Since both roots have magnitude less than one for any positive h, λ, and μ, we see that for every h > 0,

The method is, therefore, A-stable.
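The A-stability conclusion can be observed numerically. Since the display of (11.45) is not reproduced in this copy, the sketch below assumes a standard test equation of this type, f'(t) = −λ f(t) − μ ∫₀ᵗ f(s) ds with f(0) = 1, whose true solution decays for λ, μ > 0. The trapezoidal scheme is linear here, so each step can be solved in closed form, and the computed solution decays even for very large stepsizes:

```python
def trapezoidal_test_equation(lam, mu, h, nsteps):
    """Trapezoidal scheme for the test equation
        f'(t) = -lam*f(t) - mu * int_0^t f(s) ds,   f(0) = 1.
    With S the trapezoidal approximation of the integral up to t_{n-1},
    the implicit step rearranges to
        F_n (1 + a) = F_{n-1} (1 - a) - h*mu*S,  a = h*lam/2 + h^2*mu/4."""
    F = [1.0]
    S = 0.0
    for _ in range(nsteps):
        a = 0.5 * h * lam + 0.25 * h * h * mu
        Fn = ((1.0 - a) * F[-1] - h * mu * S) / (1.0 + a)
        S += 0.5 * h * (F[-1] + Fn)          # extend the trapezoidal sum
        F.append(Fn)
    return F
```

With λ = μ = 1 the iterates tend to zero both for a moderate stepsize and for h = 5, consistent with the claim that the stability region is the whole half-line h > 0.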

25). there is little reason for pursuing this line of thought.188 CHAPTER 11 A similar process can be carried out for the other multistep methods and regions of stability can be computed. It is known that this method is numerically unstable. the corresponding multistep method reduces to a method for ordinary differential equations. The discretization and design of algorithms is quite elementary. we arrive at the . (11. For example.. m -1. 11. we get where If we now introduce variables z{ = /(l). Actually. the general nonlinear case with given initial conditions Integrating (11.5. The general equivalence between integrodifferential equations and systems of Volterra equations already discussed in earlier chapters provides an immediate way for the construction of numerical algorithms. Therefore..22).1.1) reduces to a differential equation. i = 0..50) once. Other types of integrodifferential and functional equations.23) then when H is independent of Z. we can characterize method (11. In principle it is possible to extend the techniques discussed to other types of integrodifferential equations. for example.. but the analysis tends to get involved. One easy way of obtaining a characterization of the stability of multistep methods is to note that if a method is to be considered stable it ought certainly to be so in the special case when (11. The reduction of integrodifferential equations of arbitrarily high order to systems of Volterra equations is an elementary matter. and much is known about the stability in that case. Consider.22) and (11.23) as numerically unstable without having to carry out a lengthy analysis. if we consider the method defined by (11. In that case. the equations reduce to the well-known Milne-Simpson method (11.

3 and 11. unless f(0-) = H(0.1) is a special case of (11.56). Equation (11. Notes on Chapter 11.50) and develop special methods analogous to those for differential equations of higher order. with little to be gained by it. So is the simple delay differential equation where a is some fixed positive number. generates a discontinuous second derivative at s = a.1). (11. with a certain amount of care.INTEGRODIFFERENTIAL EQUATIONS 189 system Numerically this can be solved with any of the techniques described in Chapter 7. This points out that the set of necessary initial conditions can be more complicated than in (11. and so on. we need to know f(t) for — a ^t <0. In many applications there arise equations which are somewhat different from (11. This. if we want to solve (11. Other related questions are studied by Mocarsky [192] and McKee [182]. However./(-a)). It is of course possible to start directly from (11. the analysis for this appears to be difficult.1) and a large number of other equations can be considered special instances of the general Volterra functional differential equation t maps the function / defined on -<x><s*^t into the space of real numbers.1). Obviously. The resulting numerical methods are much like the methods we have presented here. Nevertheless.56) some complications may be encountered.57). in turn. Piecewise polynomial and spline approximations are described in [45] and [122]. including proof of Theorems 11. the solution has a discontinuous derivative at t = 0. Furthermore. . For instance.4 is given in Linz [166]. Since numerical methods are generally designed on the assumption that the solution is very smooth. it is possible to extend known techniques for Volterra integrodifferential equations to general functional differential equations. In extending our techniques to (11. this may cause trouble. A discussion of the theory of multistep methods. again referring to (11.57) for 13*0.

[124]. [213]. The question of numerical stability for integrodifferential equations is treated in Brunner and Lambert [48] and Matthys [180]. Ford [103] and Neta [194] discuss some problems and their numerical solution. Equations with singular kernels are the subject of study in [177]. [148]. [240]. Some representative papers are [72]. . There is an extensive literature on the numerical solution of functional differential equations. Chang and Day [64] study an equation somewhat different from that considered here. but this has not been systematically studied and is presently not well understood.190 CHAPTER 11 while [100] and [210] give some other methods that require no starting values. [65]. [201]. [184]. Partial integrodifferential equations of Volterra type are also occasionally encountered in practice. [227]. [149]. [144].

Chapter 12 Some Computer Programs

While there exist many methods for the solution of Volterra equations, there are very few published computer programs. This is not uncommon for numerical analysis in general: theoretical or descriptive papers greatly outnumber those giving actual programs. Apart from the fact that programming is on the whole not considered as creative as theorem proving, there may, however, be other reasons for most authors' reluctance to include programs in their publications. One is that the methods are constructed for the simple, prototype cases; applications often involve equations somewhat different from these, so that the programs can rarely be used without modification. The designer consequently has to choose between sophisticated and highly efficient programs for standard equations, and simple but inefficient algorithms that are easily modifiable and flexible. Nevertheless, it is reasonable to provide at least something which may occasionally be helpful, even though one cannot hope to please everyone with one's decisions.

It is the spirit of these observations which guided the decisions for the programs in this chapter. Our major decision was to use essentially the simplest nontrivial algorithms, namely, the trapezoidal method for equations of the second kind and the midpoint method for equations of the first kind. One reason for this selection is that the methods are well understood and have good stability properties. Another advantage is that their simplicity makes modifications reasonably easy. They should deliver acceptable results in a variety of settings. Their main disadvantage is their poor accuracy when compared to higher order methods. This may not be too serious: highly accurate results are rarely needed and, if they are, extrapolation techniques can sometimes be used to improve the results with a minimal effort. For equations of the second kind the nonlinear problem is treated. To make the programs useful in many circumstances, the implementations were done for systems of equations rather than single equations.

10). Two are for nonlinear systems of the second kind. The trapezoidal method for systems of the second kind. when written for such a system.1. An attempt was made to keep the programs as simple as possible. would probably have been a popular choice. Then. so that an experienced programmer should have little difficulty in translating them to another language. four programs are provided. Such a system can be solved in various ways. To cover most of the standard types discussed in this book. Here we will use Newton's method. The trapezoidal approximation method (7. Since program modifications are to be expected.192 CHAPTER 12 because of theoretical difficulties. Pascal was chosen as a compromise between ease of programming and availability. The programs provided here are aimed at the casual user. g and K are vectors with m components. 12.design emphasizes simplicity and generality over computational efficiency.3). as given in equations (1. The last of these can be used for Abel-type equations. they can be used to solve integrodifferential equations as well. . We consider here a system of m equations of the form where f. Since all programs are for systems. because of its wide usage. becomes where Fn is an m-component vector In this notation. Unfortunately. the programs are based on the midpoint and the product midpoint methods.2) is a nonlinear system to be solved in a stepwise fashion for Fl5 F2. equation (12. The most debatable decision was the selection of the programming language. For the repeated computation of accurate solutions of large systems they are relatively inefficient. FORTRAN.9) and (1. it is difficult to program cleanly in this language. (Fn)t is the approximation to £(0. With F0 = g(f0). using the regular trapezoidal and the product trapezoidal methods. a language with a better structure is preferable. For equations of the first kind. only linear equations of the first kind are considered.

and so on. m] of real. This value will be used as the first guess. and J is the Jacobian matrix with elements The iterations (12. . var t : array [0 .in (12. The calling program must contain the following declarations: const m = (size of system). The vector F[n] represents Fn.6) will be carried out until where e is some assigned tolerance.3).2) as where then the Newton iterates are Here F*° stands for the value of Fn at the ith iteration.SOME COMPUTER PROGRAMS 193 If we write (12. for the next step. NX] of real. m. The results produced by VOLT2 will be stored in t and F. fx in f[l]. F: array [0 . with t0 in f[0].l . m] of real. consequently F[n][i] is the value of (F B). JV(of type integer) = total number of steps taken. type MATRIX = array [1 . The procedure VOLT2 is called by VOLT2(h. . . The last iterate will be taken as Fn. F^+j. VECTOR = array [1 . . . NX] of VECTOR where NX is an integer constant equal to or larger than N. This algorithm is implemented in the following Pascal procedure VOLT2.N) where h(of type real) = stepsize. .
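A scalar sketch of what VOLT2 does (the book's implementation is in Pascal and for systems; this Python illustration, and its test problem g(t) = 1 − t, K(t, s, u) = u² with exact solution f(t) ≡ 1, are assumptions made here for checking). The implicit trapezoidal equation at each step is solved by Newton's method, with the user supplying the partial derivative of K with respect to u, as the procedure's KDIFF does:

```python
def volt2_newton(g, K, Kdiff, T, h, tol=1e-12, maxit=25):
    """Trapezoidal method for  f(t) = g(t) + int_0^t K(t, s, f(s)) ds.
    At each step the scalar implicit equation
        F_n = g(t_n) + h * [ lag + K(t_n, t_n, F_n)/2 ]
    is solved by Newton's method; Kdiff is dK/du."""
    N = int(round(T / h))
    t = [i * h for i in range(N + 1)]
    F = [g(0.0)]
    for n in range(1, N + 1):
        # lag term: contribution of the already-computed values F_0 .. F_{n-1}
        lag = 0.5 * K(t[n], t[0], F[0]) + sum(K(t[n], t[i], F[i])
                                              for i in range(1, n))
        u = F[-1]                       # first guess: the previous value
        for _ in range(maxit):
            phi = u - g(t[n]) - h * (lag + 0.5 * K(t[n], t[n], u))
            dphi = 1.0 - 0.5 * h * Kdiff(t[n], t[n], u)
            du = phi / dphi
            u -= du
            if abs(du) < tol:
                break
        F.append(u)
    return t, F
```

The systems version replaces the scalar Newton correction by a linear solve with the Jacobian matrix, as VOLT2 does with its Gaussian elimination routine SOLVE.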

.2. The listing for VOLT2 is given in Fig.05 in the interval [0. and KDIFF shown in Fig. s real. K and KDIFF.1.1.5] define and use const m = 2 VOLT2(0. K.05. For completeness. we include a simple Gaussian elimination algorithm SOLVE in the code below. To solve this with stepsize h = 0. and u of type VECTOR). Code for functions g and K. so that (with i. f.194 CHAPTER 12 The user must also provide three real functions g. Example 12. / integer.1. The procedure VOLT2 requires a procedure for the solution of a linear algebraic system. 12. FIG. 12. 12. The system has exact solution f^t) = 1 and /2(f) = f.10) with the code for g.0.

Taking exactly the . 12.2.SOME COMPUTER PROGRAMS 195 FIG.2. 12. Pascal procedure VOLT2. We next consider the system using the product trapezoidal method described in § 8.3. The product trapezoidal method for a system of the second kind.

1. With i. . 12. Fig. The code utilizes the procedure SOLVE given in the lasting of VOLT2. j integer and h real. Flo.3.196 CHAPTER 12 same approach and using the notation in § 12. we obtain the scheme where and J(u) has components This method is implemented in the procedure VOLT2PROD. Its usage is identical to that of VOLT2.3.2. Pascal procedure VOLT2PROD. 12. except that two additional real functions ALPHA and BETA have to be supplied. these are to be defined so that A listing of VOLT2PROD is given in Fig. 12.

The results produced by VOLT1 are stored in t and F. a real function k must be provided. s) has the value of ky(f.16). For equations of the first kind we take the system If we adapt the midpoint method described in Chapter 9 to systems. we are led to the approximating equations where Kn] is an m x m matrix The procedure VOLT1 computes the solution of (12.17) for n = 1. However.3. f[0] contains tQ. The usage is essentially idf ntical with that of VOLT2. the results are stored in F such that F[0] contains the vector F1/2. s. In the array t. FIG. f. . N. such that. j. Pascal procedure VOLT1. in place of K and KDEFF.4. . F[l] the vector F3/2. s) in (12. The midpoint method for systems of the first kind. k(i. 2. . .SOME COMPUTER PROGRAMS 197 12. and so on. f[l] contains fls and so on. / integer. real. with i. 12. and t. However. .
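A scalar sketch of the midpoint scheme that VOLT1 implements for systems (Python rather than the book's Pascal; the test problem K(t, s) = 1 + t − s, g(t) = t + t²/2, with exact solution f ≡ 1, is an assumption made here for checking — the midpoint rule reproduces it exactly because the integrand is linear in s):

```python
def volt1_midpoint(g, K, T, h):
    """Midpoint method for the first-kind equation
        int_0^t K(t, s) f(s) ds = g(t).
    The approximations F_{1/2}, F_{3/2}, ... at the midpoints are obtained
    successively from  h * sum_{i=1}^{n} K(t_n, t_{i-1/2}) F_{i-1/2} = g(t_n)."""
    N = int(round(T / h))
    F = []                              # F[i] approximates f(t_{i+1/2})
    for n in range(1, N + 1):
        tn = n * h
        acc = sum(K(tn, (i + 0.5) * h) * F[i] for i in range(n - 1))
        F.append((g(tn) / h - acc) / K(tn, (n - 0.5) * h))
    return F
```

As in the systems version, the scalar division becomes the solution of an m × m linear system when K is matrix valued.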

As the final case we take If the product midpoint method (10. only minor changes are required to convert VOLT1 into the product midpoint procedure VOLT1PROD. 12. 12. The code utilizes the procedure SOLVE given in the listing of VOLT2. has to be provided so that FIG. with integer parameters n. we get the scheme where The rest of the notation is the same as in § 12.198 CHAPTER 12 A listing of VOLTl is given in Fig. Therefore. An additional function w.8) is adapted to a system of equations. 12. The product midpoint method for a system of the first land.4. j and real parameter h.4. Pascal procedure VOLT1PROD.3. The usage of VOLT1PROD is identical to VOLT1.5. Fig.2. 12. .

SOME COMPUTER PROGRAMS 199 A listing of VOLT1PROD is given in Fig. 12. [85] contains some ALGOL 68 modules. 12. but unfortunately. Fig. . Notes on Chapter 12. including Algol procedures by Pouzet [212] and Rumyantsev [217]. it has not been formally published and consequently is not readily available. The article by Miller [188] mentions several programs. A recently published paper by Delves et al.5. A more accessible reference is the Fortran program for a linear equation of the second kind given in Churchhouse [67]. These modules are aimed primarily at the solution of nonstandard equations where the standard methods fail. The code utilizes the procedure SOLVE given in the listing of VOLT2. essentially constituting a higher level language for the solution of integral equations.2. A quite sophisticated Fortran program is given in Logan's thesis [172].


To elaborate on this. Consider the system This set of equations differs in one respect from the various examples throughout the book—we do not know its exact solution. When this happens. Nevertheless. the assumptions of certain theorems may be violated. 13. These problems are still somewhat simpler than what one can expect to get in practice. various questions arise which are not immediately answered by the theory. the insight gained from the theory allows us to obtain approximate results with a high degree of confidence. or the conclusions inadequate. It tells us under what conditions the methods will work and gives some insight into their relative merits. the general ideas incorporated into the theory will have to be modified to deal with specific questions. However. Often these modifications cannot be rigorously defended. We use it only to study the question of the accuracy of the solution. For our first case study we take a more or less arbitrary example with no relation to any particular practical application. in practice. When studying algorithms we normally use simple examples with known results.1.Chapter 13 Case Studies The study of numerical methods provides a general framework for understanding the behavior of approximating algorithms. This is 201 . but they do have some realistic features and raise questions which are common. The problem at hand may be somewhat different from the ones used in the analysis. Estimating errors in the approximation. we study three problems with a view towards dealing with some of these practical questions.

done to demonstrate that the expected behavior of the method is indeed realized (or to discover features not brought to light by the theory). In actual practice the answers are unknown and the problem is to compute an approximation, with some assurance of the accuracy of the results.

The discussion in Chapter 7, in particular (7.19), raises the possibility of computing explicit and rigorous error bounds. Unfortunately, in most actual situations, this turns out to be impractical. To use (7.19), it is necessary to bound the consistency error δ(h, f). Since this involves the unknown solution, it is necessary to determine some of its properties, such as bounds on its various derivatives. This may prove impossible or at least impractical. (The reader might try to compute bounds for f and its first two derivatives in the system (13.1), (13.2).) A second problem with (7.19) is that it tends to give unrealistically large bounds for the error. The situation is quite common in numerical analysis: rigorous error bounds are seldom useful as a practical means for determining the accuracy of the results. If such bounds were usable we could obtain a guarantee that the error would be within a certain tolerance. At best, we must be satisfied with rather crude answers. Rigorous error bounds are not useful even for such simple cases as the system (13.1), (13.2), which is certainly well behaved.

In Table 13.1 we give the computed results for f_1(t) using the procedure VOLT2 with stepsizes h = 0.2, 0.1, and 0.05. An examination of the entries will lead us to conclude that the answers for h = 0.1 are accurate to about 0.001. The method used has order of convergence two for sufficiently smooth kernels and solutions.

TABLE 13.1
Approximation to f_1(t) in (13.1), (13.2) (t = 0.2, 0.4, 0.6, 0.8, 1.0; h = 0.2, 0.1, 0.05).
Anyone experienced in numerical computation will have a quite pragmatical way of dealing with the difficulty. this turns out to be impractical.6 0. in particular (7.80150 2143 . The method used has order of convergence two for sufficiently smooth kernels and solutions. it is necessary to bound the consistency error 8(h. until the answers have "settled down" to the desired accuracy.1 Approximation to /t(t) in (13. (13. One simply computes several solutions.
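The stepsize-halving strategy is easy to try out. The sketch below is only an illustration in the spirit of VOLT2, not the book's procedure itself: it applies the trapezoidal method of Chapter 7 to a stand-in equation with the known solution f(t) = exp(t), so that the "settling down" of the computed values can be checked against the truth. The function name and test equation are our own.

```python
import math

def trap_volterra2(g, k, T, h):
    """Trapezoidal method for f(t) = g(t) + integral_0^t k(t,s) f(s) ds.

    At each step the implicit relation is solved directly, which is
    possible here because the equation is linear in f.
    """
    n = round(T / h)
    t = [i * h for i in range(n + 1)]
    F = [g(0.0)]                        # F_0 = g(0)
    for i in range(1, n + 1):
        acc = 0.5 * k(t[i], t[0]) * F[0]
        acc += sum(k(t[i], t[j]) * F[j] for j in range(1, i))
        # F_i = g(t_i) + h*(acc + 0.5*k(t_i,t_i)*F_i), solved for F_i
        F.append((g(t[i]) + h * acc) / (1.0 - 0.5 * h * k(t[i], t[i])))
    return F[-1]

# f(t) = 1 + integral_0^t f(s) ds has the exact solution f(t) = exp(t);
# halving h until the answers settle down mimics the procedure in the text
results = {h: trap_volterra2(lambda t: 1.0, lambda t, s: 1.0, 1.0, h)
           for h in (0.2, 0.1, 0.05)}
```

For a second order method the successive errors should shrink by about a factor of four, which is exactly the pattern one looks for when deciding that the answers have settled down.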

Therefore, we can expect second order convergence. The trapezoidal method has a repetition factor one and consequently there is an asymptotic expansion for the discretization error. If we write Y(t, h) for the approximation to f(t) with stepsize h, then we expect that

(13.3)    Y(t, h) = f(t) + φ(t)h² + O(h³),

where φ(t) does not depend on h, and

(13.4)    Y(t, h) − Y(t, h/2) ≈ (3/4)φ(t)h².

Therefore, the error in the results computed with h/2 is approximately

(13.5)    f(t) − Y(t, h/2) ≈ −(1/3)[Y(t, h) − Y(t, h/2)].

In other words, the error of the best results is about one-third of the tabular difference. For Table 13.1 this implies that the numbers in the column h = 0.1 have an apparent error slightly larger than 0.001. The maximum difference in the results of columns h = 0.1 and h = 0.05 is about 0.00120, so that the error in column h = 0.05 can be expected to be about one-third of this, roughly 0.0004.

The arguments leading to (13.5) assume that h is so small that higher order terms in the asymptotic expansion of the error can be neglected. What "small" means in this context is very hard to say. If the stage has not been reached where higher orders of h are negligible, our conclusions may be incorrect. To reduce the chance for a mistaken conclusion, we can compute the results for yet another stepsize. With three answers available at each point, we can use Aitken's method to estimate the order of convergence by

(13.6)    p ≈ log₂ |[Y(t, h) − Y(t, h/2)] / [Y(t, h/2) − Y(t, h/4)]|.

If (13.6) yields a number close to two, we can be confident that (13.3) holds reasonably well. In that case, the results can be further improved by the extrapolation

    YE(t) = Y(t, h/4) + (1/3)[Y(t, h/4) − Y(t, h/2)].

A complete set of results for these computations is given in Table 13.2. The estimated order of convergence in Table 13.2 is very close to the expected value of two.
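Both the order estimate (13.6) and the extrapolation are one-liners to program. The helper names below are ours, and the demonstration uses an artificial sequence with a pure h² error so that the expected behavior is easy to verify:

```python
import math

def estimated_order(y_h, y_h2, y_h4):
    """Aitken-style estimate of the order p from results at h, h/2, h/4."""
    return math.log2(abs((y_h - y_h2) / (y_h2 - y_h4)))

def extrapolate(y_h, y_h2, order=2):
    """Richardson extrapolation, assuming the error behaves like C*h**order."""
    r = 2 ** order
    return (r * y_h2 - y_h) / (r - 1)

# artificial data with limit 1.0 and a pure h**2 error term
y = {h: 1.0 + h * h for h in (0.2, 0.1, 0.05)}
p = estimated_order(y[0.2], y[0.1], y[0.05])   # should be close to 2
y_extrap = extrapolate(y[0.1], y[0.05])        # should be close to 1.0
```

When p comes out near two, the extrapolated column plays the role of YE in Table 13.2; when it does not, the extrapolation should be distrusted.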

[Table 13.2. Results for f₁(t) in (13.1), (13.2), with estimated order of convergence p and extrapolated values YE, for t = 0.2, 0.4, 0.6, 0.8, 1.0 and h = 0.2, 0.1, 0.05.]

We can therefore with reasonable confidence claim that the column YE is an approximation to f₁(t) with an error not much larger than 1 × 10⁻⁴. Actually, since the stepsize is small enough to permit extrapolation, YE should be even more accurate.

13.2. An example from polymer rheology. For the second case study we consider the equation (13.8), which models the elongation of a filament of a certain polyethylene which is stretched on the time interval −∞ < t ≤ 0, then released and allowed to undergo elastic recovery for t > 0. While this equation is of the form (11.1), the methods described in Chapter 11 may have to be modified to take into account the form and behavior of the kernel k(t). In many cases, the kernel can be written as

(13.9)    k(t) = a₁e^(−t/τ₁) + a₂e^(−t/τ₂) + ⋯ + aₘe^(−t/τₘ).

This suggests an attack on (13.8) through a conversion to a system of differential equations. Using the ideas described in Chapter 1, with some modifications to take care of the nonlinearity, we introduce the new variables (13.10) and (13.11); these lead to the differential equations (13.12) and (13.13).

Substituting (13.12) and (13.13) into (13.8) gives (13.14). Equations (13.12)–(13.14) constitute a system of 2m + 1 ordinary differential equations with given initial conditions. Since f is the elongation of the filament, its value at t = 0 must be given as part of the problem. The difficulty encountered in using (13.12)–(13.14) comes from the actual values in (13.9). In a typical situation, the values of aᵢ and τᵢ range over many orders of magnitude. For example, we might have a₁ = 10⁻³, τ₁ = 10³ and aₘ = 10⁹, τₘ = 10⁻⁴. This makes the kernel very peaked near t = 0. The resulting differential equations are said to be stiff. Great care must be taken in solving stiff differential equations, although some quite efficient programs are available.

A second approach, necessary when the kernel does not have a simple form like (13.9), is to convert (13.8) into a system of integral equations. If we use (13.15), then we obtain (13.16). The presence of f(t) in the integral of (13.16) makes the equation of nonstandard form, and the methods we have described are not immediately applicable. A somewhat different arrangement is more satisfactory.

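The reduction behind (13.12)–(13.14) rests on the observation that each exponential term in (13.9) turns its piece of the convolution integral into a linear differential equation. The sketch below illustrates this for a single term, with the nonlinearity of (13.8) left out and with notation and function names of our own choosing; backward Euler is used because, with widely spread τᵢ, an implicit method is the safe choice for the resulting stiff system.

```python
import math

# For k(u) = a*exp(-u/tau), the term y(t) = integral_0^t a*exp(-(t-s)/tau) f(s) ds
# satisfies y'(t) = -y(t)/tau + a*f(t) with y(0) = 0, so no history of f needs
# to be stored; stiffness arises when several tau_i differ by many orders
# of magnitude.

def conv_term_backward_euler(f, a, tau, T, h):
    """Backward Euler for y' = -y/tau + a*f(t), y(0) = 0 (A-stable)."""
    n = round(T / h)
    y = 0.0
    for i in range(1, n + 1):
        t = i * h
        # (y_new - y)/h = -y_new/tau + a*f(t), solved for y_new
        y = (y + h * a * f(t)) / (1.0 + h / tau)
    return y

# check against the exact convolution for f = 1, a = 1, tau = 1:
# integral_0^t exp(-(t-s)) ds = 1 - exp(-t)
approx = conv_term_backward_euler(lambda t: 1.0, 1.0, 1.0, 1.0, 0.001)
exact = 1.0 - math.exp(-1.0)
```

Because backward Euler is A-stable, the stepsize can be chosen for accuracy rather than stability, which is the whole point when τ₁ and τₘ differ by seven orders of magnitude.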
We introduce the new unknowns (13.17), as well as the corresponding α and β in (8.29) and (8.30), then integrate (13.8) to give (13.18). The system composed of (13.18), (13.19), and (13.20) is now in standard form. Unfortunately, none of our programs is quite suitable for this problem. VOLT2 would not work well because of the presence of the rapidly varying kernel k(t − s) in (13.19). VOLT2PROD cannot be used because it was written under the assumption that the same factor p(t, s) occurs in all equations; here k(t − s) is missing in (13.20). What we would need to do is to index the function p(s, t) in (12.10), then make the appropriate changes in VOLT2PROD. It would not be particularly difficult to modify VOLT2PROD to take care of this case. This illustrates the point that, when dealing with integral equations, "canned" programs are rarely useful and may have to be modified to suit the particular circumstances.

In typical cases, a further complication may arise. Because of the peaked kernel, f(t) changes rapidly near t = 0, so that an algorithm with variable stepsizes is needed. Actually, for (13.8) an attack from first principles is warranted. We introduce the meshpoints 0 = t₀ < t₁ < t₂ < ⋯, but now allow for unequal stepsizes by dropping the requirement that tᵢ − tᵢ₋₁ be constant. We then integrate (13.8) from tₙ₋₁ to tₙ using the trapezoidal approximation (13.21), where the term in curly brackets is given by (13.22). The integral in (13.22) is now approximated by product integration, using a piecewise linear approximation to the function in the curly brackets; this gives (13.23).
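A variable-stepsize scheme of the kind just described can be sketched in its simplest setting: the trapezoidal method rewritten for an arbitrary mesh. This is our own illustration, not the modified VOLT2PROD, and it omits the product-integration weights; the graded mesh crowds the points near t = 0, where f changes rapidly.

```python
import math

def trap_volterra2_mesh(g, k, t):
    """Trapezoidal method for f(t) = g(t) + integral_0^t k(t,s) f(s) ds
    on an arbitrary mesh t[0] < t[1] < ... (stepsizes need not be equal)."""
    F = [g(t[0])]
    for i in range(1, len(t)):
        acc = 0.0
        for j in range(1, i):
            hj = t[j] - t[j - 1]
            acc += 0.5 * hj * (k(t[i], t[j - 1]) * F[j - 1] + k(t[i], t[j]) * F[j])
        hi = t[i] - t[i - 1]
        acc += 0.5 * hi * k(t[i], t[i - 1]) * F[i - 1]
        # the last panel is implicit in F_i; solve directly (linear equation)
        F.append((g(t[i]) + acc) / (1.0 - 0.5 * hi * k(t[i], t[i])))
    return F

# graded mesh on [0, 1], dense near t = 0; test problem has solution exp(t)
n = 200
mesh = [(i / n) ** 2 for i in range(n + 1)]
F = trap_volterra2_mesh(lambda t: 1.0, lambda t, s: 1.0, mesh)
```

Only the mesh changes; the quadrature weights adjust themselves panel by panel, which is exactly the bookkeeping that has to be added to a fixed-stepsize program.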

Finally, we replace f(t) by its approximation Fₙ and write (13.21) as an equality. Thus we obtain (13.25). Equation (13.25) is a cubic equation for Fₙ and is easily solved in a step by step fashion for F₁, F₂, .... Numerical experiments with the system (13.12)–(13.14) and with method (13.25) yielded consistent results. Both are viable numerical algorithms, provided some care is taken to take into account the rapid changes near t = 0.

13.3. Solving an equation of the first kind in the presence of large data errors. For this study we consider the solution of the equation (13.26), where g(t) is not known in functional form, but rather as a table of measured values. In such cases one can expect that experimental errors and uncertainties will play a role in the solution. The data with which we will be working is given in Table 13.3, where gᵢ denotes the observed value for g(t) at t = tᵢ. The data in Table 13.3 were generated to simulate a typical experimental situation. When the entries in Table 13.3 are plotted (Fig. 13.1), we see that the values do not vary smoothly, but seem to oscillate about a smooth function with an error of order of magnitude 0.05 or so. This should alert us to potential trouble. If we solve (13.26) using the midpoint method with h = 0.05, we expect an error magnification by a factor 1/h.

The results obtained by a direct application of the midpoint method are plotted in Fig. 13.2. They show the typical oscillatory behavior generally encountered in the numerical solution. Little useful information can be extracted from these results, so this approach cannot be considered very promising.

To obtain more acceptable results several methods of attack can be chosen. One of the simplest is to smooth the given data, that is, to construct a smooth function ĝ(t) such that ĝ(tᵢ) ≈ gᵢ, but where ĝ(t) comes from a properly chosen class of functions. One of the most common smoothing techniques is to write ĝ(t) as a linear combination of expansion functions.
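The 1/h magnification can be seen directly in a small experiment. The fragment below is our own minimal midpoint implementation; the stand-in problem, the integral of f from 0 to t equal to g(t) with g(t) = t (exact solution f = 1), is perturbed by artificial noise of size eps, and the computed values then scatter about 1 with amplitude up to 2·eps/h.

```python
import random

def midpoint_volterra1(g, k, T, h):
    """Midpoint method for integral_0^t k(t,s) f(s) ds = g(t).
    Returns approximations to f at the midpoints (j - 1/2)*h."""
    n = round(T / h)
    F = []
    for i in range(1, n + 1):
        t = i * h
        acc = sum(h * k(t, (j - 0.5) * h) * F[j - 1] for j in range(1, i))
        F.append((g(t) - acc) / (h * k(t, (i - 0.5) * h)))
    return F

random.seed(0)
eps = 1e-3                                   # size of the simulated data error
noisy_g = lambda t: t + random.uniform(-eps, eps)
h = 0.05
F = midpoint_volterra1(noisy_g, lambda t, s: 1.0, 1.0, h)
worst = max(abs(v - 1.0) for v in F)         # roughly eps/h, not eps
```

With exact data this method reproduces f = 1 to rounding error; with noisy data the error grows by the factor 1/h, which is why smoothing the data first pays off.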

05 0.26).10 0.672 FIG.150 0.492 0.634 0.367 0.551 0.538 0.95 1.462 0.342 0.559 0.674 0.30 0.555 0.40 0.576 0.50 0.337 0.197 0. .050 0.233 0.676 Smoothed gt with M = 5 0.481 0.208 CHAPTER 13 TABLE 13.420 0.602 0.65 0.55 0. 13.315 0.191 0.413 0.097 0.098 0.535 0.119 0.305 0.25 0.236 0.20 0.598 0.365 0.80 0.643 0.519 0.105 0.469 0.400 0.630 0.541 0.3 Original data for (13.269 0.330 0.430 0.85 0.045 0.35 0.401 0.145 0.583 0.516 0.271 0.518 0.437 0.059 0.492 0.576 0.320 0.60 0. Original and smoothed data for (13.15 0.495 0.26) and smoothed results.299 0.45 0.593 0.70 0.299 0. (i 0.211 0.90 0.211 0.75 0.1.00 ft Smoothed gt with M = 4 0.

[Fig. 13.2. Solution of (13.26) by the midpoint method with original and smoothed data.]

Here ĝ(t) = Σᵢ cᵢψᵢ(t), where the ψᵢ(t) are a set of expansion functions. The coefficients cᵢ can be determined in various ways, for example, by the method of least squares. In this method, the coefficients are chosen so that Σᵢ [gᵢ − ĝ(tᵢ)]² is minimized. The minimization leads to the well-known normal equations. The most frequently used expansion functions are polynomials, trigonometric functions, and splines. In our study we computed the least squares cubic spline approximation by choosing for the ψᵢ the so-called B-splines, with some modifications to constrain the solution so that ĝ(0) = 0. For this case we used splines with M equidistant knots on the interval [0, 1]. Table 13.4 and Fig. 13.1 show the smoothed data values using M = 4 and M = 5. The two sets of smoothed values are so close together that they are virtually indistinguishable on the graph.

When we use the smoothed values for the midpoint method with h = 0.05, we obtain results which are better than before, but still show some oscillations. While the results for M = 4 and M = 5 agree on the general shape of the solution, they differ by about 10% (Fig. 13.2). Perhaps this should not be too surprising. The uncertainties in the original data were of that order of magnitude, so that a similar error in the result is not unreasonable.
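The smoothing step itself is a small computation. The sketch below uses a hypothetical polynomial basis {t, t², t³} in place of the B-splines of the text (any basis vanishing at t = 0 enforces ĝ(0) = 0 automatically) and solves the normal equations by Gaussian elimination:

```python
def least_squares_fit(ts, gs, basis):
    """Fit g(t) ~ sum_k c_k * basis[k](t) by minimizing the sum of
    squared residuals; the minimization yields the normal equations."""
    m = len(basis)
    A = [[sum(bi(t) * bj(t) for t in ts) for bj in basis] for bi in basis]
    b = [sum(bi(t) * g for t, g in zip(ts, gs)) for bi in basis]
    # solve the m-by-m normal equations by Gaussian elimination with pivoting
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, m):
            fac = A[r][i] / A[i][i]
            for c in range(i, m):
                A[r][c] -= fac * A[i][c]
            b[r] -= fac * b[i]
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, m))) / A[i][i]
    return lambda t: sum(ck * bk(t) for ck, bk in zip(coef, basis))

# basis vanishing at t = 0, so the smoothed function satisfies g(0) = 0
basis = [lambda t: t, lambda t: t * t, lambda t: t ** 3]
ts = [i / 20 for i in range(1, 21)]
gs = [2.0 * t - t ** 3 for t in ts]      # noise-free stand-in data
smooth = least_squares_fit(ts, gs, basis)
```

For data that actually lies in the span of the basis the fit is recovered exactly (up to rounding); for noisy data the residuals should be compared with the expected fluctuations in gᵢ, as discussed in the text.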

[Table 13.4. Results for (13.26) by the midpoint method, using original and smoothed data, at t = 0.025, 0.075, …, 0.975.]

Still, the results illustrate the error magnification inherent in the solution of Volterra equations of the first kind. In fact, all three functions shown in Fig. 13.2 are "solutions" to (13.26), because for each, the residuals are small enough to be in the range of the expected fluctuations in gᵢ. Of course, if we had to choose, we would reject the result for the unsmoothed data as too erratic and giving no information. To choose between the results for M = 4 and M = 5 is a little harder; they can both be considered plausible answers. Usually, to define "plausible" means bringing in some additional information, such as the statistical properties of the fluctuations in gᵢ, the expected shape of f(t), and so on. Without such additional information we cannot make a meaningful choice between the cases M = 4 and M = 5. If we had to make the choice without any further information, we would probably choose M = 4 on the grounds that it gives a somewhat less oscillatory answer than M = 5.

But no doubt, many people would consider both cases M = 4 and M = 5 as somewhat unsatisfactory and may look for smoother functions which also approximately account for the observed data. This can be done by using still fewer B-splines or by constraining the approximating function further. In this way, a number of different, but plausible, answers can be obtained. Thus, while it is not difficult to produce a plausible solution to (13.26), it is very difficult to assign much meaning to it, or to say how it is related to other plausible solutions. This is one of the basic, unsolved problems in the study of ill-posed equations.

Notes on Chapter 13. The example from polymer rheology used in § 13.2 is described in Lodge et al. [171].


DE HOOG. Oxford.. JAKEMAN. WEISS. [9] R. Anal. 329-342. Sijthoff and Noordhoff. and Math. GLASKO. pp. 157-182. pp. [11] R. ANDRADE and S. Numerical differentiation procedures for nonexact data. A. 1978. Springer-Verlag. Computational methods of solution and random spheres approximation. 14 (1971). H. ABD-ELAL. eds. [16] C. ANDERSSEN. [6] . Phys.. ANDERSSEN AND E. 1787. pp. ANDERSSEN and A. 19 (1979). J. AND M. R. pp. BAEV AND V. J. 1-11. 135-153.. [15] A. The numerical solution of an Abel integral equation by a product trapezoidal method. 442-443. 11 (1974). Basel. [10] . Math. Inst. pp. SIAM J. of Wisconsin.. 96-106. S. [8] . Numerical Treatment of Integral Equations. Math. Univ. BAKER. SIAM J. Berlin. [4] R. Math. pp. [14] . Technometrics. Stable procedures for the inversion of Abel's equation. The Numerical Solution of Integral Equations. T. Num. Appl. pp. J. [2] J. 1980. 53. F. On solution of the converse kinematic problem of seismics by means of a regularizing algorithm. An asymptotic expansion for a regularization technique for numerical singular integrals and its application to Volterra integral equations. ALBRECHT AND L. [3] A.. Alphen aan den Rijn. J. 105 (1975). J. pp. AND R. A time series approach to numerical differentiation. Appl. V. S. 22 (1974). DE HOOG. Prob. Utilitas Mathematica. Improved numerical methods for Volterra integral equations of the first kind.. 8 (1975). F. F. 10 (1973). Numer. Math. Center.. 409-418. Lecture Notes in Mathematics 630. 17 (1976). 97-101. [17] . Inst. 213 . USSR Comp. pp. Math. 4 (1976). 111-126. 1977. On optimal high accuracy linear multistep methods for first kind Volterra integral equations. McKEE. ANDERSSEN AND P.. ATKINSON. 729-736. Summary Rept. J. 23 (1979). Math. S. [7] R. B. 16. 1980. Anal. Product integration for functional of particle size distribution. Existence theorem for Abel integral equations.References [1] L. Madison. [12] C.. Application and numerical solution of Abel-type integral equations. 
Microscopy. The Application and Numerical Solution of Intergral Equations. pp. LUKAS. Tech. 69-75. 5 (1974). WHITE. Comput. Birkhauser Verlag. BLOOMFIELD. Abel type integral equations in stereology. On the numerical solution of Brownian motion processes. S. [13] K. S. pp. Ser. 291-309. BIT. JRunge-Kutta Methods for Volterra Integral Equations of the Second Kind. Clarendon Press. Int. Math. 1977. Numer. II. ANDERSSEN. T.. E. Appl.. S. ANDERSSEN. Vol. COLLATZ. pp. 16 (1974). Res. [5] R.

J. [33] J. [41] . BELLMAN AND K. pp. BOWNDS.. J. A combined recursive collocation and kernel approximation technique for certain singular Volterra integral equations.. Math. 747-757. B). E. Appl. 312-317. Math. Appl. On numerically solving nonlinear Volterra integral equations with fewer computations. F. A note on solving Volterra integral equations with convolution kernels. BEESACK. pp. A representation formula for linear Volterra integral equations. On an initial value method for quickly solving Volterra integral equations. pp. A. BOWNDS AND B. R. T. GUSHING.. M. 4 (1978). 30 (1976). KEECH. 3 (1977) pp. 20 (1984). BALASUBRAMANIAN. 4 (1978). 13 (1976). Appl. W. [30] J. 1954. Computing. and Hereditary Processes. Differential-Difference Equations. Univ. pp. London. J. The application of the least squares finite element method to AbeVs integral equation. Optim. H. Ph. [26] R. ACM Trans. Structure of recurrence relations in the study of stability in the numerical treatment of Volterra integral and integro-differential equations. L.. Approx. 120-141. Comp. 705-719. 22 (1981). [29] M. BOWNDS AND B. H. Retarded Control. eds. Bull.MAN.. 201-209. 1973. AMS. 14 (1979). Constant rate harvesting of a population governed by Volterra integral equations. [19] . Stability regions in the numerical treatment of Volterra integral equations. Rand Corp. [25] R. BRANCA. Anal. 18-27. Stability analysis of certain Runge-Kutta procedures for Volterra integral equations. NORRIE. 56 (1976). pp. 417-426. Math. 25 (1979). Math. Numer. An analog of the Runge-Kutta method for solutions of nonlinear Volterra integral equations. [21] C. 532-536.214 REFERENCES [18] C. pp. Comp. BAKER. pp. J. [35] . M. BENSON. 305-315. M. [32] J. MILLER. 1963. [40] F. BAKER AND J. 67-79. 11-29. Integral Equations. SIAM J. On an initial-value method for quickly solving Volterra equations: a review. 1982. [20] C. pp. Differential approximation applied to the solution of convolution equations. 18 (1964). pp.. (Ser. 
[38] J. Anal. D. SIAM J. H. Stability analysis of Runge-Kutta methods applied to a basic Volterra integral equation. H. 225-243. AND B. M. [24] P. Theory. [22] C. J. nonlinear Volterra equations. Differential Eq. . BRAUER. T. Anal. Math. M. [28] B. BOWNDS. 1 (1978). 394-417. H. Integral Equations.. Madison.. pp. [27] R. pp. [39] H. pp. pp. On a nonlinear integral equation for population growth problems. 1 (1965) pp. WOOD. The nonlinear Volterra equation of AbeVs kind and its numerical treatment. COOKE. BOWNDS AND J. [23] R. Theory Appl. Meth. [31] J. 24 (1978). SIAM J. Austral. Math. J.. 518-540.. M. 6 (1975). AND G. 307-324. Int. 61-66. Numer. 20 (1978). Comp. WOOD. pp. BAKER AND G. Math. C. BOWNDS AND B. 79 (1973). [34] J. 305-315. 2 (1980). WILKINSON. P. WOOD.. Wisconsin. J.. A Survey of the Mathematical Theory of Time-Lag. Proc. Numer. Treatment of Integral Equations by Numerical Methods. in Golberg [ 116]. [36] . 15 (1978).. A modified Galerkin approximation method for Volterra equations with smooth kernels. Eng. Academic Press. Comp. pp. BOWNDS. On solving weakly singular Volterra equations of the first kind with Galerkin approximation. pp. pp. pp. A smoothed projection method for singular. BAKER AND M. 133-151. Errors in the numerical quadrature for certain singular integrals and the numerical solution of Abel integral equations. Comparison theorems and integral inequalities for Volterra integral equations. 487-491. E. KALABA. T. T. AMS. 153-164. [37] J. Math. M. Academic Press. R.D thesis. Anal. BELLMAN. DE VRIES. BELTYUKOV. New York. Soc. KOTKIN. S. BELT . Software.

pp. Eighth Manitoba Conference on Numerical Mathematics. SIAM J. DAY. pp. 13 (1973). Discretization of Volterra integral equations of the first kind. Third Manitoba Conference on Numerical Mathematics. J.. pp. J. Numerical Methods. pp.REFERENCES 215 [42] H. New York.. 1973. pp. pp. [60] H. 28 (1974). 1971. 30 (1978). J. pp. [52] H. [56] . Comp. H. 12 (1974). On the approximate solution of first-kind integral equations of the Volterra type. . [54] . Appl. pp. Comp. J. pp.. [63] . Computing. [55] . [50] . A. FIAIRER. [66] J. Piecewise polynomial collocation for Volterra integral equations of the second kind. CAHLON. Global solution of the generalized Abel integral equation by implicit interpolation. 151-157. pp. Computing. BRUNNER AND J. Math. 12 (1975). BRUNNER. Order of convergence of linear multistep methods for functional differential equations. 3. Handbook of Applied Mathematics. Computing. Superconvergence in collocation and implicit Runge-Kutta methods for Volterra type integral equations of the second kind. S. A block-by-block method for the numerical solution of Volterra integral equations.. Appl. [45] . 117-138. pp. On the numerical solution of a certain nonlinear integrodifferential equation. 23 (1979). pp. 1975. Projection methods for the approximate solution of integral equations of the first kind. Stability of numerical methods for Volterra integrodifferential equations. ed. 1978. [46] . The numerical solution of a class of Abel integral equations by piecewise polynomials. in Albrecht and Collate [2]. Appl. Inst. pp. BRUNNER.. A survey of recent advances in the numerical solution of Volterra integral and integro-differential equations. CHURCHHOUSE. [59] .. Proc. pp. 295-302. [64] S. Math. F. 12 (1973).. 412-416. CAMPBELL AND J. Analyse Math. Comp. 415-423. 121-128. Numerical solution of nonlinear Volterra integral equations.. BRUNNER. 10 (1970). [43] . Inst. 3-23. STEPLEMAN. Discretization of Volterra integral equations of the first kind (II). 
Utilitas Mathematica. Math. Phys. pp. Proc. 255-290. 105-122. Proc. [62] G. 117-128. 65-78. EVANS. The solution of nonlinear Volterra integral equations by piecewise polynomials. [53] . A note on collocation methods for Volterra integral equations of the first kind. pp. 708-716. BIT. NORSETT. Anal. J. The nonlinear renewal equation. Numer. pp. 381-413.. NEY. 7 (1981). M. Math. [61] B. 1981. On the numerical solution of nonlinear Volterra integro-differential equations. CHOVER AND P. pp. [57] . pp. Superconvergence of collocation methods for Volterra integral equations of the first kind. CHANG AND J. On the numerical solution of a class of Abel integral equations. BRUNNER AND M. J. On superconvergence in collocation methods for Abel integral equations. Vol. LAMBERT. 147-163. 171-187. 54-72. pp. 381-390. D. 213-229. Comp. 61-67. 11 (1974). Math. Math. pp. AND S. [47] . BIT. BRUNNER. CHARTERS AND R. pp. Appl. BIT. J. [48] H. 67-79. 75-89. 13 (1974). 12 (1973). Runge-Kutta theory for Volterra equations of the second kind. 120-124.. D. Fifth Manitoba Conference on Numerical Mathematics. pp. 31 (1977). 8 (1982).. Comp. 162-168. 19 (1982). [58] . T. T.. The application of the variation of constants formula in the numerical analysis of integral and integro-differential equations. First Manitoba Conference on Numerical Mathematics. [67] R. Computing. 10-19. Phys. 21 (1968). 876-886. The numerical solution of nonlinear Volterra integral equations. Numer. P. Comp. John Wiley. Math. Comp. The solution of Volterra integral equations of the first kind by piecewise polynomials. pp. Math. 26 (1978). 19 (1981). [44] . Proc. [49] H.. 20 (1977). [65] B. E. pp. DAY. [51] H. 21 (1979).

[71] . Integral Equations. On the solution of a Volterra integral equation with a weakly singular kernel. A. [78] F. EGGERMONT. 23 (1975). Bloomington. [72] C. 269-274. 199-213. pp. pp. L.. Clarendon Press. Springer-Verlag. [81] . M. ABD-ELAL.. J.. Academic Press. Comp. EL TOM. J.. JR. J. 11 (1974). On the numerical solution of Volterra integral equations. DAY. BIT. State Univ. Math.216 REFERENCES [68] J. 1972. 561-573. Principles of Etifferential and Integral Equations. J. in PICC [205]. [89] P. 21 (1973). of New York at Buffalo. Math. [80] . 21 (1967). 1930. 184-190. Math. Mathematical programming and integral equations. pp. Anal. 295-306. Tech. Numer. B. T. pp. [90] . CRYER. [84] L. Numer. Dept. New York. On the solution of Volterra integral equations of the first kind. Studies 88-90. 1972. [74] H. Appl. Numerical methods for functional equations. Inst. 1981. DAVIS. Computer Science. Math. 7 (1970). A new analysis of the trapezoidal method for the numerical solution of Abel-type integral equations. SIAM J. 10 (1973). [83] . pp. K. 22-32. 24 (1981). [69] L. [92] . B. An abstract Volterra equation with application to linear viscoelasticity. 27 (1973). McGraw-Hill. [73] C. YOUNG. New York. pp. 365-383. Dover. [82] . Integral Equations and Stability of Feedback Systems. DOUGLAS. The Analysis of Linear Integral Equations. New York. COCHRAN. [85] L. W.. Application of spline functions in Volterra integral equations. IN. 8 (1971). Differential Equations. pp. Implicit Runge-Kutta methods for second kind Volterra integral equations. J. 1973.. 134-137. 1-7. 3 (1981). Academic Press. pp. [77] . T. Math. CORDUNEANU. E. Numer. DAFERMOS. MIPG50. 1962. pp. SIAM J. 317-332. COLLATZ. SIAM J. pp. Comp. 62-75. Indiana Univ. High order methods for a class of Volterra integral equations with weakly singular kernels... New York. Math. Schmitt. M. 1955. [88] H. in Delay and Functional Differential Equations. EDELS. Math. 1971. Appl. DISTEFANO. pp. 23 (1968). pp. WEISS. HEARNE. Oxford. 
DELVES AND J. A. pp. A starting method for solving nonlinear Volterra integral equations. 41 (1962).. pp. Numerical Solution of Integral Equations. [91] M. The theory of Volterra integral equations of the second kind. 354-357. [87] J. 178-188. F. Comput. HENDRY. eds. J.. Introduction to Nonlinear Differential and Integral Equations. [75] .. Kept. 8 (1968).. AND A. pp. DELVES. Anal. 1166-1180. AND J. 554-589. Numerical solution of Volterra integral equations by spline functions. New York. WALSH. 13 (1973). Anal. . A. pp. M. [70] C. ed. DE HOOG AND R. [76] J. Math. Berlin. Special discretization for the integral equation of image reconstruction and for Abel-type integral equations. Numer. High order methods for Volterra integral equations of the first kind. 647-664. Anal. [79] . A set of modules for the solution of integral equations. 1974. A Volterra integral equation in the stability of linear hereditary phenomena. BIT. Chelsea. K.. [86] N. Numerische Behandlung von Differentialgleichungen. and Phys. 4 (1973). Asymptotic expansions for product integration. Numerical solution of the Abel integral equation.

691-707. E. 33-39. pp. Anal. pp. Block methods for nonlinear Volterra integral equations. Inst. Math. GOODWIN. J.. 171-202. pp. ESSER. Math. SOPKA. pp. [116] M.REFERENCES [93] 217 . GLADWIN AND R.. 269-284. 14 (1975). Phil. BIT. FORD. Computing. Mathematical programming and integrodifferential equations. [106] A. [94] . On the integral equation of renewal theory. GLADWIN. pp. GAREY. [99] R. A.. 1978. SIAM J. Ser. [96] . Washington. Math. Math. BIT. 14 (1974). [115] C. pp. [101] W. SIAM J. 10 (1970). London. An Introduction to the Theory of Linear Systems. GLADWIN. [112] . 1974. Appl. Quadrature rule methods for Volterra integral equations of the first kind. [109] . pp. Comp. Numerical methods for second kind Volterra equations with singular kernels. 1972. [107] L. 826-846. Ann. Stability of quadrature rule methods for first kind Volterra integral equations.. BIT. [100] A. [117] . 12 (1975). J. Soc. 1977. in Golberg [116]. 17 (1976). SIAM J. 457-464. pp. Comp. Proc.. FRIEDMAN. 14 (1974). DC. 303-309. pp. Second Manitoba Conference on Numerical Mathematics. Comp. pp. 243-267. 12 (1941). Fourth Manitoba Conference on Numerical Mathematics. J. Stat. FETTIS. 381-413. A. 31 (1977). Numer. Soluing nonlinear second kind Volterra equations by modified increment methods. Dept. [98] R.. BYRNE. pp. 14 (1974). Anal. J. Proc. Second Manitoba Conference on Numerical Mathematics. FELDSTEIN AND J. Roy. pp. pp. [95] . T. 19 (1978).. 3.. AND G. BIT. BIT.. [102] H. 15 (1975). 288-297. 136-143. 253-263. On the numerical solution of equations of the Abel type. BIT. [118] A. J. 33 (1979). [114] C. Computing. 245-256. Efficient algorithms for Volterra integral equations of the second kind.. 705-716. Predictor-corrector methods for nonlinear Volterra integral equations of the second kind. 491-496. Application of spline functions to systems of Volterra integral equations of the second kind. T. 11 (1963). GOLBERG. 
Methods of higher order for the numerical solution of first kind Volterra integral equations. Numerische Behandlung einer Volterraschen Integralgleichung. pp. 295-310. 179-193. Trans. 325-333. pp. 245 (1953). Numerical methods for nonlinear Volterra integrodifferential equations. Appl. 14 (1974). Solution of linear integral equations by the Gregory method. Analyse Math. pp. On an integral equation of Volterra type. On the numerical stability of spline function approximations to solutions of Volterra integral equations of the second kind. 167-177. pp. On a method of Bownds for solving Volterra integral equations. pp. Inst. On spline function approximation to the solution of Volterra integral equations of the first kind.. ed. Proc. Fox AND E. ESPINOSA-MALDONADO. 12 (1972). 501-534. [113] C. R.. New York. [108] . 401-408. 153-166. [105] R. pp. Numer. pp. . Spline function approximation to the solution of singular Volterra integral equations of the second kind. The numerical solution of nonsingular linear integral equations. 14 (1974). 144-151. Math. [104] L. The numerical solution of Volterra integral equations with singular kernels. Math. 501-508. 1972. Plenum. of the Navy. Electronic System Command. 11 (1974). 18 (1964). pp. [97] . Solution Methods for Integral Equations. [110] . FRATILA. Anal. pp. FELLER. [103] W. Taylor series methods for the solution of Volterra integral and integrodifferential equations. JELTSCH. pp. Implicit methods for Volterra integral equations of the second kind. Numer. [Ill] . D. BIT. GOLDFTNE. pp. 2 (1965).

Phys. 10 (1967). [133] P. Clarendon Press. 1979. HOLYHEAD. 1949. 698-711. Stability and convergence of multistep methods for linear Volterra integral equations of the first kind. [140] K. 18 (1967). H. Application of integral equations in particle size statistics. Multistep methods for solving linear Volterra integral equations of the first kind. TAYLOR. On the stability of multistep formulas for Volterra integral equations of the second kind. 1970. Univ. S. New York. Computing. Comp. J. Math. Math. VAN DER HOUWEN. 240-246. IMA J. 205-217. pp. Assoc.. Series. 1921. pp. Math. D. 906-909. S.. P. pp. HOPKINS AND R. J. HELZMAN. Pacific J. pp. . Eine Ndherungsmethode zur Auflosung Volterrascher Integralgleichungen.218 REFERENCES [119] A. The numerical solution of differential and integral equations by spline functions. pp. IGUCHI. WOLKENFELT. Numer. R. Phys. SIAM J. 33 (1979). Phys. Center. GOLDMAN AND W. BIT. pp. Res. SHARMA.. 27 (1973). 1933. [123] G. Asymptotic expansion for multistep methods applied to nonlinear Volterra integral equations of the second kind. 37 (1981). Numer. Summary Rept.. WOLKENFELT. VAN DER HOUWEN AND H. BIT. H. H. 1053. Univ. K. VISSCHER.. 12 (1975). Math. Tech.. A starting method for solving nonlinear Volterra integral equations of the second kind. [138] . L. HOCK. J. McKEE. GUZEK AND G. J. [139] . Wisconsin. [137] . Anal. McKEE. HOLYHEAD AND S. VAN DER HOUWEN AND P. Numer. TE REELE. Summary Rept. [136] H. A new error analysis for the cubic spline approximate solution of a class of Volterra integro-differential equations. Brit. Convergence and stability results in Runge-Kutta type methods for Volterra integral equations. JAIN AND K. SIAM J. Appl. pp.. Tables of Integrals. Center. pp. 375-377. pp. Math. 24 (1980). [129] P. [121] I. HUNG. 813-830. 303-328. [134] P. 460-461. Numer. J. M. 1979. 155-178. [131] P. [122] J. T. 13 (1976). S. Wisconsin. Madison. 10 (1960). A. 
Numerical solution of linear differential equations and Volterra's integral equation using Lobatto quadrature formula. 14 (1974). J. Tech. Univ. Summary Rept. Madison. A. 298-305.. [132] P. Oxford. HUBER. pp. [126] W. Center. HILL. J. Res. J. Convergence and stability analysis for modified Runge-Kutta methods in the numerical treatment of second kind Volterra equations. Math. 1 (1981). Divergent Series. 169-182. Academic Press. J. Spfine approximation to the solution of a class of Abel integral equations.. Madison. Math. Wisconsin. 101-107. A new class of one-step methods for the solution of Volterra functional differential equations. BAKER. KEMPER. pp. On creep and relaxation. Comp. J. The calculation of true particle size distributions from the sizes observed in a thin slice. 203-207. Madison. [128] P. 77-100. A. pp. [124] D.. pp. AND P. H. Backward differentiation type formulas for Volterra integral equations of the second kind. 28 (1957). Tech. [135] A. Monatsh. pp. Res. 15 (1972). 269-292. A higher order global approximation method for solving an Abel integral equation by quadratic splines. 47 (1938). GRADSHTEYN AND I. Math. 1904. M. Numer. 1979. pp. HARDY.. A. Numer. [141] M. Error bounds for an approximate solution to the Volterra integral equation. Summary Rept. Comp. GOLDSMITH. Anal. Appl. Error analysis of a linear spline method for solving an Abel equation. M. AND C. [127] . 341-347. 563-570. Mach. An extrapolation method with step size control for nonlinear Volterra integral equations. and Products. pp. RYZHIK. 1965. Anal. [125] J. [120] P. Math. Univ. Math.. HAMMING. Comm. 20 (1980).. Tech. f. Wisconsin. in Golberg [116]. Center. [130] I. Res. pp. 38 (1981). VAN DER HOUWEN.

Numer.. SIAM J. 73-88. Anal. 381-397. A. MALINOVSKII. Math. 21 (1976). MAUTZ. 307-332. ANDERSSEN. pp. USSR Comp. Math. Ph. 12 (1969). P. Phys. Assoc. S. General discussion. Comp. J. JONES.. pp. 399-415. [161] S. 121-133. pp. Res. LING. JUSE 14 (1967). [162] R. 271-277. [154] E.. [157] M.. and Math. Yu. Integral equations of Volterra type. 131-142. Comput. [155] G. J. Center. .D thesis. 1-14. Summary Rept. The numerical solution of singular Volterra integral equations. The numerical inversion of Abel type integral equations in stereology. JAKEMAN. pp. pp. Canberra. 32 (1979). Math. A third order, semi-explicit method in the numerical solution of first kind Volterra integral equations. The numerical solution of Abel's integral equation. 12 (1975). [158] V. Madison. Anal. 16 (1969). Stat. JACKIEWICZ. [163] P. KOSAREV. Stat. pp. pp. 13. Mach. Numerical methods for Volterra integral equations of the first kind. L. KNIRK. Appl.. Fluid Mech. A method for solving nonlinear Volterra integral equations of the second kind. BIT. pp. Arch. SIAM J. LIN. Phys. Damped vibration of a string. KENT AND J. KEMPER. J. [150] G. J. pp. [165] . [159] J. 6 (1969). 119-139. AND V. On a method of Noble for second kind Volterra integral equations. PODGAETSKI.. in PICC [205]. M. pp. 210-243. 138-145. [143] A.. LEVICH. On the numerical solution of convolution equations and systems of such equations. 1930. 10. Comp. 72 (1975). J. Spline function approximation for solution of functional differential equations. Math. II. 361-372. 393-397. 787-797. G.. pp. pp. pp. pp. 105 (1975)... Numer. pp. #6 (1973). On a system of integrodifferential equations occurring in reactor dynamics. Appl. 117-121. Convergence of multistep methods for Volterra functional differential equations.
5 (1968). 3 (1967). Phys. Numerical methods for Volterra integral equations with singular kernels. Math.. [152] M. and Math. Numer. [142] A. LEVIN AND J. [160] N.. pp. SIAM J. LEVINSON. On the numerical solution of the Volterra integral equation of the first kind by the trapezoidal method. Math. 3 (1978). Anal. pp. OULES. A successive approximation for nonlinear Volterra integral equations of the second kind. [147] M. G. [146] J. J. The numerical solution of Volterra integral equations by finite difference methods. DeGruyter.. Comput. Anal. LAUDET AND H. 595-600. J. [151] D. Rep.. KOWALEWSKI. 825. 393-397. The numerical solution of a Volterra integral equation. Anal. Univ. pp. 1975. semi-explicit method in the numerical solution of first kind Volterra integral equations. 1967. KUMAR. Rep. J. Phys. Anal. L. Abel type integral equations in stereology I. pp. 1-11. JUSE 13 (1966). JAKEMAN AND R. 64 (1973). A nonlinear Volterra equation arising in the theory of superfluidity. 11 (1962). [167] . USSR Comp.. Comp.. [148] G. S. pp. 17 (1977).. Appl. 15 (1961). Integralgleichungen. Microscopy. Tech. [156] S. Numer. Math. 295-301. Australian National University. Linear multistep methods for a class of functional differential equations. [149] . Appl. 371-399. [164] .

The stability of solution of differential and integral equations. pp. Ann. [169] . [191] G. pp. Math. MAKINSON AND A. pp. 1980. E. L. Nonlinear Anal. Anal. [188] G. pp. in Baker and Miller [22]. MINERBO AND M. G. L. Comp. SIAM J. 3 (1979). Hindustan Publ. 336-344. Appl. 8 (1971). J. [192] W. Numer. Numerical solution of Volterra integral equations. 99-137. Oxford. 6 (1980). [179] W. J. [170] . [190] R.. 163-184. 80A (1978). pp. A. Roy. MAYERS. Angew. 95-99. Delhi. A contribution to the theory of self-moving aggregates with special reference to industrial replacement. 9 (1951). Convergence of step-by-step methods for nonlinear integro-differential equations. [183] . Menlo Park. Stat. Math. The repetition factor and numerical stability of Volterra integral equations. [173] A. Iowa City. Soc. 233-238. in Delves and Walsh [84]. Smoothness of solutions of Volterra integral equations with weakly singular kernels.. F.. A. C. [172] J. 10 (1939). Anal. Nonlinear Volterra Integral Equations. 1-25.. MILLER AND A. [171] A. 20 (1975). BRUNNER. pp. in PICC [205]. [182] S. K. 1962. [185] S. The approximate solution of Volterra integral equations of the second kind. 16 (1979). A nonlinear perturbed Volterra integrodifferential equation occurring in polymer rheology.. Ph. [178] L. Heat transfer between solids and gases under nonlinear boundary conditions. Cyclic multistep methods for solving Volterra integro-differential equations. A-stable linear multistep methods for Volterra integro-differential equations. Bemerkungen zur numerischen Behandlung von nichtlinearen Volterraschen Integralgleichungen mit Splines. [186] G. Proc. Edinburgh. B. SIAM J. MAGNA. pp.. FELDSTEIN. J. Fox. Math. pp. F. MILLER. 
56 (1976). pp. [184] . 499-509. N. On the stability of linear multistep methods for Volterra integral equations of the second kind. Math.. Univ. [168] . pp. J. 21 (1979). Inversion of Abel's integral equation by means of orthogonal polynomials. Math. Equations of Volterra type. Math. Numer. Inst. McLEOD. Co. MATTHYS. in Anderssen et al. 235-239. 85-94. 27 (1976). [177] A. pp. WEISS. 598-616. [180] J. 6 (1969). MICULA. K. pp.. 242-258. [174] CH. W. [181] D.. S. 37 (1981). Product integration methods for Volterra integral equations of the first kind. Comp. 183-194. MANN AND F. 329-347.. MIKHLIN. 343-358. Numer. McKEE. J. Appl. Z. A survey of methods for the solution of Volterra integral equations of the first kind. The analysis of a variable step. Appl. 220 REFERENCES . 123-130. pp. pp. pp. Anal. BIT. LEVY. Benjamin. 413-421.. [187] S. MACCAMY AND P. The solution of Volterra equations of the first kind in the presence of large uncertainties. pp. 677-685. YOUNG. Provision of library programs for the numerical solution of integral equations.D thesis.. pp. 106-114. Linear Integral Equations. Iowa. variable coefficient linear multistep method for solving a singular integrodifferential equation arising in diffusion of discrete particles in a turbulent fluid. J. MAKROGLOU. LUBICH. R. Math. WOLF. T302-304. A-stable methods of higher order for Volterra integral equations. Math. 373-388. 1971. 1976. in Numerical Solution of Ordinary and Partial Differential Equations. SIAM J. pp. Computing. [175] R. pp. [176] G. Pergamon. LODGE. Appl. MILLER. LOTKA. A block-by-block method for Volterra integro-differential equations with weakly singular kernel. Mech. [189] R. E. 2 (1971). 247-256. Appl. Aplikace Matematiky. in Baker and Miller [22]. Quart. 11 (1971). LOGAN. 23 (1979). NOHEL. CA.

[210] P. [198] . G. Res. pp. Quart. 1969. Center. [200] H. Int. Tech. Math. pp. Rev. pp. Madison. 2 (1960). Méthode d'intégration numérique des équations intégrales et intégrodifférentielles du type de Volterra de seconde espèce. C. in Nonlinear Integral Equations. ed. pp. [205] PICC (Prov. Numer.. Eng. The error of Adams type methods for Cauchy. Numer. USSR Comp.. N.. An error estimate for Volterra integral equations. Appl. POSPELOV. J. Numerical solution of a nonlinear integro-differential equation.. H. POUZET. RADZIUK. 8 (1968). VERBAETEN. Comp. 117-124. The numerical solution of nonlinear integral equations and related topics. [203] G. NOBLE. H. The cylinder problem in viscoelastic stress analysis. LEE. ROGERS AND E.. NETA. H. Some integral equations in geometric probability. BIT.. [211] . Math. NETRAVALI. I. [204] J. M.. Basel. pp. 1964. 431-438. The numerical solution from measurement data of linear integral equations of the first kind. [196] B. PHILLIPS. [206] R. M.. Birkhauser Verlag. A bibliography on methods for solving integral equations. Eng. 729-740. E. pp. [194] B. PHILLIPS. A numerical method for the solution of certain classes of nonlinear Volterra integro-differential equations and integral equations. Meth. Comp. RAKOTCH. . REFERENCES 221 [193] O. Math. PAPATHEODOROU AND M. Math. Nachr. [212] . Biometrika. G. [215] E. NESTOR AND H. Math. 6 (1980). Instability when solving Volterra integral equations of the second kind by multistep methods. pp. pp. 152-159. [208] G. 20 (1973). 6 (1963).. Int. Anselone. 3-8.. SIAM Rev. 598-611. 7 (1964). 451-457. [214] J. 13. Numer. V. Wisconsin Press. Résolution numérique d'une équation intégrale singulière. SIAM Rev. 169-173. Anal. 53 (1966). Math. The approximate solution of Volterra integral equations. 181-186. Theory. 1960. J. PIESSENS AND P. Math. 83-98. [209] V. Francaise Traitement de l'Information. 362-368. Summary Rept. 
Formules de Runge-Kutta. Pergamon. 1966. BIT. 1106? Madison. Numerical methods for reducing line and surface probe data. 11 (1971). pp. 7 (1964). pp. pp. pp. 271-279. 200-207. pp. pp. Berlin. [213] A. [202] A. pp. Rev. [197] . PETSOULAS. Rev. Française Traitement de l'Information. 22 (1964). 27 (1983). BYRNE. POGORZELSKI. [201] T. Rev. Française Traitement de l'Information. A starting method for the numerical solution of Volterra's equation of the second kind. 5 (1973). 365-374.. [216] T. Numerical solution of Abel integral equation. Math. pp. 43-47. Rev. Appl. Algorithme de résolution des équations intégrales de type Volterra par des méthodes par pas. [195] A. [207] W. Center): Symposium on the Numerical Treatment of Ordinary Differential Equations. Störungsrechnung für lineare Volterrasche Integralgleichungen. 11 (1977). [199] J. Integral Equations and their Application. Integral and Integro-differential Equations. Wisconsin. Vol. 301-305. Univ. S. Phys. OULES. Collocation methods for Volterra integrodifferential equations with singular kernels. pp. pp. Approx. Étude en vue de leur traitement numérique des équations intégrales de type Volterra. and Math. 1971. in PICC [205]. pp. pp. Spline approximation to the solution of the Volterra integral equation of the second kind. 13 (1973). Rome 1960. 79-112. O'NEILL AND G. Springer-Verlag. Meth.. 117-131. JESANIS. Lecture Notes in Mathematics 109. PROSPERETTI. Numerical solution of Volterra integral equations. PORATH. 14 (1975). 1177. 11 (1977). New York. OLSEN. pp. J. Phys. Theory. 89 (1982). Math.

Anal. WOLFE. Anal. 59-87. J. Successive approximations to solutions of Volterra integral equations. [222] R. [220] V. 8 (1971). pp. Product integration for the generalized Abel equation. 529-534. [233] C. CAHLON. J. Math. Math. R. The Laplace Transform. [235] D. 218-224. 5. A. Dover. [218] T.. 11 (1953). 1941. TAYLOR. F. Appl. Inst. Programme for solving a system of Volterra integral equations (of the second kind). Appl. 159-170. SWICK. Sov. O. G. Math. SIAM J. [221] C. WEISS AND R. Volterra integral equations. Integral Equations.. Appl.. Theory. Sur /'equation integrate nonlineaire de Volterra. J. Math. Compositio Math. 23 (1968). Anal. WESTREICH AND B. GauthierVillars. 30 (1976). SATO. pp. pp. in Quatrieme Congres de Calcul et de Traitment de I'lnformation. One-step methods for the numerical solution of Volterra functional differential equations. SPOHN. NJ. Regularization of Volterra equations of the first kind. 416-425. pp. Analytically solving integral equations by using computer algebra. The asymptotic behavior of an integral equation with application to Volterra's population equation. S. Theory of Functional and Integral and Integra-differential Equations. 283-290. 271-282. [227] L. 12 (1979). H. [234] F. [231] V. ACM Trans. pp. On the numerical solution of Volterra integral equations. SCHMAEDEKE. 12 (1969). pp. pp. B. Interscience. Numerical solution of a class of Abel integral equations. pp. Math. 786-795. Paris. TSALYUK. Asymptotic behavior of some nonlinear Volterra integral equations. J. pp. SIAM J. New York. 289-303. SERGEEV.. Math. S.. 9 (1978).. pp. J. [237] R. 340-349. pp. Math. Math. [232] . WEISS. [238] D. Software. Stability of multistep methods for delay differential equations. Dunod.222 REFERENCES [217] I. [224] J. Versailles 1964. [223] D. pp. J. The solution of Volterra integral equations of the first kind using inverted differentiation formulae. pp. L. WIDDER. J. 764-779. [230] Z. WAGNER. 26 (1980). WANG. A. pp... Dokl. [229] F. pp. 
Soviet Math. 128-148. pp.. 16 (1976).. Math. Math. Press. Numer. 1931. J.. Inst. TRICOMI. Asymptotic behavior of some deterministic epidemic models. [242] M.. USSR Comp.. J. pp. Sur les formules a pas lies dans Vintegration des equations integrates du type Volterra. M. Math. Anal. and Phys. WIEDERHOLT. Math. Lemons sur la theorie mathematique de la lutte pour la vie. [219] W. 22 (1978). Princeton Univ.. SMARZEWSKI AND H. A nonlinear model for human population dynamics. Comp. [240] L. Appl. Princeton. Appl. New York. Math. STOUTEMEYER. [243] P. TAVERNINI. SIAM J. 501-505. Comput. 175-186. and Math. 177-190. 40 (1981). 5 (1965). C. RUMYANTSEV. V. [225] D. VOLTERRA. 32 (1954). The numerical solution of nonsingular integral and integro-differential equations by iteration with Chebyshev series.. 22 (1978). BIT. [236] R. MALINOWSKI. pp. 19 (1972). Numerical solution of Volterra integral equations with continuous and discontinuous terms. 3 (1977). WEISS. J. VAN DER HouwEN. ANDERSSEN. SHILEPSKY. STEINBERG. 12 (1971). J. pp. 1959.. W. 212-217. Numerical solution of Volterra integral equations. [226] K. A product integration method for a class of singular first kind Volterra equations. 193-196. 442-456. Math. pp. Approximate solutions for Volterra integral equations of the first kind. [228] P. 18 (1972). 604-613. Phys. Appl. Anal. [241] K. 1965. WIGGINS. J. 48 (1974). Math. Analysis of numerical methods for . G. pp. Comp. 48 (1975). WOLKENFELT AND P.. Paris. Numer. 26 (1972). E. 1957. 266-278. [239] D. 715-758.. Approx. Numer.

pp. C. K. . . M. Integral Equations. A. Stability analysis of methods employing reducible rules for Volterra integral equations. 7-1982. Mathematisch Centrum. Amsterdam. AMINI. 224 (1954). 131-138. 1981. Rept. 1106-1119. J. London. B. T. The numerical analysis of reducible quadrature methods for Volterra integral and integro-differential equations. Comp. Application and numerical solution of Abel-type integral equations. 705-719. Lectures on Differential and Integral Equations. Australian National University. Numerically solving nonlinear Volterra integral equations with fewer computations. SUPPLEMENTARY BIBLIOGRAPHY 223 [244] [245] [246] [247] [248] [249] second kind Volterra equations by embedding techniques. S. SIAM J. 43-46. J. Amsterdam. 33-42. J. On the construction of stability polynomials for modified R-K methods for Volterra integro-differential equations. R. T. Stability analysis of numerical methods for Volterra integral equations with polynomial convolution kernels. S. Mathematisch Centrum. 61-82. BAKER AND J. 28 (1982). Integral Equations. 20 (1983). Integral Equations. Interscience. 322-328. Roy. pp. 552-561. Kept? YOSIDA. 317-332. WILKINSON. C. Linear multistep methods and the construction of quadrature formulae for Volterra integral and integrodifferential equations. BOWNDS. Proc. 95-109. S. in Baker and Miller [22]. Canberra. 1982. P. Theory and performance of a subroutine for solving Volterra integral equations. H. P. TE RIELE. Soc. pp. NW 76/79 (1979). pp. 73-92. Nonpolynomial spline collocation for Volterra equations with weakly singular kernels. BAKER. pp. 1960. WOOD. New York. Roy. Collocation as a projection method and superconvergence for Volterra integral equations of the first kind. pp. 409-420. in Baker and Miller [22]. . . M. EGGERMONT. AND H. London. 187-203. pp. Akademisch Proefschrift. H. Comments? 13 (1976). Math. 5 (1983). BIT. 131-138.
On collocation approximations for Volterra equations with weakly singular kernels. H. 40 (1984). J. BRUNNER. Comments on the performance of a FORTRAN subroutine for certain Volterra equations. . VAN DER HOUWEN.. 23 (1983). H. On the stability of numerical methods for Volterra integral equations of the second kind. Supplementary Bibliography A list of recent papers on the numerical solution of Volterra equations. DE HOOG. Implicit Runge-Kutta methods of optimal order for Volterra integro-differential equations. J. in Baker and Miller [22]. 163-168. The application of approximate product-integration to the numerical solution of integral equations. H. pp. Numer. YOUNG. AMINI. J. pp. J. C. T. TE RIELE. ANDERSSEN AND F. R. pp. P. pp. pp. pp. BRUNNER AND H. 107-122. B. 6 (1984). Integral Equations. Ser. A. Approximate product integration. pp.. Res. in Baker and Miller [22]. Stability and structure in numerical methods for Volterra integral equations. Computing.. Numer. AMINI. pp. Anal. in Baker and Miller [22]. P. 3 (1981). A. Soc. WOLKENFELT. Volterra type integral equations of the second kind with nonsmooth solutions: High-order methods based on collocation techniques. pp.

. McKEE AND A. Anal. SIAM J. 301-316. 20 (1983). 153-162. Math. JACKIEWICZ. Anal. 615-643.. pp. 39 (1982).. IMA J. pp. . J. HAIRER. SIAM J. pp. . 224 SUPPLEMENTARY BIBLIOGRAPHY . The numerical solution of parabolic Volterra equations arising in polymer rheology. pp. Numer. J. KERSHAW. H. On the relation between the repetition factor and the numerical stability of direct quadrature methods for second kind Volterra integral equations. SIAM J. pp. Some results for Abel-Volterra integral equations of the second kind. 21 (1984). . Linear multistep methods for Volterra integral equations of the second kind. 221-231. J. Numer. Anal. On optimal integration methods for Volterra integral equations of the first kind. 21 (1984).. in Baker and Miller [22]. A. GLADWIN. A review of linear multistep methods and product integration methods and their convergence for first kind Volterra integral equations. 123-135. Anal. J. Anal. Anal. E. pp.. Comp. . Anal. Anal. 2 (1982). 30 (1983). pp. Numer. Numer. in Baker and Miller [22]. S. Anal. Anal.. SIAM J. pp. 3 (1983). D. 20 (1983). 20 (1983). VAN DER HOUWEN AND H. P. Reducible quadrature methods for Volterra integral equations. pp. P. 20 (1983). LUBICH. 439-465. E. TAYLOR. Modified multilag methods for functional equations. pp. 24-51. SIAM J. STOKES. in Baker and Miller [22].. Math. Extended Volterra-Runge-Kutta methods. pp. IMA J. IMA J. Numer. SIAM J. . WOLKENFELT. J. 890-908. 1049-1061. McKEE. On the stability of linear multistep methods for Volterra equations of the second kind. . pp. HAIRER AND CH. p. 21-35. Lax-Wendroff methods for hyperbolic history value problems. 131-152. pp. 
On the stability of linear multistep methods for Volterra convolution equations. pp. 1032-1048. pp. . Collocation for Volterra integral equations of the first kind with iterated kernel. The stability of a numerical method for a second kind Abel equation. P. pp. pp. 233-238. 437-449. TE RIELE.. Z.. Anal. RENARDY. MARKOWICH AND M. Computing. S. Numer. The numerical solution of Volterra functional differential equations of neutral type. pp. pp. Numer. pp. On the stability of Volterra-Runge-Kutta methods.. pp. 2 (1982). Hybrid methods in the numerical solution of Volterra integro-differential equations. MAKROGLOU. CoIlocation? Collocation methods for weakly singular second kind Volterra integral equations with nonsmooth solution. CH. J. A block-by-block method for the numerical solution of Volterra delay integro-differential equations. Numer. 273-282. 40 (1983). in Baker and Miller [22]. TE RIELE.. Numer. 511-518. H. Numer. 143-160. pp. 67. . P. Applications of results of Vainikko to Volterra integral equations. 79-94. LUBICH. The construction of reducible quadrature rules for Volterra integral and integro-differential equations. in Baker and Miller [22].

104 starting. 133. 209 causality. 13 first kind equation to second kind. 177 degenerate kernel approximation methods. 104 local. 48 integrodifferential equations. 83 differentiation methods for first kind equations. 84 Dahlquist theory. 5 inversion formulas. 101 conversion: differential equation to integral equation. 67 linear second kind equations. 77 nonlinear. 126 delay differential equation. 77 convolution theorem. 69 nonlinear second kind equations. 37 differential resolvent. 185 of an approximation method. 47. 78 integrodifferential equation to integral equation. 42. 101 contraction mapping argument. 134 A-stability. 75 equations with unbounded kernels. 136 B-splines. 103 error expansion. 101 for integrodifferential equations. 101 order. 100 order of. 61. 112 comparison theorems. 4 accumulated error: consistency. 32 convergence: and stability. 114 for integrodifferential equations. 101 Adams-Moulton methods. 169 for equations of the first kind. 40. 58 consistency condition: for integral equations. 144 existence of a solution: Abel's equation. 183 consistency error: accumulated. 89 Beltyukov methods. 143 discretization error. 55 225. Index. Abel's equation: generalized. 100 effect of data errors: on Abel's equation. 122 block-by-block methods: for Abel's equation. 74 numerical solution. 154 for equations of the second kind. 17 characteristic polynomial of stability. 121 Euler's method. 204 on equations of the first kind. 165 simple. 49 linear first kind equations. 185 using product integration. 73. 189 difference kernel. 18. 104 Aitken's method. 204 elongation of a filament. 204 error estimates. 29 nonlinear first kind equations. 182 Abel's equation. 52. 158 direct methods for equations of the first kind. 68 integral equation into differential equation. 105. 67. 111 asymptotic behavior of solutions. 7. 5 convolution equation: linear.

149 equations of the second kind. 125 product integration. 19 functional equations. 196 midpoint method for first kind equations. 3 singular. 7 instability in numerical computation. 51 monotonicity properties. 177 interchanging order of integration. 36 kernel: degenerate. 45 Picard method. 130 product trapezoidal method: equations of the second kind. 67 Fourier transform. 151 heat conduction equation. 8 Lipschitz continuous. 160 numerical solution. 133 roundoff error: in first kind equations. 195 product trapezoidal method for second kind equations. 182 Lipschitz condition: for equations of the first kind. 35 Richardson's extrapolation. 177 method of continuation. 152 for second kind equations. 122 . 38 linear problems. 161 in second kind equations. 3 of Fredholm type. 19 for Laplace transform. 52 for integrodifferential equations. 36 nonlinear problems. 135 equations of the first kind. 58 of an integral equation. 162 on second kind equations. 162 initial value problems and Volterra equations. 186 Pascal program: trapezoidal method for second kind equations. 118. 69. 121 Runge-Kutta methods: explicit. 32 method of successive approximation. 84 iterated kernels. 143 with smooth kernels. 48 finite rank kernel. 182 Newton-Cotes methods: for first kind equations. 8 first kind equations: nonlinear. 84 linear multistep methods. 168 Pouzet methods. 15 predictor-corrector methods. 108. 19 ill-posed problems. 5 numerical methods. 166 for equations of the first kind. 204 population equation. 98 for equations of the first kind. 71. 4 unbounded. 29 midpoint method: for Abel's equation. 103 integral equations: of Volterra type. 106 resolvent equation: difference kernels. 14 repetition factor. 63 resolvent kernel. 29 piecewise polynomial approximations. 157 polymer rheology example. 188 Gregory methods: for equations of the second kind. 8 finite rank. 144 Milne-Simpson methods. 197 product midpoint method for first kind equations. 3 weakly singular. 3 integrodifferential equations: reduction to integral equations. 
110 integrodifferential equations. 69 for equations of the second kind. 21 renewal: equation. 4 Laplace transforms. 6 inversion formulas: for Fourier transforms. 226 systems of the second kind. 23 reactor dynamics equation. 198 perturbation. 98 numerical stability: equations of the first kind. 122 radiating source. 14 density. effect: on first kind equations.

48. 18 history-dependent. 55 systems of the second kind. 122 second kind equations: linear. 17 with memory. 17 evolutionary. 183 interval. 24 system identification. 46 systems theory. 18 systems: causal. 16 unit-response function. 16 nonlinear. 4 of first kind. 99 starting values. equations of the first kind. 178 227 uniqueness of a solution: Abel's equation. 3 linear. 49 linear first kind equation. 15 weights for numerical integration. 13 systems of integral equations. 13 linear. 17 Volterra equations: applications. connection with differential operators. Volterra population equation. 67 linear second kind equation. 104 stability: definition. 52. 111 stereology example. 114 of Beltyukov type. 16 time-invariant. 29 nonlinear first kind equation. 3 feedback. 104. 91 trapezoidal method: Abel's equation. 3 Volterra operators. 166 equations of the second kind. 122 of Pouzet type. 144 integrodifferential equations. 62 integrodifferential equation. 98 well-posed problems. 4 of second kind. 98 starting error. 29 nonlinear. 13 classification. 129 semicirculant matrix. 45 implicit. 172 Simpson's methods. 15 Tauberian theorems. 75 equations with an unbounded kernel. 51 numerical solution. 62 unit-impulse function. 95 product integration methods. 69 nonlinear second kind equation. 46. 5. 73. 17 .
