
Course Project

Determination of BOD’s Parameters

TABLE OF CONTENTS

1. INTRODUCTION AND OBJECTIVES
2. NUMERICAL PROBLEM
2.1. THOMAS' METHOD
2.2. NEWTON'S METHOD
2.3. GAUSS-NEWTON'S METHOD
2.4. BARNWELL'S METHOD
2.5. UNCONSTRAINED OPTIMIZATION
3. DISCUSSION OF RESULTS
3.1. INITIAL APPROXIMATIONS AND MODELS' GOODNESS
3.2. COMPUTATIONAL COST
4. CONCLUSIONS
5. REFERENCES
Course Project - Araujo Roso, González Usúa, Reina Redondo & Sierra Muñoz

1. Introduction and Objectives


This document establishes a model to predict the variation of biochemical oxygen demand (BOD) with respect to time. BOD is the amount of dissolved oxygen that biological organisms need to degrade the organic material present in a certain quantity of water (Sawyer et al., 2003). It is considered an indirect measurement because not all organic material is biodegradable. Moreover, some chemical products are not biodegradable at all; in those cases it is better to employ the chemical oxygen demand (COD) test (Masters, 1997). The COD test is not used in this report because of some drawbacks in its practical application.

The level of BOD in a given water is determined by means of a standard laboratory procedure known as the BOD test. The sample of water is diluted in another prepared sample that has been seeded with microorganisms. The final mixture is placed in a special bottle, the so-called Winkler bottle, which has a volume of 300 ml and is sealed to prevent the entry of air, so that no additional dissolved oxygen can alter the results of the test. The sample must be kept at 20 °C in a dark room. The aim of the experiment is to measure the difference between the concentration of dissolved oxygen at the beginning of the process and the corresponding one at the end of the test. The value of BOD is the result of a mass balance of the dissolved oxygen between the two aforementioned situations.

Different numerical methods have been developed whose goal is to simulate the evolution of BOD with respect to time. One of the most important models represents this variation by means of a first-order kinetic reaction (McGhee, 1991; Metcalf & Eddy Inc., 1991; Davis and Cornwell, 1991; Horan, 1993; Masters, 1997). The resulting expression states that the rate of dissolved oxygen consumption is directly proportional to the concentration of organic material not yet oxidized. The mathematical expression of this theoretical model is the following:

dL/dt = −κL(t)   for t > 0    (1)

where L(t) is the remaining oxygen-equivalent concentration of the organic matter, t is the time and κ is the rate of degradation of the organic compounds, which is regarded as constant. Considering the analytical solution of the above ODE and letting L0 be the maximum concentration of dissolved oxygen that the microorganisms can consume (at the very end of the test), the final expression that defines the level of BOD with respect to time is:

𝐵𝑂𝐷(𝑡) = 𝐿0 [1 − exp(−𝜅𝑡)] (2)

Equation (2) shows that the unknowns of the present problem are L0 and κ. To solve this non-linear problem, the aim of this document is to implement different codes in Matlab in order to simulate the behaviour of BOD over time. To this end, the authors have reviewed the literature on this topic and have considered four different models as well as several built-in functions of the chosen software.

The obtained results are discussed in the final part of the document. In some cases, particularly for Newton's method, the region of convergence is analysed because of its singular shape. Among the methods used, two of them pose the problem directly as a non-linear problem, whereas the other two reduce the initial system to a root-finding problem and a linear least squares problem, respectively.

2. Numerical Problem
Several methods that obtain the values of L0 and κ have been developed, taking advantage of Matlab and the authors' knowledge. Some of them are related to the minimization of a function, where non-linear solvers and unconstrained optimization theory play an important role, while others are based on specific methods specially crafted for this kind of problem. On the following pages, each method is explained in detail.

2.1. Thomas’ Method


The numerical problem posed by the course faculty is to solve the analytic equation (2) with the aim of finding both L0 and κ. From its study, it is deduced that the expression is non-linear in the aforementioned parameters. To alleviate this problem, it has been observed that the first strategies historically employed were graphic methods such as Fujimoto's (Metcalf & Eddy Inc., 1991) or Thomas' (Davis and Cornwell, 1991; McGhee, 1991).

Thomas' method is an analytical alternative to the initial non-linear problem. It is based on a linearization which, after a Taylor series development up to the third-order term, a set of relevant substitutions and some simple algebra, gives:

[t / BOD(t)]^(1/3) = 1 / (κ L0)^(1/3) + [κ^(2/3) / (6 L0^(1/3))] · t = A + B·t    (3)
Expression (3), together with Table 1 of statement number 7, allows an approximation of the sought values using the linear least squares criterion. It was decided to approximate by means of least squares rather than polynomial interpolation, since there are 7 experimental measurements of BOD: the resulting sixth-degree interpolating polynomial would show large oscillations between the different points, a phenomenon known as Runge's phenomenon (Runge, 1901).

The problem posed by equation (3), together with the seven experimental data, reduces to the resolution of a system of equations. In the interpolation setting, the unisolvence theorem guarantees the existence and uniqueness of the interpolating polynomial and specifies a procedural way of finding it; the drawback is that the resulting matrix, the Vandermonde matrix, is ill-conditioned. For this reason a different polynomial basis is used in practice: the Lagrange basis, which can be chosen so that the resulting matrix is the identity. For the least squares fit of (3), the normal equations read:

| n           Σ_{i=1}^n t_i   | |A|   | Σ_{i=1}^n (t_i/BOD_i)^(1/3)       |
|                             | |   | = |                                   |    (4)
| Σ_{i=1}^n t_i  Σ_{i=1}^n t_i² | |B|   | Σ_{i=1}^n (t_i/BOD_i)^(1/3) · t_i |

The approximation of Thomas’ method returns the following values:

A = 1 / (κ L0)^(1/3) ,   B = κ^(2/3) / (6 L0^(1/3))   →   κ = 6B / A = 0.3068 days⁻¹    (5)

L0 = 1 / (κ A³) = 1 / (6 A² B) = 315.9949 mg BOD/l    (6)
𝐴
The solution of this system can be used as an initial approximation for solving the non-linear problem, as is explained in more detail later on. However, since the solution comes from a least squares fit of a linearized equation, it does not inspire enough confidence to be trusted as the final result.
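As an illustration of the procedure above, the Thomas fit of equations (3) to (6) can be sketched as follows. The report's own implementation is in Matlab; this is only a minimal Python/NumPy stand-in (variable names are the authors' of this sketch), with the data taken from Table 1 of the statement:

```python
import numpy as np

# Data from Table 1 of the statement: time (days) and measured BOD (mg/l)
t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])

# Linearized variable of equation (3): (t/BOD)^(1/3) = A + B*t
y = (t / bod) ** (1.0 / 3.0)

# Linear least squares for the coefficients [A, B]
M = np.column_stack([np.ones_like(t), t])
(A, B), *_ = np.linalg.lstsq(M, y, rcond=None)

# Recover the physical parameters, equations (5) and (6)
kappa = 6.0 * B / A            # ≈ 0.3068 1/day
L0 = 1.0 / (6.0 * A**2 * B)    # ≈ 316 mg BOD/l
```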

2.2. Newton’s Method


The previous section shows the methodology for solving the problem from a linear point of view. This can be useful as a first trial: the above values can be used as an initial approximation for methods that do consider the non-linear character of equation (2). Approximate analytical methods could be considered as an alternative to graphical methods, but they are usually not very precise. For this reason, the bibliographic references provided by the course faculty, such as Barnwell (1981) and Cutrera et al. (1999), propose to solve the problem by applying the method of least squares to a function dependent on L0 and κ, which must be minimized to adjust equation (2) to the experimental values of Table 1 of the statement. Therefore, the function to be minimized is:
ε(κ, L0) = Σ_{i=1}^n {BOD_i − L0[1 − exp(−κt_i)]}²    (7)

Recall that n is the total number of experimental observations (t_i, BOD_i). The procedure used to minimize function (7) is the usual one: the partial derivatives with respect to κ and L0 are computed and set equal to zero. After this, a non-linear system of two equations is obtained:

             Σ_{i=1}^n {BOD_i − L0[1 − exp(−κt_i)]} · t_i · exp(−κt_i) = 0
F(κ, L0) =                                                                    (8)
             Σ_{i=1}^n {BOD_i − L0[1 − exp(−κt_i)]} · [1 − exp(−κt_i)] = 0

As is well known, non-linear analysis is much more complex than linear analysis, hence efficient numerical methods with robust iterative solvers are used. There are different non-linear solvers, such as Picard's method, the direct iteration method, etc., but it is Newton's method that has been considered by the authors of the present project. Newton's method is an iterative method whose main properties are the following:

- When the method converges, the convergence is quadratic.
- In each iteration a linear system of equations must be solved, which gives the value of the next iterate.
- It entails a high computational cost, due to the need to evaluate the Jacobian matrix of the function to be minimized at each step. Moreover, the resolution of the aforementioned system requires that the matrix be neither singular nor ill-conditioned. Sometimes, to reduce the high cost of evaluating the partial derivatives of the Jacobian, they are approximated by finite differences or, for instance, the modified Newton's method is used.
- The algorithm that defines the method is easy to implement.
- The method converges towards the local minimum closest to the initial approximation; that is, Newton's method is sensitive to the initial approximation considered.

Taking into account the advantages and disadvantages of Newton's method, the authors have implemented it in Matlab, as can be seen in the code attached to this report. The values obtained with Thomas' method (see equations (5) and (6)) have been considered as the initial approximation. Another way to make a first guess is to take the BOD datum at the largest time and equate it to the value of L0, since L0 can be considered the ultimate BOD. Considering the data in Table 1 of the statement, imposing L0 equal to the BOD at 9.2 days and inserting an earlier data point into equation (2), another possible initial approximation is (note that in the code implemented in Matlab the values of Thomas' method have been kept as the initial approximation):

L0 = BOD(t_n) = BOD(9.2 days) = 285 mg BOD/l    (9)

κ = −(1/t_i) ln(1 − BOD_i/L0) = −(1/2.2) ln(1 − 160/285) = 0.3746 days⁻¹    (10)

In any case, the set of functions f is the starting point of Newton's method, which is summarized in these steps:

f(x) = 0 ;   x⁰ = [κ, L0]^T    (11)

0 ≅ f(x^k) + (∂f/∂x)(x^k) Δx^(k+1)    (12)

Solving the following system and updating:

J(x^k) Δx^(k+1) = −f(x^k)    (13)

x^(k+1) = x^k + Δx^(k+1)    (14)

For the calculation of the Jacobian matrix:


          | ∂F1/∂κ   ∂F1/∂L0 |
J(x^k) =  |                  |    (15)
          | ∂F2/∂κ   ∂F2/∂L0 |

The system is updated until the solution is good enough, which is controlled through the f array by checking whether it is close enough to zero. A code has been developed from these equations; its stopping condition is that the norm of the f vector be lower than a certain tolerance, set at 10⁻⁶. Taking the results of Thomas' method as the starting point, the following values are obtained in only seven iterations:

κ = 0.3309 days⁻¹   L0 = 298.8522 mg BOD/l
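The iteration (11) to (15) can be sketched as follows, approximating the Jacobian by finite differences as mentioned among the properties above. This is an illustrative Python stand-in for the report's Matlab code, under the assumption that F implements system (8) up to constant factors:

```python
import numpy as np

t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])

def F(x):
    """System (8): stationarity conditions of (7), up to constant factors."""
    k, L0 = x
    r = bod - L0 * (1.0 - np.exp(-k * t))   # residuals of the fit
    return np.array([np.sum(r * t * np.exp(-k * t)),
                     np.sum(r * (1.0 - np.exp(-k * t)))])

def jacobian_fd(x, h=1e-7):
    """Jacobian of F approximated by forward finite differences."""
    J = np.zeros((2, 2))
    f0 = F(x)
    for j in range(2):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - f0) / h
    return J

x = np.array([0.3068, 315.9949])   # Thomas' values as initial approximation
for _ in range(50):
    dx = np.linalg.solve(jacobian_fd(x), -F(x))   # linear system (13)
    x = x + dx                                    # update (14)
    if np.linalg.norm(F(x)) < 1e-6:               # stopping condition
        break
kappa, L0 = x
```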

Figure 1 shows the convergence of the error for the parameters κ and L0. As explained previously, it can be observed that for both parameters the convergence of the error is quadratic. The function to be minimized consists of a system of two non-linear equations; therefore, the code implemented in Matlab works with a while loop that does not stop the iterative method until the value of the two functions that make up the aforementioned system is lower than a tolerance of the order of 10⁻⁶. It has been found that for this particular case there are no problems due to a singular Jacobian matrix. Convergence is obtained in seven iterations, provided that the values obtained from the linear least squares fit known as Thomas' method (graphic method) are used as the initial approximation. The authors have ascertained that the method is really sensitive to the given initial approximation, which is a disadvantage, given that iterative methods converge to the local minimum closest to the chosen starting point. At the end of the iterative process, the values of the function to be minimized are of the order of 10⁻¹⁰ and 10⁻¹³, i.e. very close to zero.

Figure 1 - Convergence of L and κ (Newton Method)

2.3. Gauss-Newton’s Method


According to the bibliography consulted by the authors of the present project, the most rigorous approach is to solve the problem considering the non-linearity of the phenomenon; therefore the method of least squares is of great use. As seen in the study of Newton's algorithm, it is based on minimizing the sum of squares of the residuals. The advantage that the Gauss-Newton method incorporates with respect to its counterpart is that the matrix of the linear system solved at each iteration, J^T J, is symmetric and positive definite. Both features can be exploited from a computational point of view. In this section, a comparison is carried out between Newton's method and the Gauss-Newton method in terms of the number of iterations and the final relative error. The algorithm of the Gauss-Newton method is defined as follows:

x⁰ = [κ, L0]^T    (16)
r(x^k) = L0[1 − exp(−κt)] − BOD(t)    (17)
[J(x^k)^T · J(x^k)] s^k = −J(x^k)^T · r(x^k)    (18)
x^(k+1) = x^k + s^k    (19)
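Steps (16) to (19) can be sketched as below, again as an illustrative Python stand-in for the Matlab implementation. Here the Jacobian of the residual vector is written analytically, which follows directly from differentiating (17):

```python
import numpy as np

t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])

def residual(x):
    """Residual vector r(x) of equation (17)."""
    k, L0 = x
    return L0 * (1.0 - np.exp(-k * t)) - bod

def jac_residual(x):
    """Analytic Jacobian of r: columns are dr/dk and dr/dL0."""
    k, L0 = x
    return np.column_stack([L0 * t * np.exp(-k * t),
                            1.0 - np.exp(-k * t)])

x = np.array([0.3068, 315.9949])   # Thomas' values as initial approximation
for _ in range(50):
    J, r = jac_residual(x), residual(x)
    s = np.linalg.solve(J.T @ J, -J.T @ r)   # normal equations (18)
    x = x + s                                 # update (19)
    if np.linalg.norm(s) < 1e-8:
        break
kappa, L0 = x
```

Note that J^T J is symmetric and positive definite, which is the computational advantage discussed above.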


Figure 2 shows the convergence of the error of the Gauss-Newton method for the two parameters analysed, κ and L0. Unlike Newton's method, whose convergence is clearly quadratic, the Gauss-Newton method is in general only linearly convergent; here, however, its error behaviour is very similar to quadratic. This is due to the low importance of the Q matrix in the expression of the Hessian considered by the authors of the present project: since this is an almost zero-residual problem, the contribution of Q is negligible and the convergence of the method is practically quadratic. Analysing the data in detail, it is observed that along the first three iterations the reduction of the error is equal to or greater than quadratic order. Working with the Gauss-Newton method allows the use of a symmetric and positive definite matrix in the implemented algorithm, which translates into a reduction of two iterations with respect to Newton's method: the Gauss-Newton method achieves the solution for a tolerance of 10⁻⁶ in five iterations, whereas Newton's method needs seven.

Figure 2 - Convergence of L and κ (Gauss-Newton Method)

2.4. Barnwell’s Method


As explained in the introduction to this section, in addition to working with various methods of direct resolution of the non-linear system of equations, an approximate analytical method proposed by Metcalf & Eddy Inc. (1991) has also been incorporated as an alternative. This method transforms the problem of solving the non-linear system of equations into a one-dimensional root-finding problem. To achieve this, Barnwell proposes the following equations:
Σ_{i=1}^n a_i · BOD_i − L0 Σ_{i=1}^n a_i · b_i = 0    (20)

Σ_{i=1}^n b_i · BOD_i − L0 Σ_{i=1}^n b_i² = 0    (21)

where

𝑎𝑗 = 𝑡𝑗 · exp(−𝜅𝑡𝑗 ) (22)

𝑏𝑗 = [1 − exp(−𝜅𝑡𝑗 )] (23)

Isolating the parameter L0 from equations (20) and (21) and equating the two resulting expressions, the parameter κ can be found as a root of the non-linear function presented below:

F(κ) = [Σ_{i=1}^n b_i · BOD_i] · [Σ_{i=1}^n b_i²]⁻¹ − [Σ_{i=1}^n a_i · BOD_i] · [Σ_{i=1}^n a_i · b_i]⁻¹ = 0    (24)

The objective of this section is to show the differences between Barnwell's approximate analytical method and Newton's method. For Barnwell's method, an initial value of κ equal to the one obtained by Thomas' method (linear least squares) is needed. Comparing the values obtained by executing this method with those of Newton's method started from Thomas' approximation, the resulting values are equal up to the tenth decimal place.
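The root-finding formulation above can be sketched as follows. The report uses Matlab's fzero; this illustrative Python version replaces it with a simple scan-and-bisect root finder (an assumption of this sketch, not the report's implementation), and recovers L0 from equation (21):

```python
import numpy as np

t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])

def F(k):
    """One-dimensional function (24) whose root is the sought kappa."""
    a = t * np.exp(-k * t)        # equation (22)
    b = 1.0 - np.exp(-k * t)      # equation (23)
    return (np.sum(b * bod) / np.sum(b**2)
            - np.sum(a * bod) / np.sum(a * b))

# Bracket the root by scanning, then bisect (stand-in for Matlab's fzero)
grid = np.linspace(0.05, 1.0, 200)
vals = [F(k) for k in grid]
i = next(j for j in range(len(grid) - 1) if vals[j] * vals[j + 1] < 0)
lo, hi = grid[i], grid[i + 1]
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid
kappa = 0.5 * (lo + hi)

# Recover L0 from equation (21)
b = 1.0 - np.exp(-kappa * t)
L0 = np.sum(b * bod) / np.sum(b**2)
```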

For the root finding required by Barnwell's method, the built-in Matlab function fzero was used, which combines bisection with interpolation-based methods. If Newton's method is compared with Barnwell's approximation, it can be seen that the computational cost of Barnwell's method is lower, because it does not need to calculate the Jacobian matrix. Both methods are sensitive to the initial approximation they work with.

When working with Barnwell's method, the possible existence of multiple roots must be taken into account, since in those cases the bisection method performs poorly: the function has the same sign on both sides of the root. It has been found that, for a hypothetical case in which the value of κ were equal to one thousand, the function takes negative values on both sides of the searched root.

Barnwell's method could be modified by considering another root-finding method. For example, one could work with Newton's method, which would converge quadratically, unlike bisection, which converges linearly. The disadvantage of Newton's method is that the function to be treated must be continuous and differentiable in the study domain. Another aspect to take into account is that it can be sensitive to the initial value, since near inflection points, or where the function has large slopes in the vicinity of the root, the likelihood that the algorithm diverges increases significantly.

2.5. Unconstrained optimization


The last of the methods used is a built-in Matlab function that works by means of unconstrained optimization. There are several similarities between the methods for solving non-linear systems of equations and unconstrained optimization. In unconstrained optimization a function must be minimized, so its gradient must equal zero; in non-linear systems of equations, the function itself is set equal to zero. Another connection is that the Jacobian of the gradient of the function to be minimized is its Hessian matrix, which at a minimum is, in addition, a positive definite matrix.
Observing the error function (7), it has been noted that it can be minimized taking advantage of some Matlab tools. The goal is to obtain the minimum value of ε, which returns a local minimum pair (κ, L0). The Matlab tool used is the built-in function fminsearch, whose inputs are the error function and the initial approximation. Using Thomas' values as the initial approximation, the obtained values are:

κ = 0.3309 days⁻¹   L0 = 298.8852 mg BOD/l
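This last approach can be sketched with SciPy's Nelder-Mead minimizer, the closest common analogue of Matlab's fminsearch (which implements the Nelder-Mead simplex method). The snippet below is only an illustrative stand-in, with tolerances chosen by the authors of this sketch:

```python
import numpy as np
from scipy.optimize import minimize

t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])

def error(x):
    """Error function (7) to be minimized."""
    k, L0 = x
    return np.sum((bod - L0 * (1.0 - np.exp(-k * t)))**2)

# Nelder-Mead: derivative-free simplex method, as in fminsearch
res = minimize(error, x0=[0.3068, 315.9949], method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8,
                        'maxiter': 2000, 'maxfev': 4000})
kappa, L0 = res.x
```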

3. Discussion of results
As can be seen in the previous section, starting from the values yielded by Thomas' method it is possible to reach the same results with every other method, so Thomas' values should not be taken seriously as a final solution for L0 and κ. This does not take any merit away from them: they are an excellent starting point for the solution sought by the other methods.
Once this has been said, it may seem that no discussion of the results is needed, since the results obtained by every method are the same and, more importantly, consistent with the theoretical background of the problem, as can be observed in Figure 3. Accordingly, using the yielded values for L0 and κ, the BOD has been evaluated at the times where initial data were provided and compared with those initial values; the resulting error is of the order of 10⁻², which shows that the results are consistent with the procedure.
Time (days) | BOD (data values) | BOD (yielded values) | Relative error
    0.2     |        20         |       19.13936       |    0.04303
    1.2     |        90         |       97.94826       |    0.08831
    2.2     |       160         |      154.55542       |    0.03403
    3.2     |       200         |      195.21545       |    0.02392
    4.2     |       220         |      224.42089       |    0.02009
    6.2     |       260         |      260.46671       |    0.0018
    9.2     |       285         |      284.64789       |    0.00124
Table 1 - Comparison of initial data and final results
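The comparison in the table can be regenerated directly from the converged parameters; a minimal Python sketch (the report's computation was done in Matlab):

```python
import numpy as np

t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])

kappa, L0 = 0.3309, 298.8522   # converged parameters from Newton's method

# Evaluate the model (2) at the measurement times and compare with the data
fitted = L0 * (1.0 - np.exp(-kappa * t))
rel_err = np.abs(bod - fitted) / bod   # errors of the order of 1e-2
```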


Once it is clear that the results are correct, some further reflections have been made, as shown in the following pages.

3.1. Initial approximations and models’ goodness


First of all, let us explore a case where no Thomas approximation is available. Most of the methods need an initial approximation of L0 and κ. If there is no clue about that initial approximation, will any value lead to the solution? Using the aforementioned theoretical background, it can be seen that L0 should tend to about 300 (as noticed in Figure 3), but there is no clue about κ. This observation is not useful for Barnwell's method, since the only value it needs as an initial approximation is κ.

Figure 3 – Comparison between function with final values and initial data

The analysis starts by comparing the results obtained for different initial approximations where L0 is fixed at 300 (in the methods where L0 is required as an initial approximation), sweeping the initial κ between 0 and 100:

It can be seen that convergence is ensured almost always for the unconstrained optimization method, that Barnwell obtains a good approximation up to initial values of κ around 40, and that Newton and Gauss-Newton have a poor performance as soon as the initial guess moves a little further away from the sought κ.

The criterion used to distinguish a good solution from a bad one is a comparison between the exact solution, which is known for each method by taking the value from Thomas' method as the initial approximation, and the solution obtained for each initial approximation of κ. If the ratio between these two values is not contained in the interval [0.99, 1.01], the obtained solution is considered bad; otherwise it is considered good.

Figure 4 - Comparison of different methods results according to its initial κ value, L0 fixed at 300
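The good/bad classification described above can be sketched, for one of the methods, as follows. This is an illustrative Python version (using SciPy's Nelder-Mead as the fminsearch analogue) over a reduced κ sweep; the grid, helper names and reference values are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import minimize

t = np.array([0.2, 1.2, 2.2, 3.2, 4.2, 6.2, 9.2])
bod = np.array([20.0, 90.0, 160.0, 200.0, 220.0, 260.0, 285.0])
k_ref, L_ref = 0.3309, 298.8852   # reference solution (Thomas start)

def error(x):
    k, L0 = x
    return np.sum((bod - L0 * (1.0 - np.exp(-k * t)))**2)

def is_good(k0, L0_init=300.0):
    """Run the solver from (k0, 300) and apply the [0.99, 1.01] ratio test."""
    res = minimize(error, x0=[k0, L0_init], method='Nelder-Mead',
                   options={'maxiter': 2000, 'maxfev': 4000})
    k, L = res.x
    return 0.99 < k / k_ref < 1.01 and 0.99 < L / L_ref < 1.01

# Sweep a (reduced) range of initial kappa values with L0 fixed at 300
good = [k0 for k0 in np.linspace(0.1, 5.0, 25) if is_good(k0)]
```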

On the other hand, if the results of the Newton, Gauss-Newton and unconstrained optimization methods are compared for arbitrary initial values of L0 and κ, the following is obtained:


Figure 5 – Results of Newton Method on sparse-view according to different initial values for κ and L0 (interest zone marked).

Figure 6 – Results of Gauss Newton Method on sparse-view according to different initial values for κ and L0 (interest zone marked).

Figure 7 – Results of Unconstr. Optim. method on sparse-view according to different initial values for κ and L0 (interest zone marked).

The previous figures show that unconstrained optimization converges over almost the whole explored region, except for some values at low L0 and very few at high L0 and κ. On the other hand, the Newton and Gauss-Newton methods have a small region of convergence; even close to the solution, convergence is not ensured. Gauss-Newton performs better, but for high values of κ convergence is not ensured even when L0 is close to the solution. In any case, the convergence region of Newton's method appears to follow an exponential-like curve, a fact that is as interesting as it is unexplained, to the best of the authors' knowledge. The same happens with the shape of the Gauss-Newton convergence region, where the degree of convergence in the top-right corner of the graph is surprising.

The criterion used to distinguish a good solution from a bad one is again a comparison between the exact solution, known for each method by taking Thomas' values as the initial approximation, and the solution obtained for each initial pair (L0, κ). The norm of each pair of results is computed and compared; if the ratio between the two norms is not contained in the interval [0.99, 1.01], the obtained solution is considered bad, otherwise good.

3.2. Computational cost


Having considered convergence, it must also be examined whether each method takes the same time to solve the problem.
Computational cost is a key issue as well and must be studied. In this assignment a single pair of values is requested but, from a wider perspective, where these methods may be used on larger domains for bigger problems, computational cost becomes important. In fact, during the Matlab computations of this assignment, the importance of computational cost became evident while coding and waiting for the results of section 3.1, where many calls to the functions were performed and running times reached up to half an hour.

The way to study this issue is to use the tic-toc function in Matlab. The conditions are the same for all methods: starting from Thomas' solution, the exact solution is computed as in the previous sections. The results are shown in the next table:

Method                     | Computing time (s)
Barnwell                   | 0.00104
Unconstrained Optimization | 0.00186
Newton                     | 0.01119
Gauss Newton               | 0.00861
Table 2 - Results for the computational time cost of each method.

4. Conclusions
Along the project it has been shown that the results are coherent: the four used methods tend to the same value. This
value is also logical taking into account the provided data: these values follow a horizontally asymptotic pattern, where
the asymptote value has to be L0. This is shown in Figure 3 where we can see that the function with the final values of
Lo and κ fits with the experimental data obtained, so we can conclude that these values are accurate.

Once it has been shown that the methods perform correctly, the next step was to explore which one performs best. Taking into account the results shown in sections 3.1 and 3.2, Newton's method should be discarded first due to its overall lack of convergence and its high computational cost compared with the other options. Although Gauss-Newton has a slightly better performance, the same arguments apply for discarding it. A decision then remains between Barnwell and unconstrained optimization, and it depends on the knowledge of the problem: unconstrained optimization has a better convergence level, even if Barnwell's convergence is not bad at all. When there is no idea of an approximate solution value, unconstrained optimization is a good choice, even if its computational time almost doubles Barnwell's. A further benefit of Barnwell's method is that only one variable is needed for the initial approximation in a two-variable problem.

This discussion, given that the provided data allow a fair enough initial approximation, leads to the election of Barnwell's method as the most convenient one. However, unconstrained optimization (the fminsearch function) works well enough and ensures convergence in a bigger initial approximation space.

The methods proposed in this project allowed the team to get closer to the treatment of experimental data and to the estimation of the parameters that control the evolution of any sample's BOD.

5. References
- Davis, M.L. & Cornwell, D.A. (1991). Introduction to environmental engineering. Ed. McGraw-Hill. New York.
- Horan, N.J. (1993). Biological wastewater treatment systems. Theory and operation. Ed. John Wiley & Sons.
Chichester (UK).
- Laboratori de Càlcul Numèric (LaCàN). (2018). Computational Engineering Course Notes. Departament de
Matemàtica Aplicada III. Universitat Politècnica de Catalunya. Barcelona
- Mathworks (2015). Matlab 2015b help.
- McGhee, T.J. (1991). Water supply and sewerage. Ed. McGraw-Hill. New York.
- Metcalf & Eddy Inc. (1991). Wastewater engineering. Treatment, disposal and reuse. (Revised by
Thobanoglous, G. & Burton, F.L.). Ed. McGraw-Hill. New York.
- Rodríguez, M. (1998). "Demanda Bioquímica de Oxígeno de Efluentes con Productos Xenobióticos". Ingeniería del agua 5(4): 47-54.
- Snoeyink, V.L. & Jenkins, D. (2008). Water chemistry. Ed. John Wiley & Sons.
- "Caracterización de aguas residuales por DBO y DQO". Ingeniería de Tratamiento de Aguas Residuales: 1-7.

Determination of BOD’s Parameters
Giovanni Araujo Roso, Joaquín González Usúa, Marc Reina Redondo & Jaime Sierra Muñoz
Màster d’Enginyeria de Camins, Canals i Ports | ETSECCPB | UPC
