You are on page 1of 109

Econometric Estimation of the “Constant Elasticity

of Substitution” Function in R: Package micEconCES
Arne Henningsen


eraldine Henningsen∗

Institute of Food and
Resource Economics
University of Copenhagen

Risø National Laboratory
for Sustainable Energy
Technical University of Denmark

Abstract
The Constant Elasticity of Substitution (CES) function is popular in several areas of
economics, but it is rarely used in econometric analysis because it cannot be estimated by
standard linear regression techniques. We discuss several existing approaches and propose
a new grid-search approach for estimating the traditional CES function with two inputs
as well as nested CES functions with three and four inputs. Furthermore, we demonstrate
how these approaches can be applied in R using the add-on package micEconCES and
we describe how the various estimation approaches are implemented in the micEconCES
package. Finally, we illustrate the usage of this package by replicating some estimations
of CES functions that are reported in the literature.

Keywords: constant elasticity of substitution, CES, nested CES, R.

Preface
This introduction to the econometric estimation of Constant Elasticity of Substitution (CES)
functions using the R package micEconCES is a slightly modified version of Henningsen and
Henningsen (2011a).

1. Introduction
The so-called Cobb-Douglas function (Douglas and Cobb 1928) is the most widely used functional form in economics. However, it imposes strong assumptions on the underlying functional relationship, most notably that the elasticity of substitution1 is always one. Given these
restrictive assumptions, the Stanford group around Arrow, Chenery, Minhas, and Solow (1961)
developed the Constant Elasticity of Substitution (CES) function as a generalisation of the
Cobb-Douglas function that allows for any (non-negative constant) elasticity of substitution.
This functional form has become very popular in programming models (e.g., general equilib∗

Senior authorship is shared.
For instance, in production economics, the elasticity of substitution measures the substitutability between
inputs. It has non-negative values, where an elasticity of substitution of zero indicates that no substitution
is possible (e.g., between wheels and frames in the production of bikes) and an elasticity of substitution of
infinity indicates that the inputs are perfect substitutes (e.g., electricity from two different power plants).
1

2

Econometric Estimation of the Constant Elasticity of Substitution Function in R

rium models or trade models), but it has been rarely used in econometric analysis. Hence, the
parameters of the CES functions used in programming models are mostly guesstimated and
calibrated, rather than econometrically estimated. However, in recent years, the CES function has gained in importance also in econometric analyses, particularly in macroeconomics
(e.g., Amras 2004; Bentolila and Gilles 2006) and growth theory (e.g., Caselli 2005; Caselli
and Coleman 2006; Klump and Papageorgiou 2008), where it replaces the Cobb-Douglas function.2 The CES functional form is also frequently used in micro-macro models, i.e., a new
type of model that links microeconomic models of consumers and producers with an overall
macroeconomic model (see for example Davies 2009). Given the increasing use of the CES
function in econometric analysis and the importance of using sound parameters in economic
programming models, there is definitely demand for software that facilitates the econometric
estimation of the CES function.
The R package micEconCES (Henningsen and Henningsen 2011b) provides this functionality.
It is developed as part of the “micEcon” project on R-Forge (http://r-forge.r-project.
org/projects/micecon/). Stable versions of the micEconCES package are available for download from the Comprehensive R Archive Network (CRAN, http://CRAN.R-Project.org/
package=micEconCES).
The paper is structured as follows. In the next section, we describe the classical CES function
and the most important generalisations that can account for more than two independent
variables. Then, we discuss several approaches to estimate these CES functions and show
how they can be applied in R. The fourth section describes the implementation of these
methods in the R package micEconCES, whilst the fifth section demonstrates the usage of
this package by replicating estimations of CES functions that are reported in the literature.
Finally, the last section concludes.

2. Specification of the CES function
The formal specification of a CES production function3 with two inputs is 
− ν 

ρ
−ρ
y = γ δx−ρ
+
(1

δ)
x
,
1
2

(1)

where y is the output quantity, x1 and x2 are the input quantities, and γ, δ, ρ, and ν are
parameters. Parameter γ ∈ [0, ∞) determines the productivity, δ ∈ [0, 1] determines the
optimal distribution of the inputs, ρ ∈ [−1, 0) ∪ (0, ∞) determines the (constant) elasticity of
substitution, which is σ = 1 /(1 + ρ) , and ν ∈ [0, ∞) is equal to the elasticity of scale.4
The CES function includes three special cases: for ρ → 0, σ approaches 1 and the CES turns
to the Cobb-Douglas form; for ρ → ∞, σ approaches 0 and the CES turns to the Leontief
2

The Journal of Macroeconomics even published an entire special issue titled “The CES Production Function in the Theory and Empirics of Economic Growth” (Klump and Papageorgiou 2008).
3
The CES functional form can be used to model different economic relationships (e.g., as production
function, cost function, or utility function). However, as the CES functional form is mostly used to model
production technologies, we name the dependent (left-hand side) variable “output” and the independent (righthand side) variables “inputs” to keep the notation simple.
4
Originally, the CES function of Arrow et al. (1961) could only model constant returns to scale, but later
Kmenta (1967) added the parameter ν, which allows for decreasing or increasing returns to scale if ν < 1 or
ν > 1, respectively.

Arne Henningsen, G´eraldine Henningsen

3

production function; and for ρ → −1, σ approaches infinity and the CES turns to a linear
function if ν is equal to 1.
As the CES function is non-linear in parameters and cannot be linearised analytically, it is
not possible to estimate it with the usual linear estimation techniques. Therefore, the CES
function is often approximated by the so-called “Kmenta approximation” (Kmenta 1967),
which can be estimated by linear estimation techniques. Alternatively, it can be estimated
by non-linear least-squares using different optimisation algorithms.
To overcome the limitation of two input factors, CES functions for multiple inputs have been
proposed. One problem of the elasticity of substitution for models with more than two inputs
is that the literature provides three popular, but different definitions (see e.g. Chambers
1988): While the Hicks-McFadden elasticity of substitution (also known as direct elasticity of
substitution) describes the input substitutability of two inputs i and j along an isoquant given
that all other inputs are constant, the Allen-Uzawa elasticity of substitution (also known as
Allen partial elasticity of substitution) and the Morishima elasticity of substitution describe
the input substitutability of two inputs when all other input quantities are allowed to adjust.
The only functional form in which all three elasticities of substitution are constant is the plain
n-input CES function (Blackorby and Russel 1989), which has the following specification:

y=γ

n
X

!− ν

ρ

δi x−ρ
i

(2)

i=1

with

n
X

δi = 1,

i=1

where n is the number of inputs and x1 , . . . , xn are the quantities of the n inputs. Several scholars have tried to extend the Kmenta approximation to the n-input case, but Hoff
(2004) showed that a correctly specified extension to the n-input case requires non-linear parameter restrictions on a Translog function. Hence, there is little gain in using the Kmenta
approximation in the n-input case.
The plain n-input CES function assumes that the elasticities of substitution between any two
inputs are the same. As this is highly undesirable for empirical applications, multiple-input
CES functions that allow for different (constant) elasticities of substitution between different
pairs of inputs have been proposed. For instance, the functional form proposed by Uzawa
(1962) has constant Allen-Uzawa elasticities of substitution and the functional form proposed
by McFadden (1963) has constant Hicks-McFadden elasticities of substitution.
However, the n-input CES functions proposed by Uzawa (1962) and McFadden (1963) impose
rather strict conditions on the values for the elasticities of substitution and thus, are less useful
for empirical applications (Sato 1967, p. 202). Therefore, Sato (1967) proposed a family of
two-level nested CES functions. The basic idea of nesting CES functions is to have two or
more levels of CES functions, where each of the inputs of an upper-level CES function might
be replaced by the dependent variable of a lower-level CES function. Particularly, the nested
CES functions for three and four inputs based on Sato (1967) have become popular in recent
years. These functions increased in popularity especially in the field of macro-econometrics,
where input factors needed further differentiation, e.g., issues such as Grilliches’ capital-skill
complementarity (Griliches 1969) or wage differentiation between skilled and unskilled labour
(e.g., Acemoglu 1998; Krusell, Ohanian, R´ıos-Rull, and Violante 2000; Pandey 2008).

but has no effect on the functional form. i = 1. δ2 = (1 − δ1 ) δ . the three-input nested CES function defined in Equation 4 reduces to the plain three-input CES function defined in Equation 2. the final specification of the four-input nested CES function is as follows:    ρ/ρ1 ρ/ρ2 −ν/ρ −ρ1 −ρ1 −ρ2 −ρ2 y = γ δ δ1 x1 + (1 − δ1 )x2 + (1 − δ) δ2 x3 + (1 − δ2 )x4 . Hence. normalising γ1 to one changes γ to γ δγ1−ρ + (1 − δ) and changes δ to   δγ1−ρ δγ1−ρ + (1 − δ) . As the nesting structure is theoretically arbitrary. because without these normalisations. In these lower-level CES functions. δ2 = δ3 /(δ3 + δ4 ). indicates the 2i−1 + (1 − δi ) x2i two lower-level CES functions. the parameters of the four-input nested CES function defined in Equation 3 (indicated by the superscript n) and the parameters of the plain four-input CES function defined in Equation 2 (indicated p p n n n n n n by the superscript p) correspond in the following way: where ρp = ρn 1 = ρ2 = ρ . However. nested CES functions are not invariant to the nesting structure and different nesting structures imply different assumptions about the separability between inputs (Sato 1967). labour. 6 Papageorgiou and Saam (2005) proposed a specification that includes the additional term γ1−ρ : h i−ν/ρ ρ/ρ1 1 1 y = γ δγ1−ρ δ1 x−ρ + (1 − δ1 )x−ρ + (1 − δ)x−ρ . Kemfert (1998) used this specification for analysing the substitutability between capital.4 Econometric Estimation of the Constant Elasticity of Substitution Function in R The nested CES function for four inputs as proposed by Sato (1967) nests two lower-level (two-input) CES functions into an upper-level (two-input) CES function: y = γ[δ CES1 +  −νi /ρi −ρi i (1 − δ) CES2 ]−ν/ρ . 2.5 In the case of the three-input nested CES function. δ1 = δ1 δ . p p p p n p n n n δ3 = 1 − δ . . δ1 = δ1 /(1 − δ3 ). δ1 = δ1 δ .7 The nesting of the CES function increases its flexibility and makes it an attractive choice for many applications in economic theory and empirical work. the parameters of the three-input nested CES function defined in Equation 4 (indicated by the superscript n) and the parameters of the plain three-input CES function defined in Equation 2 (indicated p p n n n n n by the superscript p) correspond in the following way: where ρp = ρn 1 = ρ . and energy. only one input of the upper-level CES function is further differentiated:6   −ν/ρ ρ/ρ1 −ρ1 −ρ1 −ρ y = γ δ δ1 x1 + (1 − δ1 )x2 + (1 − δ)x3 . γ = γ . If ρ1 = ρ. the four-input nested CES function defined in Equation 3 reduces to the plain four-input CES function defined in Equation 2. we (arbitrarily) normalise coefficients γi and νi to one. 7 In this case. adding the term γ1−ρ does not increase the flexibility of this function as γ1 can be arbitrar−(ν/ρ) ily normalised to one. δ1 = δ1 /(δ1 + δ2 ). an infinite number of vectors of non-normalised coefficients exists that all result in the same output quantity. not all coefficients of the (entire) nested CES function can be identified in econometric estimations. (4) For instance. respectively. and x3 capital. x1 and x2 could be skilled and unskilled labour. γ1 . and δ = 1 − δ3 . p p p p p p p p p p n n n n p n n n n δ3 = δ2 (1 − δ ). Hence. (3) If ρ1 = ρ2 = ρ. where CESi = γi δi x−ρ . and δ = δ1 + δ2 . and δ cannot be (jointly) identified in econometric estimations (see also explanation for the four-input nested CES function above Equation 3). 1 2 3 However. δ4 = (1 − δ2 ) (1 − δ ). 5 In this case. the parameters γ. given an arbitrary vector of input quantities (see Footnote 6 for an example). Alternatively. δ2 = (1 − δ1 ) δ . γ = γ . the selection depends on the researcher’s choice and should be based on empirical considerations.

and seventh commands use the function cesCalc.Arne Henningsen.5.seed( 123 ) cesData <.1. The second line creates a data set with four input variables (called x1. nu = 1. x2.1). Estimation of the CES production function Tools for economic analysis with CES function are available in the R package micEconCES (Henningsen and Henningsen 2011b). to calculate the deterministic output variables for the CES functions with two.6. 10). coef = c(gamma = 1. y3.cesData$y4 + 1. Hence. and x4) that each have 200 observations and are generated from random χ2 distributions with 10 degrees of freedom. three. for the three-input nested CES function.cesData$y2 + 2. delta_2 = 0. δ1 = 0. In the following section. G´eraldine Henningsen 5 The formulas for calculating the Hicks-McFadden and Allen-Uzawa elasticities of substitution for the three-input and four-input nested CES functions are given in Appendices B.1 ) ) cesData$y2 <.e.cesCalc(xNames = c("x1". and . nested = TRUE ) cesData$y4 <. delta = 0. rho_1 = 0. "x4"). ρ = 0. nested CES functions cannot be easily linearised. The third.frame(x1 = rchisq(200. rho_1 = 0. and ν = 1. R> R> + R> + R> R> + + R> R> + + R> set. data = cesData. they have to be estimated by applying non-linear optimisation methods. rho = 0.cesCalc(xNames = c("x1". delta_1 = 0. coef = c( gamma = 1.3 and C. δ = 0.cesCalc( xNames = c( "x1". respectively) given a CES production function. If this package is installed. which is included in the micEconCES package.1). respectively. we use an artificial data set cesData.5. coef = c( gamma = 1. because this avoids several problems that usually occur with real-world data. "x3".5. For this.3.3.5. Like in the plain n-input case. 10). x3 = rchisq(200.5 * rnorm(200) cesData$y4 <.1. Anderson and Moroney (1994) showed for n-input nested CES functions that the Hicks-McFadden and Allen-Uzawa elasticities of substitution are only identical if the nested technologies are all of the Cobb-Douglas form.6.7. 10). 10) ) cesData$y2 <.6. nested = TRUE ) cesData$y3 <. x2 = rchisq(200.5. and four inputs (called y2.7. rho = 0. nu = 1. x3. fifth. nu = 1. "x2". x4 = rchisq(200.6.4. and y4. we will present different approaches to estimate the classical two-input CES function as well as n-input nested CES functions using the R package micEconCES. it can be loaded with the command R> library( "micEconCES" ) We demonstrate the usage of this package by estimating a classical two-input CES function as well as nested CES functions with three and four inputs.. "x2".5 * rnorm(200) The first line sets the “seed” for the random number generator so that these examples can be replicated with exactly the same data set. For the two-input CES function. ρ1 = 0. we use the coefficients γ = 1.3.6.cesData$y3 + 1.3. "x2" ).5 * rnorm( 200 ) cesData$y3 <. delta_1 = 0. data = cesData. ρ = 0. delta = 0. delta = 0. we use γ = 1. "x3"). 3. data = cesData. rho = 0.7.data. δ = 0. rho_2 = 0. and ν = 1. ρ1 = ρ2 = ρ = 0 in the four-input nested CES function and ρ1 = ρ = 0 in the three-input nested CES function.5. i.

delta) * x2^(-rho))^(-phi/rho) data: cesData gamma delta rho phi 1. phi = 1 ) ) R> print( cesNls ) Nonlinear regression model model: y2 ~ gamma * (delta * x1^(-rho) + (1 .5. it does not perform well in many applications with real data. delta = 0.5420 1.6. 180)—first-order Taylor series expansion in the remainder of this paper. ρ = 0. The fourth. and eighth commands generate the stochastic output variables by adding normally distributed random errors to the deterministic output variable. As the CES function is non-linear in its parameters. which performs non-linear least-squares estimations. δ1 = 0. As the authors consider the latter approach to be more straight-forward. Kmenta approximation Given that non-linear estimation methods are often troublesome—particularly during the 1960s and 1970s when computing power was very limited—Kmenta (1967) derived an approximation of the classical two-input CES production function that could be estimated by ordinary least-squares techniques.5. p. δ = 0.0239 0.3. start = c( gamma = 0. ρ1 = 0. δ2 = 0. R> cesNls <.5. we use γ = 1.217e-06 While the nls routine works well in this ideal artificial example.4. The Kmenta approximation can also be written as a restricted translog function (Hoff 2004): ln y =α0 + α1 ln x1 + α2 ln x2 (6) . convergence to a local minimum. ln y = ln γ + ν δ ln x1 + ν (1 − δ) ln x2 ρν − δ (1 − δ) (ln x1 − ln x2 )2 2 (5) While Kmenta (1967) obtained this formula bylogarithmising the CES function and applying −ρ a second-order Taylor series expansion to ln δx−ρ at the point ρ = 0.1. rho = 0.0858 residual sum-of-squares: 1197 Number of iterations to convergence: 6 Achieved convergence tolerance: 8. Therefore. we show alternative ways of estimating the CES function in the following sections. or theoretically unreasonable parameter estimates.delta) * x2^(-rho) )^(-phi / rho + data = cesData. ρ2 = 0. and ν = 1.1. sixth. the 1 + (1 − δ)x2 same formula can be obtained by applying a first-order Taylor series expansion to the entire logarithmised CES function at the point ρ = 0 (Uebe 2000).6222 0.nls( y2 ~ gamma * ( delta * x1^(-rho) + (1 .5. either because of non-convergence. the most straightforward way to estimate the CES function in R would be to use nls.7.6 Econometric Estimation of the Constant Elasticity of Substitution Function in R for the four-input nested CES function. the Kmenta approximation is called—in contrast to Kmenta (1967. 3.25.

it (a) estimates an unrestricted translog function (6). a simple t-test for the coefficient β12 = −β11 = −β22 can be used to check if the Cobb-Douglas functional form is an acceptable simplification of the Kmenta approximation of the CES function.cesEst( yName = "y2". data = cesData. The following code estimates a CES function with the dependent variable y2 (specified in argument yName) and the two explanatory variables x1 and x2 (argument xNames). + method = "Kmenta". but only with its linear approximation. R> cesKmenta <. "x2" ). (c) estimates the restricted translog function (6. If argument method of this function is set to "Kmenta". 2 2 where the two restrictions are β12 = −β11 = −β22 . These restrictions can be utilised to test whether the linear Kmenta approximation of the CES function (5) is an acceptable simplification of the translog functional form. vrs = TRUE ) Summary results can be obtained by applying the summary method to the returned object.9 The parameters of the CES function can be calculated from the parameters of the restricted translog function by: γ = exp(α0 ) ν = α1 + α2 α1 δ= α1 + α2 β12 (α1 + α2 ) ρ= α1 · α2 (9) (10) (11) (12) The Kmenta approximation of the CES function can be estimated by the function cesEst. 9 Note that this test does not compare the Cobb-Douglas function with the (non-linear) CES function. (7) If constant returns to scale are to be imposed. . (d) calculates the parameters of the CES function using Equations 9–12 as well as their covariance matrix using the delta method. G´eraldine Henningsen + 7 1 1 β11 (ln x1 )2 + β22 (ln x2 )2 + β12 ln x1 ln x2 . xNames = c( "x1". 7).8 If this is the case. or whether the non-linear CES function can be approximated by the Kmenta approximation. which is included in the micEconCES package.Arne Henningsen. all taken from the artificial data set cesData that we generated above (argument data) using the Kmenta approximation (argument method) and allowing for variable returns to scale (argument vrs). (b) carries out a Wald test of the parameter restrictions defined in Equation 7 and eventually also in Equation 8 using the (finite sample) F -statistic. a third restriction α1 + α2 = 1 (8) must be enforced. and finally. R> summary( cesKmenta ) 8 Note that this test does not check whether the non-linear CES function (1) is an acceptable simplification of the translog functional form.

05 '.2085723 2.2126387 0.498807 Multiple R-squared: 0.091 0.01 '*' 0. Error t value Pr(>|t|) E_1_2 (all) 0.13442 0. To see whether the underlying technology is of the Cobb-Douglas form.' 0.7728315 0. xNames = c("x1". we can obtain summary information on the estimated coefficients of the Kmenta approximation by: R> coef( summary( cesKmenta$kmenta ) ) Estimate Std.41286 2.14738 6.001 '**' 0.836951e-02 eq1_b_1_2 0.' 0.05785460 13.86321 0.2085723 2.001 '**' 0.3581693 0.910 < 2e-16 *** rho 0.8 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".0365 * nu 1. data = cesData.1072116 0.07308 15.513 6.16406442 -0.3615885 0. "x2").05 '.2126387 0.09627881 -2.6534723 5. we can conclude that the underlying technology is not of the Cobb-Douglas form.836951e-02 eq1_b_2_2 -0.2085723 2.3896839 1.000000e+00 eq1_a_2 0.523 < 2e-16 *** --Signif.7548401 Elasticity of Substitution: Estimate Std.04029 16.1 ' ' 1 The Wald test indicates that the restrictions on the Translog function implied by the Kmenta approximation cannot be rejected at any reasonable significance level.01 '*' 0.09627881 2. codes: 0 '***' 0.1 ' ' 1 Residual standard error: 2. Error t value Pr(>|t|) eq1_(Intercept) -0. vrs = TRUE.89834 0. we can check .836951e-02 Given that β12 = −β11 = −β22 significantly differs from zero at the 5% level. Error t value Pr(>|t|) gamma 0. As the estimation of the Kmenta approximation is stored in component kmenta of the object returned by cesEst. method = "Kmenta") Estimation by the linear Kmenta approximation Test of the null hypothesis that the restrictions of the Translog function required by the Kmenta approximation are true: P-value = 0.39e-06 *** --Signif. we can check if the coefficient β12 = −β11 = −β22 significantly differs from zero. Alternatively.2269042 Coefficients: Estimate Std.68126 0.05658941 6.5367 0.095 1.195467e-09 eq1_b_1_1 -0.09627881 -2.1189 4.09e-09 *** delta 0. codes: 0 '***' 0.2126387 0.142216e-01 eq1_a_1 0.

results for γ and ρ are estimated with generally considerable bias and MSE (Thursby and Lovell 1978. if the underlying CES function is of the Cobb-Douglas form. This iterative . the Levenberg-Marquardt algorithm (Marquardt 1963) was most commonly used for estimating the parameters of the CES function by non-linear least-squares. Second. R> library( "miscTools" ) R> compPlot ( cesData$y2.2. Gradient-based optimisation algorithms Levenberg-Marquardt Initially. This should—as in our case—deliver similar results (see above). This is a major drawback of the Kmenta approximation as its purpose is to facilitate the estimation of functions with non-unitary σ. Finally.. Maddala and Kadane (1967) and Thursby and Lovell (1978) find estimates for ν and δ with small bias and mean squared error (MSE). which is calculated from the coefficients of the Kmenta approximation. G´eraldine Henningsen 9 if the parameter ρ of the CES function. i. xlab = "actual values". significantly differs from zero. σ → 1 which increases the convergence region. the Kmenta approximation encounters several problems.Arne Henningsen. Thursby 1980). it is a truncated Taylor series and the remainder term must be seen as an omitted variable. and thus. + ylab = "fitted values" ) Figure 1 shows that the parameters produce reasonable fitted values.e. we plot the fitted values against the actual dependent variable (y) to check whether the parameter estimates are reasonable. First. ● 25 ● ● 15 10 5 fitted values 20 ●● ● ● ● ● ● ● 0 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ●● ● ●● ●●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ●● ● ● ●● ● ● ● ●● ● ●● ● ● ● ● ●●● ●● ● ● ● ●● ● ●●● ● ●●● ●● ● ● ●● ● ● ● ●●● ● ●●● ● ●● ● ●● ● ●●● ● ●●● ● ● ●● ●● ●● ●● ●● ● ●●● ● ●● ● ● ●● ●● ●● ● ● ● ●● ● ● ● ● ● ● ●●● ● ● ●● ● ● ●● ● ● ●● ●●● ● ● ●● ● ● 0 5 10 15 20 25 actual values Figure 1: Fitted values from the Kmenta approximation against y. However. the Kmenta approximation only converges to the underlying CES function in a region of convergence that is dependent on the true parameters of the CES function (Thursby and Lovell 1978). More reliable results can only be obtained if ρ → 0. fitted( cesKmenta ). Although. 3.

10

Econometric Estimation of the Constant Elasticity of Substitution Function in R

algorithm can be seen as a maximum neighbourhood method which performs an optimum
interpolation between a first-order Taylor series approximation (Gauss-Newton method) and a
steepest-descend method (gradient method) (Marquardt 1963). By combining these two nonlinear optimisation algorithms, the developers want to increase the convergence probability
by reducing the weaknesses of each of the two methods.
In a Monte Carlo study by Thursby (1980), the Levenberg-Marquardt algorithm outperforms the other methods and gives the best estimates of the CES parameters. However, the
Levenberg-Marquardt algorithm performs as poorly as the other methods in estimating the
elasticity of substitution (σ), which means that the estimated σ tends to be biased towards
infinity, unity, or zero.
Although the Levenberg-Marquardt algorithm does not live up to modern standards, we include it for reasons of completeness, as it is has proven to be a standard method for estimating
CES functions.
To estimate a CES function by non-linear least-squares using the Levenberg-Marquardt algorithm, one can call the cesEst function with argument method set to "LM" or without this
argument, as the Levenberg-Marquardt algorithm is the default estimation method used by
cesEst. The user can modify a few details of this algorithm (e.g., different criteria for convergence) by adding argument control as described in the documentation of the R function
nls.lm.control. Argument start can be used to specify a vector of starting values, where
the order must be γ, δ1 , δ2 , δ, ρ1 , ρ2 , ρ, and ν (of course, all coefficients that are not in the
model must be omitted). If no starting values are provided, they are determined automatically (see Section 4.7). For demonstrative purposes, we estimate all three (i.e., two-input,
three-input nested, and four-input nested) CES functions with the Levenberg-Marquardt algorithm, but in order to reduce space, we will proceed with examples of the classical two-input
CES function only.
R> cesLm2 <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE )
R> summary( cesLm2 )
Estimated CES function with variable returns to scale
Call:
cesEst(yName = "y2", xNames = c("x1", "x2"), data = cesData,
vrs = TRUE)
Estimation by non-linear least-squares using the 'LM' optimizer
assuming an additive error term
Convergence achieved after 4 iterations
Message: Relative error in the sum of squares is at most `ftol'.
Coefficients:
Estimate Std. Error t value Pr(>|t|)
gamma 1.02385
0.11562
8.855
<2e-16 ***
delta 0.62220
0.02845 21.873
<2e-16 ***
rho
0.54192
0.29090
1.863
0.0625 .
nu
1.08582
0.04569 23.765
<2e-16 ***

Arne Henningsen, G´eraldine Henningsen
--Signif. codes:

0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.446577
Multiple R-squared: 0.7649817
Elasticity of Substitution:
Estimate Std. Error t value Pr(>|t|)
E_1_2 (all)
0.6485
0.1224
5.3 1.16e-07 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

R> cesLm3 <- cesEst( "y3", c( "x1", "x2", "x3" ), cesData, vrs = TRUE )
R> summary( cesLm3 )
Estimated CES function with variable returns to scale
Call:
cesEst(yName = "y3", xNames = c("x1", "x2", "x3"), data = cesData,
vrs = TRUE)
Estimation by non-linear least-squares using the 'LM' optimizer
assuming an additive error term
Convergence achieved after 5 iterations
Message: Relative error in the sum of squares is at most `ftol'.
Coefficients:
Estimate Std. Error t value Pr(>|t|)
gamma
0.94558
0.08279 11.421 < 2e-16 ***
delta_1 0.65861
0.02439 27.000 < 2e-16 ***
delta
0.60715
0.01456 41.691 < 2e-16 ***
rho_1
0.18799
0.26503
0.709 0.478132
rho
0.53071
0.15079
3.519 0.000432 ***
nu
1.12636
0.03683 30.582 < 2e-16 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.409937
Multiple R-squared: 0.8531556
Elasticities of Substitution:
Estimate Std. Error t value Pr(>|t|)
E_1_2 (HM)
0.84176
0.18779
4.483 7.38e-06 ***
E_(1,2)_3 (AU) 0.65329
0.06436 10.151 < 2e-16 ***
---

11

12

Econometric Estimation of the Constant Elasticity of Substitution Function in R

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
HM = Hicks-McFadden (direct) elasticity of substitution
AU = Allen-Uzawa (partial) elasticity of substitution

R> cesLm4 <- cesEst( "y4", c( "x1", "x2", "x3", "x4" ), cesData, vrs = TRUE )
R> summary( cesLm4 )
Estimated CES function with variable returns to scale
Call:
cesEst(yName = "y4", xNames = c("x1", "x2", "x3", "x4"), data = cesData,
vrs = TRUE)
Estimation by non-linear least-squares using the 'LM' optimizer
assuming an additive error term
Convergence achieved after 8 iterations
Message: Relative error in the sum of squares is at most `ftol'.
Coefficients:
Estimate Std. Error t value Pr(>|t|)
gamma
1.22760
0.12515
9.809 < 2e-16 ***
delta_1 0.78093
0.03442 22.691 < 2e-16 ***
delta_2 0.60090
0.02530 23.753 < 2e-16 ***
delta
0.51154
0.02086 24.518 < 2e-16 ***
rho_1
0.37788
0.46295
0.816 0.414361
rho_2
0.33380
0.22616
1.476 0.139967
rho
0.91065
0.25115
3.626 0.000288 ***
nu
1.01872
0.04355 23.390 < 2e-16 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.424439
Multiple R-squared: 0.7890757
Elasticities of Substitution:
Estimate Std. Error t value Pr(>|t|)
E_1_2 (HM)
0.7258
0.2438
2.976 0.00292 **
E_3_4 (HM)
0.7497
0.1271
5.898 3.69e-09 ***
E_(1,2)_(3,4) (AU)
0.5234
0.0688
7.608 2.79e-14 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
HM = Hicks-McFadden (direct) elasticity of substitution
AU = Allen-Uzawa (partial) elasticity of substitution
Finally, we plot the fitted values against the actual values y to see whether the estimated
parameters are reasonable. The results are presented in Figure 2.

main R> compPlot ( cesData$y4. = "four-input nested CES" ) 20 two−input CES 0 5 10 15 20 actual values 13 25 5 10 15 actual values 20 ● ●● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ●● ● ●● ●● ●● ●● ● ● ●● ● ● ● ●●● ● ●● ● ●●● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ●● ●● ●●● ● ● ● ● ●● ● ●●● ● ●● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ●● ●● ● ●● ● ●●● ●● ●● ● ●●● ● ●● ● ●● ● ● ● ●● ● ●● ● ● ● ● ●● ● ●● ● ● ● ●●●● ● ● ●● ● ● ● ●● ● ● ●●● ● ●● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● 5 10 15 ● 20 actual values Figure 2: Fitted values from the LM algorithm against actual values.. fitted( + ylab = "fitted values". fitted( + ylab = "fitted values". The “Conjugated Gradient” method works best for objective functions that are approximately quadratic and it is sensitive to objective functions that are not well-behaved and have a non-positive semi-definite Hessian.g. we will briefly discuss some practical issues of the algorithms that will be used to estimate the CES function. the “Conjugated Gradient” method is probably less suitable than other algorithms for estimating a CES function. = "two-input CES" ) cesLm3 ). main R> compPlot ( cesData$y3. i.g. fitted( + ylab = "fitted values".Arne Henningsen.. main three−input nested CES ● four−input nested CES ● 25 20 ● ● 15 fitted values 5 0 ●● ● ● ● ● ● ●● ● ●● ● ● ● ●● ● ●●●●● ● ● ●● ● ●● ● ● ● ●● ●● ● ●● ● ● ● ●● ● ●●● ● ● ●● ● ● ● ● ● ● ●● ● ●● ● ●● ● ● ● ●● ●● ● ●●● ●●●● ● ● ● ●●●● ●●●● ● ● ● ● ● ● ● ● ●● ●● ●● ● ● ●● ● ●●●● ●●● ●● ● ●● ● ●●●● ● ● ● ● ● ●● ● ●● ●●● ●● ● ● ● ● ● ● ●● ●●● ● ●● ●●● ● ● ● ● ● ●●● ● ●● ●● ● ●● ● ● ● ● ●● ● ● ● ● 10 15 10 ● fitted values ● ● 5 20 15 10 5 ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ●●●● ● ● ● ● ● ● ●● ● ● ● ●● ●● ● ● ● ● ●● ● ● ●●●● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ●● ●●●● ● ●● ●●● ●● ● ●● ● ●● ● ●●● ● ●● ● ●● ● ● ●● ●● ● ● ●● ●●● ● ● ● ● ●● ● ● ● ●● ● ● ● ●● ●●● ● ●●● ● ●● ●● ● ● ●● ●● ● ●● ● ● ●● ● ● ● ●●●●● ● ●● ● ● ● ● ● ● fitted values cesLm2 ). As a proper application of these estimation methods requires the user to be familiar with the main characteristics of the different algorithms. e. A detailed discussion of iterative optimisation algorithms is available. in Kelley (1999) or Mishra (2007). xlab = "actual values". Setting argument method of cesEst to "CG" selects the “Conjugate Gradients” method for estimating the CES function by non-linear least-squares. replacing the update formula of Fletcher and Reeves (1964) by the formula of Polak and Ribi`ere (1969) or the one based on Sorenson (1969) and . = "three-input nested CES" ) cesLm4 ). This iterative method is mostly applied to optimisation problems with many parameters and a large and possibly sparse Hessian matrix. Function cesEst can use some of them to estimate a CES function by non-linear least-squares. it is not the aim of this paper to thoroughly discuss these algorithms. convergence within the given number of iterations is less likely the more the level surface of the objective function differs from spherical (Kelley 1999). xlab = "actual values". However. Given that the CES function has only few parameters and the objective function is not approximately quadratic and shows a tendency to “flat surfaces” around the minimum. G´eraldine Henningsen R> compPlot ( cesData$y2.e. Several further gradient-based optimisation algorithms that are suitable for non-linear leastsquares estimations are implemented in R.. Conjugate Gradients One of the gradient-based optimisation algorithms that can be used by cesEst is the “Conjugate Gradients” method based on Fletcher and Reeves (1964). because this algorithm does not require that the Hessian matrix is stored or inverted. xlab = "actual values". The user can modify this algorithm (e.

Error t value Pr(>|t|) gamma 1. reltol = 1e-5 ) ) R> summary( cesCg2 ) Estimated CES function with variable returns to scale Call: .84e-07 *** --Signif.05 '.28531 1. cesData. nu 1.001 '**' 0.cesEst( "y2".446998 Multiple R-squared: 0. c( "x1".01 '*' 0.g.1 ' ' 1 Although the estimated parameters are similar to the estimates from the Levenberg-Marquardt algorithm..7649009 Elasticity of Substitution: Estimate Std. "x2").952 <2e-16 *** rho 0.01 '*' 0.660 <2e-16 *** --Signif.05 '. data = cesData.04566 23. R> cesCg <. cesData. This confirms a slow convergence of the “Conjugate Gradients” algorithm for estimating the CES function. vrs = TRUE. vrs = TRUE.709 0. vrs = TRUE. Increasing the maximum number of iterations and the tolerance level leads to convergence. method = "CG".0874 .cesEst( "y2".' 0.1 ' ' 1 Residual standard error: 2.08043 0. "x2" ).02828 21. convergence tolerance level) by adding a further argument control as described in the “Details” section of the documentation of the R function optim. codes: 0 '***' 0. "x2" ).1289 5.865 <2e-16 *** delta 0.11684 8.' 0. method = "CG" ) R> summary( cesCg ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".214 1. Error t value Pr(>|t|) E_1_2 (all) 0. + control = list( maxit = 1000.001 '**' 0. c( "x1". codes: 0 '***' 0. xNames = c("x1".62081 0.14 Econometric Estimation of the Constant Elasticity of Substitution Function in R Beale (1972)) or some other details (e.48769 0. method = "CG") Estimation by non-linear least-squares using the 'CG' optimizer assuming an additive error term Convergence NOT achieved after 401 function and 101 gradient calls Coefficients: Estimate Std. the “Conjugated Gradient” algorithm reports that it did not converge.6722 0. R> cesCg2 <.03581 0.

1 ' ' 1 Newton Another algorithm supported by cesEst that is probably more suitable for estimating a CES function is an improved Newton-type method.62220 0. c( "x1".29090 1. data = cesData.855 <2e-16 *** delta 0.873 <2e-16 *** rho 0.001 '**' 0.05 '.765 <2e-16 *** --Signif. and Weiss (1985).05 '. "x2"). nu 1. The following commands estimate a CES function by non-linear least-squares using this algorithm and print summary results.02385 0. cesData. Setting argument method of cesEst to "Newton" selects this improved Newton-type method. this algorithm uses first and second derivatives of the objective function to determine the direction of the shift vector and searches for a stationary point until the gradients are (almost) zero.1 ' ' 1 Residual standard error: 2. in contrast to the original Newton method.0625 . xNames = c("x1".16e-07 *** --Signif. "x2" ).08582 0. vrs = TRUE.04569 23.' 0. However. + method = "Newton" ) R> summary( cesNewton ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".863 0. vrs = TRUE. the maximum step length) by adding further arguments that are described in the documentation of the R function nlm. Error t value Pr(>|t|) E_1_2 (all) 0.6485 0. Koontz.cesEst( "y2". this algorithm does a line search at each iteration to determine the optimal length of the shift vector (step size) as described in Dennis and Schnabel (1983) and Schnabel. codes: 0 '***' 0. codes: 0 '***' 0. G´eraldine Henningsen 15 cesEst(yName = "y2".1224 5.7649817 Elasticity of Substitution: Estimate Std.' 0.446577 Multiple R-squared: 0. As with the original Newton method.02845 21.01 '*' 0. R> cesNewton <. .g. data = cesData. Error t value Pr(>|t|) gamma 1. "x2")..54192 0. xNames = c("x1".3 1.001 '**' 0.11562 8.Arne Henningsen. reltol = 1e-05)) Estimation by non-linear least-squares using the 'CG' optimizer assuming an additive error term Convergence achieved after 1375 function and 343 gradient calls Coefficients: Estimate Std.01 '*' 0. control = list(maxit = 1000. method = "CG". The user can modify a few details of this algorithm (e.

.' 0.29091 1.446577 Multiple R-squared: 0.05 '. codes: 0 '***' 0.7649817 Elasticity of Substitution: Estimate Std. c( "x1".6485 0.863 0.873 <2e-16 *** rho 0. the BFGS method does a line search for the best step size and uses a special procedure to approximate and update the Hessian matrix in every iteration.1224 5. nu 1. Error t value Pr(>|t|) gamma 1.3 1.1 ' ' 1 Broyden-Fletcher-Goldfarb-Shanno Furthermore. Fletcher (1970). method = "BFGS" ) R> summary( cesBfgs ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". The problem with BFGS can be that although the current parameters are close to the minimum. vrs = TRUE. If argument method of cesEst is "BFGS". xNames = c("x1".02385 0. the algorithm does not converge because the Hessian matrix at the current parameters is not close to the Hessian matrix at the minimum. the convergence tolerance level) by adding the further argument control as described in the “Details” section of the documentation of the R function optim.855 <2e-16 *** delta 0.11562 8.' 0. The user can modify a few details of the BFGS algorithm (e.001 '**' 0.. Goldfarb (1970).62220 0. a quasi-Newton method developed independently by Broyden (1970).01 '*' 0.16e-07 *** --Signif.04569 23. the BFGS algorithm is used for the estimation.02845 21. R> cesBfgs <. Error t value Pr(>|t|) E_1_2 (all) 0. BFGS proves robust convergence (often superlinear) (Kelley 1999).05 '.001 '**' 0. "x2").765 <2e-16 *** --Signif. data = cesData.08582 0. "x2" ).01 '*' 0.g. codes: 0 '***' 0. and Shanno (1970) can be used by cesEst. in practice.54192 0.16 Econometric Estimation of the Constant Elasticity of Substitution Function in R vrs = TRUE.1 ' ' 1 Residual standard error: 2. However.cesEst( "y2". cesData. method = "Newton") Estimation by non-linear least-squares using the 'Newton' optimizer assuming an additive error term Convergence achieved after 25 iterations Coefficients: Estimate Std.0625 . This so-called BFGS algorithm also uses first and second derivatives and searches for a stationary point of the objective function where the gradients are (almost) zero. In contrast to the original Newton method.

Arne Henningsen, G´eraldine Henningsen

17

vrs = TRUE, method = "BFGS")
Estimation by non-linear least-squares using the 'BFGS' optimizer
assuming an additive error term
Convergence achieved after 72 function and 15 gradient calls
Coefficients:
Estimate Std. Error t value Pr(>|t|)
gamma 1.02385
0.11562
8.855
<2e-16 ***
delta 0.62220
0.02845 21.873
<2e-16 ***
rho
0.54192
0.29091
1.863
0.0625 .
nu
1.08582
0.04569 23.765
<2e-16 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.446577
Multiple R-squared: 0.7649817
Elasticity of Substitution:
Estimate Std. Error t value Pr(>|t|)
E_1_2 (all)
0.6485
0.1224
5.3 1.16e-07 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

3.3. Global optimisation algorithms
Nelder-Mead
While the gradient-based (local) optimisation algorithms described above are designed to find
local minima, global optimisation algorithms, which are also known as direct search methods,
are designed to find the global minimum. These algorithms are more tolerant to objective
functions which are not well-behaved, although they usually converge more slowly than the
gradient-based methods. However, increasing computing power has made these algorithms
suitable for day-to-day use.
One of these global optimisation routines is the so-called Nelder-Mead algorithm (Nelder and
Mead 1965), which is a downhill simplex algorithm. In every iteration, n + 1 vertices are
defined in the n-dimensional parameter space. The algorithm converges by successively replacing the “worst” point by a new vertex in the multi-dimensional parameter space. The
Nelder-Mead algorithm has the advantage of a simple and robust algorithm, and is especially suitable for residual problems with non-differentiable objective functions. However, the
heuristic nature of the algorithm causes slow convergence, especially close to the minimum,
and can lead to convergence to non-stationary points. As the CES function is easily twice
differentiable, the advantage of the Nelder-Mead algorithm simply becomes its robustness.
As a consequence of the heuristic optimisation technique, the results should be handled with
care. However, the Nelder-Mead algorithm is much faster than the other global optimisation
algorithms described below. Function cesEst estimates a CES function with the Nelder-

18

Econometric Estimation of the Constant Elasticity of Substitution Function in R

Mead algorithm if argument method is set to "NM". The user can tweak this algorithm (e.g.,
the reflection factor, contraction factor, or expansion factor) or change some other details
(e.g., convergence tolerance level) by adding a further argument control as described in the
“Details” section of the documentation of the R function optim.
R> cesNm <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE,
+
method = "NM" )
R> summary( cesNm )
Estimated CES function with variable returns to scale
Call:
cesEst(yName = "y2", xNames = c("x1", "x2"), data = cesData,
vrs = TRUE, method = "NM")
Estimation by non-linear least-squares using the 'Nelder-Mead' optimizer
assuming an additive error term
Convergence achieved after 265 iterations
Coefficients:
Estimate Std. Error t value Pr(>|t|)
gamma 1.02399
0.11564
8.855
<2e-16 ***
delta 0.62224
0.02845 21.872
<2e-16 ***
rho
0.54212
0.29095
1.863
0.0624 .
nu
1.08576
0.04569 23.763
<2e-16 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.446577
Multiple R-squared: 0.7649817
Elasticity of Substitution:
Estimate Std. Error t value Pr(>|t|)
E_1_2 (all)
0.6485
0.1223
5.3 1.16e-07 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Simulated Annealing
The Simulated Annealing algorithm was initially proposed by Kirkpatrick, Gelatt, and Vecchi
(1983) and Cerny (1985) and is a modification of the Metropolis-Hastings algorithm. Every
iteration chooses a random solution close to the current solution, while the probability of the
choice is driven by a global parameter T which decreases as the algorithm moves on. Unlike
other iterative optimisation algorithms, Simulated Annealing also allows T to increase which
makes it possible to leave local minima. Therefore, Simulated Annealing is a robust global
optimiser and it can be applied to a large search space, where it provides fast and reliable
solutions. Setting argument method to "SANN" selects a variant of the “Simulated Annealing”

Arne Henningsen, G´eraldine Henningsen

19

algorithm given in B´elisle (1992). The user can modify some details of the “Simulated Annealing” algorithm (e.g., the starting temperature T or the number of function evaluations
at each temperature) by adding a further argument control as described in the “Details”
section of the documentation of the R function optim. The only criterion for stopping this
iterative process is the number of iterations and it does not indicate whether the algorithm
converged or not.
R> cesSann <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE, method = "SANN" )
R> summary( cesSann )
Estimated CES function with variable returns to scale
Call:
cesEst(yName = "y2", xNames = c("x1", "x2"), data = cesData,
vrs = TRUE, method = "SANN")
Estimation by non-linear least-squares using the 'SANN' optimizer
assuming an additive error term
Coefficients:
Estimate Std. Error t value Pr(>|t|)
gamma 1.01104
0.11477
8.809
<2e-16 ***
delta 0.63414
0.02954 21.469
<2e-16 ***
rho
0.71252
0.31440
2.266
0.0234 *
nu
1.09179
0.04590 23.784
<2e-16 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.449907
Multiple R-squared: 0.7643416
Elasticity of Substitution:
Estimate Std. Error t value Pr(>|t|)
E_1_2 (all)
0.5839
0.1072
5.447 5.12e-08 ***
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
As the Simulated Annealing algorithm makes use of random numbers, the solution generally
depends on the initial “state” of R’s random number generator. To ensure replicability, cesEst
“seeds” the random number generator before it starts the “Simulated Annealing” algorithm
with the value of argument random.seed, which defaults to 123. Hence, the estimation of
the same model using this algorithm always returns the same estimates as long as argument
random.seed is not altered (at least using the same software and hardware components).
R> cesSann2 <- cesEst( "y2", c( "x1", "x2" ), cesData, vrs = TRUE, method = "SANN" )
R> all.equal( cesSann, cesSann2 )
[1] TRUE

0220481 1. "x2" ). control = list( maxit = 100000 ) ) m <. method = "SANN". cesData. control = list( maxit = 100000 ) ) cesSannB3 <. "x2" ). method = "SANN". vrs = TRUE.000 by default. cesSannB4 = coef( cesSannB4 ).seed = 123456.0936468 0.0917877 1. method = "SANN". "x2" ). method = "SANN". and Lampinen 2006) belongs to the class of . c( "x1". which is 10. the user can try increasing the number of iterations. cesData. c( "x1".5745630 0.2414348 rho 0.seed = 1234.0868674 0. the Differential Evolution algorithm (Storn and Price 1997.0337633 1.0386186 1.6341353 0.6225271 0. random.seed = 12345 ) cesSann5 <. cesSann4 = coef( cesSann4 ).2429077 rho 0.cesEst( "y2". vrs = TRUE.0208154 1. control = list( maxit = 100000 ) ) cesSannB5 <.0868685 1.2429077 Now the estimates are much more similar—only the estimates of ρ still differ somewhat.cesEst( "y2".20 Econometric Estimation of the Constant Elasticity of Substitution Function in R It is recommended to start this algorithm with different values of argument random. control = list( maxit = 100000 ) ) cesSannB4 <. c( "x1". vrs = TRUE. Price. random. R> + R> + R> + R> + R> cesSann3 <.6238302 0. cesSannB5 = coef( cesSannB5 ) ) rbind( m. "x2" ).0797618 1. c( "x1". random.5623429 0. "x2" ).6260139 0. "x2" ). method = "SANN".2429077 delta 0. vrs = TRUE. cesData.7125172 0. stdDev = sd( m ) ) cesSannB cesSannB3 cesSannB4 cesSannB5 stdDev gamma 1.seed = 12345. c( "x1". random. vrs = TRUE. Now we will re-estimate the model a few times with 100.6381545 0.0827909 1.2414348 If the estimates differ remarkably.2429077 nu 1. cesData.seed = 1234 ) cesSann4 <. cesData.cesEst( "y2". cesData. Differential Evolution In contrary to the other algorithms described in this paper.6206622 0. cesData.seed and to check whether the estimates differ considerably. cesSann5 = coef( cesSann5 ) ) rbind( m. "x2" ). c( "x1".6149628 0.5284805 0. cesSannB3 = coef( cesSannB3 ). vrs = TRUE.0232862 0.2414348 delta 0.0820884 1.6204874 0.0344585 1.cesEst( "y2".5759035 0.2414348 nu 1. random.0110416 1.0813484 1. R> + R> + R> + R> + R> + R> cesSannB <.rbind( cesSannB = coef( cesSannB ). cesSann3 = coef( cesSann3 ).5259158 0.000 iterations each.seed = 123456 ) m <.cesEst( "y2".5632106 0.rbind( cesSann = coef( cesSann ).4716324 0. random. stdDev = sd( m ) ) cesSann cesSann3 cesSann4 cesSann5 stdDev gamma 1.cesEst( "y2". method = "SANN". c( "x1". vrs = TRUE. Storn.cesEst( "y2".0101985 0. method = "SANN".

. Error t value Pr(>|t|) E_1_2 (all) 0. vrs = TRUE. + control = list( trace = FALSE ) ) R> summary( cesDe ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". The user can modify the Differential Evolution algorithm (e. control = list(trace = FALSE)) Estimation by non-linear least-squares using the 'DE' optimizer assuming an additive error term Coefficients: Estimate Std. Function cesEst uses a Differential Evolution optimiser for the non-linear least-squares estimation of the CES function. and 0 ≤ ν ≤ 10.446587 Multiple R-squared: 0. ρ ≤ 10. the Differential Evolution method requires finite boundaries for the parameters. Error t value Pr(>|t|) gamma 1. For some problems.05 '. Ali and T¨ orn 2004.0648 .Arne Henningsen. 0 ≤ δ1 . or other genetic algorithms (Storn and Price 1997.01 '*' 0.01 '*' 0. However. the differential evolution strategy or selection method) or change some details (e.7649799 Elasticity of Substitution: Estimate Std.control. G´eraldine Henningsen 21 evolution strategy optimisers and convergence cannot be proven analytically. xNames = c("x1".2901 1. "x2").g. δ2 . ρ2 .761 <2e-16 *** --Signif.5358 0.1230 5. if argument method is set to "DE". nu 1. c( "x1". method = "DE".0859 0.2e-07 *** --Signif.1 ' ' 1 Residual standard error: 2. Quasi-Newton. the number of population members) by adding a further argument control as described in the documentation of the R function DEoptim.873 <2e-16 *** rho 0. Mishra 2007).1156 8. vrs = TRUE. the user can specify own lower and upper bounds by setting arguments lower and upper to numeric vectors. inter alia the CES function (Mishra 2007). codes: 0 '***' 0.853 <2e-16 *** delta 0.847 0. data = cesData. Of course. method = "DE". −1 ≤ ρ1 . R> cesDe <. δ ≤ 1.6212 0.05 '.001 '**' 0.6511 0.cesEst( "y2".0457 23.1 ' ' 1 .0233 0. By default.. it has proven to be more accurate and more efficient than Simulated Annealing.' 0.293 1. "x2" ).g.0284 21.' 0. the algorithm has proven to be effective and accurate on a large range of optimisation problems.001 '**' 0. codes: 0 '***' 0. cesData. the bounds are 0 ≤ γ ≤ 1010 . In contrary to the other optimisation algorithms.

vrs = TRUE. itermax = 1000 ) ) rbind( cesDeB = coef( cesDeB ).cesEst( "y2".0096730 1. random. cesDe3 = coef( cesDe3 ).6212100 0.6217174 0.seed = 123456.2483916 These estimates are rather similar.cesEst( "y2". method = "DE". + control = list( trace = FALSE ) ) R> all. c( "x1". cesData.rbind( cesDe = coef( cesDe ). "x2" ). control = list( trace = FALSE.0902370 0. cesData. "x2" ). c( "x1". "x2" ). control = list( trace = FALSE ) ) cesDe4 <.5358122 0. which generally indicates that all estimates are close to the optimum (minimum of the sum of squared residuals). c( "x1". method = "DE".seed result in considerably different estimates.seed = 12345.0233453 1. cesData. cesData. itermax = 1000 ) ) cesDeB4 <. e.6216175 0. method = "DE". method = "DE". vrs = TRUE.0488844 1. random. vrs = TRUE.cesEst( "y2".000 population generations each. vrs = TRUE.seed = 1234. cesData. cesDe4 = coef( cesDe4 ). it is also recommended to check whether different values of argument random.5457970 0.2483916 delta 0. cesDeB3 = coef( cesDeB3 ). random. method = "DE". cesDeB4 = coef( cesDeB4 ). control = list( trace = FALSE. vrs = TRUE.equal( cesDe.seed = 1234. control = list( trace = FALSE.2483916 nu 1. cesDe5 = coef( cesDe5 ) ) rbind( m. R> cesDe2 <. Now we will re-estimate this model a few times with 1. "x2" ).2483916 rho 0.0135793 0. itermax = 1000 ) ) cesDeB5 <. "x2" ).seed = 12345. method = "DE".0913477 1.5736423 0. c( "x1". which is 200 by default.0760523 1.cesEst( "y2". control = list( trace = FALSE ) ) cesDe5 <. However. R> + R> + R> + R> + R> cesDe3 <.g. stdDev = sd( m ) ) cesDe cesDe3 cesDe4 cesDe5 stdDev gamma 1.cesEst( "y2". c( "x1".cesEst( "y2". cesDe2 ) [1] TRUE When using this algorithm. random. c( "x1". vrs = TRUE. the user can try to increase the maximum number of population generations (iterations) using control parameter itermax.5019560 0. if the user wants to obtain more precise estimates than those derived from the default settings of this algorithm.0858811 1. the Differential Evolution algorithm makes use of random numbers and cesEst “seeds” the random number generator with the value of argument random. control = list( trace = FALSE. cesData. R> + R> + R> + R> + R> + cesDeB <. c( "x1".cesEst( "y2".seed before it starts this algorithm to ensure replicability. method = "DE". if the estimates differ considerably. cesDeB5 = coef( cesDeB5 ) ) . method = "DE".seed = 123456.. random.cesEst( "y2". random. "x2" ). control = list( trace = FALSE ) ) m <. cesData.22 Econometric Estimation of the Constant Elasticity of Substitution Function in R Like the “Simulated Annealing” algorithm. c( "x1". vrs = TRUE. "x2" ). cesData. "x2" ). itermax = 1000 ) ) cesDeB3 <. vrs = TRUE.6212958 0.

it is often desirable to constrain the parameter space to the economically meaningful region.6221982 rho 0. The user can tweak some details of this algorithm (e. Therefore. δ2 .g. but instead relies on the past (often less than 10) values of the parameters and the gradient vector.023853 cesDeB5 1. Nocedal.6221982 0.08582 The estimates are now virtually identical. The user can specify own lower and upper bounds by setting arguments lower and upper to numeric vectors.08582 1. the L-BFGS-B algorithm is especially suitable for high dimensional optimisation problems. R> cesLbfgsb <. the number of BFGS updates) by adding a further argument control as described in the “Details” section of the documentation of the R function optim. 0 ≤ δ1 . −1 ≤ ρ1 . Lu. the restrictions on the parameters are 0 ≤ γ < ∞. Moreover.023853 cesDeB4 1. The user can further increase the likelihood of finding the global optimum by increasing the number of population members using control parameter NP. vrs = TRUE. G´eraldine Henningsen gamma cesDeB 1. In contrast to the ordinary BFGS algorithm summarised above. ρ < ∞. cesData.08582 1. function cesEst can use two gradient-based optimisation algorithms for estimating a CES function under parameter constraints. which is 10 times the number of parameters by default and should not have a smaller value than this default value (see documentation of the R function DEoptim.08582 1. By default. method = "L-BFGS-B") . c( "x1". "x2" ). "x2"). This can be done by the Differential Evolution (DE) algorithm as described above.023853 cesDeB3 1. data = cesData. xNames = c("x1".023853 delta 0. Constraint parameters As a meaningful analysis based on a CES function requires that the function is consistent with economic theory. δ ≤ 1.. and Zhu (1995).6221982 0.cesEst( "y2". ρ2 .4. but—of course—it can also be used for optimisation problems with only a few parameters (as the CES function). Function cesEst estimates a CES function with parameter constraints using the L-BFGS-B algorithm if argument method is set to "L-BFGS-B".5419226 0. vrs = TRUE.5419226 23 nu 1.5419226 0.control). + method = "L-BFGS-B" ) R> summary( cesLbfgsb ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2". L-BFGS-B One of these methods is a modification of the BFGS algorithm suggested by Byrd. the so-called L-BFGS-B algorithm allows for box-constraints on the parameters and also does not explicitly form or store the Hessian matrix.Arne Henningsen. 3.5419226 0.6221982 0. and 0 ≤ ν < ∞.

codes: 0 '***' 0. "x2" ).g.62220 0.02385 0.765 <2e-16 *** --Signif.6485 0.54192 0. vrs = TRUE.24 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimation by non-linear least-squares using the 'L-BFGS-B' optimizer assuming an additive error term Convergence achieved after 35 function and 35 gradient calls Message: CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH Coefficients: Estimate Std.. cesData. "x2"). trust regions and reverse communication.cesEst( "y2".' 0.16e-07 *** --Signif.04569 23. Setting argument method to "PORT" selects the optimisation algorithm of the PORT routines.001 '**' 0.05 '. xNames = c("x1".05 '.29090 1.01 '*' 0. + method = "PORT" ) R> summary( cesPort ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".1224 5. vrs = TRUE.02845 21. Error t value Pr(>|t|) gamma 1.855 <2e-16 *** delta 0. data = cesData.' 0.1 ' ' 1 Residual standard error: 2. codes: 0 '***' 0.7649817 Elasticity of Substitution: Estimate Std.0625 . c( "x1". nu 1.873 <2e-16 *** rho 0.08582 0. The lower and upper bounds of the parameters have the same default values as for the L-BFGS-B method. the minimum step size) by adding a further argument control as described in section “Control parameters” of the documentation of R function nlminb.01 '*' 0. Error t value Pr(>|t|) E_1_2 (all) 0. R> cesPort <.001 '**' 0.g..11562 8. The user can modify a few details of the Newton algorithm (e. method = "PORT") Estimation by non-linear least-squares using the 'PORT' optimizer assuming an additive error term Convergence achieved after 27 iterations .863 0. e.1 ' ' 1 PORT routines The so-called PORT routines (Gay 1990) include a quasi-Newton optimisation algorithm that allows for box constraints on the parameters and has several advantages over traditional Newton routines.446577 Multiple R-squared: 0.3 1.

62220 0. Le´ on-Ledesma.. Technological change Estimating the CES function with time series data usually requires an extension of the CES functional form in order to account for technological change (progress).01 '*' 0. G´eraldine Henningsen 25 Message: relative convergence (4) Coefficients: Estimate Std.04569 23.0625 . ˆ factor augmenting (non-neutral) technological change y=γ  x1 e λ1 t −ρ  λ2 t + x2 e −ρ − νρ . (14) where λ1 and λ2 measure input-specific technological change. accounting for technological change in CES functions basically boils down to two approaches: ˆ Hicks-neutral technological change − ν  ρ −ρ + (1 − δ)x y = γ eλ t δx−ρ .5.08582 0. Klump.' 0. and Willman 2010).g.1224 5.3 1. McAdam.873 <2e-16 *** rho 0. There is a lively ongoing discussion about the proper way to estimate CES functions with factor augmenting technological progress (e. λ is the rate of technological change.05 '.Arne Henningsen. So far. we decided to wait until a state-of-the-art approach emerges before including factor augmenting technological change into micEconCES. Error t value Pr(>|t|) E_1_2 (all) 0. codes: 0 '***' 0.7649817 Elasticity of Substitution: Estimate Std.765 <2e-16 *** --Signif.1 ' ' 1 3. . Although many approaches seem to be promising. and t is a time variable. Therefore.1 ' ' 1 Residual standard error: 2.05 '.02385 0.446577 Multiple R-squared: 0.855 <2e-16 *** delta 0.001 '**' 0.11562 8.02845 21. and Willman 2007.29091 1. Error t value Pr(>|t|) gamma 1. micEconCES only includes Hicks-neutral technological change at the moment.54192 0.863 0. codes: 0 '***' 0.6485 0. nu 1.' 0. Luoma and Luoto 2010. 1 2 (13) where γ is (as before) an efficiency parameter.001 '**' 0. McAdam.16e-07 *** --Signif.01 '*' 0.

rho = 0. parameter λ is by default restricted to the interval [−0.0362520 27.76e-14 *** nu 1.964 < 2e-16 *** rho 0. (ii) calculate the (deterministic) output variable of a CES function with 1% Hicks-neutral technological progress in each time period.26 Econometric Estimation of the Constant Elasticity of Substitution Function in R When calculating the output variable of the CES function using cesCalc or when estimating the parameters of the CES function using cesEst. Error t value Pr(>|t|) gamma 0. codes: 0 '***' 0.9899389 Elasticity of Substitution: Estimate Std.c( 1:200 ) cesData$yt <.5. R> R> + R> R> + R> cesData$t <. "x2"). nu = 1. "x2" ). tName = "t". data = cesData. 0.5971397 0.6.653 < 2e-16 *** --Signif. data = cesData. as this algorithm requires finite lower and upper bounds of all parameters.5.10 The following commands (i) generate an (artificial) time variable t. "x2" ).0100173 0.0130233 84. .cesData$yt + 2. Error t value Pr(>|t|) E_1_2 (all) 0.0073754 80. vrs = TRUE. the name of time variable (t) can be specified by argument tName.5268406 0.001 '**' 0. delta = 0.1. (iv) estimate the model.0001007 99. method = "LM" ) summary( cesTech ) Estimated CES function with variable returns to scale Call: cesEst(yName = "yt".5 * rnorm( 200 ) cesTech <. tName = "t".1024537 0.550677 Multiple R-squared: 0. method = "LM") Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 5 iterations Message: Relative error in the sum of squares is at most `ftol'. where the corresponding coefficient (λ) is labelled lambda.94 <2e-16 *** 10 If the Differential Evolution (DE) algorithm is used. The user can use arguments lower and upper to modify these bounds. vrs = TRUE.01 '*' 0. (iii) add noise to obtain the stochastic “observed” output variable.397 < 2e-16 *** lambda 0.1 ' ' 1 Residual standard error: 2.9932082 0. xNames = c("x1".5].0696036 7. and (v) print the summary results.65495 0.cesCalc( xNames = c( "x1". c( "x1".cesEst( "yt".569 3. coef = c( gamma = 1.506 < 2e-16 *** delta 0. Coefficients: Estimate Std.02986 21.05 '. lambda = 0. data = cesData.' 0.01 ) ) cesData$yt <. tName = "t".

01 '*' 0. Since the “best” values of the substitution parameters (ρ1 . data = cesData. by = 0.1 and the default optimisation method.5. rho = seq(from = -0. but estimated (as the other parameters. if argument rho1. As the (nested) CES functions defined above can have up to three substitution parameters. where the pre-selected values for ρ are the values from −0. ρ). The function cesEst carries out this grid search procedure. ρ) is pre-selected and the remaining parameters are estimated by non-linear least-squares holding the substitution parameters fixed at each combination of the pre-defined values. the covariance matrix of the estimated parameters also includes the substitution parameters and is calculated as if the substitution parameters were estimated as usual. ρ2 . to = 1. codes: 27 0 '***' 0.3 to 1. "x2" ). Functions cesCalc and cesEst can account for Hicks-neutral technological change in all CES specifications that are generally supported by these functions. The estimates with the values of the substitution parameters that result in the smallest sum of squared residuals are chosen as the final estimation result. ρ2 .05 '. However. respectively. ρ) that are found in the grid search are not known. we have demonstrated how Hicks-neutral technological change can be modelled in the two-input CES function. G´eraldine Henningsen --Signif. i. but with a different estimation method). The following command estimates the two-input CES function by a one-dimensional grid search for ρ. The estimation of the other parameters during the grid search can be performed by all the non-linear optimisation algorithms described above. where a grid of values for the substitution parameters (ρ1 . by multiplying the CES function with eλ t . c( "x1".001 '**' 0.cesEst( "y2". vrs = TRUE.5. and ρ. or rho is set to a numeric vector.6. Grid search for ρ The objective function for estimating CES functions by non-linear least-squares often shows a tendency to “flat surfaces” around the minimum—in particular for a wide range of values for the substitution parameters (ρ1 . xNames = c("x1".1 ) ) R> summary( cesGrid ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".Arne Henningsen. ρ2 .' 0. is used to estimate the remaining parameters. or three-dimensional.3. many optimisation algorithms have problems in finding the minimum of the objective function. to = 1. Therefore.e.. The values of these vectors are used to specify the grid points for the substitution parameters ρ1 .5 with an increment of 0. rho2. by = 0. vrs = TRUE.1)) . this problem can be alleviated by performing a grid search. R> cesGrid <. In case of more than two inputs—regardless of whether the CES function is “plain” or “nested”—Hicks-neutral technological change can be accounted for in the same way. 3. ρ2 . data = cesData. particularly in case of n-input nested CES functions. the grid search over the substitution parameters can be either one-. two-. the Levenberg-Marquardt algorithm.1 ' ' 1 Above.3. + rho = seq( from = -0. "x2").

7649542 Elasticity of Substitution: Estimate Std.8. and ρ.4 to 0. −0.022 <2e-16 *** rho 0. we apply the default optimisation method.44672 Multiple R-squared: 0. Error t value Pr(>|t|) gamma 1.001 '**' 0.2 ) ) R> summary( ces4Grid ) Estimated CES function with constant returns to scale 11 This plot method can only be applied if the model was estimated by grid search.3 ).cesEst( yName = "y4". Coefficients: Estimate Std. xNames = c( "x1".1 ' ' 1 A graphical illustration of the relationship between the pre-selected values of the substitution parameters and the corresponding sums of the squared residuals can be obtained by applying the plot method. + rho1 = seq( from = -0. + rho = seq( from = -0.02819 22. codes: 0 '***' 0.9 with an increment of 0.794 <2e-16 *** --Signif.9.05 '.48e-07 *** --Signif.50000 0. "x4" ). nu 1.28 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimation by non-linear least-squares using the 'LM' optimizer and a one-dimensional grid search for coefficient 'rho' assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'.11506 8. by = 0. and −0. Error t value Pr(>|t|) E_1_2 (all) 0.1269 5.' 0.8 with an increment of 0.08746 0.752 0. method = "LM".62072 0.0798 .6.852 <2e-16 *** delta 0.28543 1.7 with an increment of 0. Again.04570 23.1 ' ' 1 Residual standard error: 2. "x3".2 for ρ2 .01 '*' 0.3. ρ2 .2 ).001 '**' 0. to = 0. Preselected values are −0.6 to 0. As a further example. R> ces4Grid <.01 '*' 0.3 for ρ1 . the Levenberg-Marquardt algorithm. we estimate a four-input nested CES function by a three-dimensional grid search for ρ1 . + data = cesData.05 '.6667 0.4. + rho2 = seq( from = -0. to = 0. codes: 0 '***' 0.11 R> plot( cesGrid ) This graphical illustration is shown in Figure 3. by = 0.255 1.01851 0.3 to 1. by = 0. to = 1.' 0. "x2".2 for ρ.7. .

111 < 2e-16 *** delta 0. codes: 0 '***' 0. to = 1.0 ● ● ● 0. rho = seq(from = -0.8.3.3).657 0.000271 *** --Signif.088727 .0 1.30000 0.05 '.45684 0.' 0. G´eraldine Henningsen 1260 ● 1240 rss ● ● ● ● 1220 ● ● ● ● ● ● 1200 ● ● ● ● ● 0.01 '*' 0. rho1 = seq(from = -0. Coefficients: Estimate Std. by = 0. "x3".482 < 2e-16 *** delta_1 0.90000 0. by = 0.60272 0. 'rho' assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'. data = cesData.40000 0.5 1.4.02119 24. "x4").425583 Multiple R-squared: 0. Call: cesEst(yName = "y4".9.01632 78.642 0. to = 0. method = "LM".702 0.6.7887368 Elasticities of Substitution: 29 . rho 0.02608 23.001 '**' 0. rho2 = seq(from = -0.03237 24.Arne Henningsen.7.51498 0.2). "x2".24714 3.28086 0. Error t value Pr(>|t|) gamma 1.302 < 2e-16 *** rho_1 0.511382 rho_2 0.197 < 2e-16 *** delta_2 0.1 ' ' 1 Residual standard error: 1.2)) Estimation by non-linear least-squares using the 'LM' optimizer and a three-dimensional grid search for coefficients 'rho_1'. xNames = c("x1".78337 0. 'rho_2'. by = 0.5 rho Figure 3: Sum of squared residuals depending on ρ. to = 0.23500 1.

27032 2. An example is shown in Figure 4.52632 0. data = cesData.05 '.71429 0.688 1. codes: 0 '***' 0. "x2"). the plot method generates three three-dimensional graphs.02845 21.1 ' ' 1 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution Naturally. Starting values can be set by argument start.76923 0.06846 7.01 '*' 0. vrs = TRUE. c( "x1".54192 0. for a three-dimensional grid search.62220 0. where each of the three substitution parameters (ρ1 .' 0. R> cesStartGrid <. "x2" ). ρ2 . vrs = TRUE. ρ2 .873 <2e-16 *** rho 0.0625 .765 <2e-16 *** --Signif. xNames = c("x1". or as starting values for a new non-linear least-squares estimation. Coefficients: Estimate Std. plotting the sums of the squared residuals against the corresponding (pre-selected) values of ρ1 .11562 8.1 ' ' 1 .001 '**' 0. the values of the substitution parameters that are between the grid points can also be estimated.00443 ** E_3_4 (HM) 0.2)_(3.958 2. Error t value Pr(>|t|) E_1_2 (HM) 0.49e-14 *** --Signif.08582 0.855 <2e-16 *** delta 0. ρ) in turn is kept fixed at its optimal value. nu 1.30 Econometric Estimation of the Constant Elasticity of Substitution Function in R Estimate Std.05 '. + start = coef( cesGrid ) ) R> summary( cesStartGrid ) Estimated CES function with variable returns to scale Call: cesEst(yName = "y2".846 0. Error t value Pr(>|t|) gamma 1. R> plot( ces4Grid ) The results of the grid search algorithm can be used either directly. In the latter case.11990 5.01 '*' 0.863 0.04569 23. data = cesData.' 0.001 '**' 0. As it is (currently) not possible to account for more than three dimensions in a graph. start = coef(cesGrid)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 4 iterations Message: Relative error in the sum of squares is at most `ftol'. codes: 0 '***' 0. and ρ.02385 0.56e-09 *** E_(1.29090 1.cesEst( "y2". would require a four-dimensional graph.4) (AU) 0.

6 0.0 0.8 0.5 rh rh 2 o_ 1 o_ 0.5 0.2 −0.0 1.0 −0.0 1 rh o 0.4 0.0 o_ rh 0.5 −0.Arne Henningsen.4 −420 −440 −460 −480 1. G´eraldine Henningsen negative sums of squared residuals −410 −420 −430 −440 0.2 0.8 0.6 0.0 −0.5 −420 −440 −460 −480 −500 0.0 0. ρ2 .4 0.4 Figure 4: Sum of squared residuals depending on ρ1 .5 1.5 0.2 −0. and ρ. 31 .0 −0.2 0.5 rh o rh 2 1.5 o_ 0.

78554 0.1 ' ' 1 R> ces4StartGrid <. data = cesData.41742 0. "x4" ). codes: 0 '***' 0.46878 0.049 < 2e-16 *** rho_1 0.442 < 2e-16 *** delta_1 0.12678 5.05 '.001 '**' 0.024 0. codes: 0 '***' 0.132696 rho 0.783 < 2e-16 *** delta_2 0.02573 23.08e-14 *** --Signif.504 0.374 < 2e-16 *** delta 0.001 '**' 0.730 1.25067 3. "x4").425085 Multiple R-squared: 0. data = cesData. codes: 0 '***' 0.93762 0.373239 rho_2 0.60130 0.6485 0.866 4. Coefficients: Estimate Std.890 0.01 '*' 0.' 0.28212 0.2)_(3.cesEst( "y4".16e-07 *** --Signif.1 ' ' 1 Residual standard error: 1.000184 *** --Signif. start = coef(ces4Grid)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 6 iterations Message: Relative error in the sum of squares is at most `ftol'.3 1.01634 78.02130 24.0025 ** E_3_4 (HM) 0. "x3".51610 0.' 0.05 '.001 '**' 0.06677 7.4) (AU) 0. "x3".7888844 Elasticities of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (all) 0.446577 Multiple R-squared: 0.51224 0. Error t value Pr(>|t|) E_1_2 (HM) 0. + start = coef( ces4Grid ) ) R> summary( ces4StartGrid ) Estimated CES function with constant returns to scale Call: cesEst(yName = "y4".32 Econometric Estimation of the Constant Elasticity of Substitution Function in R Residual standard error: 2.1 ' ' 1 .7649817 Elasticity of Substitution: Estimate Std. c( "x1".34464 0.' 0.70551 0. xNames = c("x1".22922 1. "x2".74369 0.46e-09 *** E_(1. Error t value Pr(>|t|) gamma 1.05 '.01 '*' 0.23333 3.01 '*' 0. "x2".1224 5.03303 23.741 0.

and Cline 2011). on which the grid search should be performed. BFGS. G´eraldine Henningsen 33 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution 4. and Hillstrom 1980). The restricted translog model (6. on which the grid search should be performed. Estimations with the Levenberg-Marquardt algorithm are performed by function nls. However. the actual estimations are carried out by internal helper functions. which is an R interface to the Fortran package MINPACK (Mor´e. Estimations with the PORT routines use function nlminb from the stats package (R Development Core Team 2011). The test of the parameter restrictions defined in Equation 7 is performed by the function linear. for which the user has specified vectors of pre-selected values of the substitution parameters (rho1. Gil. Windover. which implements the actual grid search procedure. or functions from other packages.3.lm package (Elzhov and Mullen 2009).1.Arne Henningsen. In each of these internal calls of cesEst.hypothesis of the car package (Fox 2009). Garbow. Simulated Annealing (SANN). Ardia. Estimations with the “Conjugate Gradients” (CG).2. the parameters on which no grid search should be performed (and which are not fixed by the user). ρ2 . Estimations with the Newton-type algorithm are performed by function nlm from the stats package (R Development Core Team 2011). 4. Non-linear least-squares estimation The non-linear least-squares estimations are carried out by various optimisers from other packages. Implementation The function cesEst is the primary user interface of the micEconCES package (Henningsen and Henningsen 2011b). to the particular elements of these vectors. As cesEst is called with arguments . Nelder-Mead (NM). which uses the Fortran library PORT (Gay 1990). are estimated given the particular combination of substitution parameters. rho2. and L-BFGS-B algorithms use the function optim from the stats package (R Development Core Team 2011).lm of the minpack. or rho being a vector. Kmenta approximation The estimation of the Kmenta approximation (5) is implemented in the internal function cesEstKmenta. For each combination (grid point) of the pre-selected values of the substitution parameters (ρ1 . Estimations with the Differential Evolution (DE) algorithm are performed by function DEoptim from the DEoptim package (Mullen. which uses the Fortran library UNCMIN (Schnabel et al. This is done by setting the arguments of cesEst. 4. rho2. the function cesEstGridRho consecutively calls cesEst. 7) is estimated with function systemfit from the systemfit package (Henningsen and Hamann 2007). 1985) with a line search as the step selection strategy. and/or rho). 4. Grid search If the user calls cesEst with at least one of the arguments rho1. cesEst calls the internal function cesEstGridRho. This function uses translogEst from the micEcon package (Henningsen 2010) to estimate the unrestricted translog function (6). ρ).

+ yCES = NA. rounding errors can become large in specific circumstances. ρ2 .1 and C. Furthermore.13 We illustrate the rounding errors in the left panel of Figure 5. nu = 1. A few examples of using cesCalc are shown in the beginning of Section 3. ρ/ρ1 . "x2" ).c( gamma = 1. This first-order Taylor series approximation is the Kmenta approximation defined in Equation 5. respectively. the cesCalc function is called by the internal function cesRss that calculates and returns the sum of squared residuals. or −ν/ρ) are applied. rhoApprox = 0 ) + } 12 The limit of the traditional two-input CES function (1) for ρ approaching zero is equal to the exponential function of the Kmenta approximation (5) calculated with ρ = 0. which is the objective function in the non-linear least-squares estimations. data = cesData[1. . 4. ρ2 .1.cesCalc( xNames = c( "x1".g.g. 2e-6.frame( rho = seq( -2e-6. it estimates the CES function by non-linear least-squares with the corresponding substitution parameters (ρ1 . 3. rhoApprox = Inf ) + rhoData$yCES[ i ] <.4. delta = 0.1. Therefore. data = cesData[1. We noticed that the calculations with cesCalc using Equations 1.cesCalc( xNames = c( "x1". for the traditional two-input CES function (1). rho2. which is 5 · 10−6 by default.g. 13 The derivation of the first-order Taylor series approximations based on Uebe (2000) is presented in Appendix A. if the absolute value of ρ is smaller than. + coef = cesCoef. or 4 are imprecise if at least one of the substitution parameters (ρ1 . the CES functions are not defined. where this function is applied to generate the output variables of an artificial data set for demonstrating the usage of cesEst.6. ρ2 . ρ/ρ2 . "x2" ). ρ2 .data. yLin = NA ) R> # calculate dependent variables R> for( i in 1:nrow( rhoData ) ) { + # vector of coefficients + cesCoef <.12 In case of nested CES functions with three or four inputs. which has been created by following commands. −ρ1 . This is caused by rounding errors that are unavoidable on digital computers. 5e-9 ). e. and/or ρ approaching zero. cesCalc uses a first-order Taylor series approximation at the point ρ = 0 for calculating the output. If at least one substitution parameter (ρ1 ... and/or ρ) fixed at the values specified in the corresponding arguments. or −ρ) and then very large (in absolute terms) exponents (e. but are usually negligible. However. in CES functions with very small (in absolute terms) substitution parameters.]. ρ2 . function cesCalc calls the internal functions cesCalcN3 or cesCalcN4 for the actual calculations. In this case. R> rhoData <. ρ) is close to 0. and rho being all single scalars (or NULL if the corresponding substitution parameter is neither included in the grid search nor fixed at a pre-defined value). and/or ρ approaching zero are presented in Appendices B. + coef = cesCoef. cesCalc returns the limit of the output quantity for ρ1 .34 Econometric Estimation of the Constant Elasticity of Substitution Function in R rho1.. −ρ2 . argument rhoApprox. ρ) is equal to zero. when first very small (in absolute terms) exponents (e. or equal to. The limits of the three-input (4) and fourinput (3) nested CES functions for ρ1 .].1 ) + rhoData$yLin[ i ] <. Calculating output and the sum of squared residuals Function cesCalc can be used to calculate the output quantity of the CES function given input quantities and coefficients. rho = rhoData$rho[ i ].

Argument rhoApprox of cesEst must be a numeric vector. The right panel of Figure 5 shows that the relationship between ρ and the output y can be rather precisely approximated by a linear function. if at least one substitution parameter (ρ1 . red = lines( rhoData$rho. ρ2 . black = linearised) R> R> R> R> + R> 35 −1 0 1 2 3 rho Figure 5: Calculated output for different values of ρ. because they are the same as the commands for the left panel of this figure except for the command for creating the vector of ρ values.0e−08 5.e.05 rhoData$rho == 0 ] rhoData$rho == 0 ] col = "red". because calculating Taylor series approximations of nested CES functions (and their derivatives with respect to coefficients.5e−07 # normalise output variables rhoData$yCES <. G´eraldine Henningsen −2e−06 −1e−06 0e+00 rho 1e−06 2e−06 −0. see the following section) is very laborious and has no advantages over using linear interpolation. Depending on the number of substitution parameters (ρ1 . ylab = "y (normalised. This might not only affect the fitted values and residuals returned by cesEst (if at least one of the estimated substitu14 The commands for creating the right panel of Figure 5 are not shown here.00 0..5e−07 y (normalised. In this case. where the first element is passed to cesCalc (partly through cesRss). ρ) is close to zero.rhoData$yLin[ plot( rhoData$rho.Arne Henningsen. . i. two-. cesCalc uses linear interpolation in order to avoid rounding errors. type = "l". xlab = "rho". a one-. ρ2 .rhoData$yCES . cesCalc calculates the output quantities for two different values of each substitution parameter that is close to zero.10 0.14 In case of nested CES functions. because it is nearly linear for a wide range of ρ values. red = CES.rhoData$yLin[ rhoData$yLin <.15 When estimating a CES function with function cesEst.rhoData$yLin . rhoData$yCES. black = linearised)" ) −0.0e−08 1. These interpolations are performed by the internal functions cesInterN3 and cesInterN4. 15 We use a different approach for the nested CES functions than for the traditional two-input CES function. CES.20 y (normalised. red = CES. rhoData$yLin ) −1. zero (using the formula for the limit for this parameter approaching zero) and the positive or negative value of argument rhoApprox (using the same sign as this parameter). the user can use argument rhoApprox to modify the threshold to calculate the dependent variable by the Taylor series approximation or linear interpolation. ρ) that are close to zero. black = linearised) −5. or three-dimensional linear interpolation is applied.

. the partial derivatives with respect to λ are calculated also with Equation 16. we calculate these derivatives by first-order Taylor series approximations at the point ρ = 0 using the limits for ρ approaching zero:16  ρ  ∂y ν (1−δ) = eλ t xν1 δ x2 exp − ν δ (1 − δ) (ln x1 − ln x2 )2 (20) ∂γ 2 ∂y ν(1−δ) = γ eλ t ν (ln x1 − ln x2 ) xν1 δ x2 (21) ∂δ    ρ 1 − 1 − 2 δ + ν δ(1 − δ) (ln x1 − ln x2 ) (ln x1 − ln x2 ) 2  ∂y 1 λt ν δ ν(1−δ) = γ e ν δ (1 − δ) x1 x2 − (ln x1 − ln x2 )2 (22) ∂ρ 2  ρ ρ + (1 − 2 δ) (ln x1 − ln x2 )3 + ν δ(1 − δ) (ln x1 − ln x2 )4 3 4  ∂y ν(1−δ) = γ eλ t xν1 δ x2 δ ln x1 + (1 − δ) ln x2 (23) ∂ν  ρ − δ(1 − δ) (ln x1 − ln x2 )2 [1 + ν (δ ln x1 + (1 − δ) ln x2 )] 2 If ρ is zero or close to zero. these partial derivatives are:  − ν ∂y ρ −ρ = eλ t δx−ρ + (1 − δ)x (15) 1 2 ∂γ ∂y ∂y =γt (16) ∂λ ∂γ  − ν −1 ν  −ρ ∂y ρ −ρ −ρ (17) = − γ eλ t x1 − x−ρ δx + (1 − δ)x 2 1 2 ∂δ ρ   − ν ∂y ν ρ −ρ −ρ −ρ (18) = γ eλ t 2 ln δx−ρ + (1 − δ)x δx + (1 − δ)x 1 2 1 2 ∂ρ ρ − ν −1  ν ρ −ρ −ρ −ρ + γ eλ t δ ln(x1 )x−ρ + (1 − δ) ln(x )x δx + (1 − δ)x 2 2 2 1 1 ρ − ν   ∂y 1 ρ −ρ −ρ −ρ + (1 − δ)x δx + (1 − δ)x (19) = − γ eλ t ln δx−ρ 2 1 2 1 ∂ν ρ These derivatives are not defined for ρ = 0 and are imprecise if ρ is close to zero (similar to the output variable of the CES function. Therefore.36 Econometric Estimation of the Constant Elasticity of Substitution Function in R tion parameters is close to zero). see Section 4. respectively. 4. Partial derivatives with respect to coefficients The internal function cesDerivCoef returns the partial derivatives of the CES function with respect to all coefficients at all provided data points. For the traditional two-input CES function. If at least one 16 The derivations of these formulas are presented in Appendix A.5.2.2 and C. but now ∂y/∂γ is calculated with Equation 20 instead of Equation 15.2. if ρ is zero or close to zero.4). The partial derivatives of the nested CES functions with three and four inputs with respect to the coefficients are presented in Appendices B. but also the estimation results (if one of the substitution parameters was close to zero in one of the steps of the iterative optimisation routines).

e. ui is the residual of the ith observation. cesDerivCoefN3Delta1 (∂y/∂δ1 ). ν} is a coefficient of the CES function. and PORT. cesDerivCoefN4Delta (∂y/∂δ). This argument must be a numeric vector with exactly four elements: the first element defines the threshold for ∂y/∂γ (default value 5 · 10−6 ). 1 cesDerivCoefN3L1 (returning L1 = δ1 ln x1 + (1 − δ1 ) ln x2 ). . ρ2 . using the limit for these parameters approaching zero and potentially a one-. and cesDerivCoefN4B (returning B = δB1 + (1 − δ)B2 2 ). Function cesRssDeriv is used to provide analytical gradients for the other gradient-based optimisation algorithms. Newton-type. Finally. Argument rhoApprox of cesEst must be a numeric vector of five elements. θ ∈ {γ.or more-dimensional linear interpolations are (again) performed by the internal functions cesInterN3 and cesInterN4. i. and ∂y/∂δ (default value 5 · 10−6 ). which calculates the partial derivatives of the sum of squared residuals (RSS) with respect to all coefficients by:  N  X ∂RSS ∂yi = −2 ui . cesDerivCoefN3Rho (∂y/∂ρ). cesDerivCoefN4Delta2 (∂y/∂δ2 ). L-BFGS-B. calculated by: cesDerivCoefN4Gamma (∂y/∂γ). cesDerivCoefN4Rho (∂y/∂ρ). and cesDerivCoefN3B (returning ρ/ρ B = δB1 1 + (1 − δ)x−ρ The partial derivatives of the nested CES function with four inputs are 3 ). cesDerivCoefN3Rho1 (∂y/∂ρ1 ). ρ) is exactly zero or close to zero. Function cesDerivCoef has an argument rhoApprox that can be used to specify the threshold levels for defining when ρ1 . BFGS. and 1 cesDerivCoefN3Nu (∂y/∂ν) with helper functions cesDerivCoefN3B1 (returning B1 = δ1 x−ρ + (1 − δ1 )x2−ρ1 ).Arne Henningsen.e. the user can use argument rhoApprox to specify the thresholds below which the derivatives with respect to the coefficients are approximated by Taylor series approximations or linear interpolations. ∂y/∂δ2 . ρ2 . function cesDerivCoef is used by the internal function cesRssDeriv. cesDerivCoefN4Delta1 (∂y/∂δ1 ).2 and C. or three-dimensional linear interpolation. δ2 .lm so that the LevenbergMarquardt algorithm can use analytical derivatives of each residual with respect to all coefficients. and the fourth element defines the threshold for ∂y/∂ν (default value 5 · 10−6 ). The calculation of the partial derivatives and their limits are performed by several internal functions with names starting with cesDerivCoefN3 and cesDerivCoefN4. The limits of the partial derivatives of the nested CES functions for one or more substitution parameters approaching zero are also presented in Appendices B. cesDerivCoefN4L2 3 4 ρ/ρ1 ρ/ρ (returning L2 = δ2 ln x3 + (1 − δ2 ) ln x4 ). function cesDerivCoef is used to obtain the gradient matrix for calculating the asymptotic covariance matrix of the non-linear least-squares estimator (see Section 4. and cesDerivCoefN4Nu (∂y/∂ν) with helper functions cesDerivCoefN4B1 (returning B1 = δ1 x1−ρ1 + (1 − δ1 )x2−ρ1 ). (24) ∂θ ∂θ i=1 where N is the number of observations.2. Conjugate Gradients.. Furthermore. cesDerivCoef uses the same approach as cesCalc to avoid large rounding errors. ρ2 . the second element defines the threshold for ∂y/∂δ1 . cesDerivCoefN4L1 (returning 2 2 L1 = δ1 ln x1 + (1 − δ1 ) ln x2 ). G´eraldine Henningsen 37 substitution parameter (ρ1 . δ1 . Function cesDerivCoef is used to provide argument jac (which should be set to a function that returns the Jacobian of the residuals) to function nls. δ. two-. and ρ are “close” to zero.17 The one. cesDerivCoefN4Lambda (∂y/∂λ). where the second to the fifth element of 17 The partial derivatives of the nested CES function with three inputs are calculated by: cesDerivCoefN3Gamma (∂y/∂γ). i. cesDerivCoefN4Rho1 (∂y/∂ρ1 ). cesDerivCoefN3Delta (∂y/∂δ). ∂y/∂ρ2 . ρ. cesDerivCoefN4B2 (returning B2 = δ2 x−ρ + (1 − δ2 )x−ρ ). When estimating a CES function with function cesEst. ρ1 .6). and ∂yi /∂θ is the partial derivative of the CES function with respect to coefficient θ evaluated at the ith observation as returned by function cesDerivCoef. and ∂y/∂ρ (default value 10−3 ). the third element defines the threshold for ∂y/∂ρ1 . cesDerivCoefN4Rho2 (∂y/∂ρ2 ).. cesDerivCoefN3Lambda (∂y/∂λ).

6. and δ are set to 0. which generally corresponds to an elasticity of substitution of 0.5% per time period.5.2 and C. Covariance matrix The asymptotic covariance matrix of the non-linear least-squares estimator obtained by the various iterative optimisation methods is calculated by: σ ˆ 2  ∂y ∂θ > ∂y ∂θ !−1 (25) (Greene 2008. but also the estimation results obtained by a gradient-based optimisation algorithm (if one of the substitution parameters is close to zero in one of the steps of the iterative optimisation routines).. i=1 CESi (27) where CESi indicates the (nested) CES function evaluated at the input quantities of the ith observation and with coefficient γ equal to one. ρ2 . δ2 . all “fixed” coefficients (e. and ρ are estimated (not fixed as.. the internal function cesEstStart checks if the number of starting values is correct and if the individual starting values are in the appropriate range of the corresponding parameters. If the CES function includes a time variable. N is the number of observations. the starting value of λ is set to 0. e. and all other coefficients equal to the above-described starting values.8. The starting value of ν is set to 1.38 Econometric Estimation of the Constant Elasticity of Substitution Function in R this vector are passed to cesDerivCoef. which corresponds to a technological progress of 1.g. ρ1 . Starting values If the user calls cesEst with argument start set to a vector of starting values.e. k is the number of coefficients. If the coefficients ρ1 .25. i.2 for the nested CES functions). and σ ˆ2 denotes the estimated variance of the residuals. As Equation 25 is only valid asymptotically. Finally. .7. which corresponds to constant returns to scale. the starting value of γ is set to a value so that the mean of the residuals is equal to zero. where ∂y/∂θ denotes the N ×k gradient matrix (defined in Equations 15 to 19 for the traditional two-input CES function and in Appendices B. during grid search). function cesEstStart determines the starting values automatically.015. The starting values of δ1 . 292).g. or ρ) equal to their pre-selected values. N (26) i=1 i. ρ2 . The choice of the threshold might not only affect the covariance matrix of the estimates (if at least one of the estimated substitution parameters is close to zero). PN yi γ = PN i=1 . p. their starting values are set to 0..e. we calculate the estimated variance of the residuals by σ ˆ2 = N 1 X 2 ui . 4. 4. without correcting for degrees of freedom. If no starting values are provided by the user.

respectively.Arne Henningsen. their standard errors. t-values. 4. The print method for objects of class "summary. the estimated coefficients. G´eraldine Henningsen 39 4. and marginal significance levels. In case of a two-dimensional grid search. the fitted values. their standard errors. the plot method plots three perspective plots by holding one of the three coefficients ρ1 . The internal function cesCheckRhoApprox checks argument rhoApprox of functions cesEst. In case of a three-dimensional grid search. ρ2 . L-BFGS-B. and cesRssDeriv. the internal function cesCoefBounds creates and returns the default bounds depending on the optimisation algorithm as described in Sections 3. The print method prints the call. If the model is estimated by a one-dimensional grid search for ρ1 .3 and 3. and the residuals.4. which are the names of the coefficients of the CES function. their covariance matrix. .8. and ρ constant in each of the three plots. The summary method calculates the estimated standard error of the residuals (ˆ σ ).cesEst". the plot method draws a perspective plot by using the command persp of the graphics package (R Development Core Team 2011) and the command colorRampPalette of the grDevices package (R Development Core Team 2011) (for generating a colour gradient). convergence). t-values. and ρ to the vector of coefficients. but does not specify lower or upper bounds of the coefficients... are not included in the vector of estimated coefficients. Methods The micEconCES package makes use of the “S3” class system of the R language introduced in Chambers and Hastie (1992).9. The coef method for objects of class "summary. If the user selects the optimisation algorithm Differential Evolution. The coef. cesRss. the estimated coefficients and elasticities of substitution.cesEst" prints the call. if these coefficients are fixed (e.g. the R2 value as well as the standard errors. and marginal significance levels (P -values) of the estimated parameters and elasticities of substitution.6). vcov. fitted. and residuals methods extract and return the estimated coefficients.cesEst" returns a matrix with four columns containing the estimated coefficients. and marginal significance levels as well as some information on the estimation procedure (e. Other internal functions The internal function cesCoefAddRho is used to add the values of ρ1 . the covariance matrix of the estimated coefficients and elasticities of substitution. cesDerivCoef. or ρ. ρ2 . ρ2 .default and points of the graphics package (R Development Core Team 2011). The object returned by the summary method is of class "summary. The internal function cesCoefNames returns a vector of character strings. or PORT.g. Objects returned by function cesEst are of class "cesEst" and the micEconCES package includes several methods for objects of this class. respectively. t-values. The plot method can only be applied if the model is estimated by grid search (see Section 3. during grid search for ρ) and hence. and the estimated elasticities of substitution. this method plots a simple scatter plot of the pre-selected values against the corresponding sums of the squared residuals by using the commands plot. algorithm.

We can re-define the parameters and variables in the above function in order to obtain the standard two-input CES function (1). g is the technology growth rate. x1 = 1. Buckheit and Donoho 1995.e. Hence. and Weil (1992) and Durlauf and Johnson (1995) and comprise crosssectional data on the country level. which is in contrast to the previous sections. to confirm the reliability of the micEconCES package by comparing cesEst’s results with results published in the literature. we aim at replicating estimations of CES functions published in two journal articles. ni is the population growth rate. yi is the steady-state aggregate output (Gross Domestic Product. second. We will re-estimate a Solow growth model based on an aggregate CES production function with capital and “augmented labour” (i. the elasticity of substitution is σ = 1/(1 − ρ). invest (average ratio of investment (including government investment) to GDP from 1960 to 1985 in percent). A is the efficiency of labour (with growth rate g). δ is the depreciation rate of capital goods. δ = 1−α . Please note that the relationship between the coefficient ρ and the elasticity of substitution is slightly different from this relationship in the original two-input CES function: in this case. we decomposed the (unobservable) steady-state output per unit of augmented labour in country i (yi∗ = Yi /(A Li ) with Yi being aggregate output. Henderson. the quantity of labour multiplied by the efficiency of labour) as inputs. and Kumbhakar (2011). Romer. amongst others. Sun. and Kumbhakar (2011) Our first replication study aims to replicate some of the estimations published in Sun. the estimated ρ must lie between minus infinity and (plus) one. The data used in Sun et al. Third. and ν = 1. GDP) per unit of labour. subscript i indicates the country. Li being aggregate labour. x2 = (ni + g + δ)/sik . which should be afforded higher priority in scientific economic research (e. This section has three objectives: first.. Greene. Replication studies In this section. This model is given in Masanjala and Papageorgiou (2004. the distribution parameter α should lie between zero and one. to encourage reproducible research. McGeary. where: 1 γ = A.1. Henderson. eq. so that the estimated parameter δ must have—in contrast to the standard CES function—a value larger than or equal to one. Furthermore. α is the distribution parameter of capital. Karrenbach. and σ is the elasticity of substitution. and Vinod 2008. McCullough. the variables gdp85 (per capita GDP in 1985).. 5. ρ = (σ − 1)/σ.e. and A being the efficiency of labour) into the (observable) steady-state output per unit of labour (yi = Yi /Li ) and the (unobservable) efficiency component (A) to obtain the equation that is actually estimated. 3):18 σ "   σ−1 #− σ−1 σ 1 α sik yi = A − (28) 1 − α 1 − α ni + g + δ Here. and popgrowth (average growth rate of working-age population 1960 to 1985 18 In contrast to Equation 3 in Masanjala and Papageorgiou (2004). McCullough. the coefficient ρ has the opposite sign. McCullough 2009). . and Claerbout 2000. Schwab. Anderson. and Harrison 2008.g. (2011) and Masanjala and Papageorgiou (2004) are actually from Mankiw.40 Econometric Estimation of the Constant Elasticity of Substitution Function in R 5. This data set is available in the R package AER (Kleiber and Zeileis 2009) and includes. to highlight and discuss the problems that occur when using real-world data by basing the econometric estimations in this section on realworld data. This article is itself a replication study of Masanjala and Papageorgiou (2004). sik is the ratio of investment to aggregate output. i.

we will calculate the distribution parameter of capital (α) and the elasticity of substitution (σ) manually. "x2"). "x2"). because it has to be calculated with a non-standard formula in this model. codes: 0 '***' 0.141 549.166 -1.175 0. R> cat( "alpha =". package = "AER" ) R> GrowthDJ <.2401 delta 3. (1992) that g + δ is equal to 5%): R> GrowthDJ$x1 <.cesEst( "gdp85".1 ' ' 1 Residual standard error: 3313. rho -0.05 '. ela = FALSE) Estimated CES function with constant returns to scale Call: cesEst(yName = "gdp85".2354 --Signif.187 0. G´eraldine Henningsen 41 in percent).748 Multiple R-squared: 0. Coefficients: Estimate Std. xNames = c("x1". R> data( "GrowthDJ". R> cesNls <. oil == "no" ) Now we will calculate the two “input” variables for the Solow growth model as described above and in Masanjala and Papageorgiou (2004) (following the assumption of Mankiw et al. data = GrowthDJ) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence achieved after 23 iterations Message: Relative error in the sum of squares is at most `ftol'. (2011).1 ) / coef( cesNls )[ "delta" ]. data = GrowthDJ ) R> summary( cesNls.1 R> GrowthDJ$x2 <.776 0.Arne Henningsen.01 '*' 0. "\n" ) . ( coef( cesNls )[ "delta" ] .subset( GrowthDJ. where we suppress the presentation of the elasticity of substitution (σ). The following commands load this data set and remove data from oil producing countries in order to obtain the same subset of 98 countries that is used by Masanjala and Papageorgiou (2004) and Sun et al.6016277 Now.993 1.197 0.( GrowthDJ$popgrowth + 5 ) / GrowthDJ$invest The following commands estimate the Solow growth model based on the CES function by nonlinear least-squares (NLS) and print the summary results.' 0.001 '**' 0. c( "x1".239 1. Error t value Pr(>|t|) gamma 646.0757 .977 2.

42 Econometric Estimation of the Constant Elasticity of Substitution Function in R alpha = 0. As the CES function approaches a Cobb-Douglas function if the coefficient ρ approaches zero and cesEst internally uses the limit for ρ approaching zero if ρ equals zero. ela = FALSE ) Estimated CES function with constant returns to scale Call: cesEst(yName = "gdp85". we also consider this replication exercise to be successful. ( coef( cdNls )[ "delta" ] . If we restrict ρ to zero and assume a multiplicative error term.5947313 R> cat( "alpha =".1609 0.0797 543. c( "x1".coef( cesNls )[ "rho" ] ).017722 * 0. 1 / ( 1 .1772 2.001 '**' Pr(>|t|) 0.0000 0.835429 These calculations show that we can successfully replicate the estimation results shown in Sun et al.05 '. Coefficients: Estimate Std.000445 *** 1. Table 1: α = 0. codes: 0 '***' 0. σ = 0. we can use function cesEst to estimate Cobb-Douglas functions by non-linear least-squares (NLS).01 '*' 0. (2011.6955 3.7485641 R> cat( "sigma =". "x2"). "\n" ) sigma = 0.1 ) / coef( cdNls )[ "delta" ]. "\n" ) alpha = 0.512 rho 0.4425 0.590591 As the deviation between our calculated α and the corresponding value published in Sun et al.1 ' ' 1 Residual standard error: 3342. Table 1: α = 0. (2011.308 Multiple R-squared: 0.000000 0.' 0. data = GrowthDJ. xNames = c("x1". if we restrict coefficient ρ to zero: R> cdNls <. data = GrowthDJ. Error t value gamma 1288.7486. "x2"). the estimation is in fact equivalent to an OLS estimation of the logarithmised version of the Cobb-Douglas function: .8354).5907) is very small.cesEst( "gdp85".000 --Signif. rho = 0 ) R> summary( cdNls.371 delta 2. rho = 0) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Coefficient 'rho' was fixed at 0 Convergence achieved after 7 iterations Message: Relative error in the sum of squares is at most `ftol'.

"x2"). "\n" ) alpha = 0. xNames = c("x1". "x2").2. Kemfert (1998) Our second replication study aims to replicate the estimation results published in Kemfert (1998).cesEst( "gdp85". codes: 0 '***' 0.1 ' ' 1 Residual standard error: 0.001 '**' 0.195 2. G´eraldine Henningsen 43 R> cdLog <. and labour.51e-16 *** rho 0.' 0.Arne Henningsen. data = GrowthDJ.05 '.19 As all of our above replication exercises were successful. ( coef( cdLog )[ "delta" ] . but generally. We do this just to test the reliability of cesEst and for the sake of curiosity. multErr = TRUE. Error t value Pr(>|t|) gamma 965.2337 120. we conclude that the micEconCES package has no problems with replicating the estimations of the two-input CES functions presented in Sun et al. multErr = TRUE ) R> summary( cdLog. c( "x1".4880 0. (2011).5973597 R> cat( "alpha =". Kemfert’s CES functions basically have the same specification as our three-input nested CES function 19 This is indeed a rather complex way of estimating a simple linear model by least squares.5981). we recommend using simple linear regression tools to estimate this model. ela = FALSE ) Estimated CES function with constant returns to scale Call: cesEst(yName = "gdp85".1 ) / coef( cdLog )[ "delta" ]. . data = GrowthDJ.6814132 Multiple R-squared: 0. Coefficients: Estimate Std. we can successfully replicate the α shown in Sun et al. Kemfert (1998) estimates nested CES functions with all three possible nesting structures.4003 8. (2011.017 1. As nested CES functions are not invariant to the nesting structure.3036 8. She estimates nested CES production functions for the German industrial sector in order to analyse the substitutability between the three aggregate inputs capital. energy. rho = 0.01 '*' 0.0000 0.5980698 Again.1056 0.08e-15 *** delta 2. 5. Table 1: α = 0. rho = 0) Estimation by non-linear least-squares using the 'LM' optimizer assuming a multiplicative error term Coefficient 'rho' was fixed at 0 Convergence achieved after 8 iterations Message: Relative error in the sum of squares is at most `ftol'.000 1 --Signif.

and labour. and ν = 1. we try to estimate the first model specification using the standard function for non-linear least-squares estimations in R (nls). labour input (A) is the total number of persons employed in the West German industrial sector (in million). and energy input (E) is determined by the final energy consumption of the West German industrial sector (in GWh).try( nls( Y ~ gamma * exp( lambda * time ) * + ( delta2 * ( delta1 * K ^(-rho1) + + ( 1 . The following commands load the data set. e. nls terminates with an error message. year < 1973 | year > 1975.5.1960 R> GermanIndustry <. λ = ms .2. . and energy. lambda = 0. Indeed. the three inputs x1 . where the subscript s = 1. δ = as . labour.5. labour. add a linear time trend (starting at zero in 1960).015. and in the third nesting structure (s = 3). δ1 = bs . which were originally published by the German statistical office. 2.44 Econometric Estimation of the Constant Elasticity of Substitution Function in R defined in Equation 4 and allow for Hicks-neutral technological change as in Equation 13.delta2 ) * A^(-rho) )^( . 3 of the parameters in Kemfert (1998) indicates the nesting structure. R> cesNls1 <. env) : Missing value or an infinity produced when evaluating the model However.. ρ = βs . the estimation with cesEst using. + start = c( gamma = 1. they are capital. ) First. R> data( "GermanIndustry" ) R> GermanIndustry$time <. they are energy. where—according to the specification of the three-input nested CES function (see Equation 4)—the first and second input (x1 and x2 ) are nested together so that the (constant) Allen-Uzawa elasticities of substitution between x1 and x3 (σ13 ) and between x2 and x3 (σ23 ) are equal.subset( GermanIndustry. However. The data used by Kemfert (1998) are available in the R package micEconCES. + data = GermanIndustry ) ) R> cat( cesNls1 ) Error in numericDeriv(form[[3L]].GermanIndustry$year . and x3 are capital. as in many estimations of nested CES functions with real-world data. delta1 = 0. the LevenbergMarquardt algorithm.2 ). In the first nesting structure (s = 1). these data were taken from the appendix of Kemfert (1998) to ensure that we used exactly the same data. energy. Output (Y) is given by gross value added of the West German industrial sector (in billion Deutsche Mark at 1991 prices).delta1 ) * E^(-rho1) )^( rho / rho1 ) + + ( 1 .1 / rho ). rho = 0. capital input (K) is given by gross stock of fixed assets of the West German industrial sector (in billion Deutsche Mark at 1991 prices). rho1= 0. x2 .g. In contrast. respectively. usually returns parameter estimates. ρ1 = αs . and remove the years of economic disruption during the oil crisis (1973 to 1975) in order to obtain the same subset as used by Kemfert (1998). names(ind). + delta2 = 0. in the second nesting structure (s = 2). The data are annual aggregated time series data of the entire German industry for the period 1960 to 1993. Kemfert (1998) does not allow for increasing or decreasing returns to scale and the naming of the parameters is different with: γ = As . and capital.

Coefficients: Estimate Std.853 rho_1 5.376e+00 -1.lm. data = GermanIndustry. --Signif. xNames = c("K".567 E_(1.005 5.561 0.Arne Henningsen.lm.20 While our estimated coefficient λ is not too far away from the corresponding coefficient in Kemfert (1998) (m1 = 0.9958804 Elasticities of Substitution: Estimate Std. Error t value Pr(>|t|) E_1_2 (HM) 0.03239 0. codes: 0 '***' 0.01 '*' 0. "A").581e-04 45.575 rho -2.' 0.control(maxiter = 1024.300e+01 9.064 . Consequently. "A" ). control = nls.641e-05 4.cesEst( "Y".112e-04 0. + control = nls. c( "K". tName = "time". coefficient ρ estimated by cesEst 20 Of course.01852 0. the default convergence criteria are not fulfilled after this number of iterations.653). .927e-66 0.2)_3 (AU) NA NA NA NA HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution Although we set the maximum number of iterations to the maximum possible value (1024).5300.222). Moreover. "E".25944 Multiple R-squared: 0.572 0. maxfev = 2000 ) ) R> summary( cesLm1 ) Estimated CES function with constant returns to scale Call: cesEst(yName = "Y".186 0. maxfev = 2000)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Convergence NOT achieved after 1024 iterations Message: Number of iterations has reached `maxiter' == 1024.05 '. tName = "time".900e+01 5.1813 in).59e-07 *** lambda 2. we can achieve “successful convergence” by increasing the tolerance levels for convergence.549e+00 1. ρ1 = α1 = 0. G´eraldine Henningsen 45 R> cesLm1 <.139e-68 5.004 0.444e+01 0.100e-02 4.997 delta 7.852 0.795e+00 5. our Hicks-McFadden elasticity of substitution between capital and energy considerably deviates from the corresponding elasticity published in Kemfert (1998) (σα1 = 0. + data = GermanIndustry.001 '**' 0.842 < 2e-16 *** delta_1 2. ρ = β1 = 0. this is not the case for the estimated coefficients ρ1 and ρ (Kemfert 1998. Error t value Pr(>|t|) gamma 2.control( maxiter = 1024. "E".1 ' ' 1 Residual standard error: 10.

0839 . c( "K".09314 0.max = 1000.46 Econometric Estimation of the Constant Elasticity of Substitution Function in R is not in the economically meaningful range. rho_1 9.04886 1.453e+00 0. control = list(iter.' 0.061 0. Kemfert (1998) does not report the estimates of the other coefficients.729 0.075e-02 5. If cesEst is called with argument method equal to "PORT". + data = GermanIndustry. Unfortunately. "A"). rho 5.001 '**' 0. i. the coefficients are constrained to be in the economically meaningful region by default. contradicts economic theory.287e-04 39.311e-01 2.e.65314 1.932 0. For the two other nesting structures. method = "PORT".624 0. cesEst using the Levenberg-Marquardt algorithm reaches a successful convergence.158e-13 1.8286 --Signif.217 0.633e+00 1.533e+00 2.9516 delta 9. codes: 0 '***' 0. data = GermanIndustry.995533 Elasticities of Substitution: Estimate Std. which allows for box constraints on the parameters.5989 lambda 2. "E".68322 Multiple R-squared: 0.max = 2000)) Estimation by non-linear least-squares using the 'PORT' optimizer assuming an additive error term Convergence NOT achieved after 464 iterations Message: false convergence (8) Coefficients: Estimate Std.905e-01 1. eval.909e-12 0.526 0.906 0.0566 . + control = list( iter.. In order to avoid the problem of economically meaningless estimates. xNames = c("K".0534 . but again the estimated ρ1 and ρ parameters are far from the estimates reported in Kemfert (1998) and are not in the economically meaningful range.476e-01 4. Error t value Pr(>|t|) gamma 1. E_(1. so that the elasticities of substitution between capital/energy and labour are not defined. method = "PORT". "A" ).2)_3 (AU) 0.01 '*' 0. tName = "time".736e+00 5.max = 2000) ) R> summary( cesPort1 ) Estimated CES function with constant returns to scale Call: cesEst(yName = "Y".1 ' ' 1 Residual standard error: 10.max = 1000.5324 --- . "E". Error t value Pr(>|t|) E_1_2 (HM) 0. we re-estimate the model with cesEst using the PORT algorithm.cesEst( "Y". eval.05 '.914e+00 0.251 <2e-16 *** delta_1 1. tName = "time".04623 0. R> cesPort1 <.

While ρ1 and ρ2 can be restricted automatically by arguments rho1 and rho of cesEst.0222 "E1".001 '**' 0. γ.GermanIndustry$A cesLmFixed1 <.e. ρ1 .0222 * exp( 0.01 '*' 0. and ρ restricted to be equal to the estimates reported in Kemfert (1998). c( "K1".53. maxfev = 2000)) Estimation by non-linear least-squares using the 'LM' optimizer assuming an additive error term Coefficient 'rho_1' was fixed at 0. rho1 = 0.0222 * exp( 0. codes: 0 '***' 0.. As the model estimated by Kemfert (1998) is restricted to have constant returns to scale (i. maxfev = 2000 ) ) Estimated CES function with constant returns to scale Call: cesEst(yName = "Y". R> R> R> R> + + R> GermanIndustry$K1 <.Arne Henningsen.' 0. ρ1 . 1000. and ρ to the estimates reported in Kemfert (1998) and only estimate the coefficients that are not reported.lm.e. we can simply multiply all inputs by eλ t . we have to impose the restriction on λ manually. rho = 0.1813 Convergence achieved after 22 iterations Message: Relative error between `par' and the solution is at most `ptol'.21 In the following. data = GermanIndustry. these estimated coefficients are in the economically meaningful region and the fit of the model is worse than for the unrestricted estimation using the Levenberg-Marquardt algorithm (larger standard deviation of the residuals).lm.GermanIndustry$K GermanIndustry$E1 <.control( maxiter = summary( cesLmFixed1 ) * exp( 0. "E1".. rho1 = 0.05 '. the output is linearly homogeneous in all three inputs). we restrict the coefficients λ.1 ' ' 1 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution As expected. δ1 . Coefficients: Estimate Std. * GermanIndustry$time ) * GermanIndustry$time ) * GermanIndustry$time ) data = GermanIndustry. G´eraldine Henningsen 47 Signif. control = nls. "A1").5300. we could not replicate the estimates reported in Kemfert (1998). the optimisation algorithm reports an unsuccessful convergence and estimates of the coefficients and elasticities of substitution are still not close to the estimates reported in Kemfert (1998).53 Coefficient 'rho' was fixed at 0. The following commands adjust the three input variables and re-estimate the model with coefficients λ.cesEst( "Y". by setting argument multErr of function cesEst to TRUE). rho = 0. However. "A1" ). we re-estimated the models assuming that the error term was multiplicative (i.. xNames = c("K1".1813. . and δ. However. even when using this method.1813. because we thought that Kemfert (1998) could have estimated the model in logarithms without reporting this in the paper.e. control = nls.GermanIndustry$E GermanIndustry$A1 <.control(maxiter = 1000. i. Error t value Pr(>|t|) 21 As we were unable to replicate the results of Kemfert (1998) by assuming an additive error term.

-0.8465 2. -0. 4.4.290 0.767 E_(1.48 Econometric Estimation of the Constant Elasticity of Substitution Function in R gamma 1.2.075280 0.8. 0. .526 0. 10.181300 6.2068 0. 11. LM.965 Residual standard error: 12. 0. 12. 12.2. 8. Surprisingly.6536 2. L-BFGS-B. 3. 0.2. 7.5. although the coefficients λ.9204 0.103 0. 5.6.4. 3.6.884490 rho_1 0.1. Moreover. 0.2. -0. Hence.003031 delta 0. 0. -0. 0.1. BFGS.2.0.2)_3 (AU) 0. 1. 10.2. 2. the R2 value of our restricted estimation is not equal to the R2 value reported in Kemfert (1998).918 0.0.72876 Multiple R-squared: 0. and 14.0. 0. Error t value Pr(>|t|) E_1_2 (HM) 0.772 HM = Hicks-McFadden (direct) elasticity of substitution AU = Allen-Uzawa (partial) elasticity of substitution The fit of this model is—of course—even worse than the fit of the two previous models. -0.8.6. ρ1 .4. 0. 2. 9.0.7.8. DE) ˆ one gradient-based algorithms for unrestricted optimisation (LM) with starting values equal to the estimates from the global algorithms ˆ one gradient-based algorithms for restricted optimisation (PORT) with starting values equal to the estimates from the global algorithms ˆ two-dimensional grid search for ρ1 and ρ using all gradient-based algorithms (Newton.8. 0. as our R2 value is much larger for one nesting structure. 3. 1.2.165851 4.3. 1.6. Given our unsuccessful attempts to reproduce the results of Kemfert (1998) and the convergence problems in our estimations above. each grid search included 612 = 3721 estimations. 6. Given that ρ1 and ρ are identical to the corresponding parameters published in Kemfert (1998). we systematically re-estimated the models of Kemfert (1998) using cesEst with many different estimation methods: ˆ all gradient-based algorithms for unrestricted optimisation (Newton. 10. 13.806 0.0. 7. 3.245 -0. 8. 6.933 0. 8. BFGS. 2.4. 11.2. 2.8.2.044 0.530000 rho 0. -0.9936586 Elasticities of Substitution: Estimate Std. -0.4. 13.680111 5.035786 1.0.296 0. 9.0. 4.0. SANN.8. -0.6.4.2. PORT)22 22 The two-dimensional grid search included 61 different values for ρ1 and ρ: -1.4.0.6.4. 2. 12. 4.0. 0.8.6. -0.7. 5. 1.5.8. and ρ are exactly the same. whilst it is considerably smaller for the other.599 0.4. 1.494895 delta_1 -0. 3. 6.0. PORT) ˆ all global algorithms (NM.092737 0.9.085 0.6.6. For the two other nesting structures.6. the R2 values strongly deviate from the values reported in Kemfert (1998).4. LM) ˆ all gradient-based algorithms for restricted optimisation (L-BFGS-B.8. the reported elasticities of substitution are also equal to the values published in Kemfert (1998).3.8. the coefficient δ1 is not in the economically meaningful range.9.

5. δ. grid-search with the PORT or BFGS algorithm (second nesting structure). δ1 . LM. or the PORT algorithm with starting values taken from global algorithms or grid search (second and third nesting structure).max = 1000 (max. respectively. where the PORT algorithm can be used to restrict the estimates to the economically meaningful region. 2. Please note that the grid-search with an algorithm for unrestricted optimisation (e. these results suggest that the three-input nested CES production technology is a poor approximation of the true production technology.g.max = 1000 (max.000. because either coefficient ρ1 or coefficient ρ is less than than minus one. and 3. However.23 The models with the best fit. 10.24 23 The R script that we used to conduct the estimations and to create these tables is available in Appendix D. PORT) with starting values equal to the estimates from the corresponding grid searches. or that the data are not reasonably constructed (e. itermax = 1e4 (max.e.000 iterations) ˆ L-BFGS-B: maxit = 5000 (max. G´eraldine Henningsen 49 ˆ all gradient-based algorithms (Newton. 1. 2.. i. 5.000 function evaluations) ˆ Nelder-Nead: maxit = 5000 (max. 24 . are always obtained by the Levenberg-Marquardt algorithm—either with the standard starting values (third nesting structure).000 iterations) ˆ Levenberg-Marquardt: maxiter = 1000 (max.000 iterations).000 iterations) ˆ SANN: maxit = 2e6 (2..000 iterations) ˆ DE: NP = 500 (500 population members). maxfev = 2000 (max. LevenbergMarquardt) can be used to guarantee economically meaningful values for ρ1 and ρ. Hence. the models with the best fit are obtained by grid-search with the Levenberg-Marquardt algorithm (first and second nesting structure).Arne Henningsen.000 iterations). but not for γ.000 population generations) The results of all these estimation approaches for the three different nesting structures are summarised in Tables 1.000 function evaluations) ˆ PORT: iter.g. If we only consider estimates that are economically meaningful. problems with aggregation or deflation). 1.. with the smallest sums of squared residuals. L-BFGS-B. 500 iterations) ˆ BFGS: maxit = 5000 (max. we can conclude that the Levenberg-Marquardt and the PORT algorithms are—at least in this study—most likely to find the coefficients that give the best fit to the model. we changed the following control parameters of the optimisation algorithms: ˆ Newton: iterlim = 500 (max. or with starting values taken from global algorithms (second and third nesting structure) or grid search (first and third nesting structure). Assuming that the basic assumptions of economic production theory are satisfied in reality. 1. all estimates that give the best fit are economically meaningless. eval. and ν. the PORT algorithm with default starting values (second nesting structure). 5. For these estimations. BFGS.

0000 0.0210 0.8728 51.9954 0.9864 0.9883 0.0210 0.6487 52.2000 -1.5797 11.5300 10.0000 63.2078 0.8245 0.9870 0.0000 0.5447 0.9955 0.0000 1.0119 0.1470 -2.6000 14.9576 0.PORT NM .8845 0.0222 0.0126 0.LM Newton grid Newton grid start BFGS grid BFGS grid start PORT grid PORT grid start LM grid LM grid start 1.1078 0.1233 -0.9508 1.2479 7.2000 -0.9517 0.5797 2.9954 0.3934 8.9959 0.3677 2.4582 -0.7807 0.9956 0.9901 -1.0000 0.9956 0.9798 0.0150 0.0206 0.8310 0.3720 28.0150 0.0000 0.0000 0.4887 0. .2078 0.9872 0.9959 Table 1: Estimation results with first nesting structure: in the column titled “c”.4234 7.0000 0. whereas a “0” indicates that the optimisation algorithm reports non-convergence.7565 0.9850 3.7022 -2.0208 0.1813 3.7565 0.0207 0.9957 0. The rows titled “<globAlg> .5940 0.4887 0. ρ1 .9135 -1.6000 11.LM DE DE . The rows entitled “<gradAlg> grid” present the results when a two-dimensional grid search for ρ1 and ρ was performed using the gradient-based algorithm “<gradAlg>” at each grid point.6000 11.0208 0.6000 11.3205 9.8727 9.0920 2.0236 0.0143 0.2138 16.0000 0.0210 δ1 δ -0.0210 0.3870 13. The row entitled “fixed” presents the results when λ.8340 λ 0.9949 0.0000 0.9700 3.50 Econometric Estimation of the Constant Elasticity of Substitution Function in R γ Kemfert (1998) fixed Newton BFGS L-BFGS-B PORT LM NM NM .9728 0.0208 0.6382 6.0118 0.9955 0. The rows highlighted with in red include estimates that are economically meaningless.0001 0.0206 0.0000 0.0000 0.PORT SANN .0203 0. and ρ are restricted to have exactly the same values that are published in Kemfert (1998).6204 7.9954 0.3205 4.1386 1.9955 1.9347 ρ 0.8379 3.PORT DE .5373 0.2045 3.7000 -0.0900 2.9731 28.4400 6.1813 0.0000 1.0030 0.2704 10.9822 0.0001 ρ1 0.3841 3.0206 0.0207 0.9811 0.4882 8.9937 0.1000 -0.0000 0.9959 0.2192 14.0222 0.3205 15.0000 0.9996 0.LM SANN SANN .4668 -0.<gradAlg>” present the results when the model is first estimated by the global algorithm “<globAlg>” and then estimated by the gradient-based algorithm “<gradAlg>” using the estimates of the first step as starting values.9956 0.0000 0.8187 0.9656 0.5421 -0.9811 0.1502 11.0000 -2. The rows entitled “<gradAlg> grid start” present the results when the gradient-based algorithm “<gradAlg>” was used with starting values equal to the results of a two-dimensional grid search for ρ1 and ρ using the same gradient-based algorithm at each grid point.1502 4.0118 0.4969 c RSS 1 1 1 1 0 0 1 0 0 5023 14969 3608 14969 3566 3264 4043 3542 3266 16029 9162 10794 14075 9299 10611 3637 3637 3465 3465 3469 3469 3416 3259 0 1 0 1 1 0 1 1 1 0 1 0 R2 0.9528 -0.0002 0.0000 0.0370 0.9780 0.3982 3.8000 11.9423 6.9884 0.1078 0.9866 0.0120 0.8000 6.4949 14.1241 28. a “1” indicates that the optimisation algorithm reports a successful convergence.1000 0.0208 0.0000 0.9956 0.5300 0.0209 0.1791 3.7000 -0.0000 0.

9885 0.2382 7.0207 0.9942 0.5434 7.2167 -0.6353 5.0208 0.3101 6.7211 6.1816 -0.1000 -1.2895 0.0203 0.5015 16.9951 0.9942 0.9296 1.9954 0.9954 Table 2: Estimation results with second nesting structure: see note below Table 1.9910 0.5434 7. G´eraldine Henningsen γ Kemfert (1998) fixed Newton BFGS L-BFGS-B PORT LM NM NM .9892 0.5434 7.5208 5.7860 0.8530 -1.0208 0.7743 -2.6806 5.0207 0.7889 0.6891 5.2000 5.LM Newton grid Newton grid start BFGS grid BFGS grid start PORT grid PORT grid start LM grid LM grid start 1.9951 0.9798 0.3052 0.2841 5.0000 0.0207 0.0000 -1.9954 0.0206 0.0208 δ1 δ 0.2995 7.9984 0.0000 0.0069 0.3334 6.6749 7.6612 -1.9291 0.9892 0.9950 0.3350 -1.9291 1.9296 0.3534 9.9952 0.Arne Henningsen.8315 7.9940 0.9951 0.2543 6.3761 -0.0208 0.2155 0.9951 0.9951 0.9951 0.9748 0.8427 0.0207 0.8741 0.9952 0.9954 0.9291 1.0208 0.1869 λ 0.2000 6.9919 0.2545 5.0249 0.9293 1.9278 5.9291 1.9951 0.0000 -5.9296 0.9951 0.0207 0.0000 0.0207 0.0000 0.9933 0.2000 5.0207 0.0112 ρ 1.0026 0.9942 0.8000 5.0000 -5.0207 0.0207 0.5015 16.9880 0.9951 0.0182 0.0000 -5.0000 -1.0000 -5.0780 12.3101 5.9892 0.0069 0.5243 16.9466 7.9880 0.9296 0.8120 -1.5015 16.2248 51 c RSS 1 1 1 1 1 1 1 1 1 14146 3858 3778 3857 3844 3646 5342 3844 3636 4740 3844 3638 3875 3844 3640 3925 3826 3844 3844 3844 3844 3844 3642 1 1 1 1 1 0 1 1 1 1 1 1 R2 0.LM DE DE .LM SANN SANN .5434 16.9892 0.1816 1.2000 5.9821 0.9951 0.0207 0.PORT SANN .9954 0.0000 -5.0208 0.2155 5.9940 0.3101 6.0000 ρ1 0.2412 5.9555 0.7330 -1.PORT DE .9942 0.9886 0.0000 -1.5015 7.9951 0.4188 0.5056 5.6826 0.0194 0.8613 0.0167 0.3101 6. .0000 -1.0207 0.9942 0.9219 0.2132 2.PORT NM .9880 0.2680 0.9880 0.9951 0.3299 -0.1034 9.0133 3.0997 -0.0065 4.0207 0.

9958 0.9899 0.8632 0.9687 8.0000 0.1100 0.7944 -0.5964 9.0207 0.9440 0.4000 8.9956 0.9955 0.3619 15.5286 7.0192 0.0220 0.9956 0.PORT SANN .0208 0.3998 8.9437 0.0207 0.6000 7.9956 0.5834 0.0000 0.0898 15.9924 0.0000 -0.2264 0.3654 1.5303 10.9956 0.0085 ρ1 1.0000 7.9958 0.0000 -6.9942 0.1058 0.0000 0.6353 13.7000 -1.9686 1.0345 0.9955 0.0209 0.8632 0.52 Econometric Estimation of the Constant Elasticity of Substitution Function in R γ Kemfert (1998) fixed Newton BFGS L-BFGS-B PORT LM NM NM .3654 -0.PORT NM .8789 -6.0208 0.9956 0.2220 -1.7174 -1.9580 0.3492 15.0208 0.9258 10.7169 7.LM SANN SANN .0208 0.9258 11.5533 19.0000 0.9714 6.8498 19.0000 8.4859 -0.1487 0.9958 0.0085 0.5709 15.0085 1.9917 19.0724 0.0497 -0.7167 3.9937 0.9686 8.0208 0.1082 0.7167 9.4646 6.0196 0.LM DE DE .5155 9.0209 0.9687 c RSS 0 1 1 1 0 1 1 1 1 72628 4516 4955 3537 3518 3357 3582 3510 3357 4596 3510 3357 3588 3510 3357 3553 3551 3532 3532 3531 3510 3528 3357 1 0 1 1 1 1 1 1 1 1 1 1 R2 0.0000 -6.0207 0.8632 0.5285 6.9958 0.9955 0.0207 0.9958 Table 3: Estimation results with third nesting structure: see note below Table 1.0485 ρ 5.0482 -0.0345 0.9986 0.7042 7.2177 7.9997 0.8000 -6.9997 0.9924 0.0000 0.LM Newton grid Newton grid start BFGS grid BFGS grid start PORT grid PORT grid start LM grid LM grid start 17.0064 0.8482 0.1100 0.8617 0.8327 1.0480 0.7169 λ 0.0208 0.0000 -6.0086 0.1072 -1.5000 -0.0555 0.8327 5.0207 0.5533 19.6767 5.0000 0.6653 7.0068 0. .0208 0.9955 0.0209 0.7000 -0.9955 0.5533 12.2490 0.0085 0.2316 0.0206 0.9943 0.0208 0.0345 0.0207 δ1 δ -30.9955 0.8632 0.5898 -0.4646 6.1286 0.9955 0.6131 -0.9955 0.8000 6.7198 15.7305 7.4646 6.0497 -0.0064 0.7038 7.9083 0.PORT DE .4646 6.9534 1.0345 0.8745 -1.5533 19.0156 -0.0208 0.9955 0.7000 -0.9967 0.0000 0.

.4)) R> cesLmGridRho1 <. + rho1 = rhoVec. R> plot( cesLmGridRho1 ) As it is much easier to spot the summit of a hill than the deepest part of a valley (e. as this will help us to understand the problems that the optimisation algorithms have when finding the minimum of the sum of squared residuals.lm. maxfev = 2000).file("Kemfert98Nest1GridLm. tName = "time". This figure indicates that the surface is not always smooth. p.g. package = "micEconCES")) so that the reader can obtain the estimation result more quickly.control(maxiter = 1000. We demonstrate this with the first nesting structure and the Levenberg-Marquardt algorithm using the same values for ρ1 and ρ and the same control parameters as before.cesEst("Y". The fit of the model is best (small sum of squared residuals.045e-19 delta 3.06667 Inf AU = Allen-Uzawa (partial) elasticity of substitution HM = Hicks-McFadden (direct) elasticity of substitution We start our analysis of the results from the grid search by applying the standard plot method. 14. The idea of including results from computationally burdensome calculations in the corresponding R package is taken from Hayfield and Racine (2008. 15). This is shown in Figure 6. control = nls. c("K".lm. + data = GermanIndustry. data = GermanIndustry. the plot method plots the negative sum of squared residuals against ρ1 and ρ.2)_3 (AU) 0. 0. maxfev = 2000).000e+00 Elasticities of Substitution: E_1_2 (HM) E_(1. 25 As this grid search procedure is computationally burdensome.Arne Henningsen. rho = rhoVec) R> print(cesLmGridRho1) Estimated CES function Call: cesEst(yName = "Y". 0. rho = rhoVec) Coefficients: gamma lambda 1.087e-02 delta_1 6.control(maxiter = 1000. we get the same results as shown in Table 1:25 R> rhoVec <. The object cesLmGridRho1 can be loaded with the command load(system.512e+01 2.RData".c(seq(-1.696e-02 rho_1 rho 1.2).2. tName = "time". rho1 = rhoVec. "A"). because the deepest part could be hidden behind a ridge). G´eraldine Henningsen 53 Given the considerable variation of the estimation results. + control = nls.400e+01 -1. seq(1. Hence. 4. "E".1).4. seq(4. . we will examine the results of the grid searches for ρ1 and ρ. "E". 0. "A"). 1. we have—for the reader’s convenience— included the resultant object in the micEconCES package. xNames = c("K".

Econometric Estimation of the Constant Elasticity of Substitution Function in R negative sums of squared residuals −50000 −100000 −150000 −200000 −250000 10 5 5 o_ o rh 1 10 rh 54 0 0 Figure 6: Goodness of fit for different values of ρ1 and ρ. .

e. 1] (no matter the value of ρ1 ) or if ρ1 is approximately in the interval [2. The resulting Figure 7 shows that the fit of the model clearly improves with increasing ρ1 and decreasing ρ. the fit is worst (large sum of squared residuals. Furthermore. R> cesLmGridRho1a <.53 and ρ = 0. ρ1 = 0. At the estimates of Kemfert (1998).NA R> plot( cesLmGridRho1a ) negative sums of squared residuals −3500 −4000 −4500 10 5 5 rh o_ o rh 1 10 0 0 Figure 7: Goodness of fit (best-fitting region only).8 and 2 depending the value of ρ). G´eraldine Henningsen 55 light green colour) if ρ is approximately in the interval [−1. We also replicated the estimations for seven industrial sectors. We obtained similar results for the other two nesting structures.1813.Arne Henningsen. 7] and ρ is smaller than 7.26 but again. We contacted 26 The R commands for conducting these estimations are included in the R script that is available in Appendix D. the sum of the squared residuals is clearly not at its minimum. . i.000. In order to get a more detailed insight into the best-fitting region. we re-plot Figure 6 with only sums of squared residuals that are smaller than 5.. red colour) if ρ is larger than 5 and ρ1 has a low value (the upper limit is between −0. we could not reproduce any of the estimates published in Kemfert (1998).cesLmGridRho1 R> cesLmGridRho1a$rssArray[ cesLmGridRho1a$rssArray >= 5000 ] <.

As the CES function is not easy to estimate.g..e. Unfortunately. The function cesEst is constructed in a way that allows the user to switch easily between different estimation and optimisation procedures. including factor augmented technological change). i. given an objective function that is seldom well-behaved. fails. However. they will definitely benefit from this paper. as it provides a multitude of insights and practical hints regarding the estimation of CES functions. ρ2 . we were unable to find the reason for the large deviations in the estimates.. the user can visualise the surface of the objective function to be minimised and. The micEconCES package is open-source software and modularly programmed. even if some readers choose not to use the micEconCES package for estimating a CES function. ρ). which include the linear approximation suggested by Kmenta (1967). . The micEconCES package provides such a solution. technological change and three-input and four-input nested CES functions. in particular. Section 5. correspond to the smallest sums of squared residuals. gradient-based and global optimisation algorithms. Additionally. Additionally. the CES function has gained in popularity in macroeconomics and especially growth theory. we are confident that the results obtained by cesEst are correct.2 demonstrates how these control instruments support the identification of the global optimum when a simple non-linear estimation. The grid search procedure. the user can impose restrictions on the parameters to enforce economically meaningful parameter estimates. a software solution to estimate the CES function may further increase its popularity. as it is clearly more flexible than the classical Cobb-Douglas function. 6. both the Shazam scripts used for the estimations and the corresponding output files have been lost. Hence. i. Its function cesEst can not only estimate the traditional two-input CES function. increases the probability of finding a global optimum. and an extended grid-search procedure that returns stable results and alleviates convergence problems. the grid search allows one to plot the objective function for different values of the substitution parameters (ρ1 . This option is a further control instrument to ensure that the global optimum has been reached. hence.e.56 Econometric Estimation of the Constant Elasticity of Substitution Function in R the author and asked her to help us to identify the reasons for the differences between her results and ours. Conclusion In recent years. Hence. However. check the extent to which the objective function is well behaved and which parameter ranges of the substitution parameters give the best fit to the model. which makes it easy for the user to extend and modify (e. Furthermore. in contrast. the micEconCES package provides the user with a multitude of estimation and optimisation procedures.. In doing so. the user can easily use several different methods and compare the results. but also all major extensions.

Arne Henningsen. (37) (38) (39) At the point of approximation ρ = 0 we have g (0) = 1 (40) .1. ρ (35) We define function so that Now we can calculate the first partial derivative of f (ρ): f 0 (ρ) = ν ν g 0 (ρ) ln (g (ρ)) − ρ2 ρ g (ρ) (36) and the first three derivatives of g (ρ) −ρ g 0 (ρ) = −δ x−ρ 1 ln x1 − (1 − δ) x2 ln x2 00 g (ρ) = 000 g (ρ) = −ρ 2 2 δ x−ρ 1 (ln x1 ) + (1 − δ) x2 (ln x2 ) −ρ 3 3 −δ x−ρ 1 (ln x1 ) − (1 − δ) x2 (ln x2 ) . Traditional two-input CES function A. Derivation of the Kmenta approximation This derivation of the first-order Taylor series (Kmenta) approximation of the traditional two-input CES function is based on Uebe (2000). G´eraldine Henningsen 57 A. (32) Now we can approximate the logarithm of the CES function by a first-order Taylor series approximation around ρ = 0 : ln y ≈ ln γ + λ t + f (0) + ρf 0 (0) (33) −ρ g (ρ) ≡ δ x−ρ 1 + (1 − δ) x2 (34) ν f (ρ) = − ln (g (ρ)) . Traditional two-input CES function: y = γe λt  δ x−ρ 1 + (1 − δ) x−ρ 2 − ν ρ (29) Logarithmized CES function: ln y = ln γ + λ t −  ν  −ρ ln δ x1 + (1 − δ) x−ρ 2 ρ (30) Define function f (ρ) ≡ −   ν −ρ + (1 − δ) x ln δ x−ρ 2 1 ρ (31) so that ln y = ln γ + λ t + f (ρ) .

58 Econometric Estimation of the Constant Elasticity of Substitution Function in R g 0 (0) = −δ ln x1 − (1 − δ) ln x2 2 00 g (0) = δ (ln x1 ) + (1 − δ) (ln x2 ) (41) 2 3 000 g (0) = −δ (ln x1 ) − (1 − δ) (ln x2 ) (42) 3 (43) Now we calculate the limit of f (ρ) for ρ → 0: f (0) = = lim f (ρ) (44) −ν ln (g (ρ)) ρ→0 ρ (45) ρ→0 lim 0 = (ρ) −ν gg(ρ) lim 1 = ν (δ ln x1 + (1 − δ) ln x2 ) ρ→0 (46) (47) and the limit of f 0 (ρ) for ρ → 0: f 0 (0) = lim f 0 (ρ)   ν ν g 0 (ρ) = lim ln (g (ρ)) − ρ→0 ρ2 ρ g (ρ) (48) ρ→0 (49) 0 = lim (ρ) ν ln (g (ρ)) − ν ρ gg(ρ) ν = = lim ρ→0 lim − ρ→0 (50) ρ2 ρ→0 g 0 (ρ) g(ρ) −ν g 0 (ρ) g(ρ) −νρg 00 (ρ)g(ρ)−(g 0 (ρ))2 (g(ρ))2 2ρ ν g 00 (ρ) g (ρ) − (g 0 (ρ))2 2 (g (ρ))2 ν g 00 (0) g (0) − (g 0 (0))2 2 (g (0))2   ν − δ (ln x1 )2 + (1 − δ) (ln x2 )2 − (−δ ln x1 − (1 − δ) ln x2 )2 2 ν − δ (ln x1 )2 + (1 − δ) (ln x2 )2 − δ 2 (ln x1 )2 2  2 2 −2δ (1 − δ) ln x1 ln x2 − (1 − δ) (ln x2 )     ν − δ − δ 2 (ln x1 )2 + (1 − δ) − (1 − δ)2 (ln x2 )2 2  −2δ (1 − δ) ln x1 ln x2  ν δ (1 − δ) (ln x1 )2 + (1 − δ) (1 − (1 − δ)) (ln x2 )2 − 2  −2δ (1 − δ) ln x1 ln x2 (51) (52) = − (53) = (54) = = = = −  νδ (1 − δ)  (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 2 (55) (56) (57) (58) .

The first-order Taylor series approximations of these derivatives at the point ρ = 0 are shown in the main paper in Equations 20 to 23. Derivatives with respect to Gamma ∂y ∂γ  − ν ρ −ρ + (1 − δ) x = eλ t δ x−ρ 1 2    ν −ρ −ρ λt = e exp − ln δ x1 + (1 − δ) x2 ρ = eλ t exp (f (ρ)) (62) (63) 0 λt (61)  ≈ e exp f (0) + ρf (0)   1 = eλ t exp νδ ln x1 + ν (1 − δ) ln x2 − νρ δ (1 − δ) (ln x1 − ln x2 )2 2   1 2 λ t νδ ν(1−δ) = e x1 x2 exp − νρ δ (1 − δ) (ln x1 − ln x2 ) 2 (64) (65) (66) Derivatives with respect to Delta ∂y ∂δ − ν −1   ν  −ρ ρ −ρ −ρ δ x1 + (1 − δ) x−ρ x − x 2 1 2 ρ  − ν −1 x−ρ − x−ρ ρ −ρ 2 δ x−ρ + (1 − δ) x = −γ eλ t ν 1 1 2 ρ = −γ eλ t (67) (68) Now we define the function fδ (ρ) fδ (ρ) = = −ρ  − ν −1 x−ρ ρ −ρ 1 − x2 δ x−ρ + (1 − δ) x 1 2 ρ     −ρ  x−ρ ν −ρ −ρ 1 − x2 exp − + 1 ln δ x1 + (1 − δ) x2 ρ ρ (69) (70) . G´eraldine Henningsen = − νδ (1 − δ) (ln x1 − ln x2 )2 2 59 (59) so that we get following first-order Taylor series approximation around ρ = 0: ln y ≈ ln γ + λ t + νδ ln x1 + ν (1 − δ) ln x2 − νρ δ (1 − δ) (ln x1 − ln x2 )2 (60) A. Derivatives with respect to coefficients The partial derivatives of the two-input CES function with respect to all coefficients are shown in the main paper in Equations 15 to 19.2. These calculations are inspired by Uebe (2000). Functions f (ρ) and g(ρ) are defined as in the previous section (in Equations 31 and 34. The derivations of these Taylor series approximations are shown in the following.Arne Henningsen. respectively).

60 Econometric Estimation of the Constant Elasticity of Substitution Function in R so that we can approximate ∂y/∂δ by using the first-order Taylor series approximation of fδ (ρ): ∂y ∂δ = −γ eλ t ν fδ (ρ) (71) ≈ −γ eλ t ν fδ (0) + ρfδ0 (0)  Now we define the helper functions gδ (ρ) and hδ (ρ)     ν −ρ gδ (ρ) = + 1 ln δ x−ρ + (1 − δ) x 1 2 ρ   ν = + 1 ln (g (ρ)) ρ hδ (ρ) = −ρ x−ρ 1 − x2 ρ (72) (73) (74) (75) with first derivatives   0 ν ν g (ρ) (ρ) = − 2 ln (g (ρ)) + +1 ρ ρ g (ρ)   −ρ −ρ − x−ρ −ρ ln x1 x−ρ 1 + x2 1 − ln x2 x2 0 hδ (ρ) = ρ2 gδ0 (76) (77) so that fδ (ρ) = hδ (ρ) exp (−gδ (ρ)) (78) fδ0 (ρ) = h0δ (ρ) exp (−gδ (ρ)) − hδ (ρ) exp (−gδ (ρ)) gδ0 (ρ) (79) and Now we can calculate the limits of gδ (ρ). and h0δ (ρ) for ρ → 0 by gδ (0) = lim gδ (ρ)    ν = lim + 1 ln (g (ρ)) ρ→0 ρ (ν + ρ) ln (g (ρ)) = lim ρ→0 ρ ρ→0 ln (g (ρ)) + (ν + ρ) = lim g 0 (ρ) g(ρ) 1 0 g (0) = ln (g (0)) + ν g (0) = −νδ ln x1 − ν (1 − δ) ln x2 ρ→0 (80) (81) (82) (83) (84) (85) 0 gδ0 (0) = lim ρ→0 (ρ) −ν ln (g (ρ)) + ρ (ν + ρ) gg(ρ) ρ2 (86) . hδ (ρ). gδ0 (ρ).

Arne Henningsen. G´eraldine Henningsen 0 0 = lim = = 00 (ρ)g(ρ)−(g 0 (ρ))2 (g(ρ))2 (87) 2ρ ρ→0  = 0 (ρ) (ρ) (ρ) + (ν + ρ) gg(ρ) + ρ gg(ρ) + ρ (ν + ρ) g −ν gg(ρ) 61 lim  0 0 0 0 (ρ) (ρ) (ρ) (ρ) + ν gg(ρ) + ρ gg(ρ) + ρ gg(ρ) + ρ (ν + ρ) g −ν gg(ρ) lim g 00 (ρ) g (ρ) − (g 0 (ρ))2 g 0 (ρ) 1 + (ν + ρ) g (ρ) 2 (g (ρ))2  2 (g(ρ))  2ρ ρ→0 ρ→0 00 (ρ)g(ρ)−(g 0 (ρ))2 ! (89) g 0 (0) 1 g 00 (0) g (0) − (g 0 (0))2 + ν g (0) 2 (g (0))2 = −δ ln x1 − (1 − δ) ln x2 + (90) νδ (1 − δ) (ln x1 − ln x2 )2 2 (91) −ρ x−ρ 1 − x2 hδ (0) = lim ρ→0 ρ −ρ − ln x1 x−ρ 1 + ln x2 x2 = lim ρ→0 1 = − ln x1 + ln x2 h0δ (0) = lim (93) (94) (95) ρ2  = (92)   −ρ −ρ − x−ρ − ln x x −ρ ln x1 x−ρ 2 1 + x2 2 1 ρ→0 lim     −ρ 2 −ρ − ln x1 x−ρ + ρ (ln x1 )2 x−ρ 1 − ln x2 x2 1 − (ln x2 ) x2 2ρ ρ→0 (88) ! −ρ ln x1 x−ρ − ln x x 2 2 1 + 2ρ  1 2 −ρ = lim (ln x1 )2 x−ρ − (ln x ) x 2 1 2 ρ→0 2   1 = (ln x1 )2 − (ln x2 )2 2  (96) (97) (98) so that we can calculate the limit of fδ (ρ)and fδ0 (ρ) for ρ → 0 by fδ (0) = = = = lim fδ (ρ) (99) ρ→0 lim (hδ (ρ) exp (−gδ (ρ))) (100) lim hδ (ρ) lim exp (−gδ (ρ)) ρ→0   lim hδ (ρ) exp − lim gδ (ρ) (101) ρ→0 ρ→0 ρ→0 ρ→0 (102) = hδ (0) exp (−gδ (0)) (103) = (− ln x1 + ln x2 ) exp (νδ ln x1 + ν (1 − δ) ln x2 ) (104) = (− ln x1 + ν(1−δ) ln x2 ) xνδ 1 x2 (105) .

62 Econometric Estimation of the Constant Elasticity of Substitution Function in R fδ0 (0) = = = lim fδ0 (ρ) (106) ρ→0 lim h0δ (ρ) exp (−gδ (ρ)) − hδ (ρ) exp (−gδ (ρ)) gδ0 (ρ)  ρ→0 lim h0δ (ρ) lim exp (−gδ (ρ)) ρ→0 (108) ρ→0 − lim hδ (ρ) lim exp (−gδ (ρ)) lim gδ0 (ρ) ρ→0 ρ→0 ρ→0   = lim h0δ (ρ) exp − lim gδ (ρ) ρ→0 ρ→0   − lim hδ (ρ) exp − lim gδ (ρ) lim gδ0 (ρ) ρ→0 = = = = = = = = = ρ→0 h0δ (107) (109) ρ→0 (0) exp (−gδ (0)) − hδ (0) exp (−gδ (0)) gδ0 (0)  exp (−gδ (0)) h0δ (0) − hδ (0) gδ0 (0)    1 exp (νδ ln x1 + ν (1 − δ) ln x2 ) (ln x1 )2 − (ln x2 )2 2 − (− ln x1 + ln x2 )   νδ (1 − δ) (ln x1 − ln x2 )2 −δ ln x1 − (1 − δ) ln x2 + 2  1 ν(1−δ) 1 xνδ (ln x1 )2 − (ln x2 )2 + ln x1 − ln x2 1 x2 2 2   νδ (1 − δ) 2 (ln x1 − ln x2 ) −δ ln x1 − (1 − δ) ln x2 + 2  1 ν(1−δ) 1 xνδ (ln x1 )2 − (ln x2 )2 − δ (ln x1 )2 1 x2 2 2 νδ (1 − δ) − (1 − δ) ln x1 ln x2 + ln x1 (ln x1 − ln x2 )2 2  νδ (1 − δ) +δ ln x1 ln x2 + (1 − δ) (ln x2 )2 − ln x2 (ln x1 − ln x2 )2 2     1 1 2 νδ ν(1−δ) x1 x2 − δ (ln x1 ) + − δ (ln x2 )2 2 2    1 νδ (1 − δ) −2 − δ ln x1 ln x2 + (ln x1 − ln x2 ) (ln x1 − ln x2 )2 2 2   1 ν(1−δ) xνδ − δ (ln x1 − ln x2 )2 1 x2 2  νδ (1 − δ) 2 + (ln x1 − ln x2 ) (ln x1 − ln x2 ) 2   νδ (1 − δ) νδ ν(1−δ) 1 −δ+ (ln x1 − ln x2 ) (ln x1 − ln x2 )2 x1 x2 2 2 1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) νδ ν(1−δ) x1 x2 (ln x1 − ln x2 )2 2 (110) (111) (112) (113) (114) (115) (116) (117) (118) and approximate ∂y/∂δ by ∂y ∂δ  ≈ −γ eλ t ν fδ (0) + ρfδ0 (0) (119) .

G´eraldine Henningsen 63  ν(1−δ) = −γ eλ t ν (− ln x1 + ln x2 ) xνδ 1 x2 (120) 1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) νδ ν(1−δ) x1 x2 (ln x1 − ln x2 )2 2  ν(1−δ) = γ eλ t ν (ln x1 − ln x2 ) xνδ 1 x2  1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) νδ ν(1−δ) x1 x2 (ln x1 − ln x2 )2 2  +ρ −ρ (121) ν(1−δ) = γ eλ t ν (ln x1 − ln x2 ) xνδ 1 x2   1 − 2δ + νδ (1 − δ) (ln x1 − ln x2 ) 1−ρ (ln x1 − ln x2 ) 2 (122) Derivatives with respect to Nu  − ν 1  ∂y ρ −ρ −ρ −ρ = −γ eλ t ln δ x−ρ + (1 − δ) x δ x + (1 − δ) x 1 2 1 2 ∂ν ρ Now we define the function fν (ρ)  − ν 1  −ρ ρ −ρ −ρ fν (ρ) = ln δ x1 + (1 − δ) x−ρ δ x + (1 − δ) x 2 1 2 ρ     ν 1  −ρ −ρ −ρ −ρ exp − ln δ x1 + (1 − δ) x2 = ln δ x1 + (1 − δ) x2 ρ ρ (123) (124) (125) so that we can approximate ∂y/∂ν by using the first-order Taylor series approximation of fν (ρ): ∂y ∂ν = −γ eλ t fν (ρ) (126) ≈ −γ eλ t fν (0) + ρfν0 (0)  (127) Now we define the helper function gν (ρ) gν (ρ) = =  1  −ρ ln δ x1 + (1 − δ) x−ρ 2 ρ 1 ln (g (ρ)) ρ (128) (129) with first and second derivative 0 gν0 (ρ) = = (ρ) ρ gg(ρ) − ln (g (ρ)) 1 g 0 (ρ) ln (g (ρ)) − ρ g (ρ) ρ2 gν00 (ρ) = − (131) ln (g (ρ)) 1 g 0 (ρ) 1 g 0 (ρ) 1 g 00 (ρ) 1 (g 0 (ρ))2 + − + 2 − ρ2 g (ρ) ρ g (ρ) ρ (g (ρ))2 ρ3 ρ2 g (ρ) 0 = (130) ρ2 (ρ) −2ρ gg(ρ) + ρ2 g 00 (ρ) g(ρ) − ρ2 ρ3 (g 0 (ρ))2 (g(ρ))2 (132) + 2 ln (g (ρ)) (133) .Arne Henningsen.

gν0 (ρ).64 Econometric Estimation of the Constant Elasticity of Substitution Function in R and use the function f (ρ) defined above so that fν (ρ) = gν (ρ) exp (f (ρ)) (134) fν0 (ρ) = gν0 (ρ) exp (f (ρ)) + gν (ρ) exp (f (ρ)) f 0 (ρ) (135) and Now we can calculate the limits of gν (ρ). and gν00 (ρ) for ρ → 0 by gν (0) = = = lim gν (ρ) (136) ln (g (ρ)) ρ→0 ρ (137) ρ→0 lim lim g 0 (ρ) g(ρ) 1 = −δ ln x1 − (1 − δ) ln x2 gν0 (0) = ρ→0 lim gν0 (ρ)   1 g 0 (ρ) ln (g (ρ)) = lim − ρ→0 ρ g (ρ) ρ2 (138) (139) (140) ρ→0 (141) 0 = = lim (ρ) ρ gg(ρ) − ln (g (ρ)) lim ρ→0 (142) ρ2 ρ→0 g 0 (ρ) g(ρ) +ρg 00 (ρ)g(ρ)−(g 0 (ρ))2 (g(ρ))2 − g 0 (ρ) g(ρ) 2ρ g 00 (ρ) g (ρ) − (g 0 (ρ))2 = lim ρ→0 2 (g (ρ))2 = = = = = g 00 (0) g (0) − (g 0 (0))2 2 (g (0))2   1 2 2 2 δ (ln x1 ) + (1 − δ) (ln x2 ) − (−δ ln x1 − (1 − δ) ln x2 ) 2 1 δ (ln x1 )2 + (1 − δ) (ln x2 )2 − δ 2 (ln x1 )2 2  2 2 −2δ (1 − δ) ln x1 ln x2 − (1 − δ) (ln x2 )     1 δ − δ 2 (ln x1 )2 + (1 − δ) − (1 − δ)2 (ln x2 )2 2  −2δ (1 − δ) ln x1 ln x2  1 δ (1 − δ) (ln x1 )2 + (1 − δ) (1 − (1 − δ)) (ln x2 )2 2  −2δ (1 − δ) ln x1 ln x2 (143) (144) (145) (146) (147) (148) (149) .

G´eraldine Henningsen = = gν00 (0) =  δ (1 − δ)  (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 2 δ (1 − δ) (ln x1 − ln x2 )2 2 lim gν00 (ρ)  0 (ρ) + ρ2 −2ρ gg(ρ) = lim  (152) g 00 (ρ) g(ρ) − ρ2 lim   lim  ρ→0 lim ρ→0 + 2 ln (g (ρ))  (153)  0 2 00 g 000 (ρ) g(ρ) (154) 3ρ2 ρ→0 + (g 0 (ρ))2 (g(ρ))2 ρ3 00 0 −ρ2 = (151) (ρ) (ρ)) (ρ) (ρ) − 2ρ gg(ρ) + 2ρ (g + 2ρ gg(ρ) + ρ2 −2 gg(ρ) (g(ρ))2  = (150) ρ→0 ρ→0 = 65 g 00 (ρ)g 0 (ρ) (g(ρ))2 ρ2 g 000 (ρ) g(ρ) 0 2 g 0 (ρ)g 00 (ρ) (g(ρ))2 2 3ρ (ρ)) − 2ρ (g − 2ρ2 (g(ρ))2 − 3ρ2 g 00 (ρ)g 0 (ρ) (g(ρ))2 3ρ2 + 2ρ2 (g 0 (ρ))3 (g(ρ))3 1 g 000 (ρ) g 00 (ρ) g 0 (ρ) 2 (g 0 (ρ))3 − + 3 g (ρ) 3 (g (ρ))3 (g (ρ))2 + 2ρ2 (g 0 (ρ))3 (g(ρ))3 0 (ρ) + 2 gg(ρ)     ! g 000 (0) g 00 (0) g 0 (0) 2 (g 0 (0))3 − + g (0) 3 (g (0))3 (g (0))2   −δ (ln x1 )3 − (1 − δ) (ln x2 )3 =   − δ (ln x1 )2 + (1 − δ) (ln x2 )2 (−δ ln x1 − (1 − δ) ln x2 ) = (155) 1 3 1 3 2 + (−δ ln x1 − (1 − δ) ln x2 )3 3 1 1 = − δ (ln x1 )3 − (1 − δ) (ln x2 )3 + δ 2 (ln x1 )3 3 3 +δ (1 − δ) (ln x1 )2 ln x2 + δ (1 − δ) ln x1 (ln x2 )2 + (1 − δ)2 (ln x2 )3  2 + δ 2 (ln x1 )2 + 2δ (1 − δ) ln x1 ln x2 + (1 − δ)2 (ln x2 )2 3 (−δ ln x1 − (1 − δ) ln x2 )   1 2 = δ − δ (ln x1 )3 + δ (1 − δ) (ln x1 )2 ln x2 3   1 2 2 +δ (1 − δ) ln x1 (ln x2 ) + (1 − δ) − (1 − δ) (ln x2 )3 3   2 2 2 2 2 + δ (ln x1 ) + 2δ (1 − δ) ln x1 ln x2 + (1 − δ) (ln x2 ) 3 (−δ ln x1 − (1 − δ) ln x2 )   1 2 = δ − δ (ln x1 )3 + δ (1 − δ) (ln x1 )2 ln x2 3 (156) (157) (158) (159) (160) (161) .Arne Henningsen.

66 Econometric Estimation of the Constant Elasticity of Substitution Function in R   1 2 +δ (1 − δ) ln x1 (ln x2 ) + (1 − δ) − (1 − δ) (ln x2 )3 3 2 3 − δ (ln x1 )3 + δ 2 (1 − δ) (ln x1 )2 ln x2 + 2δ 2 (1 − δ) (ln x1 )2 ln x2 3 2  +2δ (1 − δ)2 ln x1 (ln x2 )2 + δ (1 − δ)2 ln x1 (ln x2 )2 + (1 − δ)3 (ln x2 )3   1 2 = δ − δ (ln x1 )3 + δ (1 − δ) (ln x1 )2 ln x2 (162) 3   1 +δ (1 − δ) ln x1 (ln x2 )2 + (1 − δ)2 − (1 − δ) (ln x2 )3 3 2 3 − δ (ln x1 )3 − 2δ 2 (1 − δ) (ln x1 )2 ln x2 − 2δ (1 − δ)2 ln x1 (ln x2 )2 3 2 − (1 − δ)3 (ln x2 )3 3   1 2 3 2 = δ − δ − δ (ln x1 )3 + δ (1 − δ) − 2δ 2 (1 − δ) (ln x1 )2 ln x2 (163) 3 3    + δ 1 − δ − 2δ (1 − δ)2 ln x1 (ln x2 )2   1 2 2 3 + (1 − δ) − (1 − δ) − (1 − δ) (ln x2 )3 3 3   1 2 = δ − − δ 2 δ (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 (164) 3 3 + (1 − 2 (1 − δ)) δ (1 − δ) ln x1 (ln x2 )2   1 2 + (1 − δ) − − (1 − δ)2 (1 − δ) (ln x2 )3 3 3   1 2 = − + δ δ (1 − δ) (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 3 3 + (2δ − 1) δ (1 − δ) ln x1 (ln x2 )2   1 2 4 2 2 + 1 − δ − − + δ − δ (1 − δ) (ln x2 )3 3 3 3 3   1 2 = − + δ δ (1 − δ) (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 3 3   1 2 2 + (2δ − 1) δ (1 − δ) ln x1 (ln x2 ) + − δ δ (1 − δ) (ln x2 )3 3 3 1 = − (1 − 2δ) δ (1 − δ) (ln x1 )3 + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 3 1 − (1 − 2δ) δ (1 − δ) ln x1 (ln x2 )2 + (1 − 2δ) δ (1 − δ) (ln x2 )3 3 1 = − (1 − 2δ) δ (1 − δ) 3  (165) (166) (167) (168) (ln x1 )3 + 3 (ln x1 )2 ln x2 + 3 ln x1 (ln x2 )2 − (ln x2 )3 1 = − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 3 (169) .

Arne Henningsen. G´eraldine Henningsen 67 so that we can calculate the limit of fν (ρ)and fν0 (ρ) for ρ → 0 by fν (0) = = = = lim fν (ρ) (170) lim (gν (ρ) exp (f (ρ))) (171) lim gν (ρ) lim exp (f (ρ)) ρ→0   lim gν (ρ) exp lim f (ρ) (172) ρ→0 ρ→0 ρ→0 ρ→0 (173) ρ→0 = gν (0) exp (f (0)) (174) = (−δ ln x1 − (1 − δ) ln x2 ) exp (ν (δ ln x1 + (1 − δ) ln x2 )) = − (δ ln x1 + (1 − δ) fν0 (0) = ν(1−δ) ln x2 ) xνδ 1 x2 (175) (176) lim fν0 (ρ) (177) ρ→0  lim gν0 (ρ) exp (f (ρ)) + gν (ρ) exp (f (ρ)) f 0 (ρ) ρ→0     0 = lim gν (ρ) exp lim f (ρ) + lim gν (ρ) exp lim f (ρ) lim f 0 (ρ) (178) exp (f (0)) + gν (0) exp (f (0)) f 0 (0)  = exp (f (0)) gν0 (0) + gν (0) f 0 (0)  δ (1 − δ) = exp (ν (δ ln x1 + (1 − δ) ln x2 )) (ln x1 − ln x2 )2 2   νδ (1 − δ) 2 (ln x1 − ln x2 ) + (−δ ln x1 − (1 − δ) ln x2 ) − 2 ν(1−δ) δ (1 − δ) = xνδ (ln x1 − ln x2 )2 (1 + ν (δ ln x1 + (1 − δ) ln x2 )) 1 x2 2 (180) = = ρ→0 gν0 (0) ρ→0 ρ→0 ρ→0 ρ→0 (179) (181) (182) (183) and approximate ∂y/∂ν by ∂y ∂ν ≈ −γ eλ t fν (0) + ρfν0 (0)  (184) ν(1−δ) = γ eλ t (δ ln x1 + (1 − δ) ln x2 ) xνδ 1 x2 ν(1−δ) δ (1 − δ) −γ eλ t ρxνδ (ln x1 − ln x2 )2 1 x2 2 (1 + ν (δ ln x1 + (1 − δ) ln x2 ))  λ t νδ ν(1−δ) = γ e x1 x2 δ ln x1 + (1 − δ) ln x2 (185) (186)  ρδ (1 − δ) 2 − (ln x1 − ln x2 ) (1 + ν (δ ln x1 + (1 − δ) ln x2 )) 2 Derivatives with respect to Rho ∂y ∂ρ = γ eλ t − ν   ν  −ρ ρ −ρ −ρ −ρ δ x + (1 − δ) x ln δ x + (1 − δ) x 1 2 1 2 ρ2 (187) .

68 Econometric Estimation of the Constant Elasticity of Substitution Function in R   − ν +1   ν  −ρ ρ −ρ −ρ +γ e δ x1 + (1 − δ) x−ρ δ x ln x + (1 − δ) x ln x 1 2 2 1 2 ρ   − ν   1 ρ −ρ −ρ −ρ −ρ = γ eλ t ν δ x + (1 − δ) x ln δ x + (1 − δ) x (188) 1 2 1 2 ρ2 !    − ν +1  1  −ρ ρ −ρ −ρ δ x1 + (1 − δ) x2 δ x−ρ ln x + (1 − δ) x ln x + 1 2 1 2 ρ λt Now we define the function fρ (ρ) − ν   1  −ρ ρ −ρ −ρ −ρ fρ (ρ) = δ x + (1 − δ) x ln δ x + (1 − δ) x 1 2 1 2 ρ2    − ν +1  1  −ρ ρ −ρ −ρ + δ x ln x + (1 − δ) x ln x δ x1 + (1 − δ) x−ρ 1 2 1 2 2 ρ (189) so that we can approximate ∂y/∂ρ by using the first-order Taylor series approximation of fρ (ρ): ∂y ∂ρ = γ eλ t ν fρ (ρ) (190) ≈ γ eλ t ν fρ (0) + ρfρ0 (0)  (191) We define the helper function gρ (ρ) −ρ gρ (ρ) = δ x−ρ 1 ln x1 + (1 − δ) x2 ln x2 (192) with first and second derivative −ρ 2 2 gρ0 (ρ) = −δ x−ρ 1 (ln x1 ) − (1 − δ) x2 (ln x2 ) gρ00 (ρ) = 3 δ x−ρ 1 (ln x1 ) + (1 − δ) 3 x−ρ 2 (ln x2 ) and use the functions g (ρ) and gν (ρ) all defined above so that − ν   1  −ρ ρ −ρ −ρ −ρ + (1 − δ) x + (1 − δ) x fρ (ρ) = δ x ln δ x 2 2 1 1 ρ2   − ν +1   1  −ρ ρ −ρ −ρ + δ x1 + (1 − δ) x−ρ δ x ln x + (1 − δ) x ln x 1 2 2 1 2 ρ ν     −ρ 1 −ρ −ρ −ρ = δ x + (1 − δ) x ln δ x−ρ 1 2 1 + (1 − δ) x2 ρ2 − ν  −1 1  −ρ ρ −ρ + δ x1 + (1 − δ) x−ρ δ x−ρ 2 1 + (1 − δ) x2 ρ   −ρ δ x−ρ ln x + (1 − δ) x ln x 1 2 1 2 ν     −ρ 1  −ρ 1 −ρ −ρ δ x−ρ + (1 − δ) x ln δ x + (1 − δ) x = 1 2 1 2 ρ ρ  −1   −ρ −ρ −ρ −ρ + δ x1 + (1 − δ) x2 δ x1 ln x1 + (1 − δ) x2 ln x2    1   1 ν  −ρ −ρ −ρ = exp − ln δ x1 + (1 − δ) x2 ln δ x−ρ + (1 − δ) x 1 2 ρ ρ ρ (193) (194) (195) (196) (197) (198) .

gρ0 (ρ). and gρ00 (ρ) for ρ → 0 by gρ (0) = lim gρ (ρ)   −ρ = lim δ x−ρ ln x + (1 − δ) x ln x 1 2 1 2 (204) −ρ = δ ln x1 lim x−ρ 1 + (1 − δ) ln x2 lim x2 (206) = δ ln x1 + (1 − δ) ln x2 (207) ρ→0 ρ→0 ρ→0 gρ0 (0) = ρ→0 lim gρ0 (ρ)   −ρ 2 2 = lim −δ x−ρ (ln x ) − (1 − δ) x (ln x ) 1 2 1 2 ρ→0 ρ→0 (205) (208) (209) . G´eraldine Henningsen  δ x−ρ 1 x−ρ 2 −1  δ x−ρ 1 ln x1 + (1 − δ) + (1 − δ)   exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ) + = 69 x−ρ 2 ln x2  (199) ρ and we can calculate its first derivative fρ0 (ρ) =   −ρν exp (−νgν (ρ)) gν0 (ρ) gν (ρ) + g (ρ)−1 gρ (ρ) (200) ρ2  +  ρ exp (−νgν (ρ)) gν0 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ) ρ2  − exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ)  ρ2  = exp (−νgν (ρ)) ρ −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ)  ρ2 (201)  exp (−νgν (ρ)) ρ gν0 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ)  + ρ2  − exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ)  ρ2 exp (−νgν (ρ))   ρ −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) 2 ρ  0 +gν (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ)  −gν (ρ) − g (ρ)−1 gρ (ρ) exp (−νgν (ρ))  −νρgν0 (ρ) gν (ρ) − νρgν0 (ρ) g (ρ)−1 gρ (ρ) = 2 ρ 0 +ρgν (ρ) − ρg (ρ)−2 g 0 (ρ) gρ (ρ) + ρg (ρ)−1 gρ0 (ρ)  −gν (ρ) − g (ρ)−1 gρ (ρ) = (202) (203) Now we can calculate the limits of gρ (ρ).Arne Henningsen.

70 Econometric Estimation of the Constant Elasticity of Substitution Function in R −ρ 2 = −δ (ln x1 )2 lim x−ρ 1 − (1 − δ) (ln x2 ) lim x2 (210) = −δ (ln x1 )2 − (1 − δ) (ln x2 )2 (211) ρ→0 gρ00 (0) = ρ→0 lim gρ00 (ρ)   −ρ 3 3 = lim δ x−ρ (ln x ) + (1 − δ) x (ln x ) 1 2 1 2 (212) −ρ 3 = δ (ln x1 )3 lim x−ρ 1 + (1 − δ) (ln x2 ) lim x2 (214) = δ (ln x1 )3 + (1 − δ) (ln x2 )3 (215) ρ→0 ρ→0 ρ→0 ρ→0 (213) so that we can calculate the limit of fρ (ρ) for ρ → 0 by fρ (0) = = = = = = = = lim fρ (ρ)    exp (−νgν (ρ)) gν (ρ) + g (ρ)−1 gρ (ρ)  lim  ρ→0 ρ    −ν exp (−νgν (ρ)) gν0 (ρ) gν (ρ) + g (ρ)−1 gρ (ρ) lim  ρ→0 1   exp (−νgν (ρ)) gν0 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + g (ρ)−1 gρ0 (ρ)  + 1   −ν exp (−νgν (0)) gν0 (0) gν (0) + g (0)−1 gρ (0)   + exp (−νgν (0)) gν0 (0) − g (0)−2 g 0 (0) gρ (0) + g (0)−1 gρ0 (0)   exp (−νgν (0)) −ν gν0 (0) gν (0) + g (0)−1 gρ (0)   + exp (−νgν (0)) gν0 (0) − g (0)−2 g 0 (0) gρ (0) + g (0)−1 gρ0 (0)    exp (−νgν (0)) −ν gν0 (0) gν (0) + g (0)−1 gρ (0)  +gν0 (0) − g (0)−2 g 0 (0) gρ (0) + g (0)−1 gρ0 (0)  νδ (1 − δ) exp (−ν (−δ ln x1 − (1 − δ) ln x2 )) − (ln x1 − ln x2 )2 2 (−δ ln x1 − (1 − δ) ln x2 + δ ln x1 + (1 − δ) ln x2 ) δ (1 − δ) + (ln x1 − ln x2 )2 2 − (−δ ln x1 − (1 − δ) ln x2 ) (δ ln x1 + (1 − δ) ln x2 )  −δ (ln x1 )2 − (1 − δ) (ln x2 )2  νδ ν(1−δ) 1 x1 x2 δ (1 − δ) (ln x1 )2 − δ (1 − δ) ln x1 ln x2 2 1 + δ (1 − δ) (ln x2 )2 + δ 2 (ln x1 )2 + 2δ (1 − δ) ln x1 ln x2 2 ρ→0 (216) (217) (218) (219) (220) (221) (222) (223) .

We do this by defining a helper function hρ (ρ). G´eraldine Henningsen + (1 − δ)2 (ln x2 )2 − δ (ln x1 )2 − (1 − δ) (ln x2 )2   1 νδ ν(1−δ) 2 = x1 x2 δ (1 − δ) + δ − δ (ln x1 )2 2   1 2 + δ (1 − δ) + (1 − δ) − (1 − δ) 2  (ln x2 )2 + δ (1 − δ) ln x1 ln x2   1 1 2 νδ ν(1−δ) 2 = x1 x2 δ − δ + δ − δ (ln x1 )2 2 2   1 1 2 2 + δ − δ + 1 − 2δ + δ − 1 + δ (ln x2 )2 2 2 +δ (1 − δ) ln x1 ln x2 )   1 1 2 νδ ν(1−δ) = x1 x2 − δ + δ (ln x1 )2 2 2 !   1 1 2 2 + − δ + δ (ln x2 ) + δ (1 − δ) ln x1 ln x2 2 2 ν(1−δ) = xνδ 1 x2 71  1 − δ (1 − δ) (ln x1 )2 2 ! 1 2 − δ (1 − δ) (ln x2 ) + δ (1 − δ) ln x1 ln x2 2   1 ν(1−δ) = − δ (1 − δ) xνδ (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 1 x2 2 1 ν(1−δ) (ln x1 − ln x2 )2 = − δ (1 − δ) xνδ 1 x2 2 (224) (225) (226) (227) (228) (229) Before we can apply de l’Hospital’s rule to limρ→0 fρ0 (ρ). we can calculate limρ→0 fρ0 (ρ) by using de l’Hospital’s rule.Arne Henningsen. fρ0 (0) = lim fρ0 (ρ)  exp (−νgν (ρ)) −νρgν0 (ρ) gν (ρ) = lim ρ→0 ρ2 ρ→0 (236) (237) . where the numerator converges to zero if hρ (ρ) converges to zero for ρ → 0 hρ (ρ) = −gν (ρ) − g (ρ)−1 gρ (ρ) (230) hρ (0) = lim hρ (ρ)   = lim −gν (ρ) − g (ρ)−1 gρ (ρ) (231) = −gν (0) − g (0)−1 gρ (0) (233) = − (−δ ln x1 − (1 − δ) ln x2 ) − (δ ln x1 + (1 − δ) ln x2 ) (234) = 0 (235) ρ→0 ρ→0 (232) As both the numerator and the denominator converge to zero. we have to check whether also the numerator converges to zero.

72 Econometric Estimation of the Constant Elasticity of Substitution Function in R −νρgν0 (ρ) g (ρ)−1 gρ (ρ) + ρgν0 (ρ) − ρg (ρ)−2 g 0 (ρ) gρ (ρ)  +ρg (ρ)−1 gρ0 (ρ) − gν (ρ) − g (ρ)−1 gρ (ρ)  1 = lim (exp (−νgν (ρ))) lim −νρgν0 (ρ) gν (ρ) ρ→0 ρ→0 ρ2 −νρgν0 (ρ) g (ρ)−1 gρ (ρ) + ρgν0 (ρ) − ρg (ρ)−2 g 0 (ρ) gρ (ρ)  +ρg (ρ)−1 gρ0 (ρ) − gν (ρ) − g (ρ)−1 gρ (ρ)  1 = lim (exp (−νgν (ρ))) lim −νgν0 (ρ) gν (ρ) − νρgν00 (ρ) gν (ρ) ρ→0 ρ→0 2ρ (238) (239) −νρgν0 (ρ) gν0 (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) − νρgν00 (ρ) g (ρ)−1 gρ (ρ) +νρgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νρgν0 (ρ) g (ρ)−1 gρ0 (ρ) + gν0 (ρ) 2 +ρgν00 (ρ) − g (ρ)−2 g 0 (ρ) gρ (ρ) + 2ρg (ρ)−3 g 0 (ρ) gρ (ρ) −ρg (ρ)−2 g 00 (ρ) gρ (ρ) − ρg (ρ)−2 g 0 (ρ) gρ0 (ρ) + g (ρ)−1 gρ0 (ρ) −ρg (ρ)−2 g 0 (ρ) gρ0 (ρ) + ρg (ρ)−1 gρ00 (ρ) =  −gν0 (ρ) + g (ρ)−2 g 0 (ρ) gρ (ρ) − g (ρ)−1 gρ0 (ρ)  1 1 lim (exp (−νgν (ρ))) lim −νgν0 (ρ) gν (ρ) − νρgν00 (ρ) gν (ρ) ρ→0 ρ 2 ρ→0 (240) −νρgν0 (ρ) gν0 (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) − νρgν00 (ρ) g (ρ)−1 gρ (ρ) +νρgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νρgν0 (ρ) g (ρ)−1 gρ0 (ρ) + ρgν00 (ρ) 2 +2ρg (ρ)−3 g 0 (ρ) gρ (ρ) − ρg (ρ)−2 g 00 (ρ) gρ (ρ)  −2ρg (ρ)−2 g 0 (ρ) gρ0 (ρ) + ρg (ρ)−1 gρ00 (ρ) = 1 lim (exp (−νgν (ρ))) 2 ρ→0    1 lim −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) ρ→0 ρ (241) −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) 2 +gν00 (ρ) + 2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ)  −2 0 −1 00 0 −2g (ρ) g (ρ) gρ (ρ) + g (ρ) gρ (ρ) = 1 lim (exp (−νgν (ρ))) 2 ρ→0     1 −1 0 0 lim −νgν (ρ) gν (ρ) − νgν (ρ) g (ρ) gρ (ρ) ρ→0 ρ  + lim −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0 +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) + gν00 (ρ) 2 +2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ) (242) .

where the numerator converges to zero if kρ (ρ) converges to zero for ρ → 0 kρ (ρ) = −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) (243) kρ (0) = (244) lim kρ (ρ)   = lim −νgν0 (ρ) gν (ρ) − νgν0 (ρ) g (ρ)−1 gρ (ρ) ρ→0 ρ→0 = −νgν0 (0) gν (0) − νgν0 (0) g (0)−1 gρ (0) νδ (1 − δ) = − (ln x1 − ln x2 )2 (−δ ln x1 − (1 − δ) ln x2 ) 2 νδ (1 − δ) − (ln x1 − ln x2 )2 (δ ln x1 + (1 − δ) ln x2 ) 2 = 0 (245) (246) (247) (248) As both the numerator and the denominator converge to zero. fρ0 (0) = 1 lim (exp (−νgν (ρ))) 2 ρ→0    1 −1 0 0 lim −νgν (ρ) gν (ρ) − νgν (ρ) g (ρ) gρ (ρ) ρ→0 ρ  + lim −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) (251) ρ→0 +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) 2 +gν00 (ρ) + 2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ) !  −2g (ρ)−2 g 0 (ρ) gρ0 (ρ) + g (ρ)−1 gρ00 (ρ) = 1 lim (exp (−νgν (ρ))) 2 ρ→0  2 lim −νgν00 (ρ) gν (ρ) − ν gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0  +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) (252) . we have to check if also the numerator converges to zero. we can apply de l’Hospital’s rule. We do this by defining a helper function kρ (ρ). G´eraldine Henningsen −2g (ρ) −2 0 g (ρ) gρ0 (ρ) + g (ρ)−1 gρ00 (ρ) 73  Before we can apply de l’Hospital’s rule again.Arne Henningsen. kρ (ρ) ρ→0 ρ lim = lim kρ (ρ)  2 = lim −νgν00 (ρ) gν (ρ) − ν gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0  +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) ρ→0 (249) (250) and hence.

74 Econometric Estimation of the Constant Elasticity of Substitution Function in R  + lim −νgν00 (ρ) gν (ρ) − νgν0 (ρ) gν0 (ρ) − νgν00 (ρ) g (ρ)−1 gρ (ρ) ρ→0 +νgν0 (ρ) g (ρ)−2 g 0 (ρ) gρ (ρ) − νgν0 (ρ) g (ρ)−1 gρ0 (ρ) + gν00 (ρ) 2 +2g (ρ)−3 g 0 (ρ) gρ (ρ) − g (ρ)−2 g 00 (ρ) gρ (ρ) !  −2 0 −1 00 0 −2g (ρ) g (ρ) gρ (ρ) + g (ρ) gρ (ρ) =  2 1 exp (−νgν (0)) −νgν00 (0) gν (0) − ν gν0 (0) 2 −νgν00 (0) g (0)−1 gρ (0) + νgν0 (0) g (0)−2 g 0 (0) gρ (0) (253) −νgν0 (0) g (0)−1 gρ0 (0) − νgν00 (0) gν (0) − νgν0 (0) gν0 (0) −νgν00 (0) g (0)−1 gρ (0) + νgν0 (0) g (0)−2 g 0 (0) gρ (0) 2 −νgν0 (0) g (0)−1 gρ0 (0) + gν00 (0) + 2g (0)−3 g 0 (0) gρ (0)  −g (0)−2 g 00 (0) gρ (0) − 2g (0)−2 g 0 (0) gρ0 (0) + g (0)−1 gρ00 (0)  2 1 = exp (−νgν (0)) −νgν00 (0) gν (0) − ν gν0 (0) − νgν00 (0) gρ (0) 2 +νgν0 (0) g 0 (0) gρ (0) − νgν0 (0) gρ0 (0) −νgν00 (0) gν (0) − νgν0 (0) gν0 (0) − νgν00 (0) gρ (0) + νgν0 (0) g 0 (0) gρ (0) 2 −νgν0 (0) gρ0 (0) + gν00 (0) + 2 g 0 (0) gρ (0) − g 00 (0) gρ (0)  −2g 0 (0) gρ0 (0) + gρ00 (0)  2 1 exp (−νgν (0)) −2νgν00 (0) gν (0) − 2ν gν0 (0) − 2νgν00 (0) gρ (0) = 2 +2νgν0 (0) g 0 (0) gρ (0) − 2νgν0 (0) gρ0 (0) 2 +gν00 (0) + 2 g 0 (0) gρ (0) − g 00 (0) gρ (0)  −2g 0 (0) gρ0 (0) + gρ00 (0) 1 = exp (−νgν (0)) gν00 (0) (−2νgν (0) − 2νgρ (0) + 1) 2  +νgν0 (0) −2gν0 (0) + 2g 0 (0) gρ (0) − 2gρ0 (0) 2 +2 g 0 (0) gρ (0) − g 00 (0) gρ (0)  −2g 0 (0) gρ0 (0) + gρ00 (0) = 1 exp (−ν (−δ ln x1 − (1 − δ) ln x2 )) 2   1 3 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 ) 3 (−2ν (−δ ln x1 − (1 − δ) ln x2 ) − 2ν (δ ln x1 + (1 − δ) ln x2 ) + 1)  δ (1 − δ) δ (1 − δ) 2 (ln x1 − ln x2 ) −2 (ln x1 − ln x2 )2 +ν 2 2 +2 (−δ ln x1 − (1 − δ) ln x2 ) (δ ln x1 + (1 − δ) ln x2 )   −2 −δ (ln x1 )2 − (1 − δ) (ln x2 )2 +2 (−δ ln x1 − (1 − δ) ln x2 )2 (δ ln x1 + (1 − δ) ln x2 ) (254) (255) (256) (257) .

G´eraldine Henningsen 75   − δ (ln x1 )2 + (1 − δ) (ln x2 )2 (δ ln x1 + (1 − δ) ln x2 )   −2 (−δ ln x1 − (1 − δ) ln x2 ) −δ (ln x1 )2 − (1 − δ) (ln x2 )2  +δ (ln x1 )3 + (1 − δ) (ln x2 )3  1 νδ ν(1−δ) 1 x1 x2 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 = 2 3 (2νδ ln x1 + 2ν (1 − δ) ln x2 − 2νδ ln x1 − 2ν (1 − δ) ln x2 + 1)   1 + νδ (1 − δ) (ln x1 )2 − 2 ln x1 ln x2 + (ln x2 )2 2 (258) −δ (1 − δ) (ln x1 )2 + 2δ (1 − δ) ln x1 ln x2 − δ (1 − δ) (ln x2 )2 −2δ 2 (ln x1 )2 − 4δ (1 − δ) ln x1 ln x2 − 2 (1 − δ)2 (ln x2 )2  +2δ (ln x1 )2 + 2 (1 − δ) (ln x2 )2 +2δ 3 (ln x1 )3 + 6δ 2 (1 − δ) (ln x1 )2 ln x2 + 6δ (1 − δ)2 ln x1 (ln x2 )2 +2 (1 − δ)3 (ln x2 )3 − δ 2 (ln x1 )3 − δ (1 − δ) (ln x1 )2 ln x2 −δ (1 − δ) ln x1 (ln x2 )2 − (1 − δ)2 (ln x2 )3 − 2δ 2 (ln x1 )3 −2δ (1 − δ) ln x1 (ln x2 )2 − 2δ (1 − δ) (ln x1 )2 ln x2 − 2 (1 − δ)2 (ln x2 )3  +δ (ln x1 )3 + (1 − δ) (ln x2 )3 =  1 νδ ν(1−δ) 1 x1 x2 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 2 3   1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2   2 2 −δ (1 − δ) − 2δ + 2δ (ln x1 ) (259) + (2δ (1 − δ) − 4δ (1 − δ)) ln x1 ln x2    + −δ (1 − δ) − 2 (1 − δ)2 + 2 (1 − δ) (ln x2 )2  + 2δ 3 − δ 2 − 2δ 2 + δ (ln x1 )3  + 6δ 2 (1 − δ) − δ (1 − δ) − 2δ (1 − δ) (ln x1 )2 ln x2   + 6δ (1 − δ)2 − δ (1 − δ) − 2δ (1 − δ) ln x1 (ln x2 )2    + 2 (1 − δ)3 − (1 − δ)2 − 2 (1 − δ)2 + (1 − δ) (ln x2 )3  1 1 νδ ν(1−δ) x x − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 = 2 1 2 3   1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2   −δ + δ 2 − 2δ 2 + 2δ (ln x1 )2 (260) .Arne Henningsen.

76 Econometric Estimation of the Constant Elasticity of Substitution Function in R −2δ (1 − δ) ln x1 ln x2   + −δ + δ 2 − 2 + 4δ − 2δ 2 + 2 − 2δ (ln x2 )2  + 2δ 3 − 3δ 2 + δ (ln x1 )3  + 6δ 2 − 6δ 3 − δ + δ 2 − 2δ + 2δ 2 (ln x1 )2 ln x2  + 6δ − 12δ 2 + 6δ 3 − δ + δ 2 − 2δ + 2δ 2 ln x1 (ln x2 )2   + 2 − 6δ + 6δ 2 − 2δ 3 − 1 + 2δ − δ 2 − 2 + 4δ − 2δ 2 + 1 − δ (ln x2 )3  1 νδ ν(1−δ) 1 = x1 x2 − (1 − 2δ) δ (1 − δ) (ln x1 − ln x2 )3 2 3   1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2     2 2 2 −δ + δ (ln x1 ) − 2δ (1 − δ) ln x1 ln x2 + −δ + δ (ln x2 )2   + 2δ 3 − 3δ 2 + δ (ln x1 )3 + −6δ 3 + 9δ 2 − 3δ (ln x1 )2 ln x2    + 6δ 3 − 9δ 2 + 3δ ln x1 (ln x2 )2 + −2δ 3 + 3δ 2 − δ (ln x2 )3 =  1 νδ ν(1−δ) 1 − (1 − 2δ) δ (1 − δ) (ln x1 )3 x1 x2 2 3 (261) (262) + (1 − 2δ) δ (1 − δ) (ln x1 )2 ln x2 − (1 − 2δ) δ (1 − δ) ln x1 (ln x2 )2 1 + (1 − 2δ) δ (1 − δ) (ln x2 )3 3  1 1 2 2 + νδ (1 − δ) (ln x1 ) − νδ (1 − δ) ln x1 ln x2 + νδ (1 − δ) (ln x2 ) 2 2   δ (1 − δ) (ln x1 )2 − 2δ (1 − δ) ln x1 ln x2 + δ (1 − δ) (ln x2 )2 +δ (1 − δ) (1 − 2δ) (ln x1 )3 − 3δ (1 − δ) (1 − 2δ) (ln x1 )2 ln x2 +3δ (1 − δ) (1 − 2δ) ln x1 (ln x2 )2 − δ (1 − δ) (1 − 2δ) (ln x2 )3   1 νδ ν(1−δ) 1 2 = x x νδ (1 − δ)2 (ln x1 )4 − νδ 2 (1 − δ)2 (ln x1 )3 ln x2 2 1 2 2 1 + νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 − νδ 2 (1 − δ)2 (ln x1 )3 ln x2 2 +2νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 − νδ 2 (1 − δ)2 ln x1 (ln x2 )3 1 + νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 − νδ 2 (1 − δ)2 ln x1 (ln x2 )3 2 1 + νδ 2 (1 − δ)2 (ln x2 )4 2  1 + − (1 − 2δ) δ (1 − δ) + δ (1 − δ) (1 − 2δ) (ln x1 )3 3 + ((1 − 2δ) δ (1 − δ) − 3δ (1 − δ) (1 − 2δ)) (ln x1 )2 ln x2 (263) .

Arne Henningsen. we can approximate ∂y/∂ρ by a second-order Taylor series approximation: ∂y ∂ρ  ≈ γ eλ t ν fρ (0) + ρ fρ0 (0) (267) 1 ν(1−δ) = − γ eλ t ν δ (1 − δ) xνδ (ln x1 − ln x2 )2 1 x2 2 ν(1−δ) +γ eλ t ν ρ δ (1 − δ) xνδ 1 x2   2 1 3 4 (1 − 2δ) (ln x1 − ln x2 ) + νδ (1 − δ) (ln x1 − ln x2 ) 3 2 ν(1−δ) = γ eλ t ν δ (1 − δ) xνδ 1 x2 (268) 1 − (ln x1 − ln x2 )2 2 1 1 + ρ (1 − 2δ) (ln x1 − ln x2 )3 + ρνδ (1 − δ) (ln x1 − ln x2 )4 3 4 B. G´eraldine Henningsen 77 + (− (1 − 2δ) δ (1 − δ) + 3δ (1 − δ) (1 − 2δ)) ln x1 (ln x2 )2    1 3 + (1 − 2δ) δ (1 − δ) − δ (1 − δ) (1 − 2δ) (ln x2 ) 3 = 1 νδ ν(1−δ) x x 2 1 2  1 2 νδ (1 − δ)2 (ln x1 )4 − 2νδ 2 (1 − δ)2 (ln x1 )3 ln x2 2 (264) +3νδ 2 (1 − δ)2 (ln x1 )2 (ln x2 )2 1 −2νδ 2 (1 − δ)2 ln x1 (ln x2 )3 + νδ 2 (1 − δ)2 (ln x2 )4 2 2 3 + δ (1 − δ) (1 − 2δ) (ln x1 ) − 2δ (1 − δ) (1 − 2δ) (ln x1 )2 ln x2 3  2 2 3 +2δ (1 − δ) (1 − 2δ) ln x1 (ln x2 ) − δ (1 − δ) (1 − 2δ) (ln x2 ) 3 = 1 νδ ν(1−δ) x x 2 1 2    1 2 νδ (1 − δ)2 (ln x1 )4 − 4 (ln x1 )3 ln x2 2 +6 (ln x1 )2 (ln x2 )2 − 4 ln x1 (ln x2 )3 + (ln x2 )4 2 + δ (1 − δ) (1 − 2δ) 3  (ln x1 )3 − 3 (ln x1 )2 ln x2 + 3 ln x1 (ln x2 )2 − (ln x2 )3 (265)  ν(1−δ) = δ (1 − δ) xνδ 1 x2   1 1 3 4 (1 − 2δ) (ln x1 − ln x2 ) + νδ (1 − δ) (ln x1 − ln x2 ) 3 4 (266) Hence. Three-input nested CES function (269) ! .

where ∂y/∂γ is calculated according to Equation 277. . Derivatives with respect to coefficients The partial derivatives of the three-input nested CES function with respect to the coefficients are:27 ∂y ∂γ ∂y ∂δ1 ∂y ∂δ ∂y ∂ν ∂y ∂ρ1 =eλ t B −ν/ρ (277) ρ−ρ 1 ν −ν−ρ ρ ρ 1 1 − x−ρ = − γ eλ t B ρ δ B1 1 (x−ρ 1 2 ) ρ ρ1  ρ  −ν−ρ ρ1 −ρ λt ν ρ =−γe B B1 − x3 ρ 1 −ν = − γ eλ t ln(B)B ρ ρ −ν−ρ ν = − γ eλ t B ρ δ ρ    1 ρ   ρ−ρ ρ1 ρ ρ ρ1 ρ1 −ρ1 ln(B1 )B1 − 2 + B1 −δ1 ln x1 x1 − (1 − δ1 ) ln x2 x2 ρ1 ρ1   ρ ν ν −ν−ρ ∂y ν 1 ρ − =γ eλ t 2 ln(B)B ρ − γ eλ t B ρ δ ln(B1 )B1 1 − (1 − δ) ln x3 x−ρ 3 ∂ρ ρ ρ ρ1 (278) (279) (280) (281) (282) 27 The partial derivatives with respect to λ are always calculated using Equation 16. (273) Limits for ρ1 and/or ρ approaching zero The limits of the three-input nested CES function for ρ1 and/or ρ approaching zero are: − ν  ρ lim y =γ eλ t δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0      ln(B1 ) lim y =γ eλ t exp ν δ − + (1 − δ) ln x3 ρ→0 ρ1 lim lim y =γ eλ t exp {ν (δ L1 + (1 − δ) ln x3 )} ρ1 →0 ρ→0 (274) (275) (276) B. 283. 289.2. or 295 depending on the values of ρ1 and ρ. we make the following definition: L1 = δ1 ln(x1 ) + (1 − δ1 ) ln(x2 ) B.78 Econometric Estimation of the Constant Elasticity of Substitution Function in R The nested CES function with three inputs is defined as y = γ eλ t B −1/ρ (270) with ρ/ρ1 B =δ B1 1 B1 =δ1 x−ρ 1 + (1 − δ)x−ρ 3 + (1 − 1 δ1 ) x−ρ 2 (271) (272) For further simplification of the formulas in this section.1.

Arne Henningsen. G´eraldine Henningsen 79 Limits of the derivatives for ρ approaching zero      ∂y ln(B1 ) + (1 − δ) ln x3 =eλ t exp ν δ − ρ→0 ∂γ ρ1 (283) lim ∂y lim = − γ eλ t ν ρ→0 ∂δ1 " 1 (x−ρ1 − x−ρ 2 ) δ 1 ρ 1 B1 ! #   ln(B1 ) exp −ν −δ − − (1 − δ) ln x3 ρ1   (284) ∂y lim = − γ eλ t ν ρ→0 ∂δ        ln(B1 ) ln(B1 ) + ln x3 exp ν −δ − − (1 − δ) ln x3 ρ1 ρ1     ln(B1 ) δ − + (1 − δ) ln x3 ρ1      ln(B1 ) · exp −ν −δ − − (1 − δ) ln x3 ρ1 ∂y lim =γ eλ t ρ→0 ∂ν 1 1 δ ln(B1 ) −δ1 ln x1 x−ρ − (1 − δ1 ) ln x2 x−ρ ∂y 1 2 = − γ eλ t ν − + lim ρ→0 ∂ρ1 ρ1 ρ1 B1      ln(B1 ) · exp −ν −δ − − (1 − δ) ln x3 ρ1 ∂y lim =γ eλ t ν ρ→0 ∂ρ " 1 − 2  δ ln B1 ρ1 ! 2 2 + (1 − δ) (ln x3 ) 1 + 2 (285) (286) ! (287)  2 # ln B1 δ − (1 − δ) ln x3 ρ1 (288)    ln B1 + (1 − δ) ln x3 · exp ν −δ ρ1 Limits of the derivatives for ρ1 approaching zero  − ν ∂y ρ =eλ t δ · {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0 ∂γ lim (289)  − ν −1 ∂y ρ {exp(L1 )}−ρ (ln x1 − ln x2 ) (290) = − γ eλ t ν δ δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0 ∂δ1 lim − ν −1   ∂y ν ρ −ρ −ρ = − γ eλ t δ {exp(L1 )}−ρ + (1 − δ)x−ρ {exp(L )} − x 1 3 3 ρ1 →0 ∂δ ρ lim (291) .

3.80 Econometric Estimation of the Constant Elasticity of Substitution Function in R   − ν ∂y ρ −ρ −ρ −ρ −ρ λt 1 lim =−γe ln δ {exp(L1 )} + (1 − δ)x3 δ {exp(L1 )} + (1 − δ)x3 ρ1 →0 ∂ν ρ (292)    ∂y 1 = − γ eλ t ν δ exp (−ρ L1 ) δ1 (ln x1 )2 + (1 − δ1 ) (ln x2 )2 − L21 ρ1 →0 ∂ρ1 2  − ν+ρ ρ δ exp (−ρ L1 ) + (1 − δ) x−ρ 3 lim   ∂y ν =γ eλ t 2 ln δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ1 →0 ∂ρ ρ  − ν ρ δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 − ν −1 ν ρ − γ eλ t δ {exp(L1 )}−ρ + (1 − δ)x−ρ 3 ρ   −δL1 {exp(L1 )}−ρ − (1 − δ) ln x3 x−ρ 3 lim (293) (294) Limits of the derivatives for ρ1 and ρ approaching zero ∂y =eλ t exp{ν(δ L1 + (1 − δ) ln x3 )} ρ→0 ∂γ (295) ∂y = − γ eλ t ν δ (− ln x1 + ln x2 ) exp{−ν(−δ L1 − (1 − δ) ln x3 )} ∂δ1 (296) ∂y = − γ eλ t ν (−L1 + ln x3 ) exp{ν(δ L1 + (1 − δ) ln x3 )} ρ→0 ∂δ (297) ∂y =γ eλ t (δ L1 + (1 − δ) ln x3 ) exp {−ν(−δ L1 − (1 − δ) ln x3 )} ∂ν (298) lim lim ρ1 →0 lim lim ρ1 →0 ρ→0 lim lim ρ1 →0 lim lim ρ1 →0 ρ→0  ∂y 1 = γ eλ t ν δ δ1 (ln x1 )2 + (1 − δ1 )(ln x2 )2 − L21 ρ→0 ∂ρ1 2 · exp (−ν (δ L1 + (1 − δ) ln x3 )) lim lim ρ1 →0    1 ∂y 1 2 λt 2 2 lim lim =γ e ν − δ L1 + (1 − δ)(ln x3 ) + (−δ L1 − (1 − δ) ln x3 ) ρ1 →0 ρ→0 ∂ρ 2 2 · exp {ν (δ L1 + (1 − δ) ln x3 )} B. Elasticities of substitution (299) (300) .

2. we make the following definitions: L1 = δ1 ln(x1 ) + (1 − δ1 ) ln(x2 ) (312) L2 = δ2 ln(x3 ) + (1 − δ2 ) ln(x4 ) (313) . Four-input nested CES function The nested CES function with four inputs is defined as y = γ eλ t B −ν/ρ (308) with ρ/ρ1 B = δ B1 ρ/ρ2 + (1 − δ)B2 1 1 B1 = δ1 x−ρ + (1 − δ1 )x−ρ 1 2 B2 = 2 δ2 x−ρ 3 + (1 − 2 δ2 )x−ρ 4 (309) (310) (311) For further simplification of the formulas in this section. j = 3 for i = 1. j = 2 (302) with ∗ ρ ρ1 θ = δB1 · y ρ θ = (1 − (303) δ)x−ρ 3 ·y ρ ρ −ρ − 1ρ 1 1 θ1 = δδ1 x−ρ 1 B1 (304) · yρ (305) ρ −ρ − 1ρ 1 1 θ2 = δ(1 − δ1 )x−ρ 2 B1 · yρ θ3 = θ (306) (307) C. 2. j = 2 (301) Hicks-McFadden elasticity of substitution (see Sato 1967) σi. j = 3 text i = 1.j  (1 − ρ)−1         (1 − ρ1 )−1 − (1 − ρ)−1 = + (1 − ρ)−1  1+ρ     y    δ   − ρ1 B1 1 for i = 1.Arne Henningsen. G´eraldine Henningsen 81 Allen-Uzawa elasticity of substitution (see Sato 1967) σi.j 1 1 + θi θj       1 1 1 1 1 1 = (1 − ρ1 ) θ − θ∗ + (1 − ρ2 ) θ − θ + (1 − ρ) θ∗ − θ i j         (1 − ρ1 )−1          for i = 1.

369. ρ2 . where ∂y/∂γ is calculated according to Equation 321. 345. 361. and/or ρ approaching zero are:    δ ln(B1 ) (1 − δ) ln(B2 ) λt + (314) lim y =γ e exp −ν ρ→0 ρ1 ρ2  − ν ρ ρ ρ2 λt δ exp{−ρ L1 } + (1 − δ)B2 lim y =γ e (315) ρ1 →0 lim y =γ e ρ2 →0 λt − ν  ρ ρ ρ1 δB1 + (1 − δ) exp{−ρ L2 )} (316) − νρ lim lim y =γ eλ t (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ2 →0 ρ1 →0    ln(B2 ) λt lim lim y =γ e exp −ν −δ L1 + (1 − δ) ρ1 →0 ρ→0 ρ2    ln(B ) 1 λt lim lim y =γ e exp −ν δ − (1 − δ)L2 ρ2 →0 ρ→0 ρ1 lim lim lim y =γ eλ t exp {−ν (−δ L1 − (1 − δ)L2 )} ρ1 →0 ρ2 →0 ρ→0 (317) (318) (319) (320) C. 329. ρ2 . Derivatives with respect to coefficients The partial derivatives of the four-input nested CES function with respect to the coefficients are:28 ∂y ∂γ ∂y ∂δ1 ∂y ∂δ2 ∂y ∂δ ∂y ∂ν ∂y ∂ρ1 =eλ t B −ν/ρ (321)   ρ−ρ1 −ν−ρ ρ ν ρ 1 1 − x−ρ (322) =γ eλ t − B ρ δB1 1 (x−ρ 1 2 ) ρ ρ1   ρ−ρ2 −ν−ρ ρ ν ρ λt 2 2 =γ e − B ρ (1 − δ)B2 2 (x−ρ − x−ρ (323) 3 4 ) ρ ρ2   h i ν ρ/ρ ρ/ρ − ν −1 λt =γ e − B ρ B1 1 − B2 2 (324) ρ   1 =γ eλ t ln(B)B −ν/ρ − (325) ρ   −ν−ρ ν λt =γ e − B ρ (326) ρ     ρ−ρ1 ρ ρ ρ ρ1 ρ1 −ρ1 −ρ1 δ ln(B1 )B1 − 2 + δB1 (−δ1 ln x1 x1 − (1 − δ1 ) ln x2 x2 ) ρ1 ρ1   −ν−ρ ν ∂y =γ eλ t − B ρ (327) ∂ρ2 ρ     ρ−ρ2 ρ ρ ρ ρ2 ρ2 −ρ2 −ρ2 (1 − δ) ln(B2 )B2 − 2 + (1 − δ)B2 (−δ2 ln x3 x3 − (1 − δ2 ) ln x4 x4 ) ρ2 ρ2 28 The partial derivatives with respect to λ are always calculated using Equation 16. 337.2.82 Econometric Estimation of the Constant Elasticity of Substitution Function in R C. and/or ρ approaching zero The limits of the four-input nested CES function for ρ1 . Limits for ρ1 .1. . or 377 depending on the values of ρ1 . ρ2 . 353. and ρ.

G´eraldine Henningsen 83   −ν−ρ ν ∂y λt −ν/ρ ν λt ρ − B =γ e ln(B)B + γ e ∂ρ ρ2 ρ   ρ ρ ρ1 1 ρ2 1 δ ln(B1 )B1 + (1 − δ) ln(B2 )B2 ρ1 ρ2 (328) Limits of the derivatives for ρ approaching zero    ∂y δ ln(B1 ) (1 − δ) ln(B2 ) λt lim + =e exp −ν ρ→0 ∂γ ρ1 ρ2 ! ∂y lim =γ eλ t ρ→0 ∂δ1 1 1 − x−ρ ν δ(x−ρ 2 ) 1 − ρ1 B1 ∂y lim =γ eλ t ρ→0 ∂δ2 2 2 ν (1 − δ)(x−ρ − x−ρ 3 4 ) − ρ2 B2   · exp −ν ! (329) δ ln(B1 ) (1 − δ) ln(B2 ) + ρ1 ρ2  (330)    δ ln(B1 ) (1 − δ) ln(B2 ) · exp −ν + ρ1 ρ2 (331) ∂y =γ eλ t lim ρ→0 ∂δ       ln(B1 ) ln(B2 ) δ ln(B1 ) (1 − δ) ln(B2 ) −ν − + · exp −ν ρ1 ρ2 ρ1 ρ2 (332)    ∂y (1 − δ) ln(B2 ) δ ln(B1 ) (1 − δ) ln(B2 ) λ t δ ln(B1 ) =−γe lim + · exp −ν + (333) ρ→0 ∂ν ρ1 ρ2 ρ1 ρ2 1 1 δρ1 (−δ1 ln x1 x−ρ − (1 − δ1 ) ln x2 x−ρ δ ln(B1 ) 1 2 ) −ν − 2 ρ1 B1 ρ21    δ ln(B1 ) (1 − δ) ln(B2 ) · exp −ν + ρ1 ρ2 ∂y =γ eλ t lim ρ→0 ∂ρ1 ∂y =γ eλ t lim ρ→0 ∂ρ2 !! 2 2 (1 − δ)ρ2 (−δ2 ln x3 x−ρ − (1 − δ2 ) ln x4 x−ρ (1 − δ) ln(B2 ) 3 4 ) − 2 ρ2 B2 ρ22 −ν (334) !! (335)  · exp −ν  δ ln(B1 ) (1 − δ) ln(B2 ) + ρ1 ρ2     ∂y 1 1 1 =γ eλ t ν − δ(ln B1 )2 2 + (1 − δ)(ln B2 )2 2 (336) ρ→0 ∂ρ 2 ρ1 ρ2   !    1 1 1 2 δ ln(B1 ) (1 − δ) ln(B2 ) + δ ln B1 + (1 − δ) ln B2 · exp −ν + 2 ρ1 ρ2 ρ1 ρ2 lim .Arne Henningsen.

84 Econometric Estimation of the Constant Elasticity of Substitution Function in R Limits of the derivatives for ρ1 approaching zero ∂y lim =eλ t ρ1 →0 ∂γ  − ν ρ ρ ρ2 δ exp{−ρ L1 } + (1 − δ)B2  − ν −1 ρ ρ ∂y ρ2 λt ν =−γe δ exp{−ρ L1 } + (1 − δ)B2 lim ρ1 →0 ∂δ1 ρ · δ exp{−ρ L1 }ρ(− ln x1 + ln x2 ) − ν −1  ρ ρ ∂y ρ2 λt ν lim =−γe δ exp{−ρ L1 } + (1 − δ)B2 ρ1 →0 ∂δ2 ρ  ρ ρρ −1  −ρ2 2 · (1 − δ) B2 2 x3 − x−ρ 4 ρ2  − ν −1 ρ ρ ∂y ρ2 λt ν =−γe δ exp{−ρ L1 } + (1 − δ)B2 lim ρ1 →0 ∂δ ρ   ρ ρ2 · exp{−ρ L1 } − B2   ρ ∂y ρ2 λt 1 lim =−γe ln δ exp{−ρ L1 } + (1 − δ)B2 ρ1 →0 ∂ν ρ  − ν ρ ρ ρ2 δ exp{−ρ L1 } + (1 − δ)B2  − ν −1 ρ ρ ∂y ρ2 λt lim = − γ e δ ν δ exp {ρ (−δ1 ln x1 − (1 − δ1 ) ln x2 )} + (1 − δ) B2 ρ1 →0 ∂ρ1 · exp {−ρ (δ1 ln x1 + (1 − δ1 ) ln x2 )}   1 1 2 2 2 − (−δ1 ln x1 − (1 − δ1 ) ln x2 ) + δ1 (ln x1 ) + (1 − δ1 ) (ln x2 ) 2 2 (337) (338) (339) (340) (341) (342)  − ν −1 ρ ρ ∂y ρ2 λt ν =−γe δ exp{−ρ L1 } + (1 − δ)B2 (343) lim ρ1 →0 ∂ρ2 ρ  ρ  ρ ρ ρρ2 −1  ρ2 −ρ2 −ρ2 −δ2 ln x3 x3 − (1 − δ2 ) ln x4 x4 −(1 − δ) 2 ln(B2 )B2 + (1 − δ) B2 ρ2 ρ2   ρ ∂y ρ2 λt ν lim =γ e ln δ exp{−ρ L1 } + (1 − δ)B2 ρ1 →0 ∂ρ ρ2  − ν ρ ρ ρ2 δ exp{−ρ L1 } + (1 − δ)B2 (344) .

Arne Henningsen. G´eraldine Henningsen λt ν ρ  ρ ρ2 85 − ν −1 ρ δ exp{−ρ L1 } + (1 − δ)B2 −γe   ρ 1−δ ρ2 −δ exp{−ρ L1 }L1 + ln(B2 )B2 ρ2 Limits of the derivatives for ρ2 approaching zero ∂y lim =eλ t ρ2 →0 ∂γ ∂y ν lim = − γ eλ t ρ2 →0 ∂δ1 ρ  ρ ρ1 δB1  − ν ρ ρ ρ1 δB1 + (1 − δ) exp{−ρ L2 } − ν −1  ρ ρ ρρ −1  −ρ1 1 + (1 − δ) exp{−ρ L2 } δ B1 1 x1 − x−ρ 2 ρ1 − ν −1  ρ ρ ∂y ρ1 λt lim = − γ e ν δB1 + (1 − δ) exp{−ρ L2 } ρ2 →0 ∂δ2 (1 − δ) exp{−ρ L2 } (− ln x3 + ln x4 ) − ν −1  ρ ρ ∂y ρ1 λt ν lim =−γe δB1 + (1 − δ) exp{−ρ L2 } ρ2 →0 ∂δ ρ  ρ  ρ1 B1 − exp{−ρ L2 }   ρ ∂y ρ1 λt 1 lim =−γe ln δB1 + (1 − δ) exp{−ρ L2 } ρ2 →0 ∂ν ρ  − ν ρ ρ ρ1 δB1 + (1 − δ) exp{−ρ L2 } (345) (346) (347) (348) (349)  − ν −1 ρ ρ ∂y ρ1 λt ν lim =−γe δB1 + (1 − δ) exp{−ρ L2 } (350) ρ2 →0 ∂ρ1 ρ      ρ ρ  −1  ρ ρ ρ1 ρ1 −ρ1 −ρ1 − 2 +δ B1 −δ1 ln x1 x1 − (1 − δ1 ) ln x2 x2 δ ln(B1 )B1 ρ1 ρ1 ∂y = − γ eλ t (1 − δ) ν exp {−ρ (δ2 ln x3 + (1 − δ2 ) ln x4 )} (351) ρ2 →0 ∂ρ2    ρ − νρ −1 −ρ1 −ρ1 ρ1 (1 − δ) exp {ρ (−δ2 ln x3 − (1 − δ2 ) ln x4 )} + δ δ1 x1 + (1 − δ1 ) x2   1 1 2 2 2 − (δ2 ln x3 + (1 − δ2 ) ln x4 ) + δ2 (ln x3 ) + (1 − δ2 ) (ln x4 ) 2 2 lim .

86 Econometric Estimation of the Constant Elasticity of Substitution Function in R  − ν  ρ ρ ρ ∂y ρ1 ρ1 λt ν δB1 + (1 − δ) exp{−ρ L2 } ln δB1 + (1 − δ) exp{−ρ L2 } lim =γ e 2 ρ2 →0 ∂ρ ρ (352) − ν −1  ρ ρ ν ρ − γ eλ t δB1 1 + (1 − δ) exp{−ρ L2 } ρ   ρ ρ1 1 −(1 − δ) exp{−ρ L2 }L2 + δ ln(B1 ) B1 ρ1 Limits of the derivatives for ρ1 and ρ2 approaching zero   ∂y ν λt lim lim =e exp − ln (δ exp{−ρ L1 } + (1 − δ) exp{−ρ L2 }) ρ2 →0 ρ1 →0 ∂γ ρ ∂y − ν −1 = − γ eλ t ν δ (δ exp{−ρ L1 } + (1 − δ) exp{−ρ L2 }) ρ ρ1 →0 ∂δ1 exp{−ρ L1 } (− ln x1 + ln x2 ) lim lim ρ2 →0 ∂y − ν −1 = − γ eλ t ν ((1 − δ) exp{−ρL2 } + δ exp{−ρ L1 }) ρ ρ1 →0 ∂δ2 (1 − δ) exp{−ρ L2 }(− ln x3 + ln x4 ) lim lim ρ2 →0 ∂y ν − ν −1 = − γ eλ t (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ ρ1 →0 ∂δ ρ (exp {−ρ L1 } − exp {−ρ L2 }) lim lim ρ2 →0 ∂y 1 = − γ eλ t ln (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ1 →0 ∂ν ρ lim lim ρ2 →0 (353) (354) (355) (356) (357) − νρ (δ exp {−ρ L1 } + (1 − δ) exp {−ρL2 })  − ν −1 ∂y ρ = − γ eλ t δ ν δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 } ρ1 →0 ∂ρ1   1 2 1 2 2 · exp {ρ L2 } − L1 + δ1 (ln x1 ) + (1 − δ1 ) (ln x2 ) 2 2 lim lim ρ2 →0 ∂y − ν −1 = − γ eλ t (1 − δ) ν ((1 − δ) exp {−ρ L2 } + δ exp {−ρ L1 }) ρ ρ2 →0 ∂ρ2   1 2 1 2 2 · exp {−ρ L2 } − L2 + δ2 (ln x3 ) + (1 − δ2 ) (ln x4 ) 2 2 lim lim ρ1 →0 ν ∂y =γ eλ t 2 ln (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ1 →0 ∂ρ ρ lim lim ρ2 →0 (358) (359) (360) .

G´eraldine Henningsen 87 −ν (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ ν − ν −1 − γ eλ t (δ exp {−ρ L1 } + (1 − δ) exp {−ρ L2 }) ρ ρ (−δ exp {−ρ L1 } L1 − (1 − δ) exp {−ρ L2 } L2 ) Limits of the derivatives for ρ1 and ρ approaching zero    ∂y (1 − δ) ln(B2 ) λt lim lim =e exp −ν −δ L1 + ρ1 →0 ρ→0 ∂γ ρ2 (361)    ∂y (1 − δ) ln(B2 ) λt lim lim =γ e (−ν(δ(− ln x1 + ln x2 ))) exp −ν −δ L1 + ρ1 →0 ρ→0 ∂δ1 ρ2 (362)    −ρ2 2 ∂y − x−ρ (1 − δ) ln(B2 ) λ t −ν(1 − δ)(x3 4 ) lim lim =γ e exp −ν −δ L1 + ρ1 →0 ρ→0 ∂δ2 ρ2 B2 ρ2 (363)      ln(B2 ) ln(B2 ) ∂y λt = − γ e ν −L1 − exp −ν −δ L1 + (1 − δ) lim lim ρ1 →0 ρ→0 ∂δ ρ2 ρ2 (364) ∂y lim lim =γ eλ t ρ1 →0 ρ→0 ∂ν      ln(B2 ) ln(B2 ) δ L1 − (1 − δ) exp −ν −δ L1 + (1 − δ) ρ2 ρ2   ∂y 1 1 (δ1 (ln x1 )2 + (1 − δ1 )(ln x2 )2 ) − L21 = − γ eλ t ν δ ρ→0 ∂ρ1 2 2    ln(B2 ) · exp −ν −δ L1 + (1 − δ) ρ2 lim lim ρ1 →0 2 2 ln(B2 ) (−δ2 ln x3 x−ρ − (1 − δ2 ) ln x4 x−ρ ∂y 3 4 ) = − γ eλ t ν (1 − δ) − + lim lim ρ1 →0 ρ→0 ∂ρ2 ρ2 B2 ρ22    ln(B2 ) · exp −ν −L1 + (1 − δ) ρ2 ∂y lim lim =γ eλ t ν ρ1 →0 ρ→0 ∂ρ 1 − 2 δL21 + (1 − δ)  ln(B2 ) ρ2 2 ! 1 + 2 (365) (366) ! (367)   ! ln(B2 ) 2 −δ L1 + (1 − δ) ρ2 (368)    ln(B2 ) · exp −ν −δ L1 + (1 − δ) ρ2 .Arne Henningsen.

and ρ approaching zero lim lim lim ρ1 →0 ρ2 →0 ρ→0 ∂y =eλ t exp {−ν (−δ L1 − (1 − δ)L2 )} ∂γ (377) .88 Econometric Estimation of the Constant Elasticity of Substitution Function in R Limits of the derivatives for ρ2 and ρ approaching zero    δ ln(B1 ) ∂y λt − (1 − δ)L2 =e exp −ν lim lim ρ2 →0 ρ→0 ∂γ ρ1 (369)    −ρ1 1 ∂y − x−ρ δ ln(B1 ) λ t −νδ(x1 2 ) lim lim =γ e exp −ν − (1 − δ)L2 ρ2 →0 ρ→0 ∂δ1 ρ1 B1 ρ1 (370)    ∂y δ ln(B1 ) λt lim lim =γ e (−ν((1 − δ)(− ln x3 + ln x4 ))) exp −ν − (1 − δ)L2 ρ2 →0 ρ→0 ∂δ2 ρ1 (371) ∂y lim lim = − γ eλ t ν ρ2 →0 ρ→0 ∂δ ∂y lim lim =γ eλ t ρ2 →0 ρ→0 ∂ν       ln(B1 ) ln(B1 ) + L2 exp −ν δ − (1 − δ)L2 ρ1 ρ1     ln(B1 ) ln(B1 ) −δ + (1 − δ)L2 exp −ν δ − (1 − δ)L2 ρ1 ρ1 1 1 ∂y ln(B1 ) (−δ1 ln x1 x−ρ − (1 − δ1 ) ln x2 x−ρ 1 2 ) lim lim = − γ eλ t ν δ − + ρ2 →0 ρ→0 ∂ρ1 ρ1 B1 ρ21    ln(B1 ) − (1 − δ)L2 · exp −ν δ ρ1   ∂y 1 1 (δ2 (ln x3 )2 + (1 − δ2 )(ln x4 )2 ) − L22 = − γ eλ t ν (1 − δ) ρ→0 ∂ρ2 2 2    ln(B1 ) · exp −ν δ − (1 − δ)L2 ρ1 ∂y =γ eλ t ν lim lim ρ2 →0 ρ→0 ∂ρ 1 − 2  δ ln(B1 ) ρ1 2 ! + (1 − δ)L22 1 + 2  (373) ! lim lim ρ2 →0 (372) ln(B1 ) δ − (1 − δ)L2 ρ1 (374) (375) 2 ! (376)   · exp −ν δ ln(B1 ) − (1 − δ)L2 ρ1  Limits of the derivatives for ρ1 . ρ2 .

2.Arne Henningsen.3. j = 2 (385) for i = 3. G´eraldine Henningsen lim lim lim ρ1 →0 ρ2 →0 ρ→0 lim lim lim ρ1 →0 ρ2 →0 ρ→0 ∂y = − γ eλ t ν δ (− ln x1 + ln x2 ) exp {−ν (−δ L1 − (1 − δ)L2 )} ∂δ1 89 (378) ∂y = − γ eλ t ν (1 − δ)(− ln x3 + ln x4 ) exp {−ν (−δ L1 − (1 − δ)L2 )} (379) ∂δ2 lim lim lim ρ1 →0 ρ2 →0 ρ→0 lim lim lim ρ1 →0 ρ2 →0 ρ→0 ∂y = − γ eλ t ν (−L1 + L2 ) exp {−ν (−δ L1 − (1 − δ)L2 )} ∂δ ∂y =γ eλ t (δ L1 + (1 − δ)L2 ) exp {−ν (−δ L1 − (1 − δ)L2 )} ∂ν   ∂y 1 1 2 λt 2 2 (δ1 (ln x1 ) + (1 − δ1 )(ln x2 ) ) − L1 lim lim lim =−γe νδ ρ1 →0 ρ2 →0 ρ→0 ∂ρ1 2 2 · exp {−ν (−δ L1 − (1 − δ)L2 )} lim lim lim ρ1 →0 ρ2 →0 ρ→0   1 1 ∂y (δ2 (ln x3 )2 + (1 − δ2 )(ln x4 )2 ) − L22 = − γ eλ t ν (1 − δ) ∂ρ2 2 2 · exp {−ν (−δ L1 − (1 − δ)L2 )}    1 1 ∂y 2 λt 2 2 lim lim lim =γ e ν − δL1 + (1 − δ)L2 + (−δ L1 − (1 − δ)L2 ) ρ1 →0 ρ2 →0 ρ→0 ∂ρ 2 2 · exp {−ν (−δ L1 + (1 − δ)(−δ2 ln x3 − (1 − δ2 ) ln x4 ))} (380) (381) (382) (383) (384) C. j = 4 .j   (1 − ρ)−1          (1 − ρ1 )−1 − (1 − ρ)−1    + (1 − ρ)−1  1+ρ     y     δ − ρ1 = B1 1         (1 − ρ2 )−1 − (1 − ρ)−1    1+ρ + (1 − ρ)−1      y    (1 − δ)  − 1    ρ B2 2 for i = 1. j = 3. Elasticities of substitution Allen-Uzawa elasticity of substitution (see Sato 1967) σi. 4 for i = 1.

j = 4 (386) with ρ ρ θ∗ = δB1 1 · y ρ (387) ρ ρ2 θ = (1 − δ)B2 · y ρ ρ −ρ − 1ρ 1 1 θ1 = δδ1 x−ρ 1 B1 (388) · yρ ρ −ρ − 1ρ 1 1 θ2 = δ(1 − δ1 )x−ρ 2 B1 ρ −ρ − 2ρ 2 2 θ3 = (1 − δ)δ2 x−ρ 3 B2 (389) · yρ (390) · yρ (391) ρ −ρ − 2ρ 2 2 θ4 = (1 − δ)(1 − δ2 )x−ρ 4 B2 · yρ (392) .j =  1 1   +   θi θ j           1 1 1 1 1 1    (1 − ρ1 ) − + (1 − ρ2 ) − + (1 − ρ) −   θi θ ∗ θj θ θ∗ θ     (1 − ρ1 )−1          (1 − ρ2 )−1 for i = 1.90 Econometric Estimation of the Constant Elasticity of Substitution Function in R Hicks-McFadden elasticity of substitution (see Sato 1967) σi. 2. 4 for i = 1. j = 2 for i = 3. j = 3.

summary( cesNm3 ) "Y". tName = "time". xNames3. data = GermanIndustry. data = GermanIndustry. summary( cesNm2 ) cesNm3 <. control = list( maxit = 5000 ) ) ## L-BFGS-B cesBfgsCon1 <.cesEst( "Y". xNames1. data = GermanIndustry.c( "K".cesEst( method = "BFGS". data = GermanIndustry. data = GermanIndustry.Arne Henningsen.subset( GermanIndustry. = 2e6 ) ) "Y". control = list( maxit = 5000 ) ) "Y".c( "K". ) # add a time trend (starting with 0) GermanIndustry$time <. control = list( maxit = 5000 ) ) "Y". tName = "time". G´eraldine Henningsen D. tName = method = "SANN". year < 1973 | year > 1975. summary( cesBfgs1 ) cesBfgs2 <. "E". control = list( maxit = 5000 ) ) ## Simulated Annealing cesSann1 <.GermanIndustry$year .cesEst( method = "NM". control = list( maxit = 5000 ) ) "Y".cesEst( "Y". data = GermanIndustry. xNames1.1975 because of economic disruptions (see Kemfert 1998) GermanIndustry <. "A". summary( cesBfgs3 ) "time".cesEst( method = "NM".cesEst( method = "NM". = 2e6 ) ) "time". control = list( maxit summary( cesSann2 ) cesSann3 <. = 2e6 ) ) "time". tName = "time". tName = method = "SANN". tName = "time". xNames1. tName = "time". "E" ) <. tName = "time". "K" ) ################# econometric estimation with cesEst ########################## ## Nelder-Mead cesNm1 <. "A" ) <.cesEst( method = "BFGS". control = list( maxit summary( cesSann3 ) ## BFGS cesBfgs1 <. data = GermanIndustry. xNames1. control = list( maxit summary( cesSann1 ) cesSann2 <. xNames3.cesEst( method = "BFGS".c( "E". xNames2. data = GermanIndustry. xNames2. summary( cesBfgs2 ) cesBfgs3 <. 91 . xNames2. data = GermanIndustry. data = GermanIndustry.cesEst( "Y".1960 # names xNames1 xNames2 xNames3 of inputs <. tName = "time". control = list( maxit = 5000 ) ) "Y". xNames3. tName = method = "SANN". "A".cesEst( "Y". summary( cesNm1 ) cesNm2 <. Script for replicating the analysis of Kemfert (1998) # load the micEconCES package library( "micEconCES" ) # load the data set data( "GermanIndustry" ) # remove years 1973 .

xNames2.control( maxiter = 1000. iterlim = 500 ) "Y". summary( cesPort1 ) cesPort2 <. xNames3. control = list( maxit = 5000 summary( cesBfgsCon3 ) ) ) data = GermanIndustry. summary( cesNewton1 ) cesNewton2 <. data = GermanIndustry. xNames3.max = 1000. method = "PORT". iter. control = nls. data = GermanIndustry. iterlim = 500 ) "Y". data = GermanIndustry. tName = "time".lm. xNames2. control = list( eval. tName = "time". xNames1. data = GermanIndustry.control( maxiter = summary( cesLm1Me ) cesLm2Me <.max = 1000 ) ) "Y".cesEst( "Y".max = 2000. tName = "time".lm. maxfev = 2000 ) ) = GermanIndustry. iter.control( maxiter = 1000. maxfev = 2000 ) ) summary( cesLm2 ) cesLm3 <. iter. control = nls. xNames2.cesEst( "Y". maxfev = 2000 ) ) "Y". xNames1.cesEst( method = "Newton". control = list( eval. iterlim = 500 ) "Y". control = nls. data = GermanIndustry. tName = "time".max = 2000 ) ) .lm. tName = "time".cesEst( method = "PORT".control( maxiter = 1000. summary( cesNewton2 ) cesNewton3 <.cesEst( method = "Newton".92 Econometric Estimation of the Constant Elasticity of Substitution Function in R method = "L-BFGS-B". data = GermanIndustry. 1000. method = "L-BFGS-B". iter.max = 1000. method = "L-BFGS-B". summary( cesNewton3 ) ## PORT cesPort1 <. tName = "time". control = list( maxit = 5000 summary( cesBfgsCon2 ) cesBfgsCon3 <. multErr = TRUE. tName = "time". control = nls. ) ) data = GermanIndustry.lm. data multErr = TRUE. xNames3.cesEst( "Y".cesEst( "Y".cesEst( method = "PORT". control = nls. maxfev = 2000 ) ) = GermanIndustry. xNames1.control( maxiter = summary( cesLm2Me ) cesLm3Me <. xNames3. control = list( eval.cesEst( "Y".cesEst( "Y". summary( cesPort3 ) = GermanIndustry.lm. control = list( eval. multiplicative error cesPort1Me <. 1000.cesEst( "Y". xNames3.lm. data multErr = TRUE.max = 1000 ) ) "Y". tName = "time". data = GermanIndustry. tName = "time".cesEst( method = "PORT". maxfev = 2000 ) ) summary( cesLm3 ) ## Levenberg-Marquardt. xNames1. tName = "time". 1000. data multErr = TRUE. xNames2.control( maxiter = summary( cesLm3Me ) ## Newton-type cesNewton1 <. tName = "time".cesEst( "Y". tName = "time". control = list( maxit = 5000 summary( cesBfgsCon1 ) cesBfgsCon2 <. data = GermanIndustry. xNames1. maxfev = 2000 ) ) summary( cesLm1 ) cesLm2 <. multiplicative error term cesLm1Me <.cesEst( "Y". data = GermanIndustry. xNames2. tName = "time". tName = "time". data = GermanIndustry. ) ) ## Levenberg-Marquardt cesLm1 <.cesEst( method = "Newton". tName = "time". summary( cesPort2 ) cesPort3 <.max = 1000 ) ) ## PORT. control = nls.max = 1000.

xNames1. xNames3. multErr = TRUE. xNames1.cesEst( "Y". maxfev summary( cesSannLm1 ) cesSannLm2 <.control( trace = itermax = 1e4 ) ) summary( cesDe2 ) cesDe3 <. start = coef( cesNm2 ). method = = GermanIndustry. NP = 500. start = coef( cesNm1 ).Levenberg-Marquardt cesSannLm1 <.control( maxiter = 1000.cesEst( "Y".control( maxiter = 1000.lm. NP = 500. start = coef( cesSann2 ).max = 1000.cesEst( "Y". control = list( eval. tName = "time". xNames1.cesEst( "Y".cesEst( "Y". xNames2. G´eraldine Henningsen summary( cesPort1Me ) cesPort2Me <. NP = 500.control( maxiter = 1000. data method = "DE". iter.try( cesEst( vrs = TRUE. xNames2. maxfev summary( cesSannLm2 ) cesSannLm3 <. tName = "time". control = list( eval. control = nls.control( trace = itermax = 1e4 ) ) summary( cesDe3 ) ## nls cesNls1 <. tName = "time". xNames1. data = GermanIndustry.lm.cesEst( "Y". method = cesNls3 <. control = nls. data = GermanIndustry. FALSE. xNames3. tName = "time". xNames3.cesEst( "Y".try( cesEst( vrs = TRUE. control = DEoptim. multErr = TRUE.cesEst( "Y".control( maxiter = 1000. tName = "time". xNames2. method = "PORT". "nls" ) ) ## NM . tName = "time". summary( cesPort3Me ) tName = "time".max = 1000 ) ) ## DE cesDe1 <.cesEst( "Y". maxfev = 2000 ) ) summary( cesNmLm3 ) ## SANN . "nls" ) ) "Y". "nls" ) ) "Y". start = coef( cesNm3 ).Levenberg-Marquardt cesNmLm1 <. xNames3. tName = "time". tName = "time". data = GermanIndustry. control = DEoptim. tName = "time". control = nls. data = GermanIndustry.lm. FALSE.cesEst( "Y". 93 . data = GermanIndustry.lm. maxfev = 2000 ) ) summary( cesNmLm2 ) cesNmLm3 <. = 2000 ) ) data = GermanIndustry. control = DEoptim. FALSE. tName = "time". data method = "DE". tName = "time".cesEst( "Y". = GermanIndustry.max = 1000. start = coef( cesSann1 ). start = coef( cesSann3 ). = GermanIndustry. xNames3. "Y". xNames2. tName = "time".Arne Henningsen. control = nls. summary( cesPort2Me ) cesPort3Me <. data = GermanIndustry. data = GermanIndustry.try( cesEst( vrs = TRUE. = 2000 ) ) data = GermanIndustry. data = GermanIndustry.control( maxiter = 1000.lm. data = GermanIndustry.control( trace = itermax = 1e4 ) ) summary( cesDe1 ) cesDe2 <. xNames2. control = nls. maxfev = 2000 ) ) summary( cesNmLm1 ) cesNmLm2 <. method = "PORT". data method = "DE".max = 1000 ) ) tName = "time". iter. method = cesNls2 <.

lm. data = GermanIndustry.lm.control( maxiter = 1000. ) ) data = GermanIndustry. data = GermanIndustry.max = 1000 summary( cesNmPort2 ) cesNmPort3 <. tName = "time".cesEst( "Y". xNames3. iter. iter. data = GermanIndustry. xNames2.cesEst( "Y". xNames2. tName = "time".control( maxiter = 1000. tName = "time". control = list( eval.max = 1000.max = 1000 ) summary( cesSannPort1 ) cesSannPort2 <.cesEst( "Y".cesEst( "Y". xNames3.max = 1000 data = GermanIndustry. xNames1. control = list( eval.max = 1000 ) summary( cesSannPort3 ) ## DE . control = nls.PORT cesNmPort1 <.cesEst( "Y". method = "PORT".max = 1000 summary( cesNmPort1 ) cesNmPort2 <. start = coef( cesSann2 ).cesEst( "Y". ) data = GermanIndustry. start = coef( cesDe2 ). tName = "time".Levenberg-Marquardt cesDeLm1 <. ) ) . start = coef( cesDe3 ). ) data = GermanIndustry. tName = "time". control = list( eval.max = 1000. xNames1. ) ) data = GermanIndustry. start = coef( cesNm1 ).cesEst( "Y".max = 1000 summary( cesNmPort3 ) data = GermanIndustry.max = 1000 summary( cesDePort2 ) cesDePort3 <.94 Econometric Estimation of the Constant Elasticity of Substitution Function in R control = nls.cesEst( "Y". iter. ) ) ## SANN . control = list( eval. xNames3. control = list( eval.lm. method = "PORT".cesEst( "Y".max = 1000. maxfev = 2000 ) ) summary( cesDeLm1 ) cesDeLm2 <.max = 1000 summary( cesDePort1 ) cesDePort2 <. method = "PORT". maxfev = 2000 ) ) summary( cesSannLm3 ) ## DE . control = list( eval. start = coef( cesSann1 ). iter.max = 1000 ) summary( cesSannPort2 ) cesSannPort3 <.cesEst( "Y". ) ) data = GermanIndustry. control = nls. start = coef( cesSann3 ). ) ) data = GermanIndustry.max = 1000.control( maxiter = 1000. tName = "time". start = coef( cesDe1 ). tName = "time". xNames1. method = "PORT". iter.lm. control = list( eval.max = 1000. start = coef( cesNm3 ). start = coef( cesNm2 ). method = "PORT". xNames2. iter. start = coef( cesDe2 ).control( maxiter = 1000.max = 1000. xNames1.PORT cesDePort1 <.cesEst( "Y". control = list( eval. iter.cesEst( "Y". tName = "time".max = 1000. method = "PORT". ) data = GermanIndustry. start = coef( cesDe3 ). start = coef( cesDe1 ). control = nls. method = "PORT". xNames3. maxfev = 2000 ) ) summary( cesDeLm3 ) ## NM . tName = "time". iter. tName = "time".PORT cesSannPort1 <. control = list( eval. tName = "time".max = 1000. xNames2. maxfev = 2000 ) ) summary( cesDeLm2 ) cesDeLm3 <. iter. method = "PORT". method = "PORT". tName = "time".max = 1000.

00641 * GermanIndustry$time ) # names of adjusted inputs xNames1f <.GermanIndustry$E * exp( 0.GermanIndustry$E * exp( 0. lambda. rho1 = 0.1813. xNames1f. data = GermanIndustry.00641 * GermanIndustry$time ) GermanIndustry$E3 <. data = GermanIndustry.0069 * GermanIndustry$time ) GermanIndustry$E2 <.1816. rho = 5. rho = 0. rho = 0.cesEst( "Y". rho1 = 0. control = list( maxit = 5000 ) ) summary( cesBfgsFixed3 ) ## Levenberg-Marquardt.cesEst( "Y". "A2".GermanIndustry$K * exp( 0.2155. rho1 = 1. rho_1. method = "BFGS".3654.00641 * GermanIndustry$time ) GermanIndustry$A3 <. data = GermanIndustry. xNames1f. rho1 = 0.1816.cesEst( "Y". maxfev = 2000 ) ) summary( cesLmFixed1 ) cesLmFixed2 <.5300.cesEst( "Y".2155.cesEst( "Y".c( "E3".0069 * GermanIndustry$time ) GermanIndustry$K3 <.8327. control = list( maxit = 5000 ) ) summary( cesBfgsFixed2 ) cesBfgsFixed3 <. xNames3f. rho = 1. xNames2f.Arne Henningsen. data = GermanIndustry.1816. data = GermanIndustry.0222 * GermanIndustry$time ) GermanIndustry$E1 <. lambda.0222 * GermanIndustry$time ) GermanIndustry$A1 <. "A1" ) xNames2f <. rho_1.GermanIndustry$E * exp( 0.cesEst( "Y". data = GermanIndustry. and rho fixed cesNmFixed1 <. rho = 0. control = list( maxit = 5000 ) ) summary( cesNmFixed1 ) cesNmFixed2 <. and rho fixed ##################### # removing technological progress using the lambdas of Kemfert (1998) # (we can do this.0069 * GermanIndustry$time ) GermanIndustry$A2 <. control = list( maxit = 5000 ) ) summary( cesNmFixed3 ) ## BFGS. and rho fixed cesBfgsFixed1 <. xNames1f. control = list( maxit = 5000 ) ) summary( cesBfgsFixed1 ) cesBfgsFixed2 <.GermanIndustry$A * exp( 0. data = GermanIndustry. method = "BFGS".GermanIndustry$K * exp( 0.2155. method = "NM".0222 * GermanIndustry$time ) GermanIndustry$K2 <.5300. rho_1.GermanIndustry$A * exp( 0.c( "K1". method = "NM".5300. rho = 1. data = GermanIndustry. rho = 5.1813. rho1 = 0. method = "BFGS".cesEst( "Y". because the model has constant returns to scale) GermanIndustry$K1 <. rho1 = 0. control = nls. "K3" ) ## Nelder-Mead. rho1 = 0. 95 . rho_1. "A3". rho = 1. G´eraldine Henningsen summary( cesDePort3 ) ############# estimation with lambda. xNames2f. "E2" ) xNames3f <. lambda. and rho fixed cesLmFixed1 <. "E1". xNames3f.control( maxiter = 1000.lm. control = list( maxit = 5000 ) ) summary( cesNmFixed2 ) cesNmFixed3 <.8327.3654. rho1 = 1. method = "NM". xNames2f.GermanIndustry$A * exp( 0.1813.cesEst( "Y".GermanIndustry$K * exp( 0.c( "K2".

rho = 5.lm.1813. xNames3f. lambda.2155. rho = 5. rho1 = 0. control = list( eval. rho_1. maxfev = 2000 ) ) summary( cesLmFixed3 ) ## Levenberg-Marquardt.8327.2155. method = "PORT". xNames2f. data = GermanIndustry.cesEst( "Y". multErr = TRUE. rho = 0. control = nls.max = 1000.cesEst( "Y". xNames1f. maxfev = 2000 ) ) summary( cesLmFixed3Me ) summary( cesLmFixed3Me. rho = 1. multiplicative error term cesLmFixed1Me <. data = GermanIndustry. rho = 0. method = "PORT".max = 1000. ncol = 1 ). iterlim = 500 ) summary( cesNewtonFixed3 ) ## PORT. and rho fixed cesNewtonFixed1 <. cesNewtonFixed2$rss. rSquaredLog = FALSE ) ## Newton-type.max = 1000. multErr = TRUE. and rho fixed print( matrix( c( cesNmFixed1$rss. .max = 1000 ) ) summary( cesPortFixed3 ) # compare RSSs of models with lambda.5300. xNames3f.cesEst( "Y".lm. cesLmFixed3$rss. data = GermanIndustry.cesEst( "Y". cesPortFixed1$rss ).lm. rho1 = 0. rho1 = 0. rho1 = 0.3654.cesEst( "Y".max = 1000 ) ) summary( cesPortFixed1 ) cesPortFixed2 <. cesLmFixed2$rss. rho = 0. digits = 16 ) cesFixed2 <. rho1 = 1. lambda.cesEst( "Y".2155. rho = 5. ncol = 1 ).3654.1813.cesEst( "Y". rho_1. and rho fixed. rho1 = 0.5300. maxfev = 2000 ) ) summary( cesLmFixed1Me ) summary( cesLmFixed1Me.cesLmFixed1 print( matrix( c( cesNmFixed2$rss. cesPortFixed2$rss ). iter. iterlim = 500 ) summary( cesNewtonFixed2 ) cesNewtonFixed3 <.cesEst( "Y". iter. rho = 5. xNames3f. xNames1f. method = "Newton". rho1 = 1.max = 1000 ) ) summary( cesPortFixed2 ) cesPortFixed3 <. rSquaredLog = FALSE ) cesLmFixed3Me <. rho1 = 1.cesEst( "Y". rho_1.1813. rSquaredLog = FALSE ) cesLmFixed2Me <.3654. xNames3f. method = "PORT". data = GermanIndustry. maxfev = 2000 ) ) summary( cesLmFixed2 ) cesLmFixed3 <. method = "Newton". rho1 = 1.control( maxiter = 1000.lm.cesEst( "Y". control = nls. digits = 16 ) cesFixed1 <. data = GermanIndustry.1816. cesLmFixed1$rss.8327.96 Econometric Estimation of the Constant Elasticity of Substitution Function in R control = nls.8327. cesNewtonFixed1$rss. data = GermanIndustry.cesLmFixed2 print( matrix( c( cesNmFixed3$rss. rho_1.control( maxiter = 1000. and rho fixed cesPortFixed1 <. rho = 1. iter. xNames2f.1816. lambda. data = GermanIndustry. control = nls. control = nls. data = GermanIndustry. cesBfgsFixed1$rss. xNames2f.lm.1816.5300.8327. rho = 1. control = list( eval. method = "Newton".control( maxiter = 1000. cesBfgsFixed3$rss. control = list( eval. rho1 = 0.3654.control( maxiter = 1024. data = GermanIndustry.control( maxiter = 1000. iterlim = 500 ) summary( cesNewtonFixed1 ) cesNewtonFixed2 <. maxfev = 2000 ) ) summary( cesLmFixed2Me ) summary( cesLmFixed2Me. cesBfgsFixed2$rss. multErr = TRUE. data = GermanIndustry. xNames1f.

nested = TRUE ) all.cesEst( "Y".cesCalc( xNames2f.cesLmFixed3 ## check if removing the technical progress worked as expected Y2Calc <. xNames2.control( maxiter = 1000.2. rho1 = rhoVec. ncol = 1 ). rho = rhoVec. returnGridAll = TRUE.0069. control = list( maxit = 5000 ) ) summary( cesBfgsGridRho1 ) plot( cesBfgsGridRho1 ) cesBfgsGridRho2 <. cesPortFixed3$rss ). "". 14.4 ) ) ## BFGS. returnGridAll = TRUE. 0.cesEst( "Y". nested = TRUE ) all. control = nls. method = "BFGS". start = coef( cesBfgsGridRho1 ). control = list( maxit = 5000 ) ) summary( cesBfgsGridRho2 ) plot( cesBfgsGridRho2 ) cesBfgsGridRho3 <. start = coef( cesBfgsGridRho3 ). xNames1. Y2TcCalc ) ########## Grid Search for Rho_1 and Rho ############## rhoVec <. method = "BFGS". rho = rhoVec.lm. tName = "time". rho1 = rhoVec. data = GermanIndustry. G´eraldine Henningsen cesNewtonFixed3$rss. xNames3. 1. start = coef( cesBfgsGridRho2 ). 0.equal( Y2Calc. rho = rhoVec.lm. tName = "time". tName = "time". coef( cesFixed2 )[-1] ).cesEst( "Y". data = GermanIndustry. rho1 = rhoVec.c( seq( -1. rho1 = rhoVec. xNames2f ).cesEst( "Y". data = GermanIndustry.equal( Y2Calc. tName = "time".1 ). xNames2. tName = "time". xNames2. rho = rhoVec. data = GermanIndustry. seq( 1. digits = 16 ) cesFixed3 <. coef = c( coef( cesFixed2 )[1]. maxfev = 2000 ) ) summary( cesLmGridRho1 ) plot( cesLmGridRho1 ) cesLmGridRho2 <.cesEst( "Y". control = list( maxit = 5000 ) ) summary( cesBfgsGridRho3 ) plot( cesBfgsGridRho3 ) # BFGS with grid search estimates as starting values cesBfgsGridStartRho1 <. data = GermanIndustry. data = GermanIndustry. method = "BFGS". lambda = 0. 0. control = nls. grid search for rho_1 and rho cesBfgsGridRho1 <. data = GermanIndustry. data = GermanIndustry. data = GermanIndustry. xNames1. control = list( maxit = 5000 ) ) summary( cesBfgsGridStartRho1 ) cesBfgsGridStartRho2 <. 4. control = list( maxit = 5000 ) ) summary( cesBfgsGridStartRho2 ) cesBfgsGridStartRho3 <. tName = "time". method = "BFGS". control = list( maxit = 5000 ) ) summary( cesBfgsGridStartRho3 ) ## Levenberg-Marquardt. xNames1. method = "BFGS".cesEst( "Y". rho1 = rhoVec. grid search for rho1 and rho cesLmGridRho1 <. tName = "time". maxfev = 2000 ) ) 97 . xNames3.4.cesEst( "Y". returnGridAll = TRUE. tName = "time". tName = "time". fitted( cesFixed2 ) ) Y2TcCalc <. returnGridAll = TRUE. returnGridAll = TRUE.2 ). rho = rhoVec.control( maxiter = 1000. data = GermanIndustry. coef = coef( cesFixed2 ).cesCalc( sub( "[123]$".cesEst( "Y". seq( 4.Arne Henningsen. method = "BFGS".

rho1 = rhoVec. = 2000 ) ) "time". start = coef( cesNewtonGridRho1 ). xNames3. xNames3. xNames1. returnGridAll = TRUE. iterlim = 500. returnGridAll = TRUE. method = "Newton".cesEst( "Y". method = "Newton". iterlim = 500 ) summary( cesNewtonGridRho1 ) plot( cesNewtonGridRho1 ) cesNewtonGridRho2 <. data = GermanIndustry. iterlim = 500.lm. tName = "time". tName = start = coef( cesLmGridRho3 ). rho1 = rhoVec. maxfev = 2000 ) ) summary( cesLmGridRho3 ) plot( cesLmGridRho3 ) # LM with grid search estimates as starting values cesLmGridStartRho1 <. control = list( eval.analyticals = FALSE ) summary( cesNewtonGridStartRho3 ) ## PORT. iter. grid search for rho1 and rho cesPortGridRho1 <. control = nls. data = GermanIndustry. returnGridAll = TRUE. method = "Newton". rho = rhoVec.cesEst( "Y". rho1 = rhoVec. tName = start = coef( cesLmGridRho2 ). = 2000 ) ) ## Newton-type. data = GermanIndustry. rho1 = rhoVec. maxfev summary( cesLmGridStartRho3 ) "time". control = nls. tName = "time".cesEst( "Y". method = "Newton". tName = start = coef( cesLmGridRho1 ). iterlim = 500. maxfev summary( cesLmGridStartRho1 ) cesLmGridStartRho2 <. data = GermanIndustry.analyticals = FALSE ) summary( cesNewtonGridStartRho1 ) cesNewtonGridStartRho2 <.lm. xNames3.max = 1000 ) ) . iterlim = 500 ) summary( cesNewtonGridRho3 ) plot( cesNewtonGridRho3 ) # Newton-type with grid search estimates as starting values cesNewtonGridStartRho1 <. xNames2.analyticals = FALSE ) summary( cesNewtonGridRho2 ) plot( cesNewtonGridRho2 ) cesNewtonGridRho3 <.control( maxiter = 1000. grid search for rho_1 and rho cesNewtonGridRho1 <.control( maxiter = 1000. tName = "time".cesEst( "Y".control( maxiter = 1000. start = coef( cesNewtonGridRho3 ).cesEst( "Y". tName = "time". = 2000 ) ) "time". xNames3. iterlim = 500. data = GermanIndustry. maxfev summary( cesLmGridStartRho2 ) cesLmGridStartRho3 <. method = "PORT". xNames1. returnGridAll = TRUE. data = GermanIndustry.cesEst( "Y". tName = "time". xNames1. rho = rhoVec.cesEst( "Y". check. tName = "time". method = "Newton".cesEst( "Y".control( maxiter = 1000. data = GermanIndustry. rho1 = rhoVec. data = GermanIndustry. check. tName = "time". data = GermanIndustry. check.lm.cesEst( "Y". xNames2. control = nls. tName = "time". returnGridAll = TRUE. start = coef( cesNewtonGridRho2 ). xNames1.cesEst( "Y". check. control = nls. data = GermanIndustry. xNames2.lm. method = "Newton". data = GermanIndustry. rho = rhoVec.cesEst( "Y". rho = rhoVec. rho = rhoVec.analyticals = FALSE ) summary( cesNewtonGridStartRho2 ) cesNewtonGridStartRho3 <.98 Econometric Estimation of the Constant Elasticity of Substitution Function in R summary( cesLmGridRho2 ) plot( cesLmGridRho2 ) cesLmGridRho3 <.max = 1000.

control = list( eval. xNames2. seq( 0. method = "PORT".c( "E". "PORT_Start" ) # table for parameter estimates tabCoef <.cesEst( "Y". G´eraldine Henningsen summary( cesPortGridRho1 ) plot( cesPortGridRho1 ) cesPortGridRho2 <. rho = rhoVec. 7. tName = start = coef( cesPortGridRho2 ). method = "PORT". control = list( eval. ) "time". data = GermanIndustry.cesEst( "Y".cesEst( "Y". "E" ) xNames[[ 3 ]] <.6. method = "PORT". data = GermanIndustry. control = list( eval.9.max = 1000 ) summary( cesPortGridStartRho2 ) cesPortGridStartRho3 <.max = 1000 ) summary( cesPortGridStartRho3 ) ######## "time". ) estimations for different industrial sectors ########## # remove years with missing or incomplete data GermanIndustry <. "K" ) # list ("folder") for results indRes <. tName = start = coef( cesPortGridRho3 ). iter. "P".5. 50. iter.. "N". "V".1970 # rhos for grid search rhoVec <.c( "C". iter.GermanIndustry$year . 20.cesEst( "Y". "S". control = list( eval. control = list( eval. tName = "time". length( metNames ) ). tName = "time". data = GermanIndustry. "E".max = 1000.max = 1000. 100 ) # industries (abbreviations) indAbbr <. 1.2 ).subset( GermanIndustry.array( NA. xNames3. ) "time". iter. xNames3. seq( 2. data = GermanIndustry. "A". iter.max = 1000.max = 1000 ) ) summary( cesPortGridRho3 ) plot( cesPortGridRho3 ) # PORT with grid search estimates as starting values cesPortGridStartRho1 <. "F" ) # names of inputs xNames <. xNames1. "A". "PORT". "PORT_Grid". 30. 0. 15. rho = rhoVec.Arne Henningsen.list() xNames[[ 1 ]] <. returnGridAll = TRUE. rho1 = rhoVec. 1 ). "I". 0.max = 1000 ) summary( cesPortGridStartRho1 ) cesPortGridStartRho2 <. rho1 = rhoVec. 12.c( "K".list() # names of estimation methods metNames <. dim = c( 9.c( "LM". 99 . tName = start = coef( cesPortGridRho1 ). 0.max = 1000 ) ) summary( cesPortGridRho2 ) plot( cesPortGridRho2 ) cesPortGridRho3 <. xNames2.c( "K". "A" ) xNames[[ 2 ]] <.max = 1000.max = 1000.3 ). year >= 1970 & year <= 1988. data = GermanIndustry.cesEst( "Y". method = "PORT". 10. ) # adjust the time trend so that it again starts with 0 GermanIndustry$time <. returnGridAll = TRUE. method = "PORT".c( seq( -1.

tmpSum$rss tabConsist[ indNo. c(1:3). rep( c( "rho_1".summary( indRes[[ indNo ]][[ modNo ]]$lm ) ) tmpCoef <. modNo. xIndNames. "beta_". "m_" ). "Y".coef( indRes[[ indNo ]][[ modNo ]]$lm ) tabCoef[ ( 3 * modNo . data = GermanIndustry. "rho".tmpSum$r. sep = "_" ) # sub-list ("subfolder") for all models of this industrie indRes[[ indNo ]] <. "LM" ] <. sep = "" ). ")".tabLambda # table for RSS values tabRss <.cesEst( yIndName. rep( 1:3. maxfev = 2000 ) ) print( tmpSum <. "LM" ] <tmpCoef[ c( "rho_1".lm.tmpCoef[ "lambda" ] tabR2[ indNo.cesEst( yIndName. ". 3 ).2 ):( 3 * modNo ). indAbbr. "lambda" ) ] tabLambda[ indNo. ". . sep = "" ) cat( "=======================================================\n\n" ) # names of industry-specific inputs xIndNames <. metNames ) ) # table for R-squared values tabR2 <. sep = "_" ) # sub-sub-list for all estimation results of this model/industrie indRes[[ indNo ]][[ modNo ]] <. length( metNames ) ). tName = "time". "lambda" ). metNames ) ) # table for technological change parameters tabLambda <. indNo. drop = TRUE ] #econometric estimation with cesEst for( indNo in 1:length( indAbbr ) ) { # name of industry-specific output yIndName <. 1. 3 ). "\n".100 Econometric Estimation of the Constant Elasticity of Substitution Function in R dimnames = list( paste( rep( c( "alpha_". modNo. "LM" ] <. " (". "LM" ] <.list() for( modNo in 1:3 ) { cat( "\n=======================================================\n" ) cat( "Industry No. dim = c( 7. . 3. indNo.array( NA.squared tabRss[ indNo. "rho".tabLambda[ . modNo ] <. control = nls.list() ## Levenberg-Marquardt indRes[[ indNo ]][[ modNo ]]$lm <. modNo.control( maxiter = 1024. each = 3 ). model No. dimnames = list( indAbbr.tmpCoef[ "gamma" ] >= 0 & tmpCoef[ "delta_1" ] >= 0 & tmpCoef[ "delta_1" ] <= 1 & tmpCoef[ "delta" ] >= 0 & tmpCoef[ "delta" ] <= 1 & tmpCoef[ "rho_1" ] >= -1 & tmpCoef[ "rho" ] >= -1 ## PORT indRes[[ indNo ]][[ modNo ]]$port <. modNo.tabLambda # table for economic consistency of LM results tabConsist <. xNames[[ modNo ]].paste( indAbbr[ indNo ].paste( indAbbr[ indNo ]. ". xIndNames.

tmpCoef[ "lambda" ] tabR2[ indNo. "PORT grid start". "PORT_Start" ] <tmpCoef[ c( "rho_1". "PORT" ] <. "DE .2 ):( 3 * modNo ).LM".00641 101 . modNo. iter. "PORT" ] <. data = GermanIndustry. "$\\lambda$" ] <.LM".PORT". "BFGS". "PORT_Start" ] <.max = 2000 ) ) print( tmpSum <. control = list( eval. modNo. tName = "time". G´eraldine Henningsen tName = "time".PORT". "SANN . "$".0222 result2[ "Kemfert (1998)".max = 2000. "SANN".tmpCoef[ "lambda" ] tabR2[ indNo. "LM grid". "SANN . "c". tName = "time". ncol = 9 ) rownames( result1 ) <. indNo. "LM grid start" ) colnames( result1 ) <. "BFGS grid start".PORT". modNo. "Newton grid".tmpSum$r. "PORT_Grid" ] <tmpCoef[ c( "rho_1". "fixed".tmpSum$rss } } ########## tables for presenting estimation results ############ result1 <. "rho". xIndNames. modNo.cesEst( yIndName.0.result1 result1[ "Kemfert (1998)". "PORT" ] <.matrix( NA. "PORT_Start" ] <. "lambda" ) ] tabLambda[ indNo.LM". "$\\lambda$" ] <.tmpSum$r.coef( indRes[[ indNo ]][[ modNo ]]$portStart ) tabCoef[ ( 3 * modNo . "DE . grid search indRes[[ indNo ]][[ modNo ]]$portGrid <. "PORT". "BFGS grid". "rho".summary( indRes[[ indNo ]][[ modNo ]]$portStart ) ) tmpCoef <.c( paste( "$\\". sep = "" ). control = list( eval. start = coef( indRes[[ indNo ]][[ modNo ]]$portGrid ). modNo.squared tabRss[ indNo. "RSS".tmpSum$rss # PORT. control = list( eval.tmpSum$r.max = 2000. xIndNames.0069 result3[ "Kemfert (1998)".squared tabRss[ indNo. "lambda" ) ] tabLambda[ indNo. indNo.squared tabRss[ indNo. "lambda" ) ] tabLambda[ indNo. names( coef( cesLm1 ) ). modNo.max = 2000. "Newton".2 ):( 3 * modNo ). "PORT_Grid" ] <.c( "Kemfert (1998)".coef( indRes[[ indNo ]][[ modNo ]]$port ) tabCoef[ ( 3 * modNo .cesEst( yIndName.0. nrow = 24.tmpSum$rss # PORT. "L-BFGS-B". "$R^2$" ) result3 <.max = 2000 ) ) print( tmpSum <. grid search for starting values indRes[[ indNo ]][[ modNo ]]$portStart <. "PORT_Grid" ] <.summary( indRes[[ indNo ]][[ modNo ]]$port ) ) tmpCoef <.coef( indRes[[ indNo ]][[ modNo ]]$portGrid ) tabCoef[ ( 3 * modNo . iter. iter.0. "NM . modNo. "NM". "NM . "PORT_Grid" ] <. modNo. "PORT_Start" ] <. "PORT" ] <tmpCoef[ c( "rho_1". "Newton grid start". modNo. data = GermanIndustry. "rho". data = GermanIndustry. rho = rhoVec.max = 2000 ) ) print( tmpSum <. indNo. method = "PORT". "DE". rho1 = rhoVec. "PORT grid". method = "PORT".summary( indRes[[ indNo ]][[ modNo ]]$portGrid ) ) tmpCoef <.Arne Henningsen. method = "PORT".tmpCoef[ "lambda" ] tabR2[ indNo.result2 <. "$\\lambda$" ] <. "LM".2 ):( 3 * modNo ).

"$R^2$" ] <. summary( cesFixed1 )$r. cesFixed1$rss. "$\\rho_1$" ] <.9996 result2[ "Kemfert (1998)". ifelse( is.1.makeRow( cesLm3 ) result1[ "NM". ] <. ] <.makeRow( cesBfgsCon2 ) result3[ "L-BFGS-B".5.c( coef( cesFixed1 )[1]. ] <. coef( cesFixed2 )[-1]. "$\\rho$" ] <. coef( cesFixed3 )[-1]. ] <. model$convergence ).1813 result2[ "Kemfert (1998)". ] <. coef( cesFixed1 )[-1].null( model$multErr ) ) { model$multErr <.makeRow( cesPort3 ) result1[ "LM".makeRow( cesNm1 ) result2[ "NM".0222. cesFixed3$rss.makeRow( cesNm2 ) result3[ "NM". ] <.makeRow( cesNewton1 ) result2[ "Newton". ] <.FALSE } result <. ] <.makeRow( cesPort1 ) result2[ "PORT". "$\\rho_1$" ] <.squared ) result2[ "fixed". ] <.makeRow( cesNewton2 ) result3[ "Newton".makeRow( cesLm1 ) result2[ "LM". summary( cesFixed3 )$r.makeRow( cesNm3 ) . ] <.squared ) result3[ "fixed". ] <. "$\\rho$" ] <.8327 result1[ "Kemfert (1998)".5300 result2[ "Kemfert (1998)".102 Econometric Estimation of the Constant Elasticity of Substitution Function in R result1[ "Kemfert (1998)". ] <.makeRow( cesBfgsCon3 ) result1[ "PORT".cesFixed2$convergence. ] <. ] <.makeRow( cesBfgs3 ) result1[ "L-BFGS-B".null( model$convergence ).makeRow( cesBfgsCon1 ) result2[ "L-BFGS-B". model$rss. 0.0069.1.0. ] <.c( coef( model ). ] <.2155 result3[ "Kemfert (1998)". NA.function( model ) { if( is. ] <. 0. ] <. ] <.makeRow( cesBfgs2 ) result3[ "BFGS". summary( cesFixed2 )$r.c( coef( cesFixed2 )[1].0.makeRow( cesNewton3 ) result1[ "BFGS".3654 result1[ "Kemfert (1998)".squared ) return( result ) } result1[ "Newton".9986 result1[ "fixed". summary( model )$r. "$R^2$" ] <. "$\\rho_1$" ] <.0.00641.cesFixed3$convergence.makeRow( cesLm2 ) result3[ "LM". 0.makeRow( cesPort2 ) result3[ "PORT".c( coef( cesFixed3 )[1].squared ) makeRow <. ] <. ] <.0. "$\\rho$" ] <. cesFixed1$convergence.786 result3[ "Kemfert (1998)". cesFixed2$rss.0. "$R^2$" ] <.1816 result3[ "Kemfert (1998)".0.makeRow( cesBfgs1 ) result2[ "BFGS".

] <.LM". ] <.makeRow( cesNmLm2 ) result3[ "NM . ] <.makeRow( cesDePort1 ) result2[ "DE . ] <. ] <.makeRow( cesBfgsGridStartRho3 ) result1[ "PORT grid". ] <. ] <.makeRow( cesSannLm1 ) result2[ "SANN . ] <. ] <.makeRow( cesNmLm1 ) result2[ "NM .makeRow( cesPortGridRho1 ) result2[ "PORT grid". ] <. ] <.makeRow( cesSann1 ) result2[ "SANN".makeRow( cesNewtonGridRho2 ) result3[ "Newton grid".makeRow( cesSannLm3 ) result1[ "SANN .PORT".makeRow( cesBfgsGridRho1 ) result2[ "BFGS grid".makeRow( cesDeLm3 ) result1[ "DE .makeRow( cesPortGridStartRho1 ) result2[ "PORT grid start".makeRow( cesNmLm3 ) result1[ "NM .makeRow( cesPortGridStartRho3 ) result1[ "LM grid".makeRow( cesPortGridRho3 ) result1[ "PORT grid start".PORT".PORT".PORT".makeRow( cesNmPort3 ) result1[ "SANN".LM".makeRow( cesDe3 ) result1[ "DE . ] <.makeRow( cesNewtonGridStartRho1 ) result2[ "Newton grid start".LM".makeRow( cesNewtonGridStartRho3 ) result1[ "BFGS grid". ] <.makeRow( cesSannPort1 ) result2[ "SANN . ] <.LM". ] <. ] <.makeRow( cesNmPort2 ) result3[ "NM . ] <.makeRow( cesNewtonGridRho3 ) result1[ "Newton grid start".makeRow( cesNewtonGridRho1 ) result2[ "Newton grid". ] <.makeRow( cesDeLm2 ) result3[ "DE . ] <.makeRow( cesDePort2 ) result3[ "DE .LM".LM". ] <.LM". ] <.PORT".makeRow( cesNmPort1 ) result2[ "NM .makeRow( cesDePort3 ) result1[ "Newton grid". ] <. ] <.Arne Henningsen.PORT". ] <. ] <.makeRow( cesSannLm2 ) result3[ "SANN .makeRow( cesPortGridStartRho2 ) result3[ "PORT grid start". ] <. ] <. ] <. ] <. ] <.PORT".makeRow( cesBfgsGridStartRho2 ) result3[ "BFGS grid start".makeRow( cesSann2 ) result3[ "SANN".makeRow( cesSann3 ) result1[ "SANN . ] <.makeRow( cesBfgsGridStartRho1 ) result2[ "BFGS grid start".makeRow( cesDe1 ) result2[ "DE". ] <.PORT". ] <.LM". ] <. G´eraldine Henningsen result1[ "NM . ] <. ] <. ] <.makeRow( cesBfgsGridRho2 ) result3[ "BFGS grid".makeRow( cesDeLm1 ) result2[ "DE . ] <.makeRow( cesSannPort3 ) result1[ "DE".makeRow( cesSannPort2 ) result3[ "SANN .PORT".LM".makeRow( cesPortGridRho2 ) result3[ "PORT grid". ] <. ] <.makeRow( cesLmGridRho1 ) 103 . ] <.makeRow( cesBfgsGridRho3 ) result1[ "BFGS grid start".makeRow( cesDe2 ) result3[ "DE". ] <.makeRow( cesNewtonGridStartRho2 ) result3[ "Newton grid start". ] <.

4 ) ) printTable( xTab1. align = "lrrrrrrrrr". "$\\delta_1$" ] ) & ( result[ .makeRow( cesLmGridStartRho2 ) result3[ "LM grid start". file = tempFile. 0. rownames( result ). "& \\\\color{red}" . sep = "" ) return( result ) } printTable <. "$\\delta$" ] > 1 ) | result[ .104 Econometric Estimation of the Constant Elasticity of Substitution Function in R result2[ "LM grid". fileName = "kemfert1Coef. digits = c( 0. ] <. "$\\delta$" ] < 0 | result[ . 6 ).makeRow( cesLmGridRho2 ) result3[ "LM grid".makeRow( cesLmGridStartRho3 ) # create LaTeX tables library( xtable ) colorRows <.text.file() print( xTab. "$\\delta_1$" ] < 0 | result[ . floating = FALSE. "$\\delta_1$" ] >1 | result[ . ] <. 6 ). align = "lrrrrrrrrr".na( result[ .latexLines[ i ] ) latexLines[ i ] <. 6 ). "\\\\color{red}" . fileName ) { tempFile <.sub( "MarkThisRow". fileName = "kemfert2Coef. 0. "" ).xtable( result2.xtable( result3.makeRow( cesLmGridRho3 ) result1[ "LM grid start".tex" ) result3 <.function = function(x){x} ) latexLines <. rep( 4.tex" ) . "MarkThisRow ". value = FALSE ) ) { latexLines[ i ] <. rep( 4. ] <.latexLines[ i ] ) } writeLines( latexLines. rep( 4.colorRows( result1 ) xTab1 <.colorRows( result3 ) xTab3 <. ] <. fileName = "kemfert3Coef. 0.makeRow( cesLmGridStartRho1 ) result2[ "LM grid start". "$\\rho$" ] < -1. 0. ] <.function( result ) { rownames( result ) <.tex" ) result2 <.colorRows( result2 ) xTab2 <.readLines( tempFile ) close( tempFile ) for( i in grep( "MarkThisRow ". 4 ) ) printTable( xTab3. sanitize.gsub( "&". 0. digits = c( 0. 0. digits = c( 0.paste( ifelse( !is.function( xTab. align = "lrrrrrrrrr". "$\\rho_1$" ] < -1 | result[ . latexLines. fileName ) invisible( latexLines ) } result1 <.xtable( result1. 4 ) ) printTable( xTab2.

885–895. 31(10). 79.). 886–895. McCullough BD. Article 4. Caselli F. Springer-Verlag. . URL http://www. London. pp. Russel RR (1989).).” Contributions to Macroeconomics. Anderson R. Lu P.” SIAM Journal for Scientific Computing. Coleman Wilbur John I (2006).S. Arrow KJ. “The Convergence of a Class of Double-rank Minimization Algorithms. Moroney JR (1994). 1055–1089.E.” The American Economic Review.” Journal of Applied Probability. B´elisle CJP (1992).” Journal of Economic Methodology. 76–90. SN Durlauf (eds. “The World Technology Frontier. 1703– 1725.S. 225–250. Donoho DL (1995). 29. “Capital-labor Substitution and Economic Efficiency. Minhas BS.” Journal of the Institute of Mathematics and Its Applications. 3(1). “Accounting for Cross-Country Income Differences. 6. “Why Do New Technologies Complement Skills? Directed Technical Change and Wage Inequality. 96(3). Gilles SP (2006). Blackorby C. 15(1).” In FA Lootsma (ed. Broyden CG (1970). “Is the U.jstor.” Southern Economic Journal.org/stable/1927286. Article 9. Anderson RK. G´eraldine Henningsen 105 References Acemoglu D (1998). Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution. Buckheit J. Caselli F (2005). Handbook of Economic Growth. URL http://www. Nocedal J. 679–742. Vinod HD (2008). “Population Set-Based Global Optimization Algorithms: Some Modifications and Numerical Studies. Numerical Methods for Nonlinear Optimization.org/stable/30034059. 39–43. 1190–1208.Arne Henningsen. “Will the Real Elasticity of Substitution Please Stand up? (A Comparison of the Allan/Uzawa and Morishima Elasticities). Wavelets and Statistics.” Contribution in Macroeconomics. Greene WH. Ali MM. 113(4). 60(4). 882–888.” In P Aghion.” The Review of Economics and Statistics. “A Limited Memory Algorithm for Bound Constrained Optimization. T¨ orn A (2004). Byrd R. Bentolila SJ. Solow RM (1961).” Computers & Operations Research. “A Derivation of Conjugate Gradients.” The Quarterly Journal of Economics. “Substitution and Complementarity in C.).jstor. 16. Chenery BH. North Holland. pp. “The Role of Data/Code Archives in the Future of Economic Research. 99–119. Models. Zhu C (1995).” In A Antoniadis (ed. 43(3). Academic Press. “Explaining Movements in the Labour Share. 4(1). “Wavelab and Reproducible Research. “Convergence Theorems for a Class of Simulated Annealing Algorithms on Rd .” The American Economic Review. Amras P (2004). Beale EML (1972). 499–522.

microsimulation.” Journal of Optimization Theory and Applications. 24. R package version 0. “A Theory of Production. Cambridge University Press. Cobb CW (1928). Cambridge.” Mathematics of Computation. Goldfarb D (1970).lm: R Interface to the Levenberg-Marquardt Nonlinear Least-Squares Algorithm Found in MINPACK. Racine JS (2008). 7. http://CRAN.bell-labs. Prentice Hall. Fletcher R. Applied Production Analysis. 23–26. “Multiple Regimes and Cross-Country Growth Behavior.” Computing Science Technical Report 153.R-project.106 Econometric Estimation of the Constant Elasticity of Substitution Function in R Cerny V (1985). minpack.R-project. URL http: //CRAN.org/package=minpack. A Dual Approach. Davies JB (2009). “Function Minimization by Conjugate Gradients. car: Companion to Applied Regression. . AT&T Bell Laboratories. URL http://www. “A Thermodynamical Approach to the Travelling Salesman Problem: an Efficient Simulation Algorithm.jstor. 1–32. Griliches Z (1969). 49–65.com/ cm/cs/cstr/153. Gay DM (1990). “Combining Microsimulation with CGE and Macro Modelling for Distributional Analysis in Developing and Transition Countries.pdf. 10. R package version 1. 465–468.” The American Economic Review. “A Family of Variable Metric Updates Derived by Variational Means. URL http://netlib. Fox J (2009).” Computer Journal.” Journal of Applied Econometrics. URL http://www. Greene WH (2008).2-16. London.R-project. Mullen KM (2009). Econometric Analysis. Englewood Cliffs (NJ.lm. 48–154.pdf.1-4. Chambers JM. 45.6.org/pss/1926439. pp. Schnabel RB (1983). 6th edition. Johnson PA (1995). Prentice-Hall. Reeves C (1964). R package version 1. Chapman & Hall. Statistical Models in S. “Capital-Skill Complementarity. “Nonparametric Econometrics: The np Package. Dennis JE. URL http://CRAN. Fletcher R (1970). USA). Chambers RG (1988). Douglas PC.org/IJM/V2_1/IJM_2_1_4. Hastie TJ (1992). 18(1). 13. 139–165.” Journal of Statistical Software. 27(5). 51(4). 317–322. Durlauf SN. micEcon: Tools for Microeconomic Analysis and Microeconomic Modeling. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Elzhov TV. “Usage Summary for Selected Optimization Routines. “A New Approach to Variable Metric Algorithms.” International Journal of Microsimulation. Hayfield T.” The Review of Economics and Statistics.” Computer Journal.org/package=car. 365–384.org/package=micEcon. Henningsen A (2010). 41–51.

R-project. micEconCES: Analysis with the Constant Elasticity of Scale (CES) Function. “Estimated Substitution Elasticities of a Nested CES Production Function Approach for Germany. 76(3). Papageorgiou C (2008). Kmenta J (1967). “On Estimation of the CES Production Function. Henningsen A. 1029–1053.org/package=AER.” Marine Resource Economics. Vecchi MP (1983). 671–680. Hamann JD (2007). “The Aggregate Production Function of the Finnish Economy in the Twentieth Century. Ohanian LE. 107. McAdam P.9. Kemfert C (1998).org/package= micEconCES. Klump R. Philadelphia. “Optimization by Simulated Annealing. Mankiw NG. 89(1). “Identifying the Elasticity of Substitution with Biased Technical Change. 407–437. URL http: //www.” Journal of Macroeconomics. R´ıos-Rull JV. 19. Klump R. Le´on-Ledesma MA. URL http://EconPapers. “Editorial Introduction: The CES Production Function in the Theory and Empirics of Economic Growth. Hoff A (2004). Romer D. Luoma A. 24.” International Economic Review. 295–306. 183–192. Krusell P.jstatsoft. 100. http://CRAN.” Econometrica. SIAM Society for Industrial and Applied Mathematics.org/RePEc:foi:wpaper:2011_9. 419–423. 20(3). Kadane J (1967).” American Economic Review. 23(4). 30(2).” Quarterly Journal of Economics.1. Kirkpatrick S. Gelatt CD. “systemfit: A Package for Estimating Systems of Simultaneous Equations in R.” Journal of Statistical Software. Kleiber C. McAdam P. G´eraldine Henningsen 107 Henningsen A. 1330–1357.” Science. “The Linear Approximation of the CES Function with n Input Variables. Violante GL (2000). Luoto J (2010).” The Review of Economics and Statistics. 180–189. repec. Henningsen A. University of Copenhagen. Institute of Food and Resource Economics. AER: Applied Econometrics with R. Maddala G. Iterative Methods of Optimization. R package version 0. http://CRAN. Henningsen G (2011b).” Southern Economic Journal.” FOI Working Paper 2011/9. 68(5). Kelley CT (1999).” Energy Economics. 1–40. 599–600. “Estimation of Returns to Scale and the Elasticity of Substitution. Weil DN (1992). Zeileis A (2009).org/v23/i04/. 220(4598). R package version 1.Arne Henningsen. “Econometric Estimation of the “Constant Elasticity of Substitution” Function in R: Package micEconCES. “A Contribution to the Empirics of Economic Growth. “Factor Substitution and Factor-Augmenting Technical Progress in the United States: A Normalized Supply-Side System Approach.” Econometrica. 723–737. “Capital-skill Complementarity and Inequality: A Macroeconomic Analysis. . Henningsen G (2011a). Willman A (2007). 8. Willman A (2010). 249–264.R-project.

41(4). Ardia D. “The Solow Model with CES Technology: Nonlinearities and Parameter Heterogeneity. McCullough BD. Masanjala WH.org/minpack/. Pandey M (2008). “A Modular System of Algorithms for Unconstrained Minimization. Lampinen JA (2006).” Revue Francaise d’Informatique et de Recherche Op´erationnelle. Papageorgiou C. Papageorgiou C (2004). .org/stable/2098941.jstor.” Journal of Statistical Software. “A Note on Numerical Estimation of Sato’s Two-Level CES Production Function. 11(2).” Economic Analysis and Policy. R: A Language and Environment for Statistical Computing.pdf.R-project. R Foundation for Statistical Computing. Mor´e JJ.” Computer Journal.jstor. Ribi`ere G (1969). Price KV. “Note Sur la Convergence de M´ethodes de Directions Conjugu´ees. Sato K (1967). “A Two-Level Constant-Elasticity-of-Substitution Production Function. 30(4).” MPRA Paper 1019. North-Eastern Hill University. “Human Capital Aggregation and Relative Wages Across Countries. 16. Polak E.org/stable/2296809. 201–218. 11(4). URL http://www. “Open Access Economics Journals and the Market for Reproducible Economic Research. Garbow BS. 43.org/v40/i06. Department of Economics. Natural Computing. “Constant Elasticity of Substitution Production Function.org. 35–43. “An Algorithm for Least-Squares Estimation of Non-linear Parameters. Differential Evolution – A Practical Approach to Global Optimization.” Working paper 2005-07.” ACM Transactions on Mathematical Software. URL http://www. Austria.” The Review of Economic Studies.jstatsoft.” Journal of Applied Econometrics. McFadden D (1963). “Two-Level CES Production Technology in the Solow and Diamond Growth Models. Mead R (1965). Gil DL. “Do Economics Journal Archives Promote Replicable Research?” Canadian Journal of Economics. 419–440. McGeary KA. Louisiana State University. 7. Schnabel RB. Nelder JA. “DEoptim: An R Package for Global Optimization by Differential Evolution.” Journal of Macroeconomics. 431–441.netlib. URL http: //www. URL http://www. URL http://www.” The Review of Economic Studies. 73–83. R Development Core Team (2011). “A Simplex Algorithm for Function Minimization. Harrison TD (2008). pages. 1406–1420. Koontz JE.108 Econometric Estimation of the Constant Elasticity of Substitution Function in R Marquardt DW (1963). Shillong. MINPACK. 117–126. Springer-Verlag.edu/~bdm25/eap-1. URL http://www. 39(1). ISBN 3-900051-07-0. Windover D. 19(2).” Journal of the Society for Industrial and Applied Mathematics. Cline J (2011). Vienna. ISBN 540209506. 40(6). 171–201. Mishra SK (2007). 1–26.drexel. 308–313. McCullough BD (2009). 1587–1601. Hillstrom KE (1980). Mullen KM. Weiss BE (1985). Storn RM. Saam M (2005). 30. Argonne National Laboratory.

” Mathematics of Computation.” Macromoli: Goetz Uebe’s notebook on macroeconometric models and literature. Denmark E-mail: gehe@risoe. Thursby JG (1980). Uebe G (2000). Kumbhakar SC (2011).” Computing in Science & Engineering. 2(6). Karrenbach M. “CES Estimation Techniques. Affiliation: Arne Henningsen Institute of Food and Resource Economics University of Copenhagen Rolighedsvej 25 1958 Frederiksberg C.henningsen@gmail.” Journal of Applied Econometrics.name/ G´eraldine Henningsen Systems Analysis Division Risø National Laboratory for Sustainable Energy Technical University of Denmark Frederiksborgvej 399 4000 Roskilde.hsu-hh. “Conditioning of Quasi-Newton Methods for Function Minimization. 341–359. 647–656. 421–441. “Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. 29. Shanno DF (1970). 295–299.de/ uebe/Lexikon/K/Kmenta.” Journal of Global Optimization. 363–377. Denmark E-mail: arne.” The Review of Economic Studies. 26(4).dk . http://www2. “Making Scientific Computations Reproducible.” International Economic Review. 288(6). Price K (1997). 291–299.dtu. Sun K.Arne Henningsen. Storn R. 19(2). Claerbout J (2000).com URL: http://www. “Biases in Approximating Log Production. Sorenson HW (1969). “In Investigation of the Kmenta Approximation to the CES Function.pdf [accessed 2010-02-09]. Uzawa H (1962). Thursby JG. “Comparison of Some Conjugate Direction Procedures for Function Minimization. 11.” Journal of the Franklin Institute. Lovell CAK (1978). G´eraldine Henningsen 109 Schwab M. 61–67. “Production Functions with Constant Elasticity of Substitution. Henderson DJ. “Kmenta Approximation of the CES Production Function.arne-henningsen. 24. 62.” The Review of Economics and Statistics. 708–714.