
Two-Stage Least Squares (2SLS) and Structural Equation Models (SEM)
by Eddie Oczkowski
http://csusap.csu.edu.au/~eoczkows/home.htm
May 2003

These notes describe the 2SLS estimator for latent variable models developed by Bollen (1996). The technique estimates the measurement model and the structural model of a SEM separately. One can therefore use it either as a stand-alone procedure for a full SEM or combine it with factor analysis, for example, establishing the measurement model using factor analysis and then employing 2SLS for the structural model only.

The advantages of using 2SLS over the more conventional maximum likelihood (ML) method for SEM include:

- It does not require any distributional assumptions for RHS independent variables; they can be non-normal, binary, etc.
- In the context of a multi-equation non-recursive SEM it isolates specification errors to single equations, see Bollen (2001).
- It is computationally simple and does not require the use of numerical optimisation algorithms.
- It easily caters for non-linear and interaction effects, see Bollen and Paxton (1998).
- It permits the routine use of often ignored diagnostic testing procedures for problems such as heteroscedasticity and specification error, see Pesaran and Taylor (1999).
- Simulation evidence from econometrics suggests that 2SLS may perform better in small samples than ML, see Bollen (1996, pp. 120-121).

There are, however, some disadvantages in using 2SLS compared to ML. These include:

- The ML estimator is more efficient than 2SLS given its simultaneous estimation of all relationships; hence ML will always dominate 2SLS in sufficiently large samples if all assumptions are valid and the model specification is correct. Effectively ML is more efficient (if the model is valid) because it uses much more information than 2SLS.
- Unlike the ML method, the 2SLS estimator depends upon the choice of reference variable. The implication is that different 2SLS estimates result from different scaling variables.
- Programs with diagram facilities such as EQS do not exist for 2SLS. One needs to work logically through the structure of the model to specify individual equations for all the relationships for the 2SLS estimator.

Substantive applications of the 2SLS estimator for latent variable models include: Li and Harmer (1998), Oczkowski and Farrell (1998), Farrell (2000), Oczkowski (2001), Farrell and Oczkowski (2002), and Smith, Oczkowski, Noble and Macklin (2002, 2003).

2SLS Estimation Basics

Consider a simple regression model:

y = α + βx + u        (1)

where
y is the dependent variable
x is the independent variable
α and β are estimable parameters
u is the error term

If x and u are correlated then this violates an assumption of the regression framework. Applying standard ordinary least squares (OLS) to eqn (1) under these circumstances results in inconsistent estimates; that is, even as the sample size approaches infinity the parameter estimates on average will not equal the population parameters. To remedy this problem one can apply 2SLS, also called the instrumental variables (IV) procedure. To implement 2SLS we need to identify one or more instruments for x. These instruments (call them z) must satisfy two conditions:

1. z must be uncorrelated with u.
2. z must be correlated with x.

In any software package which supports 2SLS/IV simply specify x as the independent (explanatory) variable and z as the instrumental variable. This method will produce consistent parameter estimates even given the correlation between x and u. The only condition for identification is that the number of instruments is greater than or equal to the number of independent variables. There are two ways to get 2SLS/IV estimates. The first is through a direct 2SLS/IV option available in packages such as SPSS and SHAZAM. These are based on a single IV expression which involves matrix algebra. This approach will produce consistent estimates and accurate standard errors. The second method (as the 2SLS name suggests) is to run two OLS regressions:

1. OLS regression of x on z, and get predictions for x, say x̂.
2. OLS regression of y on x̂.

By forming predictions for x in the 2nd stage through the instruments z we correct for the correlation between the error term and the independent variable. This will produce 2SLS parameter estimates, the same as the estimates produced by the direct 2SLS/IV option. However, the standard errors from the two-step procedure will be incorrect: employing x̂ in the second stage rather than x inaccurately measures the standard error estimates. It is recommended therefore that the direct 2SLS/IV option be employed to get parameter and standard error estimates; the sketch below illustrates both routes. In choosing the number of instruments to employ in 2SLS, asymptotically (as n approaches infinity) the larger the number of instruments the better in terms of efficiency. However, the small sample bias of the estimator may get worse as the number of instruments increases. Further, as more instruments are employed degrees of freedom are lost and this will weaken the power of statistical tests.
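The following minimal sketch contrasts the two routes just described. It uses Python with numpy rather than the packages mentioned above, and all data and parameter values are invented purely for illustration; it is a sketch of the mechanics, not anyone's production code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate a regressor x that is correlated with the error u,
# and an instrument z that drives x but is unrelated to u.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # x correlated with u
y = 1.0 + 2.0 * x + u                        # hypothetical intercept 1, slope 2

X = np.column_stack([np.ones(n), x])         # regressors (with constant)
Z = np.column_stack([np.ones(n), z])         # instruments (with constant)

# --- Direct 2SLS/IV: beta = (X'PzX)^(-1) X'Pz y, with Pz = Z(Z'Z)^(-1)Z' ---
PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_iv = np.linalg.solve(PzX.T @ X, PzX.T @ y)
resid_iv = y - X @ beta_iv                   # residuals use the actual x
se_iv = np.sqrt(np.diag(resid_iv @ resid_iv / (n - 2)
                        * np.linalg.inv(PzX.T @ X)))

# --- Two-step OLS version: same coefficients, but naive standard errors ---
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]      # stage 1: predict x from z
X_hat = np.column_stack([np.ones(n), x_hat])
beta_2step = np.linalg.lstsq(X_hat, y, rcond=None)[0] # stage 2: y on x_hat
resid_2step = y - X_hat @ beta_2step                  # uses x_hat, hence wrong SEs
se_2step = np.sqrt(np.diag(resid_2step @ resid_2step / (n - 2)
                           * np.linalg.inv(X_hat.T @ X_hat)))

print("IV coefficients:      ", beta_iv)
print("Two-step coefficients:", beta_2step)   # identical (up to rounding)
print("Correct IV SEs:       ", se_iv)
print("Naive two-step SEs:   ", se_2step)     # generally unreliable
```

Running the sketch shows the two coefficient vectors agreeing while the second-stage standard errors drift away from the correct IV ones, which is the practical reason for preferring the direct option.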

To present the mechanics of the 2SLS estimator for latent variable models we will work through a series of simple examples. These are examples only and generalise to more complex models.

Structural Equation Estimation: The Single Equation Case

Consider the following latent variable model:

[Path diagram: the latent independent variable ξ1 (indicators x1, x2, x3) has a single arrow to the latent dependent variable η1 (indicators y1, y2, y3).]

η1 = α0 + γ1 ξ1 + u1        (2)

where
η1 is the latent dependent variable with 3 indicators (y1, y2, y3)
ξ1 is the latent independent variable with 3 indicators (x1, x2, x3)
α0 and γ1 are estimable parameters
u1 is the disturbance error term.


Assume that the measurement models for η1 and ξ1 are:
y1 = λy1 η1 + ε1
y2 = λy2 η1 + ε2
y3 = λy3 η1 + ε3
x1 = λx1 ξ1 + δ1
x2 = λx2 ξ1 + δ2
x3 = λx3 ξ1 + δ3        (3)

where

the λ's are factor loadings and the ε's and δ's are measurement errors.

Bollen (1996) suggests the following procedure. Choose a scaling (or reference) variable for each latent variable, say y1 for η1 and x1 for ξ1; this implies the corresponding loadings are set to unity. These scaling variables should be those that best reflect the constructs theoretically or empirically (the highest standardised factor loading). This allows us to write:
y1 = η1 + ε1
x1 = ξ1 + δ1        (4)

Combining eqns (2) and (4) allows us to write (2) in observable variable terms only:

y1 = α0 + γ1 x1 + u,   where u = u1 + ε1 - γ1 δ1        (5)

u is a new composite error term. Clearly x1 is correlated with u since both x1 and u depend upon δ1. This mimics eqn (1); therefore OLS cannot be applied to (5) and instead a 2SLS procedure is needed. To identify suitable instruments for (5) we need to find variables which are not correlated with u but are highly correlated with x1. The non-scaling items for ξ1 (x2, x3) are suitable: they are expected to be highly correlated with x1 given that they are all indicators of the same construct, and they are not correlated with u (as we assume that measurement errors are uncorrelated). Note, y2 and y3 are not suitable instruments as they are correlated with u, since u, y2 and y3 all depend upon u1. In sum, to estimate eqn (2) perform a 2SLS regression of y1 on x1, with instruments x2 and x3.

So the general principle is that non-scaling item indicators of the independent variable can be used as instruments, but not non-scaling items of the dependent variable, as they correlate with the composite error term. Effectively, any variable that has either a direct or indirect effect on the dependent variable is not a candidate as an instrumental variable as it will be correlated with the composite error term. That is, if a causal chain exists between the composite error term and a variable then that variable is not a valid instrument. In some situations it is difficult to determine whether an instrument is valid. To ascertain the validity of an instrument you need to explicitly determine whether the covariance between the instrument and the composite error is zero. To illustrate this we evaluate, from first principles, some examples. The following rules are useful:
cov(X, Y) = E(XY) - E(X)E(Y)        (5a)
cov(aX + bY, cZ) = cov(aX, cZ) + cov(bY, cZ)        (5b)

Consider eqn (5) and first the validity of x2 as an instrument (i.e., cov(x2, u) = 0) and then the invalidity of y2 as an instrument (i.e., cov(y2, u) ≠ 0).
cov(x2, u) = cov(λx2 ξ1 + δ2, u1 + ε1 - γ1 δ1)        (use (3), (5))

= cov(λx2 ξ1, u1) + cov(λx2 ξ1, ε1) - cov(λx2 ξ1, γ1 δ1) + cov(δ2, u1) + cov(δ2, ε1) - cov(δ2, γ1 δ1)        (use (5b))

= E(λx2 ξ1 u1) - E(λx2 ξ1)E(u1) + E(λx2 ξ1 ε1) - E(λx2 ξ1)E(ε1) - E(λx2 ξ1 γ1 δ1) + E(λx2 ξ1)E(γ1 δ1) + E(δ2 u1) - E(δ2)E(u1) + E(δ2 ε1) - E(δ2)E(ε1) - E(δ2 γ1 δ1) + E(δ2)E(γ1 δ1)        (use (5a))

Because of the assumptions of zero mean errors and independence between u1, ε1, δ1 and δ2, and that ξ1 is independent of all error terms, all the expected value expressions are zero and so cov(x2, u) = 0.
cov(y2, u) = cov(λy2 η1 + ε2, u1 + ε1 - γ1 δ1)        (use (3), (5))

= cov(λy2 η1, u1) + cov(λy2 η1, ε1) - cov(λy2 η1, γ1 δ1) + cov(ε2, u1) + cov(ε2, ε1) - cov(ε2, γ1 δ1)        (use (5b))

= E(λy2 η1 u1) - E(λy2 η1)E(u1) + E(λy2 η1 ε1) - E(λy2 η1)E(ε1) - E(λy2 η1 γ1 δ1) + E(λy2 η1)E(γ1 δ1) + E(ε2 u1) - E(ε2)E(u1) + E(ε2 ε1) - E(ε2)E(ε1) - E(ε2 γ1 δ1) + E(ε2)E(γ1 δ1)        (use (5a))

Because of the assumptions of zero mean errors and independence between u1, ε1, ε2 and δ1, many of these expressions are zero, however not all. In the terms involving η1 we need to substitute the RHS of eqn (2), which among other things inserts u1 wherever η1 appears. This makes (at least) the first term non-zero:
E(λy2 η1 u1) = E(λy2 (α0 + γ1 ξ1 + u1) u1) = E(λy2 α0 u1) + E(λy2 γ1 ξ1 u1) + E(λy2 u1 u1)

The last term is non-zero (it depends on the variance of u1) and so cov(y2, u) ≠ 0, which means that y2 is not a valid instrument.

To assess how good the instruments are, the R²s from the equations used to generate x̂ in the 1st stage of 2SLS should be examined. If these are lower than 0.10 then the instruments are most likely inappropriate, that is, they are insufficiently correlated with x. In fact, if the instruments are very weakly correlated with the regressors, then tests of hypotheses become very inaccurate, even in large samples. Another useful check of the relevance of instruments is to ensure that the F statistics from the first stage regressions exceed 10, see Stock and Watson (2003, ch 10). If the measurement scales for x have good reliability properties then these 1st stage R²s and F statistics will usually be acceptably high. The simulation sketch below illustrates the single equation procedure and these relevance checks.
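The following is a minimal simulation sketch of the single equation case in Python with numpy; all population values (0.5, 0.7, the loadings and error scales) are invented for illustration and are not drawn from Bollen (1996).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical population values for model (2)-(3); the loadings on the
# scaling items y1 and x1 are fixed at 1.
xi1 = rng.normal(size=n)                                   # latent xi1
eta1 = 0.5 + 0.7 * xi1 + rng.normal(scale=0.5, size=n)     # eqn (2)

x1 = xi1 + rng.normal(scale=0.6, size=n)                   # scaling item for xi1
x2 = 0.9 * xi1 + rng.normal(scale=0.6, size=n)
x3 = 0.8 * xi1 + rng.normal(scale=0.6, size=n)
y1 = eta1 + rng.normal(scale=0.6, size=n)                  # scaling item for eta1

def tsls(y, X, Z):
    """2SLS/IV estimates (a constant is added to regressors and instruments)."""
    n = len(y)
    X = np.column_stack([np.ones(n), X])
    Z = np.column_stack([np.ones(n), Z])
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(PzX.T @ X, PzX.T @ y)

# Eqn (5): regress y1 on x1 using the non-scaling items x2, x3 as instruments
print("2SLS:", tsls(y1, x1, np.column_stack([x2, x3])))

# OLS on (5) is inconsistent because x1 and the composite error share delta1
X = np.column_stack([np.ones(n), x1])
print("OLS :", np.linalg.lstsq(X, y1, rcond=None)[0])

# Instrument relevance: first-stage R^2 and F statistic (want R^2 > 0.10, F > 10)
Z = np.column_stack([np.ones(n), x2, x3])
e1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
r2 = 1 - e1 @ e1 / ((x1 - x1.mean()) @ (x1 - x1.mean()))
f_stat = (r2 / 2) / ((1 - r2) / (n - 3))
print("first-stage R^2 = %.3f, F = %.1f" % (r2, f_stat))
```

In this setup the 2SLS estimates sit close to the assumed values (0.5, 0.7) while the OLS slope is attenuated towards zero, and the first-stage statistics comfortably pass the thresholds mentioned above.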
Other Variables

If eqn (2) has more independent latent variables then set a scaling variable for each independent variable; these enter as additional RHS variables and the non-scaling variables enter as additional instruments. If observed variables (not latent) enter directly as independent variables, then these appear as both RHS variables and as instruments, that is, they serve as their own instruments.

Dependent Variable Treatment

Note, in this single equation case consistent 2SLS estimates can be gained even if summated or factor-based composite variables are used for η1. Instruments are only needed for the RHS variables. Measurement error in the dependent variable does not affect the consistency of the regression estimator; only measurement error in the independent variables causes estimation problems.

Structural Equation Estimation: Simultaneous Equations - The Recursive (No Feedback) Case

Consider the following latent variable model without feedback but involving two dependent variables and hence two equations:

[Path diagram: ξ1 → η1 and ξ2 → η2, with η2 → η1 and no arrow from η1 back to η2.]

Note there is no feedback of η1 back into η2; all arrows move left to right. In equation form we have:

η1 = α0 + β1 η2 + γ2 ξ1 + u1        (6)
η2 = α0 + γ1 ξ2 + u2        (7)

where
η1 is a latent variable with 3 indicators (y1, y2, y3)
η2 is a latent variable with 3 indicators (z1, z2, z3)
ξ1 is a latent variable with 3 indicators (x1, x2, x3)
ξ2 is a latent variable with 3 indicators (w1, w2, w3)
the β's and γ's are estimable parameters
u1 and u2 are the disturbance error terms.

Note, here η2 serves as a dependent variable in (7) but as an independent variable in (6). The procedure as described for the single equation case can also be applied here. Consider eqns (6) and (7) and assume the following scaling variables (y1, z1, x1, w1); this allows us to re-write (6) and (7) in observed variables as:

y1 = α0 + β1 z1 + γ2 x1 + u1*        (8)
z1 = α0 + γ1 w1 + u2*        (9)

here u1* and u2* define new composite error terms.

For eqn (8) the non-scaling zs, xs and ws can be employed as instruments, but not the non-scaling ys as they depend upon u1*. Note, the non-scaling items for η2 (the zs) are valid for eqn (8) since η2 does not depend upon η1, as there is no feedback. For eqn (9) the non-scaling xs and ws are valid instruments, but not the non-scaling zs as they depend upon u2*. Note, the non-scaling items for η1 (the ys) are not valid for eqn (9) since η1 does depend upon η2 and hence is related to u2* (the causal chain runs from u2* through η2 to η1 and so to the ys). So generally the same principle as in the single equation case applies here: the non-scaling items of the RHS variables can be used as instruments. This principle applies even if one of the RHS variables is a dependent variable in another equation; this is permitted given the recursive nature of the equations. In this case, other variables may also serve as instruments depending upon the causal chain from the composite error term.
Structural Equation Estimation: Simultaneous Equations - The Non-Recursive (Feedback) Case

Consider the following latent variable model with feedback but involving two dependent variables and hence two equations:

[Path diagram: ξ1 → η1 and ξ2 → η2, with arrows running in both directions between η1 and η2.]

Note there is feedback between η1 and η2. In equation form we have:

η1 = α0 + β1 η2 + γ2 ξ1 + u1        (10)
η2 = α0 + β1 η1 + γ2 ξ2 + u2        (11)

where
η1 is a latent variable with 3 indicators (y1, y2, y3)
η2 is a latent variable with 3 indicators (z1, z2, z3)
ξ1 is a latent variable with 3 indicators (x1, x2, x3)
ξ2 is a latent variable with 3 indicators (w1, w2, w3)
the β's and γ's are estimable parameters
u1 and u2 are the disturbance error terms.

Note, here η1 serves as a dependent variable in (10) but as an independent variable in (11), while η2 serves as a dependent variable in (11) but as an independent variable in (10). Assume the following scaling variables (y1, z1, x1, w1); this allows us to re-write (10) and (11) in observed variables as:

y1 = α0 + β1 z1 + γ2 x1 + u1*        (12)
z1 = α0 + β1 y1 + γ2 w1 + u2*        (13)

here u1* and u2* define new composite error terms. Consider eqn (12): the non-scaling xs and ws can be employed but not the non-scaling zs and ys. As before, the ys are directly correlated with u1*. The logic for the unsuitability of the zs is more involved, but the causal link exists as: u1* → η1 → η2 and hence the zs. Consider eqn (13): the non-scaling xs and ws can be employed but not the non-scaling zs and ys. As before, the zs are directly correlated with u2*. The logic for the unsuitability of the ys is more involved, but the causal link exists as: u2* → η2 → η1 and hence the ys. So for the non-recursive case the general principle that all non-scaling items of RHS variables can be used as instruments is no longer valid. Only the non-scaling items of the RHS variables which are not dependent variables in other equations can be employed as instruments. In this case, other variables may also serve as instruments depending upon the causal chain from the composite error term. A simulation sketch of the equation-by-equation estimation of (12) and (13) is given below.
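The following is a minimal sketch of equation-by-equation 2SLS for the feedback model, in Python with numpy. All structural values, loadings and error scales are invented for illustration, and the reduced form used to generate the latent variables is simply the algebraic solution of (10)-(11) with zero intercepts.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical structural values for the feedback model (10)-(11)
b1, g1 = 0.4, 0.6     # eqn (10): eta1 on eta2 and xi1
b2, g2 = 0.3, 0.5     # eqn (11): eta2 on eta1 and xi2
xi1, xi2 = rng.normal(size=n), rng.normal(size=n)
u1, u2 = rng.normal(scale=0.5, size=n), rng.normal(scale=0.5, size=n)

# Reduced form of the simultaneous system (zero intercepts for simplicity)
det = 1 - b1 * b2
eta1 = (g1 * xi1 + b1 * g2 * xi2 + u1 + b1 * u2) / det
eta2 = (b2 * g1 * xi1 + g2 * xi2 + b2 * u1 + u2) / det

def items(latent, loadings, scale=0.5):
    """Generate indicators; the first (scaling) loading is 1."""
    return [l * latent + rng.normal(scale=scale, size=n) for l in loadings]

y1, y2, y3 = items(eta1, [1, 0.9, 0.8])
z1, z2, z3 = items(eta2, [1, 0.9, 0.8])
x1, x2, x3 = items(xi1, [1, 0.9, 0.8])
w1, w2, w3 = items(xi2, [1, 0.9, 0.8])

def tsls(y, X, Z):
    n = len(y)
    X = np.column_stack([np.ones(n)] + list(X))
    Z = np.column_stack([np.ones(n)] + list(Z))
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(PzX.T @ X, PzX.T @ y)

# Eqn (12): y1 on z1, x1; only non-scaling xs and ws are valid instruments
print("eqn (12):", tsls(y1, [z1, x1], [x2, x3, w2, w3]))
# Eqn (13): z1 on y1, w1; the same instrument set applies
print("eqn (13):", tsls(z1, [y1, w1], [x2, x3, w2, w3]))
```

With these made-up values the eqn (12) estimates sit near (0, 0.4, 0.6) and the eqn (13) estimates near (0, 0.3, 0.5), illustrating that restricting the instrument set to the non-scaling xs and ws is what delivers consistency here.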
Modelling Interaction Effects

Bollen and Paxton (1998) discuss modelling interaction or moderator effects in great detail. Effectively most of the concepts discussed above apply, except for a few minor modifications. Consider the following model:

η1 = α0 + γ1 ξ1 + γ2 ξ2 + γ3 ξ1ξ2 + u1        (14)

where
η1 is the latent dependent variable with 3 indicators (y1, y2, y3)
ξ1 is a latent independent variable with 3 indicators (x1, x2, x3)
ξ2 is a latent independent variable with 3 indicators (w1, w2, w3)
ξ1ξ2 is the moderator or interaction term
α0, γ1, γ2 and γ3 are estimable parameters
u1 is the disturbance error term.

Assume the following scaling variables (y1, x1, w1); this allows us to re-write (14) in observed variables as:

y1 = α0 + γ1 x1 + γ2 w1 + γ3 x1w1 + u1*        (15)

Note here the interaction latent variable is replaced by the interaction between the corresponding scaling variables. The valid instruments are the non-scaling xs and ws, and also all the products of the non-scaling xs and ws (x2w2, x2w3, x3w2, x3w3); these are instruments for x1w1. As before, the non-scaling ys are not valid as they directly depend upon the composite error term. In general the number of instruments resulting from the new interaction term equals the number of non-scaling items for ξ1 times the number of non-scaling items for ξ2; for our example it is 2 x 2 = 4. Clearly, for scales with a large number of items this number will increase quickly and may consume a large number of degrees of freedom. A resolution to this dilemma is to parcel the non-scaling items into groups (sum them) before multiplying to get the interaction instruments. A simulation sketch of the interaction setup is given below. A similar strategy to modelling interaction effects can be used for a non-linear model which includes a squared latent independent variable. Here the chosen scaling variable is squared to act as the independent variable and the squares of the corresponding non-scaling variables act as instruments. Note however, in this quadratic case a modification is needed to the intercept parameter to get consistent estimates, see Bollen (1995, pp. 240-241).
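The following minimal sketch, in Python with numpy, sets up eqn (15) with the instrument list just described. All parameter values and error scales are invented for illustration; it shows the mechanics of forming the product instruments rather than reproducing Bollen and Paxton's own examples.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Hypothetical values for the interaction model (14)
xi1, xi2 = rng.normal(size=n), rng.normal(size=n)
eta1 = 0.5 + 0.6 * xi1 + 0.4 * xi2 + 0.3 * xi1 * xi2 + rng.normal(scale=0.5, size=n)

def items(latent, loadings, scale=0.5):
    """Generate indicators; the first (scaling) loading is 1."""
    return [l * latent + rng.normal(scale=scale, size=n) for l in loadings]

y1, y2, y3 = items(eta1, [1, 0.9, 0.8])
x1, x2, x3 = items(xi1, [1, 0.9, 0.8])
w1, w2, w3 = items(xi2, [1, 0.9, 0.8])

def tsls(y, X, Z):
    n = len(y)
    X = np.column_stack([np.ones(n)] + list(X))
    Z = np.column_stack([np.ones(n)] + list(Z))
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(PzX.T @ X, PzX.T @ y)

# Eqn (15): regressors x1, w1 and x1*w1; instruments are the non-scaling
# items plus all cross-products of non-scaling items (2 x 2 = 4 of them)
regressors = [x1, w1, x1 * w1]
instruments = [x2, x3, w2, w3, x2 * w2, x2 * w3, x3 * w2, x3 * w3]
print(tsls(y1, regressors, instruments))
```

With these assumed values the estimates land near (0.5, 0.6, 0.4, 0.3); parcelling the non-scaling items before multiplying would simply shrink the instrument list.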
Intrinsically Non-Linear Models

The interaction model is an example of a non-linear model that is linear in its parameters. As such, 2SLS can be easily adapted to gain consistent estimates. In other cases models may be non-linear in parameters, and here the 2SLS method is no longer applicable. For example, when dealing with dichotomous dependent variables we need to use logit or probit models. These models are intrinsically non-linear. To estimate a non-linear model in the spirit of 2SLS we can employ the Generalised Method of Moments (GMM) estimator. GMM is effectively a more general version of 2SLS which can handle non-linear models. Conceptually we can use the Bollen (1996) methodology with GMM. That is, choose scaling variables as independent variables and non-scaling variables as instruments and apply GMM. Programs such as SHAZAM and LIMDEP support GMM; one needs to write some code to define the non-linear function. For example, the LIMDEP code for the logit GMM estimator is provided by Foster (1997), and a rough sketch of the idea is given below. For an application of combining probit GMM with Bollen's latent variable approach see Smith, Oczkowski, Noble and Macklin (2002, 2003).
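The following is a rough conceptual sketch of the idea in Python (numpy/scipy) rather than LIMDEP or SHAZAM: the scaling item enters as the regressor, the non-scaling items as instruments, and a simple GMM objective built from the moment conditions E[z(y - Φ(xβ))] = 0 is minimised numerically. All data, parameter values and the weighting matrix are assumptions made for illustration; consult Foster (1997) for a careful treatment.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000

# Hypothetical latent-variable probit: xi1 drives a binary outcome, x1 is the
# scaling item (an error-ridden regressor), x2 and x3 act as instruments.
xi1 = rng.normal(size=n)
x1 = xi1 + rng.normal(scale=0.5, size=n)
x2 = 0.9 * xi1 + rng.normal(scale=0.5, size=n)
x3 = 0.8 * xi1 + rng.normal(scale=0.5, size=n)
y = (-0.2 + 0.8 * xi1 + rng.normal(size=n) > 0).astype(float)

X = np.column_stack([np.ones(n), x1])        # regressors: constant and scaling item
Z = np.column_stack([np.ones(n), x2, x3])    # instruments: non-scaling items

def gmm_objective(beta):
    # Moment conditions: instruments orthogonal to the probit "residual"
    resid = y - norm.cdf(X @ beta)
    gbar = Z.T @ resid / n
    W = np.linalg.inv(Z.T @ Z / n)           # simple first-step weighting matrix
    return gbar @ W @ gbar

result = minimize(gmm_objective, x0=np.zeros(2), method="Nelder-Mead")
print("GMM probit estimates:", result.x)
```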
Measurement Model Estimation

Even though the 2SLS estimator has mainly been employed to estimate the structural model, it can also be used to estimate the measurement model. Consider a single latent variable (ξ1) with three observed indicators:

x1 = λx1 ξ1 + δ1        (16)
x2 = λx2 ξ1 + δ2        (17)
x3 = λx3 ξ1 + δ3        (18)

Choose a scaling (or reference) variable, say x1 for ξ1; this implies the corresponding loading is set to unity. This allows us to write:
x1 = ξ1 + δ1        (19)

Eqn (19) can be substituted into (17) to get:

x2 = λx2 x1 + u,   where u = δ2 - λx2 δ1        (20)

As before, (20) can be estimated via 2SLS using x3 as an instrument; x2 itself is not a valid instrument as it directly depends upon u. The same logic applies for the equation for x3: x1 is the regressor and x2 is the instrument. To show why x3 is a valid instrument for eqn (20) consider the following:

cov(x3, u) = cov(λx3 ξ1 + δ3, δ2 - λx2 δ1)        (use (3), (20))

= cov(λx3 ξ1, δ2) - cov(λx3 ξ1, λx2 δ1) + cov(δ3, δ2) - cov(δ3, λx2 δ1)        (use (5b))

= E(λx3 ξ1 δ2) - E(λx3 ξ1)E(δ2) - E(λx3 ξ1 λx2 δ1) + E(λx3 ξ1)E(λx2 δ1) + E(δ3 δ2) - E(δ3)E(δ2) - E(δ3 λx2 δ1) + E(δ3)E(λx2 δ1)        (use (5a))

Because of the assumptions of zero mean errors and independence between δ1, δ2 and δ3, and because ξ1 is independent of all the δ's, all these expressions are zero, and so x3 is a valid instrument for eqn (20). In general an equation has to be estimated for each non-scaling item: the specific non-scaling item is the dependent variable, the independent variable is the scaling item and all other non-scaling items are the instruments. This approach permits the estimation of the factor loadings, as the sketch below illustrates. One can extend this approach to estimate higher order factor models using 2SLS, see Bollen and Biesanz (2002).
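The following minimal sketch, in Python with numpy, estimates the two free loadings of (16)-(18) by 2SLS as just described. The loading values (0.9, 0.7) and error scales are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Hypothetical loadings for (16)-(18); x1 is the scaling item (loading = 1)
xi1 = rng.normal(size=n)
x1 = xi1 + rng.normal(scale=0.5, size=n)
x2 = 0.9 * xi1 + rng.normal(scale=0.5, size=n)
x3 = 0.7 * xi1 + rng.normal(scale=0.5, size=n)

def tsls(y, x, z):
    """2SLS with a single regressor and a single instrument (constant added)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    Z = np.column_stack([np.ones(n), z])
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(PzX.T @ X, PzX.T @ y)

# Eqn (20): x2 on x1 with x3 as the instrument estimates lambda_x2 (assumed 0.9)
print("lambda_x2:", tsls(x2, x1, x3)[1])
# Symmetrically: x3 on x1 with x2 as the instrument estimates lambda_x3 (assumed 0.7)
print("lambda_x3:", tsls(x3, x1, x2)[1])
```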
Diagnostic Testing and Goodness of Fit in 2SLS Models

Most of this discussion is based on Pesaran and Smith (1994) and Pesaran and Taylor (1999). Initially we need to make a distinction between various types of residuals and forecasts when using 2SLS/IV methods. These distinctions relate to the two alternate ways to produce 2SLS/IV estimates. Reconsider the basic representation:

y = α + βx + u        (21)

where z are valid instruments for x.


After estimating this model we get estimates for the parameters, α̂ and β̂, and using these we can formulate two types of predictions. Most programs (including SPSS and SHAZAM) allow you to save:

ŷ = α̂ + β̂x        (22)

These are termed 'fitted values' or IV predictions. These fitted values are typically used in the calculation of the goodness of fit R². A problem with this is that this value can be negative and therefore lacks its conventional interpretation. The other type of prediction is termed 'forecasts' or 2SLS predictions; these are formed as:

ỹ = α̂ + β̂x̂        (23)

here the predicted values for x from the 1st stage regression (x̂) are used instead of x. These forecasts (ỹ) are gained from the 2nd OLS regression in the 2SLS procedure rather than the direct IV option. The R² based on this 2nd OLS regression of y on x̂ is termed GR² (the generalised R²); this measure of goodness of fit does fall between zero and unity. Pesaran and Smith (1994) show, asymptotically, that GR² will be greatest for the correctly specified model and is valid for both nested and non-nested model comparisons. A similar distinction is made between IV and 2SLS residuals. The IV residuals correspond to the fitted values and are defined as:

û = y - α̂ - β̂x        (24)

These residuals are typically available as saved options in packages such as SPSS and SHAZAM. The 2SLS residuals correspond to the forecasts and are defined as:

ũ = y - α̂ - β̂x̂        (25)

The sketch below computes both sets of predictions and residuals and contrasts R² with GR².
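The following minimal sketch, in Python with numpy, computes quantities (22)-(25) for an invented data set and reports the conventional R² alongside GR²; all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500

# Hypothetical data for (21): x endogenous, z a valid instrument
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta = np.linalg.solve(PzX.T @ X, PzX.T @ y)

y_fit = X @ beta        # eqn (22): 'fitted values' / IV predictions (use actual x)
y_fcst = PzX @ beta     # eqn (23): 'forecasts' / 2SLS predictions (use x-hat)

u_iv = y - y_fit        # eqn (24): IV residuals
u_2sls = y - y_fcst     # eqn (25): 2SLS residuals

tss = (y - y.mean()) @ (y - y.mean())
r2 = 1 - u_iv @ u_iv / tss          # conventional R^2, can be negative
gr2 = 1 - u_2sls @ u_2sls / tss     # GR^2, lies between zero and one
print("R^2 = %.3f, GR^2 = %.3f" % (r2, gr2))
```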
Over-identifying Restrictions Test


Bollen (1996) proposes this as a test of whether the general model specification and the instruments are valid. It is similar to the chi-square test in maximum likelihood SEM. The test checks whether the extra variables which over-identify the model are valid for the specification. To conduct the test proceed as follows:

1. Regress û (the IV residuals) against all the instruments and get the R².
2. Using the R² from step 1, form the test statistic N*R² (N is the sample size).
3. The test statistic has a χ² (chi-square) distribution with degrees of freedom equal to the number of instruments less the number of RHS variables in the equation.
4. If the statistic is significant then a problem exists.


The logic underlying this test is that the residuals should be independent of the instruments, as required by the estimation procedure. A high R² in step 1 would indicate that this is not the case, leading to a rejection of the model specification or instruments. A sketch of the test is given below.
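The following minimal sketch, in Python with numpy/scipy, applies the steps above to the single equation latent model with instruments x2 and x3 (so the degrees of freedom are 2 - 1 = 1). All population values are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
n = 1000

# Hypothetical single equation latent model: y1 on x1 with instruments x2, x3
xi1 = rng.normal(size=n)
eta1 = 0.5 + 0.7 * xi1 + rng.normal(scale=0.5, size=n)
x1 = xi1 + rng.normal(scale=0.5, size=n)
x2 = 0.9 * xi1 + rng.normal(scale=0.5, size=n)
x3 = 0.8 * xi1 + rng.normal(scale=0.5, size=n)
y1 = eta1 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1])
Z = np.column_stack([np.ones(n), x2, x3])
PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta = np.linalg.solve(PzX.T @ X, PzX.T @ y1)
u_iv = y1 - X @ beta                            # IV residuals

# Step 1: regress the IV residuals on all the instruments and get R^2
e = u_iv - Z @ np.linalg.lstsq(Z, u_iv, rcond=None)[0]
r2 = 1 - e @ e / ((u_iv - u_iv.mean()) @ (u_iv - u_iv.mean()))

# Steps 2-4: N*R^2 is chi-square with (instruments - RHS variables) = 2 - 1 = 1 df
stat = n * r2
print("N*R^2 = %.2f, p-value = %.3f" % (stat, chi2.sf(stat, df=1)))
```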
RESET (Specification Error Test)

Pesaran and Taylor (1999) propose the RESET (FF2 in their paper) general specification error test for appropriate functional form and/or omitted variables. The test has good relative power (compared to other tests in their paper) and is robust to heteroscedasticity. To conduct the test proceed as follows:

1. Square the 2SLS forecasts from the estimated model: ỹ².
2. Run a 2SLS/IV regression of y against the original RHS variables and ỹ², using as instruments the original instruments and ỹ².
3. The t-statistic on the ỹ² variable is the test statistic.
4. If the test statistic is significant then there is a specification error.

The logic underlying this test is that, in a correctly specified model, the squared predictions from the model should not have any additional explanatory power in the original model. Effectively, the predictions from the model should not be able to explain any variation in the residuals of a correctly specified model. A sketch of the test is given below.
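The following minimal sketch, in Python with numpy/scipy, runs the four steps above for an invented over-identified model; the helper computes 2SLS estimates, their standard errors and the forecasts in one pass. All data values are made up for illustration.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(8)
n = 1000

# Hypothetical data for eqn (21): x endogenous, z1 and z2 valid instruments
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 0.7 * z1 + 0.7 * z2 + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

def tsls(y, X, Z):
    """2SLS estimates, standard errors and 2SLS forecasts (constant added)."""
    n = len(y)
    X = np.column_stack([np.ones(n)] + list(X))
    Z = np.column_stack([np.ones(n)] + list(Z))
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    beta = np.linalg.solve(PzX.T @ X, PzX.T @ y)
    resid = y - X @ beta
    cov = resid @ resid / (n - X.shape[1]) * np.linalg.inv(PzX.T @ X)
    return beta, np.sqrt(np.diag(cov)), PzX @ beta

# Step 1: square the 2SLS forecasts from the original model
beta, se, y_fcst = tsls(y, [x], [z1, z2])
y_fcst_sq = y_fcst ** 2

# Step 2: re-estimate with the squared forecast added as a regressor and an instrument
beta_a, se_a, _ = tsls(y, [x, y_fcst_sq], [z1, z2, y_fcst_sq])

# Steps 3-4: t statistic on the squared-forecast term; significance => misspecification
t_stat = beta_a[2] / se_a[2]
print("RESET t = %.2f, p = %.3f" % (t_stat, 2 * t_dist.sf(abs(t_stat), df=n - 3)))
```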
Heteroscedasticity Test

Pesaran and Taylor (1999) propose a heteroscedasticity (HET1) test. This test has good size and power properties (compared to other tests in their paper) and is robust to non-normality in cross-section data sets. To conduct the test proceed as follows:

1. Regress û² (the squared IV residuals) on ỹ² (the squared 2SLS forecasts).
2. The t-statistic on the ỹ² variable is the test statistic.
3. If the test statistic is significant then there is a heteroscedasticity problem.

A sketch of the test is given below.
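The following minimal sketch, in Python with numpy/scipy, generates invented data with a deliberately non-constant error variance and applies the three steps above.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(9)
n = 1000

# Hypothetical data for eqn (21) with heteroscedastic errors (variance grows with z)
z = rng.normal(size=n)
u = rng.normal(size=n) * (0.5 + 0.5 * z ** 2)     # non-constant error variance
x = 0.8 * z + 0.4 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta = np.linalg.solve(PzX.T @ X, PzX.T @ y)

u_iv_sq = (y - X @ beta) ** 2        # squared IV residuals
y_fcst_sq = (PzX @ beta) ** 2        # squared 2SLS forecasts

# Regress the squared IV residuals on the squared forecasts; test the slope
W = np.column_stack([np.ones(n), y_fcst_sq])
b = np.linalg.lstsq(W, u_iv_sq, rcond=None)[0]
e = u_iv_sq - W @ b
cov = e @ e / (n - 2) * np.linalg.inv(W.T @ W)
t_stat = b[1] / np.sqrt(cov[1, 1])
print("HET1 t = %.2f, p = %.3f" % (t_stat, 2 * t_dist.sf(abs(t_stat), df=n - 2)))
```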

Non-nested Testing

Oczkowski and Farrell (1998) and Oczkowski (2002) develop non-nested tests for competing models for the 2SLS latent variable estimator. In a series of Monte Carlo experiments the augmented encompassing test appears to have useful small sample properties and can be recommended for testing alternative models. Label the two competing models as H0 (the null) and H1 (the alternative):
H0: y = γ0 ξ0 + u0
H1: y = γ1 ξ1 + u1

where y is an observed dependent variable (it could be summated or factor-score based)



ξ0 is a latent independent variable with 3 indicators (x1, x2, x3)
ξ1 is a latent independent variable with 3 indicators (w1, w2, w3)
Assume the scaling variables are x1 and w1; the competing models in observed variables become:

H0: y = γ0 x1 + e0
H1: y = γ1 w1 + e1

with new composite error terms. Encompassing is the notion that a preferred model should be able to account for the salient features of rival models. In the context of non-nested testing this implies that H0 is validated if H0 can account for the salient features of H1. The augmented encompassing test requires the following steps:

(i) OLS regression: w1 on w2, w3 and save the predictions, call them p.
(ii) 2SLS regression: y on x1 and p (with x2, x3 and p as instruments), and test for the significance of the p variable.
(iii) If the p variable is significant then H1 rejects H0.

If there is more than one different independent variable between competing models then step (i) involves as many regressions as there are differing variables, producing multiple prediction variables. In step (ii) these prediction variables are then tested for their joint significance using an F test. The test described for the null H0 against the alternative H1 is only uni-directional, and there is a persuasive argument that one should not make inferences about the alternative model based on tests of the null model alone. As a consequence it is common practice to perform paired tests where the null and alternative models alternate. For example, consider the paired tests for competing models A and B: employ the procedure using (i) Model A as H0 and Model B as H1, and (ii) Model B as H0 and Model A as H1. One of four scenarios will result from the paired tests for comparing models A and B: (i) accept both A and B; (ii) reject both A and B; (iii) accept A and reject B; (iv) accept B and reject A. All the extensions discussed previously for 2SLS carry over to this non-nested testing procedure. A sketch of the uni-directional test is given below.
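The following minimal sketch, in Python with numpy, runs steps (i)-(iii) once with H0 as the null. The data-generating values are invented and the outcome is stated in a generated-under-H0 setting; the paired test simply repeats the procedure with the roles of the models reversed.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1000

# Hypothetical data: H0 (xi0, items x1-x3) is the true model; H1 uses xi1 (items w1-w3)
xi0 = rng.normal(size=n)
xi1 = 0.3 * xi0 + rng.normal(size=n)            # rival construct, correlated with xi0
x1 = xi0 + rng.normal(scale=0.5, size=n)
x2 = 0.9 * xi0 + rng.normal(scale=0.5, size=n)
x3 = 0.8 * xi0 + rng.normal(scale=0.5, size=n)
w1 = xi1 + rng.normal(scale=0.5, size=n)
w2 = 0.9 * xi1 + rng.normal(scale=0.5, size=n)
w3 = 0.8 * xi1 + rng.normal(scale=0.5, size=n)
y = 0.7 * xi0 + rng.normal(scale=0.5, size=n)   # generated under H0

def tsls(y, X, Z):
    """2SLS estimates and standard errors (constant added)."""
    n = len(y)
    X = np.column_stack([np.ones(n)] + list(X))
    Z = np.column_stack([np.ones(n)] + list(Z))
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    beta = np.linalg.solve(PzX.T @ X, PzX.T @ y)
    resid = y - X @ beta
    cov = resid @ resid / (n - X.shape[1]) * np.linalg.inv(PzX.T @ X)
    return beta, np.sqrt(np.diag(cov))

# Step (i): OLS of the rival scaling item w1 on w2, w3; save predictions p
W = np.column_stack([np.ones(n), w2, w3])
p = W @ np.linalg.lstsq(W, w1, rcond=None)[0]

# Step (ii): 2SLS of y on x1 and p, with x2, x3 and p as instruments; test p
beta, se = tsls(y, [x1, p], [x2, x3, p])
t_stat = beta[2] / se[2]
print("encompassing t on p = %.2f (insignificant => H0 not rejected)" % t_stat)
```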


References

Bollen, K.A. (1995) Structural Equation Models that are Nonlinear in Latent Variables: A Least-Squares Estimator. Ch 6, pp. 223-252 in P. Marsden (ed) Sociological Methodology, Blackwell Publishers: Oxford.

Bollen, K.A. (1996) An Alternative Two Stage Least Squares (2SLS) Estimator for Latent Variable Equations. Psychometrika, 61, 109-121.

Bollen, K.A. (2001) Two-Stage Least Squares and Latent Variable Models: Simultaneous Estimation and Robustness to Misspecifications. Ch 7, pp. 119-138 in R. Cudeck, S. du Toit and D. Sorbom (eds) Structural Equation Modeling: Present and Future: A Festschrift in Honor of Karl Joreskog, Scientific Software International: Lincolnwood.

Bollen, K.A. and Biesanz, J.C. (2002) A Note on a Two-Stage Least Squares Estimator for Higher-Order Factor Analyses. Sociological Methods and Research, 30(4), 568-579.

Bollen, K.A. and Paxton, P. (1998) Interactions of Latent Variables in Structural Equation Models. Structural Equation Modeling, 5, 267-293.

Farrell, M.A. (2000) Developing a Market-Oriented Learning Organisation. Australian Journal of Management, 25, 201-222.

Farrell, M.A. and Oczkowski, E. (2002) Are Market Orientation and Learning Orientation Necessary for Superior Organizational Performance? Journal of Market-Focused Management, 5, 197-217.

Foster, E.M. (1997) Instrumental Variables for Logistic Regression: An Illustration. Social Science Research, 26, 487-504.

Li, F. and Harmer, P. (1998) Modeling Interaction Effects: A Two-Stage Least Squares Example. Ch 7, pp. 153-166 in R.E. Schumacker and G.A. Marcoulides (eds) Interaction and Nonlinear Effects in Structural Equation Modeling, Lawrence Erlbaum: Mahwah.

Oczkowski, E. (2001) Hedonic Wine Price Functions and Measurement Error. Economic Record, 77, 374-382.

Oczkowski, E. (2002) Discriminating Between Measurement Scales using Non-nested Tests and 2SLS: Monte Carlo Evidence. Structural Equation Modeling, 9, 103-125.

Oczkowski, E. and Farrell, M. (1998) Discriminating between Measurement Scales using Non-Nested Tests and Two Stage Estimators: The Case of Market Orientation. International Journal of Research in Marketing, 15, 349-366.

Pesaran, M.H. and Smith, R.J. (1994) A Generalized R² Criterion for Regression Models Estimated by the Instrumental Variables Method. Econometrica, 62, 705-710.

Pesaran, M.H. and Taylor, L.W. (1999) Diagnostics for IV Regressions. Oxford Bulletin of Economics and Statistics, 61, 255-281.

Smith, A., Oczkowski, E., Noble, C. and Macklin, R. (2002) New Management Practices and Enterprise Training. Report for the National Centre for Vocational Education Research, Adelaide, Australia.

Smith, A., Oczkowski, E., Noble, C. and Macklin, R. (2003) Organisational Change and the Management of Training in Australian Enterprises. International Journal of Training and Development, 7(1), 2-15.

Stock, J.H. and Watson, M.W. (2003) Introduction to Econometrics, Addison Wesley: Boston.

