Hierarchical Multiple Regression in SPSS

This example shows you how to perform hierarchical multiple regression, a variant of the basic multiple regression procedure that allows you to specify a fixed order of entry for variables in order to control for the effects of covariates or to test the effects of certain predictors independent of the influence of others. As with standard or stepwise multiple regression, the basic command for this procedure is "Regression": "Linear."

In the main dialog box, input the dependent variable. Next, enter a set of predictor variables into this box. These are the variables that you want SPSS to put into the model first – generally the ones that you want to control for when testing the variables that you are truly interested in. For instance, in this analysis, looking at data from a substance abuse treatment program (the same data as in the stepwise multiple regression example), we want to predict the amount of treatment that a patient receives (hours of outpatient contact, or "opcontact"). I want to find out whether employment status predicts the number of sessions of treatment that someone receives, but I am concerned that other variables like socioeconomic status, age, or race/ethnicity might be associated with both employment status and hours of outpatient contact. To make sure that these variables do not explain away the entire association between employment status and hours of treatment, I put them into the model first. This ensures that they will get "credit" for any shared variability that they may have with the predictor that I am really interested in, employment status. Any observed effect of employment status can then be said to be "independent of" the effects of these variables that I have already controlled for.

The next step is to input the variable that I am really interested in, which is employment status. To put it into the model, click the "Next" button. The "Next" button clears out the list of independent variables from your first step, and lets you enter the next "block" of predictors. You will see all of the predictor variables that you previously entered disappear – don't panic! They are still in the model, just not on the current screen. This order-of-entry feature, where some predictors are considered before looking at others based on the order in which you enter them into the model, is what makes this a "hierarchical" regression procedure – some variables take precedence in the hierarchy over others. Often researchers will enter variables as related sets – for example, all demographic variables in a first step, all potentially confounding psychological variables in a second step, and then the variable that you are most interested in as a third step. This is not necessarily the only way to proceed – you could also enter each variable as a separate step if that seems more logical based on the design of your experiment. You could also hit "Next" again, if you wanted to enter a third (or fourth, or fifth, etc.) block of variables. SPSS also lets you specify a "Method" for each step – for example, if you wanted to know which demographic predictors were most effective, you could use a stepwise procedure in the first block, and then still enter employment status in the second block. To do this, you would just choose "Forward Stepwise" instead of "Enter" from the drop-down box labeled "Method." Now hit the "OK" button to run the analysis.
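The logic of the two blocks can also be sketched outside SPSS by fitting the two nested models and comparing their R-square values. Below is a minimal illustration in Python with NumPy; the data, variable names, and effect sizes are made-up stand-ins, not the example dataset:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary-least-squares fit of y on X (plus an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

# Made-up stand-in data: 62 cases, five "demographic" covariates,
# a binary "employment" predictor, and an outcome.
rng = np.random.default_rng(0)
n = 62
demographics = rng.normal(size=(n, 5))
employment = rng.integers(0, 2, size=n).astype(float)
hours = 10 + 2 * employment + rng.normal(scale=5, size=n)

# Block 1: covariates only.  Block 2: covariates plus the predictor of interest.
r2_block1 = r_squared(demographics, hours)
r2_block2 = r_squared(np.column_stack([demographics, employment]), hours)
print(f"R2 block 1 = {r2_block1:.3f}, R2 block 2 = {r2_block2:.3f}, "
      f"change = {r2_block2 - r2_block1:.3f}")
```

Because the Block 2 model contains everything in Block 1, the R-square change can never be negative; the question hierarchical regression asks is whether that change is large enough to matter.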

Using just the default "Enter" method, with all the variables in Block 1 (demographics) entered together, followed by employment status as a predictor in Block 2, we get the following output:

Variables Entered/Removed(b)

Model  Variables Entered                    Variables Removed  Method
1      educ, ethnic, marstat, sex, age(a)   .                  Enter
2      employst(a)                          .                  Enter

a. All requested variables entered.
b. Dependent Variable: opcontac

This table confirms which variables were entered in each step – the five demographic variables in step 1, and employment status in step 2.

The next table shows you the percent of variability in the dependent variable that can be accounted for by all the predictors together (that's the interpretation of R-square).

Model Summary

Model  R        R Square  Adjusted R Square  Std. Error of the Estimate
1      .348(a)  .121      .043               34.94411
2      .353(b)  .125      .029               35.18930

a. Predictors: (Constant), educ, ethnic, marstat, sex, age
b. Predictors: (Constant), educ, ethnic, marstat, sex, age, employst

The change in R-square is a way to evaluate how much predictive power was added to the model by the addition of another variable in step 2. In this case, the % of variability accounted for went up from 12.1% to 12.5% – not much of an increase.
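That change can be verified by hand from the figures reported here (12.1% vs. 12.5%, with 62 cases). A quick sketch in Python of the standard F test for an R-square change between nested models (SPSS will report this for you if you request "R squared change" under "Statistics"):

```python
# Hand-check of the R-square change between the two blocks.
# Values taken from the tutorial's Model Summary output.
r2_1, k1 = 0.121, 5         # Block 1: five demographic predictors
r2_2, k2 = 0.125, 6         # Block 2: demographics + employment status
n = 62                      # cases in the example dataset

delta_r2 = r2_2 - r2_1
# F test for the change, with (k2 - k1) and (n - k2 - 1) degrees of freedom.
f_change = (delta_r2 / (k2 - k1)) / ((1 - r2_2) / (n - k2 - 1))
print(f"R2 change = {delta_r2:.3f}, F(1, {n - k2 - 1}) = {f_change:.2f}")
```

This works out to an R-square change of .004 and F(1, 55) of about 0.25 – consistent with "not much of an increase."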

ANOVA(c)

Model             Sum of Squares  df  Mean Square  F      Sig.
1  Regression       9420.080       5   1884.016    1.543  .191(a)
   Residual        68381.339      56   1221.095
   Total           77801.419      61
2  Regression       9695.770       6   1615.962    1.305  .270(b)
   Residual        68105.649      55   1238.285
   Total           77801.419      61

a. Predictors: (Constant), educ, ethnic, marstat, sex, age
b. Predictors: (Constant), educ, ethnic, marstat, sex, age, employst
c. Dependent Variable: opcontac

This table confirms our suspicions: Neither the first model (demographic variables alone) nor the second model (demographics plus employment status) predicted scores on the DV to a statistically significant degree. If the first set of predictors was significant but the second wasn't, it would mean that employment did not have an effect above and beyond the effects of demographics.

Coefficients(a)

[Table of unstandardized coefficients (B and Std. Error), standardized coefficients (Beta), t, and Sig. for the constant and each predictor in Models 1 and 2.]

a. Dependent Variable: opcontac

This "coefficients" table would be useful if some of our predictors were statistically significant. (Look in the "Sig." column for p-values, which need to be below .05 to say that you have a statistically significant result.) In this case, none of the predictors are significant. If the predictors had been statistically significant, these betas (B) are the weights that you could multiply each individual person's scores on the independent variables by, in order to obtain that individual's predicted score on the dependent variable. Unfortunately, since they weren't, the coefficients aren't interpretable in this case.
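To make that prediction arithmetic concrete, here is a short Python sketch of how the unstandardized B weights would be used. The weights and the person's scores below are made up purely for illustration – they are not the (non-significant) coefficients from this analysis:

```python
# Predicted DV score from unstandardized regression coefficients:
#   y_hat = B0 + B1*x1 + B2*x2 + ... + Bk*xk
constant = -12.5                                      # hypothetical B for (Constant)
b = {"age": 0.3, "sex": 2.1, "ethnic": -1.4,
     "marstat": 0.9, "educ": 1.7, "employst": 5.2}    # hypothetical B weights
person = {"age": 40, "sex": 1, "ethnic": 2,
          "marstat": 1, "educ": 12, "employst": 1}    # one person's scores

y_hat = constant + sum(b[v] * person[v] for v in b)
print(f"predicted hours of outpatient contact: {y_hat:.1f}")
```

Each predictor contributes its B weight times that person's score, and the constant is added once – which is why the coefficients are only worth interpreting when they are statistically distinguishable from zero.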
