DOI: 10.1037/0021-9010.75.3.310


Journal of Applied Psychology
1990, Vol. 75, No. 3, 310-314
Copyright 1990 by the American Psychological Association, Inc.
0021-9010/90/$00.75

Computer Administration of Questions:
More Desirable or More Social Desirability?

Gary J. Lautenschlager and Vicki L. Flaherty
The University of Georgia

Investigated the effect of computer vs. paper-and-pencil administration on 2 components of socially desirable responding (SDR): impression management (IM) and self-deception (SD). Subjects' degree of anonymity was also manipulated. Independent variables were expected to affect only IM scores, with the computer anonymous condition resulting in the least amount of IM. Results indicated that IM and SD scores were influenced by main effects of both administration and anonymity manipulations. In contrast to previous research, computer administration produced the greatest amount of IM. The findings are discussed relative to results reported by Martin and Nagao (1989) on the impact of computer administration of interviews and recent research on the dimensions of SDR.

This article is based in part on Vicki L. Flaherty's master's thesis, completed at the University of Georgia, May 1988.

Portions of this article were presented at the 4th annual meeting of the Society for Industrial-Organizational Psychology, Boston, April 1989.

We wish to thank Del Paulhus, Bill Graziano, Jim Hogan, and three anonymous reviewers for helpful comments regarding this research. We also thank Kim Wilson, Natalie Milton, Tim McCall, Chin-Chi Lu, and Barbara Gore for their help with data collection.

Correspondence concerning this article should be addressed to Gary J. Lautenschlager, Department of Psychology, The University of Georgia, Athens, Georgia 30602.

With the widespread popularity of microcomputers today, computer administration of tests has become very common (Lee, 1986), computerized selection and training systems are increasing in use (Feuer, 1986), and, in clinical settings, computerized personality assessment and diagnostic interviewing are also commonplace (Butcher, 1987). Practitioners have been quick to adopt computer administration of tests because of several advantages, including increased objectivity, reliability, and cost effectiveness; however, research examining possible response differences when computerized testing is used in place of paper-and-pencil administration has been limited (Greaud & Green, 1986; Lee, Moreno, & Sympson, 1986).

Socially desirable responding is a major concern in personality assessment and organizational research (Zerbe & Paulhus, 1987). However, explorations on socially desirable responding have not been without critics, or at least those who believe that clarification is needed regarding the various meanings used for this construct label (R. Hogan & Nicholson, 1988; Nunnally, 1978). Many researchers view social desirability contamination as a potential problem whenever self-report inventories are used to assess emotional, attitudinal, or other personal characteristics. Items on such inventories have one answer that is recognizably more socially desirable than the others, and a respondent may be motivated to choose answers that create a favorable impression, as when applying for a job (Anastasi, 1982).

Ostensible Benefits of Computer Administration

Researchers have hypothesized that computer administration of noncognitive instruments containing sensitive and personal items may reduce socially desirable responding, as it might offer greater anonymity and might be perceived as impersonal and nonjudgmental (Evan & Miller, 1969; Koson, Kitchen, Kochen, & Stodolosky, 1970; Martin & Nagao, 1989; Rezmovic, 1977; Smith, 1963). For example, for very personal and embarrassing material, an individual might respond less truthfully to another person but more freely to an impartial machine.

Additionally, the increased standardization that computer administration affords may also reduce socially desirable responding (Elwood, 1969; Feuer, 1986; Hedl, O'Neil, & Hansen, 1973). That is, socially desirable responding may be reduced by controlling the respondent's ability to preview, review, or skip items and change responses such that exposure to items is consistent for all respondents. The greater structure imposed on responding by increased standardization may operate to forestall respondent dissimulation.

Recent work by Martin and Nagao (1989) has examined how simulated applicant interview responses were influenced by different administration modes, including paper-and-pencil and computer administrations of the interview. Two separate methods were used to measure socially desirable responding as a dependent variable, namely the Marlowe-Crowne Social Desirability Scale and several forms of verifiable self-reports. Martin and Nagao found that the least amount of socially desirable responding occurred for the computer interview format and the most occurred for the face-to-face interview format. A potential explanation for their findings focused on voluntary statements provided by the participants that suggested the operation of a "big brother effect." The findings were generally consistent with the expectations cited previously and lend support to those benefits of the computerized interview format, within some boundary conditions noted by Martin and Nagao (1989).

The Dimensionality of Socially Desirable Responding

Recently, Paulhus (1984, 1986) examined the dimensionality of the socially desirable responding construct and determined that available measures of socially desirable responding, such as the Marlowe-Crowne scale, may confound two distinct forms of socially desirable responding bias. These two structures have been identified and labeled as impression management and self-deception. Impression management refers to lying: the conscious, purposeful deception of others. Impression management is situation specific and, therefore, should be heavily influenced by contextual factors. Indeed, Paulhus (1984) found that individuals use significantly more impression management in a group testing situation under public (identified) conditions than under anonymous conditions.

On the other hand, self-deception refers to a lack of self-knowledge or protection of self-beliefs, including maintenance of self-esteem. In effect, a subject actually believes in his or her positive self-reports in contradiction to objective reality. Self-deception is viewed as a stable personality trait, and thus should not be easily influenced by contextual factors. Paulhus (1984) reported that self-deception may show slight but nonsignificant increases in group testing situations under public (identified) conditions. These two forms of response bias are consistent with distinctions that have been presented by Nunnally (1978) regarding a respondent's frankness versus level of self-knowledge.

Zerbe and Paulhus (1987) have raised concerns regarding the use of methods to control socially desirable responding in organizational research. Because central dimensions of personality, such as adjustment and achievement motivation, also involve a component of self-deception, any attempt to control self-deception in measuring these personality constructs may undermine predictive power. Impression management, however, may be viewed as a potential contaminating factor, and it may be desirable to control this factor (see Zerbe & Paulhus, 1987, for possible exceptions).

The Influence of Anonymity

In real-world settings, such as in selection situations, a job applicant's identity is known and the applicant stands to gain by responding in a favorable manner. This may be especially true for questions of a nonverifiable nature. On the other hand, many research studies are conducted in laboratory settings where subjects are assured of their anonymity and have little or no incentive for behaving in any certain fashion. Previous research indicates that observed levels of socially desirable responding vary as a function of anonymity level. That is, as anonymity seems more assured, generally less socially desirable responding is present (Bradburn & Sudman, 1979; Nederhof, 1984; Wiseman, 1972).

The Present Study

We focused on the use of different questionnaire administration modes under conditions where the respondents were either anonymous or identified. These factors should produce testing conditions similar to those that exist in organizational settings for various types of selection and organizational research (Zerbe & Paulhus, 1987).

Impression management (IM) and self-deception (SD) were assessed by means of Paulhus's (1986) Balanced Inventory of Desirable Responding (BIDR). Differences in the effects of administration mode and anonymity level on socially desirable responding were predicted for the BIDR-IM and BIDR-SD measures as follows:

1. Significantly lower BIDR-IM scores would occur in the computer administration mode than in the paper-and-pencil administration modes.
2. Significantly lower BIDR-IM scores would occur in the anonymous conditions than in the identified conditions.
3. No significant differences in BIDR-SD scores would occur as a result of the administration mode or anonymity manipulations.

Method

Subjects

Psychology students (N = 241) taking an introductory psychology course at the University of Georgia participated in the study. All subjects received credit toward fulfillment of a course requirement. Female students composed 70% of the sample, and the average age was 18.8 years.

Materials

Paulhus's (1986) Balanced Inventory of Desirable Responding (BIDR) was used to measure the self-deception and impression management variables. The instrument comprises 20 self-deception (BIDR-SD) and 20 impression management (BIDR-IM) items, with responses made on a 7-point scale ranging from not true (1) to very true (7). The BIDR-IM items refer to desirable but statistically infrequent behavior and undesirable but common behavior. A sample item from this measure is "I have taken sick leave from work or school even though I wasn't really sick." The BIDR-SD items refer to psychologically threatening thoughts and feelings that are assumed to be experienced universally. A sample item is "I could easily quit any of my bad habits if I wanted to." In scoring the BIDR, equal numbers of items were keyed positively and negatively. BIDR-IM and BIDR-SD scores were obtained by reversing the ratings on the negatively keyed items and then summing responses. Higher total scores represent greater amounts of impression management and self-deception, respectively.

Procedure

The two independent variables were administration mode and anonymity level. The three levels of administration mode employed were group paper-and-pencil, individual paper-and-pencil, and individual computer. The two levels of anonymity were anonymous and identified. Subjects were randomly assigned to one of six treatment conditions as follows: (a) group paper-and-pencil anonymous (n = 39), (b) group paper-and-pencil identified (n = 40), (c) individual paper-and-pencil anonymous (n = 40), (d) individual paper-and-pencil identified (n = 38), (e) computer anonymous (n = 43), and (f) computer identified (n = 41).

Subjects were informed that the experiment was titled "Questionnaire Use in Personnel Selection" and were told that they would be completing a series of questionnaires. Prior to the completion of the BIDR, subjects completed two sets of preexperiment questions designed to assess their prior computer experience and general attitude toward computers. The preexperiment questions were completed by all subjects using the same paper-and-pencil format across all conditions.

In the paper-and-pencil administration modes, subjects were free to preview, review, or skip items, as well as change responses. For the group paper-and-pencil mode, subjects were tested in groups of 18 to 24 in a 300-sq-ft group testing room. For the individual paper-and-pencil administrations, subjects were tested at a table in a 150-sq-ft furnished office.

In the computer administration mode, the items were presented on a computer monitor, and subjects responded using a standard IBM-compatible computer keyboard. Subjects in the computer conditions could not preview, review, or skip items or change responses to items. Instructions relating to the computer operating procedures were given in the computer administration mode. Otherwise, instructions to subjects were the same in all six conditions. No time limit for completion of the BIDR was imposed in any condition.

In the identified conditions subjects were asked to provide identifying information (name, social security number, home address, and telephone number) and were told that, depending on their responses to the questionnaire, they might qualify to participate in future research (cf. Paulhus, 1984, p. 605). We assumed that the likelihood of future contact implied by this statement would strengthen the press for impression management in the identified conditions. Subjects in the anonymous conditions were not asked to provide any identifying information and were explicitly assured of their anonymity.

Upon completion of the BIDR, subjects completed a set of postexperiment questions that assessed their reactions to the experiment and provided manipulation checks. The postexperiment questions were completed using the same paper-and-pencil format in all conditions.

Results

Responses to the preexperiment questionnaire indicated that subjects did not differ across conditions in their levels of prior computer experience or general attitude toward computers. They had moderate levels of computer experience and moderately positive attitudes toward computers.

The conceptual distinction between the two dependent measures, as provided by Paulhus (1986), argued against considering them as a joint system of variables (Huberty & Morris, 1989). Collapsing across all conditions, internal consistency measures were computed for the BIDR-IM and BIDR-SD items; values of coefficient alpha were .72 and .68, respectively. The correlation between the two subscales, r = .43, was moderate. Therefore, a separate 3 × 2 analysis of variance (ANOVA) was conducted for each dependent variable rather than a single multivariate analysis.

The results using the BIDR-IM scores revealed a significant main effect of administration mode, F(2, 235) = 3.32, p < .04, a significant main effect of anonymity level, F(1, 235) = 5.35, p < .02, and no interaction effect, F(2, 235) = 1.80, ns. There was a significant main effect of administration mode for the BIDR-SD scores, F(2, 235) = 14.82, p < .005, a main effect for anonymity level, F(1, 235) = 6.06, p < .02, and no significant interaction effect, F(2, 235) = 0.43, ns.

Table 1 presents means and standard deviations for the BIDR-IM and BIDR-SD scores by experimental condition. Bonferroni adjusted tests (effect-wide alpha = .05) for the administration mode effect on BIDR-IM scores indicated that the computer administration mode was significantly different from both paper-and-pencil administration modes, but opposite the hypothesized direction. BIDR-IM scores were higher for the computer administration mode than for either paper-and-pencil administration mode. The main effect for anonymity level was in the predicted direction for the BIDR-IM scores.

Table 1
Mean Impression Management and Self-Deception Scores Broken Down by Experimental Condition

                                        Anonymity level
Administration mode            Anonymous   Identified   Overall

Impression management
Computer
  M                              84.93       86.61       85.75
  SD                             12.16       13.18       12.62
Individual paper-and-pencil
  M                              76.39       85.49       80.83
  SD                             13.51       18.72       16.79
Group paper-and-pencil
  M                              79.90       81.70       80.81
  SD                             13.21       13.01       13.05
Overall
  M                              80.52       84.60       82.54
  SD                             13.33       15.14       14.37

Self-deception
Computer
  M                              95.37       99.37       97.32
  SD                             11.03       12.74       12.00
Individual paper-and-pencil
  M                              92.62       98.76       95.62
  SD                             12.67       12.08       12.68
Group paper-and-pencil
  M                              85.80       88.08       86.95
  SD                             13.99       15.36       14.65
Overall
  M                              91.41       95.38       93.37
  SD                             13.10       14.36       13.85

Note. Higher values indicate greater socially desirable responding.

No administration mode effects were expected for the BIDR-SD scores, yet scores were significantly higher in the computer administration mode than in the group paper-and-pencil condition. Scores were not significantly different when comparing the computer administration mode with the individual paper-and-pencil condition. The main effect of anonymity level on the BIDR-SD scores was in the same direction as for the BIDR-IM scores, with lower scores in the anonymous condition.

Responses to the postexperiment questionnaire revealed that subjects in the anonymous conditions felt more anonymous than subjects in the identified conditions, F(1, 239) = 22.83, p < .001, indicating that the anonymity manipulation was successful. Subjects in both the computer and individual paper-and-pencil administration modes indicated a greater concern that the experimenter might evaluate their responses immediately upon their completion of the BIDR than did subjects in the group paper-and-pencil administration mode, F(1, 239) = 16.81, p < .001.

A separate questionnaire was used only in the computer administration conditions to assess unique aspects of that mode.
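As an aside, the two scoring and reliability computations reported in the Results above (reverse-keyed subscale sums and coefficient alpha) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' code: the item key and the responses shown are invented, and the actual BIDR scoring key is not reproduced here.

```python
import statistics

def score_bidr_subscale(ratings, reverse_keyed):
    """Score one 20-item BIDR subscale: items flagged as reverse keyed
    are flipped on the 7-point scale (rating -> 8 - rating) before
    summing, so higher totals mean greater socially desirable responding."""
    return sum(8 - r if rev else r for r, rev in zip(ratings, reverse_keyed))

def coefficient_alpha(items):
    """Cronbach's coefficient alpha. `items` is a list of item-score
    lists, one list per item, aligned across the same respondents.
    Population variances are used consistently throughout."""
    k = len(items)
    item_var = sum(statistics.pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Hypothetical respondent answering "very true" (7) on every item of a
# subscale whose last 10 items are (arbitrarily) reverse keyed:
key = [False] * 10 + [True] * 10
print(score_bidr_subscale([7] * 20, key))  # 10*7 + 10*1 = 80
```

With each respondent's two subscale totals computed this way, the 3 × 2 ANOVAs reported above are ordinary two-way analyses over the six cells, and the reverse scoring is what makes "higher = more desirable responding" hold for both subscales.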
The only difference on these items was that subjects in the anonymous condition felt more strongly that giving their responses to the computer helped preserve their anonymity, F(1, 239) = 4.40, p < .04, and this was consistent with findings from the anonymity manipulation check across all conditions. However, subjects in the computer administration mode reported very little to moderate concern (M = 2.4, SD = 1.0 on a 5-point scale) with the computer's ability to immediately evaluate their BIDR responses once the responses had been entered into the computer, and there were no differences between anonymity conditions within the computer administration condition on this item. This finding suggests that the subjects in the computer conditions were not overly concerned about the "big brother effect." In general, these subjects indicated that the computer administration was moderately to extremely enjoyable. They also reported very little concern with pressing the wrong key and indicated that they did not find the computer difficult, confusing, or frightening to use.

Discussion

The findings of the present study run counter to results obtained by Martin and Nagao (1989) and to a widely held belief that computer administration of questionnaires should tend to reduce socially desirable responding. Greater levels of impression management were observed in the computer administration mode than in either of the paper-and-pencil administration modes. Also contrary to previous research (Evan & Miller, 1969; Koson et al., 1970; Rezmovic, 1977; Smith, 1963), the computer administration mode did not appear to enhance respondents' feelings of anonymity. It is interesting to note here that had one-tailed directional tests been strictly adhered to, the conclusions drawn would have indicated no significant differences for administration modes. The alternative hypothesis for BIDR-IM scores was based on the ostensive impartiality that has generally been assumed of the computer administration condition.

Potential reasons for the divergence of the present results from those of Martin and Nagao (1989) relate to the use of different procedures and questionnaires. In the present study, subjects were informed that the experiment involved questionnaire use in personnel selection, which was vague compared to the more focused role-playing situation used by Martin and Nagao (1989). Their procedures were unclear as to whether respondents could preview, review, or skip items or change responses in the computer condition. In the present study respondents assigned to the computer administration mode did not have available any of these options. Yet another difference was that the present study used the BIDR, whereas Martin and Nagao (1989) used the Marlowe-Crowne scale to measure social desirability.

Neither the administration mode nor anonymity manipulations were expected to influence self-deception, yet greater levels of self-deception were observed in the computer and individual paper-and-pencil administration modes than in the group paper-and-pencil administration mode. Additionally, greater levels of self-deception were observed in the identified conditions.

The similarity of findings for the BIDR-IM and BIDR-SD measures would seem to indicate that the scales are redundant. However, the moderate correlation between the scales suggests they tap different constructs. This alone does not explain why the BIDR-SD score was affected by the experimental manipulations. Clearly, the BIDR-SD construct could stand further clarification. To this end, Paulhus and Reid (in press) have indicated that a large number of the BIDR-SD items on the version of the BIDR used in the present study were confounded with impression management and other traits. A rescoring of the smaller subset of the 10 BIDR-SD items that better represent the self-deception construct resulted in no significant differences for administration mode, anonymity condition, or their interaction effects, and this new scale only correlated r = .24 with the BIDR-IM scores. The most recent version of the BIDR is presented in Paulhus (in press) and contains a redefined BIDR-SD subscale.

Results from a study by Allred and Harris (1988) suggest that the structured nature of the computer administration mode used in the present study may account for the higher BIDR-IM scores. Allred and Harris compared the equivalence of computer and paper-and-pencil administrations of Gough's Adjective Checklist. Subjects in the computer condition were required to respond to each item before the next item appeared on the screen, whereas subjects in the paper-and-pencil condition were able to visually scan the page before making responses. The subjects endorsed significantly more favorable adjectives in the computer administration condition. Without visible feedback to provide a check and balance regarding how much one has already distorted responses in certain situations, one may be more prone to further distort on any given item. Thus, being primed for a situation requiring impression management and not being able to scan over either the answers to questions already given or the questions yet to be asked may result in greater consistency in managing one's impression. On the other hand, when the context does not evoke impression management, the computer administration of questionnaires may reduce other undesirable response tendencies. More research focused on such factors is clearly needed to help define the most beneficial combinations of computerized questionnaire administration with types of organizational research contexts.

If computer administration leads to increases in the impression management component of observed questionnaire scores, it is likely that this will have implications for the psychometric properties of the questionnaire(s) used. One reviewer speculated that increased motivation to appear socially desirable would probably lead to increased interitem correlations as well as increased interscale correlations, if the present results are generalizable. This would suggest a source of common variance that might underlie data gathered by use of a computer, affecting the structure of a given measure and potentially increasing correlations among a set of measures. Whether to consider such changes in the impression management component of scores as a contaminating influence or an important component of a given measure is still a matter of some debate (R. Hogan & Nicholson, 1988; Paulhus, 1986), and both the context and purposes of the measurement would seem important considerations in making this distinction.

On the practical side, the ability to manage one's impression may be quite valuable as a variable in its own right, especially in contexts where either social influence or conformity is important, such as in sales settings (cf. Kirchner, 1962). For example, J. B. Hogan (1988) found that a social desirability score obtained from biographical data items used to select salespersons was predictive of a turnover criterion. Computer administration of those same questions might produce better discrimination of individual differences in impression management, perhaps enhancing the validity of inferences made from those scores. In other selection contexts, increases in the impression management component of questionnaire scores may be considered a contaminant. This would seem important for questionnaires that contain personal or sensitive items (e.g., biodata or honesty tests) where job candidates are not anonymous and are likely to be motivated to enhance how they are perceived. Impression management would operate at cross-purposes with any test intended to measure honesty. In addition, the administration of job satisfaction, organizational climate, and other attitude questionnaires in organizational research may be adversely affected when converted from paper-and-pencil format. Increases due to impression management on such diagnostic measures may produce inaccurate and potentially misleading results.

At the same time, additional basic research is needed to investigate the effects of using computer administration in place of traditional paper-and-pencil or face-to-face data collection formats. Intraindividual differences might be examined using within-subjects designs while alternating administration modes. In addition to examining changes in the rank ordering of individuals' scores due to administration mode differences, consideration should be given to the use of intraindividual measures of response changes where feasible (cf. Lautenschlager, 1986). The goal here would be to investigate whether and how administration mode differences affect respondents and, in turn, influence the internal structure of a measure. Any such influences will have implications for validity.

To conclude, the long-accepted belief that computer administration of sensitive questionnaire items would generally tend to reduce socially desirable responding has been called into question by the present findings. Thus, computer questionnaire administration may or may not be a more desirable method of administration, depending on whether or not socially desirable responding is considered a contaminant or a focal variable of interest. Relative comparisons of the internal structure and the validity of computer administration of measures vis-à-vis other administration modes are clearly needed.

References

Allred, L. J., & Harris, W. G. (1988, March). Computerized vs. paper-and-pencil forms of the Adjective Checklist. Paper presented at the annual Southeastern Psychological Association Convention, New Orleans.
Anastasi, A. (1982). Psychological testing. New York: Macmillan.
Bradburn, N. M., & Sudman, S. (1979). Improving interview methods and questionnaire design. San Francisco: Jossey-Bass.
Butcher, J. N. (Ed.). (1987). Computerized psychological assessment. New York: Basic Books.
Elwood, D. L. (1969). Automation of psychological testing. American Psychologist, 24, 287-289.
Evan, W. M., & Miller, J. R., III. (1969). Differential effects on response bias of computer vs. conventional administration of a social science questionnaire: An exploratory methodological experiment. Behavioral Science, 14, 216-227.
Feuer, D. (1986). Computerized testing: A revolution in the making. Training, 23, 80-86.
Greaud, V. A., & Green, B. F. (1986). Equivalence of conventional and computer presentation of speed tests. Applied Psychological Measurement, 10(1), 23-34.
Hedl, J. J., Jr., O'Neil, H. F., Jr., & Hansen, D. N. (1973). Affective reactions toward computer-based intelligence testing. Journal of Consulting and Clinical Psychology, 40, 217-222.
Hogan, J. B. (1988). The influence of socially desirable responding on biographical data of applicant versus incumbent samples: Implications for predictive and concurrent research designs. Unpublished doctoral dissertation, University of Georgia, Athens.
Hogan, R., & Nicholson, R. A. (1988). The meaning of personality test scores. American Psychologist, 43, 621-626.
Huberty, C. J., & Morris, J. D. (1989). Multivariate analysis versus multiple univariate analyses. Psychological Bulletin, 105, 302-308.
Kirchner, W. K. (1962). "Real life" faking on the Edwards Personal Preference Schedule by sales applicants. Journal of Applied Psychology, 46, 128-130.
Koson, D., Kitchen, C., Kochen, M., & Stodolosky, D. (1970). Psychological testing by computer: Effect on response bias. Educational and Psychological Measurement, 30, 803-810.
Lautenschlager, G. J. (1986). Within-subject measures for the assessment of individual differences in faking. Educational and Psychological Measurement, 46, 309-316.
Lee, J. A. (1986). The effects of past computer experience on computerized aptitude test performance. Educational and Psychological Measurement, 46, 727-733.
Lee, J. A., Moreno, K. E., & Sympson, J. B. (1986). The effects of mode of test administration on test performance. Educational and Psychological Measurement, 46, 467-474.
Martin, C. L., & Nagao, D. H. (1989). Some effects of computerized interviewing on job applicant responses. Journal of Applied Psychology, 74, 72-80.
Nederhof, A. J. (1984). Visibility of response as a mediating factor in equity research. Journal of Social Psychology, 122, 211-215.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46, 598-609.
Paulhus, D. L. (1986). Self-deception and impression management in test responses. In A. Angleitner & J. S. Wiggins (Eds.), Personality assessment via questionnaires (pp. 143-165). Berlin: Springer-Verlag.
Paulhus, D. L. (in press). Measures of personality and social psychological attitudes. In J. P. Robinson, P. R. Shaver, & L. Wrightsman (Eds.), Measures of social-psychological attitudes. New York: Academic Press.
Paulhus, D. L., & Reid, D. B. (in press). Attribution and denial in socially desirable responding. Journal of Personality and Social Psychology.
Rezmovic, V. (1977). The effects of computerized experimentation on response variance. Behavior Research Methods and Instrumentation, 9, 144-147.
Smith, R. E. (1963). Examination by computer. Behavioral Science, 8, 76-79.
Wiseman, F. (1972). Methodological bias in public opinion surveys. Public Opinion Quarterly, 36, 105-108.
Zerbe, W. J., & Paulhus, D. L. (1987). Socially desirable responding in organizational behavior: A reconception. Academy of Management Review, 12, 250-264.

Received May 1, 1989
Revision received January 2, 1990
Accepted January 2, 1990
