
What's Right With Conjoint Analysis

Marketing Research, Spring 2002


Green and Krieger defend a maligned, but vital, research technique.

Uncharacteristic of his usual upbeat and jovial manner, Gibson seems downright unhappy about conjoint analysis and the direction in which it's headed. In the Winter 2001 issue of this magazine, he wonders why self-explicated preferences aren't accorded the respect they deserve ("What's Wrong With Conjoint Analysis?"). He finds that full-profile conjoint models are woefully inadequate in capturing the large number of attributes and levels that characterize such products as computer printers, gasoline stations, chewing gums, and cake mixes. He longs for a return to the "good old days" when simple compositional models, involving desirability and importance ratings, were all that mattered.

Sad to say, the chances of most researchers returning to self-explicated (or compositional) preference modeling are little short of zero. Ironically, the most recent developments in choice modeling dispense with self-explicated data entirely. Discrete choice modelers are concerned with buyers' choices, as derived from their responses to partial or full profiles in the context of choice sets. Recent research in multinomial logit modeling places emphasis on the selections that respondents make among profiles in choice sets, not attribute-level desirabilities and attribute importances. Respondents typically choose one item from the proffered set. Alternatively, they may be asked to divide a fixed number of points, say 100, across the options to reflect their strength of preference. This approach is strictly decompositional.

Advocates of choice-based modeling now include researchers from economics, operations research, statistics, psychometrics, and engineering. Logit modeling of decision choices is on the rise, currently benefiting from recently available software (e.g., SAS) that can fit the multinomial logit, heteroscedastic extreme value, nested logit, mixed logit, or multinomial probit models. In short, the trend is moving further and further away from self-explicated preference models.
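The multinomial logit models mentioned above share a common core: each profile in a choice set is assigned a total utility, and choice probabilities follow a softmax rule over those utilities. A minimal sketch in Python, with hypothetical utility values (the article names SAS as the fitting software; nothing here reproduces its estimation routines):

```python
import math

def mnl_choice_probs(utilities):
    """Multinomial logit choice rule: softmax over option utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three profiles in a choice set, with hypothetical total utilities.
probs = mnl_choice_probs([1.2, 0.4, -0.3])
print([round(p, 3) for p in probs])  # probabilities sum to 1
```

The option with the highest utility gets the largest, but not a certain, choice probability; that graded response is what distinguishes logit modeling from a deterministic first-choice rule.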
With the advent of hierarchical Bayes, choice modelers seem to be capturing the high ground, now that individual-level parameters can be estimated. Gibson raises two basic (if plaintive) questions: Why go beyond the use of simple, self-explicated models that have served us so well? What's wrong with simply collecting potential buyers' attribute-level desirabilities and attribute importances and then constructing self-explicated models? Indeed, Gibson answers his own questions by asserting that this is sufficient insofar as he is concerned.
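The self-explicated model Gibson favors is compositional: an option's overall utility is simply the sum of stated attribute-level desirabilities, each weighted by the attribute's stated importance. A minimal sketch with hypothetical ratings (the attribute names and scales are ours, not from the article):

```python
def self_explicated_utility(profile, desirability, importance):
    """Compositional model: sum of stated attribute-level desirabilities,
    each weighted by the attribute's stated importance."""
    return sum(importance[attr] * desirability[attr][level]
               for attr, level in profile.items())

# Hypothetical 0-10 desirability ratings and normalized importance weights.
desirability = {
    "warranty": {"1-year": 2, "4-year": 6, "7-year": 9},
    "price": {"low": 9, "medium": 5, "high": 1},
}
importance = {"warranty": 0.3, "price": 0.7}

u = self_explicated_utility({"warranty": "7-year", "price": "low"},
                            desirability, importance)
print(u)  # weighted sum: 0.3*9 + 0.7*9 = 9.0
```

Everything in this model comes from direct questioning; no profile evaluations are collected, which is exactly the economy Gibson prizes and the limitation the authors go on to criticize.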


Paul E. Green and Abba M. Krieger



In contrast to Larry Gibson's bleak view of the value of conjoint analysis, we believe that this methodology and its successor models (e.g., discrete choice modeling) provide managers with useful information that goes beyond the self-explicated models Gibson favors. We also believe the addition of full, or partial, profile information adds markedly to self-explicated data alone, without denigrating the role that such information can provide. In short, we believe more information is better in estimating reliable part worths.

UNRELIABILITY OF SELF-EXPLICATED MODELS

If only the simple, self-explicated model were all we needed. Unfortunately, there are problems. In a 1964 landmark essay, Roger Shepard described individuals' dramatically limited abilities to judge the relative importance of large sets of attributes (as typically used in Gibson's studies) with high reliability. In 1962, Pollak reported that, although respondents believed their stated (subjective) importances were fairly evenly distributed, their observed importances (as found by multiple regression) were typically concentrated on a much smaller number of attributes. In effect, Pollak's research found that subjective importance weights tended to err in the direction of ascribing "too much importance to the less important variables." As Shepard suggests, individuals may remember that at one time each attribute was important (even though it then gets lost in the shuffle of the present decision context).

Of course, if the levels of each attribute are monotonic (e.g., more of the attribute is always preferred to less), then the application of importance weights may not matter much because the composite orderings of options tend to be highly correlated. For example, if the overall value of a house increases with improvements to any of its component factors (location, price, resale value, nearness to shopping), the selection of alternative importance weight assignments may not highly affect the choice outcome.

HYBRID CONJOINT MODELS

It's unfortunate for us that Gibson doesn't spend much (if any) time on hybrid conjoint models. Hybrid conjoint models embrace the idea that the collection of self-explicated data on attribute-level desirabilities and attribute importances is a good thing. But so are full-profile evaluations. Sawtooth's Adaptive Conjoint Analysis is also a hybrid model that collects attribute-level desirabilities, attribute importances, and graded pairwise preferences on sets of partial profiles.
Both Green and Krieger's individual-based conjoint models and Sawtooth's ACA model collect self-explicated data. But this is only part of the story. Both models include a decompositional element, a procedure that exposes respondents to either full or partial profiles. All three sources of input data (desirabilities, importances, profile judgments) go into their models.

Green and Krieger's individualized conjoint models are related to three research questions: How should we combine multiple sources of conjoint data? Should the individual's profile data be used to update both self-explicated desirabilities and importances or only the importances? When we estimate part worths for an individual, what use can be made of other individuals' responses to the same data?

Four models of varying generality were developed to estimate individual part worths:

1. Updates both self-explicated desirabilities and importances with full-profile response information.
2. Updates importance data only; maintains the ordering of the self-explicated desirabilities.
3. Modifies importance data alone; maintains the original self-explicated desirabilities.
4. Solves for a group-based set of part worths. Group-level importances are then derived and used to compute a convex combination of individual and group; self-explicated desirabilities are not modified.

It should be noted that all four models use self-explicated data. However, these data are then augmented by individuals' responses to full-profile stimuli as well. In short, the models incorporate both self-explicated modeling and full (or partial) profile modeling. While Gibson wants to settle for self-explicated data alone, it seems to us that hybrid modeling provides a more general approach with relatively little increase in data collection. The four models described here all incorporate self-explicated data, while augmenting these inputs with decompositional information as well. Perhaps Gibson might like to take this additional step.

DO HYBRID CONJOINT MODELS WORK?

Lest Gibson feel that hybrid models are not worth the trouble, it may be of interest to describe a 1998 study we conducted that applied six different conjoint models to the same data set: (1) self-explicated data alone, (2) hybrid, (3) individual-level OLS, (4) aggregate OLS, (5) clusterwise regression, and (6) latent class regression.
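The convex combination in the fourth model can be illustrated directly: each blended part worth is a weighted average of the individual's estimate and the group-level estimate. This is only a sketch under assumed data structures; the weighting parameter and dictionaries below are illustrative, whereas the published models derive the weights from the data rather than fixing them:

```python
def blend_part_worths(individual, group, alpha=0.7):
    """Convex combination of an individual's part worths with group-level
    part worths (alpha = weight placed on the individual's estimates)."""
    assert 0.0 <= alpha <= 1.0
    return {attr: {level: alpha * ind_levels[level]
                          + (1.0 - alpha) * group[attr][level]
                   for level in ind_levels}
            for attr, ind_levels in individual.items()}

# Hypothetical part worths for a two-level warranty attribute.
blended = blend_part_worths(
    {"warranty": {"1-year": 0.0, "7-year": 1.0}},   # individual
    {"warranty": {"1-year": 0.2, "7-year": 0.8}},   # group
    alpha=0.5,
)
print(blended)
```

Pooling toward the group in this way stabilizes individual estimates when each respondent supplies only a few profile judgments, at the cost of shrinking genuinely idiosyncratic preferences.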
The product class consisted of automobiles, described by make and model, miles per gallon, exterior color, sound system, warranty, and price. Four attributes (make and model, color, sound system, and warranty) were non-monotonic. For example, the better the sound system, the higher the price. See Exhibit 1 for a description of the attributes and levels.

Exhibit 2 shows the calibration fits for the six models listed previously. As noted, the individual-level OLS and hybrid models performed best. The self-explicated model did reasonably well, while the three aggregate models (aggregate OLS, clusterwise regression, and latent class regression) performed relatively poorly. Exhibit 3 shows the models' validation performances. The individual-level OLS and hybrid models performed comparatively well on the criteria: correlation, root mean squared error, and rank position. However, the self-explicated model did not do well. Exhibit 4 shows market share performance. The individual-level OLS and hybrid models performed well; the self-explicated model performed adequately; and the aggregated models performed most poorly. The results in Exhibits 3 and 4 clearly show that the more highly aggregated models (aggregate OLS, clusterwise regression, and latent class regression) perform comparatively worse than the individual models.

WHAT CAN WE CONCLUDE?

These results illustrate the value of the hybrid model and its use of both self-explicated and full-profile data. While Gibson may still argue that self-explicated (alone) models show higher predictive validity than hybrid modeling, we remain unconvinced. Certainly, self-explicated models can, at times, out-predict conjoint models that include full or partial profile information. But we wouldn't want to bet the farm on it.

As for the discrete choice modelers, we suspect they will find little interest in the whole concept of self-explicated modeling, particularly if they are using hierarchical Bayes modeling. However, for those of us interested in hybrid conjoint modeling and its value in commercial applications, we would urge Gibson to give some of these more data-rich models a try. Perhaps he will be pleasantly surprised to keep his cake while adding some tasty icing.

While Gibson's story (and ours as well) emphasizes the virtues of a specific method, unfamiliarity or carelessness in implementing any research technique can be catastrophic to client and researcher alike. In the final analysis, it's wise to put trust in the supplier's experience and knowledge of the chosen technique, whether self-explicated, hybrid, or discrete choice. Gibson's wide knowledge of self-explicated models makes him a natural spokesperson for the merits of this approach.

Exhibit 1: Attributes and levels of conjoint stimuli

Make and model: Honda Prelude; Mitsubishi Eclipse; Nissan 240 SX; Toyota Celica
Exterior color: Green; Red; White
Miles per gallon: 18 mpg average; 24 mpg average; 30 mpg average
Sound system: AM/FM radio (no additional charge); AM/FM and cassette at an additional $200 cost; AM/FM, cassette, and six-disc CD at an additional $600 cost
Warranty: 1-year warranty (no additional charge); 4-year warranty at an additional $400 cost; 7-year warranty at an additional $700 cost
Base price: $16,000; $20,500; $25,000

Exhibit 2: Calibration fits for conjoint models
[Table: average (over individuals) correlation and root mean squared error for the six models — self-explicated, individual-level hybrid, individual-level OLS, aggregate OLS, clusterwise regression, and latent class regression; individual numeric entries not fully recoverable from the source.]

Exhibit 3: Validation performance for conjoint models
[Table: average (over individuals) correlation, root mean squared error, and rank position for the same six models; individual numeric entries not fully recoverable from the source.]

Note to Exhibits 2, 3, and 4: Bracketed pairs within a column are not significantly different at the .05 level. All other (correlated-sample) pair comparisons are significantly different, within column.
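The validation criteria reported in Exhibits 2 through 4 rest on two standard quantities: the correlation and the root mean squared error between a model's predicted evaluations and holdout (or actual) evaluations. A brief sketch of how they are computed, with hypothetical data:

```python
import math

def rmse(pred, actual):
    """Root mean squared error between predictions and holdout values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical holdout preferences vs. one model's predictions.
actual = [3.0, 7.0, 5.0, 9.0]
pred = [3.4, 6.5, 5.2, 8.8]
print(round(pearson_r(pred, actual), 3), round(rmse(pred, actual), 3))
```

In the study, both measures were computed per respondent and then averaged over individuals, which is why the exhibits report "average (over individuals)" figures.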

References

Green, Paul E., Stephen M. Goldberg, and Mila Montemayor (1981), "A Hybrid Utility Estimation Model for Conjoint Analysis," Journal of Marketing, 45 (Winter), 33-41.

Green, Paul E. and Abba M. Krieger (1996), "Individualized Hybrid Models for Conjoint Analysis," Management Science, 42 (June), 850-867.

Johnson, Richard M. (1987), "Adaptive Conjoint Analysis," in Proceedings of the Sawtooth Conference on Perceptual Mapping, Conjoint Analysis, and Computer Interviewing, M. Metegrano, ed. Ketchum, ID: Sawtooth Software, 253-265.

Krieger, Abba M., Paul E. Green, and U. N. Umesh (1998), "Effect of Level of Disaggregation on Conjoint Cross Validation: Some Comparative Findings," Decision Sciences, 29 (4), 1049-1060.

Exhibit 4: Market share prediction performance
[Table: mean absolute error and root mean squared error of predicted market shares for the six models — self-explicated, individual-level hybrid, individual-level OLS, aggregate OLS, clusterwise regression, and latent class regression; individual numeric entries not fully recoverable from the source.]

Paul E. Green is a marketing professor at the Wharton School, University of Pennsylvania. Abba M. Krieger is a statistics professor at the Wharton School, University of Pennsylvania.

