Dunn and Dunn Model of Learning-Style Preferences: Critique of Lovelace Meta-Analysis

KENNETH A. KAVALE
GRETCHEN B. LEFEVER
Regent University

ABSTRACT The authors critiqued the M. K. Lovelace (2005) meta-analysis of the Dunn and Dunn Model of Learning-Style Preferences (DDMLSP). Lovelace's conclusion that learning-style instruction is a beneficial form of instructional delivery is unjustified because of critical conceptual and practical problems. Those problems surround interpretation of effect size, narrow focus on a single model, missing information, and, most notably, a sampling bias. Meta-analysis relies on the synthesis of many different types of studies. However, 96% of the studies cited in the Lovelace meta-analysis were dissertations (70% completed at the home institution of the DDMLSP's developers), leading to potential "home-team" bias. The proponents of the DDMLSP must address such concerns before the DDMLSP can be accepted by the education community.
Keywords: Dunn and Dunn Model of Learning-Style Preferences, learning-style instruction, Lovelace meta-analysis

In a meta-analysis investigating the Dunn and Dunn Model of Learning-Style Preferences (DDMLSP), Dunn, Griggs, Olson, Beasley, and Gorman (1995) concluded that "providing educational interventions that are compatible with students' learning-style preferences is beneficial" (p. 357). Ten years later, in a similar meta-analysis, Lovelace (2005) affirmed the Dunn et al. findings: "The results from the current and previous meta-analyses were consistent and robust. . . . I strongly suggest that learning-style responsive instruction would increase the achievement of and improve the attitudes toward learning" (p. 181). The positive conclusions about the importance of learning-style instruction stand in contrast to an earlier meta-analysis by Kavale and Forness (1987), who found little empirical support for learning-style instruction and concluded that "learning appears to be really a matter of substance over style" (p. 238).

Dunn et al. (1995; see also Dunn, 1990) contended that the Kavale and Forness (1987) meta-analysis was flawed because of a number of limitations, including the (a) addition of studies from diverse models in which researchers used different populations and identification assessments than used in the DDMLSP, (b) omission of studies that focused on the specific variables purportedly investigated, and (c) assumption that specific terms were defined and treated similarly in the included studies. However, such criticisms are not warranted because they ignore the primary purpose of meta-analysis. As a research synthesis technique, meta-analysis combines studies that may vary across a number of dimensions to achieve generalizations across an entire research domain. Although one may perceive that process as "mixing apples and oranges," Glass, McGaw, and Smith (1981) commented that "the claim that only studies which are the same in all respects can be compared is self-contradictory; there is no need to compare them since they would obviously have the same findings within statistical error" (pp. 22–23). The Dunn et al. (1995) and Lovelace (2005) meta-analyses did not synthesize different studies (i.e., studies investigating models other than the DDMLSP). That problem, along with other difficulties, raises questions about the positive conclusions offered in the two meta-analyses. Our purpose was to elucidate the problems and demonstrate that the recent Lovelace meta-analysis does not offer further validation of the DDMLSP.

The Lovelace Meta-Analysis

In her meta-analysis, Lovelace (2005) investigated "the overall effectiveness of the model and [examined] moderating variables that might affect outcomes resulting from use of the model" (p. 178). On the basis of 76 "original research investigations" (p. 179) and 168 effect sizes, Lovelace found a weighted d of .67 for achievement and a weighted d of .80 for improved attitude toward learning. She interpreted the effect-size (ES) magnitudes as demonstrating that "The data overwhelmingly supported the position that matching students' learning style preferences with complementary instruction improved academic achievement and
student attitudes toward learning" (p. 181).

Address correspondence to Kenneth A. Kavale, School of Education, Regent University, 1000 University Drive, Virginia Beach, VA 23464. (E-mail: kkavale@regent.edu) Copyright © 2007 Heldref Publications

November/December 2007 [Vol. 101(No. 2)]

Although the Lovelace meta-analysis is well done and possesses no major methodological difficulties, several conceptual and practical problems significantly limit its findings. Consequently, we do not believe that the Lovelace meta-analysis provides the intended level of support for the DDMLSP. Instead, caution is necessary before one can accept the optimistic picture about the nature of the DDMLSP.

Conceptual and Practical Problems

Status of the Dunn and Dunn Model of Learning-Style Preferences

Although well known and widely used, the DDMLSP is not the only available learning-style model. Far greater insight into the efficacy of instruction based on learning style might have been achieved if DDMLSP researchers had compared and contrasted the DDMLSP with other models offering divergent interpretations of the learning-style construct. Meta-analysis, with its comprehensive search perspective (i.e., seeking all available empirical research), offers the possibility of simultaneously investigating the efficacy of different learning-style models. The focus on a single model in the Lovelace (2005) meta-analysis provides no context for evaluating alternative models. For example, would another learning-style model produce larger effect sizes than the Dunn and Dunn model?

Using Cohen's (1988) criteria, Lovelace (2005) suggested that the obtained ES values for achievement were moderate to large and that "learning-style instruction might be expected to increase student achievement by 25 to 30 percentile points" (p. 179). A simple "statistical" interpretation, however, lacks a context of comparative value. Consequently, without comparisons to other learning-style models, it is difficult for one to judge the real importance of the Lovelace findings.
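The "25 to 30 percentile points" figure rests on the normal-distribution logic behind Cohen's (1988) benchmarks: a student at the control-group median is expected to move to the percentile given by the normal cumulative distribution function evaluated at d. A minimal sketch of that standard conversion (our illustration, not Lovelace's own computation):

```python
from statistics import NormalDist

def percentile_gain(d: float) -> float:
    """Percentile-point gain for a student at the control-group median,
    assuming normally distributed outcomes separated by a standardized
    mean difference d (Cohen's d)."""
    return 100 * (NormalDist().cdf(d) - 0.5)

# Weighted effect sizes reported by Lovelace (2005)
print(round(percentile_gain(0.67)))  # achievement: ~25 percentile points
print(round(percentile_gain(0.80)))  # attitude: ~29 percentile points
```

The conversion does reproduce roughly the range Lovelace quoted, but it says nothing about how those values compare with the effects of other models, which is the point at issue.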
Meaning of ES

In contrast to the moderate-to-large ES reported by Lovelace (2005), Kavale and Forness (1987) found a small ES (.14) across 30 studies investigating different interpretations of learning-style instruction, including the DDMLSP. Kavale and Forness (1990) placed learning-style instruction in the context of process training and found that it fell between perceptual-motor training (ES = .08) and social skills training (ES = .20). Any form of process training (e.g., learning-style instruction) may reveal limited efficacy because of the inherent difficulties in dealing with hypothetical (unobservable) constructs that make the conceptual foundation for learning-style instruction enormously complex and not easily defined (see Cronbach & Snow, 1977).

Although one may argue that the DDMLSP represents a special case of effective process training, how does the Dunn and Dunn model fare in the context of instructional effectiveness? When compared with other instructional practices, the DDMLSP reveals more modest efficacy. For example, Kavale (2007) showed that practices like providing reinforcement (ES = 1.17), drill and practice (ES = .99), and providing feedback (ES = .97) reveal very positive outcomes and are easier to implement than are the machinations required for assessing and matching instruction to preferred learning style. In addition, instructional methods designed to enhance academic performance reveal larger effects than learning-style instruction. Mnemonic instruction (ES = 1.62), strategy instruction (ES = .98), and direct instruction (ES = .93) are superior to learning-style instruction and focus immediately on teaching content (i.e., substance). The prerequisite work required to implement the DDMLSP "will serve only to deflect attention away from the primary requirement for learning: substance" (Kavale & Forness, 1990, p. 360).

Missing Information

Lovelace (2005) provided a number of different interpretations of the obtained ES that were useful in understanding the findings. Like the Dunn et al. (1995) report, however, Lovelace did not report measures of variability associated with the mean values, which is a significant limitation. The mean as a measure of central tendency targets the center of a distribution but does not describe the extent to which contributing individual scores differ. Most meta-analyses, when reporting mean values, also report an associated standard deviation (SD) that indicates the amount of dispersion around the mean. That statistic is important because distributions may possess equal mean values but significantly different shapes because of more or less associated variability. When variability is comparatively small, the contributing scores cluster around the mean, allowing for the possibility of greater confidence about the stability of the mean value.
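The interpretive role of a reported SD can be sketched numerically with the Kavale and Forness (1987) values for learning-style instruction (mean ES = .14, SD = .28); the variable names below are ours, for illustration only:

```python
# Mean ES and SD from Kavale and Forness (1987) for learning-style instruction
mean_es, sd = 0.14, 0.28

# Theoretical expectation (ES ± SD) about where a particular effect may fall
low, high = mean_es - sd, mean_es + sd
print(f"expected range: {low:+.2f} to {high:+.2f}")  # -0.14 to +0.42

# The SD is twice the mean ES: the treatment is more variable than effective
print(round(sd / mean_es, 1))  # 2.0
```

Nothing here is new statistics; it simply makes visible why a mean ES reported without its SD cannot, by itself, support a strong claim of consistent benefit.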
The lack of a reported measure of variability in the Lovelace (2005) meta-analysis limits interpretation of the mean value (see Hunter & Schmidt, 2004). Kavale and Forness (1990) showed that many educational interventions reveal more variability than effectiveness (i.e., the SD is larger than the ES). For example, Kavale and Forness (1987) found an ES of .14 and an SD of .28, indicating that learning-style instruction is twice as variable as it is effective. If the two statistics are used to represent a theoretical expectation (ES ± SD) about where a particular effect may fall, then learning-style instruction may vary from negative to zero to positive (.14 ± .28). Theoretically, learning-style instruction can be moderately effective (.42), very ineffective (–.14; i.e., students not receiving learning-style instruction perform better), or something in between. The positive skewness of the achievement distribution suggests that the ES values cluster at the low end and "tail off" at the high end (see Lovelace, p. 179). Without a measure of variability, one cannot place the mean in context, suggesting that Lovelace's interpretation of the mean ES cannot be unequivocally accepted.

Sampling

Locating research studies is a critical aspect of meta-analysis: "How one searches determines what one finds; and what one finds is the basis of the conclusions of one's integration of studies" (Glass et al., 1981, p. 61). Simultaneously, "Locating studies is the stage at which the most serious form of bias enters a meta-analysis since it is difficult to assess the impact of a potential bias" (p. 57). Bias can be avoided with a comprehensive description of search procedures that permits an "assessment of the representativeness and completeness of the data base for a meta-analysis" (p. 57). In addition, the goal of the literature search should be the inclusion of every available study because it "avoids the dilemma of choosing among studies and justifying why only some are included. It eliminates debates about which studies are worthy of inclusion" (Light & Pillemer, 1984, p. 32).

Kavale, Hirshoren, and Forness (1998) criticized the earlier Dunn et al. (1995) meta-analysis for its potentially biased sampling: "The Dunn et al. meta-analysis seems to have a dearth of published literature [i.e., peer-reviewed journal articles] because 35 of the 36 studies included were dissertations. When 97% of included studies are dissertations, can we assume that a comprehensive literature search was achieved?" (p. 76). Without the level of scrutiny offered by the peer-review process for most journal articles, it is not possible for one to have confidence in the reliability of findings from dissertations. Regardless of whether the DDMLSP "has been developed, researched, and refined during the past three decades by at least 18 professors and more than 200 graduate students at St. John's University, New York" (Kritsonis, 1997–1998, p. 2), the possibility of bias exists when research is conducted under the direction of those who developed the learning-style model (see Curry, 1990). The bias becomes more probable "when it is realized that 21 (58%) of the 36 studies included were completed at St. John's University, where Dunn heads the Center for the Study of Learning and Teaching Styles. Some tangible proof that no bias exists is absolutely necessary under such circumstances" (Kavale et al., 1998, p. 77).

The Lovelace (2005) meta-analysis also appears to possess a dearth of published literature. Although Lovelace "conducted a comprehensive literature search to locate published and unpublished experimental research investigations" (p. 178), 96% (n = 73) of the included studies were dissertations (i.e., unpublished literature). Only three items (i.e., two journal articles and one book chapter that may or may not have been peer reviewed) represented the published literature on the DDMLSP. Also, 70% (n = 51) of the included dissertations were completed at St. John's University, which again increases the potential for "home-team" bias. Thus, questions about potential bias and reliability of findings are equally applicable to the Lovelace meta-analysis.

The sampling timeframe for the Lovelace (2005) meta-analysis raises questions about the extent to which it provides new evidence of validity for the DDMLSP. Lovelace searched the literature from 1980–2000 and found 76 studies that met the stipulated inclusion criteria. Of those studies, however, 36 were used previously in the Dunn et al. (1995) meta-analysis, whose sampling timeframe was 1980–1990. Thus, Lovelace's (2005) literature base included only 53% "new" studies (n = 40), which suggests some limits on interpretation. Given the 47% overlap in the literature base, it is not surprising (but relatively uninformative) that Lovelace found, "The effect-size values and general findings were similar in both the previous and the current meta-analyses" (p. 180). If Lovelace (2005) had limited the search to the years 1990–2000, researchers could have compared and contrasted findings from one time period (1980–1990) with another (1990–2000) and achieved greater insight into the theoretical status of the DDMLSP. With almost one half of the findings already known, determining the extent to which the Lovelace meta-analysis provided enhanced understanding of the DDMLSP is difficult.

The completeness of the Lovelace (2005) literature search is also open to question. A review of one of her major sources ("Research based on the Dunn and Dunn model") reveals that Lovelace included a dissertation from St. John's University by Ciarletta (1998) titled "Effects on first- and second-graders' achievement and attitudes through a learning-style and multicultural literature-based approach" but did not include the next citation, a dissertation from St.
John’s University by Cirelli (1998) titled, “An experimental investigation of the effects of learning-style perceptual strengths and instructional strategies on special education and general education intermediate students’ achievement and attitudes.” The Cirelli dissertation appears to meet Lovelace’s inclusion criteria and seems as appropriate as the Ciarletta dissertation, so it seems reasonable for one to ask why it was not included. The sampling timeframe, however, had little influence on the fact that dissertations predominated the Lovelace (1995) meta-analysis. Finding a predominance of unpublished literature does not make for a better literature base because it has not undergone the rigors of the peer-review process before reaching the professional community. If not published, dissertations do not receive a second level of scrutiny (i.e., independent reviewers), which ensures greater confidence about the validity and trustworthiness of findings. Why is relatively little published research available that investigates the DDMLSP? Several explanations are possible. One possibility is that students have little desire to craft journal pieces. A second possibility, more

November/December 2007 [Vol. 101(No. 2)]

97
REFERENCES Ciarletta, M. (1998). Effects on first- and second-graders’ achievement and attitudes through a learning-style and multicultural literature-based approach. (Doctoral dissertation, St. John’s University). Dissertation Abstracts International, 59(08), 2824A. Cirelli, M. E. (1998). An experimental investigation of the effects of learningstyle perceptual strengths and instructional strategies on special education and general education intermediate students’ achievement and attitudes (Doctoral dissertation, St. John’s University). (Pending). Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook on research on interactions. New York: Irvington. Curry, L. (1990). A critique of the research on learning styles. Educational Leadership, 49, 50–52, 54–56. Dunn, R. (1990). Bias over substance: A critical analysis of Kavale and Forness’ report on modality-based instruction. Exceptional Children, 56, 352–356. Dunn, R., Griggs, S. A., Olson, J., Beasley, M., & Gorman, B. S. (1995). A meta-analytic validation of the Dunn and Dunn model of learning-style preferences. The Journal of Educational Research, 88, 353–362. Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage. Hunter, J. E., & Schmidt, F.L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed). Thousand Oaks, CA: Sage. Kavale, K. A. (2007). Quantitative research synthesis: Meta-analysis of research on meeting special educational needs. In L. Florian (Ed.), The Sage handbook of special education. London: Sage. Kavale, K. A., & Forness, S. R. (1987). Substance over style: Assessing the efficacy of modality testing and teaching. Exceptional Children, 54, 228–239. Kavale, K. A., & Forness, S. R. (1990). Substance over style: A rejoinder to Dunn’s animad versions. 
Exceptional Children, 56, 357–361. Kavale, K. A., Hirshoren, A., & Forness, S. R. (1998). Meta-analytic validation of the Dunn and Dunn model of learning-style preferences: A critique of what was Dunn. Learning Disabilities Research and Practice, 13, 75–80. Kritsonis, W. (1997–1998). National learning-styles studies impact classroom pedagogy. National Forum of Applied Educational Research Journal, 11, 1–3. Light, R. J., & Pillemer, D. B. (1984). Summing up: The science of viewing research. Cambridge, MA: Harvard University Press. Lovelace, M. K. (2005). Meta-analysis of experimental research based on the Dunn and Dunn model. The Journal of Educational Research, 98, 176–183.

distressing, is that journal articles are submitted for publication but rejected after peer review. A third possibility is that beyond the faculty and students at the St. John’s University Center for the Study of Learning and Teaching Styles, there is little interest in the DDMLSP in the wider educational research community. Whatever the reason, it is incumbent on Lovelace (2005) to explain why dissertations (and particularly St. John’s dissertations) dominated the literature base. Conclusion The Lovelace (2005) meta-analysis represents a continuing effort to validate the DDMLSP. Like its predecessor, the Dunn et al. (1995) meta-analysis, the Lovelace synthesis possesses a number of significant problems that limit confidence in the findings. The problems surround interpretation of effect size, narrow focus on a single model, missing information, and, most notably, the nature of the literature base. None of the problems of the Lovelace synthesis are new; an earlier critique by Kavale et al. (1998) raised similar issues about the Dunn et al. (1995) meta-analysis. Yet, Lovelace did not address the fundamental difficulties, and consequently, the findings must again be called into question. Although the Lovelace (2005) meta-analysis shows several technical advances (e.g., different interpretations of the ES statistic), the failure to address previous concerns means that the Dunn and Dunn model has not yet been validated. Some answers to the questions raised in this article are necessary before the DDMLSP can be accepted by the educational community.
