
Education Tech Research Dev (2009) 57:553–571
DOI 10.1007/s11423-007-9075-0

DEVELOPMENT ARTICLE

Design and development research: a model validation case

Monica W. Tracey
Wayne State University, 383 Education, Detroit, MI 48202, USA
e-mail: MonicaTracey@wayne.edu

Published online: 2 November 2007
© Association for Educational Communications and Technology 2007

Abstract This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, the Multiple Intelligences (MI) Design Model, used with an Instructional Systems Design (ISD) model, was tested for use by four practicing instructional designers. Instruction developed with this model was then evaluated by measuring post-test and attitude scores of 102 participants. This report also provides a reflection on the lessons learned in conducting design and development research on model validation. The procedures and findings have implications for the processes involved in instructional design model validation through designer use and program implementation.

Keywords Design and development research · Model validation · Instructional design · Internal validation · External validation · Multiple intelligences

Design and development research is "the systematic study of design, development and evaluation processes with the aim of establishing an empirical basis for the creation of instructional and non-instructional products and tools and new or enhanced models that govern their development" (Richey and Klein 2007, p. xv). It is a practical form of research that attempts to test theory and validate practice. Numerous models exist in the field of instructional design that assist designers working in a variety of settings (Gustafson and Branch 2002). Historically, many of these models have not undergone rigorous or systematic review. In recent years, there has been an increased focus on systematically studying the processes involved in the construction, validation, and implementation of instructional design models (Seels 1994; Richey 1998; Richey et al. 2004).
Current literature focused on the implementation and systematic validation of instructional design models describes five different approaches to model validation.


These include expert review, usability documentation, component investigation, field evaluation, and controlled testing (Richey 2005; Richey and Klein 2007). This article reports one case of a design and development study implementing some of these approaches to validate the Multiple Intelligences (MI) Design Model, an overlay model used in conjunction with an Instructional Systems Design (ISD) model. The study specifically measured model use, learner achievement, and program implementation and is an example of design and development research in practice.

Instructional design, multiple intelligences, and the instructional designer

In an effort to better understand this design and development model validation case, we
begin with a description of the context related to the development of the model. This
includes definitions and background content on instructional design, multiple intelligences,
and the instructional designer.
Instructional design (ID) is defined as an arrangement of resources and procedures used to promote learning (Gagné et al. 2005). ID models are visual representations of the ID process and are used to guide design in many settings and for many purposes (Seels and Glasgow 1998). They are typically a result of combining abstract principles of General Systems Theory with analyses of practitioner experience (Banathy and Jenlink 2004). Instructional design is the process used to create instruction and determine its type and delivery. Some see design as problem solving; others view it as a process of reflection-in-action, where designers take on the task of turning indeterminate situations into determinate ones (Tripp 1991).
Learning is the acquisition of knowledge of a skill, art, or trade through study and/or experience (Lindvall 1995). It is generally agreed that learners develop understanding for themselves in ways that differ, sometimes quite sharply, from other learners (Winn 2004). Multiple intelligences can be thought of as learners' tools that facilitate knowledge acquisition. Gardner (1983) states, "I have posited that all human beings are capable of at least seven different ways of knowing the world" (p. xi). Rooted in cognitive brain research, Gardner proposes that people learn in a variety of ways and have diverse strengths and abilities which, if recognized, can be developed to enable learners to reach their potential. He defines intelligence as "the capacity to solve problems or to fashion products that are valued in one or more cultural settings" (Gardner and Hatch 1989, p. 4). Gardner's seven ways of knowing the world, or intelligences, are: verbal-linguistic, logical-mathematical, musical-rhythmic, visual-spatial, bodily-kinesthetic, and two forms of personal intelligences, one directed toward other persons (interpersonal intelligence) and one directed toward oneself (intrapersonal intelligence).
While learners bring their own needs, experiences, and tools to a learning situation, instructional designers must incorporate those needs and experiences into instructional strategies that help learners take ownership of and responsibility for their own learning (Grabinger 1996). The significance of knowledge construction for Instructional Technology lies in the building of environments that make it easy for learners to formulate their own understandings of the world (Winn 2004). Instructional design is a process used to create instructional products, programs, and delivery systems. Multiple intelligences theory is built on the premise that learners acquire knowledge based on their learning potential and that people learn in at least seven different ways.
The first two phases of this design and development study, previously reported (Tracey and Richey 2007), included the design and internal validation of the MI Design Model. Based on research on instructional design and multiple intelligences, the MI Design Model was designed to be used as an overlay with an ID model. In order to achieve this, the theoretical foundations of multiple intelligences and instructional design were examined to guide the development of such a model. The model components were then determined and an initial model was constructed. The model was then reviewed and validated by experts in the field of instructional design through a three-round Delphi study. The result was a revised and internally validated MI Design Model (Tracey and Richey 2007). The newly developed model then had to face a usability test by instructional designers and course validation.
This article reports on the final two phases of the design and development research study: the process and results of model use by practicing instructional designers, and the delivery of instruction developed for learners, specifically measuring post-test performance and instructional appeal. In addition, reflections on lessons learned when conducting design and development research on model validation are presented.

Approaches to model validation

Once designed and validated by subject matter experts (Tracey and Richey 2007), the MI Design Model was ready to be tested using two separate model validation approaches: a designer usability test, in which instructional designers designed and developed a course, and a product impact test, which measured learners after they participated in the course. These validation efforts were guided by the following research questions:

Designer usability study


1. What are the advantages and disadvantages of using the overlay design model
incorporating multiple intelligences? Factors to consider included the following: (a)
Ease of use; (b) Ability to overlay the model with an existing design model; (c)
Capability to meet course intent; and (d) The ease of instructional design strategy
creation utilizing each multiple intelligence.
2. What is required to facilitate the effective use of the validated design model
incorporating multiple intelligences? Factors to consider included the following: (a)
Guidance required for designer model use; (b) Ability to incorporate content accuracy;
and (c) Ease of use.

Product impact study


3. To what extent does the design model incorporating multiple intelligences influence
post-test performance?
4. To what extent does the design model incorporating multiple intelligences influence
learner confidence and attitudes? Factors considered included: (a) Learner attitude
toward the instructional strategies; (b) Learner confidence in ability to apply content;
and (c) Learner attitude toward training.

Designer usability study

Usability documentation is defined as "information on the extent to which a product, tool, or model can be effectively, efficiently, and satisfactorily used in the context for which it was intended" (Richey and Klein 2007, p. 160). Designer usability documentation is one of three methods of internal design model validation (Richey 2005), which in this case studied the ease with which the model was used by instructional designers.


Similar to other design and development studies (Forsyth 1998), this phase included documenting the instructional designers' reactions to using the MI Design Model while designing and developing a team-building course. These data, combined with the program tryout and evaluation data, provided the basis for final model revisions. This internal model validation step was the third phase of the four-phase study, one part of the larger design and development research project. It is important to note that this phase was limited, as it included only two designers using the model. Additional validation studies with designers using the MI Design Model are necessary in order to reach conclusions regarding the model. The following section describes the designer usability participants, procedures, and results.

Participants

The designer usability phase of the study consisted of a total of four volunteer instructional designers with Master's degrees in Instructional Technology. Each designer, chosen from a list of volunteers, was a current practitioner, with two coming from business and two coming from education. Each volunteer was a recent graduate of the same university Instructional Technology program. The designers had knowledge and experience in instructional design with less than two years of practice, but little or no prior experience with multiple intelligences theory and practice. Once the four subjects were chosen, they were divided into two teams based on an equal distribution of skills, knowledge, and backgrounds. Each team, labeled Team 1 and Team 2, included one volunteer from business and one from education. Since the goal was to compare the instructional products developed by the two teams, Team 1, using the Dick and Carey Model (Dick et al. 2001) with the overlay MI Design Model, was the experimental group, while Team 2, using the Dick and Carey Model alone, was the control group. The same ISD model was used by both teams because the study focused on the overlay MI Design Model.

Procedures

The two teams of designers each received: (1) materials regarding the organization in
which the newly developed instruction would be used; (2) written content on team-
building, the content of the instruction to be developed; (3) audience, environment, and gap
analysis information; and (4) the Dick and Carey Instructional Systems Design (ISD)
model (Dick et al. 2001). In addition, Team 1 received the validated overlay MI Design
Model (see Fig. 1). Both teams were instructed to design a two-hour, instructor-led, classroom-based course on team-building for JARC (Jewish Association for Residential Care), a non-profit organization.
To ensure consistency between the two resulting courses in terms of goals, objectives, and content, all four instructional designers worked together to assess the needs, identify the goals of the team-building training, and review the content.
The teams then separated to conduct their own learner and environmental analyses based on the information given and the model(s) they were assigned to use. Both teams worked separately to create their performance objectives based on the results of their analyses and the steps of the design model they were utilizing. The teams reconvened to review their objectives to ensure each met the same team-building goal. Based on the objectives, post-test questions to be used in the product impact study were then developed together by the two teams of designers.


[Fig. 1 MI design model. The figure presents the MI Design Model as an overlay on the Dick and Carey ISD model. Each ISD step (assess needs to identify goals; conduct instructional analysis; analyze learners and environment; write performance objectives; develop assessment instruments; develop instructional strategy; develop and select instructional materials; design and conduct formative evaluation; conduct summative evaluation; revise instruction) is annotated with multiple-intelligences guidance and worked examples, such as using learner characteristics from the analysis to determine present MI behaviors, matching identified MI behaviors and performance objectives when designing assessment instruments, altering the learning environment so desired performance and matched MI behaviors can take place, and selecting organizational, delivery, and management strategies for each intelligence. Accompanying guideline tables map learner characteristics to MI behaviors and environmental requirements for each intelligence, and outline how to develop organizational, delivery, and management strategies with learner control, structure (world-related, inquiry-related, utilization-related), and an introduction, body, conclusion, and assessment for each strategy. Legend: Verbal-Linguistic (VL), Logical-Mathematical (LM), Visual-Spatial (VS), Bodily-Kinesthetic (BK), Musical-Rhythmic (MR), Interpersonal (ITE), Intrapersonal (ITA).]


[Fig. 1 continued: MI instructional strategy examples for each intelligence, e.g., word games, reading, writing, and speaking (verbal-linguistic); pattern recognition, problem solving, thinking formulas, reasoning, and category sorting (logical-mathematical); visualizing concepts, color and texture schemes, image creation, and spatial directions (visual-spatial); body gestures, physical creation and manipulation, and motor coordination (bodily-kinesthetic); tunes, melodies, rhythms, and manipulation of sounds and tones (musical-rhythmic); discussions, personal interactions, and creating synergy (interpersonal); and self-reflection, questioning, and self-motivation activities (intrapersonal).]

Upon consensus from all of the designers and a JARC expert review of the objectives, the teams separated and worked simultaneously in the same location, in two different offices, during the remainder of the design and development process. Each team was provided with computers, an office, any desired training materials, and a desktop publisher to create the instructor guide and participant guide templates and all finished products.
Two formative evaluations were conducted: an expert review and a small-group evaluation of a rough draft of the instruction (Tessmer 1995). The expert review of the content, objectives, and learning strategies of both courses was conducted with the Assistant Executive Director of JARC, and revisions were made based on the resulting feedback.
A small-group evaluation of the training materials from both courses and the post-test was conducted to improve the materials and field-test the post-assessment. This evaluation was conducted at the JARC offices in the training rooms to provide circumstances similar to those under which the learners would interact with the materials. One designer from each team conducted the training with the materials that team produced, while the other documented learner reactions. The designers documented how to improve the effectiveness, appeal, and efficiency of the instruction. Revisions of the materials and post-test instrument were made.
After revisions were completed for the team-building training courses, Team 1, the team using the overlay MI Design Model, was given a Likert-scale questionnaire with (a) strongly agree, (b) agree, (c) disagree, and (d) strongly disagree as the response scale for each question regarding the use of the model. Space was provided after each question for written comments from the designers as additional feedback. Because the questionnaire specifically addressed the overlay MI Design Model, Team 2 did not receive it.


Table 1 Summary of specific training components

Instructor guide
  Team using the MI model: Timing, description of instruction, content, strategies, needed course materials, delivery directions.
  Team not using the MI model: Agenda, delivery directions, content, timing, materials needed, overheads, note-taking and group activities, lecture notes.

Participant guide
  Team using the MI model: Half-page booklet with table of contents, course purpose, objectives, and agenda. Content: team definition, characteristics, working in a team, team roles, communication and feedback guidelines. Worksheets: team definition, recall and reflection activities. The guide is intended as a reference after the course. Each participant received a group photo from the final group activity as a visual reminder of the course and group work.
  Team not using the MI model: 8½ by 11 inch guide with agenda and course objectives. Content: team definition, characteristics, working in a team, team roles, communication and feedback guidelines, problem-solving method. Worksheets: note-taking on team characteristics and individual strengths of team members. The guide is used during the course for activities and for future reference.

Introduction activity
  Team using the MI model: Small groups draw a large picture illustrating teamwork. Groups present drawings; team characteristics are identified and written down on flipchart paper.
  Team not using the MI model: Participants introduce themselves and take a written pre-assessment on the definition of a team. Debriefed, and the correct definition of a team is given.

Team definition
  Team using the MI model: Small groups work together to create and illustrate their own definition of a team. Discussion comparing common and different elements in each definition. Definition given by instructor. Recall activity of the characteristics is completed by participants writing each one in their participant guide.
  Team not using the MI model: Activity sheet is given with the definition written out. Each component of the definition is discussed.

Team purpose/characteristics
  Team using the MI model: Team purpose poster shown. "Amazing Skills" activity: in small groups, teams solve a problem given index cards with amazing skills (personal strengths that help a team), problem descriptions, and a visual activity map. Reflection activity: participants reflect on the activity and their personal strengths.
  Team not using the MI model: Discuss how working as a team is more advantageous than working alone. Individual strength activity: each team member comes up with three strengths for a team member whose name they draw; each stands and shares the strengths of their team member. Instructor lists and explains characteristics of a team on flipchart paper and asks which ones participants have observed in the workplace. "Family Feud" chart of team characteristics revealed.

Team roles/strengths
  Team using the MI model: Team roles are defined with illustration. Strengths are discussed in terms of the previous Amazing Skills activity.
  Team not using the MI model: Present the nine typical roles of a team and briefly discuss the responsibilities of each. Activity: in small groups, read a scenario and identify the team role each character played.

Team communication/feedback
  Team using the MI model: One person describes an object while the other draws it, asking yes/no questions for clarification. Debrief with effective feedback strategies taught, illustrating how they could be used in this activity.
  Team not using the MI model: Instructor defines communication and feedback. Feedback activity: two volunteer teams are given a scenario; they read it and role-play the conversation using feedback characteristics.

Team problem solving
  Team using the MI model: Building activity: each participant is given a lunch bag filled with various items; in a small group they must build a structure at least 40 inches high using all of the teamwork skills learned. Debrief the building activity, discussing the problems the teams had building the structure. Introduce the four steps of problem solving and have participants apply the four steps to the problems they had with the building activity.
  Team not using the MI model: Definition of problem solving and instruction on the four steps involved in solving problems. A problem scenario is given and participants discuss how they could solve it using the concept.

Summary
  Team using the MI model: Participants describe and reflect on how they can use team skills.
  Team not using the MI model: Revisit the content covered during the training. Tell participants the instruction will provide them with a good foundation for participating in a team environment.

Results

Instruction produced

Both teams (Team 1, using the Dick and Carey Model with the overlay MI Design Model, and Team 2, using the Dick and Carey Model alone) created an Instructor Guide and a Participant Guide for the two-hour team-building course. The courses were held in a training room with conference tables and all requested materials. The subject matter, descriptions of the Instructor and Participant Guides, and the instruction designed are shown in Table 1.

Designers respond

Upon completion of the course design utilizing the overlay MI Design Model, each of the two members of Team 1 filled out a 14-question agree-disagree scale evaluation and added additional comments as desired. The following is a summary of the written designer feedback from the questionnaire; the complete responses are in Table 2.
Overall, the two designers who used the MI Design Model with the Dick and Carey Model responded favorably to each component. These designers found two steps the most useful: (1) the development of assessment instruments considering the multiple intelligences of the learners; and (2) the development of instructional strategies.

Table 2 Summary of designer responses to use of MI model

Learner analysis
  Using ISD learner analysis information to determine present MI behaviors.
    D1: The focus in the MI Model seems to be in determining the audience's present MI behaviors more so than identifying the skill/knowledge "gap." These analyses are quite different and should not be combined. The model would be much stronger if there were two different steps, one for a skill/knowledge "gap" analysis and one for MI analysis.
  Learner analysis enhanced with the incorporation of MI behaviors.
    D1: With a bit of refining, this could be the most valuable step in the entire model.
    D2: Using MI behaviors allowed us to focus our instruction towards the strengths of the learners. Very helpful and applicable.

Environmental analysis
  Using MI behaviors and desired performance to make environmental alterations.
    D1: Does "environment" refer to the audience's work environment or the training environment? Some models include an analysis of the work environment to determine the extent to which it is contributing to the performance problem, but that doesn't seem to be the case in this model. It seems more like the designer is to consider the training environment.
  Environmental alteration to assist in the instructional delivery step is useful.
    D2: A straightforward step.
    D1: Do not understand how to alter the environment to assist in instructional delivery.
    D2: The MI Design Model gave insight into what would be "best" for learning in the appropriate environment. This step provided options for designing instruction by presenting suggestions that would benefit the learners.

Assessment
  The development of assessment instruments including identified MI behaviors is clearly explained.
    D1: This step was helpful. Without this guidance I would not have considered the learner's preferences while designing the evaluation measures.
    D2: This step was clearly explained and well detailed.

Instructional strategies
  Incorporating MI behaviors in developing strategies is clear.
    D1: Step is very thorough in terms of example instructional strategies.
    D2: Clearly explained with appropriate amount of detail.
  MI strategy example list is easy to use.
    D1: Wide range of examples helpful during the brainstorming process of course design. Provided a good foundation for each intelligence category.
    D2: Examples were straightforward and logically organized.
  MI strategy examples assist in creating strategies incorporating MI into ISD.
    D1: List was valuable; it provided a variety of examples applicable to many training situations.
    D2: The instructional strategy list provided specifics that were helpful when making design decisions and it started the creative juices flowing.
  Course strategies are enhanced with the incorporation of MI strategies.
    D1 and D2: Instructional strategies were enhanced with the incorporation of the MI strategies.
    D1: I believe that retention and the quality of the experience are greatly increased when learners are presented information in the way in which they learn best.
    D2: Model allowed for interactive, creative instructional strategies that accounted for various learning styles.

Model use
  It is clear where to begin the model.
    D1: Like the examples provided; do not find it necessary to have so many.
    D2: Examples are a specific reference to assist first-time users and reduce questions.
  Examples identified at each step in the model are helpful in using the model.
    D1 and D2: Model was on the whole clear.
  It is clear how to use the model.
    D2: Model is well mapped out for the designer.
  The course goal is met while using MI behaviors/strategies.
    D1 and D2: The course goal was met using the MI Design Model.
    D1: Although it was not necessary to use MI behaviors/strategies, they definitely enhanced the quality of the instruction.
    D2: The course goal could have been met with other models, but the MI Design Model met the goal using strategies that will engage the learner throughout the instruction and that are appropriate to meet the learner's needs.
  The course is enhanced with the use of the MI Design Model.
    D1: By using the MI Design Model, we were able to design a course that specifically met the learner's needs. This approach was very valuable.
    D2: The course was designed with strategies that played into the strengths of the learners.

Considering the participants, in addition to the content, while developing the assessment instruments was new for these designers. Additionally, they found the instructional strategy step helpful when brainstorming to identify different activities. The designers found the model, on the whole, clear and easy to use. They indicated that it was a model that could be used in conjunction with an ISD model. The feedback from the designers was documented for future model revisions.

Product impact study

Product impact studies address the effects of using the model through the creation of the instructional products themselves, and the impact of these products on learners (Richey 2005). In this case, the goal of the product impact study was to externally validate the MI Design Model. Specifically, in this phase of the model validation study, the outcomes of the products created were tested in a controlled testing environment. An effort was made to compare the outcomes of products developed using the overlay model with the outcomes of products developed without it.
Two team-building courses were developed, one with the Dick and Carey Model alone and one with the overlay MI Design Model used in conjunction with the Dick and Carey Model. They were then delivered and evaluated to determine the extent to which the use of the overlay MI Design Model influenced post-test performance and attitudes toward learning in the course.

Participants

The participants in the training were 102 employees of JARC, a non-profit organization. The executive director of JARC volunteered her employees, as they had not had any previous training on team-building, and was willing to provide the organization's offices and employees for companywide training. The employees were divided into ten different classes, with eight to fifteen JARC employees in each group.

Procedures

Employees of JARC volunteered to attend a team-building course offered in one of ten sessions. Five sessions were allocated to the course designed with only the Dick and Carey model, and five sessions to the course designed with both the Dick and Carey and the overlay MI Design Model. The courses ran simultaneously, two per time slot. To ensure multiple-intelligence representation in each course, participants agreed to complete a Multiple Intelligence Assessment (Armstrong 1999). This was done in an attempt to include representation of all seven multiple intelligences in both courses. Upon arriving at the training site, participants were stratified according to their results on the MI assessment and randomly assigned to sessions. Since the focus of the study was specifically on the overlay MI Design Model, there was no need to further pre-assess the participants.
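As a concrete illustration of this kind of assignment, the sketch below stratifies a participant pool by dominant intelligence and deals each stratum across the sessions round-robin. It is a minimal sketch under stated assumptions, not the study's actual procedure: the participant identifiers, profile labels, and session count are hypothetical stand-ins.

import random
from collections import defaultdict

def stratified_assignment(participants, n_sessions=10, seed=42):
    # Spread each dominant-intelligence stratum across all sessions.
    rng = random.Random(seed)
    # Group participants by dominant intelligence from the MI assessment.
    strata = defaultdict(list)
    for name, dominant_mi in participants:
        strata[dominant_mi].append(name)
    sessions = [[] for _ in range(n_sessions)]
    slot = 0
    # Shuffle within each stratum, then deal members round-robin,
    # so every session receives a mix of all seven intelligences.
    for dominant_mi in sorted(strata):
        members = strata[dominant_mi]
        rng.shuffle(members)
        for name in members:
            sessions[slot % n_sessions].append(name)
            slot += 1
    return sessions

# Hypothetical pool: (participant id, dominant intelligence).
pool = [(f"P{i}", mi) for i, mi in enumerate(
    ["VL", "LM", "VS", "BK", "MR", "ITE", "ITA"] * 15)]
for i, session in enumerate(stratified_assignment(pool), start=1):
    print(f"Session {i}: {len(session)} participants")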
One designer from each of the two design teams instructed each of the five courses developed with their assigned model. Each designer chosen to facilitate the training currently worked in education and had classroom experience and similar facilitation skills and styles. This helped ensure consistency and thoroughness in each course.


The instructors set up the training rooms and were prepared with Participant Guides and all other training materials needed. Upon completion of the two-hour training, the instructors administered the post-assessment (post-test) designed to determine levels of knowledge about team-building after training. In addition, participants were asked to complete a 13-item opinion survey intended to measure their attitudes toward the training. The post-test and opinion questionnaires were then collected and labeled by course number, date, and time. Individual post-tests were also labeled to match the MI Assessment participants completed prior to the training.

Instrumentation

Post-test assessment

A 10-item post-test was designed and evaluated to measure participants' knowledge of team-building concepts immediately following the training sessions. The instructional designers developed the multiple-choice and true/false items during course development. Each question was reviewed by the four designers, a JARC representative, a small group of learners, and the researcher.

Opinion questionnaire

Participants also completed a 13-item opinion questionnaire to provide general reactions to the course and the materials. Responses to each of these measures used a strongly agree to strongly disagree scale, with (1) being strongly agree. In addition, the questionnaire addressed perceptions of learners' confidence levels during and after the training and general involvement in the team-building training.

Results

Table 3 presents the differences between the MI and ISD groups on overall post-test scores, while Table 4 presents the differences between the MI and ISD groups on attitudes toward the training. The following is a summary of the results.

Performance on post-test results

The difference between the MI group and the ISD group appeared to be significant on the overall post-test score (t = 2.79, p < .01). Out of a possible ten points, the MI group scored significantly higher (mean = 8.78) than the ISD group (mean = 7.90) on the knowledge portion of the measure.
Table 3 Differences between MI and ISD group on overall post-test scores

Group   N    Mean   SD     t      Sig.
MI      51   8.78   1.57   2.79   .01
ISD     51   7.90   1.56
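For readers who wish to reproduce this kind of comparison, the sketch below runs a pooled-variance independent-samples t-test with SciPy. The score arrays are simulated stand-ins generated from the group statistics reported in Table 3, not the study's raw data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for the 10-point post-test scores
# (Table 3 reports MI: mean 8.78, SD 1.57; ISD: mean 7.90, SD 1.56; n = 51 each).
mi_scores = np.clip(rng.normal(8.78, 1.57, 51), 0, 10)
isd_scores = np.clip(rng.normal(7.90, 1.56, 51), 0, 10)

# Pooled-variance t-test, conventional when group variances are similar;
# the same call, applied item by item, yields Table 4 style comparisons.
t, p = stats.ttest_ind(mi_scores, isd_scores, equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}")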


Table 4 Differences between MI and ISD group on attitude toward training responses

Item                                                         Group  N   Mean(a)  SD   t
1. Because of this training, I felt confident I would        MI     51  1.57     .50  -.47
   answer the first 10 assessment questions correctly.       ISD    50  1.62     .60
2. Because of this training, I know the basics of            MI     51  1.41     .57  -1.69*
   team-building.                                            ISD    51  1.61     .60
3. This training made learning about team-building easy.     MI     51  1.43     .50  -1.74*
                                                             ISD    51  1.63     .63
4. I felt at ease with the material during the training.     MI     51  1.41     .54  -2.21*
                                                             ISD    51  1.69     .71
5. I was eager to participate in this training.              MI     51  1.67     .59  -.87
                                                             ISD    51  1.76     .55
6. I was involved in all activities during this training.    MI     51  1.49     .54  -1.95*
                                                             ISD    51  1.71     .58
7. The individual activities contributed to my learning.     MI     51  1.57     .64  -.66
                                                             ISD    51  1.65     .56
8. The group activities contributed to my learning.          MI     50  1.38     .49  -2.98**
                                                             ISD    51  1.75     .72
9. The variation of activities contributed to my learning.   MI     51  1.37     .49  -3.45**
                                                             ISD    51  1.80     .75
10. Overall, I enjoyed this training.                        MI     51  1.37     .56  -2.10*
                                                             ISD    51  1.61     .57
11. Overall, I learned all of the material that was          MI     51  1.55     .50  -1.80*
    presented.                                               ISD    51  1.75     .59
12. Because of this training, I feel confident I can now     MI     51  1.61     .49  -.69
    operate as a team member.                                ISD    51  1.69     .65
13. Because of this training, I feel confident I have        MI     51  1.51     .54  -.19
    something to contribute to a team.                       ISD    51  1.53     .50

* p < .05
** p < .01
(a) The lower number is the desired number

These results were interpreted with caution, however, as the learning performance was based on a 10-item multiple-choice and true/false test, with the results indicating the MI model's superiority by an average difference of less than one question between the two groups. The outcomes could reflect a ceiling effect in the post-test scores, and for model validation these results alone provide little in the way of persuasive evidence that one model was better than the other. Therefore, in this study, the post-test results were reviewed as only one indicator validating the model.
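One simple way to probe for the ceiling effect noted above is to inspect how much of the score distribution sits at or near the test maximum. The sketch below does this on simulated stand-in scores; the one-point "near the ceiling" threshold is an assumption for illustration, not a value from the study.

import numpy as np

def ceiling_check(scores, max_score=10, near=1):
    # Share of scores at, and within `near` points of, the maximum.
    # A large share suggests the test cannot separate the strongest learners.
    scores = np.asarray(scores)
    return np.mean(scores == max_score), np.mean(scores >= max_score - near)

rng = np.random.default_rng(1)
sim = np.clip(np.round(rng.normal(8.78, 1.57, 51)), 0, 10)  # stand-in MI scores
at_max, near_max = ceiling_check(sim)
print(f"{at_max:.0%} at the maximum, {near_max:.0%} within one point of it")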

Attitude results

A t-test was conducted on each item in an attempt to assess any differences in opinions
between the MI and ISD groups. Given the scale used, a negative t-value indicates that the
MI group responded more favorably to the item than the ISD group (Table 4).


Outcome predictors

A regression analysis was conducted in an attempt to identify those variables that were predictors of participant performance on the post-test, attitude toward the training, and confidence. Of the five independent variables used, group membership was the only significant predictor of post-test performance, F(1, 99) = 6.656, p < .05 (R² = .06), suggesting that the overlay MI Design Model may have influenced post-test scores. As stated earlier, however, these results were interpreted with caution.
In examining the predictors of participant attitude toward the training, the items related to level of engagement and pre-course motivation appeared to be significant predictors. A more targeted analysis of attitude, examining items related to confidence, indicated that confidence as a team member and confidence in the ability to contribute to a team appeared to be significant attitude predictors.
In the analysis of confidence as a team member, items related to the use of group activities in the instructional strategy, enjoyment, and group membership appeared to be significant predictors of team member confidence. Initial analyses indicated that confidence in the ability to contribute to a team due to the training was significantly predicted by feeling at ease, being involved in all activities, group membership, and the spatial and intrapersonal intelligences.
Upon further review it was determined that both groups reported they felt confident they
could operate as a team member and had something to contribute to a team (Table 4, Items
12 and 13). These results indicate that participants had similar levels of confidence
regarding whether they could actually implement their learning. Table 5 provides a
summary of the four analyses conducted.

Table 5 Summary of stepwise regression analysis for variables predicting performance on post-test, attitude, and confidence

Dependent variable      Independent variable           B     SE B   β     R²    ΔR²   Omnibus F   Sig
1. Performance on       Design team                   -.80   .31   -.25   .06          6.66*      .01
   post-test            Feeling at ease               -.55   .25   -.22   .05          4.93       .30
2. Attitude             Involvement                    .33   .10    .36   .42
                        Presence of group activities   .29   .12    .28   .47   .05
                        Motivation                     .19   .19    .19   .50   .03    3.16*      .00+
                        Confidence as a team member    .40   .12    .40   .38
                        Confidence in contribution     .33   .13    .29   .42   .04   35.17       .00+
3. Confidence as a      Presence of group activities   .53   .08    .58   .50
   team member          Enjoyment                      .27   .09    .27   .54   .04
                        Design team                   -.18   .08   -.16   .57   .03    4.19*      .00+
4. Confidence in        Feeling at ease                .39   .08    .47   .50
   contribution         Involvement                    .34   .09    .36   .54   .04
                        Design team                   -.21   .07   -.20   .56   .02
                        Spatial                        .01   .02    .20   .59   .03
                        Intrapersonal                 -.01   .02   -.18   .62   .03    3.12*      .00+

* p < .01
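Forward stepwise selection of the kind summarized in Table 5 can be sketched with statsmodels: at each step the candidate predictor with the smallest entry p-value is added, stopping when no remaining candidate is significant. This is a minimal sketch, not the study's analysis script; the variable names and simulated data are hypothetical stand-ins for the study's predictors.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, alpha=0.05):
    # Greedy forward selection on entry p-values; returns the final OLS fit.
    selected, remaining = [], list(candidates)
    while remaining:
        # Entry p-value for each candidate when added to the current model.
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

# Hypothetical stand-in data: group membership plus two attitude items.
rng = np.random.default_rng(2)
n = 102
df = pd.DataFrame({
    "design_team": rng.integers(0, 2, n),  # 1 = MI overlay group, 0 = ISD only
    "feeling_at_ease": rng.normal(size=n),
    "involvement": rng.normal(size=n),
})
df["post_test"] = 8 + 0.9 * df["design_team"] + rng.normal(scale=1.5, size=n)

fit = forward_stepwise(df, "post_test",
                       ["design_team", "feeling_at_ease", "involvement"])
print(fit.summary().tables[1])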


Discussion and implications

The purpose of this article was to report one case of a design and development study implementing approaches that attempted to validate the MI Design Model, an overlay model used in conjunction with an instructional systems design model. The study specifically measured model use, learner achievement, and program implementation and is an example of design and development research in practice. The procedures and findings have implications for the processes involved in instructional design model validation through designer use and program implementation. The following discussion focuses briefly on the results of this study and provides a reflection on the lessons learned in conducting design and development research on model validation.

Validating the MI design model in instructional design

Designer use

Designer use validation of the MI Design Model focused on designer reactions to the ease of using the model, the ability to overlay the model with an existing ID model, the ease of instructional design strategy creation utilizing multiple intelligences, and the guidance designers need in order to use the MI Model while designing instruction. The designers in this study evaluated the overlay MI Design Model as being easy to work with and helpful in the design process, especially during the selection of the instructional strategies. McAleese (1988) indicates that in a sense designers are engaged in learning as well as design, because the designer's personal knowledge structures are altered by the information present in the design environment. The designers in this study commented on how the list of strategies was helpful in their creative thought process and how they customized the training to match their audience. In this sense, the designers appeared to be engaged in the learning process as well as the design process, as McAleese suggests.

Product impact

Product impact validation of the MI Design Model focused on how the instruction created with the overlay model influenced learner confidence, attitude, and post-test scores. Learners' attitudes toward the instructional strategies, their confidence in their ability to apply the content, their overall attitude toward the training, and their performance on the post-test were analyzed. Figure 2 summarizes the regression analyses, with each box representing a variable testing the impact of the MI Design Model on learner attitudes, confidence, and post-test performance.
The learning process appeared to be stimulated by the use of the MI Design Model, especially in terms of learner attitude toward the instructional strategies used. These strategies in turn appeared to positively affect learner attitudes (see Fig. 2).
One of the four attributes Keller (1987) believes should be considered when designing instruction that motivates the learner is confidence. For learners to be highly motivated, they must be confident that they can master the objectives of the instruction. If they lack confidence, they will be less motivated. Confidence refers to the learner's perceived likelihood of success (expectancy) and whether the learner perceives success as being under his or her control (Mory 2004).

[Fig. 2 Impact of the use of the MI design model. The figure is a path diagram: MI Design Model use leads to learner attitude toward the instructional strategies used (group activities, engagement, feeling at ease, and MI type variation), which in turn leads to learner attitude toward learning, learner confidence, and post-test scores.]

There is a body of research showing correlations between attitude and knowledge acquisition resulting in positive post-test scores. For example, motivation for knowledge acquisition, as described by Atkinson (1974), explains a person's desire to perform and achieve. This theory sprang from the behaviorist tradition that connects so-called "motivated behavior" with performance based on what he calls "resultant achievement motivation" (Williams 1996, p. 970). Learners achieve their optimal level of performance when they have an intermediate level of motivation to achieve success and to avoid failure. Learners' positive attitudes toward the instructional strategies may have led to confidence, which in turn may have led to positive post-test scores.
Interestingly, though, both groups reported they felt confident they could operate as a team member and had something to contribute to a team (Table 4, Items 12 and 13). These results indicate that participants had similar levels of confidence regarding whether they could actually implement their learning. The instruction took place in a work environment, which may have made the results appear more trustworthy to the designers and researchers; however, the richness of the work environment may have made it more difficult to prevent unrelated factors from affecting the results. Are the confidence data a result of the facilitators, the participants' belief systems, and/or the instructional designers' products in general? These are some of the questions surfacing in the analysis of this study's results; they point to the need to replicate this study and to reflect on the use of design and development research and model validation techniques.

Model validation techniques in instructional design

ID model validation has been viewed as either internal or external. Internal validation is a confirmation of the components and processes of an ID model, illustrated here by the designer usability study, while external validation addresses the impact of the products of model use, illustrated here by the product impact study. As is often the case in design and development research, the internal and external model validation processes in this study were part of a larger design and development research project (Richey and Klein 2007). The overlay MI Design Model was initially created and validated in an expert review (Tracey and Richey 2007). Following that expert review validation study, designers used the MI Design Model.
To compare the use of the overlay MI Design Model together with the Dick and Carey (2001) model by one design team against the use of the Dick and Carey Model alone by the second team, both teams were instructed to design a team-building course for a non-profit organization. The MI Design Model was then externally validated in a controlled testing study examining the effects of the products created with the MI Design Model and the Dick and Carey Model as compared with the products created using only the Dick and Carey Model. The resulting programs were implemented and evaluated through the use of learner knowledge post-tests and surveys addressing participant reactions to the instruction.
Design and development processes are studied sometime during, but oftentimes after, the development of several interventions. The purpose of such research is the creation of design knowledge in the form of a new or enhanced design or development model or design principles (Visscher-Voerman and Gustafson 2004). The four phases in this design and development study are insufficient to support recommendations regarding the future use of the MI Design Model; rather, they are compelling enough to invite additional exploration of the model's components, its use, and the products developed.

Design and development research for model validation

Design and development research can focus on the development, validation, or use of a model (Richey and Klein 2007). The primary goal of this research is the production of new knowledge, in this case in the form of a new design model. Since most ID models are based on an assumption that they are valid (Richey 2005), and data supporting validity are often rare or nonexistent, this study attempted to collect and analyze empirical data to validate the MI Design Model. The internal validation in this study attempted to test the components and processes of the model, while the external validation attempted to validate the impact of the products of model use. Lessons learned about conducting design and development research and model validation are illustrated below.

Validation through designer use

A usability test ought to determine under what conditions the design model can be used. Numerous variables may come into play during a usability test, so the number of designers may prove critical. In this study, the small number of participating designers makes recommendations regarding future use of the MI Design Model problematic. In addition to evaluating the model components and model use with a larger designer population, a systematic documentation of designer procedures, time taken, resources used, problems encountered, and resolutions of those problems must be conducted.
Reconstructing the actual practices of designers is an essential step toward developing insight into the gap between design models and design practice, and toward identifying the reasons for this gap (Visscher-Voerman and Gustafson 2004). Designer use studies indicate the focus should be on designers' rationales and reasons for the decisions made rather than on the activities performed. Studies illustrating the difference between novice and expert instructional designers confirm that they perform design tasks quite differently (LeMaistre 1998; Rowland 1992). Therefore, replication studies using the MI Design Model must include data gathering while both novice and expert designers create instruction on an array of subject matter in diverse settings.


Lessons learned in product impact

External model validations can be complex due to the large number of extraneous factors, some of which surfaced in this study. These include the characteristics of the facilitators and possibly the learners' past histories and value systems, as indicated in the confidence results (Table 4, Items 12 and 13). Nonetheless, external validations address those factors that many consider to be the central focus of design efforts (Richey 2005).
Verification of model effectiveness in this study was based on measures of learning and participant reactions to the instruction. Controlled testing validation research provides data that support the validity of a given procedural model under controlled conditions (Richey 2005). This controlled validation study established an experiment that isolated the effects of the MI Design Model used with the Dick and Carey Model as compared to the use of the Dick and Carey Model alone. Ideally, a comprehensive validation examining the MI Design Model under a variety of conditions is needed. To meet this goal, future research requires systematic replication to determine the impact of factors such as alternative settings, alternative types of learners, alternative content areas, and a variety of delivery strategies.
There is a need for numerous empirical studies that clarify the processes involved in the validation of ID models and in the products created through model use. Researchers identifying indicators of the success of an ID model should question whether the model is universally applicable, whether it can be used with other models, and whether the application context matches the context for which the model was originally intended (Edmonds et al. 1994). Replications of model effectiveness under a variety of conditions and integration of the various findings will not only increase the credibility of a particular model, but will also provide data to support (or refute) the field's assumption that ISD processes are generic (Richey 2005).
The presence of this body of research could explain some of the processes involved in
ID model use and the study of the products created in the ID process. This design and
development study was an attempt to internally and externally validate an instructional
design model in a systematic process rather than supporting a model based on personal
advocacy. Thus, the approaches used in this study can be generalized to other empirical
research involving ID model construction and validation.

References

Armstrong, T. (1999). 7 kinds of smart: Identifying and developing your multiple intelligences. New York:
New American Library.
Atkinson, J. W. (1974). The mainsprings of achievement-oriented activity. In J. W. Atkinson & J. O. Raynor
(Eds.), Motivation and achievement (pp. 193–218). Washington DC: Winston.
Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th ed.). New York:
Addison-Wesley Educational Publishers.
Edmonds, G. S., Branch, R. C., & Mukherjee, P. (1994). A conceptual framework for comparing ID models.
Educational Technology Research and Development, 42(4), 55–72.
Forsyth, J. E. (1998). The construction and validation of a model for the design of community-based train-the-trainer instruction (Doctoral dissertation, Wayne State University, 1997). Dissertation Abstracts International-A, 58(11), 4242.
Gagné, R. M., Wager, W. W., Golas, K. C., & Keller, J. M. (2005). Principles of instructional design (5th ed.). CA: Wadsworth/Thomson Learning, Publishers.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Gardner, H., & Hatch, T. (1989). Multiple intelligences go to school: Educational implications of the theory
of MI. Educational Researcher, 18(8), 4–9.

123
Design and development research 571

Grabinger, R. S. (1996). Rich environments for active learning. In D. H. Jonassen (Ed.), Handbook of
research for educational communications and technology (pp. 665–692). New York: Simon & Schuster
Macmillan.
Gustafson, K. L., & Branch, R. M. (2002). Survey of instructional development models (4th ed.). Syracuse, NY: ERIC Clearinghouse on Information and Technology.
Keller, J. M. (1987). Development and use of the ARCS model of motivational design. Journal of
Instructional Development, 10(3), 2–10.
LeMaistre, C. (1998). What is an expert instructional designer? Evidence of expert performance during
formative evaluation. Educational Technology Research and Development, 46(3), 21–36.
Lindvall, R. (1995). Addressing multiple intelligences and learning styles: Creating active learners. Doctoral
dissertation, Saint Xavier University, Illinois.
McAleese, R. (1988). Design and authoring: a model of cognitive processes. In H. Mathias, N. Rushby, &
R. Budgett (Eds.), Designing new systems for learning (pp. 118–26). New York: Nichols.
Mory, E. H. (2004). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research for
educational communications and technology (2nd ed., pp. 745–783). New York: Simon & Schuster
Macmillan.
Richey, R. C. (1998). The pursuit of useable knowledge in instructional technology. Educational Technology Research and Development, 46(4), 7–22.
Richey, R. C. (2005). Validating instructional design and development models. In J. M. Spector & D. A.
Wiley (Eds.), Innovations in instructional technology: Essays in honor of M. David Merrill (pp. 171–
185). Mahwah: Lawrence Erlbaum Associates, Publishers.
Richey, R. C., & Klein, J. D. (2007). Design and development research: Methods, strategies and issues. Mahwah: Lawrence Erlbaum Associates, Publishers.
Richey, R. C., Klein, J. D., & Nelson W. A. (2004). Developmental research: Studies of instructional design
and development. In D. H. Jonassen (Ed.), Handbook of research for educational communications and
technology (2nd ed., pp. 1099–1130). New York: Simon & Schuster Macmillan.
Rowland, G. (1992). What do instructional designers actually do? An initial investigation of expert practice.
Performance Improvement Quarterly, 5(2), 65–86.
Seels, B. (1994). An advisor’s view: Lessons learned from developmental research dissertations. Paper
presented at the 1994 Annual Meeting of the Association for Educational Communications and
Technology.
Seels, B., & Glasgow, Z. (1998). Making instructional design decisions (2nd ed.). New Jersey: Prentice Hall.
Tessmer, M. (1995). Planning and conducting formative evaluations. London: Kogan Page.
Tracey, M. W., & Richey, R. C. (2007). ID model construction and validation: A multiple intelligences case.
Educational Technology Research and Development, 55(4), 369–390.
Tripp, S. D. (1991). Two theories of design and instructional design. In Proceedings of Selected Research
Presentations at the Annual Convention of the Association for Educational Communications and
Technology.
Visscher-Voerman, I., & Gustafson, K. L. (2004). Paradigms in the theory and practice of education and training design. Educational Technology Research and Development, 52(2), 69–89.
Williams, M. D. (1996). Learner-control and instructional technologies. In D. H. Jonassen (Ed.), Handbook
of research for educational communications and technology (pp. 957–983). New York: Simon &
Schuster Macmillan.
Winn, W. (2004). Cognitive perspectives in psychology. In D. H. Jonassen (Ed.), Handbook of research for
educational communications and technology (2nd ed., pp. 79–112). New York: Simon & Schuster
Macmillan.

Monica W. Tracey is an Associate Professor of Instructional Technology at Wayne State University, Detroit, Michigan. Her research focuses on instructional designer model use, model design and validation, and instructional strategies.
