Thinking Evolutionarily: The Evidence Base

Ross Nehm, Associate Professor, Education & EEOB, The Ohio State University

Outline  of  today’s  talk  
• 1. Bird's eye view of the evidence base
• 2. Thinking evolutionarily: what we know about novice to expert reasoning patterns
• 3. Recent discoveries about novices' evolutionary thinking patterns
• 4. Implications of prior research for 'evolution across the curriculum'

Today’s  talk  
• 1. Bird's eye view of the evidence base
• 2. Thinking evolutionarily: novice to expert reasoning patterns
• 3. Recent discoveries about novices' evolutionary thinking patterns
• 4. Implications for evolution across the curriculum

Low levels of evolutionary knowledge and high levels of misconceptions are ubiquitous

In order of increasing education:
• General public (e.g., Brooks, 2001; Newport, 2004)
• High school students (e.g., Demastes et al., 1995)
• Undergraduate students (e.g., Bishop & Anderson, 1990)
• Undergraduate biology majors (e.g., Nehm & Reilly, 2007)
• Science teachers (e.g., Nehm & Schonfeld, 2007; Nehm et al., 2009)
• Medical students (e.g., Brumby, 1984)

Source: Nehm & Schonfeld (2007)

More education, same misconceptions

Biology teacher misconceptions (Nehm & Schonfeld, 2007):

Nature of science
• Theories become facts when they are well supported
• Evolution can't be "proven"
• Evolution can't be refuted by any observation
• For evolution to be true it must be observed
• Evolution is weak because it is a theory

Evolution
• Chance cannot be a factor in the origin of complex traits
• No fossil species found between humans and "apes"
• Mutations are harmful and cannot give rise to new traits
• Humans and dinosaurs coexisted
• The fossil record lacks intermediates

Natural selection
• Use and disuse explain the appearance/disappearance of traits
• Traits appear only when they are needed
• Populations develop new traits rather than individuals
• When sight is lost, other senses evolve to be more sensitive
• Mutations are caused by mutagenic substances in the environment
• Change is caused by the environment

The same misconceptions, across increasing levels of education, are also documented in samples of secondary students, college undergraduates, and teachers (e.g., Deadman & Kelly, 1978; Brumby, 1979, 1984; Johnson & Peeples, 1987; Hallden, 1988; Bishop & Anderson, 1990; Eve & Dunn, 1990; Greene, 1990; Jimenez, 1992; Lawson & Worsnop, 1992; Ryan & Aikenhead, 1992; Scharmann & Harris, 1992; Settlage, 1994; Zuzovsky, 1994; Demastes, Settlage, & Good, 1995; Jensen & Finley, 1995, 1997; Dagher & BouJaoude, 1997; Sinclair & Pendarvis, 1997/8; Sinclair, Pendarvis, & Baldwin, 1997; Author & Sheppard, 2003; Scharmann et al., 2003).

What works to address these problems? Types of evidence

Designs supporting causal claims:
• RCTs (randomized controlled trials): causation, possible generalization
• Interventions with comparison groups: causal implications, possible generalization

Other designs:
• Interventions with no comparison group: pre-post change, associations
• Survey research: associations
• Case studies, interviews, qualitative research: variable identification, possible associations

Research studies in evolution education

[Figure: Of roughly 200 evolution education studies sampled, 24 were intervention studies, 6 were intervention studies with comparison groups, and 0 were intervention studies with control groups (subject randomization). Data from Nehm, 2006.]

Intervention studies

Intervention efficacy tests depend on measurement quality and task authenticity
NRC (2001: 5): "[a]ssessments need to examine how well students engage in communicative practices appropriate to a domain of knowledge and skill, what they understand about those practices, and how well they use the tools appropriate to that domain"

Less…
• Assessment of the most easily measured knowledge elements
• …fragments of isolated knowledge
• …recognition of ideas, explanations
• …multiple choice

More…
• Assessment of the most valuable skills and performances
• …knowledge selection, organization, and assembly
• …communication of ideas, explanations
• …constructed response

Intervention assessments: science teachers

Ha,  Nehm,  and  Baldwin,  in  review  

Part 1 Summary: Problems are well-established; solutions are not

• Research has established key variables that should be investigated and many possible beneficial instructional interventions…
• …but we lack robust, generalizable, causal claims relating to particular pedagogical strategies and interventions.
• …we lack measurement instruments that meet basic quality-control standards (i.e., AERA, 1996) (Nehm & Schonfeld, 2008) and capture authentic disciplinary practices.
• …we lack consistent application of measurement instruments across different populations ("apples and oranges" measurement issues).

Today’s  talk  
• 1. Bird's eye view of the evidence base
• 2. Thinking evolutionarily: progressions from novice to expert
• 3. Recent discoveries about novices' evolutionary thinking patterns
• 4. Implications for evolution across the curriculum

Thinking evolutionarily: progressions from novice to expert

• How do different groups (e.g., novice to expert) think about the same problems, using performance-based measures (explaining evolutionary change)?
• A recently completed large-scale study of >400 individuals, from non-major college students to full professors of evolutionary biology (Nehm & Ha, in preparation), reveals useful insights into learning evolution.

Competency: explaining evolutionary change
Being able to clearly and logically explain scientific phenomena free of naïve ideas is a core competency that is highly valued by scientists and educators but is difficult to measure.

Students have a harder time explaining evolutionary change (in writing or orally) than recognizing accurate scientific elements of an explanation on a multiple-choice test (Nehm and Schonfeld, 2008).

Views of competence
• Identifying the appropriate elements of an evolutionary explanation (multiple-choice tests)
• Building a robust and functioning evolutionary explanation (constructed response)

Knowing the parts and tools needed to assemble furniture does not mean that you can actually build it effectively.

Competence and multiple-choice tests: a second concern

Naïve model: "One day snails had to have a poison in order to fight predator [Need]. Then, environmental pressure then caused them to have poison [Pressure]. Therefore, they all has changed into poisonous snails [Essentialism]."

Mixed model: "One day snails had to [Need] a mutation for poison [Variation]. The poisonous snail had gradually adapted to their environment [Adapt] so the population of the snail increase [Change of pop.]."

Scientific model: "One day there was a mutation [Variation] that produced a poison. The poisonous snail was better able to produce more offspring [Differential survival] in the environment passing on his trait [Heredity]."

[Figure: Four panels (A–D) comparing non-majors, majors, advanced majors, and experts: (A) number of key concepts, (B) number of misconceptions, and (C, D) percentages of respondents holding scientific models, mixed models (2–3 core concepts), naïve models, and non-adaptive concepts. ** p < 0.01. N = 428 (107 per group); from Nehm & Ha, in preparation.]

Language of evolutionary explanation (n = 428)
[Figure: Percentage of explanations using each concept (differential survival, variance, limited resources, heredity, change of population, pressure, adapt, use/disuse, competition, energy, teleology, intentionality) for non-majors (Year 1), majors (Year 2), advanced majors (Year 3), and experts (Year 4). N = 428 (107 per group); from Nehm & Ha, in preparation.]

[Figure: Concept co-occurrence diagrams for non-majors, majors, advanced majors, and experts. Scientific concepts: V (variability), H (heritability), D (differential survival); naïve ideas: T (need/goal), U (use/disuse), I (intentionality), A (adapt), E (energy), P (pressure). Link thickness indicates frequency of co-occurrence (5%–40%). N = 107 per group; Nehm & Ha, in preparation.]

What accounts for these patterns?

• What factors are causing differences in novice and expert evolutionary reasoning patterns?

Problem-solving research in other science domains

[Diagram: Novice biological thinking: items a, b, and c each elicit a different explanatory model ("A", "B", "C"). Expert biological thinking: items a, b, and c all elicit a single explanatory model ("N").]

Coherence in evolutionary explanation
Cognitive resources, their descriptions, and consistency* of use (expert / novice):
• Key concept 6, Differential survival: 70% / 36%
• Key concept 1, Causes of variation: 60% / 4%
• Key concept 2, Heritability: 60% / 8%
• Naïve concept 1, Needs drive change: n/a / 0%
• Naïve concept 2, Pressure forces change: n/a / 0%
• Naïve concept 3, Use and disuse explain change: n/a / 0%
• Naïve concept 4, Acclimation = adaptation: n/a / 0%
• Naïve concept 5, Inheritance of acquired traits: n/a / 0%
• Naïve concept 6, Intentionality explains change: n/a / 0%

Natural selection elements were linked consistently (more coherent) in experts but applied haphazardly in novices. Naïve ideas were absent in experts and showed no coherence whatsoever in novices. (Nehm & Ridgway, in press)

Coherence hypothesis
• Novices solve problems using concrete surface features.
• Experts solve problems using domain principles (e.g., natural selection).
• Significant coherence characterizes experts; multiple explanatory models characterize novices.

[Diagram repeated: novices apply a different explanatory model ("A", "B", "C") to each item (a, b, c), whereas experts apply a single explanatory model ("N") across items.]

Nehm & Ridgway, in press

Part 2 Summary: Experts and novices "see" evolution differently

• Surface-feature perceptions account for differences in problem-solving performance between novices and experts.
• While students know the elements of the theory of natural selection, they do not use these elements together in a consistent manner across different problems.
• Even after completing an evolution course, only 50% of students have "expert-like" perception of evolutionary problems.
• As students progress through biology, we do little to help them reason across cases.

Today’s  talk  
• 1. The knowledge base: evidence-based evolution education
• 2. Thinking evolutionarily: novice to expert reasoning patterns
• 3. Recent discoveries about novices' evolutionary thinking patterns
• 4. Implications for evolution across the curriculum

Theory: Contextualized reasoning
[Diagram: Contextualized reasoning model. An item's surface features cue and frame a contextual problem space; this recruits cognitive resources (conceptual, procedural, analytical, factual) stored as key concepts, naïve ideas, and cognitive biases, which in turn shape the response (answer, process, structure). Nehm & Ridgway, in press; Nehm, in preparation.]

Which surface features are problematic?

• Scale (intraspecific, interspecific) (Nehm & Ha, 2011)
• Polarity (trait gain, trait loss) (Nehm & Ha, 2011)
• Taxon (animal, plant) (Opfer, Nehm, Ha, et al., 2011)
• Familiarity (Dodder vs. Rose) (Opfer, Nehm, Ha, et al., 2011)

Experimental  design  
• Participant randomization
• Large samples of novices (>200 participants)
• Control of all aspects of language and item features
• Manipulation of one evolutionary problem feature
• Nehm & Ha (2011) JRST

Scale, polarity, and familiarity effects on evolutionary reasoning

[Figure: Item design crossing taxon (plant, animal, bacteria) with trait polarity (gain, loss).]

Scale: intraspecific vs. interspecific; polarity: trait gain vs. trait loss (same organisms and traits)

[Figure: Mean number of accurate elements for trait gain vs. trait loss items on EGALT-B (between species) and EGALT-W (between populations of the same species). Nehm & Ha (2011).]

Scale: intraspecific vs. interspecific; polarity: trait gain vs. trait loss (same organisms and traits)

[Figure: Mean number of misconceptions for trait gain vs. trait loss items on EGALT-B (between species) and EGALT-W (within species). Nehm & Ha (2011).]

Familiarity (Prosimian vs. Rose); taxon (plant vs. animal) (same: interspecific trait gain)

[Figure: Familiarity effects, plants vs. animals: mean number of key concepts for familiar vs. unfamiliar plant and animal taxa. Opfer, Nehm, Ha, et al. (2011).]

[Figure: Panels A–D (within species, between species, trait gain, trait loss) showing which scientific concepts (V: variability, H: heritability, D: differential survival) and naïve ideas (N: need/goal, U: use/disuse, I: intentionality, A: adapt, E: energy, P: pressure) were significantly more (+) or less (−) frequent in each item context (p < 0.05 or p < 0.01).]

Part 3 Summary: Problem surface features deserve major attention

• Specific surface features of evolutionary problems play a huge role in how novices think about evolution.
• Biology education can be more precise about its instructional targets.

Today’s  talk  
• 1. The knowledge base: evidence-based evolution education
• 2. Thinking evolutionarily: novice to expert reasoning patterns
• 3. Recent discoveries about novices' evolutionary thinking patterns
• 4. Implications for evolution across the curriculum

Points  to  consider  1:  
• Students appear to progress from naïve models → mixed models → scientific models, and progress is very slow (25% of advanced majors, i.e., students who have completed an evolution class and additional coursework, have mixed models).
• Learning evolution is characterized by adding scientific ideas to naïve ideas, and yet most assessments don't allow this option.

Points  to  consider  2:  
• The surface features of evolutionary problems play a huge role in how novices think about evolution (Nehm & Ridgway, in press).
• Misconceptions are surface-feature specific, so instructional examples must be chosen carefully.
• Taxon (animal/plant), trait-change polarity (gain/loss), scale (within vs. between species), and familiarity present unique reasoning challenges for students (Nehm & Ha, 2011; Opfer, Nehm, Ha, et al., 2011).

Take  home  points  3:  
• Assessments of competency must include authentic production tasks, such as explaining how evolutionary change occurs, not just fragmented knowledge-selection tasks (Nehm & Schonfeld, 2010).
• Evolution assessments must be developed that meet quality-control standards established by the educational measurement community (AERA et al., 1999); otherwise, robust claims (causal or otherwise) cannot be made.

Take  home  points  4:  
• A minority of research studies in evolution education involve interventions with comparison groups, limiting robust guidelines for practice.
• While evidence comes in many forms, causal claims must be established; RCT studies, or intervention studies with comparison groups, are desperately needed.

Thank  you  
• NSF CAREER program, NSF CCLI program, NSF TUES program, and NSF REESE program for research support
• Numerous collaborators and students, particularly Minsu Ha, Irvin Schonfeld, Hendrik Haertig, Leah Reilly, and Meghan Rector, for their important contributions
• Papers: www.nehmlab.org
