
Page 1 of 7





Michael Yeap...PhD Candidate

Saturday, November 27, 2010

20101127 - concepts & definitions of Evaluation research (Stern)


2. Can evaluation be defined?
Stern, E. (2004). Philosophies and types of evaluation research. In Descy, P.; Tessaring, M. (eds), The foundations of evaluation and impact research.

There are numerous definitions and types of evaluation. There are, for example, many definitions of evaluation put forward in handbooks, evaluation guidelines and administrative procedures, by bodies that commission and use evaluation. All of these definitions draw selectively on a wider debate as to the scope and focus of evaluation. A recent book identifies 22 foundation models for 21st century programme evaluation (Stufflebeam, 2000a), although the authors suggest that a smaller subset of nine are the strongest. Rather than begin with types and models, this chapter begins with an attempt to review and bring together the main ideas and orientations that underpin evaluation thinking. Indicating potential problems with 'definition' by a question mark in the title of this section warns the reader not to expect straightforward or consistent statements. Evaluation has grown up through different historical periods in different policy environments, with inputs from many disciplines and methodologies, from diverse value positions and rooted in hard-fought debates in philosophy of science and theories of knowledge.




While there is some agreement, there is also persistent difference: evaluation is contested terrain. Most of these sources are from North America, where evaluation has been established – as a discipline and practice – and debated for 30 or more years.
2.1. Assessing or explaining outcomes

Among the most frequently quoted definitions is that of Scriven, who has produced an evaluation Thesaurus, his own extensive handbook of evaluation terminology: '"evaluation" refers to the process of determining the merit, worth or value of something, or the product of that process [...] The evaluation process normally involves some identification of relevant standards of merit, worth or value; some investigation of the performance of evaluands on these standards; and some integration or synthesis of the results to achieve an overall evaluation or set of associated evaluations.' (Scriven, 1991; p. 139). This definition prepares the way for what has been called 'the logic of evaluation' (Scriven, 1991; Fournier, 1995). This logic is expressed in a sequence of four stages: (a) establishing evaluation criteria and related dimensions; (b) constructing standards of performance in relation to these criteria and dimensions; (c) measuring performance in practice; (d) reaching a conclusion about the worth of the object in question. This logic is not without its critics (e.g. Schwandt, 1997), especially among those of a naturalistic or constructivist turn who cast doubt on the claims of evaluators to know, to judge and ultimately to control. Other stakeholders, it is argued, have a role, and this changed relationship with stakeholders is discussed further below. The most popular textbook definition of evaluation can be found in Rossi et al.'s book Evaluation – a systematic approach: 'Program evaluation is the use of social research procedures to systematically investigate the effectiveness of social intervention programs. More specifically, evaluation researchers (evaluators) use social research methods to study, appraise, and help improve social programmes in all their important aspects, including the diagnosis of the social problems they


address, their conceptualization and design, their implementation and administration, their outcomes, and their efficiency.' (Rossi et al., 1999; p. 4). Using words such as effectiveness rather than Scriven's favoured 'merit, worth or value' begins to shift the perspective of this definition towards the explanation of outcomes and impacts. This is partly because Rossi and his colleagues identify helping improve social programmes as one of the purposes of evaluation. Once there is an intention to make programmes more effective, the need to explain how they work becomes more important. Yet, explanation is an important and intentionally absent element in Scriven's definitions of evaluation: 'By contrast with evaluation, which identifies the value of something, explanation involves answering a Why or How question about it or a call for some other type of understanding. Often, explanation involves identifying the cause of a phenomenon, rather than its effects (which is a major part of evaluation). When it is possible, without jeopardizing the main goals of an evaluation, a good evaluation design tries to uncover microexplanations (e.g. by identifying those components of the curriculum package that are producing the major part of the good or bad effects, and/or those that are having little effect). The first priority, however, is to resolve the evaluation issues (is the package any good at all, the best available? etc.). Too often the research orientation and training of evaluators leads them to do a poor job on evaluation because they became interested in explanation.' (Scriven, 1991). Scriven himself recognises that one pressure moving evaluation to pay greater attention to explanation is the emergence of programme theory, with its concern about how programmes operate so that they can be improved or better implemented.

A parallel pressure comes from the uptake of impact assessment associated with the growth of performance management and other managerial reforms within public sector administrations. The intellectual basis for this work was most consistently elaborated by Wholey and colleagues. They start from the position that evaluation should be concerned with the efficiency and effectiveness of the way governments deliver public services. A core


concept within this approach is what is called 'evaluability assessment' (Wholey, 1981). The starting point for this assessment is a critical review of the logic of programmes and the assumptions that underpin them. This work constitutes the foundation for most of the thinking about programme theory and logical frameworks. It also prefigures a later generation of evaluation thinking rooted more in policy analysis that is concerned with the institutionalisation of evaluation within public agencies (Boyle and Lemaire, 1999), as discussed further below. These management reforms generally link interventions with outcomes. As Rossi et al. recognise, this takes us to the heart of broader debates in the social sciences about causality: 'The problem of establishing a program's impact is identical to the problem of establishing that the program is a cause of some specified effect. Hence, establishing impact essentially amounts to establishing causality.' (Rossi et al., 1999). The difficulties of establishing perfect, rather than good enough, impact assessments are recognised by Rossi and colleagues. This takes us into the territory of experimentation and causal inference associated with some of the most influential founders of North American evaluation such as Campbell, with his interest in experimental and quasi-experimental designs, but also his interest in later years in the explanatory potential of qualitative evaluation methods. The debate about experimentation and causality in evaluation continues to be vigorously pursued in various guises. For example, in a recent authoritative text on experimentation and causal inference (Shadish et al., 2002), the authors begin to take on board contemporary criticisms of experimental methods that have come from the philosophy of science and the social sciences more generally. In recent years, we have also seen a sustained realist critique of experimental methods led in Europe by Pawson and Tilley (1997).

But, whatever their orientations to experimentation and causal inference, explanations remain at the heart of the concerns of an important constituency within evaluation.
2.2. Evaluation, change and values




Another important strand in evaluation thinking concerns the relationship between evaluation and action or change. One comparison is between 'summative' and 'formative' evaluation methods, terms also coined by Scriven. The former assesses or judges results and the latter seeks to influence or promote change. Various authors have contributed to an understanding of the role of evaluation and change. For example, Cronbach (1982, 1989), rooted in policy analysis and education, sees an important if limited role for evaluation in shaping policy 'at the margins' through 'piecemeal adaptations'. The role of evaluation in Cronbach's framework is to inform policies and programmes through the generation of knowledge that feeds into the 'policy shaping community' of experts, administrators and policy-makers. Stake (1996), on the other hand, with his notion of 'responsive evaluation', sees this as a 'service' to programme stakeholders and to participants. By working with those who are directly involved in a programme, Stake sees the evaluator as supporting their participation and possibilities for initiating change. This contrasts with Cronbach's position and even more strongly with that of Wholey (referred to earlier), given Stake's scepticism about the possibilities of change at the level of large-scale national (or in the US context Federal and State) programmes and their management. Similarly, Patton (1997 and earlier editions), who has tended to eschew work at programme and national level, shares with Stake a commitment to working with stakeholders and (local) users. His concern is for 'intended use by intended users'. Virtually everyone in the field recognises the political and value basis of much evaluation activity, albeit in different ways. While Stake, Cronbach and Wholey may recognise the importance of values within evaluation, the values that they recognise are variously those of stakeholders, participants and programme managers.

There is another strand within the general orientation towards evaluation and change which is decidedly normative. This category includes House, with his emphasis on evaluation for social justice, and the emancipatory logic of Fetterman et al. (1996) and 'empowerment evaluation'. Within the view of Fetterman and his colleagues, evaluation itself is not undertaken by external experts but rather is a self-help activity in



which – because people empower themselves – the role of any external input is to support self-help. So, one of the main differences among those evaluators who explicitly address issues of programme and societal change is in terms of the role of evaluators, be they experts who act, facilitators and advocates, or enablers of self-help.

Source: Stern (2004). Comments: Boring. Reading through all these concepts and definitions on Evaluation Research.
Posted by Michael of MMU at 1:36 AM
