1 Philosophical/discursive

This may cover a variety of approaches, but will draw primarily on existing literature rather than new empirical data. A discursive study could examine a particular issue, perhaps from an alternative perspective (eg feminist). Alternatively, it might put forward a particular argument or examine a methodological issue.

Examples:
• Davies, P. (1999) 'What is Evidence-Based Education?' British Journal of Educational Studies, 47, 2, 108-121. [A discussion of the meaning of 'evidence-based education' and its relevance to research and policy]
• Pring, R. (2000) 'The 'False Dualism' of Educational Research'. Journal of Philosophy of Education, 34, 2, 247-260. [An argument against the idea that qualitative and quantitative research are from rigidly distinct paradigms]

2 Literature review

This may be an attempt to summarise or comment on what is already known about a particular topic. By collecting different sources together, synthesising and analysing critically, it essentially creates new knowledge or perspectives. There are a number of different forms a literature review might take.

A 'systematic' review will generally go to great lengths to ensure that all relevant sources (whether published or not) have been included. Details of the search strategies used and the criteria for inclusion must be made clear. A systematic review will often make a quantitative synthesis of the results of all the studies, for example by meta-analysis.

Where a literature field is not sufficiently well conceptualised to allow this kind of synthesis, or where findings are largely qualitative (or inadequately quantified), it may not be appropriate to attempt a systematic review. In this case a literature review may help to clarify the key concepts without attempting to be systematic. It may also offer critical or alternative perspectives to those previously put forward.

Examples:
• Adair, J.G., Sharpe, D. and Huynh, C-L. (1990) 'Hawthorne Control Procedures in Educational Experiments: A reconsideration of their use and effectiveness'. Review of Educational Research, 59, 2, 215-228. [A systematic review and meta-analysis of studies that have tried to measure the 'Hawthorne Effect']
• Black, P. and Wiliam, D. (1998) 'Assessment and classroom learning'. Assessment in Education, 5, 1, 7-74. [Quite a long article, but it includes an excellent summary of a large field of research]
• Brown, M., Askew, M., Baker, D., Denvir, H. and Millett, A. (1998) 'Is the National Numeracy Strategy Research-Based?' British Journal of Educational Studies, 46, 4, 362-385. [A review of the evidence for and against the numeracy strategy]
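The quantitative synthesis mentioned above can be sketched in a few lines of code. This is not from the handout: the study effect sizes and standard errors below are invented, and the fixed-effect (inverse-variance) pooling shown is only the simplest of several meta-analytic models.

```python
# A minimal sketch of fixed-effect meta-analysis: each study's effect
# size is weighted by the inverse of its variance (1 / SE^2), so more
# precise studies count for more in the pooled estimate.
# All numbers here are hypothetical, for illustration only.
import math

def fixed_effect_summary(effects, std_errors):
    """Pool study effect sizes by inverse-variance weighting."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies (standardised mean differences and SEs).
effects = [0.30, 0.10, 0.45]
std_errors = [0.10, 0.15, 0.20]
pooled, se = fixed_effect_summary(effects, std_errors)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}")
# → pooled effect = 0.270, SE = 0.077
```

Note that the smallest standard error (the most precise study) dominates the pooled result; a random-effects model would be needed if the studies were measuring genuinely different underlying effects.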
3 Case study

This will involve collecting empirical data, of a predominantly qualitative nature, generally from only one or a small number of cases. It usually provides rich detail about those cases. A case study generally aims to provide insight into a particular situation and often stresses the experiences and interpretations of those involved. It may generate new understandings, explanations or hypotheses. However, it does not usually claim representativeness and should be careful not to over-generalise. There are a number of different approaches to case study work (eg ethnographic, ethogenic, hermeneutic, etc) and the principles and methods followed should be made clear.

Examples:
• Jimenez, R.T. and Gersten, R. (1999) 'Lessons and Dilemmas derived from the Literacy Instruction of two Latina/o Teachers'. American Educational Research Journal, 36, 2, 265-302. [A detailed study of the behaviour and experiences of two teachers of English to minority students]
• Ball, S. (1981) Beachside Comprehensive: a case study of secondary schooling. Cambridge: CUP. [This is a book, but a classic case study]

4 Survey

Where an empirical study involves collecting information from a larger number of cases, it is usually described as a survey. A survey may be cross-sectional (data collected at one time) or longitudinal (collected over a period). Alternatively, a survey might make use of already available data, collected for another purpose. Because of the larger number of cases, a survey will generally involve some quantitative analysis. Issues of generalisability are usually important in presenting survey results, so it is vital to report how samples were chosen and what response rates were achieved, and to comment on the validity and reliability of any instruments used.

Examples:
• Francis, B. (2000) 'The Gendered Subject: students' subject preferences and discussions of gender and subject ability'. Oxford Review of Education, 26, 1, 35-48.
• Denscombe, M. (2000) 'Social Conditions for Stress: young people's experience of doing GCSEs'. British Educational Research Journal, 26, 3, 359-374.

5 Evaluation

This might be an evaluation of a curriculum innovation or organisational change. An evaluation can be formative (designed to inform the process of development) or summative (to judge the effects). Often an evaluation will have elements of both. Evaluations will often make use of case study and survey methods, and a summative evaluation will ideally also use experimental methods. If an evaluation relates to a situation in which the researcher is also a participant it may be described as 'action research'.

Examples:
• Burden, R. and Nichols, L. (2000) 'Evaluating the process of introducing a thinking skills programme into the secondary school curriculum'. Research Papers in Education, 15, 3, 293-306.
• Ruddock, J., Berry, M., Brown, N. and Frost, D. (2000) 'Schools learning from other schools: cooperation in a climate of competition'. Research Papers in Education, 15, 3, 259-274.

6 Experiment

This involves the deliberate manipulation of an intervention in order to determine its effects. The intervention might involve individual pupils, teachers, schools or some other unit. An experiment may compare a number of interventions with each other, or may compare one (or more) to a control group. If allocation to these different 'treatment groups' is decided at random it may be called a true experiment; if allocation is on any other basis (eg using naturally arising or self-selected groups) it is usually called a 'quasi-experiment'. Again, if the researcher is also a participant (eg a teacher) this could be described as 'action research'.

Issues of generalisability (often called 'external validity') are usually important in an experiment, so the same attention must be given to sampling, response rates and instrumentation as in a survey (see above). It is also important to establish causality ('internal validity') by demonstrating the initial equivalence of the groups (or attempting to make suitable allowances), presenting evidence about how the different interventions were actually implemented, and attempting to rule out any other factors that might have influenced the result.

Examples:
• Finn, J.D. and Achilles, C.M. (1990) 'Answers and questions about class size: A statewide experiment'. American Educational Research Journal, 27, 3, 557-577. [A large-scale classic experiment to determine the effects of small classes on achievement]
• Slavin, R.E. (1980) 'Effects of individual learning expectations on student achievement'. Journal of Educational Psychology, 72, 4, 520-524. [A smaller study which investigates how the kinds of feedback students are given affects their achievement]

RJC EdD Research Methods 2002
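The random allocation that distinguishes a true experiment from a quasi-experiment can be sketched briefly. This is not from the handout: the pupil labels and group names are invented, and the round-robin deal after shuffling is just one simple way of producing equal-sized groups.

```python
# A sketch of random allocation to treatment groups. Shuffling the
# units before dealing them into groups means any pre-existing
# differences are spread by chance, supporting initial equivalence.
# Pupil labels and group names below are hypothetical.
import random

def randomise(units, groups, seed=None):
    """Shuffle the units, then deal them round-robin into the groups."""
    rng = random.Random(seed)  # seed only to make the sketch repeatable
    shuffled = list(units)
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

pupils = [f"pupil_{n}" for n in range(20)]
allocation = randomise(pupils, ["small class", "regular class"], seed=42)
for group, members in allocation.items():
    print(group, len(members))  # each group receives 10 pupils
```

Allocating whole schools or classes rather than individual pupils (cluster randomisation) works the same way, with the school as the unit passed in.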