
JPAL102x: Designing and Running Randomized Evaluations

Syllabus

Administration
• Rachel Glennerster, Executive Director, Abdul Latif Jameel Poverty Action Lab (JPAL)
• Marc Shotland, Associate Director of Training, Abdul Latif Jameel Poverty Action Lab (JPAL)
• Course website: https://www.edx.org/course/designing-running-randomized-evaluations-
mitx-jpal102x

Course Description:
A randomized evaluation, also known as a randomized controlled trial (RCT), field experiment,
or field trial, is a type of impact evaluation that uses random assignment to allocate
resources, run programs, or apply policies as part of the study design. This course will provide
step-by-step training on how to design and conduct an RCT for social programs. You will learn
why and when to conduct RCTs and the key components of a well-designed RCT. In
addition, this course will provide insights on how to implement your RCT in the field, including
questionnaire design, piloting, quality control, and data collection and management. The course
will also cover common practices to ensure research transparency.

No previous economics or statistics background is required. However, economics and statistics
concepts and vocabulary will be used, and some familiarity with them is advised. Some exercises
will require the use of statistical software, either R or Stata. Resources for downloading,
installing, learning, and using R are available in the course.

This Course and the MicroMasters:


This course is part of the new MITx MicroMasters in Data, Economics, and Development Policy
(DEDP). The program, which consists of five online classes and proctored exams, is co-designed
and run by MIT’s Department of Economics and the Abdul Latif Jameel Poverty Action Lab
(JPAL), a global leader in conducting randomized evaluations to test and improve the
effectiveness of programs aimed at reducing poverty. The MicroMasters program is intended for
learners who are interested in building a full set of tools and skills required for data analysis in
the social sciences, understanding the problems facing the world’s poor, and learning how to
design and evaluate social policies that strive to solve them. You can learn more about this
program by visiting the new MITx MicroMasters Portal. We hope that many of you will decide
to join us as part of the first MicroMasters cohort!

Course Tracks:
This course has two tracks:
• You can enroll in this course on the MicroMasters track. If you pay the MicroMasters course
fee, pass this online class, and pass an additional in-person proctored exam, you will have
fulfilled this component of the MicroMasters credential, and you will receive a certificate.
• You can also enroll in this course on the audit track, for free. You will not receive a
certificate or credit. However, you can upgrade to the MicroMasters track by paying the
course fee at any point throughout the semester. Please note that there is no longer a $50
“verified” certificate track, nor is there any free “honor code” certificate track.

Assignments and Grading Scheme:


All lectures will include Finger Exercises to help you stay engaged with the material. Each
week, there will be at least one Problem Set assignment covering that week's main topics.
Problem sets will be released on Mondays along with the videos and Finger Exercises, and will
be due the following Sunday. In addition, there will be a Mid-Term and a Final Exam. Please
see the online calendar for further information.

Grades of the (online) edX course are calculated as follows:


- Lecture Sequence Finger Exercises: 20%
- Problem sets: 30%
- Mid-term Exam: 20%
- Final Exam: 30%

Students who are taking this class in pursuit of the MicroMasters credential will also have to
take an in-person, proctored exam.

To be eligible to register for the proctored exam, you first have to pass the online component of
this class on edX. Your final MicroMasters course grade will be calculated as follows:
- edX course: 40%
- Proctored exam: 60%
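
As a worked illustration of these weights (the component scores below are hypothetical, not actual course data), both grades can be computed in R, the statistical software used elsewhere in this course:

    # Hypothetical component scores on a 0-100 scale (illustration only)
    finger_exercises <- 90
    problem_sets     <- 85
    midterm          <- 75
    final_exam       <- 80

    # Online (edX) course grade: 20/30/20/30 weighting
    edx_grade <- 0.20 * finger_exercises + 0.30 * problem_sets +
                 0.20 * midterm + 0.30 * final_exam             # 82.5

    # MicroMasters course grade: 40% edX course, 60% proctored exam
    proctored_exam <- 82                                        # hypothetical score
    micromasters_grade <- 0.40 * edx_grade + 0.60 * proctored_exam  # 82.2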

Academic Honesty:
We take academic honesty very seriously. You must complete all graded materials by yourself
and not engage in any activity that would dishonestly improve your results, or improve or hurt
the results of others. You may use only one user account and not let anyone else use your
username and/or password. Having two user accounts registered in this course constitutes
cheating. We will be monitoring this diligently. Should we become aware of any suspicious
activity, we reserve the right to remove course credit, not award a MicroMasters certificate,
revoke a MicroMasters certificate, ban you from future MITx Economics classes, and exclude
you from consideration for admission to the MIT blended Master’s program in Data, Economics,
and Development Policy without warning.

Lectures and Time Commitment:


The material for each topic will be posted weekly, and you should keep pace with the rest of the
class. There will be about two to three lectures per week. You will have access to videos of the
lectures, presented in short segments (8-10 minutes on average) and interspersed with finger
exercises to test your understanding of the material. You will also have to complete a minimum
of one assignment per week (a case study, an exercise, or a set of quiz questions). You will have
access to the presentation slides to follow along during the lectures. The minimum commitment
will be approximately 8-12 hours per week for watching the lectures, doing the readings, and
completing the assignments.

Course Syllabus and Readings:


The lectures released each week, along with the required readings, are listed below:

Week 1: February 6, 2017


1. What is Evaluation?
• Lecture sequence
• Problem set: Quiz questions
2. Why Randomize?
• Lecture sequence
• Reading: Shotland (2016)
• Problem set: ‘Learn to Read Evaluations’ Case study and Quiz questions
• Optional Reading: Impact Evaluation in Practice, Chapters 3-8

Week 2: February 13, 2017


3. How to Randomize? (an R sketch of random assignment follows this week’s list)
• Lecture sequence
• Reading: Running Randomized Evaluations, Chapter 4
• Problem set: ‘Extra Teacher Program’ Case study and Quiz questions
4. Threats
• Lecture sequence
• Reading: Impact Evaluation in Practice, Chapter 9
• Problem set: ‘Deworming in Kenya’ Case study and Quiz questions
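
As a minimal sketch of the random assignment mechanics behind the “How to Randomize?” lecture (the sample size and seed below are made up, and the lecture itself covers richer designs such as stratified and clustered randomization):

    # Simple random assignment of 100 hypothetical units to two arms in R
    set.seed(20170213)                    # fixed seed so the assignment is reproducible
    n <- 100
    ids <- 1:n
    treated <- sample(ids, size = n / 2)  # draw half the IDs at random
    assignment <- ifelse(ids %in% treated, "treatment", "control")
    table(assignment)                     # 50 treatment, 50 control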

Week 3: February 20, 2017


5. Generalizability
• Lecture sequence
• Problem set: Quiz questions
6. Cost Effectiveness
• Lecture sequence
• Reading: Dhaliwal et al. (2012)
• Problem set: ‘Estimating cost effectiveness in Education’ Case study

Week 4: February 27, 2017
7. Sampling and Randomization
• Lecture sequence
• Problem set: Quiz questions
• Optional Reading: The Power of Survey Design, Chapter 4
8. Sample size and Power
• Lecture sequence
• Reading: Running Randomized Evaluations, Chapter 6
• Problem set: Quiz questions
9. Practical tips: Sampling and Sample size
• Lecture sequence
• Readings: Berry et al. (2016); Bruhn and McKenzie (2008); Crepon et al. (2014)
• Problem set: ‘Calculating Power using Statistical Software’ Exercise and Quiz questions (see the R sketch after this week’s list)
• Optional Reading: Imbens (2011)
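
The power calculation exercise uses R or Stata. As a hedged sketch of the kind of computation involved (the effect size, standard deviation, and power target below are hypothetical), R’s built-in power.t.test() solves for the per-arm sample size of a simple two-arm, individual-level randomization:

    # Per-arm sample size to detect a hypothetical 0.2 standard deviation
    # effect with 80% power at the 5% significance level
    power.t.test(delta = 0.2,       # assumed effect size
                 sd = 1,            # outcome standard deviation (normalized)
                 sig.level = 0.05,  # two-sided test size
                 power = 0.80)      # desired power
    # Under these assumptions, n is roughly 394 per group.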

Week 5: March 6, 2017


10. Introduction to Measurement
• Lecture sequence
• Readings: Running Randomized Evaluations, Chapter 5 (5.1-5.3), pp. 180-211;
Zwane et al. (2011)
• Problem set: ‘Women as Policymakers’ Case study and Quiz questions
11. Measuring sensitive questions
• Lecture sequence
• Reading: Running Randomized Evaluations, Chapter 5 (5.4), pp. 212-240
• Problem set: Quiz questions
12. Measuring Health outcomes
• Lecture sequence
• Problem set: ‘Sensitivity and Specificity’ and ‘Incidence and Prevalence’ Exercises (see the R sketch after this week’s list)
• Optional Reading: National Academy of Sciences (2000)
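
For the two exercises above, the underlying formulas are standard; here is a minimal R sketch with a hypothetical 2x2 comparison of a diagnostic test against a gold standard:

    # Hypothetical counts (illustration only)
    true_pos  <- 80   # sick, test positive
    false_neg <- 20   # sick, test negative
    true_neg  <- 90   # healthy, test negative
    false_pos <- 10   # healthy, test positive

    sensitivity <- true_pos / (true_pos + false_neg)   # share of sick correctly detected: 0.8
    specificity <- true_neg / (true_neg + false_pos)   # share of healthy correctly cleared: 0.9
    prevalence  <- (true_pos + false_neg) /
                   (true_pos + false_neg + true_neg + false_pos)  # share currently sick: 0.5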

Week 6: March 13, 2017


13. Measuring Welfare and Consumption
• Lecture sequence
• Readings: Designing Household Survey Questionnaires for Developing Countries,
Volume I, Chapter 5; Deaton and Zaidi (2002)
• Problem set: Quiz questions
14. Measuring Market activity
• Lecture sequence
• Readings: McKenzie and Woodruff (2016); Woodruff et al. (2007)
• Problem set: Quiz questions
15. Measuring Networks
• Lecture sequence
• Problem set: Quiz questions

Week 7: March 20, 2017


16. Measuring Behavior and Preferences
• Lecture sequence
• Reading: Gneezy and Imas (2016)
• Problem set: ‘Calculating Preferences’ Exercise
17. Measuring Learning
• Lecture sequence
• Readings: Muralidharan (2016); Glewwe and Muralidharan (2015)
• Problem set: Quiz questions
• Optional Reading: UNESCO (2016), pp. 202-211; Das and Zajonc (2008)
18. Measuring Gender and Empowerment
• Lecture sequence
• Readings: Bertrand and Duflo (2016), pp. 1-40; Kabeer (1999)
• Problem set: Quiz questions

Week 8: March 27, 2017


19. Introduction to Data Collection
• Lecture sequence (no finger exercises)
• Reading: The Power of Survey Design, Chapters 1 & 2
20. Questionnaire design and piloting
• Lecture sequence
• Reading: The Power of Survey Design, Chapters 3 & 5
• Problem set: ‘Designing a Questionnaire’ Exercise
21. Modes of data collection
• Lecture sequence
• Problem set: Quiz questions

Week 9: April 3, 2017


22. Human Resources: Survey Team
• Lecture sequence
• Problem set: Quiz questions
23. Collecting high quality data: Complete data
• Lecture sequence
• Problem set: Quiz questions
24. Collecting high quality data: Accurate data
• Lecture sequence
• Problem set: ‘Back checks and High frequency checks using Statistical Software’ Exercise (see the R sketch after this week’s list)
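
As a hedged sketch of the kind of high-frequency check this exercise implements (the data frame, variable names, and 10-minute threshold below are hypothetical), two common checks flag duplicate respondent IDs and suspiciously short interviews:

    # Hypothetical survey data: respondent IDs and interview durations
    surveys <- data.frame(
      id = c(101, 102, 102, 104, 105),
      duration_minutes = c(45, 38, 38, 7, 52)
    )

    # Duplicate IDs may signal double entry or fabricated interviews
    dup_flag <- duplicated(surveys$id) | duplicated(surveys$id, fromLast = TRUE)
    surveys[dup_flag, ]                         # rows for id 102

    # Very short interviews may signal skipped modules
    surveys[surveys$duration_minutes < 10, ]    # row for id 104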

Week 10: April 10, 2017


25. Data Entry
• Lecture sequence
• Problem set: Quiz questions
26. Data Management
• Lecture sequence
• Reading: Best Practices for Data and Code Management (IPA), pp. 1-13; Gentzkow and Shapiro (2014)
• Problem set: Quiz questions
27. Working with Administrative data
• Lecture sequence
• Reading: Using Administrative Data for Randomized Evaluations (JPAL North America), pp. 1-45
• Problem set: ‘Maternal and child home visiting program’ Case study

Week 11: April 17, 2017


28. Data Security
• Lecture sequence
• Problem set: ‘Encryption using VeraCrypt’ Exercise
29. Research Transparency
• Lecture sequence
• Readings: Olken (2016); Miguel et al. (2014)
• Problem set: Quiz questions
30. Ethics and IRB
• Lecture sequence
• Readings: Glennerster (2016), Section C: Ethics, pp. 22-36; Alderman et al. (2013)
• Problem set: Quiz questions

Week 12: April 24, 2017


31. Project Management
• Lecture sequence
32. Start to Finish
• Lecture sequence

References

Alderman, H., J. Das and V. Rao. 2013. Conducting Ethical Economic Research: Complications
from the Field. World Bank Policy Research Working Paper 6446, doi: 10.1596/1813-9450-6446

Berry, Jim, Marc Shotland and Rukmini Banerji. 2016. The Impact of Maternal Literacy and
Participation Programs: Evidence from a Randomized Evaluation in India. American Economic
Journal: Applied Economics, forthcoming

Bertrand, M. and E. Duflo. 2016. Field Experiments on Discrimination. In A. V. Banerjee and E.
Duflo (Eds.), Handbook of Field Experiments, forthcoming

Bruhn, Miriam and David McKenzie. 2008. In Pursuit of Balance: Randomization in Practice in
Development Field Experiments. © World Bank.
http://elibrary.worldbank.org/doi/abs/10.1596/1813-9450-4752 License: CC BY 3.0 IGO

Crepon, Bruno, Florencia Devoto, Esther Duflo and William Pariente. 2014. Estimating the
impact of microcredit on those who take it up: Evidence from a randomized experiment in
Morocco. American Economic Journal: Applied Economics 7(1): 123–150

Das, Jishnu and Tristan Zajonc. 2008. India Shining and Bharat Drowning: Comparing Two
Indian States to the Worldwide Distribution in Mathematics Achievement. Policy Research
Working Paper No. 4644. World Bank, Washington, DC. © World Bank.
https://openknowledge.worldbank.org/handle/10986/6668 License: CC BY 3.0 Unported

Deaton, Angus and Salman Zaidi. 2002. Guidelines for Constructing Consumption Aggregates
for Welfare Analysis. LSMS Working Paper No. 135. World Bank. © World Bank.
https://openknowledge.worldbank.org/handle/10986/14101 License: CC BY 3.0 IGO.

Dhaliwal, Iqbal, Esther Duflo, Rachel Glennerster and Caitlin Tulloch. 2012. Comparative Cost-
Effectiveness Analysis to Inform Policy in Developing Countries: A General Framework with
Applications for Education. Abdul Latif Jameel Poverty Action Lab, MIT

Feeney, Laura, Jason Bauman, Julia Chabrier, Geeti Mehra, Michelle Woodford. 2015. Using
Administrative Data for Randomized Evaluations, JPAL North America

Gentzkow, Matthew and Jesse M. Shapiro. 2014. Code and Data for the Social Sciences: A
Practitioner’s Guide. University of Chicago, mimeo

Gertler, Paul J., Sebastian Martinez, Patrick Premand, Laura B. Rawlings, and Christel M. J.
Vermeersch. 2016. Impact Evaluation in Practice, second edition. Washington, DC: Inter-
American Development Bank and World Bank. doi:10.1596/978-1-4648-0779-4

Glennerster, Rachel. 2016. The Practicalities of Running Randomized Evaluations: Partnerships,
Measurement, Ethics, and Transparency. In A. V. Banerjee and E. Duflo (Eds.), Handbook of
Field Experiments, forthcoming

Glennerster, Rachel, and Kudzai Takavarasha. 2013. Running Randomized Evaluations: A
Practical Guide. Princeton University Press

Glewwe, P. and K. Muralidharan. 2015. Improving School Education Outcomes in Developing
Countries: Evidence, Knowledge Gaps, and Policy Implications. RISE Working Paper 15/001

Gneezy, Uri and Alex Imas. 2016. Lab in the Field: Measuring Preferences in the Wild. In A. V.
Banerjee and E. Duflo (Eds.), Handbook of Field Experiments, forthcoming

Grosh, Margaret and Paul Glewwe. 2000. Designing Household Survey Questionnaires for
Developing Countries: Lessons from 15 Years of the Living Standards Measurement Study.
Volumes 1, 2, and 3. The World Bank.

Imbens, Guido. 2011. Experimental Design for Unit and Cluster Randomized Trials.
International Initiative for Impact Evaluation (3ie). Washington, DC.
http://cyrussamii.com/wp-content/uploads/2011/06/Imbens_June_8_paper.pdf

Kabeer, Naila. 1999. Resources, Agency, Achievements: Reflections on the Measurement of
Women’s Empowerment. Development and Change, Volume 30, 436-464, Institute of Social
Studies, Blackwell Publishers. doi: 10.1111/1467-7660.00125

Iarossi, Giuseppe. 2006. The Power of Survey Design: A User's Guide for Managing Surveys,
Interpreting Results, and Influencing Respondents. Washington, DC: World Bank. © World
Bank. https://openknowledge.worldbank.org/handle/10986/6975 License: CC BY 3.0 IGO

McKenzie, David and Christopher Woodruff. 2016. Business Practices in Small Firms in
Developing Countries. Warwick Economics Research Paper Series, Series number: 1112,
ISSN 2059-4283

Miguel, Edward, C. Camerer, K. Casey, J. Cohen, K. M. Esterling, A. Gerber, R. Glennerster,
D. P. Green, M. Humphreys, G. Imbens, D. Laitin, T. Madon, L. Nelson, B. A. Nosek,
M. Petersen, R. Sedlmayr, J. P. Simmons, U. Simonsohn and M. Van der Laan. 2014. Promoting
Transparency in Social Science Research. Science 343 (6166), 30-31. doi: 10.1126/science.1245317

Muralidharan, K. 2016. Field Experiments in Education in Developing Countries. In A. V.
Banerjee and E. Duflo (Eds.), Handbook of Field Experiments, forthcoming

National Academy of Sciences. 2000. Biological and Clinical Data Collection in Population
Surveys in Less Developed Countries. Summary of a meeting held by MEASURE Evaluation,
January 24-25, 2000

Olken, Benjamin A. 2016. Promises and Perils of Pre-Analysis Plans. Journal of Economic
Perspectives 29(3): 61–80

Pollock, H., E. Chuang and S. Wykstra. 2015. Best Practices for Data and Code Management.
Innovations for Poverty Action

Shotland, Marc. 2016. Identification vs Specification. edX.

UNESCO Institute for Statistics (UIS) (2016). Understanding What Works in Oral Reading
Assessments: Recommendations from Donors, Implementers and Practitioners. Montreal:
UNESCO Institute for Statistics

Woodruff, Christopher, David McKenzie and Suresh de Mel. 2007. Measuring Microenterprise
Profits: Don’t ask how the sausage is made. World Bank Policy Research Working Paper 4229,
doi: 10.1596/1813-9450-4229

Zwane, A., J. Zinman, E. Van Dusen, W. Pariente, C. Null, E. Miguel, M. Kremer, D. Karlan,
R. Hornbeck, X. Giné, E. Duflo, F. Devoto, B. Crepon and A. Banerjee. 2011. Being surveyed
can change later behavior and related parameter estimates. PNAS 108(5): 1821-1826; published
ahead of print January 18, 2011. doi: 10.1073/pnas.1000776108
