http://dx.doi.org/10.1257/app.6.4.90
While there has been much recent research on the power of providing monetary
incentives to students in order to improve attendance and learning outcomes,
relatively little has taken place in low-income countries, and almost none in Africa.
Recent reports in many African countries show poor learning outcomes. There are
many potential causes of low performance and enrollment retention rates, includ-
ing the need for children to work and cultural norms (particularly for females). One
important reason for low retention could be a low perceived return on the invest-
ment in schooling for parents and students. This makes enrollment less attractive to
families, especially those without a long tradition of formal schooling. If students do
not internalize the entire future payoff of acquiring education, they will underinvest
in their learning efforts. Providing short-term incentives to learn marks one way to
improve learning and retention.
This study uses a randomized experiment in Benin to investigate the effect of
providing relatively small monetary rewards to students based on their learning
outcomes. I offer incentives directly to students using three designs: an Individual
Target, in which each student is rewarded for his or her own performance; a Team
Target, in which a team is rewarded based on its average performance; and a Team
Tournament, in which the top-performing teams win prizes.
* Department of International and Area Studies, University of Oklahoma, 338 Cate Center Drive, Norman, OK
73072 (e-mail: moussa.blimpo@ou.edu). This work was supported by the National Science Foundation under the
award No. SES-0750962. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the author and do not necessarily reflect those of the National Science Foundation. The protocol for
this research received the formal approval of the University Committee on Activities Involving Human Subjects
(UCAIHS) at New York University. I would like to thank Professors Nicola Persico and Leonard Wantchekon for their
generous support throughout this project. Thanks to Professor William Easterly, Dr. David K. Evans, Ali Yurukoglu,
John Shoven, Greg Rosston, Gopi Shah Goda, Camille Landais, Nick Sanders, Peter Nilsson, Habiba Djebbari, Justin
Sandefur, Andrew Zeitlin, and anonymous referees for their comments and suggestions. I benefited from conversations
with Andrew Schotter, Petra Todd, Nancy Qian, Pauline Grosjean, Pascaline Dupas, and Michael Kremer. Thanks
to participants of seminars and conferences at MEPSA Seminar (IREEP, Benin 2009), Amherst College, Dalhousie
University, Impact Evaluation Network 2010 Conference (University of Miami), SIEPR (Stanford University), DIME
(World Bank 2009), NEUDC 2010 (MIT), and CSAE 2011 (University of Oxford). All errors are mine.
† Go to http://dx.doi.org/10.1257/app.6.4.90 to visit the article page for additional materials and author
disclosure statement(s) or to comment in the online discussion forum.
Vol. 6 No. 4 Blimpo: Team Incentives for Education 91
The existing work on the impact of incentives on learning has been concentrated in
high-income countries. Studies from the Netherlands (Leuven, Oosterbeek, and Van
der Klaauw 2003), the United States (Bettinger 2008; Fryer 2010), Israel (Angrist and
Lavy 2009), and Canada (Angrist, Lang, and Oreopoulos 2009; Angrist, Oreopoulos,
and Williams 2010) found mixed results. For example, Fryer (2010) found that incen-
tives may be more effective when tied to input rather than performance; one reason is
that students do not necessarily know the production function.
There are many reasons to believe that the impact of similar studies in low-income
countries could be positive. First, students in high-income countries are more likely
to receive some form of incentives from their parents already. In contrast, parents
in rural areas of poor countries may be unable to provide such support or afford
those incentives. Second, because learning outcomes are substantially lower in
low-income settings, additional effort may yield higher value added. Finally, the
relative value of the incentives is much higher, potentially making the schemes
more likely to be cost effective in low-income countries.
To my knowledge, the only study conducted in Africa that focused on this ques-
tion is Kremer, Miguel, and Thornton (2007), which tested a girls’ scholarship
program in two rural districts of Kenya. The researchers offered financial incentives
to sixth-grade girls for scoring in the top 15 percent and found substantial improve-
ment in learning outcomes.1 In most instances, these programs aim to reduce the cost
of human capital investments for households. One attractive aspect of this study, in
addition to contributing further evidence of the power of incentives for education in
low-income countries, is that it tests programs that rely entirely on administrative
data routinely gathered by the Ministry of Education.
Several recent studies have focused more attention on the importance of the
designs and the structures of the incentives as well. For example, incentives might
have different effects depending on whether they target families (parents) or
children. Berry (2009) addresses this question in the context of India and finds
that for low-income families the incentives were more effective when targeted to the
children directly.2 Another study in rural China compared several combinations of
cash incentives, peer tutoring, and parental involvement, and concluded that
fostering peer interaction enhances learning outcomes (Li et al. 2010). In Colombia,
Barrera-Osorio et al. (2011) studied design aspects of the conditional cash programs
and found that it is more effective to delay some of the payment until a re-enrollment
decision is made, or to tie some of the incentives to graduation instead of attendance
exclusively. One shortcoming of many of the existing studies is that they require
large implementation and data collection exercises if they are to be scaled up to the
entire country.
I chose to evaluate incentives’ effects both on individuals and teams because
incentives directed only at the individual may fail to motivate students on either
extremes of the performance spectrum. Low-ability students may not work harder
if they perceive the target to be unattainably high, while high-ability students could
win without extra effort if the target is perceived to be too low. This is precisely
what Neal and Schanzenbach (2010) found regarding teacher effort in a study of
the effect of the No Child Left Behind policy in which schools were rewarded for
achieving specified targets. Providing incentives in teams could foster mutual help
among students (Slavin 1984). It could also generate positive peer effects, which are
widely documented in the context of classrooms (Hoxby 2000; Sacerdote 2001;
Cipollone and Rosolia 2007; Ding and Lehrer 2007; Carrell, Fullerton, and West
2009; Ammermueller and Pischke 2009; De Giorgi, Pellizzari, and Redaelli 2010;
Duflo, Dupas, and Kremer 2011).3 However, the drawbacks of team incentives—the
free rider problem, and also the potential for excessive pressure from peers (Stiglitz
1990, Kandel and Lazear 1992)—leave the effect of the team-based incentive design
as an essentially empirical question.
1. The conditional cash transfer programs, such as Progresa in Mexico and PACES (Angrist, Bettinger, and Kremer 2006) in Colombia, are also successful examples, but in upper-middle-income countries.
2. One shortcoming of Berry (2009) is that it lacks a proper control group.
3. Peer effects have been documented in other areas, such as people's attitudes toward financial decisions (Duflo and Saez 2003) or criminal behavior (Case and Katz 1991; Bayer, Hjalmarsson, and Pozen 2009).
This section presents a brief background of the education system in Benin and
describes key elements of the experiment. The sample size for this research is large
enough to identify an effect size of 0.3 standard deviations at a significance level of
5 percent, and with a statistical power of at least 80 percent between each design and
the control group. The choice of minimum effect size was partly based on its eco-
nomic significance. The national director of secondary education in Benin reported
that a minimum improvement in test scores that corresponded to about 0.25 standard
deviations would be considered economically significant and call for his attention.
Because of additional constraints on resources available to pay the incentives, we
chose an effect size of 0.30. The sample size does not allow us to detect a small dif-
ference or compare the different intervention groups against each other.
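The stated design parameters imply a minimum sample size that can be checked with a simple two-sample normal approximation. This sketch ignores the clustering of students within schools, so it only bounds the required sample from below; it illustrates the logic rather than reproducing the paper's exact calculation.

```python
# Minimum students per arm to detect a 0.3 SD effect at alpha = 0.05
# with 80 percent power, ignoring school-level clustering.
from statsmodels.stats.power import NormalIndPower

n_per_arm = NormalIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(round(n_per_arm))  # ≈ 174 students per arm before any clustering adjustment
```

A design effect for within-school correlation would inflate this figure, which is consistent with the study enrolling several hundred students per arm.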
A. Contextual Background
B. Selection of Participants
I obtained the administrative list of all 1,279 secondary schools in Benin from
the Ministry of Education along with figures on the performance of each in the
previous year.5
I excluded schools where the passing rate was 65 percent or more6 because the
study is intended primarily for poorly performing areas. I also excluded schools
that did not have the tenth grade or were very small (fewer than ten candidates).
There were 749 schools remaining.
I randomly assigned 100 participant schools (1,476 students) to the 4 groups.
The Individual Target and the Team Target groups had 22 schools each and 16 stu-
dents per school. The Team Tournament group had 28 schools and 12 students
per school. Finally, the control group had 28 schools and about 16 students per
4. World Bank (2009).
5. The study was conducted with the formal approval of the Ministry of Education of Benin. I also received approval from the University Committee on Activities Involving Human Subjects at New York University to ensure that there were no ethical concerns. All documents are available upon request.
6. The average passing rate on the BEPC was 43 percent in the previous academic year (2007–2008).
94 American Economic Journal: applied economicsOctober 2014
Figure 1. Sampling (complete randomization of the 100 participant schools from the 749 eligible schools)
school.7 Figure 1 depicts the sampling process and Figure 2 the geographical dis-
tribution of the schools.
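The complete randomization described above can be sketched as follows. The school identifiers are illustrative, not the actual administrative codes.

```python
# Sketch of the sampling step: draw 100 schools from the 749 eligible ones,
# then assign them to the four groups with the reported arm sizes.
import random

random.seed(0)  # any seed; fixed here for reproducibility
eligible = [f"school_{i:03d}" for i in range(749)]  # illustrative IDs
sample = random.sample(eligible, 100)

arms = (["individual_target"] * 22 + ["team_target"] * 22
        + ["team_tournament"] * 28 + ["control"] * 28)
random.shuffle(arms)
assignment = dict(zip(sample, arms))
print(sum(a == "control" for a in assignment.values()))  # 28
```

In the actual study the treatment is assigned at the school level, so every sampled student in a school shares that school's arm.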
At the school level, the selection of classrooms and students was also random. I
randomly selected a tenth-grade class, read the consent form, and then randomly selected
12 to 16 students to participate. No student withdrew from the study or refused to
participate.
participate. In the Team Tournament schools, I selected 12 students to form 3 teams
and compete for 3 prizes in the tournament. The other 3 groups consisted of 16 stu-
dents per school. Treatment is at the school level; therefore, students from the same
school received the same treatment.8 Finally, I collected baseline information on the
students and gave them instructions about their specific incentive package.9
C. The Treatments
7. The difference in the number of schools between Team Tournament and the other groups is due to the fact that I wanted to have as many teams as the number of prizes within schools for the tournament. This was intended to mitigate potential internal competition among teams. The statistical power threshold is met for each intervention relative to the control.
8. Note that by design there is no possibility for a student to self-select into the treatment or control schools. The treatment status of schools and students was unknown before we actually reached the school, and afterward it remained unchanged.
9. Underage students (under 18 years old) were required to return a signed parental consent form to participate. All consent forms were returned to the principals and forwarded to us.
Figure 2. Geographical distribution of the sample across the country. Legend: Flag = Control,
1 = Individual, 2 = Team Target, 3 = Team Tournament.
10. The score of 10 out of 20 is the passing grade on the BEPC and 12 out of 20 is the threshold to pass with honors.
11. Aside from one team that made a rule to pay only the team members who passed, all teams agreed to share equally if they won.
Note: The incentives in terms of number of weeks of pocket money are obtained by dividing
the size of the incentive in USD by the average weekly pocket money in the data.
Individual Target:

    p_p × 5,000 + p_h × 20,000 + p_f × 0 = 2,850 (≈ $6),

where p_p, p_h, and p_f denote the probabilities of passing, passing with honors, and failing, respectively.
Team Target: Since the prize would be awarded based on the average performance
of the team, one could think of each team as a representative student (the average
of the four teammates). That "average" student faces similar odds as students in the
Individual Target group. Therefore, under those assumptions, the ex ante expected
payoff for each team member is approximately the same as under the Individual Target.
Team Tournament: Here, the odds of winning are independent of p_p, p_h, and p_f,
since the top three teams out of the 84 teams win regardless of their level of
performance. The expected payoff from participating in the team tournament is then
the probability of winning multiplied by each member's share of the prize.
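The ex ante payoff logic above can be sketched numerically. The pass and honors probabilities below are assumptions chosen for illustration (the text reports only the resulting expected value of 2,850, about $6), and the per-member tournament prize share is inferred from the pocket-money figures reported further down.

```python
# ASSUMED probabilities, for illustration only:
p_pass_only = 0.45   # pass without honors
p_honors = 0.03      # pass with honors

# Individual Target: 5,000 for a pass, 20,000 for a pass with honors.
ev_individual = p_pass_only * 5_000 + p_honors * 20_000
print(ev_individual)  # 2850.0, matching the ≈$6 expected payoff in the text

# Team Tournament: 3 winning teams out of 84, prize shared by 4 teammates.
prize_share_usd = 161.0  # assumed per-member share (≈1.4 years of pocket money)
ev_tournament_usd = (3 / 84) * prize_share_usd
print(round(ev_tournament_usd, 2))  # ≈ 5.75, again close to $6 ex ante
```

Under these assumed numbers the three designs would offer roughly equal expected payoffs ex ante, consistent with the representative-student argument in the text.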
The financial incentives are small in absolute terms as reported above, but they are
relatively large for the context. For the winners, the individual incentive amounted
to about $10 without honors and $40 with honors. This corresponds, respectively,
to 4.5 and 18 weeks’ worth of the average pocket money of the participants.12 It
also represents 17 percent and 69 percent, respectively, of the official monthly mini-
mum wage in Benin. Each of the three prizes in the Team Tournament treatment
represents 291 weeks’ worth of the average pocket money. Each member of a win-
ning team in the Team Tournament could make as much as 1.4 years of average
pocket money or 2.75 months of minimum wage. The second column of Table 1
summarizes the incentives in nominal US dollars. These amounts are also expressed
in terms of reported weekly pocket money and as a share of the monthly minimum
wage in Benin.
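The conversions reported in this paragraph can be checked with simple arithmetic. The weekly pocket money (about $2.2) and monthly minimum wage (about $59) below are inferred from the stated ratios, not taken directly from the data.

```python
# Consistency check of the incentive-size conversions in the text.
weekly_pocket_money = 10 / 4.5   # "$10 ≈ 4.5 weeks" implies ≈ $2.22/week
monthly_min_wage = 10 / 0.17     # "$10 ≈ 17 percent" implies ≈ $59/month

print(round(40 / weekly_pocket_money))   # 18 weeks for the $40 honors prize
print(round(40 / monthly_min_wage, 2))   # 0.68, i.e., ≈ 69 percent of a monthly minimum wage
print(round(291 * weekly_pocket_money))  # ≈ 647: dollar value of one tournament prize
print(round(291 / 4 / 52, 1))            # 1.4 years of pocket money per winning team member
```

The figures line up with one another, which suggests the reported ratios were all computed from the same underlying pocket-money average.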
12. The information on the average weekly pocket money was collected during a pretest of the survey instruments.
I use students’ scores on the BEPC as the measure of performance. The BEPC is
a national examination that all tenth-grade students must pass in order to advance
to high school. It is a comprehensive examination that covers all subjects. It has
two phases: a written phase and an oral phase. The written phase consists of seven
subjects: (i) mathematics, (ii) physics and chemistry or English, (iii) natural
science, (iv) history and geography, (v) writing, (vi) reading comprehension, and
(vii) French. The oral phase has two subjects: sport and oral communication. To
calculate the average score, Benin's policy weights mathematics by three, sport and
oral communication by one, and each of the remaining subjects by two. Proctoring
during the test is strict, and students are not proctored by their own teachers.
Instances of cheating on the BEPC are extremely rare.
The BEPC is an ideal test to use in this study because it is a national test that is
administered anyway. The test is administered on the same day and time all over
the country. Grading is centralized and anonymous. Students first sit for the written
phase. If they do not achieve the threshold average score over the seven written subjects
(typically about 9 out of 20), they are disqualified and do not take the oral exam.
There is an unwritten but strong leniency policy that allows students who qualify for
the oral phase to almost always achieve a passing grade or to increase their score.
For this reason, very few students have a final grade in the range of 9 to 10. This
custom is inconsequential to the identification strategy because it applies to both the
treated and the control groups in the same way. Still, since it is easier to manipulate
grades in the oral phase, I report results based only on the written phase. This also
ensures that all students’ scores are based on the same number of subjects.
F. Data
I combine baseline data, the final outcome data, and some information from mon-
itoring visits to conduct the empirical analysis.13
The Baseline Data.— At the beginning of the school year, the research team col-
lected baseline data and implemented the treatments. The baseline data consisted of
the school characteristics and was constructed through an in-depth interview with
the head teacher at each school. We also collected information about the school’s
environment, infrastructure, size, management, community participation, and
other characteristics. Next, we interviewed each participant student. We collected
sociodemographic information for each student, including their past performance,
their home environment, and their opinions on various aspects of their education. In
addition, we collaborated with ninth-grade teachers to design short mathematics and
French quizzes that were administered to the students.14
13. Note that the key analysis done in this paper was written and saved with my peers in the form of an analysis protocol before the final data were collected.
14. Quiz scores will, however, not be used in the analysis because of an implementation issue. In the intervention schools, the quizzes were administered after the incentives were announced, and students in those groups took it more seriously than in the control group. In addition, the quizzes consisted of six multiple-choice questions and did not yield important variations in the outcome.
The Follow-up Monitoring.— Two months before the final exam, we visited about
half the schools to remind students of the incentives. Unfortunately, due to budget
constraints, we were only able to visit schools from the southern region. However,
we called all the schools to remind them about the incentives. Also, we developed
a questionnaire to use as a monitoring tool, in which we asked students about their
expectations for the final exam, whether they received other incentives, and whether
they studied in teams. We updated students’ cell phone contact information. We also
mailed envelopes with these questionnaires and a prestamped return envelope to a
randomly selected set of schools. We were able to collect monitoring information on
about half the participants.
The Final Data.— The final data consist of the students’ BEPC report cards with
all the scores in each subject and the final weighted average score. We obtained
this data directly from the Ministry of Education and the National Directorate of
Secondary Education.
I now present basic summary statistics on the student and school populations and
show that the different groups were comparable at the baseline. The sample consists
of 1,476 tenth-grade students in 100 schools.
The average tenth-grade class size is 41.30, and there are about 3 tenth-grade
classes per school. Schools have 13 classrooms and 695 students on average. About
40 percent of the sample are females. The average age is 16.22, and the average is
slightly higher for male (16.33) than female (16.06) students. This suggests that
females are more likely than males to drop out after repeating a grade. Regarding
family characteristics, 14 percent of the students reported their father as deceased
and 6 percent reported their mother as deceased. The average number of siblings is
6.54. Of the sample population, 39 percent reported having worked for pay during
the school year.
Baseline group comparison shows that there were no systematic differences
between the control group and the intervention groups. Tables 2 and 3 show that
treatment groups are balanced along both the school and student characteristics. The
last column shows the p-values for the joint test of equality across all groups, and no
variables are significant at the 5 percent level.
Figure 3 shows a similar distribution of test scores in French and mathematics
across groups at the baseline.
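The joint equality tests reported in the last column of Tables 2 and 3 are in the spirit of the following sketch: for each baseline variable, test whether its mean is equal across the four groups. The data here are simulated and the one-way F-test is a simplification of the paper's exact procedure.

```python
# Illustrative balance check: joint test of equal means across four groups.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Simulated baseline variable drawn from the SAME distribution in every group,
# mimicking successful randomization; names are assumptions.
groups = {g: rng.normal(10, 3, size=350) for g in
          ["control", "individual", "team_target", "team_tournament"]}

stat, pvalue = f_oneway(*groups.values())
print(round(pvalue, 3))  # large p-values indicate balance, as in Tables 2 and 3
```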
The primary objective of this paper is to measure the causal effect of the incen-
tives across intervention groups. I do this by estimating and comparing the average
treatment effect of the three treatments relative to the control group.
Observations (schools): All 100; Control 28; Individual Target 22; Team Target 22; Team Tournament 28
I estimate equation (1) below, where standard errors are clustered at the school
level:

(1)   score_is = β_0 + Σ_{k=1}^{3} β_k × T_is^k + ε_is,

where T_is^k is an indicator equal to one if student i in school s is assigned to treatment k.
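Equation (1) can be estimated with ordinary least squares and school-clustered standard errors. The sketch below uses simulated data; the variable names, arm shares, and effect size are assumptions for illustration.

```python
# Minimal sketch of estimating equation (1) with clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, per_school = 100, 15
school = np.repeat(np.arange(n_schools), per_school)
arm = np.repeat(rng.integers(0, 4, n_schools), per_school)  # 0 = control

df = pd.DataFrame({
    "school": school,
    "T1": (arm == 1).astype(int),  # Individual Target
    "T2": (arm == 2).astype(int),  # Team Target
    "T3": (arm == 3).astype(int),  # Team Tournament
})
# Simulated standardized score with a school-level random component,
# which is why the errors must be clustered by school.
df["score"] = (0.3 * df["T3"] + rng.normal(0, 1, len(df))
               + rng.normal(0, 0.3, n_schools)[school])

fit = smf.ols("score ~ T1 + T2 + T3", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]})
print(fit.params.round(2))
```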
In this section, I present the main findings, beginning with summary statistics
for the BEPC outcomes and the overall average treatment effect. I also analyze
the extent of heterogeneity of the treatment effect and the dynamics within teams.
Finally, I assess the cost effectiveness of each treatment arm.
In 2009, 150,847 candidates registered to take the BEPC. About 97 percent
(145,889) of the registered candidates actually took the test, and of those, 44.81 per-
cent passed.
The summary statistics in Table 4 show that students presented with any of
the incentives were about 10 percent more successful on the BEPC than students
assigned to the control group. Honors were awarded to 7 percent of students in the
control group, 11 percent in the Individual Target group, 13 percent in the Team
Target group, and 16 percent in the Team Tournament group.
Figure 3. Kernel densities of the previous year's French (panel A) and math (panel B) test scores,
by group (Control, I-Target, T-Target, T-Tournament).
The lowest average score was in mathematics, in which students scored 4.70
out of 20 possible points, on average. The Team Tournament group was the best in
mathematics with an average of 5.11 out of 20. Overall, students scored best in his-
tory and geography (12.82/20). The three treatment groups scored better than the
control group in all subjects except writing.
There were substantial test score gains for students who were presented with the
incentives. The average treatment effects are reported in the first two columns of
Table 5. The third and fourth columns compare nonscientific subjects (e.g., history)
with scientific subjects (e.g., mathematics).
The point estimate of the average treatment effect is 0.34 standard deviations for
the team tournament scheme, 0.29 for the individual scheme, and 0.27 for the team
target scheme. The effects are statistically significant at the 5 percent level for both
the Individual Target and the Team Tournament interventions, and significant at the
10 percent level for the Team Target intervention. As stated earlier, by design, the
statistical power does not allow me to distinguish small differences between inter-
ventions. Based on the actual effect sizes, the statistical powers are 81 percent for
the Individual Target, 76 percent for the Team Target, and 93 percent for the Team
Tournament. A t-test could not reject the null hypothesis that the average treatment
effects are the same. The results are statistically the same across intervention groups
Notes: Standard deviations in parentheses. Standard errors clustered at the school level. All the
scores are over 20 possible points.
                      BEPC             BEPC Written
                      (all subjects)   (written subjects only)   Science   Nonscience
                         (I)                 (II)                 (III)       (IV)
Individual target 0.29** 0.29** 0.22* 0.25**
(0.13) (0.13) (0.13) (0.12)
Team target 0.27* 0.26* 0.28** 0.17
(0.14) (0.16) (0.14) (0.16)
Team tournament 0.34** 0.34** 0.38*** 0.24*
(0.13) (0.14) (0.13) (0.13)
Pooled (treatments) 0.30** 0.29** 0.29** 0.22*
(0.12) (0.12) (0.11) (0.11)
Notes: Robust standard errors in parentheses (clustered at the school level). Science = math-
ematics and natural sciences; Nonscience = language (reading, writing, second language, his-
tory, geography).
a. The p-value in the second-to-last row is for the test of the equality of the ATE across all
three treatment arms.
*** Significant at the 1 percent level.
** Significant at the 5 percent level.
* Significant at the 10 percent level.
Table 6—Average Treatment Effect on the BEPC Score Controlling for Baseline Characteristics
Notes: Robust standard errors in parentheses (clustered at the school level). Science = mathematics and natural sci-
ences; nonscience = language (reading, writing, second language, history, geography).
a. The student-level control variables include students' score in the previous year (ninth grade), gender, whether
the student's father lives or has passed away, and the time the student spends traveling from home to school.
b. The school-level control variables include the head teacher's tenure, the average class size in tenth grade, the
amount of the tuition fees, whether the school has a parent-teacher association, and whether there is a clinic or
any health facility associated with the school.
*** Significant at the 1 percent level.
** Significant at the 5 percent level.
* Significant at the 10 percent level.
both for scientific and nonscientific subjects. I ran the same estimates while control-
ling for a number of variables at the baseline, such as student’s test scores during the
previous year and a number of student and school characteristics. Table 6 shows that
the results do not change.
This finding suggests that all three designs are viable policy tools available to
policymakers who want to incentivize students. The average treatment effects are
large and there is no difference between boys and girls. In a related study in Israel,
Angrist and Lavy (2009) provided financial incentives for a high-stakes certification
examination. Even though they found no effect on boys, they found substantially
increased certification rates for girls. Given that the return to schooling is high in both
contexts of these studies, the findings raise the question of why these additional
financial incentives work.
In this section, I examine the extent to which the different interventions affected
subgroups of students differently. The best way to approximate this effect is to
examine the heterogeneity across a measure of performance at the baseline. At the
baseline, we collected students’ overall annual average scores in the previous year.
I plot the baseline test score against the final score on the BEPC nonparametri-
cally. The results are presented in Figure 4. The different graphs show a monotoni-
cally increasing relation, as one would expect.15 The figures show roughly a similar
15. Tenth-grade repeaters were dropped from this estimation, as their previous year's performance was on the tenth-grade exam, unlike the new students, who were in ninth grade.
Figure 4. End-line test score plotted nonparametrically against baseline test score, by treatment
arm versus control, with 95 percent confidence intervals.
pattern across the baseline performance range. However, the estimates lack precision
to draw strong conclusions regarding the heterogeneity of the treatment effect.
The regression analysis is presented in Table 7, both under the assumption of a linear
interaction effect and under a nonlinear specification.
The first column shows a substantial and statistically significant interaction effect
with only the tournament, indicating that high performers at the baseline benefited
more when they were in the tournament arm. The second column presents the results
where the treatment variables are interacted with quartile of the baseline test scores.
The relative magnitude of the estimates is similar to the linear effect; however, the
estimates are imprecise and statistically insignificant.
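The linear interaction specification can be sketched as follows, here on simulated data. The variable names and coefficient magnitudes are assumptions for illustration, not estimates from the paper.

```python
# Treatment effect allowed to vary linearly with baseline performance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1200
df = pd.DataFrame({
    "tournament": rng.integers(0, 2, n),   # illustrative treatment indicator
    "baseline": rng.normal(0, 1, n),       # standardized baseline score
})
df["score"] = (0.2 * df["tournament"] + 0.5 * df["baseline"]
               + 0.15 * df["tournament"] * df["baseline"]
               + rng.normal(0, 1, n))

# "tournament * baseline" expands to both main effects plus the interaction,
# whose coefficient measures how the effect changes with baseline performance.
fit = smf.ols("score ~ tournament * baseline", data=df).fit()
print(round(fit.params["tournament:baseline"], 2))
```

A positive interaction coefficient, as the first column of Table 7 reports for the tournament, means stronger baseline performers gained more from the treatment.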
This section documents the mechanism through which the average treatment
effect is produced within teams. Do high-performing team members help the others,
or does each member simply work harder? If mutual help and complementarities
play a role, one would expect that, controlling for the average baseline quality of
the team, the impact would increase with the heterogeneity of teams. To investigate
these dynamics within teams, I look at how the baseline heterogeneity of the teams
impacts the final performance on the BEPC.
I find that, controlling for the baseline average performance of a team, heteroge-
neity in the team is positively associated with the final performance on the BEPC.
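The within-team analysis described above can be sketched on simulated teams: regress final performance on the team's mean and within-team standard deviation of baseline scores. All names and coefficients below are illustrative assumptions.

```python
# Within-team heterogeneity regression on simulated four-member teams.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
members = pd.DataFrame({
    "team": np.repeat(np.arange(80), 4),   # 80 teams of 4 members
    "baseline": rng.normal(10, 3, 320),
})
teams = members.groupby("team")["baseline"].agg(team_mean="mean", team_sd="std")

# Hypothetical end-line score that rises with baseline heterogeneity,
# mimicking the mutual-help mechanism discussed in the text.
teams["final"] = (0.5 * teams["team_mean"] + 0.3 * teams["team_sd"]
                  + rng.normal(0, 1, 80))

fit = smf.ols("final ~ team_mean + team_sd", data=teams).fit()
print(round(fit.params["team_sd"], 2))
```

Controlling for the team mean is essential here: without it, a larger spread could simply proxy for having one very strong member.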
R2 0.10 0.18
Observations 1,333 1,103
Notes: The dependent variable is the standardized test score on the BEPC. Robust standard
errors in parentheses (clustered at the school level).
*** Significant at the 1 percent level.
** Significant at the 5 percent level.
* Significant at the 10 percent level.
Figure: Final performance on the BEPC plotted against the within-team standard deviation of
the previous year's score.
The expected cost of the prizes for the Individual Target and the Team Target was
$2,000 each, and $1,920 for the Team Tournament. Table 9 shows the actual amounts
that were won and paid as incentives. The Individual Target cost 54.50 percent more
than was estimated, while the Team Target cost 43.50 percent less than was estimated.
The ratio of the cost per student to the average test score gain, presented in the
last row of Table 9, is about $16 per standard deviation gained in test scores for the
Team Tournament. This indicates that incentivizing students could be cost effective
in this context.
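The $16-per-standard-deviation figure can be reproduced approximately from numbers reported earlier in the text. Using the expected rather than realized cost, and the stated arm size of 28 schools with 12 participants each, is a simplification for this sketch.

```python
# Back-of-the-envelope cost effectiveness for the Team Tournament arm.
cost_usd = 1920        # expected prize cost reported in the text
n_students = 28 * 12   # 28 schools, 12 participants each
ate_sd = 0.34          # average treatment effect in standard deviations

cost_per_sd = cost_usd / n_students / ate_sd
print(round(cost_per_sd, 1))  # ≈ 16.8, close to the ~$16 per SD cited above
```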
R2 0.06 0.06
Observations 328 333
Notes: The dependent variable is the end-line test score on a scale of 0–20 points. Robust stan-
dard errors in parentheses.
*** Significant at the 1 percent level.
** Significant at the 5 percent level.
* Significant at the 10 percent level.
Notes: The currency is the US dollar. Robust standard errors of the average treatment effects
in parentheses.
As for the internal rate of return, the costs of the programs evaluated in this paper
are relatively small, even with the operational costs included. On the benefit side,
one could think of pecuniary and nonpecuniary benefits. The nonpecuniary ben-
efits include the effect of education, not only on health outcomes but also on many
sociocultural outcomes as documented recently in Kenya (Friedman et al. 2011).
While I do not possess robust data to estimate the internal rate of return, there is
reason to believe that it is likely to be high. Recent estimates in Benin by a World
Bank study suggest that the pecuniary returns are high relative to the cost of this
study. Researchers found that the marginal social return of the BEPC relative to the
primary school certificate (that is, the extra four years after graduation from primary
school) is r_{m/p} = 1.2 percent; the marginal social return of the high school diploma
relative to the BEPC is r_{h/m} = 7.1 percent (World Bank 2008).16
16. This World Bank study used a basic Mincer regression framework to estimate the private and social marginal returns to different levels of education in Benin. It also reported the yearly cost of different levels of education. The private return compares the gains from acquiring that level of education against the private cost. The social return includes the public funding of education in the cost columns.
V. Conclusion
REFERENCES
Ammermueller, Andreas, and Jörn-Steffen Pischke. 2009. “Peer Effects in European Primary Schools:
Evidence from Progress in International Reading Literacy Study.” Journal of Labor Economics
27 (3): 315–48.
Angrist, Joshua, Eric Bettinger, and Michael Kremer. 2006. “Long-Term Educational Consequences
of Secondary School Vouchers: Evidence from Administrative Records in Colombia.” American
Economic Review 96 (3): 847–62.
Angrist, Joshua, Daniel Lang, and Philip Oreopoulos. 2009. “Incentives and Services for College
Achievement: Evidence from a Randomized Trial.” American Economic Journal: Applied Eco-
nomics 1 (1): 136–63.
Angrist, Joshua, and Victor Lavy. 2009. “The Effects of High Stakes High School Achievement
Awards: Evidence from a Randomized Trial.” American Economic Review 99 (4): 1384–1414.
Angrist, Joshua, Philip Oreopoulos, and Tyler Williams. 2010. “When Opportunity Knocks, Who
Answers? New Evidence on College Achievement Awards.” National Bureau of Economic Research
(NBER) Working Paper 16643.
Barrera-Osorio, Felipe, Marianne Bertrand, Leigh L. Linden, and Francisco Perez-Calle. 2011.
“Improving the Design of Conditional Transfer Programs: Evidence from a Randomized Educa-
tion Experiment in Colombia.” American Economic Journal: Applied Economics 3 (2): 167–95.
Bayer, Patrick, Randi Hjalmarsson, and David Pozen. 2009. “Building Criminal Capital Behind Bars:
Peer Effects in Juvenile Corrections.” Quarterly Journal of Economics 124 (1): 105–47.
Behrman, Jere R., Piyali Sengupta, and Petra Todd. 2002. The Impact of PROGRESA on Achievement
Test Scores in the First Year. Washington, DC: International Food Policy Research Institute
(IFPRI), September.
Berry, James. 2009. “Child Control in Education Decisions: An Evaluation of Targeted Incentives to
Learn in India.” http://www.depeco.econo.unlp.edu.ar/cedlas/ien/pdfs/meeting2009/papers/berry.pdf.
Bettinger, Eric P. 2008. “Paying to Learn: The Effect of Financial Incentives on Elementary School
Test Scores.” Paper presented at the CESifo/PEPG joint conference, Munich, May 16–17.
Blimpo, Moussa P. 2014. “Team Incentives for Education in Developing Countries: A Random-
ized Field Experiment in Benin: Dataset.” American Economic Journal: Applied Economics.
http://dx.doi.org/10.1257/app.6.4.90.
Carrell, Scott E., Richard L. Fullerton, and James E. West. 2009. “Does Your Cohort Matter? Measur-
ing Peer Effects in College Achievement.” Journal of Labor Economics 27 (3): 439–64.
Case, Anne C., and Lawrence F. Katz. 1991. “The Company You Keep: The Effects of Family and
Neighborhood on Disadvantaged Youths.” National Bureau of Economic Research (NBER) Work-
ing Paper 3705.
Cipollone, Piero, and Alfonso Rosolia. 2007. “Social Interactions in High School: Lessons from an
Earthquake.” American Economic Review 97 (3): 948–65.
Das, Jishnu, Stefan Dercon, James Habyarimana, Pramila Krishnan, Karthik Muralidharan, and
Venkatesh Sundararaman. 2013. “School Inputs, Household Substitution, and Test Scores.” Amer-
ican Economic Journal: Applied Economics 5 (2): 29–57.
De Giorgi, Giacomo, Michele Pellizzari, and Silvia Redaelli. 2010. “Identification of Social Interac-
tions through Partially Overlapping Peer Groups.” American Economic Journal: Applied Econom-
ics 2 (2): 241–75.
Ding, Weili, and Steven F. Lehrer. 2007. “Do Peers Affect Student Achievement in China’s Secondary
Schools?” Review of Economics and Statistics 89 (2): 300–312.
Duflo, Esther, Pascaline Dupas, and Michael Kremer. 2011. “Peer Effects, Teacher Incentives, and
the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya.” American Economic
Review 101 (5): 1739–74.
Duflo, Esther, and Emmanuel Saez. 2003. “The Role of Information and Social Interactions in Retire-
ment Plan Decisions: Evidence from a Randomized Experiment.” Quarterly Journal of Economics
118 (3): 815–42.
Friedman, Willa, Michael Kremer, Edward Miguel, and Rebecca Thornton. 2011. “Education as Lib-
eration?” National Bureau of Economic Research (NBER) Working Paper 16939.
Fryer, Roland G., Jr. 2010. “Financial Incentives and Student Achievement: Evidence from Random-
ized Trials.” National Bureau of Economic Research (NBER) Working Paper 15898.
Hoxby, Caroline. 2000. “Peer Effects in the Classroom: Learning from Gender and Race Variation.”
National Bureau of Economic Research (NBER) Working Paper 7867.
Kandel, Eugene, and Edward P. Lazear. 1992. “Peer Pressure and Partnerships.” Journal of Political
Economy 100 (4): 801–17.
Kremer, Michael, Edward Miguel, and Rebecca Thornton. 2007. “Incentives to Learn.”
http://digilander.libero.it/mgtund/IncentivesToLearn_Kremer.pdf.
Leuven, Edwin, Hessel Oosterbeek, and Bas Van der Klaauw. 2010. “The Effect of Financial Rewards
on Students’ Achievement: Evidence from a Randomized Experiment.” Journal of the European
Economic Association 8 (6): 1243–65.
Li, Tao, Li Han, Scott Rozelle, and Linxiu Zhang. 2010. “Cash Incentives, Peer Tutoring, and Parental
Involvement: A Study of Three Educational Inputs in a Randomized Field Experiment in China.”
Stanford University Rural Education Action Project Working Paper 221.
Neal, Derek, and Diane Whitmore Schanzenbach. 2010. “Left Behind by Design: Proficiency Counts
and Test-Based Accountability.” Review of Economics and Statistics 92 (2): 263–83.
Sacerdote, Bruce. 2001. “Peer Effects with Random Assignment: Results for Dartmouth Roommates.”
Quarterly Journal of Economics 116 (2): 681–704.
Slavin, Robert E. 1984. “Students Motivating Students to Excel: Cooperative Incentives, Cooperative
Tasks, and Student Achievement.” Elementary School Journal 85 (1): 53–63.
Stiglitz, Joseph E. 1990. “Peer Monitoring and Credit Markets.” World Bank Economic Review 4 (3):
351–66.
World Bank. 2009. Le système éducatif béninois: Analyse sectorielle pour une politique éducative plus
équilibrée et plus efficace. Le Développement Humain en Afrique. Washington, DC: World Bank.