
American Economic Journal: Applied Economics 2014, 6(4): 90–109

http://dx.doi.org/10.1257/app.6.4.90

Team Incentives for Education in Developing Countries:
A Randomized Field Experiment in Benin†

By Moussa P. Blimpo*

I examine the impact of student incentives in Benin, using three different designs that can be implemented relatively cheaply and with administrative data. The first design is a standard incentive structure where students receive monetary rewards for reaching a performance target. In the other two designs, teams of four students receive incentives based on either their performance level as a group or in a team tournament scheme. I find a large and similar average treatment effect across designs, ranging from 0.27 to 0.34 standard deviations (standard errors do not allow me to rule out that the three designs are equally effective). (JEL C93, D82, I21, I28, O15)

While there has been much recent research on the power of providing monetary incentives to students in order to improve attendance and learning outcomes, relatively little has taken place in low-income countries, and almost none in Africa.
Recent reports in many African countries show poor learning outcomes. There are
many potential causes of low performance and enrollment retention rates, includ-
ing the need for children to work and cultural norms (particularly for females). One
important reason for low retention could be a low perceived return on the invest-
ment in schooling for parents and students. This makes enrollment less attractive to
families, especially those without a long tradition of formal schooling. If students do
not internalize the entire future payoff of acquiring education, they will underinvest
in their learning efforts. Providing short-term incentives to learn is one way to improve learning and retention.
This study uses a randomized experiment in Benin to investigate the effect of
providing relatively small monetary rewards to students based on their learning
outcomes. I offer incentives directly to students using three designs. The first,

* Department of International and Area Studies, University of Oklahoma, 338 Cate Center Drive, Norman, OK
73072 (e-mail: moussa.blimpo@ou.edu). This work was supported by the National Science Foundation under the
award No. SES-0750962. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the author and do not necessarily reflect those of the National Science Foundation. The protocol for
this research received the formal approval of the University Committee on Activities Involving Human Subjects
(UCAIHS) at New York University. I would like to thank Professors Nicola Persico and Leonard Wantchekon for their
generous support throughout this project. Thanks to Professor William Easterly, Dr. David K. Evans, Ali Yurukoglu,
John Shoven, Greg Rosston, Gopi Shah Goda, Camille Landais, Nick Sanders, Peter Nilsson, Habiba Djebbari, Justin
Sandefur, Andrew Zeitlin, and anonymous referees for their comments and suggestions. I benefited from conversations
with Andrew Schotter, Petra Todd, Nancy Qian, Pauline Grosjean, Pascaline Dupas, and Michael Kremer. Thanks
to participants of seminars and conferences at the MEPSA Seminar (IREEP, Benin 2009), Amherst College, Dalhousie University, the Impact Evaluation Network 2010 Conference (University of Miami), SIEPR (Stanford University), DIME (World Bank 2009), NEUDC 2010 (MIT), and CSAE 2011 (University of Oxford). All errors are mine.

† Go to http://dx.doi.org/10.1257/app.6.4.90 to visit the article page for additional materials and author disclosure statement(s) or to comment in the online discussion forum.


Individual Target, is a standard incentive design where students receive monetary rewards individually for reaching a performance target. In the second design, Team Target, teams of four students receive rewards based on the performance of the team as a whole. In the third design, Team Tournament, teams of four students compete across schools for three substantial monetary prizes.
I randomly assigned 100 secondary schools to one of three treatment groups or
the control group. In treatment schools, tenth-grade students were presented with
equivalent incentives at the beginning of the school year, and their scores on a
national examination were collected at the end of the year to serve as the final outcome.
The results of the experiment show a substantial effect on students' learning outcomes, on average, across the three designs. I find an average treatment effect of 0.29 standard deviations on the overall test score in the Individual Target group (significant at the 5 percent level), 0.27 standard deviations in Team Target (significant at the 10 percent level), and 0.34 standard deviations in Team Tournament (significant at the 5 percent level). The impact is statistically the same across interventions, indicating that the incentives themselves have the first-order effect, irrespective of the design through which they were delivered.
This finding suggests that relatively small incentives for students can generate
substantial gains in test scores in Benin. This research contributes to the small but
growing literature about incentivizing educational attainment in developing coun-
tries. In addition, this work relied exclusively on administrative data, making it easy to scale up at relatively low administrative cost.
The rest of the paper is organized as follows. Section I presents an overview of the literature on incentives in academic settings. Section II describes the experimental design. Section III presents the econometric framework and the identification strategy, Section IV presents and discusses the results, and Section V concludes.

I.  Related Literature

The existing work on the impact of incentives on learning has been concentrated in high-income countries. Studies from the Netherlands (Leuven, Oosterbeek, and Van der Klaauw 2010), the United States (Bettinger 2008; Fryer 2010), Israel (Angrist and Lavy 2009), and Canada (Angrist, Lang, and Oreopoulos 2009; Angrist, Oreopoulos, and Williams 2010) found mixed results. For example, Fryer (2010) found that incentives may be more effective when tied to inputs rather than performance; one reason is that students do not necessarily know the production function.
There are many reasons to believe that the impact of similar programs in low-income countries could be positive. First, students in high-income countries are more likely to already receive some form of incentives from their parents. In contrast, parents in rural areas of poor countries may be unable to provide such support or afford those incentives. Second, because learning outcomes are substantially lower in low-income settings, additional effort may yield higher value added. Finally, the relative value of the incentives is much higher, potentially making the schemes more likely to be cost effective in low-income countries.
To my knowledge, the only study conducted in Africa that focused on this ques-
tion is Kremer, Miguel, and Thornton (2007), which tested a girls’ scholarship

program in two rural districts of Kenya. The researchers offered financial incentives to sixth-grade girls for scoring in the top 15 percent and found substantial improvement in learning outcomes.1 In most instances, these programs aim to reduce the cost of human capital investments for households. One attractive aspect of this study, in addition to contributing further evidence of the power of incentives for education in low-income countries, is that it tests programs that rely entirely on administrative data routinely gathered by the Ministry of Education.
Several recent studies have also focused on the importance of the design and structure of the incentives. For example, incentives might have different effects depending on whether they target families (parents) or children.
Berry (2009) addressed this question in the context of India and found that for low-income families the incentives were more effective when targeted to the children directly.2 Another study in rural China compared several combinations of cash incentives, peer tutoring, and parental involvement, and concluded that fostering peer interaction enhances learning outcomes (Li et al. 2010). In Colombia, Barrera-Osorio et al. (2011) studied design aspects of conditional cash transfer programs and found that it is more effective to delay some of the payment until a re-enrollment decision is made, or to tie some of the incentives to graduation instead of attendance exclusively. One shortcoming of many of the existing studies is that they require
large implementation and data collection exercises if they are to be scaled up to the
entire country.
I chose to evaluate incentives' effects on both individuals and teams because incentives directed only at the individual may fail to motivate students at either extreme of the performance spectrum. Low-ability students may not work harder if they perceive the target to be unattainably high, while high-ability students could win without extra effort if the target is perceived to be too low. This is precisely what Neal and Schanzenbach (2010) found regarding teacher effort in a study of the effect of the No Child Left Behind policy, in which schools were rewarded for achieving specified targets. Providing incentives in teams could foster mutual help among students (Slavin 1984). It could also generate positive peer effects, which are
widely documented in the context of classrooms (Hoxby 2000; Sacerdote 2001;
Cipollone and Rosolia 2007; Ding and Lehrer 2007; Carrell, Fullerton, and West
2009; Ammermueller and Pischke 2009; De Giorgi, Pellizzari, and Redaelli 2010;
Duflo, Dupas, and Kremer 2011).3 However, the drawbacks of team incentives—the
free rider problem and the potential for excessive pressure from peers (Stiglitz 1990; Kandel and Lazear 1992)—leave the effect of the team-based incentive design
as an essentially empirical question.

1 Conditional cash transfer programs, such as Progresa in Mexico and PACES (Angrist, Bettinger, and Kremer 2006) in Colombia, are also successful examples, but in upper-middle-income countries.
2 One shortcoming of Berry (2009) is that it lacks a proper control group.
3 Peer effects have been documented in other areas, such as people's attitudes toward financial decisions (Duflo and Saez 2003) or criminal behavior (Case and Katz 1991; Bayer, Hjalmarsson, and Pozen 2009).

II.  Experimental Design

This section presents a brief background of the education system in Benin and describes key elements of the experiment. The sample size for this research is large enough to identify an effect size of 0.3 standard deviations at a significance level of 5 percent with a statistical power of at least 80 percent between each design and the control group. The choice of minimum effect size was partly based on its economic significance. The national director of secondary education in Benin reported that a minimum improvement in test scores corresponding to about 0.25 standard deviations would be considered economically significant and call for his attention. Because of additional constraints on the resources available to pay the incentives, we chose an effect size of 0.30. The sample size does not allow us to detect small differences or to compare the different intervention groups against each other.

A. Contextual Background

While most African countries have experienced an expansion of access to education over the past two decades, there are now growing concerns about the quality of that education and about learning outcomes. Net primary school enrollment in Benin was 93 percent in 2008. However, the literacy rate was only 35 percent in 2005, and the expected years of schooling at birth were about 7 in 2002.4
In Benin, primary and secondary education lasts a total of 13 years and is split into 3 cycles: 6 years of primary school, 4 years of middle school, and 3 years of high school. To progress from one grade to the next, students must pass school-specific tests within cycles and national certification exams at the end of each cycle. This project involves students at the end of middle school (grade 10), who must take a national certification examination called the BEPC, which I use as the performance measure.

B. Selection of Participants

I obtained the administrative list of all 1,279 secondary schools in Benin from
the Ministry of Education along with figures on the performance of each in the
previous year.5
I excluded schools where the passing rate was 65 percent or more6 because the study is intended primarily for poorly performing areas. I also excluded schools that did not have the tenth grade or that were very small (fewer than ten candidates). This left 749 schools.
I randomly assigned 100 participant schools (1,476 students) to the 4 groups.
The Individual Target and the Team Target groups had 22 schools each and 16 stu-
dents per school. The Team Tournament group had 28 schools and 12 students
per school. Finally, the control group had 28 schools and about 16 students per

4 World Bank (2009).
5 The study was conducted with the formal approval of the Ministry of Education of Benin. I also received approval from the University Committee on Activities Involving Human Subjects at New York University to ensure that there were no ethical concerns. All documents are available upon request.
6 The average passing rate on the BEPC was 43 percent in the previous academic year (2007–2008).
94 American Economic Journal: applied economicsOctober 2014

[Figure 1. Sampling. Flowchart: all 1,279 middle schools in Benin; high-performing schools and schools without a tenth grade or with fewer than ten candidates removed, leaving 749 schools; complete randomization into Individual Target (22 schools, 16 students each), Team Tournament (28 schools, three teams of four students each), Team Target (22 schools, four teams of four students each), and Control (28 schools, 16 students each); students randomly selected within schools.]

school.7 Figure 1 depicts the sampling process and Figure 2 the geographical dis-
tribution of the schools.
At the school level, the selection of classrooms and students was also random. I randomly selected a tenth-grade class, read the consent form, and then randomly selected 12 to 16 students to participate. No student withdrew from the study or refused to
participate. In the Team Tournament schools, I selected 12 students to form 3 teams
and compete for 3 prizes in the tournament. The other 3 groups consisted of 16 stu-
dents per school. Treatment is at the school level; therefore, students from the same
school received the same treatment.8 Finally, I collected baseline information on the
students and gave them instructions about their specific incentive package.9
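The assignment mechanism described above, complete randomization of schools into arms of fixed sizes followed by random selection of students within schools, could be sketched as follows. All identifiers and the seed are illustrative placeholders, not the study's actual data:

```python
# Illustrative sketch of the school-level complete randomization:
# 100 schools drawn from the 749 eligible ones, then shuffled into arms.
import random

random.seed(2008)  # seed chosen only to make the illustration reproducible
eligible_schools = list(range(749))                  # the 749 eligible schools
participants = random.sample(eligible_schools, 100)  # the 100 participant schools

arms = (["individual_target"] * 22 + ["team_target"] * 22
        + ["team_tournament"] * 28 + ["control"] * 28)
random.shuffle(arms)
assignment = dict(zip(participants, arms))           # school -> treatment arm
```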

C. The Treatments

I described the incentives to students at the beginning of the school year in September 2008, according to the following schedules.

7 The difference in the number of schools between Team Tournament and the other groups is due to the fact that I wanted to have as many teams as prizes within each school for the tournament. This was intended to mitigate potential internal competition among teams. The statistical power threshold is met for each intervention relative to the control.
8 Note that by design there is no possibility of a student self-selecting into the treatment or control schools. The treatment status of schools and students was unknown before we actually reached the school, and afterward it remained unchanged.
9 Underage students (under 18 years old) were required to return a signed parental consent form to participate. All consent forms were returned to the principals and forwarded to us.

[Figure 2. Geographical distribution of the sample across the country (map). Legend: Flag = Control, 1 = Individual Target, 2 = Team Target, 3 = Team Tournament.]

In the Individual Target group, each participant received a promise to be paid 5,000 francs CFA (≈ $10) if his or her average score was between 10/20 (inclusive) and 12/20 (exclusive), and 20,000 francs CFA (≈ $40) for scoring at least 12/20.10 In the Team Target group, each randomly formed team of four students received a promise to be paid 20,000 francs CFA if the average score of the team was between 10/20 (inclusive) and 12/20 (exclusive), and 80,000 francs CFA (≈ $160) if the average score was at least 12/20. There was no restriction on how the winning teams could share the prize.11 In the Team Tournament group, each randomly formed team of four students received a promise to be paid 320,000 francs CFA (≈ $640) if its average score was one of the top 3 average scores among the 84 teams taking part in the tournament nationwide. To address the adverse effect of competition within the same school, we had three identical prizes and no more than three teams in a given school. Finally, the control group was offered no incentive; I only collected baseline and endline data in these schools. The experiment lasted one school year, and I collected the performance data in August 2009.
To set the prizes, I used the previous year's BEPC outcome data for all the schools and calculated the overall average probability of passing ($P_p = 0.33$), passing with honors ($P_h = 0.06$), and failing ($P_f = 0.61$) in the entire population that year. Based on those probabilities, and assuming that students were risk neutral with respect to this experiment, I set the prizes so that the ex ante expected amount of the prizes allocated to each treatment group was of roughly similar magnitude, as described below.

10 A score of 10 out of 20 is the passing grade on the BEPC, and 12 out of 20 is the threshold to pass with honors.
11 Aside from one team that made a rule to pay only the team members who passed, all teams agreed to share equally if they won.

Table 1—Size of the Incentives

                           Amount   Pocket money        Official minimum wage
                           (USD)    (number of weeks)   (percent of monthly)
Individual—pass BEPC          10       4.5                    17
Individual—honors             40      18                      69
Team target—pass              40      18                      69
Team target—honors           160      72                     271
Team tournament—top 3        640     291                   1,085

Note: The incentives in terms of number of weeks of pocket money are obtained by dividing the size of the incentive in USD by the average weekly pocket money in the data.

Individual Target:

$$P_p \times 5{,}000 + P_h \times 20{,}000 + P_f \times 0 = 2{,}850 \;(\approx \$6).$$

Team Target: Since the prize would be awarded based on the average performance of the team, one can think of each team as a representative student (the average of the four teammates). That "average" student faces similar odds as students in the Individual Target group. Therefore, under those assumptions, the ex ante expected payoff for each team member is as follows:

$$P_p \times 20{,}000/4 + P_h \times 80{,}000/4 + P_f \times 0 = 2{,}901 \;(\approx \$6).$$

Team Tournament: Here, the odds of winning are independent of $P_p$, $P_h$, and $P_f$, since the top three teams out of the 84 teams win regardless of their level of performance. The expected payoff from participating in the team tournament is then

$$3/84 \times 320{,}000/4 + 81/84 \times 0 = 2{,}857 \;(\approx \$6).$$
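A quick numerical check of these ex ante calculations, using the rounded probabilities reported in the text (the 2,901 CFA figure for Team Target presumably reflects unrounded probabilities):

```python
# Verify the ex ante expected per-student prize amounts (in francs CFA),
# using the rounded probabilities P_p = 0.33 and P_h = 0.06 from the text.
p_pass, p_honors = 0.33, 0.06

individual = p_pass * 5_000 + p_honors * 20_000            # = 2,850
team_target = p_pass * 20_000 / 4 + p_honors * 80_000 / 4  # = 2,850 (text: 2,901)
tournament = 3 / 84 * 320_000 / 4                          # ~= 2,857
print(individual, team_target, tournament)
```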

D. Size of the Incentives

The financial incentives are small in absolute terms as reported above, but they are
relatively large for the context. For the winners, the individual incentive amounted
to about $10 without honors and $40 with honors. This corresponds, respectively,
to 4.5 and 18 weeks’ worth of the average pocket money of the participants.12 It
also represents 17 percent and 69 percent, respectively, of the official monthly mini-
mum wage in Benin. Each of the three prizes in the Team Tournament treatment
represents 291 weeks’ worth of the average pocket money. Each member of a win-
ning team in the Team Tournament could make as much as 1.4 years of average
pocket money or 2.75 months of minimum wage. The second column of Table 1 summarizes the incentives in nominal US dollars. These amounts are also expressed in terms of reported weekly pocket money and as a share of the monthly minimum wage in Benin.

12 The information on the average weekly pocket money was collected during a pretest of the survey instruments.

E. The Measure of Performance: The BEPC

I use students' scores on the BEPC as the measure of performance. The BEPC is a national examination that all tenth-grade students must pass in order to advance to high school. It is a comprehensive examination that covers all subjects and has two phases: a written phase and an oral phase. The written phase consists of seven subjects: (i) mathematics, (ii) physics and chemistry or English, (iii) natural science, (iv) history and geography, (v) writing, (vi) reading comprehension, and (vii) French. The oral phase has two subjects: sport and oral communication. To calculate the average score, Benin's grading policy weights mathematics by three, sport and oral communication by one each, and each of the remaining subjects by two. Proctoring during the test is strict, and students are not proctored by their own professors. Instances of cheating on the BEPC are extremely rare.
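Under this weighting policy, and assuming the weighted scores are simply divided by the sum of the weights (an inference; the text does not state the divisor explicitly), a student's overall average would be

$$\bar{s} \;=\; \frac{3\,s_{\text{math}} + 2\sum_{j=1}^{6} s_j + s_{\text{sport}} + s_{\text{oral}}}{3 + 2 \times 6 + 1 + 1} \;=\; \frac{3\,s_{\text{math}} + 2\sum_{j=1}^{6} s_j + s_{\text{sport}} + s_{\text{oral}}}{17}\,,$$

where $s_j$ runs over the six written subjects other than mathematics.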
The BEPC is an ideal test for this study because it is a national test that is administered regardless of the experiment. The test is administered on the same day and at the same time all over the country. Grading is centralized and anonymous. Students first sit for the written phase. If they do not achieve the threshold average score over the 7 written subjects (typically about 9 out of 20), they are disqualified and do not take the oral exam. There is an unwritten but strong leniency policy that allows students who qualify for the oral phase to almost always achieve a passing grade or to increase their score. For this reason, very few students have a final grade in the range of 9 to 10. This custom is inconsequential to the identification strategy because it applies to the treated and the control groups in the same way. Still, since it is easier to manipulate grades in the oral phase, I report results based only on the written phase. This also ensures that all students' scores are based on the same number of subjects.

F. Data

I combine baseline data, the final outcome data, and some information from mon-
itoring visits to conduct the empirical analysis.13

The Baseline Data.— At the beginning of the school year, the research team col-
lected baseline data and implemented the treatments. The baseline data consisted of
the school characteristics and was constructed through an in-depth interview with
the head teacher at each school. We also collected information about the school’s
environment, infrastructure, size, management, community participation, and
other characteristics. Next, we interviewed each participating student. We collected sociodemographic information for each student, including their past performance, their home environment, and their opinions on various aspects of their education. In addition, we collaborated with ninth-grade teachers to design short mathematics and French quizzes that were administered to the students.14

13 Note that the key analysis done in this paper was written down and saved with my peers in the form of an analysis protocol before the final data were collected.
14 Quiz scores will, however, not be used in the analysis because of an implementation issue. In the intervention schools, the quizzes were administered after the incentives were announced, and students in those groups took them more seriously than in the control group. In addition, the quizzes consisted of six multiple-choice questions and did not yield important variation in the outcome.

The Follow-up Monitoring.— Two months before the final exam, we visited about half the schools to remind students of the incentives. Unfortunately, due to budget constraints, we were only able to visit schools in the southern region. However, we called all the schools to remind them about the incentives. We also developed
a questionnaire to use as a monitoring tool, in which we asked students about their
expectations for the final exam, whether they received other incentives, and whether
they studied in teams. We updated students’ cell phone contact information. We also
mailed envelopes with these questionnaires and a prestamped return envelope to a
randomly selected set of schools. We were able to collect monitoring information on
about half the participants.

The Final Data.— The final data consist of the students' BEPC report cards with all the scores in each subject and the final weighted average score. We obtained these data directly from the Ministry of Education and the National Directorate of Secondary Education.

G. Baseline Summary Statistics and Groups Comparison

I now present basic summary statistics on the student and school populations and show that the different groups were comparable at the baseline. The sample consists of 1,476 tenth-grade students in 100 schools.
The average tenth-grade class size is 41.30, and there are about 3 tenth-grade
classes per school. Schools have 13 classrooms and 695 students on average. About
40 percent of the sample are females. The average age is 16.22, and the average is
slightly higher for male (16.33) than female (16.06) students. This suggests that
females are more likely than males to drop out after repeating a grade. Regarding
family characteristics, 14 percent of the students reported their father as deceased
and 6 percent reported their mother as deceased. The average number of siblings is
6.54. Of the sample population, 39 percent reported having worked for pay during
the school year.
Baseline group comparison shows that there were no systematic differences
between the control group and the intervention groups. Tables 2 and 3 show that
treatment groups are balanced along both the school and student characteristics. The
last column shows the p-values for the joint test of equality across all groups, and no
variables are significant at the 5 percent level.
Figure 3 shows a similar distribution of test scores in French and mathematics
across groups at the baseline.

III.  Econometrics Framework and Identification

The primary objective of this paper is to measure the causal effect of the incen-
tives across intervention groups. I do this by estimating and comparing the average
treatment effect of the three treatments relative to the control group.


Table 2—Group Comparison at the Baseline: School-level Characteristics

Variables All Control IT T-target T-tourn. p-valuea


Percent success BEPC 2008 0.35 0.37 0.35 0.33 0.36 0.81
(0.15) (0.14) (0.15) (0.17) (0.14)
Female percent success BEPC 2008 0.29 0.28 0.28 0.28 0.31 0.84
(0.18) (0.16) (0.16) (0.19) (0.20)
Male percent success BEPC 2008 0.39 0.43 0.38 0.36 0.37 0.85
(0.17) (0.17) (0.16) (0.18) (0.16)
Class size (grade 10) 41.30 40.16 40.39 44.71 40.40 0.54
(14.97) (15.78) (16.66) (13.64) (14.15)
Number of grade 10 classes 2.89 3.04 2.41 2.86 3.14 0.63
(2.64) (3.23) (2.48) (2.29) (2.43)
Head’s tenure (years) 15.35 13.54 14.59 13.91 18.89 0.14
(9.81) (9.42) (9.26) (9.70) (10.26)
Number of classes 15.61 17.14 13.73 15.18 15.89 0.87
(14.20) (14.87) (15.84) (13.58) (13.18)
School is double shifts 0.33 0.32 0.27 0.36 0.36 0.78
(0.47) (0.48) (0.46) (0.49) (0.49)
Number of classrooms 13.18 14.25 11.09 13.13 13.79 0.60
(9.53) (9.13) (10.01) (10.77) (8.73)
Number of students 695.12 756.52 659.19 666.68 685.21 0.99
(851.84) (934.64) (1,028.22) (784.88) (705.60)
Tuition in 1,000s of CFA 50.12 55.51 55.36 47.76 42.58 0.45
(34.6) (38.72) (31.74) (33.23) (33.50)
School has a library 0.36 0.39 0.41 0.36 0.29 0.66
(0.48) (0.50) (0.50) (0.49) (0.46)
School has electricity 0.73 0.75 0.77 0.64 0.75 0.56
(0.45) (0.44) (0.43) (0.49) (0.44)
School has a health center 0.16 0.14 0.18 0.09 0.21 0.49
(0.37) (0.36) (0.39) (0.29) (0.42)
Does the school have a PTA? 0.75 0.75 0.68 0.64 0.89 0.1*
(0.44) (0.44) (0.48) (0.49) (0.32)

Observations 100 28 22 22 28

Note: Standard deviations in parentheses.


a The p-value refers to the joint test of equality across all three groups. IT = Individual Target; T-target = Team Target; T-tourn. = Team Tournament. The first three rows are administrative data from the Ministry of Education.
*** Significant at the 1 percent level.
 ** Significant at the 5 percent level.
  * Significant at the 10 percent level.

I estimate equation (1) below, where standard errors are clustered at the school level:

(1)   $\text{score}_{is} \;=\; \beta_0 \;+\; \sum_{k=1}^{3} \beta_k \, T^{k}_{is} \;+\; \epsilon_{is}\,,$

where $\text{score}_{is}$ is the score of student $i$ in school $s$, $T^{k}_{is}$ is a dummy variable for the treatment status of individual $i$ in school $s$, and $k = 1, 2, 3$ denotes the Individual Target, the Team Target, and the Team Tournament groups, respectively. Given that the schools and students were randomly assigned to the treatment, these estimates are consistent, unbiased estimates of the causal average treatment effect of each treatment arm.
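A minimal sketch of how equation (1) could be estimated with school-clustered standard errors follows; the variable names and the synthetic data are illustrative stand-ins, not the study's dataset:

```python
# Sketch of estimating equation (1) by OLS with school-clustered standard
# errors; all names and data here are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
arm_of_school = rng.integers(0, 4, size=100)      # 0 = control, 1-3 = treatments
arm = np.repeat(arm_of_school, 15)                # ~15 sampled students per school
df = pd.DataFrame({
    "school_id": np.repeat(np.arange(100), 15),
    "individual_target": (arm == 1).astype(int),  # T^1_is
    "team_target": (arm == 2).astype(int),        # T^2_is
    "team_tournament": (arm == 3).astype(int),    # T^3_is
})
df["score"] = 0.3 * (arm > 0) + rng.normal(size=len(df))  # standardized outcome

result = smf.ols(
    "score ~ individual_target + team_target + team_tournament", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(result.params)  # the beta_k are the average treatment effects
```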

Table 3—Group Comparison at Baseline: Students Characteristics

All Control IT T-target T-tourn. p-valuea


Student’s age 16.22 16.20 16.40 16.20 16.07 0.38
(1.84) (1.84) (1.97) (1.80) (1.75)
Student’s gender 0.60 0.60 0.58 0.60 0.60 0.82
(0.49) (0.49) (0.49) (0.49) (0.49)
Father living 0.86 0.88 0.85 0.84 0.88 0.36
(0.35) (0.33) (0.36) (0.37) (0.33)
Mother living 0.94 0.94 0.93 0.93 0.94 0.92
(0.24) (0.24) (0.25) (0.25) (0.24)
Have a paid tutor 0.15 0.16 0.14 0.15 0.16 0.90
(0.36) (0.37) (0.34) (0.36) (0.36)
Number of siblings 6.54 6.31 6.26 6.99 6.61 0.54
(4.58) (4.23) (4.59) (5.07) (4.42)
Electricity at home? 0.72 0.68 0.72 0.74 0.75 0.91
(0.45) (0.47) (0.45) (0.44) (0.43)
Car at home? 0.33 0.36 0.33 0.28 0.33 0.69
(0.47) (0.48) (0.47) (0.45) (0.47)
Have a cell phone? 0.37 0.41 0.38 0.35 0.35 0.87
(0.48) (0.49) (0.48) (0.48) (0.48)
Weekly pocket money 1,204.63 1,291.22 1,173.12 1,136.6 1,201.34 0.94
(1,495.48) (1,674.47) (1,289.90) (1,533.75) (1,408.01)
Worked for a pay? 0.39 0.41 0.38 0.38 0.41 0.91
(0.49) (0.49) (0.49) (0.49) (0.49)
Time to school 25.01 24.65 22.14 27.92 25.25 0.13
(18.75) (19.15) (16.47) (19.59) (19.12)
Observations 1,476 423 347 367 339

Note: Standard deviations in parentheses.


a The p-value refers to the joint test of equality across all three groups. IT = Individual Target; T-Target = Team Target; T-Tourn. = Team Tournament.


*** Significant at the 1 percent level.
 ** Significant at the 5 percent level.
  * Significant at the 10 percent level.

IV.  Results and Discussions

In this section, I present the main findings, beginning with summary statistics for the BEPC outcomes and the overall average treatment effect. I also analyze the extent of heterogeneity of the treatment effect and the dynamics within teams. Finally, I assess the cost effectiveness of each treatment arm.

A. Summary Statistics for the BEPC Outcomes

In 2009, 150,847 candidates registered to take the BEPC. About 97  percent
(145,889) of the registered candidates actually took the test, and of those, 44.81 per-
cent passed.
The summary statistics in Table 4 show that students presented with any of the incentives were about 10 percentage points more likely to pass the BEPC than students assigned to the control group. Honors were awarded to 7 percent of students in the control group, 11 percent in the Individual Target group, 13 percent in the Team Target group, and 16 percent in the Team Tournament group.

[Figure 3. Distribution of Test Scores at the Baseline (previous year). Panel A: French test score the previous year; panel B: math test score the previous year. Kernel densities shown for the Control, I-target, T-target, and T-tournament groups.]

The lowest average score was in mathematics, in which students scored 4.70
out of 20 possible points, on average. The Team Tournament group was the best in
mathematics with an average of 5.11 out of 20. Overall, students scored best in his-
tory and geography (12.82/20). The three treatment groups scored better than the
control group in all subjects except writing.

B. Overall Average Treatment Effect

There were substantial test score gains for students who were presented with the
incentives. The average treatment effects are reported in the first two columns of
Table 5. The third and fourth columns compare nonscientific subjects (e.g., history)
with scientific subjects (e.g., mathematics).
The point estimate of the average treatment effect is 0.34 standard deviations for
the team tournament scheme, 0.29 for the individual scheme, and 0.27 for the team
target scheme. The effects are statistically significant at the 5 percent level for both
the Individual Target and the Team Tournament interventions, and significant at the
10 percent level for the Team Target intervention. As stated earlier, by design, the
statistical power does not allow me to distinguish small differences between inter-
ventions. Based on the actual effect sizes, the statistical powers are 81 percent for
the Individual Target, 76 percent for the Team Target, and 93 percent for the Team
Tournament. A t-test could not reject the null hypothesis that the average treatment
effects are the same. The results are statistically the same across intervention groups

Table 4—Summary Statistics of Students’ Performance on the BEPC 2009

Variable Control Indiv. target Team tar. Team tour.


BEPC passing rate 0.47 0.57 0.56 0.58
(0.50) (0.49) (0.49) (0.49)
Passed with honors 0.07 0.11 0.13 0.16
(0.27) (0.32) (0.33) (0.36)
Weighted average scorea 8.50 9.30 9.23 9.41
(2.69) (2.45) (2.81) (2.63)
Weighted average scoreb 8.01 8.69 8.65 8.82
(2.26) (2.07) (2.46) (2.32)
Writing score 8.23 8.39 7.92 8.04
(2.51) (2.55) (2.68) (2.70)
Reading score 8.58 9.00 9.42 9.41
(3.39) (2.87) (3.33) (3.34)
Mathematics score 4.06 4.85 4.87 5.11
(2.60) (2.75) (2.78) (3.17)
Natural science score 8.89 9.26 9.60 9.78
(3.32) (3.14) (3.37) (3.29)
History and geography score 12.11 13.25 12.88 13.14
(4.72) (4.42) (4.50) (4.35)
Oral communication score 16.77 16.78 16.68 16.71
(1.11) (1.10) (1.17) (1.29)
Sport score 13.98 14.13 14.62 14.22
(3.43) (2.95) (2.49) (3.05)

Observations 378 328 351 326

Notes: Standard deviations in parentheses. Standard errors clustered at the school level. All the scores are out of 20 possible points.
a All subjects.
b Written subjects only.

Table 5—Average Treatment Effect on the BEPC Score

                    BEPC (all subjects)   Written BEPC   Science   Nonscience
                    (I)                   (II)           (III)     (IV)
Individual target 0.29** 0.29** 0.22* 0.25**
(0.13) (0.13) (0.13) (0.12)
Team target 0.27* 0.26* 0.28** 0.17
(0.14) (0.16) (0.14) (0.16)
Team tournament 0.34** 0.34** 0.38*** 0.24*
(0.13) (0.14) (0.13) (0.13)
Pooled (treatments) 0.30** 0.29** 0.29** 0.22*
(0.12) (0.12) (0.11) (0.11)

p-value (F-test of equality)a 0.86 0.83 0.45 0.88


R2 0.02 0.02 0.02 0.01

Observations 1,385 1,386 1,386 1,385

Notes: Robust standard errors in parentheses (clustered at the school level). Science = math-
ematics and natural sciences; Nonscience = language (reading, writing, second language, his-
tory, geography).
a The p-value in the second-to-last row is for the test of the equality of the ATE across all three treatment arms.
*** Significant at the 1 percent level.
 ** Significant at the 5 percent level.
  * Significant at the 10 percent level.

Table 6—Average Treatment Effect on the BEPC Score Controlling for Baseline Characteristics

BEPC (all subjects) Written BEPC Science Nonscience


(I) (II) (III) (IV)
Individual target 0.26** 0.25** 0.15 0.23**
(0.11) (0.11) (0.12) (0.11)
Team target 0.24* 0.23 0.26* 0.15
(0.14) (0.14) (0.13) (0.15)
Team tournament 0.28** 0.29** 0.28** 0.21*
(0.13) (0.13) (0.13) (0.12)

Student level control variablesa Yes Yes Yes Yes


School level control variablesb Yes Yes Yes Yes

R2 0.14 0.14 0.11 0.11


Observations 1,274 1,274 1,275 1,274

Notes: Robust standard errors in parentheses (clustered at the school level). Science = mathematics and natural sciences; nonscience = language (reading, writing, second language, history, geography).
a The student-level control variables include the student's score in the previous year (ninth grade), gender, whether the student's father is living or has passed away, and the time the student takes to travel from home to school.
b The school-level control variables include the head teacher's tenure, the average class size in tenth grade, the amount of the tuition fees, whether the school has a parent-teacher association, and whether there is a clinic or any health facility associated with the school.
*** Significant at the 1 percent level.
 ** Significant at the 5 percent level.
  * Significant at the 10 percent level.

both for scientific and nonscientific subjects. I ran the same estimates while controlling for a number of baseline variables, such as students' test scores during the previous year and a number of student and school characteristics. Table 6 shows that the results do not change.
This finding suggests that all three designs are viable policy tools for policymakers who want to incentivize students. The average treatment effects are large, and there is no difference between boys and girls. In a related study in Israel, Angrist and Lavy (2009) provided financial incentives for a high-stakes certification examination. Even though they found no effect for boys, they found substantially increased certification rates for girls. Given that the return to schooling is high in both contexts, these findings raise the question of why such additional financial incentives work.

C. Heterogeneity of the Treatment Effect

In this section, I examine the extent to which the different interventions affected
subgroups of students differently. The best way to approximate this effect is to
examine the heterogeneity across a measure of performance at the baseline. At the
baseline, we collected students’ overall annual average scores in the previous year.
I plot the baseline test score against the final score on the BEPC nonparametri-
cally. The results are presented in Figure 4. The different graphs show a monotoni-
cally increasing relation, as one would expect.15 The figures show roughly a similar

15 Tenth-grade repeaters were dropped from this estimation because their previous year's performance was on the tenth-grade exam, unlike the new students, who were in ninth grade.

[Figure 4. Heterogeneity of the Effect across the Baseline Performance Spectrum. Three panels plot end-line test score against baseline test score, with 95 percent confidence intervals, comparing the control group with each treatment group.]

pattern across the baseline performance range. However, the estimates lack the precision needed to draw strong conclusions regarding the heterogeneity of the treatment effect. The regression analysis is presented in Table 7, both under the assumption of a linear interaction effect and allowing for a nonlinear effect.
The first column shows a substantial and statistically significant interaction effect only for the tournament, indicating that high performers at the baseline benefited more when they were in the tournament arm. The second column presents the results where the treatment variables are interacted with quartiles of the baseline test score. The relative magnitude of the estimates is similar to the linear effect; however, the estimates are imprecise and statistically insignificant.

D. Disaggregated Test Scores and Dynamics within Teams

This section documents the mechanism through which the average treatment effect is produced within teams. Do high-performing team members help the others, or does each member simply work harder? If mutual help and complementarities play a role, one would expect that, controlling for the average baseline quality of the team, the impact would increase with the heterogeneity of the team. To investigate these dynamics, I look at how the baseline heterogeneity of the teams affects the final performance on the BEPC.
I find that, controlling for the baseline average performance of a team, heterogeneity within the team is positively associated with the final performance on the BEPC.

Table 7—Interaction Effect of the Baseline Performance

Linear effect Nonlinear effect

Individual target 0.28** 0.16


(0.12) (0.23)
Team target 0.18 0.27
(0.16) (0.27)
Team tournament 0.33*** 0.33
(0.13) (0.22)
Baseline score 0.18**
(0.09)

Individual target × baseline score −0.03 —


(0.10)
Team target × baseline score 0.21 —
(0.13)
Team tournament × baseline score 0.24** —
(0.11)

Individual target × baseline score — 0.28


 (second quartile dummy) (0.22)
Individual target × baseline score — 0.16
 (third quartile dummy) (0.20)
Individual target × baseline score — 0.03
 (fourth quartile dummy) (0.22)
Team target × baseline score — 0.11
 (second quartile dummy) (0.31)
Team target × baseline score — 0.10
 (third quartile dummy) (0.27)
Team target × baseline score — 0.07
 (fourth quartile dummy) (0.29)
Team tournament × baseline score — 0.14
 (second quartile dummy) (0.24)
Team tournament × baseline score — 0.10
 (third quartile dummy) (0.23)
Team tournament × baseline score — 0.24
 (fourth quartile dummy) (0.28)

R2 0.10 0.18
Observations 1,333 1,103

Notes: The dependent variable is the standardized test score on the BEPC. Robust standard
errors in parentheses (clustered at the school level).
*** Significant at the 1 percent level.
 ** Significant at the 5 percent level.
  * Significant at the 10 percent level.

Figure 5 shows a positive correlation between the variance in performance within teams at the baseline (controlling for the average of the teams) and the final scores of team members.
The results (Table 8) indicate that an increase of 1 standard deviation in the dispersion of performance within a team at the baseline is associated with an average improvement in the overall final score of the team members: the effects are, respectively, 16 percent and 17 percent gains in test scores for the Team Tournament and the Team Target groups.

[Two partial scatter plots: final score on the BEPC versus the within-team standard deviation of the previous year's score, with 95 percent confidence intervals and fitted values.]
Figure 5. Effect of Teams’ Heterogeneity at the Baseline on the Performance.

Note: Team target (top) and team tournament (bottom).

E. Relative Cost and Internal Rate of Return

The expected cost of the prizes was $2,000 each for the Individual Target and the Team Target, and $1,920 for the Team Tournament. Table 9 shows the actual amounts that were won and paid as incentives. The Individual Target cost 54.50 percent more than was estimated, while the Team Target cost 43.50 percent less than was estimated.
The ratio of the cost per student to the average test score gain, presented in the last row of Table 9, is about $16 per standard deviation gained in test scores for the Team Tournament. This indicates that incentivizing students could be cost-effective in this context (see the back-of-the-envelope check after Table 9).

Table 8—Effect of the within Team Performance Heterogeneity at the Baseline

Team tournament Team target

Previous year’s average score 0.33* 0.33***


(0.18) (0.10)
Within team variance 3.12*** 3.39**
  of previous year's score (1.05) (1.23)
Constant 21.02** 19.48***
(9.75) (6.73)

R2 0.06 0.06
Observations 328 333

Notes: The dependent variable is the end-line test score on a scale of 0–20 points. Robust stan-
dard errors in parentheses.
*** Significant at the 1 percent level.
 ** Significant at the 5 percent level.
  * Significant at the 10 percent level.

Table 9—Ex post Relative Cost of the Three Treatments

Individual Team target Team tournament


Actual cost $3,090 $1,130 $1,920
Ex ante estimated cost $2,000 $2,000 $1,920
Percent change +54.50 −43.50 0
Average treatment effect 0.29** 0.27* 0.35**
(0.13) (0.16) (0.14)
Cost per standard deviation $30 $12 $16

Notes: The currency is the US dollar. Robust standard errors of the average treatment effects
in parentheses.
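As a back-of-the-envelope check of the last row of Table 9 (inferring that cost per student is the actual prize outlay divided by the number of baseline participants in the arm, per Table 3): for the Team Tournament, $1,920/339 ≈ $5.7 per student, and $5.7/0.35 ≈ $16 per standard deviation; the same arithmetic gives $3,090/347 ≈ $8.9 and $8.9/0.29 ≈ $30 for the Individual Target, and $1,130/367 ≈ $3.1 and $3.1/0.27 ≈ $12 for the Team Target, with rounding accounting for small discrepancies.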

As for the internal rate of return, the costs of the programs evaluated in this paper
are relatively small, even with the operational costs included. On the benefit side,
one could think of pecuniary and nonpecuniary benefits. The nonpecuniary ben-
efits include the effect of education, not only on health outcomes but also on many
sociocultural outcomes as documented recently in Kenya (Friedman et al. 2011).
While I do not possess robust data to estimate the internal rate of return, there is reason to believe that it is likely to be high. Recent estimates in Benin by a World Bank study suggest that the pecuniary returns are high relative to the cost of this study. Researchers found that the marginal social return of the BEPC relative to the primary school certificate (that is, the extra four years since graduation from primary school) is $r_{m/p} = 1.2$ percent; the marginal social return of the high school diploma relative to the BEPC is $r_{h/m} = 7.1$ percent (World Bank 2008).16

16 This World Bank study used a basic Mincer regression framework to estimate the private and social marginal returns to different levels of education in Benin. It also reported the yearly cost of different levels of education. The private return compares the gains from acquiring that level of education against the private cost. The social return includes the public funding of education in the cost columns.

V. Conclusion

This paper assesses the impact of financial incentives on educational outcomes in the context of a developing country using a randomized controlled trial. In addition, it compares individual to group incentives and, under the umbrella of group incentives, compares schemes that offer groups simple goals to schemes that put groups into tournament competition. The evidence shows substantial test-score gains across treatment arms on a comprehensive national examination for students who were presented with performance-based incentives: their passing rate was nearly 10 percentage points higher than in the control group. However, the estimates are not precise enough to rule out the hypothesis that the three schemes are equally effective.
The results of this study suggest that student incentives matter and that the incentives themselves, rather than the particular design used to provide them, have the first-order effect. Since the magnitude of the average treatment effect is comparable across groups, each of the treatment arms represents a viable policy tool for policymakers who want to incentivize students. In addition, the fact that all treatments in the study relied solely on administrative data suggests that policies based on them would be relatively easy to scale up.
If this experiment were implemented in all schools, many additional factors that were not present at the scale of this research could surface. For example, if the prizes are large and made public, some parents who already offer their children incentives may cut back (Das et al. 2013). Some parents or teachers may also put excessive pressure on students to win, and in some circumstances this could become counterproductive. It is beyond the scope of this paper to answer these general equilibrium questions.
Finally, even though this study used direct monetary incentives, it is possible that
other nonmonetary incentives could work as well, as long as students care about the
reward. The nature and the size of the incentives are therefore important areas of
research for developing countries.

REFERENCES

Ammermueller, Andreas, and Jörn-Steffen Pischke. 2009. “Peer Effects in European Primary Schools:
Evidence from Progress in International Reading Literacy Study.” Journal of Labor Economics
27 (3): 315–48.
Angrist, Joshua, Eric Bettinger, and Michael Kremer. 2006. “Long-Term Educational Consequences
of Secondary School Vouchers: Evidence from Administrative Records in Colombia.” American
Economic Review 96 (3): 847–62.
Angrist, Joshua, Daniel Lang, and Philip Oreopoulos. 2009. “Incentives and Services for College
Achievement: Evidence from a Randomized Trial.” American Economic Journal: Applied Eco-
nomics 1 (1): 136–63.
Angrist, Joshua, and Victor Lavy. 2009. “The Effects of High Stakes High School Achievement
Awards: Evidence from a Randomized Trial.” American Economic Review 99 (4): 1384–1414.
Angrist, Joshua, Philip Oreopoulos, and Tyler Williams. 2010. “When Opportunity Knocks, Who
Answers? New Evidence on College Achievement Awards.” National Bureau of Economic Research
(NBER) Working Paper 16643.
Barrera-Osorio, Felipe, Marianne Bertrand, Leigh L. Linden, and Francisco Perez-Calle. 2011.
“Improving the Design of Conditional Transfer Programs: Evidence from a Randomized Educa-
tion Experiment in Colombia.” American Economic Journal: Applied Economics 3 (2): 167–95.

Bayer, Patrick, Randi Hjalmarsson, and David Pozen. 2009. “Building Criminal Capital Behind Bars:
Peer Effects in Juvenile Corrections.” Quarterly Journal of Economics 124 (1): 105–47.
Behrman, Jere R., Piyali Sengupta, and Petra Todd. 2002. The Impact of PROGRESA on Achievement
Test Scores in the First Year. International Food Policy Research Institute (IFPRI). Washington,
DC, September.
Berry, James. 2009. “Child Control in Education Decisions: An Evaluation of Targeted Incentives to
Learn in India.” http://www.depeco.econo.unlp.edu.ar/cedlas/ien/pdfs/meeting2009/papers/berry.pdf.
Bettinger, Eric P. 2008. “Paying to Learn: The Effect of Financial Incentives on Elementary School
Test Scores.” Paper presented at the CESifo/PEPG joint conference, Munich, May 16–17.
Blimpo, Moussa P. 2014. “Team Incentives for Education in Developing Countries: A Random-
ized Field Experiment in Benin: Dataset.” American Economic Journal: Applied Economics.
http://dx.doi.org/10.1257/app.6.4.90.
Carrell, Scott E., Richard L. Fullerton, and James E. West. 2009. “Does Your Cohort Matter? Measur-
ing Peer Effects in College Achievement.” Journal of Labor Economics 27 (3): 439–64.
Case, Anne C., and Lawrence F. Katz. 1991. “The Company You Keep: The Effects of Family and
Neighborhood on Disadvantaged Youths.” National Bureau of Economic Research (NBER) Work-
ing Paper 3705.
Cipollone, Piero, and Alfonso Rosolia. 2007. “Social Interactions in High School: Lessons from an
Earthquake.” American Economic Review 97 (3): 948–65.
Das, Jishnu, Stefan Dercon, James Habyarimana, Pramila Krishnan, Karthik Muralidharan, and
Venkatesh Sundararaman. 2013. “School Inputs, Household Substitution, and Test Scores.” Amer-
ican Economic Journal: Applied Economics 5 (2): 29–57.
De Giorgi, Giacomo, Michele Pellizzari, and Silvia Redaelli. 2010. “Identification of Social Interac-
tions through Partially Overlapping Peer Groups.” American Economic Journal: Applied Econom-
ics 2 (2): 241–75.
Ding, Weili, and Steven F. Lehrer. 2007. “Do Peers Affect Student Achievement in China’s Secondary
Schools?” Review of Economics and Statistics 89 (2): 300–312.
Duflo, Esther, Pascaline Dupas, and Michael Kremer. 2011. “Peer Effects, Teacher Incentives, and
the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya.” American Economic
Review 101 (5): 1739–74.
Duflo, Esther, and Emmanuel Saez. 2003. “The Role of Information and Social Interactions in Retire-
ment Plan Decisions: Evidence from a Randomized Experiment.” Quarterly Journal of Economics
118 (3): 815–42.
Friedman, Willa, Michael Kremer, Edward Miguel, and Rebecca Thornton. 2011. “Education as Lib-
eration?” National Bureau of Economic Research (NBER) Working Paper 16939.
Fryer, Roland G., Jr. 2010. “Financial Incentives and Student Achievement: Evidence from Random-
ized Trials.” National Bureau of Economic Research (NBER) Working Paper 15898.
Hoxby, Caroline. 2000. “Peer Effects in the Classroom: Learning from Gender and Race Variation.”
National Bureau of Economic Research (NBER) Working Paper 7867.
Kandel, Eugene, and Edward P. Lazear. 1992. “Peer Pressure and Partnerships.” Journal of Political
Economy 100 (4): 801–17.
Kremer, Michael, Edward Miguel, and Rebecca Thornton. 2007. "Incentives to Learn." http://digilander.libero.it/mgtund/IncentivesToLearn_Kremer.pdf.
Leuven, Edwin, Hessel Oosterbeek, and Bas Van der Klaauw. 2010. “The Effect of Financial Rewards
on Students’ Achievement: Evidence from a Randomized Experiment.” Journal of European Eco-
nomic Association 8 (6): 1243–65.
Li, Tao, Li Han, Scott Rozelle, and Linxiu Zhang. 2010. “Cash Incentives, Peer Tutoring, and Parental
Involvement: A Study of Three Educational Inputs in a Randomized Field Experiment in China.”
Stanford University Rural Education Action Project Working Paper 221.
Neal, Derek, and Diane Whitmore Schanzenbach. 2010. “Left Behind by Design: Proficiency Counts
and Test-Based Accountability.” Review of Economics and Statistics 92 (2): 263–83.
Sacerdote, Bruce. 2001. “Peer Effects with Random Assignment: Results for Dartmouth Roommates.”
Quarterly Journal of Economics 116 (2): 681–704.
Slavin, Robert E. 1984. “Students Motivating Students to Excel: Cooperative Incentives, Cooperative
Tasks, and Student Achievement.” Elementary School Journal 85 (1): 53–63.
Stiglitz, Joseph E. 1990. “Peer Monitoring and Credit Markets.” World Bank Economic Review 4 (3):
351–66.
World Bank. 2009. Le système éducatif Béninois: Analyse sectorielle pour une politique éducative plus équilibrée et plus efficace. Le Développement Humain en Afrique. Washington, DC: World Bank.
