Ewen Gallic
ewen.gallic@gmail.com
2020-2021
1. Quantiles
Some references
• Arellano, M (2009). Quantile methods, Class notes
• Charpentier, A (2020). Introduction à la science des données et à l’intelligence artificielle,
https://github.com/freakonometrics/INF7100
• Charpentier, A. (2018). Big Data for Economics, Lecture 3
• Davino et al. (2014). Quantile Regression. John Wiley & Sons, Ltd.
• Givord, P., D’Haultfœuille, X. (2013). La régression quantile en pratique, INSEE
• He, X., Wang, H. J. (2015). A Short Course on Quantile Regression.
• Koenker and Bassett Jr (1978). Regression quantiles. Econometrica: journal of the
Econometric Society (46), 33–50
• Koenker, R. (2005). Quantile Regression. Econometric Society Monographs 38. Cambridge University Press.
Ewen Gallic Machine learning and statistical learning 4/53
1.1 Median
• For a random variable Y with cumulative distribution function F, that gives F(y) = P(Y ≤ y), the median m splits the distribution into two halves: 50% of the values lie below m and 50% of the values above m.
[Figure: probability density function f(x) and cumulative distribution function F(x) of Y, with 50% of the probability mass on each side of the median; a standard normal density φ(x) is also shown.]
Numerical optimization
The median can also be obtained as the solution of a minimization problem: it is the center m that minimizes the expected absolute deviation, med(Y) = arg min_m E[|Y − m|].
What is a quantile?
In statistics and probability, quantiles are cut points dividing the range of a
probability distribution into continuous intervals with equal probabilities, or
dividing the observations in a sample in the same way. There is one fewer
quantile than the number of groups created (Wikipedia)
We thus define a probability threshold τ and then look for the value Q(τ) (i.e., the
τth quantile) such that:
• a proportion τ of the observations have a value less than or equal to Q(τ);
• a proportion 1 − τ of the observations have a value greater than or equal to Q(τ).
Formal definition
The quantile QY (τ ) of level τ ∈ (0, 1) of a random variable Y is such that:
P {Y ≤ QY (τ )} ≥ τ and P {Y ≥ QY (τ )} ≥ 1 − τ
QY(τ) = FY⁻¹(τ) = inf{y ∈ R : FY(y) ≥ τ}
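The inf-based definition above translates directly into code. Below is a minimal Python sketch (the function name is ours) that computes the empirical quantile Q(τ) = inf{y : Fn(y) ≥ τ}, where Fn is the empirical distribution function of a sample:

```python
def empirical_quantile(sample, tau):
    """Empirical version of Q(tau) = inf{y : F(y) >= tau}."""
    ys = sorted(sample)
    n = len(ys)
    for i, y in enumerate(ys, start=1):
        if i / n >= tau:  # F_n(y) = (number of observations <= y) / n
            return y
    return ys[-1]

print(empirical_quantile([3, 1, 4, 1, 5, 9, 2, 6], 0.5))  # 3
```

Because the empirical CDF is a step function, the returned quantile is always one of the observed values, consistent with taking the infimum.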
Illustration
[Figure: (A) probability density function f(x) and (B) cumulative distribution function F(x) of Y, with the quantile Q(.25) marked on both panels.]
Numerical optimization
Once again, quantiles can be viewed as particular centers of the distribution that minimize a weighted sum of absolute deviations. For the τth quantile:
Loss function
ρτ(u) = u(τ − 1{u < 0}), that is, ρτ(u) = τu if u ≥ 0 and (τ − 1)u if u < 0
Then,
• a (1 − τ) weight is assigned to the negative deviations
• a τ weight is assigned to the positive deviations
• the loss is convex but non-differentiable at u = 0
[Figure: the check loss ρτ(u) for τ = 0.1.]
Numerical optimization
The minimization program Qτ(Y) = arg min_m E[ρτ(Y − m)] can be written as:
• For a discrete variable Y with pmf f(y) = P(Y = y):
Qτ(Y) = arg min_m { (1 − τ) Σ_{y ≤ m} |y − m| f(y) + τ Σ_{y > m} |y − m| f(y) }
• For a continuous variable Y with density f:
Qτ(Y) = arg min_m { (1 − τ) ∫_{−∞}^{m} |y − m| f(y) dy + τ ∫_{m}^{+∞} |y − m| f(y) dy }
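For a small discrete distribution, the equivalence between the inf-based definition and the minimization program can be checked by brute force: the expected loss is piecewise linear and convex in m, with kinks at the support points, so it suffices to scan the support. A minimal Python sketch (the pmf and function names are illustrative):

```python
def check_loss(u, tau):
    """rho_tau(u) = tau * u if u >= 0, (tau - 1) * u otherwise."""
    return tau * u if u >= 0 else (tau - 1) * u

def discrete_quantile(pmf, tau):
    """arg min over the support of E[rho_tau(Y - m)]; ties resolve to
    the smallest element, matching the inf-based definition."""
    def expected_loss(m):
        return sum(p * check_loss(y - m, tau) for y, p in pmf.items())
    return min(sorted(pmf), key=expected_loss)

pmf = {0: 0.2, 1: 0.5, 2: 0.3}  # P(Y = y)
print(discrete_quantile(pmf, 0.5))  # 1, since F(0) = 0.2 < 0.5 <= F(1) = 0.7
```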
Boxplots
Figure 7: Boxplot, showing the quartiles Q1, Q2 and Q3.
1.3. Graphical Representation
Choropleth maps
[Figure: two choropleth maps of Marseilles; panel (A) uses class breaks based on quantiles, panel (B) equal-width cuts.]
Note: example adapted from Arthur Charpentier’s slides (INF7100, Été 2020, slide 224).
Figure 8: Proportion of individuals between the ages of 18 and 25 (inclusive) registered on the electoral lists in
Marseilles, by voting center.
2. Quantile regression
2.1 Principles
Quantile regression
In the linear regression context, we focused on the conditional distribution of y, but
only paid attention to the mean effect.
In many situations, we only look at the effects of a predictor on the conditional mean
of the response variable. But there might be some asymmetry in the effects across the
quantiles of the response variable:
• the effect of a variable may not be the same for all observations (e.g., if we
increase the minimum wage, the effect on wages may affect low wages differently
than high wages).
Quantile regression offers a way to account for these possible asymmetries.
[Figure: CDC growth charts, BMI-for-age percentiles (ages 2–20 years), with smoothed percentile curves at the 5th, 10th, 25th, 50th, 75th, 85th, 90th and 95th percentiles. Published May 30, 2000 (modified 10/16/00). Source: developed by the National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion (2000), http://www.cdc.gov/growthcharts]
Here, β represents the marginal change in the mean of the response Y following a marginal change in x.
Qτ(Y | X = x) = inf{y : F(y | x) ≥ τ}
Qτ(Y | X = x) = x⊤βτ   (2)
where βτ = [β0,τ β1,τ β2,τ . . . βp,τ]⊤ is the vector of quantile coefficients:
• it corresponds to the marginal change in the τth quantile following a marginal
change in x.
The quantile regression model can also be written as:
Y = X⊤βτ + ετ   (3)
where the error term ετ satisfies Qτ(ετ | X = x) = 0, i.e., the τth conditional quantile of the error is zero.
We can note that in the case in which τ = 1/2, this corresponds to the median.
Quantiles may not be unique:
• any element of {x ∈ R : FY(x) = τ} minimizes the expected loss
• if the solution is not unique, we have an interval of τth quantiles
▶ the smallest element is chosen (this way, the quantile function remains left-continuous).
Empirically
Now, let us turn to the estimation. Let us consider a random sample {(x1, y1), . . . , (xn, yn)}.
In the case of least squares estimation, the expectation minimizes the risk associated
with the quadratic loss function, i.e.:
E(Y) = arg min_m E[(Y − m)²]
• The sample mean solves min_m Σ_{i=1}^{n} (yi − m)²
• The least squares estimates of the parameters are obtained by minimizing Σ_{i=1}^{n} (yi − xi⊤β)²
Empirically
The τth quantile minimizes the risk associated with the asymmetric absolute loss function:
Qτ(Y) ∈ arg min_m {E[ρτ(Y − m)]}
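On a sample, the same brute-force check works: a minimizer of the empirical check-loss always sits at a data point (the objective is convex and piecewise linear with kinks at the observations), so it suffices to scan the observations. A short Python sketch (the names are ours), which also contrasts with the mean as the minimizer of the squared loss:

```python
def rho(u, tau):
    """Check loss: tau * u for u >= 0, (tau - 1) * u for u < 0."""
    return tau * u if u >= 0 else (tau - 1) * u

def sample_quantile(ys, tau):
    """arg min_m sum_i rho_tau(y_i - m), scanned over the data points."""
    return min(sorted(ys), key=lambda m: sum(rho(y - m, tau) for y in ys))

ys = [1, 2, 3, 4, 10]
print(sample_quantile(ys, 0.5))  # 3, the sample median
print(sum(ys) / len(ys))         # 4.0: the mean minimizes the squared loss
```

Note how the outlier 10 pulls the mean to 4.0 but leaves the median at 3, illustrating the robustness of quantile-based centers.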
[Figure 12: Food expenditure as a function of income; (A) food expenditure (BEF/year) against household income (BEF/year) with the linear quantile regression fit for τ = 0.1, (B) slope of the linear quantile regression as a function of the probability level τ.]
From the previous graph, we observe that the slopes vary substantially depending
on the level of the quantile:
• for the individuals with the lowest incomes, the slope is relatively small (for τ = .1,
it is 0.40)
• for individuals with higher incomes, the slope is relatively larger (for τ = .9, it is
0.69)
Therefore, the higher the income category, the greater the effect of income on consumption.
Location-shift model
Y = X⊤γ + ε   (6)
We have:
Qτ(Y | X = x) = x⊤γ + Qτ(ε)
In this model, known as the location-shift model, the only coefficient that varies
with τ is the coefficient associated with the constant: β0,τ = γ1 + Qτ(ε).
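This property can be checked numerically. A minimal Python sketch of a hypothetical location-shift model (the parameter values below are arbitrary, chosen for illustration): with Gaussian errors, the conditional quantile lines all share the same slope, and only the intercept moves with τ.

```python
from statistics import NormalDist

# Hypothetical location-shift model: Y = g0 + g1 * x + eps, eps ~ N(0, 2^2)
g0, g1, sigma = 3.0, 0.5, 2.0

def cond_quantile(x, tau):
    """Q_tau(Y | X = x) = g0 + g1 * x + Q_tau(eps)."""
    return g0 + g1 * x + NormalDist(0.0, sigma).inv_cdf(tau)

# The slope between two x values does not depend on tau:
# only the intercept is shifted by Q_tau(eps).
for tau in (0.1, 0.5, 0.9):
    print(round(cond_quantile(1.0, tau) - cond_quantile(0.0, tau), 6))  # 0.5
```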
Location-shift model
[Figure: simulated data from the linear model Y = β0 + β1 x + ε, with β0 = 3 and ε ∼ N(0, 4); the fitted quantile regression lines (τ = 0.01, 0.25, 0.5, 0.75, . . .) are parallel to the linear regression fit.]
Location-scale model
Now, let us consider a location-scale model. In this model, we assume that the
predictors have an impact both on the mean and on the variance of the response:
Y = X⊤β + (X⊤γ)ε
where ε is independent of X.
As Qτ(aY + b) = aQτ(Y) + b, we can write (for x such that x⊤γ > 0):
Qτ(Y | X = x) = x⊤β + (x⊤γ)Qτ(ε)
[Figure: simulated data from a location-scale model; the fitted quantile regression lines (τ = 0.25, 0.5, 0.9, . . .) are no longer parallel.]
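By contrast with the location-shift case, the slope of the conditional quantile line now depends on τ. A small Python sketch of a hypothetical location-scale model (the parameter values are purely illustrative):

```python
from statistics import NormalDist

# Hypothetical location-scale model: Y = b0 + b1 * x + (s0 + s1 * x) * eps,
# with eps ~ N(0, 1) independent of x, and s0 + s1 * x > 0.
b0, b1, s0, s1 = 3.0, 0.5, 1.0, 0.05

def cond_quantile(x, tau):
    """Q_tau(Y | X = x) = b0 + b1 * x + (s0 + s1 * x) * Q_tau(eps)."""
    return b0 + b1 * x + (s0 + s1 * x) * NormalDist().inv_cdf(tau)

# The slope is now b1 + s1 * Q_tau(eps): it increases with tau, so the
# conditional quantile lines fan out instead of being parallel.
for tau in (0.1, 0.5, 0.9):
    print(round(cond_quantile(1.0, tau) - cond_quantile(0.0, tau), 3))
```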
2.2 Inference
Asymptotic distribution
As there is no explicit form for the estimator β̂τ, establishing its consistency is not as easy
as it is with the least squares estimator.
First, we need some assumptions to achieve consistency:
• the observations {(x1, y1), . . . , (xn, yn)} must be conditionally i.i.d.
• the predictors must have a bounded second moment, i.e., E[‖Xi‖²] < ∞
• the error terms εi must be continuously distributed given Xi, and centered, i.e., ∫_{−∞}^{0} fε(ε) dε = 0.5
• E[fε(0)XX⊤] must be positive definite (local identification property).
Asymptotic distribution
Under those weak conditions, β̂τ is asymptotically normal:
√n (β̂τ − βτ) →d N(0, τ(1 − τ) Dτ⁻¹ Ωx Dτ⁻¹)   (7)
where
Dτ = E[fε(0)XX⊤] and Ωx = E[XX⊤]
For the location-shift model (Eq. 6), the asymptotic variance of β̂τ writes:
V̂ar[β̂τ] = (τ(1 − τ) / [f̂ε(0)]²) ((1/n) Σ_{i=1}^{n} xi xi⊤)⁻¹   (8)
where f̂ε(0) can be estimated using a histogram (see Powell 1991).
Confidence intervals
If we have obtained an estimator of Var[β̂τ], the confidence interval of level 1 − α is given by:
β̂τ ± z_{1−α/2} √(V̂ar[β̂τ])   (9)
Otherwise, we can rely on bootstrap or resampling methods (see Emmanuel Flachaire’s course).
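Applying formula (9) is mechanical once an estimate and its variance are available. A minimal Python sketch (the coefficient and variance below are made up for the example, not taken from the data of this course):

```python
from statistics import NormalDist

# Purely illustrative numbers: an estimated coefficient and its
# estimated variance (both made up for the example).
beta_hat, var_hat, alpha = 0.40, 0.0004, 0.05

z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{1 - alpha/2}, about 1.96
half_width = z * var_hat ** 0.5
print(round(beta_hat - half_width, 3), round(beta_hat + half_width, 3))
# 0.361 0.439
```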
Birth data
Let us consider an example provided in Koenker (2005), about the impact of demographic
characteristics and maternal behaviour on the birth weight of infants born in the US.
Let us use the Natality Data from the National Vital Statistics System, freely available
on the NBER website (https://data.nber.org/data/vital-statistics-natality-data.html).
The raw data for 2018 cover more than 3 million babies born in that year, from American
mothers aged between 18 and 50.
For the sake of the exercise, we only use a sample of 20,000 individuals.
Variables kept
The predictors used are:
• education (less than high school (ref), high school, some college, college graduate)
• prenatal medical care (no prenatal visit, first prenatal visit in the first trimester of
pregnancy (ref), first visit in the second trimester, first visit in the last trimester)
• the sex of the infant
• the marital status of the mother
• the ethnicity of the mother
• the age of the mother
• whether the mother smokes
• the number of cigarettes per day
• the mother’s weight gain
Age
[Figure: (A) birth weight (in grams) as a function of the age of the mother (years), with the quantile regression fit for τ = 0.1; (B) estimated coefficient of age across probability levels τ = 0.1, . . . , 0.9.]
Age
• If the slopes are not parallel, and the observed differences are significant, then the
effect of age on the baby’s weight differs depending on where the baby lies in the
distribution.
• At the lower part of the distribution of weights (τ = .1, relatively light babies): the
effect of age on weight is negative but not significant.
• At the upper part of the distribution of weights (τ = .9, relatively heavy babies): the
effect of age on weight is positive and significant.
[Figure 13: Birth weight (in grams) as a function of the age of the mother (years), non-linear quantile regression fits for τ = 0.1, 0.25, 0.5, 0.75, 0.9.]
The influence of the mother’s age on birth weight may be thought to be non-linear.
For heavy babies (and also for light babies), birth weight is relatively lower at the
edges of the mother’s age distribution.
Cigarettes
[Figure: (A) birth weight (in grams) by smoking status of the mother (does not smoke / smokes); (B) estimated coefficient of smoking across probability levels τ = 0.1, . . . , 0.9.]
References I
Alvaredo, F., Chancel, L., Piketty, T., Saez, E., and Zucman, G. (2018). The elephant curve of global inequality
and growth. AEA Papers and Proceedings, 108:103–08.
Davino, C., Furno, M., and Vistocco, D. (2014). Quantile Regression. John Wiley & Sons, Ltd.
Koenker, R. and Bassett Jr, G. (1978). Regression quantiles. Econometrica: Journal of the Econometric Society,
46:33–50.
Koenker, R. and Bassett Jr, G. (1982). Robust tests for heteroscedasticity based on regression quantiles. Econo-
metrica: Journal of the Econometric Society, pages 43–61.
Powell, J. L. (1991). Estimation of monotonic regression models under quantile restrictions. Nonparametric and
semiparametric methods in Econometrics, pages 357–384.