
Hindawi

Journal of Probability and Statistics


Volume 2019, Article ID 7691841, 6 pages
https://doi.org/10.1155/2019/7691841

Research Article
Bootstrapping Nonparametric Prediction Intervals for
Conditional Value-at-Risk with Heteroscedasticity

Emmanuel Torsen and Lema Logamou Seknewna


Department of Mathematics, Pan African University, Institute of Basic Sciences, Technology, and Innovation, Kenya

Correspondence should be addressed to Emmanuel Torsen; torsen@mautech.edu.ng

Received 24 December 2018; Revised 16 April 2019; Accepted 23 April 2019; Published 7 May 2019

Academic Editor: Dejian Lai

Copyright © 2019 Emmanuel Torsen and Lema Logamou Seknewna. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the
original work is properly cited.

Using the bootstrap method, we construct nonparametric prediction intervals for Conditional Value-at-Risk for returns that
admit a heteroscedastic location-scale model, where the location and scale functions are smooth and the distribution function
of the error term is unknown and assumed to be independent of the covariate. The prediction intervals perform well for large
sample sizes and are relatively narrow, which is consistent with what is obtainable in the literature.

1. Introduction

The field of prediction intervals (PIs) spans many decades and can be traced back to the work of Baker in 1935 (see [1] for details). The three intervals most often used in statistics, namely, the confidence interval, the prediction interval, and the tolerance interval, were reviewed by [2]. A prediction interval is usually required to contain a single future observation from a population with a specified coverage probability, and in statistical modeling such a future observation is assumed to follow a particular distribution. The distribution could have a finite number of unknown parameters (e.g., the normal distribution or the t distribution), and hence, if those parameters are correctly estimated, the prediction interval is obtained. Clearly, this methodology relies heavily on the underlying distribution of the future observation. The question is, what happens if the distributional assumption is not correct? The prediction interval may then be incorrect. Alternatively, a nonparametric methodology can be used to overcome this problem [3], which is the focus of this paper. The prediction intervals literature contains a good review of the case where the future observation does not depend on the sample under consideration, for both parametric and nonparametric methods (see [4] for details). The authors in [5] examined the problem of constructing NPIs for a future sample median; an asymptotic relative efficiency analysis comparing the NPIs with their parametric prediction interval (PPI) counterparts showed the NPIs to have better properties. Reference [3] studied the problem of constructing NPIs for future observations in mixed linear models, what the authors therein called distribution-free PIs in mixed linear models. They assumed that the future observation is independent of the present ones. The authors showed that, for standard mixed linear models, the construction of PIs based on regression residuals works well. For nonstandard mixed linear models, however, the problem is more involved, since one has to estimate the distribution of the random effect. Also, [6] considered the problem of constructing NPIs for a nonparametric autoregressive (NAR) model structure; the authors compared two strategies, namely, the Gaussian and bootstrap strategies, and the latter was found to outperform the former.

Forecasting future values (observations) is one of the most familiar tasks in time-series modeling. To assess forecast accuracy, the prediction error is defined and treated as a measure of the uncertainty of the forecast. The construction of PIs for future observations is closely related to this task [6].

Value-at-Risk (VaR), as defined by [7], estimates the worst loss that could occur over a specified period of time at a given confidence level. The importance of VaR as a risk measure in the financial industry cannot be overemphasized; it is currently used globally by many financial institutions (e.g., banks, investment funds, and brokerage firms) and even nonfinancial companies.

Hence, in this paper, we consider the problem of constructing NPIs for Conditional Value-at-Risk (CVaR) for returns that admit a location-scale model with heteroscedasticity, when the distribution of the innovation is unknown, using the bootstrap method.

2. The Nonparametric Prediction Intervals (NPIs) for CVaR

We assume that the sequence {Y_t} satisfies a certain weak dependence condition. Precisely, we consider the concept of strong mixing coefficients introduced by [8].

Definition 1 (α-mixing (strong mixing)). Let \(\mathcal{F}_k^l\) be the σ-algebra of events generated by {Y_i, k ≤ i ≤ l} for l > k. The α-mixing coefficient introduced by [8] is defined as

\[
\alpha(k) = \sup_{A \in \mathcal{F}_1^{i},\; B \in \mathcal{F}_{i+k}^{\infty}} \left| \mathbb{P}(AB) - \mathbb{P}(A)\,\mathbb{P}(B) \right|. \tag{1}
\]

The series is said to be α-mixing if

\[
\lim_{k \to \infty} \alpha(k) = 0. \tag{2}
\]

The dependence described by α-mixing is the weakest, as it is implied by the other types of mixing.
Definition 2 (pivotal quantity (pivot)). Let X = (X_1, ..., X_n) be random variables with unknown joint distribution F, and let θ(F) denote a real-valued parameter. A random variable Q(X, θ(F)) is a pivot if the distribution of Q(X, θ(F)) is independent of all parameters; that is, if X ∼ F(x | θ(F)), then Q(X, θ(F)) has the same distribution for all θ(F).

Consider the function estimator \(\widehat{\mathrm{CVaR}}(x)_\tau\) in (6); the asymptotic distribution of a pivotal quantity is used to construct confidence intervals (CIs). Let us define \(Q(\mathrm{CVaR}(x)_\tau, \widehat{\mathrm{CVaR}}(x)_\tau)\) to be the pivotal statistic given as

\[
Q\left(\mathrm{CVaR}(x)_\tau, \widehat{\mathrm{CVaR}}(x)_\tau\right) = \frac{\widehat{\mathrm{CVaR}}(x)_\tau - \mathrm{CVaR}(x)_\tau}{\sqrt{V(x)_\tau}} \tag{3}
\]

where V(x)_τ is the variance of the function estimator, defined in (28). Similarly, one can construct PIs by simply replacing the standard deviation \(\sqrt{V(x)_\tau}\) by the standard deviation of the prediction, \(\sqrt{\sigma^2(x) + V(x)_\tau}\) (see [6] for details). The problem of bias correction is discussed in Section 2.2.

2.1. The Location-Scale Model Structure. Consider {Y_t} to be a stochastic process representing the returns on a given stock, portfolio, or market index, where t ∈ Z indexes a discrete measure of time and F(y | x) denotes the conditional distribution of Y_t given X_t = x. The vector X_t ∈ R^d normally includes lagged returns Y_{t−l}, 1 ≤ l ≤ p, for some p ∈ N, as well as other relevant conditioning variables that reflect economic or market conditions.

Here, we assume that the process {Y_t} admits a location-scale representation given as

\[
Y_t = m(X_t) + \sqrt{h(X_t)}\,\epsilon_t \tag{4}
\]

where m(·) is the unknown nonparametric regression curve, h(·) > 0 is a conditional scale function representing heteroscedasticity, defined on the range of X_t, and {ε_t} is an independent and identically distributed (iid) innovation process, independent of X_t, with E(ε_t) = 0, Var(ε_t) = 1, and unknown distribution function F_ε.

From (4),

\[
\mathrm{CVaR}(x)_\tau \equiv Q_{Y|X}(\tau \mid x) = m(x) + \sqrt{h(x)}\,q(\tau) \tag{5}
\]

where Q_{Y|X}(τ | x) is the conditional τ-quantile associated with F(y | x) and q(τ) = F_ε^{-1}(τ) is the τ-quantile associated with the unknown F_ε. It follows from (5) that

\[
\widehat{\mathrm{CVaR}}(x)_\tau \equiv \widehat{Q}_{Y|X}(\tau \mid x) = \widehat{m}(x) + \sqrt{\widehat{h}(x)}\,\widehat{q}(\tau) \tag{6}
\]

The asymptotic properties of (6) have been studied in [9]. Furthermore, the estimators \(\widehat{m}\) and \(\widehat{h}\) in (6) have been studied extensively in [10, 11] (see the aforementioned articles for details). See also [12] for a detailed discussion of the estimator \(\widehat{q}(\tau)\) in (6). Nevertheless, we briefly mention the results of these works. Reference [10] considered the problem of estimating m(·) in (4) by Local Linear Regression (LLR), in which m(x) is estimated by the intercept α. Suppose that the second derivative of m(·) exists in a small neighborhood of x; then

\[
m(X) \approx m(x) + m'(x)(X - x) \equiv \alpha + \beta(X - x) \tag{7}
\]

Now, let us consider a sample \(\{X_t, Y_t\}_{t=1}^{n}\) and the LLR problem: find α and β to minimize

\[
\sum_{t=1}^{n} \left(Y_t - \alpha - \beta(X_t - x)\right)^2 K_1\!\left(\frac{x - X_t}{b_1}\right) \tag{8}
\]

Let \(\widehat{\alpha}\) and \(\widehat{\beta}\) be the solution to the weighted least squares (WLS) problem in (8). Then

\[
\widehat{\alpha} = \frac{\sum_{t=1}^{n} W_t Y_t}{\sum_{t=1}^{n} W_t} \tag{9}
\]

where

\[
W_t(x; b_1) = K_1\!\left(\frac{x - X_t}{b_1}\right)\left[S_{n,2} - (x - X_t)\, S_{n,1}\right] \tag{10}
\]

and

\[
S_{n,j} = \sum_{t=1}^{n} K_1\!\left(\frac{x - X_t}{b_1}\right)(x - X_t)^j, \quad j = 1, 2 \tag{11}
\]

Thus, the LLR estimator for m(x) is defined as

\[
\widehat{m}(x) = \frac{\sum_{t=1}^{n} W_t Y_t}{\sum_{t=1}^{n} W_t} \tag{12}
\]
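The estimator (9)–(12) can be computed directly from its weights. Below is a minimal sketch, assuming a standard Gaussian kernel for K₁ and a user-chosen bandwidth b₁; neither choice is prescribed in this paper:

```python
import numpy as np

def gauss_kernel(u):
    """Standard Gaussian kernel K(u); an illustrative choice for K1."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def llr_mean(x, X, Y, b1):
    """Local linear estimate m_hat(x) of (12), built from the weights (10)-(11)."""
    k = gauss_kernel((x - X) / b1)
    d = x - X
    S1 = np.sum(k * d)       # S_{n,1} in (11)
    S2 = np.sum(k * d**2)    # S_{n,2} in (11)
    W = k * (S2 - d * S1)    # local linear weights W_t in (10)
    return np.sum(W * Y) / np.sum(W)
```

The same machinery, applied with kernel K₂ and bandwidth b₂ to the squared residuals r_t, yields the variance-function estimator in (15)–(20) below.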

where the mean, variance, and bias of (12) are

\[
\mathbb{E}[\widehat{m}(x)] = m(x) + \frac{b_1^2}{2}\, m''(x) \int_{-\infty}^{\infty} u^2 K(u)\,du = m(x) + \frac{b_1^2}{2}\, m''(x)\,\mu_2(k) \tag{13}
\]

\[
\operatorname{var}[\widehat{m}(x)] = \frac{\sigma^2(x)}{n b_1 f(x)} \int_{-\infty}^{\infty} K^2(u)\,du = \frac{\sigma^2(x)\, R(k)}{n b_1 f(x)}
\]

and

\[
\operatorname{Bias}(\widehat{m}(x)) = \frac{b_1^2}{2}\, m''(x)\,\mu_2(k) \tag{14}
\]

Next, we briefly state the estimation of h(x) in (4); the procedure proposed by [11] is outlined below. The estimator of h(x) is

\[
\widehat{h}(x) := \widehat{\Gamma} \tag{15}
\]

where

\[
(\widehat{\Gamma}, \widehat{\Gamma}_1) = \arg\min_{\Gamma,\, \Gamma_1} \sum_{t=1}^{n} \left(r_t - \Gamma - \Gamma_1 (X_t - x)\right)^2 K_2\!\left(\frac{X_t - x}{b_2}\right) \tag{16}
\]

Now, with the estimator in (12), we have the sequence of squared residuals \(\{r_t = (Y_t - \widehat{m}(X_t))^2\}_{t=1}^{n}\). Therefore,

\[
\widehat{\Gamma} = \frac{\sum_{t=1}^{n} W_t r_t}{\sum_{t=1}^{n} W_t} \tag{17}
\]

where

\[
W_t(x; b_2) = K_2\!\left(\frac{x - X_t}{b_2}\right)\left[S_{n,2} - (x - X_t)\, S_{n,1}\right] \tag{18}
\]

and

\[
S_{n,j} = \sum_{t=1}^{n} K_2\!\left(\frac{x - X_t}{b_2}\right)(x - X_t)^j, \quad j = 1, 2 \tag{19}
\]

Hence, the smooth estimator for h(x) is

\[
\widehat{h}(x) = \frac{\sum_{t=1}^{n} W_t r_t}{\sum_{t=1}^{n} W_t} \tag{20}
\]

with mean, variance, and bias given as

\[
\mathbb{E}(\widehat{h}(x)) = h(x) + \frac{b_2^2}{2}\,\mu_2(k)\, h''(x),
\qquad
\operatorname{var}(\widehat{h}(x)) = \frac{1}{n b_2 f(x)}\, R(k)\, h^2(x)\, \lambda^2(x) \tag{21}
\]

and

\[
\operatorname{Bias}(\widehat{h}(x)) = \frac{b_2^2}{2}\,\mu_2(k)\, h''(x) \tag{22}
\]

The estimators in (12) and (20) are then used to get a sequence of standardized nonparametric residuals (SNR) \(\{\widehat{\epsilon}_t\}_{t=1}^{n}\), where

\[
\widehat{\epsilon}_t =
\begin{cases}
\dfrac{Y_t - \widehat{m}(X_t)}{\sqrt{\widehat{h}(X_t)}}, & \text{if } \widehat{h}(X_t) > 0 \\[1.5ex]
0, & \text{if } \widehat{h}(X_t) \le 0
\end{cases} \tag{23}
\]

The cumulative distribution estimator of F_ε is obtained from these SNR:

\[
\widehat{F}_\epsilon(x) = \frac{1}{n} \sum_{t=1}^{n} \mathbf{1}\left(\widehat{\epsilon}_t \le x\right) \tag{24}
\]

which is the unconditional cumulative distribution estimator for F_ε(x), the distribution of the error innovation (see [12] for more details). Now, [12] estimated q(τ) by \(\widehat{q}(\tau)\), the sample τth quantile of \(\{\widehat{\epsilon}_t\}_{t=1}^{n}\):

\[
\widehat{q}(\tau) = \inf\{x : \widehat{F}_\epsilon(x) \ge \tau\} = \widehat{F}_\epsilon^{-1}(\tau) \tag{25}
\]
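The residual-based quantities (20) and (23)–(25), together with the plug-in estimator (6), can be sketched as follows, reusing gauss_kernel and llr_mean from the sketch above; the bandwidth choices and the small numerical guard on \(\widehat{h}\) are our assumptions:

```python
# Builds on gauss_kernel and llr_mean defined in the previous sketch.
import numpy as np

def llr_variance(x, X, Y, b1, b2):
    """Variance estimate h_hat(x) of (20): LLR applied to squared residuals r_t."""
    m_at_X = np.array([llr_mean(xt, X, Y, b1) for xt in X])
    r = (Y - m_at_X) ** 2               # squared residuals r_t
    return llr_mean(x, X, r, b2)        # same weighting scheme, kernel K2 / bandwidth b2

def snr(X, Y, b1, b2):
    """Standardized nonparametric residuals (23)."""
    m = np.array([llr_mean(xt, X, Y, b1) for xt in X])
    h = np.array([llr_variance(xt, X, Y, b1, b2) for xt in X])
    return np.where(h > 0, (Y - m) / np.sqrt(np.maximum(h, 1e-12)), 0.0)

def q_hat(eps, tau):
    """Sample tau-quantile (25): inf{x : F_hat(x) >= tau}, with F_hat as in (24)."""
    s = np.sort(eps)
    return s[max(int(np.ceil(tau * len(s))) - 1, 0)]

def cvar_hat(x, X, Y, b1, b2, tau):
    """Plug-in CVaR estimator (6): m_hat(x) + sqrt(h_hat(x)) * q_hat(tau)."""
    h = max(llr_variance(x, X, Y, b1, b2), 0.0)
    return llr_mean(x, X, Y, b1) + np.sqrt(h) * q_hat(snr(X, Y, b1, b2), tau)
```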
To construct the NPIs discussed, we compute the mean and variance of the estimator in (6). The mean is

\[
\begin{aligned}
\mathbb{E}\!\left[\widehat{\mathrm{CVaR}}(x)_\tau\right] &= \mathbb{E}\!\left[\widehat{Q}_{Y|X}(\tau \mid x)\right] = \mathbb{E}[\widehat{m}(x)] + \mathbb{E}\!\left[\sqrt{\widehat{h}(x)}\;\widehat{q}(\tau)\right] \\
&= m(x) + \frac{b_1^2}{2}\, m''(x)\,\mu_2(k) + \left[h^{1/2}(x) + \frac{b_2^2}{2}\,\mu_2(k)\, h''(x)\right] q(\tau) \\
&\approx \underbrace{m(x) + h^{1/2}(x)\, q(\tau)}_{\text{true function}} + \underbrace{\frac{b_1^2}{2}\, m''(x)\,\mu_2(k) + \frac{b_2^2}{2}\,\mu_2(k)\, h''(x)\, q(\tau)}_{\text{bias}}
\end{aligned} \tag{26}
\]

so that

\[
\operatorname{Bias}\!\left[\widehat{\mathrm{CVaR}}(x)_\tau\right] \approx \frac{b_1^2}{2}\, m''(x)\,\mu_2(k) + \frac{b_2^2}{2}\,\mu_2(k)\, h''(x)\, q(\tau) = B(x)_\tau \tag{27}
\]
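As a toy illustration of how the two terms of B(x)_τ in (27) combine, the snippet below assumes a standard Gaussian kernel (for which μ₂(k) = 1) and hypothetical values for the bandwidths, the curvatures m''(x) and h''(x), and the error quantile q(τ); none of these numbers come from the paper:

```python
# Toy evaluation of B(x)_tau in (27); all numbers are hypothetical.
mu2 = 1.0                       # mu_2(k) = int u^2 K(u) du for the Gaussian kernel
b1, b2, q_tau = 0.3, 0.4, -2.0  # illustrative bandwidths and error quantile
m2, h2 = 0.8, 1.5               # hypothetical curvatures m''(x) and h''(x)
bias = 0.5 * b1**2 * m2 * mu2 + 0.5 * b2**2 * mu2 * h2 * q_tau
print(f"B(x)_tau = {bias:+.4f}")  # +0.0360 - 0.2400 = -0.2040
```

For a lower-tail level such as τ = 0.05, q(τ) is negative, so the curvature of the variance function pulls the bias downward.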

For simplicity, we used the variance of the variance function instead of its standard deviation in (6); the variance of the estimator is

\[
\begin{aligned}
\operatorname{var}\!\left[\widehat{\mathrm{CVaR}}(x)_\tau\right] &= \operatorname{var}\!\left[\widehat{Q}_{Y|X}(\tau \mid x)\right] \\
&= \operatorname{var}\!\left[\widehat{m}(x) + \widehat{h}(x)\,\widehat{q}(\tau)\right] \\
&= \operatorname{var}[\widehat{m}(x)] + \operatorname{var}\!\left[\widehat{h}(x)\,\widehat{q}(\tau)\right] + 2\underbrace{\operatorname{cov}\!\left(\widehat{m}(x), \widehat{h}(x)\right)}_{=0} \\
&= \frac{\sigma^2(x)\, R(k)}{n b f(x)} + \left[\frac{1}{n b f(x)}\, R(k)\, h^2(x)\, \lambda^2(x)\right] \widehat{q}(\tau) \\
&= \frac{R(k)}{n b f(x)} \left[\sigma^2(x) + h^2(x)\, \lambda^2(x)\,\widehat{q}(\tau)\right] \\
&\approx \frac{R(k)}{n b f(x)} \left[\sigma^2(x) + h^2(x)\, \lambda^2(x)\, q(\tau)\right] = V(x)_\tau
\end{aligned} \tag{28}
\]

where

\[
\lambda^2(x) = \mathbb{E}\!\left[\left(\epsilon^2 - 1\right)^2 \mid X = x\right],
\qquad
\epsilon = \frac{Y - m(X)}{\sqrt{h(X)}},
\qquad
\mu_2(k) = \int u^2 K(u)\,du,
\quad
R(k) = \int K^2(u)\,du \tag{29}
\]

The covariance term in (28) is zero (cov(·) = 0) because its arguments are the mean m(·) and variance h(·) functions, respectively, and hence are independent (see [13]).
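A plug-in version of V(x)_τ can be sketched from (28)–(29). Here we assume a Gaussian kernel (so R(k) = 1/(2√π)), a kernel density estimate for f(x), a local linear smooth of (ε̂² − 1)² for λ²(x), and ĥ(x) as a stand-in for σ²(x); all four substitutions are our assumptions, not choices stated in the paper:

```python
# Builds on gauss_kernel, llr_mean, llr_variance, snr, q_hat defined above.
import numpy as np

R_K = 1.0 / (2.0 * np.sqrt(np.pi))   # R(k) = int K^2(u) du for the Gaussian kernel

def kde(x, X, b):
    """Kernel density estimate f_hat(x) of the stationary density f."""
    return np.mean(gauss_kernel((x - X) / b)) / b

def v_hat(x, X, Y, b1, b2, tau):
    """Plug-in version of V(x)_tau in (28), following the printed formula."""
    eps = snr(X, Y, b1, b2)
    lam2 = llr_mean(x, X, (eps**2 - 1.0) ** 2, b2)   # lambda^2(x) in (29)
    h = max(llr_variance(x, X, Y, b1, b2), 0.0)
    sigma2 = h                                        # assumption: sigma^2(x) proxied by h_hat(x)
    return R_K * (sigma2 + h**2 * lam2 * q_hat(eps, tau)) / (len(X) * b1 * kde(x, X, b1))
```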
Next, we state the asymptotic distribution of the local linear regression estimator (6). Assume that f(x) is the density of the lag vector of returns at the point x. Then, by the central limit theorem, the asymptotic normal distribution of (6) is given as

\[
\sqrt{n b}\left[\widehat{\mathrm{CVaR}}(x)_\tau - \mathrm{CVaR}(x)_\tau - B(x)_\tau\right] \xrightarrow{\;d\;} \mathcal{N}\!\left(0,\, V(x)_\tau\right) \tag{30}
\]

Looking at the asymptotic bias (27) shows that the second derivatives of the mean function m(x) and the variance function h(x) have to exist in the neighborhood of x for the asymptotic normal distribution of (6) given in (30) to hold. Hence, we assume that CVaR(x)_τ ∈ C²(R); we also assume that f(x) and σ²(x) are both continuous and that σ²(x) > 0, since both appear in (28). The estimation of second-order derivatives is required for a consistent bias estimate and, consequently, may lead to large variability. Hence, as opined by [6], it is sensible to compute NPIs without the bias correction, in our case (27).

With the proposed estimator (24) of the residual distribution by [12], the estimators (12) and (20) of the mean and variance functions with their asymptotic properties, respectively, and the asymptotic normal distribution of (6), one can obtain NPIs for CVaR.

2.2. Bootstrap Method. This strategy consists of estimating the distribution of the pivotal quantity given below:

\[
Q\left(\mathrm{CVaR}(x)_\tau, \widehat{\mathrm{CVaR}}(x)_\tau\right) = \frac{\widehat{\mathrm{CVaR}}(x)_\tau - \mathrm{CVaR}(x)_\tau}{\sqrt{\sigma^2(x) + \widehat{V}(x)_\tau}} \tag{31}
\]

using the bootstrap method. The distribution of (31) is approximated by the distribution of the bootstrapped statistic

\[
T\left(\widehat{\mathrm{CVaR}}(x)_\tau, \widehat{\mathrm{CVaR}}{}^{*}(x)\right) = \frac{\widehat{\mathrm{CVaR}}{}^{*}(x) - \widehat{\mathrm{CVaR}}(x)_\tau}{\sqrt{\sigma^2(x) + \widehat{V}^{*}(x)_\tau}} \tag{32}
\]

where * denotes the bootstrap counterparts of the estimates. Hence, we have the following nonparametric prediction interval with (1 − τ) asymptotic coverage probability:

\[
I_T = \left[\widehat{\mathrm{CVaR}}(x)_\tau + \sqrt{\sigma^2(x) + \widehat{V}^{*}(x)_\tau}\;\widehat{q}(a),\;\; \widehat{\mathrm{CVaR}}(x)_\tau + \sqrt{\sigma^2(x) + \widehat{V}^{*}(x)_\tau}\;\widehat{q}(b)\right] \tag{33}
\]

The algorithm for the bootstrap procedure is given as follows:

Algorithm (Bootstrap)

(1) Generate m data sets from the (unknown) probability model of the data-generating process in (37), with independently and identically distributed random errors from the unknown distribution function F_ε.

(2) Calculate \(\widehat{m}(x)\) and \(\widehat{h}(x)\). The estimated errors are as in (23), from which one obtains the estimator \(\widehat{F}_\epsilon(x)\) in (24).

(3) For each bootstrap data set, compute \(\widehat{\mathrm{CVaR}}{}^{*}(x)_j\), j = 1, 2, ..., m.

(4) Compute the average function \(\overline{\widehat{\mathrm{CVaR}}}{}^{*}(x)_m\) given by

\[
\overline{\widehat{\mathrm{CVaR}}}{}^{*}(x)_m = \frac{\widehat{\mathrm{CVaR}}^{*}_1(x) + \widehat{\mathrm{CVaR}}^{*}_2(x) + \cdots + \widehat{\mathrm{CVaR}}^{*}_m(x)}{m} \tag{34}
\]

(5) The standard error between the curves is

\[
se = \sqrt{\frac{1}{m \times n} \sum_{i=1}^{n} \sum_{j=1}^{m} \left(\widehat{\mathrm{CVaR}}{}^{*}(x_i)_j - \overline{\widehat{\mathrm{CVaR}}}{}^{*}(x_i)_m\right)^2} \tag{35}
\]

(6) The lower and upper bounds of the NPIs at level α are therefore given by

\[
L = \overline{\widehat{\mathrm{CVaR}}}{}^{*}(x)_m - q(\tau)\,\frac{se}{\sqrt{m}},
\qquad
U = \overline{\widehat{\mathrm{CVaR}}}{}^{*}(x)_m + q(\tau)\,\frac{se}{\sqrt{m}} \tag{36}
\]

where \(q(\tau) = \widehat{F}_\epsilon^{-1}(\tau)\) is the τth quantile of the empirical cumulative distribution function of the error in (24) above; for instance, τ = 0.05. A condensed code sketch of steps (1)–(6) is given below.
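The following is a minimal sketch of steps (1)–(6), reusing the helpers defined earlier. It uses a fixed-design residual bootstrap (returns are regenerated at the observed covariates from resampled SNR), which simplifies the recursive regeneration implied by step (1); the evaluation grid xs, the bandwidths, and m = 200 are illustrative:

```python
# Builds on snr, llr_mean, llr_variance, q_hat, cvar_hat defined above.
# Written for clarity, not speed (the nested smoothing is O(n^2) per point).
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_npi(X, Y, xs, b1, b2, tau, m=200):
    eps = snr(X, Y, b1, b2)
    m_fit = np.array([llr_mean(xt, X, Y, b1) for xt in X])
    h_fit = np.maximum([llr_variance(xt, X, Y, b1, b2) for xt in X], 0.0)
    curves = np.empty((m, len(xs)))
    for j in range(m):                                   # steps (1)-(3)
        e_star = rng.choice(eps, size=len(Y), replace=True)
        Y_star = m_fit + np.sqrt(h_fit) * e_star         # regenerated returns
        curves[j] = [cvar_hat(x, X, Y_star, b1, b2, tau) for x in xs]
    avg = curves.mean(axis=0)                            # average curve (34)
    se = np.sqrt(np.mean((curves - avg) ** 2))           # standard error (35)
    half = q_hat(eps, tau) * se / np.sqrt(m)             # q(tau) * se / sqrt(m)
    return avg - half, avg + half                        # bounds L and U as written in (36)
```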
3. Simulation Study

To illustrate the NPIs, we conducted a simulation study considering the following data-generating location-scale model:

\[
X_t = m(X_{t-1}) + h(t)^{1/2}\,\epsilon_t, \quad t = 1, 2, \ldots, n \tag{37}
\]

where

\[
m(X_{t-1}) = \sin(0.5\, X_{t-1}),
\qquad
\epsilon_t \sim t(\nu = 3),
\qquad
h(t) = h_i(X_{t-1}) + \theta\, h(t-1), \quad i = 1, 2 \tag{38}
\]

and

\[
h_1(X_{t-1}) = 1 + 0.01\, X_{t-1}^2 + 0.5 \sin(X_{t-1}),
\qquad
h_2(X_{t-1}) = 1 - 0.9 \exp\!\left(-2 X_{t-1}^2\right) \tag{39}
\]

X_t and h(t) are set to zero initially; then X_t is generated recursively from (37) above. This data-generating process was considered by [14] and also used by [12]. We simulated n = 3000 observations and 200 samples for the purpose of bootstrapping.

Figure 1: Plot of the simulated daily returns showing their evolution.
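The DGP (37)–(39) can be simulated as below. We read (38) as the recursion h(t) = h_i(X_{t−1}) + θh(t−1); the value θ = 0.5, the choice i = 1, and the rescaling of the t(3) innovations to unit variance (as model (4) requires) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_returns(n=3000, theta=0.5):
    X = np.zeros(n + 1)                                      # X_0 = 0
    h = 0.0                                                  # h(0) = 0
    for t in range(1, n + 1):
        h1 = 1 + 0.01 * X[t-1]**2 + 0.5 * np.sin(X[t-1])     # h_1 in (39)
        h = h1 + theta * h                                   # recursion (38), assumed reading
        eps = rng.standard_t(3) / np.sqrt(3.0)               # t(3) scaled to Var = 1 (assumption)
        X[t] = np.sin(0.5 * X[t-1]) + np.sqrt(h) * eps       # model (37)
    return X[1:]

returns = simulate_returns()   # n = 3000 simulated daily returns
```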
Figure 2: Graph showing the 95% Conditional Value-at-Risk in blue, with the upper and lower prediction intervals in red.

As can be seen in Figures 2 and 3, the nonparametric prediction intervals for Conditional Value-at-Risk obtained by the bootstrap method perform well. Clearly, the 95% Conditional Value-at-Risk is contained within the prediction bounds. Figure 1 shows the time-series plot of the simulated daily returns generated using the data-generating process in (37). The procedure for NPIs discussed in this paper works well for large sample sizes.

4. Conclusion

Nonparametric methods for prediction intervals play an important role in statistics, especially in large samples. In this paper, we have constructed a nonparametric prediction interval for a conditional quantile (Conditional Value-at-Risk (CVaR)) using the bootstrap method. The simulated returns are assumed to admit a location-scale model with the distribution function of the innovation unknown. The results show that the constructed NPIs for CVaR perform well for large sample sizes.

Data Availability

The data used in the article were simulated, and the data-generating process (DGP) is included in the article.

Figure 3: The red lines in the first and second panels ("NPIs for 95% CVaR" and "NPIs for Sorted 95% CVaR") show the upper and lower prediction intervals, and the blue lines in both cases show the 95% CVaR.

Conflicts of Interest
The authors declare that there are no conflicts of interest
regarding the publication of this paper.

Acknowledgments
The authors would like to thank the editors and two referees for their many valuable suggestions and comments, which improved this paper greatly.

References

[1] G. A. Baker, "The probability that the mean of a second sample will differ from the mean of a first sample by less than a certain multiple of the standard deviation of the first sample," The Annals of Mathematical Statistics, vol. 6, no. 4, pp. 197–201, 1935.
[2] G. J. Hahn and W. Q. Meeker, Statistical Intervals: A Guide for Practitioners, vol. 92, John Wiley & Sons, 2011.
[3] J. Jiang and W. Zhang, "Distribution-free prediction intervals in mixed linear models," Statistica Sinica, vol. 12, no. 2, pp. 537–553, 2002.
[4] J. K. Patel, "Prediction intervals—a review," Communications in Statistics—Theory and Methods, vol. 18, no. 7, pp. 2393–2465, 1989.
[5] M. A. Fligner and D. A. Wolfe, "Nonparametric prediction intervals for a future sample median," Journal of the American Statistical Association, vol. 74, no. 366, part 1, pp. 453–456, 1979.
[6] J. De Brabanter, K. Pelckmans, J. Suykens, and J. Vandewalle, "Prediction intervals for NAR model structures using a bootstrap method," in Proceedings of the 2005 International Symposium on Nonlinear Theory and its Applications (NOLTA 2005), p. 6, 2005.
[7] P. Jorion, Value at Risk: The New Benchmark for Managing Market Risk, 2000.
[8] M. Rosenblatt, "A central limit theorem and a strong mixing condition," Proceedings of the National Academy of Sciences of the United States of America, vol. 42, pp. 43–47, 1956.
[9] E. Torsen, P. N. Mwita, and J. K. Mungatu, "A three-step nonparametric estimation of conditional value-at-risk admitting a location-scale model," 2019.
[10] J. Fan, "Design-adaptive nonparametric regression," Journal of the American Statistical Association, vol. 87, no. 420, pp. 998–1004, 1992.
[11] J. Fan and Q. Yao, "Efficient estimation of conditional variance functions in stochastic regression," Biometrika, vol. 85, no. 3, pp. 645–660, 1998.
[12] E. Torsen, P. N. Mwita, and J. K. Mungatu, "Nonparametric estimation of the error functional of a location-scale model," Journal of Statistical and Econometric Methods, vol. 7, no. 4, p. 18, 2018.
[13] M. Hirukawa, "Independence of the sample mean and variance for normal distributions: a proof by induction," Pontian Economic Research, vol. 5, no. 1, pp. 1–5, 2015.
[14] C. Martins-Filho, F. Yao, and M. Torero, "Nonparametric estimation of conditional value-at-risk and expected shortfall based on extreme value theory," Econometric Theory, vol. 34, no. 1, pp. 1–45, 2016.