

The current issue and full text archive of this journal is available on Emerald Insight at:
https://www.emerald.com/insight/1746-8809.htm

Inflation forecasting in an emerging economy: selecting variables with machine learning algorithms

Önder Özgür
Economics, Ankara Yıldırım Beyazıt University, Ankara, Turkey, and
Uğur Akkoç
International Trade and Finance, Pamukkale University, Denizli, Turkey

International Journal of Emerging Markets, DOI 10.1108/IJOEM-05-2020-0577
Received 27 May 2020; Revised 15 June 2020, 7 September 2020 and 30 December 2020; Accepted 13 January 2021

Abstract
Purpose – The main purpose of this study is to forecast inflation rates in the case of the Turkish economy with
shrinkage methods of machine learning algorithms.
Design/methodology/approach – This paper compares the predictive ability of a set of machine learning
techniques (ridge, lasso, ada lasso and elastic net) and a group of benchmark specifications (autoregressive
integrated moving average (ARIMA) and multivariate vector autoregression (VAR) models) on an extensive
dataset.
Findings – Results suggest that shrinkage methods perform better for variable selection. Lasso and elastic net
algorithms also outperform conventional econometric methods in the case of Turkish inflation. These
algorithms choose energy production variables, a construction-sector measure, the real effective exchange
rate and money market indicators as the most relevant variables for inflation forecasting.
Originality/value – The Turkish economy, a typical emerging country, has experienced a double-digit and
highly volatile inflation regime since 2017. This study contributes to the literature by introducing machine
learning techniques to forecast inflation in the Turkish economy. The study also compares the relative
performance of machine learning techniques and different conventional methods to predict inflation in the
Turkish economy and provides the empirical methodology offering the best predictive performance among its
counterparts.
Keywords Forecasting, Emerging economies, Inflation, Prophet model, Shrinkage methods
Paper type Research paper

1. Introduction
Inflation forecasting is a hard task. Despite the difficulty of predicting inflation, having
accurate forecasts creates a more stable economic environment (Medeiros et al., 2016).
Inflation expectations formed through forecasting play a crucial role in the decision-making
processes of economic agents. The direction that the general price level will follow influences
the investment decisions of companies, since the real return on any investment is highly
dependent on inflation. The level of inflation is also critical for households in their decisions
regarding labor supply: they may adapt these decisions due to the impact of inflation on their
real wages. In other words, all market participants adjust their decisions to the inflation
forecasts (Monteforte and Moretti, 2013).
Also, the reliability of inflation forecasting is a critical challenge for monetary
policymakers. With the introduction of the inflation targeting regime since the early 1990s,
central banks have started to announce a target inflation rate for a certain period and use
their tools to achieve it (Iversen et al., 2016). In setting their targeted inflation rates, central
banks need reliable inflation forecasts, since monetary policy changes are subject to lags
(Baybuza, 2018). The higher and more unstable nature of the inflation rate in emerging
economies blocks long-term investment projects and curtails the asset and liability
maturities of economic agents (Garcia et al., 2017). Therefore, although it is highly critical to
understand the indicators that are significant in inflation forecasting in any economy,
exploring inflation dynamics is even more challenging in emerging economies.
Turkey, as an emerging economy, has experienced high-inflation periods since the 1970s;
the inflation rate skyrocketed during the 1994 crisis and reached 120% (Domaç, 2004).
Following the crisis of 1994, the Turkish government launched a series of stabilization
programs supported by the International Monetary Fund (IMF), but the economy struggled
to lower inflation rates to single digits (Önder, 2004). With the influence of poor external
economic conditions, these programs failed to achieve the desired success. The Turkish
economy eventually experienced severe capital outflows and rising interest rates, and the
country faced two severe crises in November 2000 and February 2001 (Akinci et al., 2012).
After the two crises, several attempts were made to recover the economy, and the Transition
to a Stronger Economy program was introduced. This program includes a set of reforms
aimed at providing macroeconomic, fiscal and financial stability. Turkey introduced the
floating exchange rate regime in February 2001, and a transition to the inflation targeting
regime was completed in early 2006 (Öğünç et al., 2013a, b). The Central Bank of the Republic
of Turkey (CBRT) makes inflation forecasts in an inflation-targeting regime and announces
its predictions via quarterly inflation reports (CBRT, 2020). The inflation rate in the Turkish
economy fell below ten percent in most of the years between 2004 and 2017. However, the
inflation rate has fluctuated above ten percent in recent years, even though the target rate is
in single digits. Therefore, it seems crucial to understand the inflation dynamics in such an
emerging country, which has suffered from double-digit inflation rates while aiming to
achieve single-digit rates.
Central banks typically employ a wide range of forecasting models, including Phillips
curve specifications, factor models, vector autoregressive models and dynamic stochastic
general equilibrium (DSGE) models to manifest their inflation projections (Iversen et al., 2016).
Scholars have also paid increasing attention to inflation forecasting, and a great deal of effort
has been made to explore the best predictors of future inflation and to find models that would do
well in any economy. As a result, the literature has grown, and there is a vast number of studies.
The Phillips curve-based model seems to be the main theoretical framework in modeling
inflation dynamics in the literature (i.e. Atkeson and Ohanian, 2001; Stock and Watson, 2008; Ball
and Mazumder, 2019). Besides, DSGE models are also used as a common tool by policymakers in
designing their policies regarding inflation forecasting (i.e. Pichler, 2008; Balcilar et al., 2015;
Iversen et al., 2016). The new versions of DSGE models incorporate different specifications (i.e.
stickiness, rational expectations) and seek to outperform their various counterparts (Balcilar et al.,
2015). Studies employing factor models (i.e. Forni et al., 2003; Monteforte and Moretti, 2013;
Kotchoni et al., 2019) address dimensionality problems and benefit from employing a large
number of candidate variables with potential in predicting inflation.
The major problem in inflation forecasting is to find the most predictive variables. The
analytical models in use suffer from the question of choosing the most appropriate measures
(Groen et al., 2013). In other words, standard empirical models, suffering from the curse of
dimensionality, fall behind methodologies that design more effective specifications
condensing enormous datasets into more concrete measures (Kotchoni et al., 2019).
On the other hand, machine learning techniques have recently been introduced and allow us
to estimate high-dimensional empirical models in inflation forecasting exercises. These models
allow using a wide range of macroeconomic indicators and various financial measures to
explore inflation dynamics (Yadav et al., 2019). Machine learning techniques mitigate
overfitting problems in a data-rich environment and have created significant breakthroughs
in data analysis (Baybuza, 2018).
In this context, the primary purpose of the current study is to examine the performance of a
set of machine learning techniques (ridge, lasso, ada lasso and elastic net) and a group of
benchmark specifications (ARIMA and multivariate VAR models) in forecasting inflation in
the Turkish economy, handling a set of 229 explanatory variables over the period from
2007:M03 to 2019:M07. The dataset incorporates variables that can be grouped as exchange
rates, stock market indicators, figures of budget and balance of payments, production and
trade indicators, money market rates, prices and a wide variety of other variables. In this
sense, this study aims to find whether machine learning algorithms provide better
performance than conventional econometric techniques for the high and volatile inflation
period in the Turkish economy.
The current study, therefore, contributes to the literature in the following ways. First, to
our knowledge, this study is the first to introduce machine learning techniques to forecast
inflation in the Turkish economy. The Turkish economy suffers from relatively high
inflation rates, and providing forecasts that best fit future inflation rates appears critical in
such an emerging economy, since the inflation rate is one of the most significant indicators
affecting expectations and economic stability (Sek et al., 2015). In a methodological context,
the utilized machine learning techniques select the indicators that are most significant in
inflation forecasting from a high-dimensional dataset.
The second contribution of the study is structural. These machine learning techniques use
penalty or loss functions and are expected to outperform by reducing out-of-sample forecast
errors (Medeiros et al., 2016). The specification of this study divides the sample into two
categories: a test sample and a train sample. In this way, the model specification allows
assessing the out-of-sample fit of the selected methodologies. Third, the study runs several
machine learning techniques and various conventional methods, compares their relative
performance in predicting inflation in the Turkish economy and provides the empirical
methodology offering the best predictive performance among its counterparts.
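The chronological train/test logic described above can be sketched as follows. This is an illustrative example on synthetic data: the series, the 80/20 split and the naive mean benchmark are assumptions for the sketch, not the paper's actual specification.

```python
import numpy as np

# Illustrative sketch (synthetic data): a chronological train/test split of the
# kind used for out-of-sample evaluation of forecasting models.
rng = np.random.default_rng(0)
inflation = rng.normal(1.0, 0.5, size=149)   # 149 monthly observations (hypothetical)

split = int(len(inflation) * 0.8)            # keep time order: never shuffle a time series
train_set, test_set = inflation[:split], inflation[split:]

# A naive benchmark: forecast every test month with the train-sample mean.
forecast = np.full(len(test_set), train_set.mean())
rmse = np.sqrt(np.mean((test_set - forecast) ** 2))
print(f"out-of-sample RMSE of the mean benchmark: {rmse:.3f}")
```

Competing models (ARIMA, VAR, shrinkage estimators) would be fit on `train_set` only and compared by their errors on `test_set`.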
The structure of the article is as follows: The second section reviews the literature. The
third section describes data and empirical methodology. The fourth section introduces
empirical results, and the final section concludes the study.

2. Literature review
Several studies have addressed different methodologies for predicting inflation in both
developed and emerging economies. As it is vital for both economic agents and policymakers,
forecasting inflation attracts more attention, and there is an increasing number of studies in
the literature. This section refers briefly to the existing literature: although it is possible to
split the vast body of work into different sub-categories, the review below summarizes
existing studies by their methodological specifications.
Phillips curve-based models have been extensively used to forecast inflation. In one of the
earlier studies, Stock and Watson (1999) handled a standard Phillips curve model to forecast
US inflation in the 12-month horizon. They used the data covering the period between 1970
and 1996. They found that the unemployment rate-based Phillips curve models outperform
the univariate forecasting models. However, they argued that the Phillips curve model
incorporating a composite index of aggregate economic activity performs even better than the
unemployment-based model. Conversely, Atkeson and Ohanian (2001) concluded that the
Phillips curve models could not outperform even the most naïve models, which forecast that
the inflation rates of the last months would equal those of the months to come.
Stock and Watson (2008) concentrated on the US inflation over the period between 1953
and 2008 and examined the performance of alternative specifications. Their findings
addressed that the general performance of Phillips curve models using the aggregate
economic activity indicators over other multivariate models is not stable over time. Baştürk
et al. (2014) extended the new Keynesian Phillips curve (NKPC) models by incorporating the
structural time series models having time-varying patterns and forward- and backward-looking
expectation components. They focused on forecasting the US inflation covering 1960–2012.
They discovered that the extended NKPC models beat the Bayesian VAR models and
stochastic volatility models in inflation forecasting. Ball and Mazumder (2019) proposed a
Phillips curve-based model for the US economy and emphasized the role of labor-market
slackness and the anchored inflation expectations in inflation forecasting. By extending the
NKPC model into inflation volatility, Jha and Kulkarni (2015) found that the output gap,
lagged output gap and lagged inflation volatility do not cause a substantial impact on
inflation volatility in India. However, they argued that these measures contribute
considerably to the expected inflation volatility.
More recent studies developed some novel approaches in the NKPC framework. McKnight
et al. (2020) integrated time-varying trend inflation to capture the variation in the conduct of
monetary policy in the Euro Area and the United States. They found that a novel NKPC
framework beats the conventional benchmark time series and structural inflation forecasting
models. Martins and Verona (2020) incorporated frequency components of inflation and
examined the out-of-sample inflation forecasting performance of the novel NKPC
specification with benchmark models in the US economy. They noted that the NKPC
model outperforms counterparty reference models once frequency domain information is
accounted for.
DSGE models have also been used to forecast inflation both in emerging and advanced
economies. Of these studies, Pichler (2008) specified a DSGE model for the US economy using
both linear and nonlinear approximation. They gathered data covering the period between
1979 and 2007. They compared the forecast performance of both linear and nonlinear
specifications and found that the nonlinear specification performs slightly better than the
linear model. Following a similar specification, Balcilar et al. (2015) examined the forecasting
performance of nonlinear DSGE models in South Africa. They compared the DSGE models’
performance to their respective counterparts. Their findings indicated that the performance
of nonlinear DSGE models is better than linear ones. The nonlinear DSGE model also tends to
beat the VAR and Bayesian VAR models to predict inflation in South Africa. In another
attempt, Iversen et al. (2016) compared the performance of DSGE models with Bayesian VAR
models for the forecasts made between 2007 and 2013 in Sweden by the Sveriges Riksbank.
They confirmed that the Bayesian VAR models perform better than the DSGE models to
forecast inflation in Sweden. For low and high inflation regimes, Adom et al. (2018) examined
the supply and demand factors behind persistent inflation in Ghana. Their Markov-switching
dynamic regression model suggests that both sets of factors are meaningful for inflationary
inertia and that the high inflation regime is more persistent than the low inflation regime.
Some studies have used the factor models, and a large number of variables are
transformed into a small number of common components. Forni et al. (2003) examined the
performance of factor models and the role of financial variables to forecast inflation in the
Euro Area. They examined the period between 1987 and 2001 by using a large sample of
447 monthly measures. Their findings confirmed that multivariate factor models perform
better in all-time horizons than the univariate autoregressive models. Their findings also
favor the critical role of financial variables to forecast inflation. Monteforte and Moretti (2013)
proposed a mixed-frequency data model to forecast the euro area's daily inflation. Their
measure of the price level was constructed from daily prices of financial assets and
commodities by using a dynamic factor model. Their findings illustrated that the
mixed-frequency data model performs better than the standard VAR model, and they
concluded that the inclusion of daily measures helps to reduce forecasting errors.
More recently, Kotchoni et al. (2019) compared the forecasting performance of a set of
factor models with a group of autoregressive models for the United States over the period
1960–2014. They argued that the performance of these models is not stable over time. They
confirmed that at short horizons the autoregressive moving average model performs better
than the others; however, the data-rich forecasting models appear to beat the autoregressive
models in recession periods and at longer horizons. Similarly, Duncan and Martinez-Garcia
(2019) employed a set of distinct standard and more complex models to compare their
forecasting performance in emerging economies. Surprisingly, they found that the random
walk model outperforms its counterparts in various specifications.
More recently, the availability of large datasets, advances in numerical algorithms and
methodological improvements in the analysis of large datasets increased scholars’ awareness
of using machine learning techniques in inflation forecasting (Medeiros et al., 2021). Of these
studies, Kapetanios et al. (2016) investigated the forecasting performance of the combination
of heuristic methods and variable selection/reduction methods. They analyzed the role of
particular variables in Euro Area countries in forecasting inflation. Their analysis gathers a
collection of 195 variables over the period 1996–2009. Their results indicated that the
heuristic optimization of the information criteria outperforms the variable reduction models
to predict inflation in different time horizons. They also reported that inflation-related
indicators and variables on the labor market have more predictive power, which in turn
shows the potential Phillips curve relationship. In the UK case, Chakraborty and Joseph
(2017) compared the performance of the various machine learning techniques and benchmark
autoregressive models using a set of macroeconomic variables over the period 1988–2015.
They concluded that all machine learning techniques provide better forecasts than the
benchmark autoregressive models.
Likewise, in predicting Brazilian inflation between 2003 and 2015, Garcia et al. (2017)
examined the reliability of machine learning techniques against the standard autoregressive
and random walk models. They concluded that all machine learning techniques perform
better than the autoregressive and random walk models. Besides, they suggested that the
predictive performance of machine learning models differs in various inflation forecasting
horizons. Baybuza (2018) examined the predictive performance of machine learning
techniques in the Russian economy. Their results also demonstrated that these approaches
outperform traditional autoregression and random walk models. Medeiros et al. (2021) have
recently provided evidence supporting the role of machine learning techniques in forecasting
inflation in the US economy. They argued that the random forest model is the one that
produces better output due to potential nonlinearities in the relationship between
macroeconomic indicators and the rate of inflation. In a more recent study on an emerging
economy, Rodriguez-Vargas (2020) noted that the integration of machine learning models
produces better outcomes than univariate prediction models across all horizons in Costa Rica.
Several studies analyzed the performance of different methodologies and the role of
specific macroeconomic, political or financial variables to forecast inflation in the Turkish
economy. In one of the earlier studies, Leigh and Rossi (2002) used autoregressive models and
hybrid models throughout the period 1986–2002. They concluded that the combination of the
forecasts seems to provide better predictions. They also illustrated that commercial bank
foreign exchange reserves, the monetary aggregates and the commodity price index are the
most predictive measures of inflation. Domaç (2004) investigated the performance of the
mark-up model, money-gap model and the Phillips curve model over the period 1990–2002.
Their findings implied that the mark-up model that demonstrates the role of nominal wages
and foreign prices as driving forces triggers inflation in the Turkish economy.

However, the study by Önder (2004) investigated the performance of the Phillips curve
against various autoregressive and no-change models for the period 1987–2001. They
suggested that the forecast performance of the Phillips curve model is higher than that of its
counterparts. In their study, Öğünç et al. (2013a, b) examined the performance of a wide
variety of econometric models. Their data span covers the period 2003–2011. They noticed
that the factor models that integrate information from many economic variables perform
much better than univariate models and Phillips curve specifications. They also showed that
the combination of the individual model forecasts reduces errors and provides better
prediction than any individual specification. Finally, Altug and Çakmaklı (2016) analyzed the
period 2001–2004. They proposed a flexible model for approximating the inflation processes
that integrated inflation expectations. Their results suggested that the proposed model beats
the traditional autoregressive and Phillips curve models, which do not incorporate inflation
expectations.
While studies examining the performance of different econometric models to forecast
inflation in both developed and emerging economies have mixed results, most studies
performing machine learning models declare these techniques to be superior. As noted, the
literature also does not include an example of an inflation forecasting exercise that proposes
machine learning techniques for forecasting inflation in the Turkish economy. In this context,
the literature calls for studies investigating the performance of machine learning techniques
to provide inflation forecasting exercises for the Turkish economy.

3. Data and methodology


3.1 Data
It can be argued that inflation is closely linked to almost all economic indicators.
This paper utilizes an extensive dataset that consists of economic and financial indicators of
the Turkish economy. The rich dataset includes groups of variables such as exchange rates,
stock market indicators, figures of budget and balance of payments, production and trade
indicators, money market rates, prices and a wide variety of other variables. Figure 1
illustrates the time series graphs of selected variables related to monthly inflation and some
financial and economic variables from our dataset. The dependent variable is inflation that
has been described as the percentage change per month in consumer prices. The price series
base year is 2003. The supplementary material displays the list and definitions of our dataset.
All series are obtained from the Electronic Data Delivery System (EVDS) of the CBRT. The
start of the monthly sample is March 2007, based on the availability of the data. The CBRT
introduced the inflation targeting regime at the beginning of 2006, which is regarded as a
structural transformation of monetary policy in the Turkish economy. However, some series
are not available as of 2006, so the sample period of the current study is 2007:M03 to 2019:M07.
Another point to note is that a standardization process was applied, without exception, to all
series. In the first step, they are seasonally adjusted. Second, series that are not stationary in
their levels are used in their first differences. As a third step, the stationary and seasonally
adjusted series are standardized by dividing by their standard deviations to avoid scale bias.
Scale variations across the series would otherwise change the magnitudes of the estimated
coefficients and thereby distort the estimation process: if scale bias is not taken into account,
some machine learning (hereafter ML) algorithms could pick the wrong variable set.
Therefore, before the estimation process, all series must be standardized.
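A minimal sketch of this preprocessing on a synthetic series is shown below. Seasonal adjustment is application-specific and omitted here; the random-walk series is a hypothetical stand-in for a non-stationary indicator.

```python
import numpy as np
import pandas as pd

# Sketch of the preprocessing pipeline: first-difference a non-stationary
# series, then rescale by its standard deviation to avoid scale bias.
rng = np.random.default_rng(1)
level = pd.Series(np.cumsum(rng.normal(0.5, 1.0, 150)))  # random walk: non-stationary

diffed = level.diff().dropna()            # first differences -> stationary
standardized = diffed / diffed.std()      # unit standard deviation across series

print(round(standardized.std(), 6))       # -> 1.0
```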

3.2 Methodology
3.2.1 Benchmark models. This paper uses the random walk, ARIMA and multivariate VAR
models to evaluate the success of the ML algorithms' forecasting performance. The first two
models are univariate methods that are commonly suggested as "hard to beat" models
(Öğünç et al., 2013). The latter is recommended by Kapetanios et al. (2016), who discuss the
role of explanatory variables for better inflation forecasting.
Figure 1. Time series graphs of some variables
The regular ARIMA model incorporates moving average and autoregressive terms. A full
ARIMA (p, d, q) model can be written as follows:

$$y_t' = c + \sum_{i=1}^{p} \phi_i \, y_{t-i}' + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j} + e_t \qquad (1)$$

where $y_t'$ is the differenced and stationary series of the relevant variable (inflation),
$\varepsilon_t$ is the error term and $e_t$ is the white noise error term. $d$ represents the
degree of first differencing involved, $p$ denotes the order of the autoregressive part and
$q$ the order of the moving average part. These parameters are chosen by investigating the
partial autocorrelation function and comparing the information criteria of the models.
The random walk is a special case of the ARIMA model. If p and q are zero while the relevant
series is first-order integrated, the random walk can be written as follows:

$$y_t' = c + e_t \qquad (2)$$

This model is called a random walk with drift because it includes the constant term. Going
one step further, if there is no constant term in Equation (2), it is called a random walk model,
which explains y by the white noise term only.
The third benchmark model is a multivariate VAR model, which models the series using not
only its own lags but also lags of explanatory variables simultaneously. To construct the
VAR model, one needs to choose which series are used as explanatory variables. The
inflation forecasting literature offers the Phillips curve relation for building the VAR model.
The popular Phillips curve approach adds deviations of output and unemployment to the
VAR model:

$$y_t = \alpha(L) \, y_t + \beta(L) \, gap_t + \varepsilon_t \qquad (3)$$

where $y_t$ and $gap_t$ represent inflation and the output gap, and $L$ is the polynomial
lag operator.
3.2.2 Machine learning models and shrinkage algorithms. This section introduces the
machine learning algorithms applied to forecast inflation in Turkey in this paper. The
primary purpose of the estimation techniques is to fit the prediction function of the
dependent variable in equation (4). They utilize candidate predictor variables to fit the
prediction function and derive estimated values of the dependent variable:

$$Y = f(X) + e \qquad (4)$$

where $Y$ denotes the dependent variable, $f$ is some fixed but unknown function of
$X_1, X_2, \ldots, X_n$, and $e$ is a random error term. In this equation, the prediction
function $f$ represents the systematic information that $X$ provides about $Y$.
Standard linear econometric techniques pose several limitations in the estimation process.
Most importantly, these conventional techniques must contend with the bias-variance
trade-off, since they must work with a restricted number of predictors in a forecasting
exercise. Numerous problems arise when including many linearly independent predictors in
a linear regression: the complexity of the model increases while the degrees of freedom
decline. Hence, predicting forthcoming observations of the dependent variable out of sample
is likely to produce poor outcomes. Finally, the bias-variance trade-off also contributes to the
problem of overfitting, which is a critical concept in machine learning (Kvisgaard-Vera, 2018).
When the number of explanatory variables is high relative to the number of observations
available, modeling the dependent variable as a linear combination of all candidate variables
would lead to estimating many parameters and will likely result in a large forecast variance.
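The overfitting problem can be made concrete with a small simulation. This is a sketch on synthetic data (the sample sizes, the single relevant predictor and the ridge penalty of 10 are assumptions chosen for illustration): unregularized OLS fits the training sample almost perfectly but forecasts new observations poorly, while regularization trades a little bias for much less variance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# 50 predictors against only 60 training observations, with just one predictor
# that truly matters: a setting where OLS overfits badly.
rng = np.random.default_rng(7)
n_train, p = 60, 50
X = rng.normal(size=(n_train + 40, p))
y = X[:, 0] * 2.0 + rng.normal(size=n_train + 40)
Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

ols = LinearRegression().fit(Xtr, ytr)
ridge = Ridge(alpha=10.0).fit(Xtr, ytr)

def mse(model, X_, y_):
    return np.mean((model.predict(X_) - y_) ** 2)

print("OLS   train/test MSE:", mse(ols, Xtr, ytr), mse(ols, Xte, yte))
print("ridge train/test MSE:", mse(ridge, Xtr, ytr), mse(ridge, Xte, yte))
```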
The best model for a forecasting exercise is the one that balances bias and variance and is
not too complicated (Kvisgaard-Vera, 2018). Regularization, which addresses overfitting by
shrinking the parameters, is needed for high-dimensional data. Shrinkage methods (or
regularization techniques) fit a model that includes all candidate predictors but reduce
predictor coefficients toward zero or constrain the coefficient estimates (James et al., 2013).
Standard econometric techniques, on the other hand, do not constrain the coefficient
estimates. The conventional ordinary least squares (OLS) estimator, for example, minimizes
the sum of squared errors:

$$\hat{\beta}_x = \arg\min_{\beta_x} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 \qquad (5)$$

Instead of the OLS estimator, shrinkage estimators aim to reduce the forecast variance by
shrinking the parameter estimates in the conventional linear model, possibly to a point where
some parameters are exactly zero, and the corresponding variables are therefore excluded
from the candidate set (Smeekes and Wijler, 2018). Shrinkage estimators minimize the least
squares errors under penalty function constraints. The structure of the penalty function
determines whether coefficients can be estimated as zero; thus, different types of penalty
functions yield different shrinkage algorithms, and these algorithms can also select potential
variables. In general form, the penalty function is represented by the following:

$$\lambda \left( \alpha \sum_{j=1}^{N} \frac{\left| \beta_{x,j} \right|}{\omega_j} + (1 - \alpha) \sum_{j=1}^{N} \left( \frac{\left| \beta_{x,j} \right|}{\omega_j} \right)^2 \right) \qquad (6)$$

Different values of the parameters λ, α and ω create different algorithms. This paper uses the
best-known ridge and lasso procedures to forecast inflation in the Turkish economy.
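The behavior of this family can be sketched with scikit-learn on synthetic data. Note that sklearn's parameterization (`alpha` for the overall strength, `l1_ratio` for the L1/L2 mix) only loosely mirrors the λ/α/ω notation of equation (6); the data, penalty values and number of relevant predictors below are assumptions for the illustration.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

# 20 candidate predictors, of which only 3 truly matter: lasso and elastic net
# can set irrelevant coefficients exactly to zero, while ridge only shrinks.
rng = np.random.default_rng(4)
X = rng.normal(size=(150, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=150)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

print("ridge zero coefficients:", int(np.sum(ridge.coef_ == 0)))
print("lasso zero coefficients:", int(np.sum(lasso.coef_ == 0)))
print("elastic net zero coefficients:", int(np.sum(enet.coef_ == 0)))
```

The exact zeros produced by the L1 penalty are what make lasso-type methods usable for variable selection, as exploited in this study.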
3.2.2.1 Ridge algorithm. Ridge algorithm is very similar to OLS estimator, except that the
coefficients are estimated by minimizing a slightly different function (James et al., 2013). It
shrinks the coefficients of the variables by imposing a penalty for the magnitude of the
coefficients. In the ridge algorithm, when λ takes positive values, α is zero, and ω is equal to 1
in the penalty function. Finally, the ridge algorithm estimates the coefficients by minimizing
the following equation:
!2
X n X p Xp
yi  β0  βj xij þ λ β2j (7)
i¼1 j¼1 j¼1

where λ is a tuning parameter that controls the relative importance of the shrinkage penalty.
As λ grows, the penalty gains importance and the parameters move toward zero. Increasing the
value of λ tends to reduce the magnitude of the estimated coefficients but does not result in
the exclusion of any variable. In the particular case of λ equal to zero, ridge regression
reproduces the OLS parameter estimates. In summary, the ridge algorithm shrinks all
coefficients toward zero, but the final model still contains all of them with different
weights, which can create an interpretation problem.
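The effect of λ can be made concrete with a minimal sketch using scikit-learn's Ridge on synthetic data (the data and λ values are illustrative, not the paper's dataset); note that scikit-learn's `alpha` argument plays the role of λ in equation (7):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative synthetic data, not the paper's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=100)

# As lambda grows (sklearn calls it `alpha`), the squared norm of the
# coefficient vector shrinks, but no coefficient is set exactly to zero.
norms = []
for lam in (0.01, 1.0, 100.0):
    fit = Ridge(alpha=lam).fit(X, y)
    norms.append(float(np.sum(fit.coef_ ** 2)))
```

Running such a sketch shows the monotone shrinkage described above: the coefficient norm falls as λ increases, while every coefficient stays nonzero.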
When several correlated variables appear in a linear regression, their coefficients are poorly
determined and exhibit very high variances (Hastie et al., 2009). Small changes in the
explanatory variables then produce significant changes in the estimated coefficients. The
value of the ridge algorithm over OLS lies in the bias-variance trade-off (James et al., 2013).
Implementing a penalty function as in equation (7) restricts the scale of the coefficients. The
ridge estimator, which provides lower variances and prediction error, can be written in matrix
form as follows:
\hat{\beta}^{R} = (X^{T}X + \lambda I)^{-1} X^{T} y \quad (8)
The ridge estimator has some practical features. First, the intercept term can be left out of
the penalty function; in this way, the intercept remains in the model unpenalized. The second
feature relates to the motivation of the ridge algorithm when it was first introduced: the
ridge estimator is a linear function of the dependent variable y, and even if XTX does not
have full rank, XTX + λI is still nonsingular. When XTX is singular, its inverse is not
defined and the OLS estimator does not exist; the ridge estimator solves this singularity
problem because adding λI makes the matrix invertible. Finally, the solution of the ridge
algorithm is not free from scaling distortion. Because ridge estimators are not scale
invariant, one should standardize the variables before proceeding with the analysis.
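The singularity argument can be checked directly with the closed-form estimator in equation (8). A sketch on synthetic data with a duplicated column, so that XTX is rank deficient (the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 3))
# Duplicate the first column: X'X is now singular, so OLS is undefined.
X = np.column_stack([base, base[:, 0]])
y = base @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=50)

lam = 1.0
p = X.shape[1]
# Equation (8): beta_ridge = (X'X + lambda * I)^{-1} X'y, computed with a
# linear solve rather than an explicit matrix inverse.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rank_xtx = np.linalg.matrix_rank(X.T @ X)
```

Even though XTX is rank deficient here, the ridge system is well posed and returns finite coefficients for all four columns.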
3.2.2.2 Lasso algorithm. The lasso algorithm is a kind of continuous subset selection method.
It overcomes the drawback of the ridge algorithm that none of the coefficients ends up exactly
equal to zero. Lasso's key strength is its capacity to simultaneously estimate and select
coefficients (Wang and Leng, 2008). The lasso algorithm shrinks each coefficient by a constant
factor but truncates some of them at zero. It thereby provides a subset of variables, a
process called "soft thresholding" (James et al., 2013).
The lasso algorithm forces some coefficients to be precisely zero by replacing the squared
terms in the ridge penalty function with absolute values |βj|. This form of penalty function
is called the l1 form and creates a solution that is nonlinear in y. Here, λ takes positive
values, α is one and ω is equal to 1. The lasso algorithm yields sparse models that involve
only a subset of the candidate predictors, and its results are generally easier to interpret.
Finally, the lasso algorithm solves the minimization problem of the following equation:
\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} |\beta_j| \quad (9)
Although the lasso algorithm performs better variable selection, it cannot be known in advance
which of the algorithms generates the smallest prediction error. Because ridge retains all
variables, it tends to give better prediction accuracy. On the other hand, several studies
claim that lasso recovers the appropriate model as the data size increases (e.g. Donoho, 2006;
Knight and Fu, 2000).
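The contrast with ridge can be seen in a small sketch with scikit-learn's Lasso, where only a few of many synthetic predictors truly matter (the data and the λ value are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
true_beta = np.zeros(20)
true_beta[:3] = [2.0, -3.0, 1.5]      # only 3 of the 20 predictors matter
y = X @ true_beta + rng.normal(scale=0.5, size=200)

# sklearn's `alpha` is the paper's lambda; lasso sets some coefficients
# exactly to zero, performing variable selection as a by-product.
fit = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(fit.coef_)
```

In a setting like this the three informative predictors survive while most irrelevant ones are dropped, which is exactly the "soft thresholding" behavior described above.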
3.2.2.3 Group lasso algorithm. The group lasso generalization developed by Yuan and Lin (2006)
is a derivative of the standard lasso procedure. In some situations, it may be preferable to
shrink variables and incorporate them as groups. This algorithm selects variables in a
group-wise manner: the regressors are split into groups [1], and the procedure solves the
following minimization problem:
\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{J} \sqrt{p_j} \sqrt{\sum_{i \in I_j} \beta_i^2} \quad (10)
There are p variables divided into J groups in the group lasso minimization problem. The group
norm in equation (10) takes the value zero only if all of its components are zero. Hence, for
some values of λ, an entire group of variables may shrink to zero and be eliminated from the
model. The design matrix of each group is assumed orthonormal, and the procedure uses soft
thresholding like the lasso algorithm.
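scikit-learn does not ship a group lasso, so as a hedged illustration, here is the group-wise soft-thresholding step that underlies the algorithm under the orthonormal-design assumption mentioned above (the function and variable names are our own, not from the paper):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Shrink each group's coefficient sub-vector in norm by lam * sqrt(p_j);
    a whole group is set to zero when its norm falls below the threshold."""
    out = np.zeros_like(beta)
    for idx in groups:
        sub = beta[idx]
        norm = np.linalg.norm(sub)
        thresh = lam * np.sqrt(len(idx))
        if norm > thresh:
            out[idx] = (1.0 - thresh / norm) * sub
    return out

beta = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(beta, groups, lam=0.5)
# The strong group (indices 0-1) is merely shrunk; the weak group
# (indices 2-3) falls below the threshold and is eliminated entirely.
```

This mirrors the text: either a group survives with all its coefficients shrunk together, or the whole group is zeroed out.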
3.2.2.4 Adaptive lasso algorithm. This study also employs the adaptive lasso algorithm. The
lasso and adaptive lasso algorithms simultaneously perform variable selection and estimation.
The adaptive lasso literature argues that shrinking all coefficients equally can cause
inconsistent results (Yuan and Lin, 2006). Therefore, Zou (2006) modified the lasso penalty
function to allow heterogeneity in parameter shrinkage. This simple modification allows the
correct model to be recovered under mild conditions.
As in the case of the lasso, λ takes positive values and α and ω are equal to 1. However,
adaptive weights are used to penalize the individual coefficients in the penalty function:
each coefficient is weighted by its initial estimate, obtained from an OLS or ridge fit.
According to Zou (2006), using initial estimates improves the performance of variable
selection if good initial estimators are chosen.
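Equation (11) below can be implemented by a common rescaling trick: multiply each column by its initial-estimate weight, fit a standard lasso, then rescale the coefficients back. A sketch of this on synthetic data (it is one standard implementation, not necessarily the authors' code; data and tuning values are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
true_beta = np.zeros(10)
true_beta[:2] = [2.0, -1.5]
y = X @ true_beta + rng.normal(scale=0.5, size=200)

# Step 1: initial estimates from a ridge fit, as Zou (2006) suggests.
w = np.abs(Ridge(alpha=1.0).fit(X, y).coef_) + 1e-8  # guard against zeros

# Step 2: lasso on rescaled columns; on the original scale the penalty
# becomes lambda * |beta_j| / w_j, i.e. the adaptive lasso penalty.
fit = Lasso(alpha=0.1).fit(X * w, y)
beta_adaptive = fit.coef_ * w
```

Variables with small initial estimates receive large penalties and are driven to zero, while strong signals are barely shrunk.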
\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} \frac{|\beta_j|}{|\hat{\beta}_{\mathrm{initial},j}|} \quad (11)
3.2.2.5 Elastic net algorithm. One of the disadvantages of the lasso and ridge algorithms is
their poor performance with highly correlated variables. If some of the explanatory variables
are strongly correlated, the lasso estimator may become indifferent between them, while the
ridge estimator shrinks them together in the same way. The elastic net overcomes this problem
by using a penalty function with two tuning terms: the first term encourages sparse selection,
while the second term tends to average the coefficients of correlated variables. The algorithm
solves the following minimization problem:
\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \left( \alpha \sum_{j=1}^{p} |\beta_j| + (1-\alpha) \sum_{j=1}^{p} \beta_j^2 \right) \quad (12)
Elastic net estimation thus mixes the lasso and ridge algorithms. If α takes the value of
zero, the estimator reduces to ridge; if α is equal to one, it becomes the lasso estimator.
The elastic net is a hybrid algorithm that relaxes the condition that α must be 1 or 0 and
lets it take any value between 0 and 1. While the l1 part of the penalty generates a sparse
model, the strength of convexity varies with the value of α. The elastic net searches for the
optimal value of α together with λ, so the grid search for this procedure is two dimensional.
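A brief sketch with scikit-learn's ElasticNet illustrates the mixing: its `l1_ratio` plays the role of the paper's α and its `alpha` the role of λ (scikit-learn weights the ridge term by an extra constant, so the correspondence with equation (12) holds up to scaling; the data are synthetic):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 15))
true_beta = np.zeros(15)
true_beta[:3] = [2.0, -2.0, 1.0]
y = X @ true_beta + rng.normal(scale=0.5, size=200)

# l1_ratio=1 is pure lasso (sparse); l1_ratio near 0 behaves like ridge.
sparse_fit = ElasticNet(alpha=0.2, l1_ratio=1.0).fit(X, y)
dense_fit = ElasticNet(alpha=0.2, l1_ratio=0.01, max_iter=10000).fit(X, y)

n_zero_sparse = int(np.sum(sparse_fit.coef_ == 0))
n_zero_dense = int(np.sum(dense_fit.coef_ == 0))
```

Moving the mixing parameter toward the lasso end produces many exact zeros; moving it toward the ridge end keeps nearly all coefficients in the model.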
Selecting the optimal tuning parameter λ (and α for the elastic net) is critical for the ridge
and lasso algorithms and their derivatives. Increasing λ reduces the variance of the estimates
at the expense of higher bias; in other words, the optimal value of λ is the equilibrium point
of the variance-bias trade-off. Many criteria can be applied to select the optimal tuning
parameters, such as the error-related measures of mean squared error (MSE), root mean squared
error (RMSE) and R2, and the Akaike or Schwarz information criteria. The empirical approach
constructed in this paper determines the optimal values of the tuning parameters by grid
search and RMSE.
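A two-dimensional grid search of this kind can be sketched with scikit-learn, scoring candidate (λ, α) pairs by RMSE on time-ordered validation folds (the grid values and data below are illustrative, not the paper's actual settings):

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 8))          # e.g. 120 monthly observations
y = X @ np.array([1.0, -0.5, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0])
y = y + rng.normal(scale=0.3, size=120)

# Grid over lambda (sklearn `alpha`) and the mixing weight (`l1_ratio`),
# evaluated by RMSE with folds that respect the time ordering.
search = GridSearchCV(
    ElasticNet(max_iter=10000),
    param_grid={"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.1, 0.5, 0.9]},
    scoring="neg_root_mean_squared_error",
    cv=TimeSeriesSplit(n_splits=4),
).fit(X, y)

best_lambda = search.best_params_["alpha"]
best_mix = search.best_params_["l1_ratio"]
```

Using `TimeSeriesSplit` rather than random folds avoids training on the future when the data are a time series, which matters for a forecasting exercise.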
4. Empirical results
This paper aims to find the series that have better predictive performance for forecasting
inflation in the Turkish economy and performs a forecasting exercise to test the ability of
machine learning techniques. The sample data start in March 2007, and the forecasting
performance of the selected algorithms is investigated by comparing the actual data with the
predicted data after 2017. The prediction exercise period can be characterized as a high and
volatile inflation period for the Turkish economy, so providing forecasts that best fit future
inflation rates in such an emerging economy seems crucial.
The empirical implementation of this study explores how machine learning algorithms can
contribute to accurately forecasting inflation in the example of the Turkish economy. The
empirical analysis starts by splitting the dataset into two subsets, called test data and
train data. The out-of-sample forecasting performance of the learned models is measured on the
test data. Methods such as cross-validation or the bootstrap exist for splitting the data into
train and test parts, and as a rule of thumb the train data consist of 70% of the whole
sample. However, since this study aims to test the specified algorithms' predictive accuracy,
the sample is divided into two periods, pre-2017 and post-2017.
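The chronological split can be sketched in a few lines of pandas (the index and column name below are hypothetical placeholders, not the paper's actual dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly series covering the paper's sample window.
dates = pd.date_range("2007-03-01", "2019-12-01", freq="MS")
rng = np.random.default_rng(6)
df = pd.DataFrame({"inflation": rng.normal(0.8, 1.0, len(dates))},
                  index=dates)

# Split by date rather than at random: train pre-2017, test post-2017.
train = df.loc[:"2016-12-31"]
test = df.loc["2017-01-01":]
```

Splitting by date preserves the temporal ordering, so the models are always evaluated on observations that come after everything they were trained on.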
The inflation rate series that our empirical analysis aims to forecast is illustrated in
Figure 2.

[Figure 2. Histogram of the inflation rate variable (%) with a normal curve; horizontal axis:
inflation rates in Turkey.]

The figure shows that the inflation rate variable is concentrated in the zero to two
percentage point interval, though the whole series ranges from -2 to 6. Positive inflation
rates are therefore more common in the Turkish economy in this period. However, the train
sample period and the test sample period differ notably in inflation rate variability. The
standard deviation is around 0.80 and the mean is 0.64 in the training period. Since the
Turkish economy has had a high and volatile inflation rate after 2017, the test period
inflation series has a mean of 1.30 and a standard deviation of 1.165.
The values of the tuning parameters must be determined in the first phase of the forecasting
exercise for the ridge and lasso algorithms. The alpha and lambda values are selected using
error measures obtained from the model; the alpha parameter determines the algorithm type,
while lambda defines the relative importance of the penalty term in the minimization. To
illustrate, the value of lambda that provides the minimum error rate or information criterion
may be taken as the best. Table 1 shows the best lambda values for all algorithms and the
related RMSE (root mean square error). The optimal value of lambda for the ridge algorithm is
9.668, considerably higher than for the other machine learning algorithms, whereas it is 0.065
in the best specification of the lasso model. In contrast, the ada lasso model has an optimal
lambda value of 1.062. The grid search results verify that the best values of alpha and lambda
for the elastic net algorithm are 0.1 and 0.710, respectively.

Table 1. Best lambdas and corresponding RMSEs

Model                   Lambda   RMSE
ARIMA (1,1,0)           –        1.237
ARIMA (3,1,2)           –        1.167
ARMA(4,1)–GARCH(5,5)    –        1.000
MVAR                    –        1.031
Prophet model           –        0.975
Ridge                   9.668    1.099
Lasso                   0.065    0.834
Ada lasso               1.062    10.180
Group lasso             0.039    0.998
Elastic net             0.710    0.893

Table 1 also provides details on which algorithm is best suited for minimum error.
The benchmark model in our specification is the ARIMA model; in other words, the RMSE values
for each algorithm are relative to the ARIMA model. To illustrate, the RMSE value for the
ridge algorithm means that the error measure of the ARIMA model is about 3.52 (1/0.284) times
higher than that of the ridge model. Therefore, one might conclude that the error measures are
relatively lower for the ridge, lasso, group lasso and elastic net algorithms than for the
ARIMA model. However, the RMSEs of the random walk and multivariate VAR models are very close
to that of the ARIMA model, and the ada lasso algorithm has the lowest sample fit performance
among its counterparts. Overall, the best measures are provided by the lasso and elastic net
models, which means that these algorithms outperform the other empirical specifications.
Figure 3 presents the empirical results of the lasso and elastic net algorithms, the two
algorithms that provide a lower error rate than their counterparts and relative to the random
walk specification. Figure 3 demonstrates that all coefficients are concentrated very close to
zero due to the standardization of the employed variables, although a few outlier parameters
remain. These results also illustrate how the algorithms work, indicating that the parameters
converge to zero as the penalty parameter lambda increases.
Next, the study presents the variables that have higher predictive performance among the
candidate set of measures. First, Figure 4 shows the variables chosen by the lasso and elastic
net algorithms to forecast inflation in Turkey after 2006. In addition, Table 2 provides the
best predictor variables and their corresponding coefficients; these coefficients demonstrate
the magnitudes of the chosen regressors for each algorithm. According to Table 2, the lasso
algorithm selects eight variables that have relatively higher predictive performance among the
set of series, and the elastic net algorithm suggests four more series, for a total of 12
variables [2].
Selected variables include measures related to the construction sector, exchange rates,
energy production variables and financial measures. Of these groupings, the table shows that
building stocks have a significantly positive impact on inflation. Therefore, it is safe to
conclude that the construction sector’s acceleration might be an indicator of the rise in
aggregate demand in the Turkish economy. Thus, high demand appears to exacerbate
domestic inflation.
Second, the Sterling exchange rate and the real effective exchange rate (REER) illustrate
that the depreciation of the domestic currency increases the inflation rate in the Turkish
economy. Since the rise in the REER represents the domestic currency’s appreciation against
a set of foreign currencies, the negative coefficient of the REER indicates the adverse link
between the depreciation of the domestic currency and the inflation rate. The role of foreign
exchange rates in predicting inflation might be due to their direct effect on the cost of
imported goods and services. The Turkish economy also provides examples emphasizing the role
of the foreign exchange rate in domestic inflation and in prompting the monetary authority to
implement accommodative monetary policy actions. For instance, the speculative attacks on the
domestic currency in August 2018 caused a massive depreciation of the Turkish lira, and the
central bank intervened in the market to smooth foreign exchange values. Monthly consumer
price inflation increased by 2.30 and 6.30% in August and September 2018, respectively,
demonstrating the outstanding role of the exchange rate in Turkey's inflation rate.
[Figure 3. Coefficients and MSEs of lasso and elastic net algorithms: (a) lasso coefficients
among lambda values; (b) elastic net coefficients among lambda values; (c) MSEs of the lasso
estimations among lambda values; (d) MSEs of the elastic net estimations among lambda values.]

[Figure 4. Coefficients of variables according to different algorithms (lasso and elastic
net).]
Table 2. Best predictors of inflation

Variables                      Lasso coefficients   Elastic net coefficients
Intercept                      0.167                0.170
Energy production              0.050                0.025
Home buildings                 –                    0.032
Office buildings               0.014                0.007
Industrial buildings           0.017                0.003
Buildings                      –                    0.002
Sterling                       0.054                0.042
REER                           -0.041               -0.024
Deposit rate up to one year    0.018                0.009
Total credit                   0.159                0.126
Durable goods manufacturing    –                    0.0005

Note: the signs of the REER coefficients follow the negative relationship described in the
text.
Third, the lasso and elastic net algorithms also select the energy production variable to
predict inflation in the Turkish economy. Turkey does not satisfy its oil needs from domestic
sources and must import a tremendous amount of energy. Therefore, the Turkish economy
experiences higher trade deficits, especially during boom periods of economic activity.
Remarkably, the budget deficit of Turkey was relatively low after 2006 and less sizeable than
the trade deficit, so it can be argued that the trade deficit has a more pronounced effect on
inflation than budgetary conditions.
Finally, some financial measures emerge as critical variables for predicting inflation in
Turkey, in line with theoretical expectations. The positive coefficients of these variables
indicate that accelerating loans and rising deposit rates lead to a rise in the inflation
rate. Therefore, it is safe to conclude that banking indicators are more critical for
inflation forecasting than other financial measures and monetary aggregates.
Our findings are partly consistent with the results of earlier studies for Turkey. Among
these, Domaç (2004) found that the output gap and monetary disequilibrium are more important
variables for forecasting inflation. Similarly, Önder (2004) suggested that the Phillips
curve-based inflation forecasting model performs better than autoregressive models and the
random walk specification. Since the earlier studies do not employ machine learning techniques
or other data-rich models in analyzing inflation forecasting in the Turkish economy, they rely
on a limited number of output-gap series and monetary measures. Unlike these studies, our
results draw on a wide range of candidate variables for inflation forecasting.

5. Conclusions
Historically, emerging countries have had higher and more persistent inflation rates than
developed countries. This problem, which had been brought under control at the beginning of
the 2000s, has risen again: the inflation rate has accelerated since 2017 in the Turkish
economy. This turbulent period was characterized by double-digit yearly inflation and intense
volatility. The period therefore provides a fruitful environment for a prediction exercise and
calls for a study examining the relevant leading indicators of future inflation rates.
In this context, this paper aims to find relevant variables to forecast Turkish inflation
using shrinkage methods from machine learning. The study uses information from a wide range of
time series variables over the period 2007:M03 to 2016:M12 and performs a forecasting exercise
for the period 2017:M01 to 2019:M12. The study runs ridge, lasso and derivatives of lasso and
elastic net algorithms and compares their forecasting performance with those of benchmark
ARIMA and multivariate VAR models.
It is obvious that hundreds of candidate variables can influence the rate of inflation.
Shrinkage methods offer effective ways to pick a subset of high-dimensional variables to
predict inflation rates. The main question is whether machine learning algorithms can help
forecast the high and volatile inflation of the Turkish economy, which is getting harder to
predict, and perform better than conventional econometric models.
Empirical results of this study indicate that shrinkage methods are better ways to select the
most effective predictors. With the exception of ada lasso, all machine learning algorithms
have lower prediction errors than their conventional counterparts. Lasso and elastic net
algorithms clearly outperform conventional econometric methods in predicting Turkish
inflation: the implementation of these modern methods reduces forecast errors and increases
predictive power. The best algorithms choose energy production and some indicators of the
construction sector as the most relevant predictors of the inflation rate. The results also
show that the real effective exchange rate has the ability to predict inflation in Turkey.
Finally, among money market indicators, deposit rates and extended bank credit are also
prominent. The selected variables are in line with historical price level developments in the
Turkish economy: Turkey depends on energy imports and runs external deficits, and credit
market conditions have recently been a driving force of economic activity.
Taming inflation has been one of the main tasks of monetary policy in the Turkish economy in
the past decade. The CBRT has implemented various tools to stabilize domestic prices and has
intervened in the foreign exchange market; even periodic exemptions from consumption taxes
have been enforced several times in recent years to lower inflation. Machine learning
algorithms, therefore, provide new insights for policymakers and might suggest new measures to
be taken into consideration to provide price stability.
Notes
1. Data Appendix 1 presents which group each regressor is included in.
2. We also find that petroleum products manufacturing has a positive effect on inflation.
However, the coefficient is around 1.8e-10, which is negligible.

References
Adom, P.K., Agradi, M.P. and Quaidoo, C. (2018), “The transition probabilities for inflation episodes in
Ghana”, International Journal of Emerging Markets, Vol. 13 No. 6, pp. 2028-2046.
Akinci, D.A., Matousek, R., Radic, N. and Stewart, C. (2012), “Monetary policy and banking sector:
lessons from Turkey”, Centre for EMEA Banking, Finance and Economics Working Paper
Series, Vol. 2012 No. 31, pp. 1-50.
Altug, S. and Çakmaklı, C. (2016), “Forecasting inflation using survey expectations and target
inflation: evidence for Brazil and Turkey”, International Journal of Forecasting, Vol. 32 No. 1,
pp. 138-153.
Atkeson, A. and Ohanian, L.E. (2001), “Are Phillips curves useful for forecasting inflation?”, Federal
Reserve Bank of Minneapolis Quarterly Review, Vol. 25 No. 1, pp. 2-11.
Balcilar, M., Gupta, R. and Kotze, K. (2015), “Forecasting macroeconomic data for an emerging market
with a nonlinear DSGE model”, Economic Modelling, Vol. 44, pp. 215-228.
Ball, L. and Mazumder, S. (2019), “A Phillips curve with anchored expectations and short-term
unemployment”, Journal of Money, Credit, and Banking, Vol. 51 No. 1, pp. 111-137.
Baştürk, N., Çakmaklı, C., Ceyhan, S.P. and Van Dijk, H.K. (2014), "Posterior-predictive
evidence on US inflation using extended new Keynesian Phillips curve models with non-filtered
data", Journal of Applied Econometrics, Vol. 29 No. 7, pp. 1164-1182.
Baybuza, I. (2018), "Inflation forecasting using machine learning methods", Russian Journal of
Money and Finance, Vol. 77 No. 4, pp. 42-59.
CBRT (2020), “Inflation report IV”, available at: www.tcmb.gov.tr.
Chakraborty, C. and Joseph, A. (2017), Machine Learning at Central Banks, Working Paper 674, Bank
of England Staff Working Paper.
Domaç, I. (2004), Explaining and Forecasting Inflation in Turkey, The World Bank, Washington, DC.
Donoho, D.L. (2006), "For most large underdetermined systems of equations, the minimal l1-norm
near-solution approximates the sparsest near-solution", Communications on Pure and Applied
Mathematics, Vol. 59 No. 7, pp. 907-934.
Duncan, R. and Martínez-García, E. (2019), "New perspectives on forecasting inflation in
emerging market economies: an empirical assessment", International Journal of Forecasting,
Vol. 35 No. 3, pp. 1008-1031.
Forni, M., Hallin, M., Lippi, M. and Reichlin, L. (2003), “Do financial variables help forecasting inflation
and real activity in the euro area?”, Journal of Monetary Economics, Vol. 50 No. 6, pp. 1243-1255.
Garcia, M.G., Medeiros, M.C. and Vasconcelos, G.F. (2017), “Real-time inflation forecasting with high-
dimensional models: the case of Brazil”, International Journal of Forecasting, Vol. 33 No. 3,
pp. 679-693.
Groen, J.J., Paap, R. and Ravazzolo, F. (2013), “Real-time inflation forecasting in a changing world”,
Journal of Business and Economic Statistics, Vol. 31 No. 1, pp. 29-44.
Hastie, T., Tibshirani, R. and Friedman, J. (2009), The Elements of Statistical Learning: Data Mining,
Inference, and Prediction, Springer Science & Business Media.
Iversen, J., Laséen, S., Lundvall, H. and Söderström, U. (2016), Real-time Forecasting for
Monetary Policy Analysis: The Case of Sveriges Riksbank, Riksbank Research Paper Series,
Stockholm, Vol. 142.
James, G., Witten, D., Hastie, T. and Tibshirani, R. (2013), An Introduction to Statistical Learning,
Springer, New York, Vol. 112, pp. 3-7.
Jha, R. and Kulkarni, S. (2015), “Inflation, its volatility and the inflation-growth tradeoff in India”,
International Journal of Emerging Markets, Vol. 10 No. 3, pp. 350-361.
Kapetanios, G., Marcellino, M. and Papailias, F. (2016), “Forecasting inflation and GDP growth using
heuristic optimisation of information criteria and variable reduction methods”, Computational
Statistics and Data Analysis, Vol. 100, pp. 369-382.
Knight, K. and Fu, W. (2000), “Asymptotics for lasso-type estimators”, Annals of Statistics, pp. 1356-1378.
Kotchoni, R., Leroux, M. and Stevanovic, D. (2019), “Macroeconomic forecast accuracy in a data-rich
environment”, Journal of Applied Econometrics, Vol. 34 No. 7, pp. 1050-1072.
Kvisgaard, V.H. (2018), Predicting the Future Past. How Useful Is Machine Learning in Economic
Short-Term Forecasting?, Master’s thesis.
Leigh, M.D. and Rossi, M.M. (2002), Exchange Rate Pass-Through in Turkey (No. 2-204), International
Monetary Fund, Washington, DC.
Martins, M.M. and Verona, F. (2020), Forecasting Inflation with the New Keynesian Phillips Curve:
Frequency Matters, Bank of Finland Research Discussion Paper, Helsinki, Vol. 4.
McKnight, S., Mihailov, A. and Rumler, F. (2020), “Inflation forecasting using the new Keynesian
Phillips curve with a time-varying trend”, Economic Modelling, Vol. 87, pp. 383-393.
Medeiros, M.C., Vasconcelos, G. and Freitas, E. (2016), “Forecasting Brazilian inflation with high-
dimensional models”, Brazilian Review of Econometrics, Vol. 36 No. 2, pp. 223-254.
Medeiros, M.C., Vasconcelos, G.F., Veiga, Á. and Zilberman, E. (2021), "Forecasting inflation
in a data-rich environment: the benefits of machine learning methods", Journal of Business and
Economic Statistics, Vol. 39 No. 1, pp. 98-119.
Monteforte, L. and Moretti, G. (2013), “Real-time forecasts of inflation: the role of financial variables”,
Journal of Forecasting, Vol. 32 No. 1, pp. 51-61.
Öğünç, F., Akdoğan, K., Başer, S., Chadwick, M.G., Ertuğ, D., Hülagü, T., Kösem, S., Özmen,
U.M. and Tekatlı, N. (2013a), "Short-term inflation forecasting models for Turkey and a
forecast combination analysis", Economic Modelling, Vol. 33, pp. 312-325.
Öğünç, F., Akdoğan, K., Başer, S., Chadwick, M.G., Ertuğ, D., Hülagü, T. and Tekatlı, N.
(2013b), "Short-term inflation forecasting models for Turkey and a forecast combination
analysis", Economic Modelling, Vol. 33, pp. 312-325.
Önder, A.Ö. (2004), "Forecasting inflation in emerging markets by using the Phillips curve and
alternative time series models", Emerging Markets Finance and Trade, Vol. 40 No. 2, pp. 71-82.
Pichler, P. (2008), “Forecasting with DSGE models: the role of nonlinearities”, The B.E. Journal of
Macroeconomics, Vol. 8 No. 1, pp. 1-33.
Rodríguez-Vargas, A. (2020), Forecasting Costa Rican Inflation with Machine Learning Methods,
Documentos de Trabajo, Vols 2020-2, Banco Central de Costa Rica, pp. 1-44.
Sek, S.K., Teo, X.Q. and Wong, Y.N. (2015), “A comparative study on the effects of oil price changes on
inflation”, Procedia Economics and Finance, Vol. 26, pp. 630-636.
Smeekes, S. and Wijler, E. (2018), “Macroeconomic forecasting using penalized regression methods”,
International Journal of Forecasting, Vol. 34 No. 3, pp. 408-430.
Stock, J.H. and Watson, M.W. (1999), “Forecasting inflation”, Journal of Monetary Economics, Vol. 44
No. 2, pp. 293-335.
Stock, J.H. and Watson, M.W. (2008), Phillips Curve Inflation Forecasts (No. w14322), National Bureau
of Economic Research, Cambridge.
Wang, H. and Leng, C. (2008), “A note on adaptive group lasso”, Computational Statistics and Data
Analysis, Vol. 52 No. 12, pp. 5277-5286.
Yadav, O., Gomes, C., Kanojiya, A. and Yadav, A. (2019), “Inflation prediction model using machine
learning”, International Journal of Information and Computing Science, Vol. 6 No. 5, pp. 121-128.
Yuan, M. and Lin, Y. (2006), “Model selection and estimation in regression with grouped variables”,
Journal of the Royal Statistical Society: Series B, Vol. 68 No. 1, pp. 49-67.
Zou, H. (2006), “The adaptive lasso and its oracle properties”, Journal of the American Statistical
Association, Vol. 101 No. 476, pp. 1418-1429.

Appendix 1
1. MSEs and Number of Selected Variables According to Horizon
[Chart: MSE by forecast horizon (1, 2, 3, 4, 5, 6, 8, 10, 12) for the lasso and elastic net
algorithms.]

[Chart: Number of selected variables by forecast horizon (1, 2, 3, 4, 5, 6, 8, 10, 12) for the
lasso and elastic net algorithms.]

Appendix 2
The supplementary material is available online for this article

Corresponding author
Uğur Akkoç can be contacted at: uakkoc@pau.edu.tr

