Dynamic Nonlinear GARCH Modelling

RNN-GARCH

Deep Learning of GARCH Models
The volatility (changing variance) in time series of returns impacts the pricing of financial instruments,
and it is a key concept in portfolio management, option pricing and financial market regulation. Popular tools for
representing the variance of the return distribution are the Generalized Autoregressive Conditional
Heteroscedastic (GARCH) models [Bollerslev, 1986], [Engle, 1982]. We developed original dynamic nonlinear
GARCH models [Nikolaev, Tino, and Smirnov, 2011] that extend the popular GARCH using
Recurrent Neural Networks (RNN) [Haykin, 2009]. These novel RNN-GARCH models offer not only flexible nonlinear
function representations, but also enable proper dynamic training of the weight parameters, accounting explicitly
for the time dependencies in the data. During training, RNN-GARCH models realize their full potential as
dynamic machines that learn time-dependent functions, and this is their essential advantage over standard GARCH
models, which are estimated simply as static functions.
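As a concrete illustration, the standard GARCH(1,1) recursion and one possible nonlinear recurrent variant can be sketched as follows. The nonlinear variant is a simplified illustration only, not the exact RNN-GARCH architecture of the cited papers, and all function and parameter names here are hypothetical:

```python
import math

def garch11_variance(returns, omega, alpha, beta, h0):
    """Standard GARCH(1,1): h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = [h0]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

def rnn_garch_variance(returns, w_in, w_rec, bias, h0):
    """Illustrative nonlinear recurrent variant: the conditional variance is a
    squashed function of the past squared return and the past variance, passed
    through an exponential to keep it positive. A sketch, not the exact model
    from [Nikolaev, Tino, and Smirnov, 2011]."""
    h = [h0]
    for r in returns[:-1]:
        s = math.tanh(w_in * r * r + w_rec * h[-1] + bias)
        h.append(math.exp(s))  # exp guarantees a positive variance estimate
    return h
```

The key contrast is that the linear recursion is a fixed-form static function of its inputs, whereas the recurrent nonlinear map is the kind of structure whose weights can be trained with temporal (dynamic) derivatives.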
Our current research implements deep learning algorithms [Goodfellow, Bengio, and Courville, 2017] for accurate
training of dynamic nonlinear RNN-GARCH models. The RNN-GARCH models are treated as deep connectionist structures
whose parameter training reflects the temporal ordering of the data [Nikolaev and Iba, 2006]. Contemporary
techniques are applied to compute the temporal weight derivatives when propagating error
information. These temporal weight derivatives are accommodated in Extended Kalman Filters (EKF) [Haykin, 2009]
in order to speed up the convergence of the training process. The noise hyperparameters are evaluated with two
techniques: numerical optimisation and the expectation-maximisation algorithm.
Experimental investigations show that RNN-GARCH models capture the main characteristics of returns, namely their
excess kurtosis, small autocorrelation and high persistence, better than conventional GARCH models.
Current research in value-at-risk estimation through RNN-GARCH demonstrates their usefulness for risk management
in the 1% and 5% critical regions over predefined periods of time.
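A minimal sketch of how a one-step-ahead conditional variance forecast translates into the 1% and 5% value-at-risk figures mentioned above, assuming a conditional Gaussian return distribution (the cited work may use a heavier-tailed conditional distribution; the numbers and names here are purely illustrative):

```python
from statistics import NormalDist

def value_at_risk(mu, sigma2, level):
    """One-step-ahead VaR under a conditional Gaussian assumption: the return
    threshold exceeded (on the loss side) with probability `level`, reported
    as a positive loss figure."""
    z = NormalDist().inv_cdf(level)   # e.g. about -2.326 at 1%, -1.645 at 5%
    return -(mu + z * sigma2 ** 0.5)

# e.g., with a forecast mean of 0 and conditional variance 0.0004 (sigma = 2%):
var_1pct = value_at_risk(0.0, 0.0004, 0.01)
var_5pct = value_at_risk(0.0, 0.0004, 0.05)
```

Backtesting then counts how often realized losses exceed these thresholds over the predefined evaluation period, comparing the empirical exceedance rate to the nominal 1% or 5% level.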

MDN-GARCH

Mixture Density Nonlinear GARCH Networks
Our research develops Mixture Density Nonlinear GARCH (MDN-GARCH) [Nikolaev, Tino and Smirnov, 2013] models
for accurate estimation of the conditional volatility in financial time series. The components of these MDN-GARCH
models are dynamic nonlinear neural networks whose weighted averaging approximates the return distribution
[Schittenkopf, Dorffner, and Dockner, 2000]. More precisely, pairs of expert networks specialize in modelling
the moments of the conditional distribution of returns. Using such dynamic networks helps us to predict online
the time-dependent statistical characteristics of returns, such as mean, variance, skewness and kurtosis.
A distinguishing feature of the MDN-GARCH networks is that all the parameters (mean level, persistence and
moving-average coefficients) are computed with temporal training algorithms, using analytical formulae,
which capture explicitly the dynamics of the data during training.
Preliminary experiments show that the MDN-GARCH models produce results with better statistical characteristics
(mitigated skewness and kurtosis) and economic performance (out-of-sample prediction of future directional changes)
than conventional linear GARCH trained by MCMC sampling and maximum-likelihood estimation. The current research
studies the relation between nonlinear MDN-GARCH and normal-mixture GARCH (NM-GARCH) models [Haas, Mittnik and
Paolella, 2004], [Ausin and Galeano, 2007], which are similar contemporary tools for volatility inference.
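The weighted averaging of mixture components mentioned above can be illustrated with the standard moment formulas for a Gaussian mixture. This is a generic sketch of how component outputs combine into an overall conditional mean and variance, not the MDN-GARCH update equations themselves:

```python
def mixture_moments(weights, means, variances):
    """Mean and variance of a Gaussian mixture: the overall variance combines
    the within-component variances with the spread of the component means
    around the mixture mean."""
    mu = sum(w * m for w, m in zip(weights, means))
    var = sum(w * (v + (m - mu) ** 2)
              for w, m, v in zip(weights, means, variances))
    return mu, var

# Two equally weighted components centred at -1 and +1, each with unit variance:
mu, var = mixture_moments([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```

Even with symmetric Gaussian components, the between-component term `(m - mu)**2` lets the mixture reproduce fat tails and skew that a single conditional Gaussian cannot, which is what makes mixture networks natural tools for matching the higher moments of returns.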

References
Nikolaev, N. and Iba, H. (2006). Adaptive Learning of Polynomial Networks: Genetic Programming, Backpropagation
and Bayesian Methods, Springer, New York.
Nikolaev, N., Tino, P. and Smirnov, E.N. (2011). Time-Dependent Series Variance Estimation via Recurrent Neural Networks,
In: T. Honkela et al. (Eds.), Proc. Int. Conf. on Artificial Neural Networks, ICANN 2011, Espoo, Finland,
LNCS 6971, Springer, pp. 176-184.
Nikolaev, N., Tino, P. and Smirnov, E.N. (2013). Time-Dependent Series Variance Learning with Recurrent
Mixture Density Networks, Neurocomputing, vol. 122, pp. 501-512.
Ausin, M.C. and Galeano, P. (2007). Bayesian Estimation of the Gaussian Mixture GARCH Model,
Computational Statistics and Data Analysis, vol. 51, no. 5, pp. 2636-2652.
Goodfellow, I., Bengio, Y. and Courville, A. (2017). Deep Learning, Adaptive Computation and
Machine Learning Series, MIT Press.
Haas, M., Mittnik, S. and Paolella, M.S. (2004). Mixed Normal Conditional Heteroscedasticity,
Journal of Financial Econometrics, vol. 2, pp. 211-250.
Haykin, S. (2009). Neural Networks and Learning Machines, Pearson Higher Education.
Schittenkopf, C., Dorffner, G. and Dockner, E.J. (2000). Forecasting time-dependent conditional densities:
A semi-nonparametric neural network approach, Journal of Forecasting, vol. 19, pp. 355-374.
Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroscedasticity, Journal of Econometrics, vol. 31, pp. 307-327.
Engle, R.F. (1982). Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation,
Econometrica, vol. 50, pp. 987-1007.
n.nikolaev@gold.ac.uk