Dynamic Nonlinear GARCH Modelling
The volatility (time-varying variance) of return series affects the pricing of financial instruments, and it is a key concept in portfolio management, option pricing and financial market regulation. Popular tools for representing the variance of the return distribution are the Generalized Autoregressive Conditional Heteroscedastic (GARCH) models [Engle, 1982], [Bollerslev, 1986]. We developed original dynamic nonlinear GARCH models [Nikolaev, Tino, and Smirnov, 2011] that extend the popular GARCH using Recurrent Neural Networks (RNN) [Haykin, 2009]. These novel RNN-GARCH models offer not only flexible nonlinear function representations, but also proper dynamic training of the weight parameters that accounts explicitly for the time dependencies in the data. Such training exploits their full potential as dynamic machines that learn time-dependent functions, and this is their essential advantage over standard GARCH models, which are estimated simply as static functions.
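To make the contrast concrete, the sketch below places a standard GARCH(1,1) variance recursion next to a hypothetical nonlinear recurrent variant. The particular network form (a single tanh hidden unit with one recurrent weight) is an assumption for illustration only, not the model of [Nikolaev, Tino, and Smirnov, 2011].

```python
import numpy as np

def garch11_variance(returns, omega=0.05, alpha=0.1, beta=0.85):
    """Standard GARCH(1,1): sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)                 # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1]**2 + beta * sigma2[t - 1]
    return sigma2

def rnn_garch_variance(returns, w_in=0.1, w_rec=0.8, w_out=1.0, bias=0.05):
    """Illustrative nonlinear RNN-GARCH (hypothetical form): a recurrent
    hidden state passed through tanh replaces the linear GARCH recursion."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)
    h = 0.0                                     # recurrent hidden state
    for t in range(1, len(returns)):
        h = np.tanh(w_in * returns[t - 1]**2 + w_rec * h)
        sigma2[t] = bias + w_out * h**2         # squaring keeps the variance positive
    return sigma2
```

The recurrent state h carries information across time steps, which is what makes the nonlinear model dynamic rather than a static function of the last observation.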
Our current research implements deep learning algorithms [Goodfellow, Bengio, and Courville, 2017] for accurate training of dynamic nonlinear RNN-GARCH models. The RNN-GARCH models are treated as deep connectionist structures whose parameter training reflects the temporal ordering of the data [Nikolaev and Iba, 2006]. Special contemporary techniques are applied to compute the temporal weight derivatives when propagating error information. These temporal derivatives are accommodated in Extended Kalman Filters (EKF) [Haykin, 2009] in order to speed up the convergence of the training process. The noise hyperparameters are evaluated with two techniques: numerical optimisation, and the expectation-maximisation algorithm.
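A minimal sketch of one such EKF weight update, under the common assumptions that the weights follow a random walk and the innovation is the prediction error on the squared return; the scalar noise variances R and Q stand in for the noise hyperparameters mentioned above and are placeholders, not the values used in the actual study.

```python
import numpy as np

def ekf_step(w, P, grad, err, R=1.0, Q=1e-6):
    """One EKF update for a weight vector w with covariance P.
    grad: temporal derivative d(sigma2_t)/dw (the measurement Jacobian),
    err:  innovation, e.g. r_t**2 - sigma2_t(w),
    R, Q: measurement- and process-noise variances (hyperparameters)."""
    P = P + Q * np.eye(len(w))          # time update: random-walk weights
    S = grad @ P @ grad + R             # scalar innovation variance
    K = P @ grad / S                    # Kalman gain
    w = w + K * err                     # measurement update of the weights
    P = P - np.outer(K, grad @ P)       # covariance update: (I - K H) P
    return w, P
```

Because the gain K is scaled by the innovation variance S, weights whose derivatives carry more information receive larger corrections, which is what accelerates convergence relative to plain gradient descent.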
Experimental investigations show that RNN-GARCH models capture the main characteristics of returns, namely their excess kurtosis, small autocorrelation and high persistence, better than conventional GARCH models. Current research on value-at-risk estimation through RNN-GARCH demonstrates their usefulness for risk management in the 1% and 5% critical regions over predefined periods of time.
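As a sketch of how such value-at-risk figures are obtained from a variance forecast, assuming a Gaussian conditional distribution (the quantile choice is an illustrative assumption, not necessarily the distribution used in the study):

```python
from statistics import NormalDist
import numpy as np

def value_at_risk(sigma2_forecast, mu=0.0, level=0.05):
    """One-step Gaussian VaR: the loss threshold exceeded with probability
    `level` (0.05 for the 5% region, 0.01 for the 1% region)."""
    z = NormalDist().inv_cdf(level)             # -1.645 at 5%, -2.326 at 1%
    return -(mu + z * np.sqrt(sigma2_forecast))

def coverage(returns, var_series):
    """Fraction of periods in which the realised loss exceeded the VaR
    forecast; close to `level` for a well-calibrated volatility model."""
    return np.mean(returns < -np.asarray(var_series))
```

Comparing the empirical coverage against the nominal 1% and 5% levels is the standard backtest for judging whether a volatility model is useful for risk management.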
Our research develops Mixture Density Nonlinear GARCH (MDN-GARCH) models [Nikolaev, Tino and Smirnov, 2013] for accurate estimation of the conditional volatility in financial time series. The components of these MDN-GARCH models are dynamic nonlinear neural networks whose weighted averaging approximates the return distribution [Schittenkopf, Dorffner, and Dockner, 2000]. More precisely, pairs of expert networks specialize in modelling the moments of the conditional distribution of returns. Using such dynamic networks helps us to predict online the time-dependent statistical characteristics of returns, such as mean, variance, skewness and kurtosis. A distinguishing feature of the MDN-GARCH networks is that all the parameters (mean level, persistence and moving average coefficients) are computed with temporal training algorithms, using analytical formulae, which capture explicitly the dynamics of the data during training.
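The statistical characteristics above follow in closed form from the mixture outputs. The sketch below computes the mean, variance, skewness and kurtosis of a Gaussian mixture from component weights, means and variances; in an MDN-GARCH these three vectors would be produced by the expert networks at each time step, so this static computation is illustrative only.

```python
import numpy as np

def mixture_moments(pi, mu, sigma2):
    """Moments of a Gaussian mixture with weights pi, means mu, variances sigma2.
    Returns (mean, variance, skewness, kurtosis)."""
    pi, mu, sigma2 = map(np.asarray, (pi, mu, sigma2))
    m = np.sum(pi * mu)                                   # mixture mean
    var = np.sum(pi * (sigma2 + mu**2)) - m**2            # mixture variance
    # non-central 3rd and 4th moments of a Gaussian mixture
    m3 = np.sum(pi * (mu**3 + 3 * mu * sigma2))
    m4 = np.sum(pi * (mu**4 + 6 * mu**2 * sigma2 + 3 * sigma2**2))
    # convert to central moments, then normalise
    c3 = m3 - 3 * m * var - m**3
    c4 = m4 - 4 * m * m3 + 6 * m**2 * (var + m**2) - 3 * m**4
    return m, var, c3 / var**1.5, c4 / var**2
```

Note that even a symmetric mixture of two zero-mean Gaussians with different variances has kurtosis above 3, which is how mixture models reproduce the fat tails of returns.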
Preliminary experiments show that the MDN-GARCH models produce results with better statistical characteristics (mitigated skewness and kurtosis) and economic performance (out-of-sample prediction of future directional changes) than conventional linear GARCH models trained by MCMC sampling and maximum-likelihood estimation. The current research studies the relation between nonlinear MDN-GARCH and normal-mixture GARCH (NM-GARCH) models [Haas, Mittnik and Paolella, 2004], [Ausin and Galeano, 2007], which are similar contemporary tools for volatility inference.
Ausin, M.C. and Galeano, P. (2007). Bayesian Estimation of the Gaussian Mixture GARCH Model, Computational Statistics and Data Analysis, vol. 51, no. 5, pp. 2636-2652.
Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroscedasticity, Journal of Econometrics, vol. 31, pp. 307-327.
Engle, R.F. (1982). Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica, vol. 50, pp. 987-1007.
Goodfellow, I., Bengio, Y. and Courville, A. (2017). Deep Learning, Adaptive Computation and Machine Learning Series, MIT Press.
Haas, M., Mittnik, S. and Paolella, M.S. (2004). Mixed Normal Conditional Heteroscedasticity, Journal of Financial Econometrics, vol. 2, pp. 211-250.
Haykin, S. (2009). Neural Networks and Learning Machines, Pearson Higher Education.
Nikolaev, N. and Iba, H. (2006). Adaptive Learning of Polynomial Networks: Genetic Programming, Backpropagation and Bayesian Methods, Springer, New York.
Nikolaev, N., Tino, P. and Smirnov, E.N. (2011). Time-Dependent Series Variance Estimation via Recurrent Neural Networks, In: T. Honkela et al. (Eds.), Proc. Int. Conf. on Artificial Neural Networks, ICANN-2011, Espoo, Finland, LNCS 6971, Springer, pp. 176-184.
Nikolaev, N., Tino, P. and Smirnov, E.N. (2013). Time-Dependent Series Variance Learning with Recurrent Mixture Density Networks, Neurocomputing, vol. 122, pp. 501-512.
Schittenkopf, C., Dorffner, G. and Dockner, E.J. (2000). Forecasting Time-Dependent Conditional Densities: A Semi-Nonparametric Neural Network Approach, Journal of Forecasting, vol. 19, pp. 355-374.