Volatility forecasting has wide-ranging applications in financial markets, from supplying model assumptions and supporting risk management to derivatives pricing and trading. Practitioners approach the problem in different ways and have developed a variety of models that use historical data to produce forecasts. Below is a selection of academic research on the topic:

Forecasting a Volatility Tsunami

The empirical aim of this paper is motivated by the anecdotal belief among the professional and non-professional investment community that a “low” reading of the CBOE Volatility Index (VIX), or a large decline in it, is by itself ample reason to believe that volatility will spike in the near future. While the Volatility Index can be a useful tool for investors and traders, it is often misinterpreted and poorly used. This paper demonstrates that the dispersion of the Volatility Index is a better predictor of future VIX spikes.
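
To make the idea of VIX “dispersion” concrete, here is a minimal sketch that measures it as a rolling standard deviation of VIX closes and flags days when both the level and the dispersion are unusually low. The 20-day window, the 10th-percentile cutoffs, and the function name are illustrative assumptions of mine, not parameters taken from the paper.

```python
import pandas as pd

def low_vix_dispersion_flags(vix: pd.Series,
                             window: int = 20,
                             pct: float = 0.10) -> pd.DataFrame:
    """Flag days when both the VIX level and its rolling dispersion are low.

    `vix` is assumed to be a pandas Series of daily VIX closes indexed by date.
    The 20-day window and 10th-percentile cutoffs are illustrative choices.
    Full-sample quantiles are used only for simplicity; a proper study would
    restrict itself to information available at each point in time.
    """
    dispersion = vix.rolling(window).std()
    out = pd.DataFrame({"vix": vix, "dispersion": dispersion})
    out["low_level"] = vix <= vix.quantile(pct)
    out["low_dispersion"] = dispersion <= dispersion.quantile(pct)
    # Candidate "calm before the storm" days: low level and unusually tight dispersion
    out["flag"] = out["low_level"] & out["low_dispersion"]
    return out
```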

Volatility Forecasting I: GARCH Models

The first and simplest model we will look at is an ARCH model, which stands for Autoregressive Conditional Heteroscedasticity. The AR comes from the fact that these models are autoregressive models in squared returns, which we will demonstrate later in this section. The conditional comes from the fact that in these models, next period’s volatility is conditional on information this period…
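
To make the “autoregressive in squared returns” point concrete, here is a minimal numpy sketch of an ARCH(1) process, where next period’s conditional variance is a linear function of this period’s squared return. The parameter values are illustrative and are not taken from the article.

```python
import numpy as np

def simulate_arch1(n: int = 2000,
                   omega: float = 0.1,
                   alpha: float = 0.5,
                   seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Simulate an ARCH(1) process: sigma2_t = omega + alpha * r_{t-1}**2.

    Parameter values are illustrative; alpha < 1 keeps the variance finite.
    """
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1.0 - alpha)          # start at the unconditional variance
    r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        # Next period's variance is conditional on this period's squared return
        sigma2[t] = omega + alpha * r[t - 1] ** 2
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2


returns, cond_var = simulate_arch1()
squared = returns ** 2
# Volatility clustering shows up as positive autocorrelation in squared returns
lag1_acf = np.corrcoef(squared[1:], squared[:-1])[0, 1]
print(f"lag-1 autocorrelation of squared returns: {lag1_acf:.2f}")
```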

A Practical Guide to Volatility Forecasting through Calm and Storm

We present a volatility forecasting comparative study within the autoregressive conditional heteroskedasticity (ARCH) class of models. Our goal is to identify successful predictive models over multiple horizons and to investigate how predictive ability is influenced by choices for estimation window length, innovation distribution, and frequency of parameter reestimation. Test assets include a range of domestic and international equity indices and exchange rates. We find that model rankings are insensitive to the forecast horizon and suggestions for best practices emerge. While our main sample spans from 1990 to 2008, we take advantage of the near-record surge in volatility during the last half of 2008 to ask whether forecasting models or best practices break down during periods of turmoil. Surprisingly, we find that volatility during the 2008 crisis was well approximated by predictions made one day ahead, and should have been within risk managers’ 1% confidence intervals up to one month ahead.
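
The kind of setup the study evaluates (a rolling estimation window, periodic re-estimation, and one-day-ahead predictions) can be sketched with the open-source `arch` package. This is a minimal illustration rather than a reproduction of the paper: the GARCH(1,1) specification with normal innovations, the 1,000-day window, daily re-estimation, and the function name are my own illustrative assumptions.

```python
import pandas as pd
from arch import arch_model  # third-party package: pip install arch

def rolling_one_day_ahead(returns: pd.Series, window: int = 1000) -> pd.Series:
    """Re-estimate a GARCH(1,1) model each day on the trailing `window`
    observations and record the one-day-ahead conditional variance forecast.

    Returns are rescaled to percent so the optimizer behaves well; the
    forecasts are therefore in percent-squared units.
    """
    scaled = returns * 100
    forecasts = {}
    for t in range(window, len(scaled)):
        train = scaled.iloc[t - window:t]
        res = arch_model(train, p=1, q=1).fit(disp="off")  # GARCH(1,1), constant mean
        # Variance forecast made at t-1 for day t
        forecasts[scaled.index[t]] = res.forecast(horizon=1).variance.iloc[-1, 0]
    return pd.Series(forecasts, name="one_day_ahead_variance")
```

Varying `window`, the model specification, or how often the fit is refreshed is exactly the sort of best-practice question the paper examines across assets and horizons.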

Predicting Volatility

Uncertainty is inherent in every financial model. It is driven by changing fundamentals, human psychology, and the manner in which the markets discount potential future states of the macroeconomic environment. While denying uncertainty in financial markets can quickly escalate into philosophical discussions, volatility is widely accepted as a practical measure of risk. Most market variables remain largely unpredictable, but volatility has certain characteristics that can increase the accuracy of its forecasted values. The statistical nature of volatility is one of the main catalysts behind the emergence of volatility targeting and risk parity strategies.
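
One of the characteristics that makes volatility more forecastable than returns is persistence, or volatility clustering: absolute or squared returns are positively autocorrelated even when returns themselves are not. Below is a minimal diagnostic sketch, assuming a pandas Series of daily returns; the function name and lag count are illustrative.

```python
import pandas as pd

def persistence_check(returns: pd.Series, max_lag: int = 5) -> pd.DataFrame:
    """Compare the autocorrelation of raw returns with that of absolute returns.

    Volatility clustering shows up as much stronger autocorrelation in
    |returns| than in the returns themselves.
    """
    abs_r = returns.abs()
    rows = []
    for lag in range(1, max_lag + 1):
        rows.append({
            "lag": lag,
            "acf_returns": returns.autocorr(lag),
            "acf_abs_returns": abs_r.autocorr(lag),
        })
    return pd.DataFrame(rows).set_index("lag")
```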
