Technology

Something Old, Something New – Asymptotics and Neural Networks

On occasion, hype rather than fundamentals drives markets.

Source: Vladimir V. Piterbarg, 2021

What’s perhaps more interesting is that, to an extent, hype also drives financial markets modelling. The latest “trend” here is the use of Neural Networks and, more broadly, Artificial Intelligence (AI) techniques. Phenomenally successful in enabling computers to distinguish between pictures of cats and dogs, these techniques are now peddled incessantly by financial modellers as the solution to pretty much any challenge in modelling financial markets. Find alpha signals – check. Optimise portfolios for risk – check. Price derivatives – check. Solve for credit, funding, and capital risk in derivative portfolios – you got it.

AI successes in image classification, computer vision, language translation, text generation and the like have led financial modellers to apply the same techniques to financial markets modelling. This doesn’t come as a surprise – methods that succeed in one area of knowledge are routinely tried in others, and in doing so contribute to technological progress. So, are we currently living through one of these “modelling bubbles”? It is, after all, human nature to look for nails once in possession of a hammer.

Neural Networks (NNs) are, at their core, fancy interpolators. Give them a set of points with values attached – the more points the better, of course – then ask for the value at a point in between, and voilà, out it comes. No doubt, this is extremely useful. The more dimensions there are to interpolate over, the harder the problem becomes, and the dimensionality and accuracy that modern hardware and modern AI packages such as TensorFlow and PyTorch can achieve are very impressive.

While Neural Network advances are indeed exciting, financial markets do present unique challenges for AI. The data is sparse, especially when compared to the number of cat and dog pictures on the web. Financial time series are highly non-stationary – market conditions change all the time, unlike canine versus feline appearance. But, arguably, the main problem is that, more often than not, we need to solve an extrapolation problem rather than an interpolation one.

Interpolation is about finding good ways to approximate values in between the points we know. Extrapolation is about figuring out values outside the range of the inputs. The latter happens all the time in financial modelling. Market values routinely end up outside of “expected” ranges. Regulators ask us, and rightfully so, to apply stress tests to our models. Standard Monte Carlo and, to an extent, Partial Differential Equation (PDE) methods require significant flexibility in the range of inputs a model must accept.
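To make the distinction concrete, here is a minimal sketch in PyTorch, one of the packages mentioned above; the target function and network are my illustrative choices, not anything from [1] or [2]. A small network fits a simple function well inside its training range, but far outside that range its output is essentially arbitrary – exactly the failure mode that stress tests expose.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical target: train only on the interval [0, 2].
    x_train = torch.linspace(0.0, 2.0, 200).unsqueeze(1)
    y_train = torch.exp(-x_train)

    # A small fully connected network -- a generic "fancy interpolator".
    net = nn.Sequential(
        nn.Linear(1, 32), nn.Tanh(),
        nn.Linear(32, 32), nn.Tanh(),
        nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for _ in range(2000):
        opt.zero_grad()
        loss = ((net(x_train) - y_train) ** 2).mean()
        loss.backward()
        opt.step()

    with torch.no_grad():
        # Inside the training range: interpolation, typically accurate.
        print(net(torch.tensor([[1.0]])).item(), torch.exp(torch.tensor(-1.0)).item())
        # Far outside it: extrapolation, with no guarantees whatsoever.
        print(net(torch.tensor([[6.0]])).item(), torch.exp(torch.tensor(-6.0)).item())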

On top of the need to control model extrapolation, we modellers need to explain how our models arrive at their values. We cannot simply say (nor should we want to) that a very complex Neural Network, essentially a black box, came up with a value for us.

All these considerations led me and my collaborators, Alexandre Antonov and Michael Konikov, to devise a way (see [2], or the longer, free-to-read version [1]) to incorporate explainable asymptotics – which we use for various purposes in financial modelling – into Neural Networks.

Deriving asymptotics lies at the heart of traditional (“old”) financial modelling; it is the bread and butter of a quant’s skill set. We, as quants, take it upon ourselves to understand and capture the behaviour of models for large values of the various parameters that go into them. We do that using tricks of the trade rooted in traditional mathematics and theoretical physics – functional analysis, stochastic calculus and the like – techniques that proved invaluable in the pre-AI days of financial modelling. With our new research, the same techniques can be incorporated into the AI (“new”) models with relative ease.
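A textbook example of what such asymptotics look like (a standard result, not specific to [1] or [2]): the Black–Scholes call price

    C(S) = S N(d_1) - K e^{-rT} N(d_2)

satisfies, as the spot S grows large, N(d_1) \to 1 and N(d_2) \to 1, so that

    C(S) \sim S - K e^{-rT}, \qquad S \to \infty

– a simple linear asymptote that any approximation of the pricing function should reproduce when pushed far beyond the region it was calibrated on.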

The way we suggest going about it also combines the “old” and the “new”. Once we know the behaviour of a given model for large values of (some of) its parameters, we can build a spline function – a fairly traditional quant technique – that captures that behaviour. The trick lies in the type of spline we propose to use. With the spline in place, we then build a special Neural Network layer – what we call a Constrained Radial Layer – that does not interfere with the extrapolation captured by the spline, but matches the residual values in the “middle” of the region where we are approximating our function. We thus handle extrapolation with traditional methods while applying modern Neural Network methods to interpolation, combining, in a sense, the best of both worlds.
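In code, a heavily simplified version of this structure might look as follows. The bump-shaped gate and the linear asymptotic function below are my illustrative stand-ins – the actual spline construction and the Constrained Radial Layer are specified in [1] – but the key property is the same: the network’s correction is forced to zero outside a chosen region, so extrapolation there is governed entirely by the asymptotics.

    import torch
    import torch.nn as nn

    class GatedResidual(nn.Module):
        """NN correction multiplied by a compactly supported radial bump,
        so it vanishes outside a ball of given radius around `center`.
        A simplified stand-in for the Constrained Radial Layer of [1]."""

        def __init__(self, dim, center, radius, hidden=32):
            super().__init__()
            self.register_buffer("center", torch.as_tensor(center, dtype=torch.float32))
            self.radius = radius
            self.net = nn.Sequential(
                nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
            )

        def forward(self, x):
            # Squared distance to the centre, scaled by the radius.
            r2 = ((x - self.center) ** 2).sum(dim=-1, keepdim=True) / self.radius ** 2
            # Smooth bump: exp(1 - 1/(1 - r^2)) inside the unit ball, 0 outside.
            safe_r2 = r2.clamp(max=1.0 - 1e-6)
            bump = torch.where(r2 < 1.0,
                               torch.exp(1.0 - 1.0 / (1.0 - safe_r2)),
                               torch.zeros_like(r2))
            return bump * self.net(x)

    def asymptotic(x):
        # Hypothetical known large-x behaviour; in practice a special spline
        # fitted to the asymptotics derived from the model.
        return 2.0 * x.sum(dim=-1, keepdim=True) + 1.0

    residual = GatedResidual(dim=1, center=[1.0], radius=2.0)

    def model(x):
        # Asymptotics control extrapolation; the NN matches residuals inside.
        return asymptotic(x) + residual(x)

    x = torch.tensor([[0.5], [10.0]])
    print(model(x))  # at x = 10 the NN correction is exactly zero

Training such a model only ever adjusts the interpolation inside the ball; no amount of fitting can corrupt the behaviour at extreme inputs.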

For those with an interest in the history of mathematics, we trace the lineage of some of our ideas to Hilbert’s 13th problem, posed in 1900 and solved by the famous Russian mathematicians Kolmogorov and Arnold in the late 1950s (see [3]).
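For completeness, their representation theorem states that any continuous function of n variables on the unit cube can be assembled from univariate continuous functions and addition alone:

    f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\Big( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \Big)

– a superposition structure in which one can recognise a distant ancestor of the feed-forward Neural Network.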

Whether we like it or not, as with any (potential) bubble, the consequences of the exuberance around applying AI to financial modelling are here to stay. We hope, however, that our research helps us avoid the worst of its excesses.

References

[1] Antonov, A., Konikov, M., & Piterbarg, V. V. (2020). Neural Networks with Asymptotics Control. SSRN working paper. https://ssrn.com/abstract=3544698

[2] Antonov, A., Konikov, M., & Piterbarg, V. V. (2021). Deep asymptotics. Risk Magazine.

[3] Wikipedia, Kolmogorov–Arnold representation theorem, https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold_representation_theorem

Author: Vladimir V. Piterbarg, Global Head of Quantitative Analytics
