Kristian Bondo Hansen, Copenhagen Business School
Big Data & Society 7(1), https://doi.org/10.1177/2053951720926558. First published: May 20, 2020.
Keywords: Ockham’s razor, machine learning models, algorithmic trading, distributed cognition, model overfitting, explainability
In the spring of 2018, I was—as part of my fieldwork into the use of emerging technology in securities trading in financial markets—sitting in on an industry conference on the employment of machine learning (ML) and artificial intelligence (AI) in finance. The conference took place in a rather lavish late eighteenth-century building in central London, which had belonged to the freemasons before being turned into a hotel and then a conference venue. On the second day of the event, a labour union was having a gathering just one floor down from where I sat listening to old-school financiers and new-school data scientists talk about Markov chains, unstructured data, reinforcement learning, LSTMs, autoencoders, and a lot of other very technical stuff. I remember pondering what the union people might think about the “capitalists” upstairs and whether I was perceived as one of them. Thoughts about class affiliation and whether or not I was blending in aside, I was enjoying the conference, especially my coffee break conversations with the new breed of tech-savvy financiers. In one of the more accessible and less hypothetical presentations—none of the participants were interested in sharing trade secrets, just glimpses of their potentially profitable ML and AI models—a young data wizard doing quantitative risk management in a Dutch clearing bank spoke on a late-stage ML model for anomaly detection in undisclosed financial data. Because it was the first presentation of a production-ready ML model and not just a thing on the drawing board, the room was buzzing with excitement. During the Q&A, the presenter was queried about the number of tests and the type and scope of data the model had been trained on. Asked why the team from the clearing bank had only performed a limited number of tests of the model, the presenter replied: “because it works. And I have deadlines too!”
Besides being amusingly frank, the data scientist’s response is telling of a combination of, and tension between, pragmatism and tireless scientific rigour that characterises contemporary quantitative model-driven trading and investment management. Though some algorithms are immensely sophisticated engineering marvels, it is, at the end of the day, their ability to consistently make money that counts. With data scientists rapidly replacing economists in the back, middle and front offices of trading firms, hedge funds and banks, finance seems more and more to be turning into an applied data science industry. While finding an edge in markets is now partly a scientific endeavour, the end goal remains the same: to make money. It is the challenge of devising robust, sophisticated, profitable, yet understandable and thus manageable ML algorithms for trading and investment purposes that I explore in my paper ‘The virtue of simplicity: On machine learning models in algorithmic trading’. More specifically, I engage with the development of such models from the quants’ perspective and analyse their reflections on how to deal with ML techniques, vast datasets, and the dynamism of financial markets in an industry that is in many ways impatient. Drawing on distributed cognition theory, my argument, in essence, is that ML techniques enhance financiers’ ability to take advantage of opportunities, but at the same time carry a degree of unavoidable complexity that developers and users need to find ways to make sense of, manage and control.
The paper shows how ML quants attempt to manage the complexity of their algorithms by resorting to simplicity as a virtuous and pragmatic rule of thumb in model development and model implementation processes. Quants consider simplicity—they are particularly fond of the Ockham’s razor principle, which says that entities should not be multiplied without necessity—a heuristic that helps them manage and control machine learning model complexity. The argument for having simplicity as a rule of thumb in ML modelling is that it ensures comprehensibility, interpretability and explainability. It helps frame the modelling process by making it more foreseeable and controllable, which strengthens accountability. Rather than accounting for every little detail of a learning algorithm, what quants perceive as having an understanding or “feel” for a model is a matter of grasping the algorithm’s basic logic and being capable of interpreting its output. The study contributes to research on the relationship between, and interaction of, humans and algorithms in finance and beyond.
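The quants’ Ockham’s razor heuristic has a well-known statistical counterpart in the problem of overfitting: a model with more parameters than the data warrant fits the noise rather than the signal, and performs worse out of sample. The following sketch illustrates that intuition only—it is not drawn from the paper or from any quant’s actual model. All data, degrees, and names here are invented for illustration: a simple linear fit and a needlessly flexible polynomial are compared on held-out points generated from a linear signal plus noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Underlying signal is a straight line; the noise is what an
# over-complex model ends up "explaining".
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.2, size=x_train.size)

# Held-out points interleaved with the training grid, scored against
# the noise-free signal.
x_test = np.linspace(0.025, 0.975, 20)
y_test = 2.0 * x_test + 1.0

def holdout_rmse(degree):
    """Fit a polynomial of the given degree and return hold-out RMSE."""
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    return float(np.sqrt(np.mean((model(x_test) - y_test) ** 2)))

simple_rmse = holdout_rmse(1)    # no more terms than necessary
complex_rmse = holdout_rmse(15)  # flexible enough to chase the noise

print(f"simple (degree 1) hold-out RMSE:   {simple_rmse:.3f}")
print(f"complex (degree 15) hold-out RMSE: {complex_rmse:.3f}")
```

The simpler model is not only easier to interpret—two coefficients with an obvious reading—it also generalises better here, which is the pragmatic double payoff the quants in the paper point to.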
The research that went into this paper was carried out as part of the ERC-funded interdisciplinary research project ‘AlgoFinance’, which explores algorithm and model use in financial markets. Combining ethnographic field studies with large-scale agent-based simulations of securities markets, we try to understand how algorithms—machine learning and non-machine learning—construct and shape the interaction dynamics of market actors trading with one another. The research team consists of social scientists Christian Borch (PI), Daniel Souleles, Bo Hee Min and myself, and, from the hard sciences side of things, Zachery David, Nicholas Skar-Gislinge, and Pankaj Kumar. In addition to the sociological network perspective on the interaction of trading algorithms, we examine the ways organisations and individuals—traders, portfolio managers, quants, etc.—are affected by, adapt to and try to stay on top of technological advances in the field. One of the things we hope will come of our efforts is a better understanding of the social dynamics underpinning and embedded in the thoroughly quantified and increasingly automated world of securities trading. A big part of shedding light on this social dimension of algorithmic finance is to explore the socio-material assemblages of humans and algorithms, which is exactly what I do in my paper.