As hedge funds increasingly turn to machine learning techniques in their trading strategies, regulators and clients are having to come to terms with money being managed by black box algorithms. So who is responsible when those algorithms go wrong?
European Union lawmakers have attempted to legislate against algorithm-centric funds disrupting the market with the Markets in Financial Instruments Directive (MiFID II). Britain's Financial Conduct Authority (FCA), meanwhile, is charged with maintaining order in UK markets.
But are regulators doing enough to keep a lid on investment strategies that lean more and more heavily on artificial intelligence? And who is held accountable if the technology goes rogue?
As reported by Bloomberg last year: "By 2015 artificial intelligence was contributing roughly half the profits in one of Man's biggest funds, the AHL Dimension Programme that now manages $5.1 billion, even though AI had control over only a small proportion of overall assets."
Aside from Man Group, other firms including Renaissance Technologies, Two Sigma, Bridgewater Associates, Steve Cohen's Point72 Ventures, San Francisco-based upstart Sentient and UBS are either experimenting with AI technology or hiring experts.
When it comes to AI in these circles, the idea is to supercharge the sort of algorithmic or high-frequency trading popularised in Michael Lewis's book Flash Boys, by designing machine learning algorithms that can find previously unforeseen market trends and execute trades faster than the competition, helping them to make bets on market momentum quicker than a human ever could.
Machine learning at Man AHL
The use of machine learning techniques at Man Group occurs primarily within its Man AHL unit, which was founded in 1987 and focuses on quantitative investment management: trading in which a computer algorithm automatically determines the parameters of orders - be that timing, price or quantity - with limited human intervention.
Broadly speaking, Man AHL researchers would traditionally program computers with specific trends to look for and let them execute on those strategies. Now, with machine learning algorithms - or more specifically, deep learning - researchers can set computers to discover trends and execute on them without being given explicit guidance.
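To make the distinction concrete, here is a deliberately toy sketch in Python. It contrasts a researcher-specified rule (a moving-average crossover, a classic hand-coded trend signal) with a signal whose key parameter is fitted from past data rather than chosen by a human. All function names and the price series are illustrative; this bears no relation to Man AHL's actual models.

```python
# Toy contrast between a hand-coded trend rule and a data-fitted one.
# Purely illustrative - not any firm's real trading logic.

def hand_coded_signal(prices, fast=3, slow=6):
    """Researcher-specified rule: go long (+1) when the fast moving
    average sits above the slow one, otherwise go short (-1)."""
    def sma(n):
        return sum(prices[-n:]) / n
    return 1 if sma(fast) > sma(slow) else -1

def fitted_signal(history, prices):
    """'Learned' rule: pick the lookback window that best predicted
    next-step direction on past data, then apply it to current prices."""
    def direction(series, n):
        return 1 if series[-1] > series[-1 - n] else -1
    best_n, best_score = 1, -1
    for n in range(1, 5):
        # Score lookback n by how often it called the next move correctly.
        score = sum(
            1 for t in range(n + 1, len(history))
            if direction(history[:t], n)
            == (1 if history[t] > history[t - 1] else -1)
        )
        if score > best_score:
            best_n, best_score = n, score
    return direction(prices, best_n)

prices = [100, 101, 103, 102, 104, 107, 108]
print(hand_coded_signal(prices))        # -> 1 (rule says: uptrend)
print(fitted_signal(prices, prices))    # -> 1 (fitted rule agrees here)
```

The point of the contrast: in the first function a human chose the logic and the windows; in the second, the data chose the parameter. Deep learning pushes this much further, letting the model choose the structure of the signal itself, which is where the 'black box' problem begins.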
This is how DeepMind's AlphaGo devised strategies to beat the best human players at the ancient board game Go: given the rules of the game and the desired outcome, it taught itself moves that even the best players couldn't foresee.
As written in a report on Man AHL's website: "What these early machine learning algorithms turned out to be good at is eking out more subtle, non-linear, relationships within data... After three years of trading, and with ongoing research, we believe that these kinds of models, when unconstrained, may help identify directional market behaviour including trends, in a way that can be complementary to existing models." (Emphasis is theirs).
The problem with this approach is that it is inherently something of a 'black box', with even the designers of the algorithm somewhat in the dark as to how the machine decides what to do.
This gave Man Group's CEO Luke Ellis some pause for thought, according to the Bloomberg article. Ellis understood that even though the new approach was paying immediate dividends, quite literally, being unable to explain how the AI managing clients' funds operates simply "would never fly with big clients looking for answers when Man inevitably lost some of their money," as Bloomberg writes.
Hedge funds by their nature are less regulated than other investing institutions, because they are private investment vehicles for super-wealthy clients. In theory, hedge funds can invest client money however they see fit, as long as they are upfront with clients from the outset.
Man Group, for example, keeps its clients informed about the new strategies it is implementing through half-yearly reports and regular direct communication. These reports won't exactly plumb the depths of the algorithms - possibly because they couldn't even if they wanted to - but aim to keep clients aware that machine learning algorithms will be increasingly deployed for trading their assets.
Take Man Group's interim results statement (pdf) for the first half of 2017: "AHL has also expanded the focus of their research in machine learning and data analytics, to provide growth opportunities from utilising new research techniques and forms of data. While this initiative is still very much in the research phase, a number of new machine learning-based signals have been added to the Dimension, Alpha and Diversified programmes this year."
The relevance of this comes down to the appetite for risk that hedge fund investors have. Take clients of Michael Burry, the founder of hedge fund Scion Capital who was portrayed by Christian Bale in the Hollywood version of The Big Short. Burry had to inform his investors that he was aggressively betting their money against mortgage-backed securities, causing many to question his methods, pressure him to alter his strategy and threaten to pull their money out. Burry eventually saw a $2.69 billion profit from that bet.
So, taking the flip side of this example, if an AI version of Burry were to register a sharp loss, for whatever reason, would clients of this fund have access to a complaints procedure or any form of redress from regulators? Or is it just part of the risk of doing business with a hedge fund?
The key piece of legislation when it comes to algorithmic trading is MiFID II. The Markets in Financial Instruments Directive is a piece of EU legislation intended to protect investors in a rapidly changing financial landscape, specifically in markets that lean heavily on dark pool trading of equities and high-frequency trading.
MiFID II took effect in January 2018, and requires testing and pre/post trade controls, as well as a 'kill functionality' to be put in place to prevent algorithms contributing to market disorder or market abuse. Firms must also keep records, "with a description of the nature of its algorithmic trading strategies, details of trading parameters or limits, key compliance and risk controls and details of testing." The FCA then regulates algorithmic trading firms to ensure they meet these standards.
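The controls MiFID II describes can be sketched in code. The following minimal Python sketch shows the shape of a pre-trade control layer with a kill switch and an audit trail; the class, limits and messages are all illustrative assumptions, not taken from the regulation's technical standards or any real trading system.

```python
# Hedged sketch of pre-trade controls, record-keeping and a 'kill
# functionality' of the kind MiFID II requires. All names and limits
# here are illustrative, not regulatory text.

class TradingGate:
    def __init__(self, max_order_qty, max_notional):
        self.max_order_qty = max_order_qty    # per-order size limit
        self.max_notional = max_notional      # per-order value limit
        self.killed = False
        self.audit_log = []                   # record-keeping requirement

    def kill(self, reason):
        """Kill functionality: halt all algorithmic order flow at once."""
        self.killed = True
        self.audit_log.append(("KILL", reason))

    def check_order(self, qty, price):
        """Pre-trade controls: reject any order breaching the limits."""
        if self.killed:
            return False, "kill switch engaged"
        if qty > self.max_order_qty:
            return False, "order quantity limit breached"
        if qty * price > self.max_notional:
            return False, "notional limit breached"
        self.audit_log.append(("ORDER", qty, price))
        return True, "accepted"

gate = TradingGate(max_order_qty=10_000, max_notional=1_000_000)
print(gate.check_order(qty=500, price=100))     # (True, 'accepted')
print(gate.check_order(qty=50_000, price=100))  # (False, 'order quantity limit breached')
gate.kill("algorithm behaving anomalously")
print(gate.check_order(qty=500, price=100))     # (False, 'kill switch engaged')
```

The design point is that the controls sit outside the trading algorithm itself: even a model nobody fully understands can only act through a gate whose rules are simple, auditable and switchable off.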
But MiFID II is pretty vague about what goes on under the covers, stating that compliance staff must "have at least a general understanding of the way in which the firm's algorithmic trading systems and algorithms operate, and that they be in continuous contact with those who have a detailed knowledge of the latter."
And there is little mention of cutting-edge techniques like machine and deep learning.
The FCA and MiFID II
A February 2018 publication from the FCA on Algorithmic Trading Compliance in Wholesale Markets (pdf) does specify that "all firms engaged in algorithmic trading need to maintain an appropriate development and testing framework, which is consistently applied across all relevant aspects of the business. This is particularly important where firms are using innovative technology such as machine learning techniques."
It adds that firms must "define and identify substantial or material changes to their algorithms." The FCA also asserts that it may force firms to provide "a description of the nature of its algorithmic trading strategies" within 14 days of a request.
What the FCA doesn't appear to acknowledge, however, is that even the designers of deep learning algorithms will struggle to provide "a description of the nature of its algorithmic trading strategies."
The FCA document does directly reference advanced data techniques, but offers little clarification for firms when it comes to their responsibilities. It reads: "As part of our reviews, we also considered developments in machine learning and artificial intelligence. In these cases, the risks associated with market conduct may be heightened and it is particularly important for firms to consider the potential implications."
Lastly, should UK firms even care about MiFID II with Brexit looming? Well, the FCA has stated that until the UK formally withdraws from the European Union, EU law will continue to apply, and firms should continue to work on implementing new EU legislation such as MiFID II.
What the lawyers say
With all that considered, would the client of an AI-managed fund have any access to redress or a complaints procedure in the courts if an algorithm were to fail?
Jacob Ghanty, head of financial regulation at law firm Kemp Little, explained to Computerworld UK: "Remember that poor performance of a fund is not necessarily a disciplinary matter and doesn't give grounds for a complaint by an investor. Funds underperform all of the time and that isn't a regulatory breach - it could be, but it could also be that the strategy selected by the manager, AI-enabled or not, just hasn't worked out."
For its part, the FCA says that managing investments is a regulated activity, so anything a firm does within a fund is subject to existing rules, such as MiFID II. More specifically, hedge funds tend to fall under the remit of AIFMD – the Alternative Investment Fund Managers Directive. The FCA confirmed to Computerworld UK that there is nothing specific in the rules about the use of artificial intelligence at this time though.
Ghanty believes the FCA's cautious approach is consistent with legal precedent, though. "The FCA doesn't have a specific regime for how a manager would deploy the technologies, so they will look more generally," he said. "That doesn't surprise me because the classic English law approach is to apply long-established legal principles to each new set of facts as they arrive.
"So it is rare under English law to develop new kinds of laws to fit around technologies. It is usually the way with the courts that new technology fits in with the legal system, which might sound outdated."
However, the FCA could get involved if the algorithm has a perceived flaw in it.
"The FCA might look at the hedge fund manager in terms of this being another of their internal systems and, like with all other systems, you have a responsibility to make sure it is fit for purpose in the firm's duty to act in the client's best interest," Ghanty explained.
The FCA declined repeated requests by Computerworld UK to explain its regulatory approach to AI specifically, and refused to make anyone available for interview. Man Group also declined to comment for this story.
In terms of the safeguards being put in place by the firms themselves, naturally they aren't too keen to discuss their inner workings with the outside world.
All we know about what Man AHL has specifically built into its algorithms is what's reported by Bloomberg: "compliance and risk management rules are ingrained into the system's DNA, preventing it from going rogue or breaking the law as a fast track to profit."
On this point, Ghanty from Kemp Little said: "Managers need to review their internal risk and compliance systems around using these kind of techniques going forward. It's not just about putting the right documents in front of an investor but also how they operate their business internally.
"So do they have the right internal checks and balances and monitor the system accordingly? Do they have the capability and do people in the compliance and risk functions have the ability to monitor the activities of these programmes?"
Nick Granger, who pioneered the use of these techniques within Man Group and has since become AHL's chief investment officer, told Bloomberg that safeguards include human examination of unusual trades prior to execution, and an autopsy tool to help engineers understand why the AI made certain decisions, making it slightly less of a black box.
Regardless, the beauty of the system is its unknowability. If the fund managers knew exactly what it was doing and how it was doing it, they wouldn't have their edge.