London Stock Exchange price data failures ‘emerged immediately at Millennium launch’

Are there LSE caching problems, or have the vendors connected badly?

Within the first 20 seconds of the London Stock Exchange’s new matching engine going live on Monday, price data vendors began displaying incorrect prices, blank prices and wrong trading volumes, according to Computerworld UK sources.

Thomson Reuters, Interactive Data and Netbuilder, among the largest vendors providing share prices to traders, have been displaying pricing problems on some stocks throughout the week. Even ProQuote, the LSE’s own data vendor, experienced problems.

Concerns are being raised that some of the large data vendors may have incorrectly set up their connections or software interfaces. Alternatively, there may be a data caching issue at the LSE, meaning outgoing data is not properly synchronised between different systems.

Vendors have been working from substantially differing stock transaction volume data, according to sources at benchmarking firms that closely monitor the vendors’ networks. The benchmarking companies sell comparison data to large brokerages looking to identify the fastest and most stable vendors. The data is also sold to data firms that want to compare themselves against competitors.

“Within 20 seconds of Millennium Exchange going live on Monday, our systems flagged up significant discrepancies in vendors’ data on share volumes sold on the exchange,” said one source, at the continental office of a tracking firm. “It was a much bigger discrepancy than I have ever seen before, and much bigger than those same vendors were experiencing on different exchanges. It alarmed me.”

So far, no official reasons for the problems have been given by vendors or the stock exchange, and it appears the reasons are not yet fully understood.

Observers suggested there were two possible causes. “Either the vendors have set up their connections and their coding incorrectly, or the exchange has caching problems,” one said.

A caching problem could lead to different data being sent out because of synchronisation issues between the separate LSE databases linked to different data vendors. But this has not been confirmed.
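To illustrate how such a synchronisation issue could produce the symptoms described, here is a minimal sketch. It is a hypothetical model, not the LSE’s actual architecture: two cache replicas feed two separate vendor links, and one replica runs a couple of updates behind the master tick stream.

```python
# Hypothetical sketch: two cache replicas feed two vendor links.
# If one replica lags behind the master tick stream, the two vendors
# see different "current" prices for the same stock at the same moment.

class CacheReplica:
    def __init__(self, lag):
        self.lag = lag        # number of updates this replica runs behind
        self.price = None     # last price visible to the vendor
        self.pending = []     # updates received but not yet applied

    def publish(self, price):
        self.pending.append(price)
        if len(self.pending) > self.lag:
            self.price = self.pending.pop(0)

vendor_a = CacheReplica(lag=0)   # in sync with the matching engine
vendor_b = CacheReplica(lag=2)   # two updates behind

for tick in [100.0, 100.5, 101.0]:   # same tick stream sent to both replicas
    vendor_a.publish(tick)
    vendor_b.publish(tick)

print(vendor_a.price, vendor_b.price)   # 101.0 100.0 -- same exchange, different prices
```

A lagging replica would also account for blank prices (`vendor_b.price` stays `None` until enough updates arrive) and mismatched volume figures, two of the symptoms sources described.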

Monday's launch of the Millennium Exchange matching engine, written in C++ and running in Novell SUSE Linux-based datacentres, was largely a success in pure stability and latency terms for order messages, with a notable 125-microsecond latency and no outages.

However, traders contacted Computerworld UK with problems: some said they could not access prices for certain stocks, while others said they had sent trade orders based on prices that were no longer relevant or accurate.

Separately, there was also a closing-auction delay on Tuesday that initially knocked out automated trades in a 42-second window.

The new system replaced Microsoft .Net-based trading software, written in C# by Accenture, which ran on Windows Server and SQL Server. That system suffered a devastating eight-hour outage in 2007 during heavy trading. The outage was a major factor in the system being scrapped, alongside its ongoing two-millisecond latency. The exact cause of the outage was never disclosed.
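Taken at face value, the two latency figures quoted for the old and new systems imply roughly a sixteen-fold improvement per order message. A quick back-of-envelope check:

```python
# Back-of-envelope comparison of the quoted per-message latencies.
tradelect_latency_s = 2e-3      # old C#/.Net system: ~2 milliseconds
millennium_latency_s = 125e-6   # new C++ system: 125 microseconds

speedup = tradelect_latency_s / millennium_latency_s
print(speedup)   # 16.0
```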

This week, following the problems on the new system, the exchange and all of the vendors have publicly insisted they are working together to find solutions. But statements from the exchange, and messages from vendors to clients, appear to apportion blame elsewhere.

A Thomson Reuters message to clients blamed “an issue at the exchange”. The LSE later issued a statement that “unfortunately a couple of market data vendors have experienced some specific issues aligning to the new Millennium Exchange platform”, while “all other” participants were “trading successfully”.

Behind the scenes, signs of growing frustration are clear. Staff at the vendors said they did not want to publicly blame anyone, and even accepted there were likely errors on both sides, but expressed concern over what was happening.

Meanwhile, the private view at the exchange is that the 15 month testing window should have been plenty of time for the vendors to interface perfectly with it. The fact the majority of smaller vendors were fine demonstrated that those having trouble had made mistakes. Having accurate pricing data, naturally, is equally important to the exchange and its reputation.

Others have raised concerns that, while the technical documentation the exchange issued over the past year was extensive and clear on connections, networking and coding, somewhere between certain vendors and the LSE there were significant technology misunderstandings, which have not been fully overcome.

It is likely to be a busy weekend for the vendors involved, who said they were acting fast to find “workaround” solutions. Software developers were working long hours and night shifts, rewriting extensive amounts of code to make the data display correctly. The LSE is in close communication with them.

Comments

  • basilarchia: Someone should get wthnospam a free beer for being one of those people that still think Windows is just as good as Linux.
  • anonyboy: Exactly. There is no direct port of complex systems from C# to C++, so what are we really seeing? Given that the original system produced correct output (a big given), the new system is either algorithmically wrong, defectively coded, or misconfigured. The speed of the system might have some relationship to the foundation, but the correctness has nothing to do with it.
  • wthnospam: What, no major outages with the new *nix-based system ON THE FIRST DAY? Lol. Look, if you want to be Windows bashers, that's perfectly fine with me; there are many reasons to dislike Windows. However, comparing two complicated software systems should be an apples-to-apples comparison, and I would very much like to see a stability comparison after some reasonable period of time. In any case, people like to claim that it is the base software stack (OS, storage platform, implementation language, et cetera) that is at fault, and not realise that it is entirely up to the developers whether or not a software system is robust. I can easily write an unstable, unreliable transaction engine in C or C++ on *nix or OS X or Windows. I can also write a very stable, memory-fragmentation-proof, massively scalable transaction engine in C or C++ on *nix or OS X or Windows. The language has nothing to do with it, the operating system has nothing to do with it, et cetera ad nauseam. It is once you apply the "fast, cheap, good: pick any two" maxim that you begin to have to pick and choose where you want your weaknesses to be.