JP Morgan supercomputer offers risk analysis in near real-time

Improves response times to risk

JP Morgan is now able to run risk analysis and price its global credit portfolio in near real-time after implementing application-led, High Performance Computing (HPC) capabilities developed by Maxeler Technologies.

The investment bank worked with HPC solutions provider Maxeler Technologies to develop an application-led HPC system based on Field-Programmable Gate Array (FPGA) technology that allows it to run complex banking algorithms on its credit book faster.

JP Morgan mainly uses C++ for its pure analytical models and Python for the orchestration around them. For the new Maxeler system, it flattened the C++ code down to Java. The company also supports Excel and various versions of Linux.
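
Maxeler's tools describe the design to be put on the FPGA in Java rather than taking C++ directly, which is why the model code had to be flattened. As a purely illustrative sketch (the class and method names, the toy bond and the discount maths below are invented for illustration, not JP Morgan's actual models), "flattening" here means turning layered, object-oriented pricing code into straight-line arithmetic over primitive arrays, the form that maps most naturally onto a hardware pipeline:

```java
// Hypothetical illustration: a layered C++-style valuation collapsed into
// flat, array-oriented arithmetic of the kind a dataflow compiler can pipeline.
public class FlattenedPricer {

    // Present value of a stream of fixed cashflows under a flat discount rate.
    // All control flow is one loop over primitive arrays: no objects,
    // no virtual calls, no per-trade branching.
    static double presentValue(double[] cashflow, double[] yearsToPayment, double flatRate) {
        double pv = 0.0;
        for (int i = 0; i < cashflow.length; i++) {
            double discount = Math.exp(-flatRate * yearsToPayment[i]); // e^(-rt)
            pv += cashflow[i] * discount;
        }
        return pv;
    }

    public static void main(String[] args) {
        double[] cashflow = {5.0, 5.0, 105.0};  // toy bond: two coupons plus redemption
        double[] t        = {1.0, 2.0, 3.0};    // payment times in years
        System.out.printf("PV = %.4f%n", presentValue(cashflow, t, 0.03));
    }
}
```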

Back in 2008, it took eight hours to run the end-of-day risk process, a significant part of which was spent moving data and on manual data clean-up and verification. As part of a sustained investment over the past three years, the trading desk has dramatically reduced this time, and if problems occur with the analyses, the process can now be re-run on a pool of several thousand CPUs. With the overall cycle time cut, speeding up the calculation portion itself now has a significant business impact.
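
The article does not describe the bank's scheduling software, but the idea of re-running a batch across a large CPU pool is straightforward to picture. In the minimal sketch below (the class name, trade count and pricing function are hypothetical, and a local thread pool merely stands in for a compute farm), independent per-trade revaluations are fanned out across workers and the results collected:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative only: a local thread pool standing in for a grid of CPUs.
public class RiskRerun {
    public static void main(String[] args) throws Exception {
        int workers = Runtime.getRuntime().availableProcessors(); // a real farm would have thousands
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        int trades = 10_000;
        List<Future<Double>> results = new ArrayList<>();
        for (int i = 0; i < trades; i++) {
            final int tradeId = i;
            // Each trade revalues independently, so the batch parallelises trivially.
            results.add(pool.submit(() -> revalue(tradeId)));
        }

        double total = 0.0;
        for (Future<Double> r : results) total += r.get();
        System.out.printf("Re-run complete, portfolio value = %.2f%n", total);
        pool.shutdown();
    }

    // Placeholder pricing function; the real models are the bank's C++ analytics.
    static double revalue(int tradeId) {
        return Math.sin(tradeId) * 100.0;
    }
}
```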

The risk calculation time has now been reduced to about 238 seconds, with an FPGA time of 12 seconds.

“Being able to run the book in 12 seconds end-to-end and get a value on our multi-million dollar book within 12 seconds is a huge commercial advantage for us,” Stephen Weston, global head of the Applied Analytics group in the investment banking division of JP Morgan, said at a recent lecture to Stanford University students.

“If we can compress space, time and energy required to do these calculations then it has hard business values for us. It gives us ultimately a competitive edge, the ability to run our risk more frequently, and extracting more value from our books by understanding them more fully is a real commercial advantage for us.”

The faster processing time means that JP Morgan can now respond to changes in its risk position more rapidly, rather than just looking back at the previous day's risk profile produced by overnight analyses.

The speed also allows the bank to identify potential problems and deal with them in advance. For example, JP Morgan's exposure before and after potential defaults by European central banks on their debt is sensitive to the order in which those defaults might happen. The improvement in calculation speed enables the credit trading desk to run massive numbers of potential scenarios to assess such complex exposures.
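
The article gives no detail on how these scenarios are built, but the sensitivity it describes is easy to illustrate. In the hedged sketch below (the names, notionals and the crude contagion-style recovery rule are all invented, not a real credit model), the same set of defaults is applied in every possible order and the loss recomputed per ordering; because recovery worsens as more names have already defaulted, different orderings give different answers:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: shows why exposure can depend on the *order* of defaults,
// not just on which names default. The loss rule is a toy, not a credit model.
public class DefaultOrderingScan {

    // Toy rule: each default loses its notional, but recovery worsens as
    // more names have already defaulted (a crude contagion assumption).
    static double lossFor(List<Integer> ordering, double[] notional) {
        double loss = 0.0;
        for (int position = 0; position < ordering.size(); position++) {
            double recovery = Math.max(0.0, 0.4 - 0.1 * position);
            loss += notional[ordering.get(position)] * (1.0 - recovery);
        }
        return loss;
    }

    // Enumerate every ordering of the given names (fine for a handful; a scan
    // over a large book would sample orderings instead of enumerating them).
    static void permute(List<Integer> prefix, List<Integer> remaining,
                        double[] notional, Map<List<Integer>, Double> out) {
        if (remaining.isEmpty()) {
            out.put(new ArrayList<>(prefix), lossFor(prefix, notional));
            return;
        }
        for (int i = 0; i < remaining.size(); i++) {
            prefix.add(remaining.remove(i));
            permute(prefix, remaining, notional, out);
            remaining.add(i, prefix.remove(prefix.size() - 1));
        }
    }

    public static void main(String[] args) {
        double[] notional = {100.0, 50.0, 25.0};   // three hypothetical names
        Map<List<Integer>, Double> losses = new LinkedHashMap<>();
        permute(new ArrayList<>(), new ArrayList<>(List.of(0, 1, 2)), notional, losses);
        losses.forEach((order, loss) ->
                System.out.printf("default order %s -> loss %.1f%n", order, loss));
    }
}
```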

Weston described this as understanding the "character" of the risk.

“Where we thought the risk in the book was, was more or less where it was, but it had a different character to it. We found that we had a particularly interesting sensitivity to the ordering of our defaults, which we had never been able to explore before.

“It gives us the ability to look at things that we couldn’t look at before and that’s extremely valuable to us,” he said.

As well as being faster, JP Morgan required a system that was more energy efficient. The company has just under a million square feet of raised-floor space in its data centres, and power consumption was a particular problem.

“Our data centres don’t run out of space, but they do run out of power, power to run the machines, to cool the machines. We needed a solution that was fast, efficient, reliable and less power-hungry,” said Weston.

‘Pipelining’

Instead of using standard multi-core machines, JP Morgan adopted FPGA technology to enable it to ‘pipeline’ instructions. Calculations can be executed very quickly by breaking them down into simple components that are then chained together into ‘pipelines’.

“[Being super pipelined] we can do a huge volume of calculations more than we can in a traditional CPU environment,” Weston said.
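
As a rough, non-authoritative illustration of the principle (it models the idea of pipelining, not Maxeler's hardware, and the three stages and their arithmetic are invented), the sketch below splits one calculation into three simple stages. Once the pipeline is full, a new input enters and a finished result leaves on every step, so throughput is set by how many stages are kept busy rather than by the latency of the whole calculation:

```java
// Illustrative software model of a 3-stage arithmetic pipeline.
// Each "cycle", every stage performs one simple operation and passes its value
// forward, so after the pipeline fills, one finished result emerges per cycle.
public class PipelineModel {
    public static void main(String[] args) {
        double[] inputs = {1, 2, 3, 4, 5, 6};
        Double stage1 = null, stage2 = null, stage3 = null;

        int cycle = 0;
        int fed = 0, produced = 0;
        while (produced < inputs.length) {
            // Results leave the last stage first, then values shift forward one stage.
            if (stage3 != null) {
                System.out.printf("cycle %d: result %.2f%n", cycle, stage3);
                produced++;
            }
            stage3 = (stage2 == null) ? null : stage2 + 1.0;              // stage 3: add a constant
            stage2 = (stage1 == null) ? null : stage1 * stage1;           // stage 2: square
            stage1 = (fed < inputs.length) ? inputs[fed++] * 0.5 : null;  // stage 1: scale the input
            cycle++;
        }
    }
}
```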

In an initial proof-of-concept project on 450 trades that were representative of the value of the global portfolio, JP Morgan found it was able to run calculations 30 times faster on a single node. It then built a 10-node box, with two FPGAs in each node.

“We put our whole portfolio (several hundred thousand trades) on that [and] we managed to get the book to run 130 times faster,” said Weston.

“We then built a 40-node computer, which is basically a supercomputer.”

The project took JP Morgan around three years, and the bank is now looking to push the technology into other areas of the business, such as high-frequency trading.

This article has been updated with additional information from JP Morgan since the version that appeared on 11 July 2011.

Comments

  • The guest: Why? From an input/output point of view one needs computational power at the lowest cost, and that's all. In my opinion GPUs and FPGAs are difficult to compare in technological terms, but certainly not from the execution time and energy efficiency point of view.
  • Gustavo Almeida: Both FPGAs and GPUs have their own place in business today, especially financial business. If you do have a huge amount of money, go with FPGAs; otherwise go with GPUs. Besides, GPUs are a lot easier to implement and program than FPGAs.
  • Gustavo Almeida: I think the majority of comments are mistaken. Comparing FPGAs with GPUs is like comparing bananas with apples.
  • guest2: Because the fastest supercomputers need to run arbitrary algorithms/code. JPM's FPGA needs to run one algorithm; it will always be better this way. You sacrifice flexibility for speed/efficiency.
  • kuriousoranj: The volume trades interbank move risk around, which is taken from end consumers of financial services. The banks allow businesses to work with risk and allow safer access to foreign markets by reducing exchange rate risk, and change the term of risk without tying up large amounts of capital in loan facilities; e.g. you get paid at varying frequency and volume for the thing you produce but need to pay your workers every month, or you want a fixed-rate loan, or the ability to terminate a mortgage early without a huge penalty, or to cap the rate you pay on a mortgage or floating-rate loan. So their role is as a risk transformation service and intermediary between those that need to exchange risk and those who wish to take risk to get a higher return. If there was no end market for this service then the profit would not exist. The supply and demand forces do work on the profit; this can be seen in the decreasing spreads charged on derivatives trades over the years as they became commoditised. Kudos to them on getting their engineering people to accept non-standard hardware into the server room. A lot of people have discussed this solution over the years but always had to fall back on well-understood server configurations to ensure the cost of support was not too high.
  • Guest: Well, not the fastest.
  • Jim Kring: Oops, here is the link: httpvishotscomusing-labvi
  • Jim Kring: Here's a cool podcast about guys using LabVIEW to program FPGAs for similar Wall Street applications.
  • Wing Wong: httpwwwseeedacukkbenk. Well done, FPGA beats out GPU. "The same implementations show that FPGAs achieve a 3x speedup compared to equivalent GPU-based implementations. Power consumption measurements also show FPGAs to be 336x more energy efficient than CPUs and 16x more energy efficient than GPUs."
  • Nalini: It is not about having many C-to-FPGA tools; these tools are very FPGA-board specific. Almost all FPGA-based HPC solution providers, such as Pico, GiDEL and Nallatech, provide their own development environment, most of them working only with HDLs or C/C++. Maxeler has their own boards and has chosen Java as the base to enable easy coding of algorithms. JP Morgan was working with legacy C++ code, not a basic algorithm, and thus needed to rewrite it in Java. Writing Java code would have been a necessary step even if they were building it from scratch. If you have worked with any other HLS tool, you would know the amount of effort that goes into restructuring and parallelizing just to generate the RTL.
  • an IT curmudgeon: All these things run software and hardware, but in the case of the CPU/GPU the hardware is not specialized to handle the calculations for financial analysis. It will do general floating-point calculations in the CPU, or specialized floating-point calculations for graphics processing in the GPU. Some applications are using GPUs as general floating-point processors, and they are faster than CPUs, but not as fast as a specialized FPGA would be.
  • an IT curmudgeon: The interesting thing about the housing crisis was the comment that Greenspan made just before he bugged out after 19 years as the Federal Reserve Chairman. He made a pretty general state-of-the-reserve speech and included a comment that people shouldn't continue to consider their homes as their primary investment, then he resigned a few weeks later. He knew what was coming and knew it couldn't be halted or buffered, so he bugged out. Actually it all makes sense under UN Agenda 21, in which the disassembly of the American Industrial Society is an immediate requirement.
  • Ionel: I'd rather people with smarts waste brain cycles in finance than developing the next super-bomb. And money does not move back and forth; it moves mainly in one direction: away from the suckers.
  • Guest: Plenty of people run thousands of GPUs in computing grids. The fastest supercomputers are CPU/GPU hybrids (httpwwwtop500orglists20). NVIDIA and AMD/ATI have spent billions super-pipelining the floating-point arithmetic units of their GPUs. Why go to all the trouble to rebuild in an FPGA or ASIC what NVIDIA and ATI have done for you? GPUs now have 512 cores per card at $1 a core. They are mass-market supercomputers.
  • Guest: GPUs are specialized hardware that is super-pipelined to do floating-point arithmetic. GPUs eat up no more power per FLOP than FPGAs. But you get 512 cores per GPU card.
  • Guest I think what theyre saying is that the entire risk batch end to end takes 238s and just their PV takes only 12
  • Guest so why didnt the article write so It sayed flatten C to Javaand why for the love of all that is economic would you rewrite your entire algorithms from C to Java when there are a plethora of C-gtFPGA tools available Especially as JP morgan uses C present formBut meh - computerworld is the voice of IT management Not IT specialists
  • Guest Yes - thats what end-to-end is total execution time so saying they are able to run the book in 12 seconds end-to-end looks like a rather dishonest mistake that should have been corrected during editing
  • Guest I believe they mean the total system execution time was 238 seconds whilst the portion running on the FPGA is 12 seconds
  • Roman You dont know what you are talking about Its not Java that runs on FPGAs Maxeler FPGA complier is a Java library They write the algorithm in Java then the library compiles it as HDL design and thats loaded to FPGAs They dont have a library like that in C So shut up and learn something