Software algorithm visualises large data sets

Scientists at the University of California have created a new algorithm that should make it easier for them to visualise huge data sets.


Scientists have more data at their disposal than ever before - often more than they can properly examine.

But a new algorithm should make it easier for them to visualise huge data sets. And cheaper, too; software based on the algorithm can run on personal computers with as little as 2GB of RAM.

Scientists at the University of California, Davis and Lawrence Livermore National Laboratory developed the algorithm over a five-year period.

Based on the decades-old Morse-Smale complex, a mathematical structure for describing the topology of a data set, the algorithm divides a data set into manageable pieces, analyses each piece, recombines the results, and illustrates its calculations.
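The published method builds and simplifies a Morse-Smale complex; the sketch below only illustrates the general divide-analyse-recombine pattern described above, using a toy per-block statistic (the local maximum of a 3D grid) in place of the real topological analysis. All function names and parameters here are hypothetical.

```python
# Illustrative sketch only: a generic divide-analyse-recombine pass over a
# large 3D scalar field. The per-block "analysis" (finding a local maximum)
# is a stand-in, not the actual Morse-Smale computation.
import numpy as np

def split_into_blocks(field, block=64):
    """Yield (origin, sub-array) pairs that together cover the full grid."""
    nx, ny, nz = field.shape
    for x in range(0, nx, block):
        for y in range(0, ny, block):
            for z in range(0, nz, block):
                yield (x, y, z), field[x:x+block, y:y+block, z:z+block]

def analyse_block(origin, sub):
    """Toy per-block analysis: location and value of the block's maximum."""
    idx = np.unravel_index(np.argmax(sub), sub.shape)
    return tuple(o + i for o, i in zip(origin, idx)), float(sub[idx])

def recombine(results):
    """Merge per-block results into one global summary (here: the maximum)."""
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    field = rng.random((256, 256, 256))   # stand-in for a simulation grid
    results = [analyse_block(o, s) for o, s in split_into_blocks(field)]
    print("global maximum:", recombine(results))
```

Because each block is analysed independently, a pass like this keeps the working set small enough for a personal computer; only the compact per-block summaries need to be held for the recombination step.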

The project was led by Attila Gyulassy, a computer science graduate student at UC Davis, as his Ph.D. thesis. While supercomputers can now simulate physical phenomena like ocean currents and combustion, the huge volumes of data they generate are nearly impossible to work with.

"What is all the data good for without visualisation tools that allow us to really see what is going on? We have ability to generate, but not necessarily to comprehend," explained Gyulassy's professor, Bernd Hamann, in a talk with the Industry Standard.

Gyulassy tested the algorithm on a simulation of two liquids coming together - a data set with over a billion points on a three-dimensional grid.

Running on a laptop, his software analysed the data in under 24 hours and could then illustrate aspects of the phenomenon in seconds.

Hamann gives Gyulassy most of the credit: "He's really pushed this technology forward." However, he adds that more work must be done on the software before it can be made more widely available.

"Recommended For You"

How Uber delivers machine learning 'as a service' across its business Splunk adds more machine learning to its monitoring tools