Telescopes designed to peer back in time, ultra-high-resolution live brain scans and complex on-demand climate modelling might not seem to have much in common at first glance – but one trait these scientific marvels share is that they all run on OpenStack infrastructure.
At the CERN laboratory, where the Large Hadron Collider smashes particles together – and confirmed the existence of the so-called ‘god particle’, the Higgs boson – it is also an OpenStack cloud (powered by T-Systems) that makes sense of the enormous volumes of data the experiments create.
At the biannual OpenStack Summit, held in Sydney, Australia, researchers from two Australian institutions provided overviews of their complex work and the infrastructure that supports it. They were introduced by two spokespeople from the National eResearch Collaboration Tools and Resources research cloud – or Nectar – an Australian government initiative to enable research on the cloud.
Professor Brendan Mackey, director of the Climate Change Response Programme, set out with his team to remove the technical barriers to biodiversity–climate change modelling through the Biodiversity and Climate Change Virtual Laboratory (BCCVL).
Biodiversity is "all the species that live on earth, the plants and the animals and wildlife," Mackey said. "Also, the things that want to bite and kill you, things like vector-borne diseases and viruses, and how they’re being affected or may be affected by human-forced climate change."
There is an enormous amount of data out there on all of the above, but it has traditionally been highly curated and siloed off – think of the biological observations collected by dedicated field researchers on trips to places like the Amazon or the Congo. And then there are environmental conditions – "a lot of this comes from satellites or from instruments, for example, bobbing around in the sea," Mackey said.
Then there is future climate – and there is not just one possible climate forecast, but more than 40 global climate change models, each with different scenarios, time periods and other parameters.
What the end user now sees is a kind of "sophisticated app" where they can select biological or climate data and then run different kinds of experiments. This is crunched by a cloud with about 30,000 cores, but all of the inner workings are invisible to the user.
Through an API, the user is in effect designing an experiment to run through the system – the request passes through the system, accesses the external data, and the results are returned to the user.
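The workflow described above can be sketched in a few lines. Note that the field names and values here are illustrative assumptions, not the actual BCCVL API schema:

```python
# Illustrative sketch of designing a BCCVL-style experiment.
# All names and fields below are hypothetical, for illustration only.
import json

def build_experiment(species, climate_model, scenario, year):
    """Assemble a request describing one modelling experiment: which
    species observations to use, and which of the 40+ global climate
    models (with scenario and time parameters) to project against."""
    return {
        "type": "species_distribution_model",
        "occurrence_data": {"species": species},   # curated field observations
        "environmental_data": "current_climate",   # satellite / ocean-buoy data
        "future_projection": {
            "model": climate_model,  # e.g. one of the global climate models
            "scenario": scenario,    # emissions scenario
            "year": year,
        },
    }

# The process is iterative: tweak parameters, re-run, refine the model.
experiment = build_experiment("Phascolarctos cinereus", "ACCESS1.0", "RCP8.5", 2085)
payload = json.dumps(experiment)  # what would be sent through the API
```

The point of the design is that none of the 30,000-core machinery behind the request is visible to the user – only the experiment description and the results.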
"This is an iterative process, you want to run the experiment multiple times to get a fine model, and eventually you want to share that with the world," Mackey said. "And the good news is you don’t have to be a researcher – anyone here in the room or anyone anywhere in the world can go to BCCVL, register, and explore how your favourite species might change."
The Garvan Institute of Medical Research, meanwhile, runs the fifth largest DNA sequencing lab in the world. Genomics powers most of its disease research, running on high-performance computing (HPC) systems backed by OpenStack infrastructure to better understand illnesses like heart disease and cancer.
Gary Egan is the director of Monash University’s biomedical imaging research facilities. Taking to the stage with a video that provided a potted history of brain research, he explained in brief the research the university is conducting using OpenStack tools.
"Nearly 50 years ago, researchers started building the first brain imaging scanner," Egan said. "The challenge then was just to look inside the living human brain."
About 30 years ago, technology allowed people for the first time to observe human brains while tasks were being performed: seeing things, hearing things, or simply thinking. "That challenge has been both to map the structure and the functions of human brains, and to relate those functions to the microscopic detail of the brain that was first discovered by Edgar Adrian," Egan said.
The team uses a "massive supercomputing system" based at Monash to process large data sets and to visualise the complexity of the brain in 3D.
"Today we are able to map dynamically brain functions, the structure in different sections of the brain," he said. It is now possible to visualise the flashes of neural activity at the onset of each task.
Egan went on to share developments in the Characterisation Virtual Laboratory for image processing, which is able to process data from the Synchrotron complex via remote desktop. Researchers have used a new technique, 3D volumetric ultra-high-resolution CT scanning, to look inside living creatures. "When you do it very fast with the Synchrotron’s high flux of x-rays you can actually infer function by looking at the dynamic structure that changes, as is the case with respiration in the lungs," he said.
At remote sites in Australia, New Zealand and South Africa, scientists are preparing the Square Kilometre Array (SKA) project. Consisting of thousands of dish antennas spread across enormous sites, the project will generate staggering amounts of data when operational – approximately 5,000 petabytes a day – with the grand aim of ultimately unlocking some of the mysteries of the universe itself.
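To put that figure in perspective, a back-of-the-envelope conversion (assuming decimal petabytes) shows what 5,000 petabytes a day means as a sustained rate:

```python
# Quick sanity check on the quoted SKA figure: ~5,000 petabytes per day,
# expressed as a sustained transfer rate (decimal units assumed).
PETABYTE = 10**15             # bytes
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = 5_000 * PETABYTE
rate_tb_per_s = bytes_per_day / SECONDS_PER_DAY / 10**12

print(f"~{rate_tb_per_s:.0f} TB every second")  # roughly 58 TB/s
```

That is a continuous stream on the order of tens of terabytes per second – well beyond what any single data centre could ingest without purpose-built infrastructure.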
This, too, will run on a specially built OpenStack environment, and there are other parallels between the project and the brain-imaging research at Monash.
"As is the case for the square kilometre array and the large telescopes, as we peer further and further into the brain we need to increase the resolution of our scanners," said Egan. "Recently there has been development of the next generation, ultra high field MRI scanners." These can be used to peek at individual neurons firing off in the brains of living people.