But staff procedures are also under the microscope, including those used to detect hazardous deep-sea hydrate deposits in the well, and several possible failures to follow scheduled pressure-testing processes.
Several of BP’s early monitoring systems appeared to have worked, indicating problems with the oil flow, according to the investigation. “One was 51 minutes before the explosion when more fluid began flowing out of the well than was being pumped in,” wrote the committee.
“Another flow indicator was 41 minutes before the explosion when the pump was shut down for a ‘sheen’ test, yet the well continued to flow instead of stopping and drill pipe pressure also unexpectedly increased. Then, 18 minutes before the explosion, abnormal pressures and mud returns were observed and the pump was abruptly shut down.”
This demonstrated that the crew was attempting to make manual interventions, it wrote, but soon afterwards the pressure increased dramatically, leading to the explosion. Nevertheless, a BP investigator told the committee this week that managers and engineers on the rig may have ignored the results of other tests just hours before the explosion.
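The warning signs the committee lists, more fluid returning than was pumped in, flow continuing after the pumps were shut down, and an unexpected rise in drill pipe pressure, amount to simple mass-balance and pressure checks. A minimal sketch of such checks follows; the thresholds, field names, and function are illustrative assumptions, not a description of BP's actual monitoring systems:

```python
# Illustrative sketch of well "kick" indicator checks of the kind described
# in the article. All names and thresholds are hypothetical examples.

from dataclasses import dataclass

@dataclass
class WellReading:
    flow_in_bbl_min: float        # mud pumped into the well (barrels/min)
    flow_out_bbl_min: float       # returns measured at the flow line
    pump_running: bool
    drill_pipe_pressure_psi: float

def check_kick_indicators(reading, baseline_pressure_psi,
                          flow_gain_tol=0.5, pressure_tol=100.0):
    """Return a list of warning strings for a single reading."""
    warnings = []
    # Indicator 1: while pumping, more fluid is coming out than going in.
    if (reading.pump_running and
            reading.flow_out_bbl_min - reading.flow_in_bbl_min > flow_gain_tol):
        warnings.append("flow gain: returns exceed pump rate")
    # Indicator 2: the well keeps flowing with the pumps shut down,
    # as during the 'sheen' test described above.
    if not reading.pump_running and reading.flow_out_bbl_min > flow_gain_tol:
        warnings.append("flow with pumps off: well not static")
    # Indicator 3: an unexpected rise in drill pipe pressure.
    if reading.drill_pipe_pressure_psi - baseline_pressure_psi > pressure_tol:
        warnings.append("abnormal drill pipe pressure increase")
    return warnings
```

For example, a reading taken with the pumps off but the well still flowing and pressure climbing would trip the second and third indicators at once, matching the sequence of anomalies the committee describes.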
Earlier this month, BP told Computerworld UK that technology was playing a crucial part in the efforts to stop the spill. This included the robotic submarines being used to help track and plug the oil, and to detect the composition of the oil so that it could be contained in the right way. The company is also understood to be drawing on data from its process modelling and in-house Operating Management system to allocate staff and help co-ordinate the response.
As work continues, BP has pledged $500 million (£408 million) for an “open” investigation into the spill, and to research better ways of tracking oil spills with technology.
The ten-year research programme will address what can be done “to improve technology to detect oil, dispersed oil, and dispersant on the seabed, in the water column, and on the surface”, as well as ways of remediating the impact of oil accidentally released into the ocean. This is in addition to studying the effects of the oil and dispersant on the seabed.
Meanwhile, the US National Science Foundation has embarked on an effort to use one of the world's largest supercomputers to forecast, in 3D, how the spill will affect the US shoreline. The machine has about 63,000 computing cores, and is capable of speeds of 579 teraflops.
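Taking the article's figures at face value, the quoted totals imply a per-core performance of roughly 9 gigaflops, a quick sanity check of the numbers rather than a specification of the machine:

```python
# Back-of-envelope check of the quoted supercomputer figures.
cores = 63_000          # approximate core count from the article
peak_teraflops = 579.0  # quoted aggregate speed

# 1 teraflop = 1,000 gigaflops, so divide the total by the core count.
gflops_per_core = peak_teraflops * 1_000 / cores
print(f"{gflops_per_core:.1f} GFLOPS per core")  # about 9.2
```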