Red Hat CSA Mike Bursell on 'managed degradation' and open data


Computerworld UK speaks with Red Hat's chief security architect about applying AI to the full stack and leveraging open data


As part of Red Hat's CTO office, chief security architect Mike Bursell has to stay informed about security threats past, present and yet to come – as far as 10 years into the future. 

The open source company has access to a wealth of customers in verticals including health, finance, defence, the public sector and more. So how do these insights inform the company's understanding of the future threat landscape?

"The CTO office has the brief to look 18 months up to 10 years out," says chief security architect Mike Bursell, speaking to Computerworld UK by phone. "Part of our brief is absolutely to understand not just security but virtualisation, storage, AI, all those sorts of things.

"The way I take the brief is I try to look at not just technologies but also methodologies and architectures and trends in various industries that may impact on Red Hat, may impact on our customers, or may make changes to the open source community or the community as a whole, which we need to be a part of, because Red Hat is so grounded in the open source and open community."

In the near term, Bursell believes more emphasis will be placed on risk than on the purely technical side of security – and while risk has always been part of the picture, understanding it is becoming more central to wider security and business strategies.

"People are moving to models where they understand you have to tie your governance model in with your policies, and then put automated processes in place you can monitor, then audit, and then turn that cycle round again," he says.

As part of this, developers are having to bring security into their everyday thinking – the sec in ‘devsecops'.

"I love security but not everybody does, and you don't want to be saying to every developer: ‘you now have to become a security expert' – because that's a lot to take in, and it's never going to happen," Bursell says.

"But if you say on the other hand, I have particular people or roles whose responsibility it is to ensure our polices are automated into the processes, then we just make sure people follow the processes and we can manage by exception, then it starts to make sense.

"I think the whole devsecops thing is certainly taking off. I spend a lot of my time talking to very big companies throughout the sectors about that, frankly," Bursell explains.

The technologies that will complement this mindset include improvements in trusted execution environments such as Intel's Software Guard Extensions (SGX) or AMD's Secure Encrypted Virtualisation (SEV).

Loosely put, SGX allows developers to safeguard select parts of code or data by placing them in ‘enclaves' – protected regions of memory that even privileged software such as the operating system cannot read or modify. SEV similarly encrypts a virtual machine's memory so the hypervisor cannot inspect it.

"People want to understand how much they can trust their execution," Bursell says. "That plays in to some of the interesting things you can do in the devops world – where you might say ‘I'm only going to be running something for 24 hours, I'm going to restart all of my containers on a rolling basis over that time'.

"But also – I'm going to say certain workloads should only ever be executing or processed in an environment that I trust at certain different levels. That might be internal servers, it might be cloud-based servers, there are a variety of ways of doing that."

Further down the line, Bursell expects more attacks to occur at multiple levels of the stack – as evidenced over the last few years, where DDoS attacks have been used to shroud data exfiltration, a technique made famous by the Sony Pictures attack in late 2014.

To combat this, organisations could deploy artificial intelligence and machine learning tools to examine the full stack – going beyond the security information and event management (SIEM) intelligence available today.

"This is an area where Red Hat has some great benefits because we have so much of that stack – the platform, Enterprise Linux, Atomic Host, the orchestration layers, so Cloud Forms and Openstack and Open Shift, but also the middleware, so we've got the Jboss Middleware suite, and even the tooling."

Managed degradation

The most cyber-aware companies already understand that in infosec it's only a matter of time before an organisation suffers some kind of attack.

A logical next step might be what Bursell calls ‘managed degradation', which in short means salvaging a positive business outcome from what might otherwise be a calamitous security scenario.

"If you go to your boss and say we've had an attack, we're going to have to take all of the machines down and we can't take orders for five days – that tends not to go down very well," he says.

"On the other hand, if you go to them and say: ‘we have this attack coming in and you have two options', we can either take our machines down or we can take orders at 60 percent performance throughput of volume – with maybe a one in five chance we might lose some internal data – you know which two of these options they're going to take. They're going to take the one which allows the business to continue."

This could be modelled ahead of time so that not only security professionals but C-suite execs understand what the risk would look like – think of Netflix's Chaos Monkey and its wider ‘Simian Army', tools which deliberately take systems offline so developers can test how production fares in response to outages.
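As a back-of-the-envelope illustration of that modelling – with every figure invented – the two options Bursell describes can be priced against each other before an incident ever happens:

```python
DAILY_ORDER_REVENUE = 100_000   # hypothetical revenue/day at full capacity
OUTAGE_DAYS = 5
DEGRADED_THROUGHPUT = 0.60      # keep taking orders at 60 percent
DATA_LOSS_PROBABILITY = 0.20    # the "one in five chance" of losing data
DATA_LOSS_COST = 150_000        # hypothetical cleanup and exposure cost

# Option A: take all the machines down for the duration of the incident.
cost_shutdown = DAILY_ORDER_REVENUE * OUTAGE_DAYS

# Option B: degrade gracefully, accepting some quantified risk.
lost_revenue = DAILY_ORDER_REVENUE * OUTAGE_DAYS * (1 - DEGRADED_THROUGHPUT)
expected_data_loss = DATA_LOSS_PROBABILITY * DATA_LOSS_COST
cost_degraded = lost_revenue + expected_data_loss

print(f"shutdown: expected cost £{cost_shutdown:,.0f}")   # £500,000
print(f"degraded: expected cost £{cost_degraded:,.0f}")   # £230,000
```

With these numbers the degraded option costs less than half as much – exactly the kind of comparison a board can act on.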

"It's not if, it's when, and any IT department which doesn't explain that and explain that to business is not going to survive.

"We need to be able to express that, and work with businesses on appropriate mitigations if we're going to do our jobs. Some of that we can do now, but over the five-year timescale there's going to be more techniques we can do to allow that. That's certainly going to be happening."

Open data

Open – the word keeps coming up. Even business leaders at Davos this year were issued with a comprehensive state of cyber security report that examined how open source might benefit organisations over closed-shop proprietary solutions.

That is significant. Open source has been tried and tested in the field for decades and the results are plain to see: projects with significantly sized communities are able to scale and innovate extremely fast.

There are, obviously, extremely rational reasons for businesses to wall off their own data, not least confidentiality.

"There are four sets of data," Bursell says. "Sets of data open to everybody already. Two is data confidential to Red Hat. Another would be data which is confidential to one of our customers, and a fourth would be data which is shared between Red Hat and that customer.

"Wouldn't it be interesting if we could make some more of that data available to Red Hat – or to the world more generally? Without giving up all the source data, allowing some extrapolations to happen.

"There are some technologies emerging – things like multi-party computation and differential privacy – which are frankly really academic techniques at the moment, but which I think of that sort of timescale will allow us to think about data and how we use it."

From a security perspective that could mean taking parts of data that are safe to explore and sharing those out with partners or to a wider community.
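A minimal sketch of the differential privacy idea, using an invented shared dataset: release a count with calibrated Laplace noise, so the aggregate remains useful while no single contributor's record can be inferred from it.

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Epsilon-differentially-private count. A count has sensitivity 1
    (one record changes it by at most 1), so Laplace noise with
    scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical shared data: which customer systems saw a given issue.
incidents = [{"customer": f"c{i}", "hit": i % 3 == 0} for i in range(100)]
print(private_count(incidents, lambda r: r["hit"]))  # noisy count near 34
```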
