Intel's role in the future of software-defined infrastructure

One of the most commonly discussed terms in the ICT industry today is "software-defined infrastructure". Some call it the "software-defined data centre" and others prefer their own compute-, networking- and storage-centric buzzwords like "software-defined storage", "network function virtualisation", "server virtualisation" and so on.

The reasons for implementing a highly virtualised, software-defined infrastructure are many - but it all boils down to one word: economics. If buyers had not realised the overall cost savings of moving away from the rigidity of acquiring and maintaining hardware-defined or hardware-based infrastructure, software-defined anything would have been dead on arrival. But it has not been. In fact, it is wildly successful in the computing space, and it is making promising inroads into the networking and storage spaces.

Let's take the computing space for a second (I am using the word "space" loosely here to indicate the industry). Server and desktop virtualisation has become a mainstay in every data centre. Businesses everywhere are eager to pay gobs of money to the likes of VMware to virtualise their compute infrastructure. Not to be left behind, suppliers like Microsoft, Red Hat, Oracle, Citrix and others are also pushing to become strong players in the server virtualisation space. While the approaches to software-defined computing (a.k.a. hypervisors that provide server virtualisation capabilities) differ from supplier to supplier, they all have one thing in common: hardware standardisation. They all make use of commodity, off-the-shelf hardware.

Storage is undergoing a transformation too. Software-defined storage is everywhere. Suppliers that are leaders in hardware-defined storage are moving to a hardware-based and eventually a software-defined architecture, shifting features once delivered via expensive ASIC-based hardware into the software stack. The era of the traditional RAID array is nearing its sunset. It is being replaced by the era of replicas, erasure coding and software-defined storage - in which all essential storage functions are provided via software. It is important to note that approaches to software-defined storage differ from supplier to supplier - and in fact there is a big debate about which functions fall under the purview of software-defined storage:

  • Suppliers like EMC and ProphetStor argue that software-defined storage only needs to offer control plane virtualisation. In other words, software-defined storage serves essentially as an out-of-band orchestration medium (see the sketch after this list).
  • Some others in the industry point to OpenStack Cinder, which allows various storage systems to plug into a common abstraction layer, without any implementation of the data persistence functions. However, they forget that OpenStack Swift (and Ceph) are examples of implementation approaches taken by the open source community for providing data persistence.
  • And then there are the traditional storage virtualisation folks like IBM, Datacore and Falconstor who in their own right are software-defined, because they offer all of their services via software running on a standard hardware stack.
  • Others in the industry argue that software-defined storage does in fact need to serve the basic storage functions of providing data persistence and data services (access mechanisms). Without the data persistence and services functions, they say, software-defined storage is not really storage; it is just a super-duper orchestration layer.
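
To make that distinction concrete, here is a minimal Python sketch. It is purely illustrative - the class and method names are my own, not any supplier's product or the OpenStack API - but it contrasts a control-plane-only layer that merely orchestrates volumes on existing arrays with a full software-defined storage stack that also implements data persistence in software (here via naive three-way replication across commodity nodes).

    # Illustrative sketch only - hypothetical interfaces, not a real product's API.
    # It contrasts the two camps in the debate above.

    class ControlPlaneOnlySDS:
        """Out-of-band orchestration: provisioning decisions live in software,
        but data persistence is delegated to existing (often hardware-defined) arrays."""

        def __init__(self, backend_arrays):
            self.backend_arrays = backend_arrays  # e.g. existing SAN/NAS systems

        def create_volume(self, name, size_gb):
            # Pick the backend with the most free capacity; the array itself
            # handles RAID, caching and the actual placement of bits on media.
            target = max(self.backend_arrays, key=lambda a: a["free_gb"])
            target["free_gb"] -= size_gb
            return {"volume": name, "backend": target["name"]}

    class FullStackSDS:
        """Software implements the data path too: naive three-way replication
        across commodity server nodes stands in for data persistence."""

        def __init__(self, nodes, replicas=3):
            self.nodes = nodes  # plain servers with local disks
            self.replicas = replicas
            self.placement = {}

        def put(self, key, data):
            # Choose replica targets and store the data itself in software.
            targets = sorted(self.nodes, key=lambda n: n["used_gb"])[: self.replicas]
            for node in targets:
                node["used_gb"] += len(data) / 1e9
            self.placement[key] = [n["name"] for n in targets]
            return self.placement[key]

    if __name__ == "__main__":
        orchestrator = ControlPlaneOnlySDS([{"name": "array-1", "free_gb": 500},
                                            {"name": "array-2", "free_gb": 800}])
        print(orchestrator.create_volume("db-vol", 100))

        sds = FullStackSDS([{"name": f"node-{i}", "used_gb": 0} for i in range(4)])
        print(sds.put("object-1", b"some data"))

In the debate above, the question is essentially whether a product that stops at something like ControlPlaneOnlySDS deserves the label at all.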

IDC refers to software-defined storage as platforms that deliver the full suite of storage services via a software stack that uses (but is not dependent on) commodity hardware built with off-the-shelf components.

IDC's taxonomy reflects the fact that data persistence and data services are essential to the classification of any storage platform as software-defined. However, it also provides a mechanism by which control-plane-only solutions can in fact be software-defined if they can provide data persistence functionality. Software-defined storage fundamentally alters how storage platforms are delivered (supplier view) or procured (buyer view).

For more details on IDC's software-defined storage taxonomy, click here.

According to IDC's taxonomy, for any solution to be classified as software-defined storage, it must satisfy the following requirements (a toy checklist after this list sketches these checks):

  • The solution should not contain any proprietary hardware components like custom ASICs, chipsets, memory components, or CPUs - and the software code should not make any assumption of such components being present to offer any essential storage (or storage efficiency) services.
  • The solution should be able to run on multiple (physical or virtual) hardware instances that are not factory configured by the supplier. Buyers should be able to procure the platform as software and deploy it in a virtual environment or directly on any physical hardware of their choice (as long as this hardware belongs to the same peer class listed in the supplier's hardware compatibility list).
  • The solution is a standalone or autonomous system. In other words, it provides all essential northbound storage services and handles all southbound data persistence functions without requiring additional hardware or software. IDC therefore considers file systems and logical volume managers to be building blocks of a software-defined storage platform rather than complete systems.
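
As a rough illustration only, the three requirements above can be read as a simple checklist. The field names below are my own shorthand, not IDC terminology.

    # Toy checklist for the three software-defined storage requirements above.
    # Field names are illustrative shorthand, not IDC's taxonomy language.

    def is_software_defined_storage(solution):
        failures = []

        if solution.get("requires_proprietary_hardware", True):
            failures.append("relies on custom ASICs/chipsets/CPUs for essential services")

        if not solution.get("runs_on_buyer_supplied_hardware", False):
            failures.append("cannot be deployed on buyer-procured physical or virtual hardware")

        if not solution.get("provides_data_persistence_and_services", False):
            failures.append("does not itself provide data persistence and data services")

        return len(failures) == 0, failures

    if __name__ == "__main__":
        candidate = {
            "requires_proprietary_hardware": False,
            "runs_on_buyer_supplied_hardware": True,
            "provides_data_persistence_and_services": True,
        }
        qualifies, reasons = is_software_defined_storage(candidate)
        print("software-defined storage" if qualifies else "not software-defined: %s" % reasons)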

Note: IDC does not state that commodity means x86 platforms only. There are several non-x86 platforms like Power and ARM that are becoming commoditised and may ultimately be deemed suitable for mass consumption.

Within the context of IDC's taxonomy, suppliers can take varied approaches to their software-defined storage solutions. Regardless of the approach, most suppliers agree that software-defined storage is designed to leverage the economics of (server-based) hardware standardisation, thereby passing the savings to the buyer. Some (stubborn) suppliers do maintain that certain types of workloads benefit from hardware-defined (custom ASIC-based) storage systems. That may be true in the short term. However, most suppliers aim to deliver all core storage functionality via software in the long run, thanks to more powerful processors and networking interconnects capable of carrying a lot more traffic.
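
To put a rough number on the data-protection side of those economics, the back-of-the-envelope sketch below compares the raw capacity needed for three-way replication against a 10+4 erasure-coded layout - the software-based schemes that, as noted earlier, are displacing the traditional RAID array. The figures are illustrative only.

    # Back-of-the-envelope capacity overhead: 3-way replication vs 10+4 erasure coding.
    # Figures are illustrative only.

    def raw_capacity_needed(usable_tb, data_shards, parity_shards):
        """Raw capacity required to present `usable_tb` of usable capacity."""
        return usable_tb * (data_shards + parity_shards) / data_shards

    usable = 1000.0  # TB of usable capacity to present to applications

    replication = raw_capacity_needed(usable, data_shards=1, parity_shards=2)    # 3 full copies
    erasure_10_4 = raw_capacity_needed(usable, data_shards=10, parity_shards=4)  # 10 data + 4 parity

    print("3-way replication: %.0f TB raw (%.1fx overhead)" % (replication, replication / usable))
    print("10+4 erasure code: %.0f TB raw (%.1fx overhead)" % (erasure_10_4, erasure_10_4 / usable))

The erasure-coded layout tolerates more simultaneous failures (four shards versus two copies) at less than half the raw-capacity overhead of replication, which is part of how software-based protection schemes help pass savings on to buyers.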

I believe that the approach to software-defined storage is still a bit fluid at the moment (as stated above), but IDC expects that its adoption will eventually mimic the trajectory taken by server virtualisation. Software-defined storage is nowhere near as controversial as the network side of the house. Software-defined networking and network function virtualisation hold out the promise of going the same route as computing and storage. However, the dominance of a few networking behemoths (that make most of their revenue by selling hardware-defined networking gear) means that software-defined networking may have to wait a bit longer before suppliers can agree on a common approach - or at least one that does not leverage custom/proprietary ASICs. Thankfully, the likes of OpenStack (and initiatives from Facebook that may show up in the Open Compute domain) will strongly influence the future of SDN, similar to how they have influenced storage. I am willing to bet that at least a handful of networking suppliers support the notion that networking equipment need not have custom ASICs to deliver basic switching or routing services.

Which brings me to the point I am trying to make with the title of this blog post. Is Intel having second thoughts about its push towards hardware standardisation? Is there a sense of remorse at Intel about its diminishing influence in shaping the future of software-defined infrastructure?

Technically, Intel seems to be doing well with hardware commoditisation. It still sells lots of processors, and processor-related hardware, for the data centre. So far it has managed to make up for the PC slump by doubling down on the data centre side. But somewhere within Intel I think there may be a nagging worry. Some in Intel may be worrying that it has taken a back seat in the innovation race for the software-defined data centre. With hardware standardisation, the focus for innovation in processors largely shifts to speeds, feeds and power consumption (this is not to say that it is not important; in fact, quite the opposite). Much of the innovation shifts to intelligent software that can run on any kind of hardware (and processor) platform. And Intel may not be too happy with how this situation is unfolding. Intel may be worrying about an outcome similar to what happened in the personal computing industry, in which most users don't really care about the PC hardware, just what runs on it.

Intel is perhaps worrying that it has somehow unleashed the democratisation of hardware, in which its clout is on a path of diminishing returns. With this democratisation, the software stack suddenly becomes portable. In a software-defined data centre, it becomes relatively easy to replace or augment one processor stack with another. In fact, many of the hyperscalers are already looking at this scenario closely (and some have even begun work on it in earnest). There is a reason why hyperscalers are excited about initiatives like OpenPOWER and/or alternative processor platforms based on the ARM architecture (read my post on ARM in the data centre). ARM-based hardware platforms hold out the promise to dramatically alter the economics of data centre design.

In the short term, displacing x86-based platforms may appear a steep climb. But in the long term - given the pace at which ARM designs are progressing - there is no doubt that for certain types of workloads like storage (and potentially even networking), ARM-based hardware may offer superior economics. This may provide several suppliers and hyperscalers with an alternative commodity hardware platform on which they can base their software-defined storage, computing or networking stack.

What is disconcerting, however, is that some in Intel are making it easier for suppliers to explore such alternatives. By trying to change the definition of software-defined in a counterproductive fashion, these folks may deal a potentially fatal blow to the vast ecosystem of software-defined infrastructure suppliers that Intel has nurtured in the past few years. According to an unofficial view within Intel, in order for something to be software-defined, it has to manage disparate resources and take a data centre view of resource management. In the process, it does not need to offer any of the basic functions that would characterise the platform. For example, in Intel's view software-defined storage need not offer any data persistence functionality, as long as it relies on other (hardware-defined) storage systems to offer that functionality.

By messaging to the industry that a solution is not software-defined merely because it makes use of commodity hardware, Intel is in fact moving orthogonally to the view that many in the industry hold of a software-defined infrastructure. I am not sure what is driving Intel to take such a stand, other than to speculate that somehow Intel wants to restore the importance of silicon - and stem the tide of everything moving into software. This could also be a defensive move by Intel to preserve its custom silicon base (which I am sure includes suppliers that make hardware-defined solutions).

By taking this stand, Intel risks alienating the many hundreds of storage, computing and networking suppliers that have designed innovative solutions delivering a full spectrum of services via software. Collectively, such suppliers have invested millions of dollars in furthering the mission of software-defined storage, computing and networking - which is to deliver all essential functions via software running on commodity hardware. That is true innovation - and Intel does not have the right to deny them the ability to call their solutions software-defined. Instead of cheering them on, Intel seems to want to clip their wings.

This could have disastrous side effects on Intel's business. If Intel discourages such suppliers too much, it may inadvertently make these suppliers want to explore alternatives to x86-based platforms. For example, ARM-based hardware suppliers may welcome such software suppliers to their ecosystem. That is one big risk Intel cannot afford to take.

Necessity is the mother of invention. When the drive manufacturers were looking for a processor for their Ethernet drives, they did not go with Intel's SoCs. At the last Open Compute Summit, a supplier demonstrated a storage software stack running on an octa-core ARM processor. Red Hat recently announced a program to "enhance partner collaboration and facilitate partner-initiated system designs based on the 64-bit capable ARMv8-A architecture that include Red Hat software". It plans to aim the program at silicon vendors, independent hardware vendors (IHVs), original equipment manufacturers (OEMs) and original design manufacturers (ODMs), and the program launches with participation and support from several ARM ecosystem leaders, including AMD, American Megatrends, Inc., AppliedMicro, ARM, Broadcom, Cavium, Dell, HP and Linaro.

The point I am trying to make is that software-defined infrastructure suppliers are exploring hardware alternatives. And they are watching such developments closely. Intel cannot afford to alienate them.

The hardware democratisation ship appears to have sailed. So does Intel have a choice? Can it limit the long-term influence of ARM in the data centre, given the hardware commoditisation phenomenon? Can it continue to maintain an influential role in the direction of the software-defined world? Or will it eventually become just another supplier of chips? Is this the time for Intel to be "paranoid" again so it can "survive"?

I believe so. But not by overreacting.

Intel's attempts to limit the influence of ARM in the data centre have the potential to pay off. Its new SoCs are aimed at improving the economics of purpose-built hardware platforms for the software-defined infrastructure. Intel has access to a vast ecosystem of partners it can leverage to develop hardware platforms based on its new SoCs.

It can work with its software ecosystem to push certain software functions back into the silicon (akin to what Seagate is doing with its Kinetic platform). Intel can and should continue to innovate with the silicon that supports the software-defined infrastructure. In fact, with the data centre under its control, its superiority in processor design and manufacturing will definitely help it preemptively ward off the threat of ARM becoming a dominant force in the future data centre.

In the software-defined world, Intel's software division can exert much more influence. In spite of signs pointing to a trend in the opposite direction (for example, it recently off-loaded its Hadoop stack to Cloudera, not much has come out of its Whamcloud acquisition, and its role in OpenStack is relatively muted), Intel can fire back with its might. It can play a serious role in proving that "Intel Inside" is all about software-defined infrastructure that is "based on Intel Innovation". Intel should be more forthcoming about its contributions to the software-defined infrastructure. It can use its prowess to guide the way for suppliers in the software-defined infrastructure instead of meddling with their aspirations.

Intel gave wings to the hardware standardisation revolution - and established a dominant place in the data centre because of it. Now it needs to lead the way with software-defined infrastructure, not try to kill it (or influence it in a self-serving way). It should encourage suppliers innovating in software. It should push them to further their innovation so that it is easier for them to stick with x86-based hardware. Intel should focus on its long-term vision of the data centre and not get mired in short-term defensive measures to push everything back into silicon. Such measures may hurt it in the long run.
Posted by Ashish Nadkarni