Much of the messaging at the recently held 2014 VMworld conference focused on the software-defined data centre (SDDC), or software-defined infrastructure (SDI). The supplier community appears to be taking this concept to the next level as a way to maintain the viability of its commercial, proprietary hardware-defined offerings.
This blog post assesses some of the risks to SDDC - and how a proliferation of proprietary approaches poses an existential threat to its future.
The software-defined data centre is in vogue these days. Suppliers with proprietary compute, hardware-defined storage and networking technologies are especially pushing a concept of SDDC that includes the notion of "control planes" and "data planes".
Their vision is one in which a software-defined data centre has a mix of commodity and proprietary hardware platforms, virtualized (federated and serviced) by a data plane and orchestrated (provisioned and managed) by a control plane. Sounds like a good approach, right? Yes - except there is a problem.
Ask any supplier whose data plane and/or control plane should form the governing entity for an SDDC and they will quickly answer: "ours". This translates to an "axis mundi" belief held by each supplier or group of suppliers: that their own infrastructure stack, and therefore the associated software-defined control plane/data plane approach, should be the governing entity.
The situation would have been acceptable if the suppliers in each of the respective compute, networking and storage infrastructure stacks had agreed on a common approach. For example, all storage suppliers (such as EMC, NetApp, IBM, Dell and HP), all networking suppliers (such as Cisco, Juniper and HP), and all compute hypervisor suppliers (such as VMware, Microsoft, Oracle and Red Hat) could have agreed to a common control plane and data plane approach for storage, networking and compute respectively.
Wait? Isn't that the mission for OpenStack - the open-source movement for a software-defined data centre? OpenStack is designed to provide data centre/cloud wide storage, compute and networking abstraction (a.k.a. data/control plane) and implementation (a.k.a. data persistence) standards. In other words, we already have a project that most suppliers say they participate in, and whose stack they want to work with. But they also want to hold on to (and in fact push) their own proprietary SDDC stacks into the data centre.
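As a concrete illustration of that abstraction, OpenStack's Cinder block-storage service lets an operator front multiple suppliers' arrays behind a single control plane and API. A minimal sketch of a multi-backend `cinder.conf` might look like this (the backend section names are illustrative choices, not a recommendation; driver availability depends on the supplier shipping and maintaining a Cinder driver):

```ini
[DEFAULT]
# Expose two heterogeneous backends behind the same Cinder API
enabled_backends = lvm-commodity, netapp-array

[lvm-commodity]
# Commodity, server-based storage via the in-tree LVM driver
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = COMMODITY

[netapp-array]
# Proprietary array fronted by its supplier's Cinder driver
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = ARRAY
```

The point is that the consumer of storage sees one provisioning API regardless of which supplier's hardware sits underneath - which is exactly the common control plane/data plane agreement described above.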
This situation quickly becomes complex when you see suppliers taking on other software-defined infrastructure stacks with their own proprietary technologies in an attempt to become the governing entity in the SDDC. Case in point is VMware. It was one of the first suppliers to usher in the notion of SDDC, but only after it made a conscious effort to include networking and storage capabilities within its hypervisor.
If you remember, a few years ago VMware was considered a "compute virtualization" company. For the most part it did not have much of a say in storage and networking, except to certify third-party infrastructure. However, with the acquisition of Nicira, the general availability of VSANs, and of course the imminent availability of VVOLs, it is clear that VMware today is in a far better position to prescribe the essential elements of SDDC that transcend compute, networking and storage, as it sees them.
At VMworld, for example, VMware execs used VSANs, VVOLs and NSX as examples of how they saw the SDDC control and data planes living inside vSphere. In other words - and it should come as no surprise to anyone - VMware sees vSphere as the central authority in an SDDC implementation. But it also sees a place for itself in OpenStack (via its own distribution, or via API compatibility) and works with third-party storage SDS stacks such as EMC ViPR. However, in all of this, VMware's SDDC has no place for competing software-defined compute and/or networking stacks.
So how do suppliers see different control planes and data planes talking to each other, if there is a need (I should point out that this is a market reaction - ideally they would rather you did not have to do so at all)? Via APIs: the stacks are layered on top of each other, with the supplier's own stack at the top. So, for example, EMC believes that ViPR should control everything storage-related in a data centre, and similarly VMware believes that vSphere should sit on top of all other compute, networking and storage stacks (this applies to other suppliers too).
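To make the "stacked via APIs" picture concrete, here is a minimal Python sketch. All class names are hypothetical - this is not any supplier's actual API - but it shows the pattern: an "overarching" control plane does not touch hardware itself; it simply delegates through the API of whatever stack sits beneath it.

```python
from abc import ABC, abstractmethod


class StorageControlPlane(ABC):
    """Generic control-plane contract: every stack exposes provisioning via an API."""

    @abstractmethod
    def provision_volume(self, size_gb: int) -> str:
        ...


class VendorArrayPlane(StorageControlPlane):
    """Hypothetical native stack for a single supplier's arrays."""

    def provision_volume(self, size_gb: int) -> str:
        # In reality this would talk to the array; here we just return an ID.
        return f"array-vol:{size_gb}GB"


class OverarchingPlane(StorageControlPlane):
    """Hypothetical overarching stack (think ViPR-style): it does not manage
    hardware directly, it calls the APIs of the stacks layered beneath it."""

    def __init__(self, lower_planes: list[StorageControlPlane]):
        self.lower_planes = lower_planes

    def provision_volume(self, size_gb: int) -> str:
        # Delegate the request to the first lower-tier stack.
        return self.lower_planes[0].provision_volume(size_gb)


top = OverarchingPlane([VendorArrayPlane()])
print(top.provision_volume(100))  # prints "array-vol:100GB"
```

Each extra tier in a real data centre adds another hop like this - which is why heterogeneity multiplies complexity rather than hiding it.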
They also believe that such stacks are interoperable only to the extent that they choose to make their own stack compatible with the others (or, in some cases, not at all). Remember SMI-S, where you could do most things via SMI-S, but for some crucial tasks you had to use the supplier's proprietary UI? So pursuing a heterogeneous strategy means a data centre could potentially have one or two SDDC stacks for compute, another one or two for networking, another couple for storage, and then potentially one overarching stack for the entire data centre. The more heterogeneous the data centre infrastructure, the more complicated its software-defined version is likely to be.
If I am a buyer examining an SDDC strategy, I cannot help but conclude that SDDC is potentially just one big complicated mess. If the vision for SDDC is to simplify a heterogeneous data centre, this appears to be anything but simple - never mind the promised economics of SDDC. For example, as a VMware shop I could go with vSphere for compute, networking and storage, but as soon as I introduce KVM or Hyper-V, I cannot extend the vSphere stack (VSANs and VVOLs don't work with Hyper-V or KVM, nor does NSX).
For that I need OpenStack, or some other "openly accepted" stack - and for the moment there isn't another one (i.e. OpenStack appears to be the only accepted one). For storage, suppliers like EMC are keen on pushing a solution like ViPR, but what are the guarantees that it will work with all third-party storage? Yes, it supports OpenStack Cinder and SMI-S for third-party storage, but what are the chances that a third-party storage supplier will instead push its own SDS stack, and thereby limit access to a stack from a third party like EMC? And networking presents a whole different set of challenges.
In all of this, whatever happened to furthering the role of hardware commoditization in an SDDC? That suddenly seems to have taken a back seat as far as some suppliers are concerned. Suppliers like VMware should not give it up so easily. VMware literally "bootstrapped" the second phase of the hardware commoditization revolution for compute by virtualizing x86 hardware (Windows/Linux could be credited with the first). To this day it maintains that its SDDC stack includes commodity hardware for compute. It initially steered clear of such approaches for networking and storage, but has now changed course somewhat with VSANs (I get why it needs to offer both options, so as not to alienate its storage partner ecosystem, which includes its parent company EMC). VSANs will now join other software-defined storage platforms (including hyperconverged platforms) in bringing commodity, server-based storage hardware into the enterprise.
However, I wonder if VMware will pursue this course for networking as well. In other words, will it pursue a course that pushes hardware standardization for networking with native support in vSphere, while also using NSX to talk to third-party stacks and platforms? As it does for EVO and vSphere, will VMware eventually publish a hardware compatibility list for networking hardware platforms? Will VMware take it upon itself to "bootstrap" the hardware commoditization revolution for networking, either organically or via the acquisition of a supplier like Cumulus Networks or Jeda Networks?
OpenStack appears to be steadily gaining traction, and this creates a dichotomy in the future of SDDC. On one hand, suppliers like Platform9 are taking a services-based approach to the operationalization of OpenStack private clouds in the enterprise. On the other hand, incumbent suppliers appear to be presenting proprietary SDDC approaches as alternatives to OpenStack.
Most of these approaches are two-tier overlays, in which one tier is the control plane and the other is the data plane. All of this is fine; what is not fine is when the protection of legacy proprietary hardware-defined platforms morphs into a vehicle for suppliers to exert control and influence over an SDDC implementation. For the moment it seems there is a place for both open and proprietary stacks in SDDC - and the two sides will be interoperable (in a limited way, but interoperable nonetheless). Buyers therefore have three options:
- Adopt OpenStack but acknowledge limited functionality when using it with commercial compute, storage and networking components
- Adopt the proprietary approach preferred by the compute supplier, but leave storage and networking stacks out of it
- Adopt a hybrid approach which involves implementing multiple proprietary approaches and potentially even OpenStack
Only time will tell which approach will win out.
What I would like to leave the reader with are these questions:
Why couldn't all suppliers, who pledge allegiance to OpenStack anyway, simply ensure that all their platforms are fully compatible with OpenStack, give up proprietary approaches, and move to standard hardware built with off-the-shelf components? After all, innovation can still happen in software - using open approaches. SDDC/SDS/SDN/SDC may be a good hedge against the hardware commoditization revolution, but now that the (hardware commoditization) ship has sailed, I wonder how long it will take for the next wave to come, and which wave it will be. Networking, anyone?
Note: Opinions expressed in this blog post are my own, and do not constitute an official IDC position on the topics presented herein. Suppliers and platforms are provided as examples only.
Posted by Ashish Nadkarni