A vendor recently asked us what qualifies as a workload. My immediate answer was: “any process or set of processes that runs in user space and provides a service to an external entity”.
Technically that means that anything running in kernel space cannot constitute a workload. That raises the questions: Can storage be considered a workload? And what set of processes constitutes a “storage workload”?
Before we attempt to answer these questions, let us look at the differences between user space and kernel space. In a nutshell, in the context of modern operating platforms,
- The user space is the region of memory where user processes run. It starts above the kernel and includes the rest of the available memory. This segment of memory is protected: the operating system prevents one user process from interfering with another. Only the kernel is allowed to access a user process. A process operating in this memory region is said to be operating in user mode.
- The kernel space is the region of memory where all kernel-level services are provided via kernel processes. Any process executing in kernel space is said to be executing in kernel mode. Kernel space is a privileged area: a user process can reach it only through well-defined system calls or interfaces, and has no direct access to privileged machine instructions or devices. A kernel process, however, has direct access to both. A kernel process can also modify the memory map, an operation frequently required for process scheduling. A user process effectively becomes a kernel process when it executes a system call and starts executing kernel code.
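To make that last point concrete, here is a minimal Python sketch (assuming a POSIX system). Python's `os` module functions are thin wrappers over system calls such as pipe(2), write(2) and read(2): each call traps the process from user mode into kernel mode, where the kernel performs the privileged work and then returns control.

```python
import os

# os.pipe(), os.write() and os.read() are thin wrappers over the
# pipe(2), write(2) and read(2) system calls. Each call switches the
# process from user mode to kernel mode; the kernel does the
# privileged work (buffer management, device access) and returns.
read_fd, write_fd = os.pipe()

payload = b"hello from user space"
os.write(write_fd, payload)            # trap into the kernel: write(2)
data = os.read(read_fd, len(payload))  # trap into the kernel: read(2)

os.close(read_fd)
os.close(write_fd)
print(data.decode())  # -> hello from user space
```

The process never touches the pipe's kernel-side buffer itself; it only asks the kernel to do so on its behalf, which is exactly the user-mode/kernel-mode boundary described above.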
So can storage be considered to be a workload?
Yes, it should be considered a workload. But it is not a single workload; rather, it is a set of different modules and daemons running in both user and kernel space.
Many storage and storage-related services operate in both user and kernel mode. Several of these processes need to access the IO subsystem or hardware layer to provide essential services such as logical volume management, file systems (both local and network-based), remote procedure calls and so on, most of which reside in kernel space. Services such as multipathing and disk access via standard interfaces (like SCSI) over transport protocols like Fibre Channel reside entirely in kernel space.
Additional value-added services such as data management, namespaces, reporting and monitoring run in user space. Collectively, all these user and kernel processes that deal with data persistence, data access and data management constitute a storage workload (this also includes the databases that manage metadata in some higher-level storage services). Whether these processes run on bare metal or in a virtualised environment is a moot point here, because a guest kernel does not really care whether it is accessing real physical hardware or an emulated version.
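As an illustration of that split, here is a hedged sketch of what a user-space reporting or monitoring service might do: `os.statvfs` wraps the POSIX statvfs() call, so the capacity figures are produced by the kernel's file system code, while the reporting logic itself stays entirely in user space. (The mount point `/` is just an example.)

```python
import os

# A reporting/monitoring service (user space) gathering capacity data
# for a mounted file system. os.statvfs() wraps the statvfs() call,
# which is answered by the kernel's file system driver - the user
# process never touches the block device directly.
stats = os.statvfs("/")

block_size = stats.f_frsize              # fundamental block size, bytes
total_bytes = stats.f_blocks * block_size
free_bytes = stats.f_bavail * block_size  # free space for non-root users

print(f"capacity: {total_bytes // 2**30} GiB, "
      f"free: {free_bytes // 2**30} GiB")
```

The division of labour mirrors the workload definition above: persistence and device access live in the kernel, while the value-added reporting runs as an ordinary user process.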
What about other application workloads?
Most application workloads run in user space. The key reason is that most applications use standard POSIX interfaces for accessing data, so they do not need to run any kernel processes directly, at least for IO access. Containerising (or virtualising) such applications is therefore relatively easy: as tenants on an operating platform, they trust the underlying platform to cater to their needs with essential features - which can be provided regardless of whether the platform itself is virtualised. Very few applications actually attempt to run the storage stack themselves - and when they do, they limit their access to user space.
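A short sketch of that POSIX indifference, using Python's `os` wrappers over open(2), write(2), fsync(2) and read(2) (the file path here is a throwaway temp location, not anything prescribed):

```python
import os
import tempfile

# An application persisting state through the standard POSIX interface.
# Whether this file ultimately lands on a local disk, a network file
# system, or an emulated device inside a VM or container is invisible
# at this layer - the same open/write/fsync/read calls apply.
path = os.path.join(tempfile.mkdtemp(), "app.dat")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
os.write(fd, b"application state")
os.fsync(fd)   # ask the kernel to flush the data to stable storage
os.close(fd)

fd = os.open(path, os.O_RDONLY)
restored = os.read(fd, 1024)
os.close(fd)
```

Because the application only ever speaks POSIX, the entire storage stack underneath it can be swapped, virtualised or containerised without the application noticing - which is precisely why such workloads are easy tenants.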
Why is it such a big deal to run storage workloads and non-storage workloads together (i.e. why is hyperconvergence necessary)?
IDC talks about hyperconverged platforms as a delivery model, often in the context of software-defined storage, in which the supplier allows non-storage workloads to run adjacent to storage workloads on a common compute (read: hardware abstraction) layer provided by a hypervisor (hence the term hyperconvergence). As an aside, such adjacency can also be provided on bare metal (which led IDC to first coin the term CompuStorage) or on proprietary hardware platforms. However, suppliers see hypervisors as a convenient way to manage the hardware platform itself, and even though each hypervisor implements hardware abstraction differently, there are benefits in simply offloading physical hardware access to the hypervisor.
In the past this hypervisor was used only for running application (non-storage) workloads. However, advances in hypervisor technology (more specifically, the maturing of the hardware abstraction layer) are making it easier for storage workloads to be moved to the hypervisor itself. This is the genesis of hyperconvergence - and it is a one-way street. In other words, hyperconvergence - which is all about moving compute to data and not vice versa - is here to stay.
Hypervisors have matured enough to abstract the hardware platform and, in fact, provide better control over it as a result - which is pushing them to be embedded in purpose-built storage controllers. In other words, instead of the firmware running bare metal on the controller, it now runs as a virtual instance on top of a hypervisor embedded inside that controller. This is a huge step up for storage suppliers because it allows them to:
- Embed additional data and orchestration services inside the storage array thereby reducing the overall footprint of the storage infrastructure
- Merge different code bases into a single platform (especially when such code bases are from different acquisitions and merging them is an exercise in futility)
- Open up the storage controller for running additional applications that can benefit from adjacency to the data layer.
So with this in mind, should storage be considered a workload that is mated to a hypervisor? Or should it be a capability built into the hypervisor, so that a purpose-built "storage hypervisor" and a general-purpose hypervisor become the new world order for what we know today as general-purpose servers and purpose-built "controller" platforms, each with its own operating platform stack?
Posted by Ashish Nadkarni