HP StorageWorks virtualization white paper

Contents

Executive summary
Introduction to storage virtualization
Virtualization overview
   Basic principles
   Why virtualize?
   Virtualization taxonomy
   Implementing storage virtualization
   Storage virtualization tradeoffs
A more complete virtualization perspective
   A different perspective
HP StorageWorks virtualization today
HP StorageWorks use cases
   Storage virtualization: more than just hardware
HP storage virtualization: The big picture
   HP StorageWorks Extensible Virtualization Stack
   HP StorageWorks Utility
HP StorageWorks virtualization strategy overview
   Benefits of the strategy
      Simplicity
      Agility
      Value
Summary
For more information
   References
   Further reading

Executive summary

Storage virtualization has been around for several decades.
While technologies have advanced and capabilities have become more sophisticated over time, the motivations for virtualizing resources and the anticipated benefits have remained relatively stable. Today, because it addresses a number of significant challenges, storage virtualization is again an important topic. Storage vendors, the media, and other sources expound on its merits in ways that might lead one to conclude that the term has a single, agreed-upon meaning and value, and that perhaps one size fits all. Still, much confusion exists. Customers look to suppliers like HP for guidance and leadership. Thus, vendor virtualization approaches and strategy are important.

Virtualization is the aggregation of physical resources into a unified structure (a "pool"), and the presentation of those resources as capabilities that can be consumed by applications, desktop clients, or other types of storage clients. The term "virtualization" has become an umbrella that encompasses a vast array of encapsulation and management technologies and implementation methods. These approaches create pools of sharable resources that optimize utilization and, ultimately, can automatically allocate the resources to match supply to demand.

Business motivations for virtualizing resources vary, and generally fall into broad categories that include:
• Reduce infrastructure costs by improving asset utilization and simplifying the environment
• Minimize ongoing costs by simplifying administration and increasing worker productivity
• Generate incremental revenue by improving application productivity by way of uptime, performance, and other enhancements

Virtualization addresses these needs, often yielding significant economies. For example:
• Virtualized resources create pools that enable data to be consolidated into smaller numbers of storage systems, resulting in higher utilization and management efficiencies.
• Virtualized pools can simplify the sharing of data among multiple systems.
• Virtualization helps to standardize management models and the capabilities delivered by pooled resources, in turn simplifying overall storage management.
• Replication and other virtualization technologies enable higher service levels that directly translate into user and business application productivity and lower downtime costs.

It is important to note that virtualization extends beyond storage. While the focus of this paper is storage virtualization, one can also virtualize server, application, network, and other resources. HP offers a broad range of virtualization technologies and solutions that are not discussed in this paper.

This paper describes the business motivations for storage virtualization and the benefits it can deliver. Basic storage virtualization approaches are described, and their benefits and limitations are compared. Finally, the HP StorageWorks virtualization strategy is described. The paper also briefly discusses the enterprise context for storage virtualization: a global implementation that will be delivered over time with the HP Adaptive Infrastructure.

Introduction to storage virtualization

Storage virtualization has been around ever since IBM released the first disk drive in 1956, although some might assert that core and magnetic drum memory were also primitive forms of virtualization. In the context of disk drives, the physical location on which a piece of data was placed was obscured from the application writing or reading the data: disk drives keep indices that track data-to-physical block mapping (abstraction). Disk drives also have the ability to change the mapping to accommodate minor failures ("revectoring," or bad block replacement mechanisms). Today this is not even thought of as virtualization; it is totally accepted, invisible, and uninteresting from an IT perspective.

Figure 1. HP has a long history of storage virtualization innovation.
Figure 1 shows a representative evolution of more interesting forms of storage virtualization that HP has delivered. Initially, various forms of RAID implementations were shipped. The first was VMS Volume Shadowing, a host-based (implemented with server-resident software) RAID 1 implementation that was delivered several years before the term RAID was coined. After more comprehensive RAID algorithms were developed, HP incorporated them into modular array families like the RA/EMA and SmartArray product lines. Even more interesting, the VA family incorporated a self-adjusting RAID mechanism called AutoRAID, an industry-leading approach that was only recently discontinued from the product line.

All RAID schemes virtualize by creating a single pool consisting of a number of disk drives. Application data is algorithmically deposited across the disks by array controllers. In some cases (RAID 3, RAID 4, RAID 5, RAID 6), one or more disks' worth of parity data is also generated by the array and stored on its constituent disks to provide resilience to disk failures. Although RAID was initially invented to allow inexpensive commodity disk drives to reach the availability characteristics of higher-cost proprietary drives, the enduring benefits of RAID include:
• Higher and more uniform device (disk) utilization.
• Better performance, achieved through automatic load balancing among the disks aggregated into the RAID set.
• Improved data availability, because built-in redundancy ensures greater uptime.
• Reduced probability of data loss, because loss of a single disk drive results in no loss of data (although a performance loss may occur).
• Easier manageability: one RAID set is easier to manage than the corresponding number of independent disk drives.

In 2001, HP extended RAID implementation to encompass the entire contents of a storage subsystem, as opposed to creating multiple physically discrete RAIDs within a single subsystem.
The result was the HP StorageWorks Enterprise Virtual Array (EVA) family. The EVA amplified the benefits of RAID to encompass a complete system that could optimize the utilization of many disks simultaneously, and ensure even greater availability through the proprietary application of RAID algorithms. The system-wide approach also made the EVA easier to manage than peer products. The EVA extended storage-based virtualization (virtualization embedded within the storage system) in interesting and useful ways. It transcends conventional RAID by delivering advanced storage-based whole-system virtualization with capabilities like:
• Dynamic expansion of the virtual disk pool: disks inserted into a configured array are automatically added to the existing virtualized pool (by the EVA controller), and the workload is automatically redistributed to all members without administrative involvement.
• Dynamic expansion of the virtual disks ("LUNs") presented to servers, although the ability to dynamically present the new capacity depends on the host operating system's ability to recognize the changed LUNs.
• Snapshot and other point-in-time replication technologies. These in-the-box replication technologies create virtual image copies of the LUNs, and can be used for rapid data protection and recovery, and for time-shifting backup operations.
• Remote replication, with synchronous and asynchronous options, which invokes mirroring and other technologies to provide resilience against site outages, and is also useful for remote backup and other purposes.

While general attention has been focused on disk storage virtualization, it is useful to point out that tape libraries, Redundant Arrays of Independent Tapes (RAIT), and other technologies apply virtualization to tape-based storage.
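To make the snapshot idea concrete, the following is a minimal illustrative sketch of a copy-on-write point-in-time snapshot, one common way such features are built. It is a hypothetical model, not the EVA firmware: the snapshot initially shares every block with the source LUN, and a block is preserved only when the source is about to overwrite it, which is why snapshot creation is near-instant.

```python
# Illustrative copy-on-write snapshot sketch (hypothetical, not EVA firmware).
class SnapshotLUN:
    def __init__(self, blocks):
        self.blocks = list(blocks)      # live data on the virtual disk
        self.snapshot = None            # block index -> preserved old value

    def take_snapshot(self):
        self.snapshot = {}              # nothing copied yet: near-instant

    def write(self, index, value):
        if self.snapshot is not None and index not in self.snapshot:
            self.snapshot[index] = self.blocks[index]  # copy-on-write
        self.blocks[index] = value

    def read_snapshot(self, index):
        # Preserved copy if the block has changed, else the shared live block
        return self.snapshot.get(index, self.blocks[index])

lun = SnapshotLUN(["a", "b", "c"])
lun.take_snapshot()
lun.write(1, "B")
assert lun.blocks[1] == "B"            # live LUN sees new data
assert lun.read_snapshot(1) == "b"     # snapshot still shows the old image
```

A backup application can read the frozen image through `read_snapshot` while the live LUN continues to serve writes, which is the basis for the time-shifted backups mentioned above.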
As the millennium changed, the industry began delivering network-based virtualization capable of pooling heterogeneous, multi-vendor storage systems using virtualization technologies embedded in the fabric (SAN or LAN) to which the storage resources are attached. HP offered the HP OpenView Continuous Access Storage Appliance (CASA) to aggregate multiple disk subsystems into a single pool for data migration, capacity expansion, management simplification, and other capabilities. Network-based virtualization seemed to be ahead of its time back then, and faded from the limelight until recently, in part for reasons discussed later in this paper.

Around 2003, a new genre of storage systems was introduced with the HP StorageWorks Reference Information Storage System (HP RISS). The virtualization employed in the HP RISS can be thought of as a hybrid of storage-based and network-based, because grid-enabled subsystems are composed of a number of interlinked, networked processor nodes that together present a single, highly capable storage system. HP RISS leverages grid-computing principles to deliver a virtual pool of storage that scales well, performs consistently over a huge capacity range, and incorporates higher-level intelligence to perform a variety of business application-centric tasks. In addition to HP RISS, clustered storage applications running on integrated infrastructures such as the HP StorageWorks Enterprise File Services (EFS) Clustered Gateway and HP StorageWorks Scalable File Share (HP SFS) employ virtualization to deliver a variety of scale-out storage capabilities.

At present, the virtualization of heterogeneous, multi-vendor storage assets is again in the limelight. A basic motivation is the need to migrate data among storage systems, for example, when an aging system is replaced with a new one.
When the old and new systems are combined into the same pool, network-based virtualization can be used to:
• Present storage to business applications.
• Migrate data from the old system to the new using mirroring or other techniques.
• Logically substitute the new array for the old when data migration is complete.
• Accomplish the entire process almost non-disruptively to running business applications (there may be slight downtime).

There is no technical need to perform backup operations in conjunction with migration activities, although business practices may require it. Because of the usefulness of inter-system data migration, HP released the HP StorageWorks 200 Storage Virtualization System (SVS200) in 2006. Similar capabilities, integrated with traditional storage-based virtualization, are found in the HP StorageWorks XP10000 and XP12000 Disk Arrays. In both cases, heterogeneous and multi-vendor storage systems are unified into a single pool. The pool can include a range of storage systems representing tiered storage. Application-usable volumes are carved from the pool, and administrators focus their attention on the SVS200 or XP array, rather than the individual systems comprising the pool. It is clear that HP has been at the forefront of storage virtualization for decades. The next section looks at what virtualization is, why it is important, and how it can be implemented.

Virtualization overview

Basic principles

Several concepts pervade the many forms of storage virtualization. At a high level, virtualization is the abstraction of physical resources: storage virtualization technologies create logical views of storage that are distinct from their physical components. Resources are pooled, or aggregated, to encapsulate their resources and capabilities in useful and meaningful ways: this can be thought of as providing a foundation for deploying, managing, allocating, and delivering storage capabilities as services.
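The abstraction-plus-pooling principle can be sketched in a few lines of code. This is an illustrative model only (the class and extent-based allocation scheme are assumptions for the example, not any product's design): physical devices contribute capacity to a pool, virtual volumes are carved from the pool, and only the mapping layer knows which physical extents back each volume.

```python
# Minimal sketch of storage pooling and virtual-to-physical mapping
# (illustrative only; not any vendor's allocation algorithm).
class StoragePool:
    def __init__(self):
        self.free = []                  # (device, extent) pairs available

    def add_device(self, name, extents):
        self.free += [(name, e) for e in range(extents)]

    def create_volume(self, extents_needed):
        if extents_needed > len(self.free):
            raise ValueError("pool exhausted")
        # Logical extent i of the new volume -> some physical location;
        # the consumer never sees which device actually holds the data.
        return [self.free.pop(0) for _ in range(extents_needed)]

pool = StoragePool()
pool.add_device("array_A", 4)
pool.add_device("array_B", 4)
vol = pool.create_volume(6)            # spans both arrays transparently
assert len(vol) == 6
assert {dev for dev, _ in vol} == {"array_A", "array_B"}
```

The host sees one six-extent volume; the fact that it spans two arrays is hidden in the mapping table, which is exactly the "logical view distinct from physical components" described above.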
Thus we speak of virtualization as producing virtual pools of resources that can be provisioned and managed as needed. A key goal of virtualization is to remove the complexities inherent when scores of individual components are gathered within an infrastructure, and to provide efficiencies through standardization. Standardization reduces the visible differences between entities, creating greater utilization efficiency and flexibility within the infrastructure. Virtualization ultimately helps to simplify administration.

For IT managers, virtualization creates a common ground upon which management and control can be exercised. This can be thought of as standardizing the infrastructure by providing a masking layer between disparate components. For example, in an EVA, virtualization allows one to manage the EVA as a whole, rather than as a collection of physically distinct RAIDs or disk drives; an XP12000 or XP10000 Disk Array with attached external arrays allows one to manage the XP and the tiered storage behind it as a single entity, rather than needing to manage each attached array independently. At a higher level, virtualization applied at the network level (network-based) enables unified, common management of heterogeneous arrays, and facilitates data migration and replication among arrays. The concept of neutralizing management differences with virtualization also applies to provisioning, hierarchical storage management (HSM), remote replication, and other operations. This illustrates that virtualization applies to more than disk drives: it can encompass storage management and other functionality that extends the abstraction of core physical resources. Virtualization can occur at many levels in the data processing chain. In addition to virtualizing storage resources and capabilities, higher-level functionality can add automation and adjust provisioning, load balancing, paths through networks, and more.
Leaving the storage domain, server resources can be virtualized. This is commonly done to partition today's powerful servers, using virtual machines or other technologies, as a way to maximize server resource (capacity and memory) utilization. Applications and other resources can also be virtualized, as is done by clustering, grid computing, and other service-oriented architectures. This paper does not cover non-storage virtualization, but it is useful to be aware of the breadth to which virtualization technologies can be applied.

Why virtualize?

Today, IT managers are challenged to create more efficient and productive infrastructures, to deliver more reliable services (thereby enhancing application and business productivity and agility), and to reduce overall costs, both for infrastructure and for ongoing administration. Clearly, the natural tension between the dual goals of higher efficiency and lower cost is difficult to resolve using conventional approaches. This is where virtualization provides substantial help. Figure 2 summarizes how storage virtualization can impact costs and productivity. This is the value proposition of storage virtualization. By mapping individual business (or IT department) costs into the model, the value of virtualization to the organization can be estimated. This is an important concept: the value of a solution should be greater than the total cost (not price) of the solution. The figure can aid in identifying the relevant components of a solution and their associated costs to the company. HP can provide tools and expertise to determine the TCO and ROI of a solution; Figure 2 and the following discussion provide a starting point.

Figure 2. StorageWorks virtualization value proposition: storage costs (green boxes), facilities (gray), administration (blue), and business productivity (orange) can all be affected by storage virtualization.

Many factors contribute to efficiency. Asset utilization is a key focus area.
Total asset costs are sometimes difficult to determine precisely, as they may include components in a number of categories, such as:
• Acquisition: costs associated with obtaining and preparing the equipment for use, including procuring, installing, configuring, and testing it before use. These may also extend to the cost of software licenses and their deployment.
• Maintenance: after the warranty expires, service and other costs continue.
• Facilities: storage subsystems consume floor space, power, and cooling on an ongoing basis. While footprint, power consumption, and management advances are expected to reduce facilities requirements, virtualization can help even more by reducing the total number of storage systems needed to do a given amount of work.
• Management: administrative costs associated with provisioning and fine-tuning the system on an ongoing basis. These costs may include storage, server, network, and application administrators.
• Disposal: when the useful life ends, cost is incurred during the disposal process.

Virtualization technologies reduce administrative costs while improving business application and user productivity. For example:
• Workload balancing can optimize capacity distribution among pooled elements. This can significantly increase capacity utilization compared to manual provisioning, with a resulting reduction in the amount of physical storage, and associated software licenses, that must be deployed. Capacity utilization may increase significantly when workload distribution is optimized among disk drives, arrays, and so on.
• Workload performance distribution results in optimized load balancing among pooled assets, and can reduce the number of array controllers and the network infrastructure needed to satisfy application demands.
• Workload distribution and optimization throughout the storage network, which includes paths through multiple HBAs in hosts, can contribute toward reduced LAN/SAN/WAN networking administrative overhead. Also, when multiple host connections and appropriate software are deployed, data access (availability) is improved. This has a direct positive impact on application and user productivity.

Operational efficiencies are also important. While these can be difficult to quantify, there are at least two key considerations:
• Reduce administrative costs, for example, by increasing the amount of storage an administrator can effectively manage.
• Improve service delivery levels to increase overall application productivity.

Storage virtualization can be thought of as providing a level of standardization among the pooled assets. For example, a pool may contain several different storage arrays, perhaps from different vendors. In a non-virtualized environment, each type of array has its own operational characteristics, management software (configuration utility and interface), and requirement for specialized operator understanding of the array's operating characteristics. When these assets are virtualized, a software layer intercedes to manage the arrays (pool their capacity) and presents a single interface for the administrator to interact with: it reduces the number of objects to manage and effectively brings all of the arrays to a common level of functionality. This can be a huge simplification for storage provisioning management, for example by reducing the number of administrative tasks. This sort of simplification can significantly extend the amount of storage an administrator can manage. As shown later, this type of virtualization can be found in the HP StorageWorks XP array family and the SVS200.
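The interceding software layer just described can be sketched as a facade over vendor-specific interfaces. Everything here is hypothetical (the vendor classes, method names, and round-robin placement are invented for illustration; they do not represent any real array API): the point is that the administrator calls one `provision` method, and the layer translates to whatever each array requires.

```python
# Hedged sketch: one management interface over heterogeneous arrays
# (hypothetical vendor APIs, not real management software).
class VendorAArray:
    def make_lun(self, gigabytes):      # vendor A's call, sized in GB
        return f"A-lun-{gigabytes}GB"

class VendorBArray:
    def allocate(self, megabytes):      # vendor B's call, sized in MB
        return f"B-vol-{megabytes}MB"

class UnifiedPool:
    """Single provisioning interface hiding per-vendor differences."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.next = 0

    def provision(self, gigabytes):
        # Simple round-robin placement; a real engine would weigh
        # free capacity, tiering, and load before choosing an array.
        array = self.arrays[self.next % len(self.arrays)]
        self.next += 1
        if isinstance(array, VendorAArray):
            return array.make_lun(gigabytes)
        return array.allocate(gigabytes * 1024)

pool = UnifiedPool([VendorAArray(), VendorBArray()])
luns = [pool.provision(10) for _ in range(2)]
assert luns == ["A-lun-10GB", "B-vol-10240MB"]
```

The administrator's view shrinks from N array-specific tools to one, which is the management simplification the paper attributes to the XP family and the SVS200.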
From a business perspective, a central focus for IT is to keep applications running (business productivity), and the quality of service (QoS) of the storage farm plays a significant role in this. Thus Service Level Agreements (SLAs) are created, and a sufficiently robust infrastructure, along with appropriate policies and practices, is put in place to try to ensure that service level objectives (SLOs) are met. Storage virtualization helps by providing a unified foundation, the storage pool, that distributes and balances resource utilization in predictable ways. Upon the common pool, higher levels of management software can work to improve SLO attainment through failover, data replication, data recovery, and other mechanisms that operate across the pooled assets. The result: a higher probability of meeting SLOs, with measurable gains in application productivity. Productivity measures include reduced downtime and higher, predictable performance for all applications using resources from the pool.

The discussion so far has focused on the direct results of virtualizing storage today. Derivative factors could also be considered. For example, with fewer arrays in the environment (because of higher capacity utilization), backup and recovery tasks can become simpler and may be less time consuming. Also, matching storage usage to data usage can further reduce costs; HSM and HP Information Lifecycle Management (ILM) solutions are relevant examples. In the future, automated service-oriented storage provisioning methods will further reduce infrastructure and administrative costs while improving business application productivity. One can think of these as providing application- and data usage-oriented virtualization, which can take the form of allocating storage from tiered physical resources (different QoS sub-pools within a larger aggregated pool, for example).
Note that the ability to dynamically link all IT resources to business requirements is part of the overall Adaptive Infrastructure strategy from HP.

Virtualization taxonomy

Storage virtualization has historically been described in terms of where it is implemented: within storage subsystems (storage-based virtualization), in the storage network fabric (network- or fabric-based virtualization), or in a server (server- or host-based virtualization). At another level, virtualization can be described by what is being virtualized: physical resources (arrays, tape libraries, and so on), data containers (LUNs, filesystems, files, and so on), or storage access; and by how the virtualization is accomplished (in or out of the data path). Figure 3 shows a simplified infrastructure stack that is helpful for understanding where virtualization can be implemented and the resources that can be incorporated in a pool.

Figure 3. Virtualization is traditionally implemented by software residing in storage, network components, or servers.

Implementing storage virtualization

Storage-based virtualization is applied to elements within a single storage subsystem or frame. In this approach, the virtualization technology (implemented with software or firmware) resides within the subsystem. Virtualized objects (disks, filesystems, tape drives, or other objects) are available to any host that can access the subsystem through either a direct or network connection. RAID, snapshot, partitioning, and controller failover software are examples of basic storage-based virtualization functionality. Storage vendors continue to evolve storage-based virtualization with advanced resilience, provisioning, data protection, and other capabilities. Storage-based virtualization is host-neutral: it can be presented equally well to supported computers and operating systems.
It also tends to have low latency (although it might impose high requirements for controller processors and memory), and is provided to applications as securely as non-virtualized storage. Storage-based virtualization techniques and software tend to be unique to each storage system (or family).

Network-based virtualization is applied to resources presented by storage subsystems attached to a storage network (typically a SAN), usually to join a number of disk arrays into a single pool from which virtual (logical) disks are created and presented to hosts. These virtual disks may be provided from virtualized storage subsystems like the HP StorageWorks Enterprise Virtual Array. Notice that in this example, different virtualization approaches may be combined to achieve the desired results (availability, performance, and physical location, for example). Virtual disks created within the network can be presented to any host connected to the network, and they are securely presented in a host-neutral manner.

Network-based virtualization can be implemented either within switches or with self-contained devices, sometimes called appliances, attached to the network. Switch implementations may be accomplished within the switch operating software, with more sophisticated switch-resident functionality often being delivered as software that runs on blades or modules installed within the switch. Appliance-based implementations are classified as in-band or out-of-band. In-band appliances, like the HP StorageWorks SVS200, connect to the network and serve as intelligent logical bridges between the resources they pool and the hosts that access them; their functionality becomes part of the data path. Out-of-band appliances work indirectly, with minimal direct data path involvement; they act more as resource aggregation and connection directors.
They perform their virtualization tasks, present the resulting volumes to hosts, and remain out of the data path unless needed for particular operations.

When combined with network-based mirroring and other capabilities, network-based virtualization provides a useful basis for migrating data among storage systems: for example, moving data from a retiring array to its replacement, or moving data among tiers of storage. Network-based virtualization can make these and other data movements almost invisible to running applications. Essentially, the application is paused while the array servicing it is placed into the virtualized pool. Once in the pool, the application resumes access while the network virtualization technology performs the migration (typically a mirroring operation) and swaps logical disk identities when the migration is complete. Accessing applications may see a slight performance impact during some operations, depending on how the mirroring is actually accomplished. Data migrations performed this way are far less disruptive and labor intensive than traditional backup-and-replace array swaps.

Network-based virtualization mechanisms tend to differ among vendors. This is important to keep in mind because, like other virtualization approaches, network-based virtualization relies on algorithms (formulas that dictate how the virtualized pool will be created and data deposited within it) and on metadata that tells the system what was done (what data is where, for example). This means that data on arrays that were virtualized with one switch or appliance engine will not necessarily be directly usable by another virtualization mechanism. The implication: network-based virtualization should be viewed either as a very long-term solution (where your chosen virtualization technology will be in place for a long time), or as a solution for short-term projects (like moving data from one array to another).
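The pause, mirror, and identity-swap sequence described above can be reduced to a small sketch. This is an illustrative model under stated assumptions (the `VirtualLUN` class and dict-backed "arrays" are invented for the example; real engines mirror at the block level in the fabric): the virtualization layer presents one stable virtual LUN, mirrors incoming writes to both arrays while a background copy runs, then atomically swaps the mapping to the new array.

```python
# Hedged sketch of mirror-and-swap data migration behind a virtual LUN
# (illustrative only; not any vendor's migration engine).
class VirtualLUN:
    def __init__(self, backend):
        self.backend = backend          # dict: block -> data (the old array)
        self.mirror = None              # second backend during migration

    def write(self, block, data):
        self.backend[block] = data
        if self.mirror is not None:     # keep the new array in sync
            self.mirror[block] = data

    def read(self, block):
        return self.backend[block]

    def migrate_to(self, new_backend):
        self.mirror = new_backend
        for block, data in self.backend.items():
            new_backend[block] = data   # background copy of existing data
        # Swap identities: the new array now services all I/O.
        self.backend, self.mirror = new_backend, None

old_array, new_array = {0: "x", 1: "y"}, {}
lun = VirtualLUN(old_array)
lun.migrate_to(new_array)
lun.write(2, "z")                      # post-migration writes hit the new array
assert lun.read(0) == "x" and new_array[2] == "z"
```

Because hosts address the virtual LUN rather than either physical array, the swap is invisible to applications, which is what makes the retire-and-replace scenario nearly non-disruptive.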
Changing network-based virtualization products may require significant backup, data migration, or other efforts. Uses for network-based virtualization include:
• One-time data migration among arrays.
• Creating pools of dissimilar arrays for use as integrated tiered storage environments. This can be done in conjunction with array-based functionality, and may include higher-level storage management software.
• Creating expansive capacity pools to accommodate huge data sets. However, this sort of capacity scaling is of decreasing importance today, given current storage array capacities in the hundreds-of-terabytes and multiple-petabyte ranges (and growing). Also, care must be taken when creating a pool of this sort, lest the pooling mechanism throttle back the performance of high-end arrays to match them with lower-performing units: virtualizing tends to standardize the behavior of pooled components.
• Continuous data replication among sites for business continuity, backup, or other purposes.

Server-based storage virtualization is implemented by software running in a server. The software may reside in the operating system, in a driver, or in a layered software application. Server-based virtualization can be applied to any storage visible to the server, regardless of whether it physically resides on direct-attached (DAS) or network-attached devices (SAN, LAN). The flexibility of being able to create a pool from disparate storage and attachment technologies is one of server-based virtualization's strengths. Another strength is the level of integration possible with the operating system, which allows OS-specific optimizations and functionality to be easily and transparently implemented. Interesting applications for server-based virtualization include:
• Mirroring data among multiple storage systems, such as external arrays and DAS RAID.
• Creating filesystems that can be provided to networked clients.
This is the original implementation of file serving, which has evolved into today's NAS.
- Creating RAID sets from embedded disk drives when a RAID controller is not available.
- Creating large volumes from a number of smaller individual disk drives.
- Host path failover among multiple SAN connections (HBAs).
- Performance load balancing among multiple SAN connections.
Storage virtualization tradeoffs
From the preceding discussion, it is clear that there may be many ways to achieve a particular result through virtualization. For example, one could aggregate several different arrays into a single pool using either network-based or server-based virtualization techniques. Likewise, RAID could be implemented just as effectively using server-based RAID software or subsystem-resident software. In choosing a "where to do it" implementation, one should be aware of a number of tradeoffs. These are summarized at a high level in Figure 4. Keep in mind that one can stack virtualization implementations, thus combining the benefits of all layers while minimizing the drawbacks of each layer.
Figure 4. Storage virtualization summary: all storage virtualization implementations have tradeoffs. This chart summarizes key tradeoffs involved with choosing an implementation location.
Storage-based virtualization derives many of its benefits from the performance advantages inherent in modern array controller architectures (including design integration and low latency), the ability to connect easily to switches and/or host adapters, and neutrality in presenting virtualized capabilities to hosts. Also, locating virtualization and other advanced features within the storage system is often the most technically straightforward, easily managed, and host-friendly approach. Most limitations of storage-based virtualization are related to the span of resources it can control: typically a single physical storage subsystem.
When a single array, NAS appliance, or tape library is not large enough, virtualization that can span multiple subsystems may be employed as an alternative to buying a new, larger array. This is the realm of network- and server-based virtualization. Both network- and server-based virtualization can be applied over a greater scale than storage-based virtualization. With these approaches, multiple subsystems (homogeneous, heterogeneous, and multi-vendor) can be aggregated into a single managed pool. Storage from the pool can be presented uniformly to consumers, with individual subsystem capabilities being almost totally masked from the outside world. Data can be migrated among subsystems within the pool, availability can be improved by mirroring among subsystems, and other useful inter-subsystem operations can be implemented, all totally transparently to consumers. Network-based virtualization was introduced in the late 1990s to solve two key problems. First, customer data sets were often much larger than the disk subsystems available at the time. Hence, the concept of beyond-the-box storage (pooling the capacities of multiple arrays to create a mega-array) was very interesting. In addition to being able to store vast data sets on a single managed storage system, one could also think about mirroring within that huge system, and even having it be physically distributed across multiple locations. The second major motivation was data migration: non-disruptively moving data from one array to another when an array is being replaced, or when data must be moved from one location to another, for example. Today's storage systems are significantly larger than they were then, and it is not hard to find frames that will hold 100 TB, and even scale to more than 200 PB. Hence, there is significantly less need today to spread data sets across multiple frames for capacity expansion reasons alone. Performance, availability, or other considerations still exist, however.
This leaves data migration as a key use for network-based virtualization, and it remains one of its greatest strengths. There are relatively few drawbacks to network-based virtualization, but they are important. Switch-based virtualization can add latency to the I/O path, and the latency added by an appliance depends on whether it is in-band or out-of-band, as well as on the design of the software running in the appliance. Recall also that network-based virtualization can be implemented either in a switch or in an appliance. In either case, the virtualization mechanism (pooling mechanism, data placement, metadata, and so on) is unique to the implementing product. Since multiple storage systems are being aggregated, one must be cognizant of the implications of changing virtualization technologies or mechanisms when switches or appliances are changed (the primary concern), and of the compatibility of the underlying subsystems with the physical and virtualized infrastructure (a secondary concern). Put simply: when you replace a switch and its virtualization technology with another switch or appliance and its virtualization technology, what happens to the data in the pool, host presentation, and so on? Whereas virtualization uniqueness within arrays is self-contained and usually innocuous in this regard, the situation changes dramatically when a higher-order form of virtualization is imposed. Server-based storage virtualization capabilities can be integrated tightly with operating systems and are able to span any storage that is visible to the server. Integration opens the possibility of building advanced capabilities into the virtualization software. This could be taken to the point of integrating storage virtualization with server virtualization, which could make it much easier to synchronize and orchestrate the automated provisioning of these resources, easing demand-based provisioning and improving overall resource utilization.
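To make the span of server-based virtualization concrete, the following Python sketch (illustrative, not any particular volume manager) shows the core mapping such software maintains: one large logical volume concatenated from several smaller devices visible to the host, with each logical block address translated to a (device, offset) pair.

```python
# A minimal sketch of server-based volume concatenation: several small
# disks are presented as one large logical volume. All names here are
# illustrative; real volume managers add striping, mirroring, and more.

def build_map(disk_sizes):
    """Return cumulative end-extents for each disk, e.g. [100, 250, 400]."""
    extents, total = [], 0
    for size in disk_sizes:
        total += size
        extents.append(total)
    return extents

def locate(lba, extents):
    """Translate a logical block address to (disk index, local offset)."""
    start = 0
    for i, end in enumerate(extents):
        if lba < end:
            return i, lba - start
        start = end
    raise ValueError("LBA beyond end of volume")

# Three small disks become a single 400-block logical volume.
extents = build_map([100, 150, 150])
assert locate(0, extents) == (0, 0)      # first block of first disk
assert locate(120, extents) == (1, 20)   # spills onto the second disk
assert locate(399, extents) == (2, 149)  # last block of last disk
```

Because the host sees only logical addresses, the same translation layer is also where OS-integrated features such as mirroring or path failover can be slotted in transparently.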
Also, storage virtualized by the server can be managed as part of the server, as opposed to requiring separate management interfaces, which can simplify overall management. The key compromises that server-based storage virtualization makes include:
- If the storage needs to be accessed by another server or client, access must be through the primary server (the one that is doing the virtualization). This introduces latency, places additional workload on the server, may require additional network connections (NICs), and adds the server as a point of failure in the data path.
- Server resources are consumed to accomplish the virtualization. This can reduce the resources available for accommodating production application workloads.
As you can see, regardless of where storage virtualization is implemented, it can offer significant benefits. It is up to the storage or system architect to balance product features with their tradeoffs, in the context of the challenges that need to be solved.
A more complete virtualization perspective
The discussion up to this point has been based on a simple, fairly traditional, infrastructure-centric view of storage. While general concepts are helpful in understanding what can be done by implementing storage virtualization in various places and in various ways, it is increasingly useful to understand virtualization from a business solution perspective. In other words, we can think of virtualization as a vast collection of enabling technologies that are combined into products and solutions that do useful things for IT. From this perspective, the focus needs to shift to thinking about how applications and IT administrators interact with the resources they consume and manage, and how these interactions can be made more efficient. Efficiency is measured in terms of improvements in productivity and agility in responding to changing conditions.
Indeed, this is how StorageWorks views virtualization: it is a growing set of technologies that form foundations and provide mechanisms for delivering customer value, embodied in products and solutions offered by HP and its partners. Now focus on virtualization from a broader, application-centric viewpoint. Production applications, and the business processes they serve, consume resources. These resources can be described in terms of capacity (such as compute cycles, gigabytes of storage, or network bandwidth), performance (storage IOPS, bandwidth, and network and storage latency, for example), availability (uptime, resilience to failure), security, and so on. Business results are delivered based on applications' ability to access the required resources when needed. These resources are simply seen and consumed: their underlying physical components and the management path that presents them to applications are invisible to the applications themselves. Thus, the entire path over which information flows and is processed is really a concerted progression of virtualization steps layered upon core tangible resources. Put another way, resources can be delivered from virtualized physical pools by conduits in the form of virtualized services. Virtualized services are managed pathways that provision (allocate, present, and meter), aggregate, or in other ways ensure that appropriate resources are made available to applications in the most efficient ways. They provide end-to-end, application-relevant linkages between physical resources and the processes that consume them. This is a key to understanding storage virtualization in the greater context of IT and the HP Adaptive Infrastructure.
A different perspective
From an IT perspective, one can view virtualization as having two dimensions: business value and the strategic importance of implementations. Technically, these correspond to the range of resources being pooled, and to the ongoing impact on business results.
This is useful because it helps to rationalize technology with the benefits it delivers. Figure 5 shows a comprehensive view of IT virtualization from HP. At the most basic level, physical resources are pooled. This is called Element Virtualization, which is where inside-the-box, storage-based virtualization fits. EVA, MSA, and other subsystem-inclusive products deliver element virtualization.
Figure 5. Virtualization value chain
At a higher and more interesting level, heterogeneous resources can be unified. Pools of this sort encompass more diverse capabilities, with correspondingly greater business value, than more constrained element pools. Integrated virtualization creates pools that span multiple subsystems. These pools may combine diverse types of resources, such as servers and storage, and different types of storage. The physical resources may even be geographically dispersed. Integrated virtualization necessarily involves resource management software that can provide capabilities beyond basic pooling. Data migration and remote mirroring are two examples. Moving higher, additional automation, integration, and business application-aware software can be added to provide advanced, IT-wide capabilities. The Complete IT Utility delivers virtualization across the entire IT environment. In addition to integrating a broad and disparate range of physical resources, the Complete IT Utility uses virtualization technologies to provide automatic, dynamic, service-oriented resource delivery. More detail can be found in the references listed at the end of this paper. Notice that each successive class of enterprise virtualization provides increasingly greater value to the enterprise. This is why, from an industry R&D perspective, there is more focus and innovation occurring at the higher levels.
Indeed, HP StorageWorks is devoting more effort to technologies that deliver integrated virtualization, and is working with the rest of HP to deliver the Adaptive Infrastructure, which will provide Complete IT Utility virtualization. Each successive level of management that is involved in providing resources to applications can be thought of as an element that provides an increased level of virtualization (that is, broader virtualization of the underlying resources). Each management component ("value-add service") that contributes toward ensuring delivery of data/content/information to applications can be thought of as contributing toward virtualization. This perspective implies that:
- Automated provisioning can be thought of as automatically providing virtualized physical resources to applications. Thus, one could use provisioning software to present, and manage the continued presentation of, storage array resources to applications.
- QoS can be thought of as virtualized resources that deliver data/content/information to business applications with assured service levels. This is accomplished by layering provisioning management software that takes, for example, virtualized physical resources, adds QoS attributes (such as through the addition of metadata), and subsequently provisions the results to applications.
- Automated failover, business continuity, and other high-availability storage management software can be thought of as adding a virtualizing layer whose job is to mask specific kinds of storage failure from applications. In other words, these types of software essentially provide a value-add layer of virtualization (virtualizing multiple arrays, perhaps across multiple sites) so that applications do not notice when a specific physical resource dies.
HP StorageWorks virtualization today
Figure 6 shows a high-level mapping of virtualization delivered by the StorageWorks portfolio.
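The layering idea in the QoS discussion above can be sketched as follows. All attribute and tier names here are invented for illustration; the point is only that each management layer wraps the pooled resource and adds its own metadata before provisioning the result onward.

```python
# A hedged sketch of value-add layering: each management layer takes
# the output of the layer below, attaches metadata, and passes it up.
# Attribute names ("gold", max_latency_ms, etc.) are hypothetical.

def pool_physical(lun_sizes_gb):
    """Element virtualization: aggregate raw LUNs into one pool."""
    return {"capacity_gb": sum(lun_sizes_gb), "layers": ["element-pool"]}

def add_qos(resource, tier, max_latency_ms):
    """QoS layer: annotate the pooled resource with service-level metadata."""
    qos = dict(resource)
    qos.update({"tier": tier, "max_latency_ms": max_latency_ms})
    qos["layers"] = resource["layers"] + ["qos"]
    return qos

def provision(resource, app):
    """Provisioning layer: allocate and present the resource to an application."""
    out = dict(resource)
    out["layers"] = resource["layers"] + [f"provisioned:{app}"]
    return out

svc = provision(add_qos(pool_physical([500, 500]), "gold", 5), "payroll")
assert svc["capacity_gb"] == 1000 and svc["tier"] == "gold"
assert svc["layers"] == ["element-pool", "qos", "provisioned:payroll"]
```

The application at the top sees only the final service object; the progression of layers that produced it is exactly the "concerted progression of virtualization steps" described earlier.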
The figure focuses on the more advanced virtualization functionality offered in StorageWorks systems themselves. HP Storage Essentials (which virtualizes the management of multiple systems), HP StorageWorks File System Extender (which offers HSM functionality), and other HP storage software products are not shown on the chart. Also, while MSA offers advanced RAID functionality, HP realizes that the focus of today's virtualization efforts is on deeper levels that encapsulate more resources and capabilities, and provide greater benefits than older, more conventional approaches. Grid-enabled virtualization products represent integrated virtualization. They offer a hybridization of the capabilities found in both storage- and network-based virtualization. For example:
- The HP StorageWorks EFS Clustered Gateway delivers file system virtualization from a scale-out, multi-node system that accesses SAN-based storage capacity. It includes tools that simplify provisioning, while improving client access to data.
- The HP StorageWorks 6000 Virtual Library System (VLS6000) series virtual library systems ("virtual tape libraries") virtualize the physical resources (control nodes and the storage resources connected to them) into a pool. They then go farther, adding virtualized tape drive and tape library presentation to backup and other applications.
At the network level, intelligent switches provide virtual, changeable connections between servers and storage. Beyond this virtualized "hose" concept, intelligent switches can also run special applications that enable them to provide advanced management capabilities. For example, an application running within an intelligent switch could create a virtual pool of storage from a collection of arrays attached to the switch. In this example, virtualization might incorporate the ability to mirror LUNs across dissimilar arrays, or to migrate data among arrays. Figure 6.
Virtualization is embedded in all of today's HP StorageWorks subsystems. [Figure: products arranged from element virtualization to integrated virtualization by strategic importance, spanning storage, network, software, and servers. Traditional (homogeneous array): EVA (full array), ProLiant Storage Server, Virtual Library System. Heterogeneous storage virtualization: XP10000, XP12000. Advanced, grid-enabled virtualization: HP StorageWorks RISS, HP StorageWorks Scalable File Share, EFS Clustered Gateway. Network-based: SVS200 virtualization appliance, intelligent switches.]
HP StorageWorks use cases
Nearly all HP StorageWorks hardware and software products deliver virtualization today. Figure 7 shows some examples that are in wide use. This section describes them in high-level detail. For more information, refer to the QuickSpecs at http://h18006.www1.hp.com/storage/index.html.
Figure 7. HP StorageWorks storage virtualization is widely deployed today. [Figure: clients connected over a LAN and SAN fabric to virtualized pools, with callouts: EFS Clustered Gateway, consolidate data for vast numbers of clients; EVA, optimize subsystem utilization; HP RISS, scale-out architecture and intelligent data archiving; XP12000, pool heterogeneous storage to simplify management; SVS200, consolidate heterogeneous storage; HP StorageWorks Continuous Access, ensure availability against site outages.]
The EVA family features system-wide virtualization of all disk resources managed by the EVA controllers. Whereas other systems group disk drives into discrete physical RAID groups, and thus have multiple physically distinct RAIDs, EVA uniquely binds RAID across groups of blocks and distributes all resulting RAID groups across all disk drives. EVA RAID uses the same algorithmic data placement patterns as traditional RAID, but applies them more intelligently with the addition of further abstraction, so that a single disk drive can participate in multiple RAID groups.
The result: automatic workload (capacity and performance) distribution to all disks in the array, in one of the easiest arrays in the industry to configure. A secondary result is that data availability in an EVA can be higher than that of an equivalent conventional RAID implementation. The reason is simple. Consider RAID 5. By definition, RAID 5 has redundancy that protects data against an individual disk loss. However, after a disk loss, all data on the RAID is at risk until the failed member is replaced and its data is regenerated from the other disks; if any other disk in the array fails before the RAID has been regenerated, all data on the set is lost. However, because of EVA's virtualization mechanics, it turns out that an EVA RAID 5 (called "vRAID 5") can sustain up to two disk failures, depending on which two disks fail. HP does not recommend that you depend on always being protected against two disk failures, but it is reassuring to know that virtualization makes this possible. EVA virtualization also enables dynamic expansion of the disk pool. Even after the disks have been fully configured into RAID and presented to hosts, when disks are added to the EVA frame, they are automatically incorporated into the pool. The EVA goes further: after the new disks have been incorporated into the pool, the workload is automatically redistributed across all disks in the background, almost totally invisibly to host applications. The ability to easily expand as needed and automatically tune the results makes it much easier for customers to acquire storage capacity as needed, and makes for much more efficient use of storage. Today, while some storage vendors are marketing their network-based virtualization approaches, HP StorageWorks offers similar capabilities in two well-integrated, proven storage systems.
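The dynamic pool expansion just described can be modeled with a short sketch. This is not EVA's actual algorithm, which is proprietary; it simply illustrates the principle that, after a disk is added, blocks migrate in the background until every disk carries a similar share of the workload.

```python
# Illustrative sketch of background rebalancing after adding a disk:
# blocks move from existing disks to the new one until the load is
# roughly even. Real arrays do this incrementally, below host view.

def rebalance(disks, new_disk):
    """Move blocks onto new_disk until all disks hold a similar count."""
    disks = disks + [new_disk]
    target = sum(len(d) for d in disks) // len(disks)
    for d in disks[:-1]:
        while len(d) > target and len(new_disk) < target:
            new_disk.append(d.pop())
    return disks

# Four disks with 90 blocks each; a fifth, empty disk is added.
pool = [list(range(i * 90, (i + 1) * 90)) for i in range(4)]
pool = rebalance(pool, [])
assert len(pool) == 5
assert all(len(d) == 72 for d in pool)  # 360 blocks spread over 5 disks
```

The rebalanced layout is what lets every disk contribute spindles to every workload, which is the source of the performance and expansion benefits described above.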
The XP10000 and XP12000 Disk Arrays can virtualize a number of storage systems from HP and other vendors into a single pool. When you connect any of a variety of arrays from HP, EMC, HDS, or IBM to an XP12000 Disk Array, they are combined into a single high-performance (up to 1.9 million IOPS; 68 GB/sec), high-capacity (up to 32 PB) virtual pool (the XP10000 Disk Array works similarly, but on a somewhat smaller scale). The administrator then manages one XP12000 Disk Array instead of a collection of individual arrays, and business applications receive their data by accessing the XP array. Thus, HP StorageWorks XP family virtualization has attributes of both storage- and network-based virtualization. This can be extremely useful for migrating data from one array to another: just pool the arrays behind an XP array, have the XP mirror (copy) the data from the old array to the new one, and the XP will do the rest, including retiring the old array when the data transfer is complete. You can use variants of this approach to create, for example, a single pool of storage that contains multiple tiers: a very straightforward way to deploy tiered storage without the overhead of multiple management interfaces and other complexities. Interestingly, arrays that have been part of an XP pool can later be removed from the pool and used conventionally, and the data remains intact and usable. This contrasts with typical network-based virtualization approaches, which deposit data on constituent arrays using unique and proprietary schemes, rendering the data on individual elements directly unusable if the element is removed from the pool. Like the EVA, XP arrays incorporate a number of other virtualization technologies. For example, the huge cache in an XP array can be partitioned, and part of it used as a solid-state disk. Yet another example of useful virtualization is the ability to partition the XP array into multiple independent virtual arrays, a capability not found on competitive products.
The need to migrate data among storage systems transparently to business applications is a key reason for virtualizing multiple systems into a single pool. Traditionally, when a storage system was about to be retired or needed to be replaced by a larger system, data was manually moved from the old system to the new. This was usually a tedious, time-consuming, and application-disruptive process that required significant planning to minimize business disruptions. While the XP array family can accomplish this, sometimes a network-based approach can be lower cost. HP created the SVS200 to solve this and similar problems. Employing key technologies from the XP family, the SVS200 is a network-based storage virtualization appliance. The SVS200 offers capabilities not found in some competing network-based products. For example, the SVS200 can mirror and migrate data among heterogeneous arrays, both locally and remotely. The SVS200 can use HP StorageWorks Continuous Access capabilities to mirror data from an EVA at one site to an EVA, or even another vendor's array, at a remote site. Taking this example a step farther, the remote system could then be used as a backup source for a co-located library: effective, zero-backup-window, remote, lights-out backup. This is a very powerful and useful application of HP virtualization technology. This leads to another important set of virtualization solutions from HP StorageWorks. All IT organizations strive to keep the company's data available whenever and wherever it is needed. Virtualization from StorageWorks helps in a variety of ways. Storage-based solutions protect against loss of access due to disk failures (RAID) and controller failures (redundancy and automatic failover). At a higher level, SVS200 and XP inter-array mirroring protect against loss from entire subsystem failures.
And while you do not often think of SANs as virtualized environments, in fact that is what they are: they virtualize the connections between servers and storage, helping to keep data flowing even if a switch or network connection (HBA) fails. But storage virtualization can be extended to ensure that data is available even if a site goes down. HP StorageWorks delivers this with Continuous Access and HP StorageWorks Business Copy software for EVA and XP subsystems. These products create copies of data at one or more remote sites, either synchronously or asynchronously. Coupled with the extended clustering and hot-failover capabilities found in some operating systems and applications, HP virtualization solutions work together to ensure that your production applications and their data are available if there is a problem at a site. The foregoing use cases illustrated interesting and useful ways that HP applies traditional storage virtualization technologies to solve real-world problems. However, StorageWorks continues to evolve virtualization technology and its uses. Its long-term strategy includes a new, totally virtualized storage environment that delivers storage capabilities as a utility service. Fundamentally, this seeks to create a unified infrastructure upon which storage, server, and advanced management software virtualization technologies will be married to form a new, intelligent storage ecosystem. The result will be a service-oriented, scale-out entity that can deliver the functionality of a number of today's disparate storage systems (arrays, NAS, and tape libraries, for example). These will ultimately be provisioned to respond to business application demands. StorageWorks has already begun the journey with the release of a number of grid-enabled storage systems, and by evolving the management functionality of Storage Essentials.
In addition, combining storage and server resource management through Systems Insight Manager provides the beginnings of unified management of very disparate resource pools, a step toward an IT utility. Going forward, these management capabilities will continue to evolve and will ultimately deliver a service-oriented resource utility layered over the types of infrastructures that are common in today's IT environments. Grid-enabled storage systems warrant special attention, in part because they may portend an interesting direction for storage in a more general sense. As some industry observers have noted, servers and storage seem to have an opportunity for some convergence in the future; the increasing appeal of server blades is one catalyst. Among other things, this makes it possible to deploy useful and interesting new functionality into the storage domain. Following are two illustrative examples. The first grid-enabled storage system from HP was the HP StorageWorks Reference Information Storage System (HP RISS). This system has an architecture consisting of multiple smart cells (compute nodes with built-in storage) that are networked together by way of an HP storage application that runs on all smart cells in the system. In addition to binding the nodes together, the software provides additional intelligence that is used for indexing the data stored in the cell, searching cell contents, and coordinating with other cells. The system scales for capacity and performance by adding smart cells, but with an interesting twist. Storage system performance is usually measured in terms of the number of requests serviced (IOPS) or the amount of data moved (GB/second). HP RISS, though, is not a traditional storage system. It is a data archiving system, and as such its performance metrics must also reflect how quickly it can find any piece of data, and how well it maintains that locate-and-deliver performance as the data store grows.
This is where virtualization, coupled with the parallelism of grid-enabled applications, shines. The built-in HP RISS software actually helps users find their data quickly, and its intervention in the search process is invisible to users. When subjected to a user query, HP RISS search/retrieval functions substitute for those in a business application. All HP RISS smart cells search their contents simultaneously using their built-in indices. Because of this parallelism, the data is located very quickly, and the smart cell search results are aggregated quickly and efficiently, delivering very fast presentation of results. More interesting: because of the parallelism, HP RISS performance remains practically unchanged regardless of how much capacity it contains. Putting server and storage architectures together and unifying them with virtualization technologies is something HP is uniquely driving into the marketplace. The HP StorageWorks EFS Clustered Gateway is another grid-enabled product. Like the HP RISS, the EFS Clustered Gateway is composed of a number of processing nodes, storage capacity obtained from arrays in a SAN (remember: HP pioneered NAS/SAN convergence years ago, a useful application of virtualization), and a storage application that runs on each node. In this case the application provides a clustered filesystem that can be accessed by networked clients connecting to any node, and whose data is collectively stored in a SAN. Here, in addition to utilizing virtualized storage for capacity, the system virtualizes multiple access, file system delivery, and processing points among the nodes. The results:
- Very high performance for CIFS and NFS access.
- Easy, reliable access: the aggregate of nodes, storage, applications, and filesystems is virtualized into a single unit from a client-access perspective.
- Simplified administration: the EFS Clustered Gateway presents a single system for administration tasks.
- Very high, predictable reliability: as long as any node and its storage are available, users can access their files. The more nodes in the system, the greater the reliability. Also, if a node fails, performance decreases fairly predictably, by roughly 1/N for a system of N nodes. SAN reliability is handled as for any other SAN installation.
This summarizes some of the interesting and unique ways that StorageWorks delivers the benefits of virtualization. Virtualization technologies power all StorageWorks systems: tape libraries, virtual library systems, NAS appliances, and more. The focus of HP is to solve customers' business problems, leveraging appropriate technologies and extending boundaries wherever possible. Many StorageWorks capabilities are based on practical applications of virtualization technologies.
Storage virtualization: more than just hardware
Administrators need tools that aggregate complex environments into simpler, more understandable entities. They also need to be able to do their part to ensure that data is available only to authorized users, and that it can be easily accessed once it is made available. Also, given financial and regulatory realities, it would be useful if data could automatically be stored on the most cost-appropriate assets, and if the storage environment were sufficiently secure to satisfy laws and company auditors. Delivering intelligent hardware is a good start, but StorageWorks offers much more complete capabilities. To address these requirements and more, StorageWorks offers a broad and evolving range of storage resource management, information lifecycle management, and data protection software. Figure 8 offers a representative view of the breadth of the storage management capabilities from HP. To the left, Storage Essentials provides resource management for all SAN-attached storage assets.
To the right, HP ILM solutions manage and use the SAN and other storage hardware for data- and application-focused purposes. HP solutions can also migrate data among tiers of storage, and protect data according to application-relevant criteria.
Figure 8. StorageWorks software: virtualization applied to hardware and data management.
It is useful to understand how these relate to virtualization. Storage Essentials is a single point of contact for provisioning and monitoring the storage environment, presenting a unified portal to network storage devices. The console can be customized to ensure that administrators can view and access only the resources they are allowed to view: they see a filtered abstraction of the entire environment. While Storage Essentials delivers little virtualization functionality today, the situation will change over time as provisioning tasks become increasingly automated. As its automated provisioning, monitoring, and other management functions reach more deeply into the physical hardware, Storage Essentials will provide managers with a highly virtualized interface that encompasses many features of the SAN's assets; administrators will increasingly deal with provisioning abstractions (called "policies") and work with virtualized resources that will be allocated as storage services, as business applications see them. In other words, Storage Essentials is evolving along a path of allowing administrators to manage storage assets (hardware and software) in the context in which applications consume them: virtualized management to deliver virtualized resources. The storage-related aspects of Information Lifecycle Management (ILM) are fundamentally application-focused virtualization. These products and solutions incorporate application- or data-type-specific intelligence and meld it with user-defined policies to govern the placement of data throughout the data's stored life.
A generalized schema includes:

• Administrative entry of policies or other criteria that describe data types, users, and lifecycle access requirements into a management application. These attributes may include availability, performance, protection level, security factors, and so on.

• A pool of storage devices that constitutes a centralized data repository. This may include several storage tiers unified through a SAN or other network. The devices may include disk arrays and tape libraries. They may also include systems with embedded advanced intelligence, like the HP StorageWorks RISS. In addition, data protection software (backup, replication, and related functionality) may be added.

• Management application control of data placement within the managed pool of devices. Since access patterns, authorized users, and other factors may change throughout the lifecycle, ILM includes the ability to transparently migrate data among devices in the pool, and to dynamically change storage attributes associated with the data, as dictated by the ILM policies.

• Business application- or data-type front-ends that provide the system with application contexts by which information can be properly identified and stored. This application integration software provides an intelligent lexicon between business applications and ILM policies, enabling appropriate classification and placement of data. Thus, a database front-end might identify database records; a file front-end might identify Microsoft Office documents; a graphics front-end might identify JPEG and TIFF images; a medical front-end might identify DICOM images; and so on.

• Optionally, incoming data may also be transformed so as to ensure usability into the distant future. For example, files bearing proprietary formats may be converted to more standard formats like ASCII or JPEG as they are transmitted to the storage repository.
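The schema above can be sketched as a small policy engine: user-defined lifecycle policies, keyed by the data type reported by an application front-end, decide which tier holds the data at each point in its stored life. The tier names, policy fields, and age thresholds are hypothetical, for illustration only, not an HP product interface:

```python
from dataclasses import dataclass

# Illustrative storage tiers in the managed pool, fastest to cheapest.
TIERS = ["high-performance-array", "midrange-array", "tape-library"]

@dataclass
class LifecyclePolicy:
    """User-entered lifecycle criteria for one data type."""
    data_type: str            # supplied by an application front-end
    days_before_midrange: int
    days_before_archive: int

def place(policy: LifecyclePolicy, age_days: int) -> str:
    """Choose a tier from the data's age and its lifecycle policy.
    A real ILM system would also weigh access patterns, authorized
    users, protection level, and security factors."""
    if age_days >= policy.days_before_archive:
        return TIERS[2]
    if age_days >= policy.days_before_midrange:
        return TIERS[1]
    return TIERS[0]

office_docs = LifecyclePolicy("office-document", 30, 365)
print(place(office_docs, 5))    # high-performance-array
print(place(office_docs, 400))  # tape-library
```

Re-evaluating `place` as data ages is what drives the transparent migration the schema describes: the application's view of the data never changes, only its physical location in the pool.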
Clearly the entire process is geared toward fully virtualizing data placement within a virtualized storage pool, with the added intention of moving data through the pool according to descriptive, user-set policies. Intelligent front-ends aid the storage abstraction process and provide a basis for properly tracking (and perhaps rapidly retrieving) data regardless of where it is physically stored. Thus, ILM represents a complex, multi-layered hardware and software virtualization solution. So far, the ILM discussion has focused on storage. For completeness, it should be noted that ILM solutions must also incorporate ways to input data to the ILM system. Input could be direct, by way of keyboard, camera, x-ray machine, or other device. It could also come through a digitizing device like a scanner, coupled with appropriate conversion software. ILM also necessarily includes human aspects to determine lifecycle storage requirements and other elements. It should now be clear that virtualization plays a fundamental role in delivering most HP storage functionality. As virtualization has become more deeply embedded throughout the product line, HP has focused product descriptions on the business benefits delivered by its solutions, rather than on virtualization foundations and technologies. With this background, it is now time to revisit the virtualization model presented earlier.

HP storage virtualization: The big picture

Today, HP clearly has storage subsystems that offer a broad range of storage-based virtualization capabilities. It delivers network-based virtualization as well. At the server level, host-based mirroring and other capabilities exist. From the foregoing discussion, though, it is clear that HP has a more expansive view of virtualization, and it believes that this will benefit customers by providing more useful solutions into the future.
To understand this, the HP view of the virtualization stack, which is much more comprehensive than what was described in Figure 3, must be introduced. StorageWorks is rationally extending storage virtualization in powerful ways. The HP perspective enriches the virtualization stack to encompass dynamic provisioning, QoS-oriented functionality (as might be needed by ILM and other business-derived needs), and so on, consistent with the discussion in the previous section. This affords HP a more complete approach, and a greater opportunity to deliver complete, integrated solutions, than most storage vendors are able to realize, and it forms the basis for the HP virtualization strategy.

HP StorageWorks Extensible Virtualization Stack

First, remember that virtualization is the abstraction of storage that separates the host view from the storage system implementation (source: SNIA storage virtualization tutorial). It makes physical paths, device characteristics, physical data location, and other underlying aspects invisible to entities that access and manage storage. Furthermore, it is dynamic. HP, the Storage Networking Industry Association (SNIA), and others recognize that virtualization involves more than directly virtualizing storage elements. It also includes significant device management, presentation, and application integration elements. With this in mind, Figure 9 presents the HP StorageWorks Extensible Virtualization Stack, an inclusive and flexible model that integrates virtualization across all storage and management elements in the environment.

Figure 9. HP StorageWorks Extensible Virtualization Stack.

In addition to traditional hardware-focused storage virtualization, the StorageWorks Grid strategy will deliver an environment that encompasses the capabilities of both storage- and network-based virtualization, as well as embedding advanced management virtualization features.
The stack recognizes that many hardware and software elements can contain virtualization technology and/or deliver virtualization capabilities. Furthermore, these elements can be combined in logical ways, thought of as layers, that contribute to efficient data delivery and asset utilization and amplify the power of a total virtualization solution. For example, network-based virtualization is complementary to storage-based virtualization: each has unique applications, and there are times when it is appropriate to use them together. Note that the layers are logical, and are not necessarily in the data path. This is consistent with the perspective in the previous section. Being extensible, the model allows for future technology developments. For example, while specific categories of management are shown, the model allows additional categories, such as security, model-based automation, and other types of management, to be added when appropriate. Indeed, this is part of the StorageWorks strategy. Also, new storage- or network-based categories can be added. The stack provides a useful way to visualize how different virtualization technologies might be complementary or redundant, for example.

[Figure 9 layer labels: Servers/Applications/Users; Server-Based Virtualization; Network-Based Virtualization; Storage-Based Virtualization; Management Application Interfaces (including third-party components); Provisioning; Policies; ILM; Physical Resources (Disks, Tapes); StorageWorks Utility; Adaptive Enterprise; Network]