Considerations When Purchasing Storage Solutions

Category: Storage management

Date: December 2013

Company: Fujitsu

As customers get better at understanding the total cost structure of their data storage operations, they realise the need for more sustainable storage economics. The operational expenditures, in both hard cash and labour overhead, associated with maintaining some legacy storage systems make the case for a forklift upgrade that recovers the investment in 12-18 months.


Download this white paper to acquire an understanding of the primary data storage challenges faced by midsize and large enterprises and the factors to consider when selecting storage systems.

IDC Analyze the Future
Sponsored by: Fujitsu
Analyst: Nick Sundby
December 2013

Buyers of midrange ($25K-$250K) and high-end ($250K+ average selling price) storage systems have remarkably similar priorities and objectives, despite the difference in their staff resources and price expectations. Most are conservative and risk-averse - after all, their businesses depend on the integrity and availability of the critical data these systems store. However, the latest economic stress has put legacy systems to the test of cost analysis as customers scrutinise IT expenditures ever more closely. An increasing number of midsize and large enterprise users find that the operating costs of some legacy systems from incumbent vendors are hard to justify.

In the current environment, IT managers are required to demonstrate value for the money they spend and to cut costs wherever possible. Legacy storage is a primary area of concern, with high bills paid for recurring software licensing, extended technical support contracts, specialist labour overhead, and excessive rack space and energy usage. IDC's end-user surveys have found that reducing storage-related costs is a high priority for both large and midsize companies.

Pressed by data growth, growing server virtualisation and the business requirement for better-performing applications, IDC believes that IT decision makers have to look for solutions that offer not just a technically but also an economically sustainable path into the next five years. This includes less labour attached to infrastructure-level tasks; a customer- and growth-friendly support and software licensing structure; and a power-efficient, high-density design that saves energy and floor space in the datacentre.

This IDC White Paper focuses on the primary data storage challenges faced by midsize and large enterprises and the factors to consider when selecting storage systems.
An overview of the trends for storage is offered in this paper, alongside the characteristics customers seek and will need going forward. These requirements are compared against Fujitsu's ETERNUS DX portfolio of storage systems.

As customers get better at understanding the total cost structure of their data storage operations, they realise the need for more sustainable storage economics. Operational expenditures, in both hard cash and labour overhead, associated with maintaining some legacy storage systems make the case for a forklift upgrade that recovers the investment in 12-18 months, assuming more customer-friendly licensing policies, lower support costs and more efficient systems management. When business value through better quality of service and more agile provisioning is also considered, the payback period can be even shorter than 12 months, IDC's studies show.

IDC's end-user research has confirmed over the years that reducing storage-related costs has become a major concern. Research also shows that these customers - roughly a third of all companies, and over 40% of large enterprises - are not willing to make compromises as a result, still asking for capacity, performance and advanced features to the same extent as their peers that did not name cost among their most pressing issues.

[Source: IDC, 2013]

The pain points for midsize and larger companies are clearly seen. When IT managers are asked for their top priorities regarding future storage spending, three of the top four priorities for companies with 500-999 employees and 1,000+ employees relate to the struggle to manage data volumes that may double in size every 18 to 24 months. For larger companies, especially those burdened with complex and fragmented legacy systems, the top priority is to find a way to reduce storage-related costs.
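The scale of that challenge is easy to quantify: doubling every 18 to 24 months is compound growth of roughly 40-60% per year. A minimal sketch of the arithmetic (starting capacity and horizon are illustrative figures, not from the paper):

```python
def capacity_after(start_tb: float, doubling_months: float, horizon_months: float) -> float:
    """Project capacity under compound growth with a fixed doubling period."""
    return start_tb * 2 ** (horizon_months / doubling_months)

# 100 TB today, doubling every 24 months, over a five-year planning horizon:
print(round(capacity_after(100, 24, 60)))   # 566 TB
# At the faster end of the range, doubling every 18 months:
print(round(capacity_after(100, 18, 60)))   # 1008 TB
```

Even the slower growth rate more than quintuples the estate within a typical five-year system life cycle, which is why capacity-linked recurring costs dominate the discussion below.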
As they plan for the future, many midsize companies are increasingly following larger enterprises in the move towards consolidated, highly virtualised and dynamic architectures, to boost capability while driving down the cost of doing business. Storage must play its part in facilitating this strategic goal and, according to IDC research, should support the following objectives:

- Support the business objectives. During the economic downturn, funding was tight and companies were focused on minimising capital outlays. As the economic situation in some countries has improved, priorities have shifted markedly as companies look for ways to deploy new and better services for customers in a bid for market share.
- Enable business continuity and disaster recovery. IDC research shows that improving DR capability is one of the top drivers for storage investment. It is essential that storage systems have rapid recovery times and powerful replication capabilities, including synchronous replication, which was previously the preserve of large-scale enterprise systems.
- More automation. As much as possible, storage must be self-optimising, self-managing and self-healing.
- Simpler data migration. Midsize companies will not accept the significant migration costs experienced by enterprise companies to replace legacy storage or bring new capacity online. It is essential that migration can be conducted in a rapid and non-disruptive manner.
- Extend the benefit of virtualisation into the storage realm. Storage virtualisation transforms physical storage assets into a flexible pool that can be provisioned and reallocated as changing workload demands dictate.
- Extend the life of legacy storage arrays. It is no longer acceptable for a company to discard two- or three-year-old arrays simply because they lack basic functions such as thin provisioning.
- Allow operation and management by non-storage specialists.
An intuitive GUI with configuration wizards and readily accessible support resources is now of greater value. A single GUI for all disk storage is the ideal.

- Storage should be a versatile single platform, not a group of specialised silos. Dedicated storage for special workloads adds complexity, risk and cost. Storage must be able to adapt to unknown future demands.
- A new level of price/performance and reduced whole-life costs. Storage must respond to the need for lower capital costs, and must also deliver significant operational cost savings throughout its useful life.

The competition for storage resources has become intense: large-scale server virtualisation, more virtual desktops, more transactions and more analytics are driving ever-higher requirements for I/O and capacity. Business-critical applications need a rapid response, whereas others can easily tolerate lower performance. Storage admins can establish a range of performance levels through the use of flash memory, RAID levels, short-stroking and other means. However, if several applications are sharing data from a single volume, the I/O requests are serviced first-come-first-served, regardless of the application's priority.

The solution for some companies is to introduce flash arrays or flash caches in order to accelerate critical workloads. Although this may address the need for performance, it is potentially a step backwards to the days of fragmented point solutions that increase complexity and management workload. If this is the best solution available, then the storage vendors have arguably fallen behind the needs of forward-thinking IT departments. Storage users need higher levels of management consistency, automation, availability and flexibility. Converged infrastructure has moved rapidly into the mainstream as users typically benefit from faster deployment, simplified management and lower cost of ownership.
Storage could arguably benefit from similar thinking, to deliver benefits through standardised hardware platforms, scalable architectures and common management tools. Dedicated storage silos restrict flexibility and would be replaced by a single platform serving all workloads and data types. The sought-after features that should define a modern storage platform, in IDC's view, are:

- Ease of use. In direct connection to operational efficiency, enterprises are increasingly aware of the disproportionately high labour overhead that storage management takes up with legacy systems. High-end systems are no longer exempt from cost scrutiny. Most incumbent high-end storage systems require the attendance of highly trained storage specialists, which is undesirable for an increasing number of businesses. Ease of management is a must regardless of size and workload. This means not only better-designed, more intuitive management interfaces but also easy-to-set-up, policy-driven automation of day-to-day tasks, including monitoring for hot spots and autonomous rebalancing of the system. Every contender has to demonstrate the labour efficiency gains of its storage platform if it wants to stay relevant and deliver better operational economics. Ease of use was demanded by nearly six out of every ten customers. When combined with those who asked for storage automation, the need for less storage administration becomes an almost ubiquitous theme in IDC's enterprise storage end-user survey.
- Non-disruptive scalability. Large-scale server virtualisation puts networked storage to the test. Due to the highly dynamic nature of many of these environments, where the number of VMs can grow quickly, storage systems can struggle to keep up with I/O and capacity demands. At the same time, higher concentrations of workloads on fewer storage systems require very high availability, avoiding any unnecessary planned downtime.
Systems that can scale in both performance and capacity in a seamless, non-disruptive way can provide customers with the flexibility they need, allowing storage resources to be bought when they are needed - and avoiding costly oversizing. Furthermore, IDC sees value in the capability of scaling performance and capacity independently of each other. This decoupling means that the customer has the choice of adding capacity behind existing controllers, adding controllers to existing disk shelves, or both at the same time, without any disruption to operations. This allows customers to match system specifications closely to their needs.

- Integrated data protection. Data protection on primary systems has been a repeated priority for storage buyers over recent years. Integrated snapshots that are highly capacity-efficient are a must, as is the conjoint operation of snapshots and high-speed replication within or across datacentres for business continuity. Enterprise-grade storage systems should provide users with flexible options that meet their specific needs and are quick to set up and test. Local or remote snapshots, mirroring or any replication must have no impact on the quality of service of production data. Software licensing policies should allow and encourage the use of integrated data backup and disaster recovery features, rather than being complex and cost-prohibitive.
- Space efficiency. IDC found that while power efficiency of storage systems is of less importance for the overwhelming majority of customers compared with more pressing issues, floor space is a significant concern to many. Storage systems, scalable high-end arrays in particular, tend to take up multiple full cabinets as they grow in capacity and in the number of controllers.
There is a hard space limit in many businesses, mostly those with datacentres in metropolitan locations, and co-location customers should not underestimate the cumulative cost of floor space over the life cycle of the system, which is typically more than five years for a high-end installation. Density resulting from different physical designs can alone make an impact of tens of thousands of dollars on a five-year TCO calculation.

- I/O performance. Enterprise storage systems run diverse and, for many users, dynamically changing workloads. The responsiveness and throughput of the storage system is a major factor in application- and VM-level performance. Lower latency and higher IOPS not only lead to a much better overall experience for the users of the infrastructure, but also increase the return on investment in servers, virtualisation and enterprise software. Higher consolidation ratios, more and bigger VMs, and faster-running business applications highlight the major benefits of a predictably high-performing storage system. High I/O speed is achieved by a performance-optimised, low-latency controller design, intelligent and scalable caching, and a balanced back-end architecture that avoids bottlenecks.

Beyond these requirements, buyers of high-end systems naturally expect a highly resilient design that withstands any individual or, in some cases, multiple component failure, as well as power loss, without data corruption. Customers may also include lean physical design in their requirements, from the angle of non-disruptive, quick serviceability of the production system, including basic tasks like upgrading controllers, adding or removing disks and disk shelves, and internal cabling. Vendors should be expected to demonstrate their reliability, availability and field serviceability capabilities to customers by providing access to demo units.
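The floor-space point above is simple arithmetic. A sketch with entirely hypothetical rack counts, footprints and co-location rates shows how density alone reaches the "tens of thousands of dollars" range over five years:

```python
def floorspace_tco(racks: int, sqm_per_rack: float, cost_per_sqm_year: float, years: int = 5) -> float:
    """Cumulative floor-space cost over the system's life cycle (illustrative model)."""
    return racks * sqm_per_rack * cost_per_sqm_year * years

# Hypothetical figures: a dense array occupying 2 racks versus a legacy system in 5,
# each rack taking 1.2 square metres, at $2,500 per square metre per year.
dense = floorspace_tco(2, 1.2, 2500)    # $30,000 over five years
legacy = floorspace_tco(5, 1.2, 2500)   # $75,000 over five years
print(legacy - dense)                   # density alone saves $45,000
```

The absolute numbers matter less than the structure: floor-space cost scales linearly with footprint and accrues every year of the life cycle, so it belongs in any five-year TCO comparison.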
In relation to RAS features, high-end systems running business-critical applications are also expected to maintain acceptable service levels during failure or in-service mode. This includes disk failures and controller component or full-controller failures, in which case all data should remain accessible and the I/O load should be rebalanced across the available resources. In large-scale deployments this architectural attribute is a must, as the frequency of service events increases with the number of components in the system.

Fujitsu is a major vendor of high-performance storage solutions, and its engineering and technical excellence is long established and widely acknowledged. The company's engineering DNA, inherited from its 78-year development history, inclines it to focus primarily on product quality and reliability rather than the elaborate marketing or promotional activities that may be seen from other vendors. Fujitsu's approach to the storage business is different from many of its US-based competitors, and is influenced by the high value its corporate culture places on engineering excellence. For example, Fujitsu chose to internally develop a coherent, end-to-end portfolio of block and, recently added, file storage arrays, rather than acquire and integrate a disparate set of external technologies. An example is the Fujitsu CS8000, a scale-out backup and archive platform that allows both mainframe and open systems data to be protected by a consistent set of automated management processes.

Fujitsu's strategy is based on meeting the following five objectives:

- Provide a general-purpose storage platform that can simultaneously support a wide range of virtualised workloads, from low to high I/O intensity, block and file, with defined service levels for each workload. The need for specialised storage point solutions is thereby reduced or eliminated.
- Empower storage admins with quality of service (QoS) tools so that array response time can be defined and managed according to the priority of the application.
- Provide a broad portfolio of storage arrays, from economy to high-end enterprise, that can be managed by a single management tool for both block and file data types. Operator overhead is minimised, efficiency is increased and total ownership costs are reduced.
- Work closely with high-quality business/resell partners to deliver a strong range of storage services. Align pre- and post-sale support to meet the customer's need for price/performance, flexibility and resilience.
- Provide a continuous flow of performance improvements and functional enhancements that customers can use with their existing Fujitsu storage wherever possible. The purpose is to extend the product lifespan and to reward and encourage long-term commitment to Fujitsu storage.

The result of this consistent, long-term vision and strategy is the ETERNUS DX range of storage systems. Many organisations are looking to reduce complexity, consolidate resources and minimise operational and infrastructure costs without compromising performance or flexibility. Fujitsu's ETERNUS DX S3 enhancements address this challenge from two angles:

- A single family of storage arrays, from economy to high end, all of which can be managed by a single software utility. Since the arrays support both file and block data, and can be provisioned according to capacity or response-time requirements, the ETERNUS DX family is now a coherent and versatile storage family for supporting all current and future workloads. This differs from other major vendors, whose storage portfolios often contain disparate products with dedicated management tools. IDC believes that Fujitsu's approach will drive lower operational costs and higher return on assets, due to lower administrative overhead and greater flexibility to accommodate future storage requirements.
- A new scalable performance architecture for the entry and midrange units. Even the best quality-of-service management cannot help if hardware limitations are overwhelming. Storage performance and system capacity utilisation are closely interlinked and have a direct influence on consolidation capabilities. The density of virtual servers or virtual desktops can be increased by a factor of five within the same model class. From the technical perspective this is enabled by stronger processors with more cores, but also by enhancements to the ETERNUS DX operating system that utilise multicore, multithreaded processors efficiently through intelligent load distribution. Larger caches, SSDs as additional cache, faster (SAS3) interfaces to the hard disks and SSDs, and doubled bus performance also contribute to an overall fivefold increase in IOPS performance and a threefold increase in bandwidth.

[Source: Fujitsu, 2013]

In November 2013, Fujitsu introduced its refreshed scalable entry-level and midrange storage line with a refined architecture. The new ETERNUS DX100 S3, DX200 S3, DX500 S3 and DX600 S3 unified storage systems are optimised for low latency and high throughput to meet the needs of the most demanding enterprise workloads, ranging from highly random online transaction processing to long sequential reads and writes. Fundamental to the system is Fujitsu's real-time operating system, which guarantees predictable low-latency processing of each I/O, as well as resource efficiency and horizontal scalability. The operating system is designed to have a compact memory footprint, to retain as much DRAM as possible for data caching.

The ETERNUS DX S3 family provides both block- and file-based access, through its support for Fibre Channel, iSCSI and CIFS/NFS protocols. The user is no longer required to operate and maintain dedicated file and block storage platforms, but can consolidate them into a single pool.
The benefits of this approach include:

- Reduced management burden, as both file and block capacity can be provisioned, protected and managed using a common set of processes and tools.
- Simplified capacity planning. File-access data typically grows at a faster rate than block-access data. If both are held on a single platform, overall data growth rates are readily apparent and planning for future needs is simplified. As a result, the storage administrator can often run the array at a higher utilisation level with less chance of hitting a capacity shortage. This improves asset utilisation and defers capacity upgrades.
- More flexibility to meet future demands. A key benefit of a unified platform is a greater ability to accommodate future changes in storage requirements, such as new applications, company mergers or other unforeseen events. A single storage resource pool can be allocated as required, in terms of both capacity and I/O performance level.

Some users have a legitimate fear that consolidating data onto a single pool could create performance bottlenecks. IDC believes that the Fujitsu implementation is unlikely to cause such problems, due to its ability to specify maximum acceptable response times for key workloads, and the high overall I/O performance that each system is capable of. Unified access is a key part of Fujitsu's storage vision of a general-purpose resource that can adapt to diverse and rapidly changing workload requirements with the minimum of operational overhead and cost.

While other vendors' arrays enable storage capacity to be allocated and controlled, the ETERNUS DX S3 family goes further, introducing the concept of business-centric storage. The challenge for storage administrators is dealing with a diverse range of workloads with widely different I/O, capacity and availability requirements. The historical solution often meant deploying multiple dedicated silos of storage, each supporting a small group of applications.
Today's highly virtualised environments run hundreds or thousands of workloads, each competing for resources from a consolidated pool. The ETERNUS DX scalable entry, midrange and high-end models allow this potentially chaotic situation to be managed in two ways:

- To provide an overall view of storage usage, Fujitsu ETERNUS SF software periodically polls all ETERNUS DX arrays in the organisation, measuring and reporting a range of performance and utilisation metrics. This provides the administrator with greater insight into storage usage patterns, so that assets are used more efficiently.
- To manage storage service levels, the administrator can define the priority and response time required for any individual workload. The system then manages the storage and I/O resources so that the SLA performance for every workload is always maintained.

Fujitsu's claims regarding performance and cost effectiveness are validated by customer experiences and independent tests. The midrange ETERNUS DX600 S3 currently achieves IOPS figures in the mid-to-high six digits. Enterprise users requiring IOPS performance in the millions will find it in the ETERNUS DX8700 S2, a block storage system that can handle highly mixed, random I/O workloads with ease while being resilient enough to serve mission-critical operations. Customers highlighted its high-density, energy-efficient and easy-to-service design as differentiated against the competition, while the simplified flat-rate licensing allows their businesses to operate flexibly, adding capacity when needed and without hurdles. For many incumbent storage arrays, upgrades are burdened by often complex and almost always costly software licensing policies.
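The cost dynamics behind that licensing burden are easy to sketch. With entirely hypothetical fees and growth figures, capacity-based licensing compounds with data growth, while a perpetual per-controller model stays flat until new controllers are added:

```python
def capacity_based(start_tb, added_tb_per_year, per_tb_fee, years=5):
    """Capacity-based licensing: every terabyte, initial or added, carries a fee."""
    return (start_tb + added_tb_per_year * years) * per_tb_fee

def controller_based(controllers, per_controller_fee):
    """Perpetual per-controller licensing: fees are independent of capacity growth."""
    return controllers * per_controller_fee

# Hypothetical: 100 TB growing by 50 TB/year at a notional $500/TB licence fee,
# versus two controllers licensed once at a notional $40,000 each.
print(capacity_based(100, 50, 500))   # 175000
print(controller_based(2, 40000))     # 80000
```

The specific dollar amounts are invented for illustration; the structural point is that under the capacity-based model, licence fees track the data-growth curve, while under the per-controller model they do not.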
New capacity is not just an expenditure on drives, enclosures and possibly other hardware components and services fees; it is also subject to software licensing for the base operating system as well as for value-added features like thin provisioning, snapshots and replication. Fujitsu, however, has dramatically simplified this process by requiring only a perpetual software license for each of the controllers in the system. For the customer, this means that licensing costs are of no concern when new capacity is added, which has a significant impact on a three- to five-year TCO calculation assuming significant data growth in the enterprise. Also, Fujitsu's licensing policy is highly modular and, based on the needs of the customer, it will only charge for features that are in use - in some cases, this means only the base operating system functionality when the array sits behind a storage virtualisation platform such as FalconStor NSS or DataCore SANsymphony. New licences and fees only come with new controllers - a greatly simplified and economised approach to high-end storage.

The complete ETERNUS DX family, both S2 and S3 generations, supports all the data management features expected of today's enterprise storage systems, including automated dynamic tiering, quality-of-service policy settings, efficient snapshots, high-speed remote replication and disaster recovery functions, thin provisioning, encryption (controller-based and with self-encrypting drives) and access control for sensitive volumes. Fujitsu also provides storage management tools that explore, monitor and manage storage resources through an easy-to-understand interface. The single biggest advantage of Fujitsu's storage, however, lies with its single architecture and management platform across the entire ETERNUS DX portfolio - a unique feature in the industry, IDC believes.
This means that the same tools and practices can be applied to all Fujitsu ETERNUS DX systems in the infrastructure, whether an entry-level, midrange or high-end configuration. As such, the overhead and complexity associated with managing multiple tiers of storage arrays in the datacentre can be greatly reduced. Another advantage of a single architecture across all deployment types is the ease of multiway replication between different tiers of storage, offering a simple and cost-effective route to data protection and disaster survivability.

The ETERNUS DX family is engineered and built to Japanese standards, backed up by Fujitsu's broad solutions and services capability. The company has best practices for storage infrastructure virtualisation that allow for live dynamic migration of data across systems as well as reducing storage management overhead even further. Fujitsu is also an early proponent of converged infrastructure deployments, building on its blade server technology, storage systems and technology alliances. Called Dynamic Infrastructures, Fujitsu's solution orchestrates all infrastructure resources throughout the stack, including virtual and physical servers, Ethernet and Fibre Channel networking, and storage systems - with the capability of mapping an architectural blueprint to actual infrastructure resources in an automated fashion.

[Source: IDC, 2013]

IDC expects spending to cool off due to the cyclical nature of high-end purchases, ongoing concerns about the sovereign debt crisis and continued austerity, but we project a rebound starting as early as the end of 2013.
Ongoing enterprise storage consolidation and the movement towards more service- and business-oriented IT will drive high-end and midrange solutions as more efficient next-generation architectures come online in 2014-2015, triggering another cycle of investments. Enterprises and service providers will keep looking for ever more cost-efficient platforms that allow them to reduce their storage-related costs. The acquisition cost of a system is only part of the total cost structure. Recurring support and software costs, as well as professional services and labour overhead associated with the deployment and management of the systems, are all major cost drivers over the life cycle of storage arrays, in many cases dwarfing the cost of buying the system.

Customers are increasingly aware of alternative solutions and the economically unsustainable nature of their legacy operations. This opens the door for Fujitsu to challenge the incumbent high-end players that had much less competition in the past. Customers found that Fujitsu's enterprise array, the ETERNUS DX8700 S2, is well-engineered, resilient, high-performing and much more economical than the legacy systems they replaced from a market-leading incumbent.

Fujitsu's single biggest challenge, in IDC's view, is its lack of global awareness among prospective customers. Having started to sell ETERNUS DX globally only four years ago, Fujitsu is not yet perceived as a key vendor of storage solutions in several geographies. Fujitsu needs to make itself much more visible to buyers of enterprise storage to gain mindshare. This should include not only marketing campaigns and user events, but also a broader and more engaged network of resellers and system integrators. Given the disruptive economics and configuration flexibility of the ETERNUS DX family, Fujitsu is very well positioned to capture market share in the midrange and high-end storage systems space.
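The claim that recurring costs can dwarf the purchase price can be made concrete with a toy whole-life cost model. Every figure below is invented for illustration, not drawn from IDC data:

```python
# Toy five-year whole-life cost model (all figures hypothetical).
acquisition = 250_000                        # one-off array purchase price
support_per_year = 45_000                    # recurring support and software fees
admin_hours_per_week, hourly_rate = 10, 80   # specialist labour overhead
labour_per_year = admin_hours_per_week * 52 * hourly_rate

opex = 5 * (support_per_year + labour_per_year)
print(opex, opex > acquisition)   # 433000 True: five years of opex exceeds the purchase price
```

Under these assumptions, operating costs alone come to roughly 1.7 times the acquisition cost over five years, which is why the paper keeps pulling the discussion back from purchase price to total cost of ownership.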
Fujitsu's ETERNUS DX family has been architected to meet the real challenges that storage users are facing. In IDC's view, the ability to tailor storage QoS to suit the needs of critical applications is a significant step forward and addresses a challenge that is decades old. Those who understand the Japanese approach to quality, reliability and whole-life ownership costs will see the results of careful thinking in this portfolio. For example, as users expand their use of ETERNUS DX arrays, they should expect to see the total operational overhead reduce rather than grow. While other vendors struggle to integrate in-house and acquired storage platforms into their portfolios, Fujitsu offers a coherent end-to-end family with a single management framework.

Fujitsu's purity of vision has clear strategic benefits for forward-thinking customers. ETERNUS DX can be a unified, versatile storage platform capable of supporting virtually all workload demands, now and in the future. With a lower day-to-day management overhead, IT staff can focus on new initiatives that bring a competitive edge and take the business forward. The system is not just highly scalable to cope with future demand; its flexibility in sizing I/O processing speed, caching, internal bandwidth and capacity independently of the other attributes makes it a highly versatile platform that can match changing workload needs very closely. This helps to avoid the costly over-provisioning and over-spending so common in legacy environments. Not only is the ETERNUS DX family highly scalable, with unlimited SSD support and auto-tiering; Fujitsu's software licensing policy also makes system expansion simpler and less costly. Fujitsu charges for software licenses only by the number of controllers in the system, not by capacity, unlike some incumbent vendors. Moreover, these licenses are perpetual, not recurring, making the TCO calculation even lighter.
As such, customers who expect their data set to grow on their enterprise storage systems can save a significant amount of money by opting for Fujitsu's storage solutions, which are built to high Japanese standards, as witnessed by the many customers that already use them in production.

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications and consumer technology markets. IDC helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. More than 1,000 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries worldwide. For more than 48 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world's leading technology media, research, and events company.

Chiswick Tower, 389 Chiswick High Road, London W4 4AE, United Kingdom
+44.208.987.7100
Twitter: @IDC

Copyright Notice

This IDC research document was published as part of an IDC continuous intelligence service, providing written research, analyst interactions, telebriefings, and conferences. Please contact the IDC Hotline at 800.343.4952, ext. 7988 (or +1.508.988.7988) to learn more about IDC subscription and consulting services, to view a list of IDC offices worldwide, for information on applying the price of this document toward the purchase of an IDC service, or for information on additional copies or Web rights. Copyright 2013 IDC. Reproduction is forbidden unless authorized. All rights reserved.
