Wide area storage solutions.... A redux of Wide Area File Services?

Data Direct Networks (DDN) has similar solutions (and aspirations) using its Web Object Scaler Platform. This made me wonder if newer technologies such as object storage and newer interfaces such as HTTP/REST are allowing vendors to come up with a "WAFS 2.0"?

When Cisco and Brocade got into the "Wide Area File Services" game, they took a network-centric view of the problem. The premise for WAFS was to allow remote users to access files globally at LAN speeds over the WAN. At the time, it was not so much a "Big Data" problem as a network latency problem.

Data sets shared between users were relatively small; however, latency-sensitive interfaces like CIFS and NFS were no longer practical over slower network connections or longer distances.

WAFS solutions therefore focused on solving the network problem. Techniques such as CIFS and MAPI protocol optimisation, data compression and local caching were built in, allowing distributed enterprises to link and consolidate remote storage resources into corporate datacenters, or to share corporate resources over longer distances.

One of the inherent challenges with WAFS was that it was not designed to handle large data sets. The techniques used to overcome latency issues, such as data compression and CIFS optimisation, became very inefficient when large data sets were involved.

The problem was further compounded by the fact that, at the end of the day, the underlying storage architectures and access interfaces still had inherent distance and scale limitations of their own.

Fast forward nearly a decade, and technologies have evolved to a point where vendors can take a ground-up approach to solving the problem of wide area file services. So will wide area storage solutions (WASS) be successful this time?

Let us look at what Quantum is doing. From Quantum's press release, Lattus is designed to provide "globally distributed disk-based archives that are extremely scalable and cost-effective and allows storage of data forever on disk without interruption or migration". Quantum says it accomplishes this by integrating dispersed object storage with components of its own StorNext file system technology (such as the namespace and interfaces).

This, says Quantum, allows the solution to overcome the limitations and inefficiencies of traditional disk architectures in multi-petabyte storage environments. Quantum says "Lattus is built to address the challenges inherent in current solutions based on RAID architectures that grow to the petabyte level and beyond in industries such as digital media, science research, surveillance and energy exploration.

Incorporating next-generation object storage technology, Lattus products are optimized for managing large and growing repositories of big data indefinitely, thereby enabling customers to extract the data's maximum value over its entire life". 

DDN, on the other hand, has designed its Web Object Scaler (WOS) platform as a distributed, peer-to-peer, shared-nothing system with a design that eliminates both local and geographic single points of failure and bottlenecks.

Linear scalability in both performance and capacity is achieved using a clustered, node-based design that DDN calls the "cloud building block". WOS nodes are essentially self-contained appliances configured with compute, networking and disk resources. Nodes communicate with each other using industry-standard TCP/IP protocols designed to be latency resilient.

This allows nodes to be placed as far apart as internet distances dictate, yet still form a common storage pool that can be accessed from anywhere, regardless of where the data lives - using newer object-based interfaces.

Furthermore, content distribution within the WOS cloud is fully automated, and this entire "cloud" - which is basically a collection of nodes in a geographically bound zone - can be managed as a single entity from a central location.
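To make the idea of automated, policy-driven content distribution concrete, here is a minimal sketch of how an object store might deterministically place replicas across geographic zones. The zone names, replica count and hashing scheme are purely illustrative assumptions, not DDN's actual placement algorithm:

```python
import hashlib

# Hypothetical sketch: choose N distinct zones for an object's replicas
# by ranking zones with a stable per-object hash. Deterministic placement
# means any node can compute where an object lives without a lookup.
def place_replicas(object_id, zones, copies=2):
    ranked = sorted(
        zones,
        key=lambda z: hashlib.sha256((object_id + z).encode()).hexdigest(),
    )
    return ranked[:copies]

# Illustrative zone names only.
zones = ["us-east", "eu-west", "ap-south"]
print(place_replicas("video-0042", zones))
```

Because the ranking is derived from a hash of the object ID, different objects land in different zone pairs, spreading load roughly evenly while keeping placement reproducible.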

Both solutions have striking similarities in that they:

  • Are designed to overcome the inherent limitations of traditional RAID architectures and of native file-interface access over longer distances

  • Are focused on large data sets that are geographically dispersed and concurrently shared, but nevertheless need the same level of resiliency and protection

  • Eliminate the need to do disruptive forklift upgrades

Both the Quantum Lattus WASS and the DDN WOS offer native HTTP/REST support for web and cloud-based access. Local NFS/CIFS access is offered by way of emulation, so users don't experience latency issues, and the storage architecture is object-based, so it offers extreme (geo) scalability.
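The appeal of the HTTP/REST approach is that objects are addressed by simple, location-independent URLs. The sketch below illustrates what such object-style requests typically look like; the endpoint, namespace and object ID are hypothetical placeholders, not the actual Lattus or WOS API, and the requests are constructed rather than sent:

```python
# Hypothetical sketch of object-style HTTP/REST access. Names are
# illustrative only; a real deployment would use the vendor's endpoint
# and authentication scheme.
def build_put(endpoint, namespace, object_id):
    # Store an object: PUT /<namespace>/<object_id>
    return ("PUT", f"{endpoint}/{namespace}/{object_id}",
            {"Content-Type": "application/octet-stream"})

def build_get(endpoint, namespace, object_id):
    # Retrieve the same object from any site in the storage pool;
    # the URL says nothing about which node or zone holds the data.
    return ("GET", f"{endpoint}/{namespace}/{object_id}", {})

method, url, headers = build_put("https://storage.example.com",
                                 "archive", "scan-001")
print(method, url)
```

Because the URL encodes only a namespace and an object ID, applications need no knowledge of the physical topology - which is precisely what makes these interfaces latency-tolerant over internet distances.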

Businesses dealing with Big Data often face a big question mark over how to store data efficiently before and after it is analysed. These solutions address that Big Data archiving problem.

Additionally, their geo-dispersal and integration characteristics create a global namespace that allows multi-site organisations to cost-effectively share and archive large file data with predictable ingest and retrieval times. Locally accessible standard interfaces minimise the need for custom coding for user access, while HTTP/REST interfaces allow standards-based access for applications.

Sounds promising.

What about market demand?

Posted by Ashish Nadkarni, IDC 

"Recommended For You"

Nine data storage companies to watch Clever Old Cleversafe