Fortune 1000 CIOs are making the strategic choice to consolidate remote site IT infrastructure into central data centers. By moving some or all remote file servers, email servers, backup servers, and other infrastructure, they can simultaneously reduce remote site operating costs and satisfy mandates for more rigorous security and compliance.
The stumbling block to consolidation, however, is the severe impact on application performance as seen by remote users. Relocating local servers to a data center and connecting them across a wide area network (WAN) link often results in order-of-magnitude slowdowns in response times and data transfer rates. At these levels of delay, business processes suffer and site consolidation efforts stall.
CIOs often discover that upgrading bandwidth to remote sites has little or no effect on application performance. The problem lies instead with the way the applications interact with the server across the WAN. Microsoft Windows file systems, Microsoft Exchange®, NAS, backup applications, CAD applications, and many others were developed with the assumption that the client and server were local. Across the WAN, however, where congestion, resource contention, diverse routing conditions, and high latencies exist, these applications slow to a crawl.
Riverbed’s Steelhead appliances use a new combination of patented and patent-pending mechanisms to achieve application acceleration. These mechanisms include transaction prediction, TCP proxying and optimization, and hierarchical compression, delivering order-of-magnitude improvements in application response time and throughput.
Remote IT Infrastructure Consolidation
The 3 Barriers to Centralizing Remote Infrastructure
A Riverbed Technology White Paper
© 2006 Riverbed Technology, Inc. All rights reserved.
Because Riverbed systematically addresses each of the issues affecting application performance over the WAN, it helps companies consolidate remote server infrastructure and deliver consistent end-to-end application performance without resorting to expensive, and often ineffective, upgrades to WAN bandwidth.

Why Consolidate?

Remote site server consolidation is a clear win in terms of reducing operating costs and improving data security. However, there were compelling reasons for distributing server infrastructure in the first place. Many companies chose to place servers at remote sites to deliver consistent application performance to remote users working with local data sets. Microsoft Exchange servers, for example, have commonly been deployed at remote sites with as few as 20-30 users, because above that size most Exchange messages end up being between local users.

Consolidation of remote site infrastructure offers significant benefits:

- Reduces cost and complexity
- Improves compliance
- Improves data and network security
- Improves resource utilization
- Eliminates the need for costly WAN bandwidth upgrades
- Eliminates write consistency issues associated with caching
- Frees up WAN capacity for VoIP and video applications

Provisioning servers at remote sites, however, often leads to low resource utilization and high costs. Since Exchange servers are typically sized for a capacity of several thousand users, deploying a dedicated server for a few dozen means inefficient use of server resources. The same issue exists for file servers and web servers. Worse, all of those servers have to be managed, backed up, repaired, and patched. Centralizing servers at a data center means greater resource utilization and fewer servers to back up and patch.
Since complexity is reduced, such consolidation also means lower IT staff requirements, less chance for error, and better system security. Because of these clear benefits, companies are trying to consolidate infrastructure as much as possible, yet many are surprised at how difficult it is to complete a successful site consolidation project. They find they can't deliver consistent end-to-end application performance even with significant upgrades to WAN bandwidth.

Three Barriers to Site Consolidation

When WANs are involved, client-server applications that worked well on LANs break down and perform poorly, or not at all. The reasons are threefold:

1. Constrained WAN bandwidth
2. TCP throughput drop-off with latency
3. Application chattiness, which multiplies the effect of latency

Constrained WAN Bandwidth

WAN bandwidth is often orders of magnitude less than local area network (LAN) bandwidth. A typical remote office has between 64 kbps and T1 (1.544 Mbps) or E1 (2 Mbps) bandwidth. Compared to modern LANs running at 100 Mbps to 1,000 Mbps, a consolidated remote site typically relies on less than 1% of the bandwidth it once had to access its data. From a pure bits-per-second perspective, it's easy to see why moving a large file across a WAN link takes more time than over the LAN. However, it's often the other two constraints, not bandwidth, that result in poor application performance.

TCP Throughput Drops Off with Latency

All applications rely on underlying communications protocols; for reliable transport across the network, that protocol is almost always TCP. TCP sends data in "windows." A window defines the maximum amount of data a sender can transmit before receiving an acknowledgement from the receiver.
Since it takes a round-trip time to receive the acknowledgement, the maximum throughput is the amount of data in a window divided by the round-trip time. TCP's slow start and congestion control features, designed to increase reliability, make the throughput problem worse.

Figure 1: TCP throughput drop-off with latency on a 45 Mbps (T3) link

Application Chattiness Multiplies the Effect of Latency

On top of TCP, applications have their own communications protocols. For example, Microsoft Windows file sharing uses CIFS, the Common Internet File System. Microsoft Exchange uses MAPI, the Messaging Application Programming Interface. Web-based applications rely on HTTP, and so forth. Some protocols (application or transport) are extremely "chatty," meaning they generate hundreds or thousands of round trips between client and server, even to accomplish seemingly simple tasks. For example, dragging and dropping a 1 MB file in Windows can trigger over 4,000 WAN round trips. On a LAN, where the latency between client and server is often less than a tenth of a millisecond, those thousands of round trips complete virtually instantaneously. When the same operation is done over a WAN, the latency is usually in the range of 50 ms to 250 ms, or even more when satellites are involved. The difference shown in Figure 2, in which completion time goes from 0.4 seconds to almost seven minutes, is why just moving the servers doesn't work: the slowdown is very noticeable to users. Application protocols also limit the amount of data they can transmit on each round trip, so the problem of many round trips is worse for large files.
If the application protocol has a transfer size of 16 KB, then a 16 MB file will require 1,000 round trips just to deliver the data, plus many additional round trips generated by the application to manage the data transfer, perform file system operations, or whatever other operations are required.

LAN vs. WAN Time to Complete
                              LAN          WAN
Latency (ms)                  0.10         100.00
Number of round trips         4,000        4,000
Time to complete (ms)         400          400,000
Seconds / minutes             0.4 / 0.01   400 / 6.67

Figure 2: A 1 MB file drag-and-drop in Windows generates 4,000 transactions between the client and the server

A similar chattiness issue applies at the TCP layer, which affects web-based business applications as well as applications like Notes, FTP, and other mission-critical applications.

Learning from Past Mistakes: Exposing the Myths

Over the past few years, vendors have created a number of products to accelerate application performance. These solutions, often categorized as WAN optimization or WAFS, have fallen into three categories:

1. TCP optimization
2. Compression
3. Caching

IT professionals have learned that these solutions are either insufficient to address performance across a wide range of applications, introduce additional complexity, or both. While each can solve specific issues with application performance across the WAN, the myth persists that they are general solutions to application performance.

Myth #1: You can solve application performance with TCP optimization alone

Many IT professionals are aware that TCP as originally defined has a maximum window size of 64 KB (the typical amount of data that can be carried in each TCP round trip), and that this limit can be raised with some work. In many cases, the configured maximum TCP window size is even smaller, 16 KB or 32 KB, which makes the problem worse.
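The arithmetic behind these two effects is simple enough to sketch. The short Python snippet below uses only the illustrative figures quoted in this paper (a 64 KB window, 100 ms of WAN latency, and the 4,000-round-trip drag-and-drop example) to compute the window-limited throughput ceiling and the pure latency cost of a chatty operation:

```python
# Two WAN effects, quantified with the illustrative numbers used above:
# (1) TCP throughput is capped at window size divided by round-trip time;
# (2) a chatty protocol pays one full RTT per application round trip.

def window_limited_throughput_mbps(window_bytes, rtt_s):
    """Maximum TCP throughput (Mbps) for a given window size and RTT."""
    return window_bytes * 8 / rtt_s / 1_000_000

def chatty_completion_s(round_trips, rtt_s):
    """Time spent purely on round trips, independent of bandwidth."""
    return round_trips * rtt_s

# A 64 KB window over a 100 ms WAN caps throughput near 5 Mbps, so most
# of a 45 Mbps (T3) link sits idle no matter how much data is queued.
print(window_limited_throughput_mbps(64 * 1024, 0.100))  # ~5.24 Mbps

# The 4,000 round trips of the 1 MB drag-and-drop example:
print(chatty_completion_s(4000, 0.0001))  # LAN (0.1 ms RTT): ~0.4 s
print(chatty_completion_s(4000, 0.100))   # WAN (100 ms RTT): ~400 s
```

The same window arithmetic underlies Figure 1: at 100 ms of latency, a default 64 KB window can never fill a T3 link.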
Even companies that elect to modify TCP find that fixing or improving TCP does not help application performance if the application protocol is less efficient than TCP. For example, in Microsoft Exchange 2003 the window size was increased from 8 or 16 KB to 64 KB. This helps reduce the number of round trips generated when sending large amounts of data, but does nothing to accelerate operations, such as calendaring, that are bottlenecked by the huge number of application-level (MAPI) client-server transactions. Hardware solutions exist to modify TCP's behavior in other ways across the WAN to increase its throughput, but modifying the TCP layer in the networking stack does nothing to improve performance issues caused by higher-layer protocols. For many applications like Windows file sharing or Exchange, the application protocols (CIFS and MAPI, respectively) are much chattier and less efficient than TCP itself. Making TCP more efficient can be helpful, but in many cases this approach alone is insufficient.

Myth #2: You can solve application performance with compression

Companies that attribute application performance issues to lack of bandwidth often conclude that they can solve the problem by adding compression appliances, which is equivalent to adding more bandwidth. More bandwidth is helpful, but again insufficient. Adding bandwidth does not reduce the chattiness of the application; all of those round trips still have to take place. No matter how much bandwidth you buy, once the initial congestion has been alleviated, application performance will not materially improve.

Myth #3: You can solve application performance with caching

Some companies have investigated caching appliances as a way to enable site consolidation.
That approach can work for a single data type, but it does not provide a general solution and is often used just to hide the underlying performance problem. For Exchange, special-purpose mail caching appliances are available, but they are not a general-purpose solution either. Caching is an application-specific technology: file caching works for file systems, web caching works for web pages, mail caching works for email, and so on. So while adding an Exchange mail cache will help by storing attachments locally, it adds complexity and affects only the perceived performance of Exchange.

Another issue for file caching is write consistency. Caching products often implement elaborate file locking mechanisms to prevent two users from writing the same file, but in the event of network outages or appliance failure these mechanisms can fail, with catastrophic results.

With Exchange 2003 and Outlook 2003, Microsoft introduced integrated client-side caching to address performance across the WAN. This hides the delay in getting email from servers to clients by not displaying a new email header until the entire message and any attachments are fully delivered. Thus, by the time a user is notified that a new message has arrived, the entire message has already been cached on the client. This does nothing to improve the actual time to deliver messages or the time to download one's inbox. Moreover, to get any benefit from client-side caching, you must deploy both the server (Exchange 2003) and the client (Outlook 2003). Client-side caching can improve the perceived user experience, but it may cause much heavier traffic across the WAN, since messages that might have been deleted unread if only the headers had been seen are now delivered in full to the client.
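The myths share a common blind spot: latency. A toy model (an illustration, not a vendor measurement) makes the point from Myth #2 concrete: for a chatty operation, even a roughly 30x bandwidth upgrade barely moves the completion time, because total time is serialization time on the link plus one round-trip time per application round trip:

```python
# Toy model: completion time = serialization time on the link
# plus the latency cost of the application's round trips.

def transfer_time_s(payload_bytes, link_bps, round_trips, rtt_s):
    """Total time to move payload_bytes with the given chattiness."""
    return payload_bytes * 8 / link_bps + round_trips * rtt_s

# The 1 MB, 4,000-round-trip drag-and-drop example over a 100 ms WAN:
t1 = transfer_time_s(1_000_000, 1.544e6, 4000, 0.100)  # T1 link
t3 = transfer_time_s(1_000_000, 45.0e6, 4000, 0.100)   # T3: ~30x the bandwidth
print(round(t1, 1), round(t3, 1))  # ~405.2 s vs ~400.2 s
```

In this sketch, nearly 99% of the completion time is round-trip latency, which neither compression nor a bandwidth upgrade touches.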
Riverbed Breaks the Barriers to Site Consolidation

Riverbed has introduced its line of Steelhead appliances to accelerate the performance of all applications running over TCP on WANs by addressing all three bottlenecks to WAN performance. Steelhead appliances are the first devices to address all of the issues affecting application performance over the WAN, delivering the largest performance improvements to the widest range of applications. Steelhead appliances optimize both application and transport protocol chattiness and offer unprecedented bandwidth optimization. These optimizations work in harmony to provide the highest levels of performance improvement, which can reach 100 times or more. With this kind of LAN-like performance, site consolidation projects can proceed without impacting end users. By addressing all areas of WAN performance, Riverbed offers several key advantages over file caching or compression-only approaches to site consolidation:

Broad Applicability: Steelhead appliances optimize all TCP traffic, covering a broad range of applications. Unlike file caching or email caching approaches, Steelhead appliances deliver performance and bandwidth savings whether a company is centralizing Exchange servers, Notes servers, file servers, NAS, tape backup, or any combination of these.

Time and Bandwidth Savings: Steelhead appliances provide response time improvement in addition to compression and bandwidth savings. In contrast, WAN optimization devices that just compress data reduce the amount of data within a packet but, because they do not terminate TCP, send the compressed data in the same number of round trips as it would take to send it uncompressed.

Better Optimization: File caches only give you a hit when a user requests a file identical to one requested before, and compression devices rarely deliver more than a 2-3x improvement.
Steelhead appliances in many cases offer more than 100x improvement, and deliver gains even on new versions of old files, with different file names, different applications, and so on.

Easier Deployment: Because caches are proxy servers, end user machines have to be configured to know about them, which means touching and changing every client. Steelhead appliances require no end user configuration, so the rollout is much simpler and quicker.

Bandwidth Optimization: Scalable Data Referencing

Riverbed's Scalable Data Referencing (SDR) bandwidth optimization technology dramatically reduces the amount of data sent across the WAN. SDR replicates data across the network in a new, protocol-independent format to reduce subsequent transmissions of the same data. Rather than attempting to replicate data blocks from a disk volume, files from a file system, email messages, or web content from application servers, Steelhead appliances represent and store data in a protocol- and application-independent format. As data passes through a Steelhead appliance, SDR chops it into variable-size segments and creates a short reference to each segment. As data and the accompanying references are created on one side, they are sent to the Steelhead on the other side. Thus, once the Steelheads have seen the data, they need only send references to the data to the other end. Moreover, these references are hierarchical: references can point to groups of references, so a single reference can represent an arbitrarily large amount of data. The elegance of the approach is that the Steelhead appliances are transparent to the client and server. There are no cache consistency issues to tackle, even though the data segments may exist in multiple locations.
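The segment-and-reference idea can be sketched in a few lines of Python. The toy below is an assumption-laden stand-in, not Riverbed's algorithm: it uses fixed 4 KB segments and SHA-1 hashes as references, where SDR actually uses variable-size, hierarchical, patented segmentation. It shows only the core effect: data the far side has already seen crosses the link as short references rather than payload bytes.

```python
import hashlib

# Toy sketch of segment-and-reference data reduction. Fixed 4 KB
# segments and 20-byte SHA-1 digests stand in for SDR's variable-size,
# hierarchical references.

SEGMENT = 4096

def encode(data, store):
    """Replace each segment with a short reference; ship the raw bytes
    only the first time a segment is seen. Returns the wire messages."""
    wire = []
    for i in range(0, len(data), SEGMENT):
        seg = data[i:i + SEGMENT]
        ref = hashlib.sha1(seg).digest()
        if ref not in store:       # new data: ship bytes plus reference
            store[ref] = seg
            wire.append((ref, seg))
        else:                      # seen before: ship the 20-byte reference only
            wire.append((ref, None))
    return wire

def decode(wire, store):
    """Rebuild the original stream from references and the local store."""
    out = bytearray()
    for ref, seg in wire:
        if seg is not None:
            store[ref] = seg
        out.extend(store[ref])
    return bytes(out)

sender, receiver = {}, {}
doc = b"A" * 100_000
first = encode(doc, sender)
assert decode(first, receiver) == doc
# Sending the same (or largely unchanged) data again costs only references:
second = encode(doc, sender)
raw_bytes = sum(len(s) for _, s in second if s is not None)
print(raw_bytes)  # 0: no payload bytes cross the "WAN" the second time
```

Because the client-server protocol itself still flows end to end, a scheme like this has no cache to fall out of date; the store holds opaque segments, not files or mailboxes.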
The client-server transactions always flow across the network, preserving the protocol semantics, even though very little actual data is transferred across the link.

Virtual Window Expansion

For all TCP-based applications, Steelhead appliances minimize the time it takes to send data across the WAN by applying SDR compression and TCP optimization together. This is accomplished by Virtual Window Expansion (VWE), which multiplies the effective TCP window size. Most TCP implementations, including Windows 2000 and XP, by default send no more than 64 KB per round trip across the network. It is often difficult to change these defaults across all hosts and to resize buffers in the network elements to accommodate the change. Steelheads implement window scaling across the WAN correctly, without host reconfiguration and without requiring larger network buffers for LAN-directed traffic. Beyond window scaling, Steelheads terminate TCP and repack TCP payloads, substituting references to arbitrarily large amounts of data using Riverbed's SDR technology. This technique expands TCP windows virtually, beyond what window scaling delivers, because the amount of data represented by a reference can be 1 MB, 10 MB, or more. By virtually expanding the TCP window size, the number of round trips is minimized, which in turn increases throughput. All of this is done without changing the underlying TCP protocol or the client-server interaction. In contrast, WAN optimization devices that just compress data on a per-packet basis reduce the amount of data within a packet but, because they do not terminate TCP, send the compressed data in the same number of round trips as it would take to send it uncompressed.
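The payoff of virtually expanding the window is plain round-trip arithmetic. The numbers below are illustrative assumptions, not measured Steelhead results; they use the 1 MB-per-reference figure quoted above:

```python
import math

# Round trips needed to move a payload when each round trip can carry
# at most per_round_trip bytes of (real or referenced) data.
def round_trips(payload_bytes, per_round_trip):
    return math.ceil(payload_bytes / per_round_trip)

RTT = 0.100             # 100 ms WAN
PAYLOAD = 64 * 2**20    # 64 MB transfer (illustrative)

plain = round_trips(PAYLOAD, 64 * 1024)  # 64 KB window: 1,024 trips
vwe = round_trips(PAYLOAD, 1 * 2**20)    # 1 MB references: 64 trips
print(plain * RTT, vwe * RTT)  # latency cost: ~102.4 s vs ~6.4 s
```

Each reference that stands in for a megabyte of already-seen data removes round trips outright, which is why the gain compounds with the SDR data reduction rather than merely adding to it.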
Transaction Prediction

To address application chattiness, Riverbed has developed a set of algorithms known as Transaction Prediction, which further minimize the number of round trips across the WAN without interfering with client-server semantics. Transaction Prediction works in combination with SDR and VWE to provide even higher levels of performance for the most common enterprise applications. With specific knowledge of application protocols like CIFS and MAPI, Steelhead appliances are able to predict upcoming client requests, issue those requests to the server on behalf of the client, and bundle the results of the server interaction into a few round trips. Each round trip avoided saves a discrete amount of time, independent of how much bandwidth is available. When thousands of round trips are avoided, the time saved can be measured in minutes or even hours, depending on the workload.

High-Speed TCP

When centralizing remote site infrastructure, enterprises often find it necessary to increase the capacity of the links between data centers used for data replication and disaster recovery. On these high-speed WAN links, normal TCP can fail to ramp up to full capacity even though plenty of bandwidth is available. This leaves data replication and mirroring applications starved for throughput, thwarting site consolidation efforts with insufficient data protection. Riverbed has implemented a number of Internet Engineering Task Force (IETF)-specified congestion control mechanisms in the Steelhead appliance that enable TCP performance to scale to hundreds of Mbps over significant latencies (>100 ms round trip). Riverbed customers with high-speed WAN links can now achieve full utilization of their investment in network bandwidth without losing or compromising any of the familiar and essential characteristics and benefits of TCP.
This includes safe congestion control, even when high-speed TCP connections share WAN links with normal TCP connections.

Summary

Site consolidation has a tremendous ROI, as long as user application performance can be preserved. Most enterprises have a range of remote office IT infrastructure: file servers, Exchange servers, web applications, application servers, Notes servers, NAS, tape backup, and so forth. The more infrastructure that can be centralized or consolidated to the data center, the higher the ROI for the IT department.

Riverbed Technology, Inc.
501 Second Street, Suite 410
San Francisco, CA 94107
Tel: (415) 247-8800
www.riverbed.com

Riverbed Technology Ltd. (UK)
200 Brook Drive, Green Park
Reading RG2 6UB, United Kingdom
Tel: +44 118 949 7002

Riverbed Technology Pte. Ltd.
350 Orchard Road
#21-01/03 Shaw House
Singapore 238868
Tel: +65 68328082

Riverbed Technology K.K.
Shibuya Mark City W-22F
1-12-1 Dogenzaka, Shibuya-ku
Tokyo 150-0043, Japan
Tel: +81 3 4360 5357

© 2006 Riverbed Technology, Inc. All rights reserved. Riverbed Technology, Riverbed, Steelhead and the Riverbed logo are trademarks or registered trademarks of Riverbed Technology, Inc. Portions of Riverbed's products are protected under Riverbed patents, as well as patents pending. WP-SC031406