Next week, Microsoft will reveal details of a new development tool, codenamed Velocity, that speeds the performance of enterprise and web applications. A session at the Professional Developers Conference will be dedicated to the new tool.
Velocity is in beta now, in the form of Community Technology Preview 3, with CTP4 expected sometime in late 2009. Velocity helps enterprises handle growth in web applications (such as blogging platforms or e-commerce sites) and enterprise applications (such as SQL databases). Should the tool succeed, it would put Microsoft into the application performance game. Today, application performance is often treated as a network problem, with network gear vendors such as Cisco and Riverbed offering hardware and caching technologies that speed applications at the network level. Velocity instead speeds the performance of distributed applications at the server memory level.
Microsoft is already achieving impressive application performance gains with Velocity, which relies on data caching. Essentially, Velocity creates a giant, virtualised store of memory out of what would otherwise be separate memory caches assigned to individual databases or other applications.
By putting data in Velocity's caches, data-hungry applications like an SQL database can grab data faster, with much less latency. Server CPU and disk resource consumption also drop.
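The article doesn't show Velocity's API, but the general cache-aside pattern a distributed cache enables can be sketched as follows. All names here are hypothetical stand-ins (a plain dict plays the cache, a counter plays the database); Velocity's actual client API differs:

```python
# Sketch of the cache-aside pattern a distributed cache such as Velocity
# supports. InMemoryCache and FakeDatabase are illustrative stand-ins only.

class InMemoryCache:
    """Stand-in for a distributed cache client (e.g. a Velocity named cache)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value


class FakeDatabase:
    """Stand-in for the backing SQL database; counts queries it serves."""
    def __init__(self, rows):
        self.rows = rows
        self.queries = 0

    def query(self, key):
        self.queries += 1  # each call represents query prep + execution + parsing
        return self.rows[key]


def get_product(cache, db, product_id):
    # Cache-aside: check the cache first; only on a miss go to the database,
    # then populate the cache so later reads skip the database entirely.
    value = cache.get(product_id)
    if value is None:
        value = db.query(product_id)
        cache.put(product_id, value)
    return value


cache = InMemoryCache()
db = FakeDatabase({"sku-1": "widget"})
get_product(cache, db, "sku-1")  # miss: hits the database once
get_product(cache, db, "sku-1")  # hit: served entirely from cache
print(db.queries)  # -> 1
```

The second read never reaches the database, which is the mechanism behind the latency, CPU and disk savings the benchmark measured.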
Grid Dynamics completed a benchmark study in September 2009 comparing three applications under multiple scenarios, with and without Velocity. Results varied significantly with the size of the data objects stored in cache, but overall the study showed substantial gains in data access performance.
One of the tested applications, a blogging engine, achieved nearly a 15x improvement in throughput on a large 57GB dataset of 3.1 million 16KB data objects, using 16GB of database cache and 27GB of Velocity cache. With Velocity, response time grew far more linearly as the number of requests per minute increased than it did without. The smaller 16KB data objects represented one of the best-case scenarios, since smaller objects mean many more of them are likely to be found in the Velocity cache. The other two tested applications were an e-commerce application and a market data application.
Velocity performs best where cache sizes are consistent across systems: the more homogeneous the environment, the more Velocity improves data access response times. Additional Velocity nodes can be added with minimal disruption to quickly increase the cache available across servers, and portions of the data cache can be replicated on multiple caches to improve performance further.
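The benchmark report doesn't describe Velocity's internal partitioning scheme, but the general idea behind growing a cache tier with minimal disruption — spreading keys across nodes so that adding a node remaps only a fraction of them — can be sketched with consistent hashing. This is an illustrative technique, not a claim about Velocity's implementation, and all names are hypothetical:

```python
import hashlib
from bisect import bisect_right

# Minimal consistent-hash ring. Each node is placed at many points on the
# ring ("virtual nodes") so keys spread evenly; a key belongs to the first
# node point at or after its own hash, wrapping around at the end.

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


class HashRing:
    def __init__(self, nodes, replicas=100):
        self._ring = []  # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node, replicas)

    def add_node(self, node, replicas=100):
        for i in range(replicas):
            self._ring.append((_hash(f"{node}:{i}"), node))
        self._ring.sort()

    def node_for(self, key):
        h = _hash(key)
        # First ring point strictly past this hash (tuple compare on hash).
        idx = bisect_right(self._ring, (h, chr(0x10FFFF)))
        return self._ring[idx % len(self._ring)][1]


ring = HashRing(["cache1", "cache2", "cache3"])
keys = [f"item-{i}" for i in range(1000)]
before = {k: ring.node_for(k) for k in keys}

ring.add_node("cache4")  # grow the cache tier by one node
after = {k: ring.node_for(k) for k in keys}
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys remapped")  # only a fraction move
```

Only the keys the new node takes over change owners; the rest keep hitting their existing caches, which is what allows capacity to be added without flushing the whole tier.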
The benchmark reached several other conclusions. Velocity reduces CPU usage both on the server database layer and on the client application server; the client-side reduction comes from cache hits eliminating SQL query preparation, execution and result parsing. Velocity's failover support did show significant drops in performance during transition periods, but otherwise there was little performance difference with Velocity's high availability turned on or off.
While Grid Dynamics' benchmark study was most likely commissioned by Microsoft, the methodology and testing results appear thorough and professionally presented. The benchmark report includes detailed explanations of the scenarios, queries and results found during testing.
The results look promising enough that Velocity could make its way into Microsoft products in the future. Microsoft has pulled the plug on other technology previews before: the Live Framework preview, for example, was canceled with only two weeks' notice to developers, leaving the future of Live Framework very unclear. The results Velocity is seeing give it a better chance of surviving inside Microsoft.