Hardware performance is about much more than clock speed and raw processing power these days, thanks to embedded functions that do everything from improving security to virtualising servers.

Chip makers, including Intel and Advanced Micro Devices (AMD), are ushering in a new era in processor design by adding hardware-enabled features to their wares. The goal is to either replace functions that have traditionally been done via software or, more often, significantly improve the operation of the software.

As an added bonus, these hardware-assisted processor functions improve overall system performance without increasing the heat generated, the vendors claim, allowing businesses to keep a lid on utility costs and reduce the need for exotic cooling strategies.

"This is something that has been coming for a long time," says Rick Sturm, president at analyst firm Enterprise Management Associates. "It's the natural course of evolution, and an affordable and rational thing to do to put some of this functionality down on the chip level."

As computer platforms and overall system management increase in complexity, IT professionals are demanding that systems have 100% availability, response times of less than a second and instant problem resolution, Sturm says. These goals are no longer strictly the purview of any one area -- silicon, software or human intervention -- but are now being addressed by taking advantage of advances on all fronts. IT is "strangling" from the costs of operations, Sturm says. "We're spending so much money on management that it is preventing us from innovating and addressing the needs of business."

Early customers

The Charlotte Observer, a large US regional daily newspaper, began migrating some of the publication's most important applications to a virtualised environment in December.

The paper is moving its Oracle-based circulation system database to servers that have Intel's new quad-core Xeon processors with built-in, hardware-enabled virtualisation technology. The paper's editorial content workflow system is also being put on the virtualised servers.

Geoff Shorter, IT infrastructure manager at the Charlotte Observer, says he found out during the testing phase how these new servers can run virtualisation at near-native speeds. The database used for the test prepared subscription renewal notices and determined which accounts needed to be billed, how much to bill and for what period of time.

Mike Grandinetti, chief marketing officer for virtualisation software provider Virtual Iron, says virtualisation often results in overall hardware performance loss, ranging from 10% to 50%. But when using chip-enabled virtualisation, the performance loss drops to 4% or less.

This is also what Shorter's group found. "Virtual Iron will tell you their overhead is between 1% and 3%, but a 3% difference on a 10-minute [database run] is not noticeable," Shorter says. "It's just like native. The driving force for going to a virtualisation strategy was cost, but we've tested it, and performance is also a driving factor."

Shorter estimates he can run seven to 12 virtual servers per single-core processor node on existing systems. As the newspaper transitions to quad-core systems over the next year, he expects to be able to support around 30 virtual servers per physical node.
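For a rough sense of what those percentages and consolidation ratios mean in practice, the sketch below (in Python) works through the arithmetic. The overhead and per-node figures come from the quotes above; the 10-minute run time and the 120-server estate are illustrative assumptions, not figures from the newspaper.

def virtualisation_overhead(native_minutes, overhead_pct):
    # Extra wall-clock time added by virtualisation, in seconds.
    return native_minutes * 60 * (overhead_pct / 100)

print("Added time: %.0f seconds" % virtualisation_overhead(10, 3))  # about 18 seconds on a 10-minute run

def physical_nodes_needed(virtual_servers, vms_per_node):
    # Physical hosts required for a given consolidation ratio (ceiling division).
    return -(-virtual_servers // vms_per_node)

# 120 virtual servers at roughly 10 per single-core node versus 30 per quad-core node.
print(physical_nodes_needed(120, 10))  # 12 single-core hosts
print(physical_nodes_needed(120, 30))  # 4 quad-core hosts

On those assumptions, a 3% overhead adds well under half a minute to the run, and the move to quad-core nodes cuts the physical server count to a third.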

Jason Lochhead, principal architect at managed hosting provider Data Return, says the company is already seeing benefits from hardware-assisted virtualisation within the server infrastructure it offers its customers.

A year ago, Data Return introduced its Infinistructure utility computing platform intended to allow customers to maximise server utilisation and create on-demand computing resources more economically through the use of server virtualisation. Using Hewlett-Packard servers based on AMD Opteron processors, Data Return has been able to create hundreds of virtual server instances for customers.

"We don't have as much wasted hardware capacity and have lowered power and cooling bills by consolidating these physical servers with the use of virtualised machines," Lochhead says. "It's much cheaper, particularly when you' re talking about adding servers for redundancy rather than performance."

The hardware-assisted virtualisation capability of the AMD Opteron processors allows Data Return to run many more operating system varieties on both 32-bit and 64-bit versions of the same base hardware, he says. In future, Opteron’s hardware-assisted abilities are expected to include memory translation and virtualised access to input/output devices, he says.

"We're enthusiastic about it," Lochhead says. "When we were first going down this road, virtualisation was pretty new, and customers were a little leery of accepting it. But when someone like AMD comes out and says they are putting these technologies into hardware, it's a vote of confidence."

What the future holds

Michael Cote, an analyst at RedMonk, says adding hardware-assisted functions to replace or augment software capabilities will continue to increase this year and next, as mainstream microprocessor manufacturers attempt to differentiate their product lines.
Cote adds: "These capabilities will continue to increase as more IT professionals gain a greater understanding of what is available and the potential benefits."

In most cases, rather than fully replacing businesses’ traditional systems management software applications, the new hardware-assisted capabilities will make that software operate more efficiently. Kevin Unbedacht, senior platforms strategist at Altiris, a provider of IT asset management software and services, says Intel's new Active Management Technology (AMT) is a good example.

Altiris' software has traditionally been able to analyse only those systems that are on and running an operating system. If a system is off, or not operating properly, the Altiris software cannot collect a full inventory analysis.

By using the AMT capability embedded within the chipset of vPro systems, however, the Altiris tracking and inventory software can detect systems even when they are off or not operating properly.

In addition, flash memory inside the vPro chipset stores system information each time the PC is booted, providing up-to-date information on the system status. The out-of-band alerts enabled by AMT can allow an IT department to make a single dispatch call, instead of the two that have traditionally been required for analysis and repair, he says.

The end result, Unbedacht says, is a hardware/software combination that can proactively monitor IT infrastructure instead of reacting only when something is wrong.

In addition, having basic management capabilities hardwired into silicon will make it simpler for new entrepreneurial systems management companies to add product offerings that can rapidly be adopted by IT professionals and integrated in enterprise-level applications, RedMonk's Cote says.

Intel calls its effort Embedded IT, and is attacking the problem with a variety of new or planned capabilities. Competitor AMD is making similar efforts with its Trinity and Torrenza programmes.

Measuring success: not so fast

The biggest boost to processor performance in the last two years has been the move to multicore processors. The migration from single-core to dual-core processors in the x86 market provided direct performance gains of 80% or more, and the first quad-core processors from Intel are providing another 50% improvement, says Nathan Brookwood, an analyst at Insight 64. How much hardware-assisted features or embedded IT will add to performance is debatable, with the real measure of worth to be determined by how the efforts improve such things as manageability.
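Compounding the figures Brookwood cites gives a rough sense of the cumulative gain. The short calculation below assumes a single-core baseline of 1.0 and simply multiplies the quoted uplifts; real-world results will vary by workload.

single_core = 1.0                 # illustrative baseline
dual_core = single_core * 1.8     # +80% from the move to dual-core
quad_core = dual_core * 1.5       # a further +50% from the first quad-core parts

print("Dual-core vs single-core: %.1fx" % dual_core)   # 1.8x
print("Quad-core vs single-core: %.2fx" % quad_core)   # roughly 2.7x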

"The ultimate test is whether it works for the IT professional for their specific application," Brookwood says. "Things like embedded IT are really designed to increase functionality rather than performance."

Markus Levy, an analyst who serves as president of the Multicore Association and the Embedded Microprocessor Benchmark Consortium, says the move to embed more hardware-assisted features will undoubtedly bring performance gains. But measuring any specific gain is a new challenge that industry groups are only beginning to address.

Increasing the clock speed of microprocessors has provided only minimal performance gains in the past few years as processor manufacturers have hit the wall in the trade-off between speed and the heat generated by the chips. Even the addition of multiple cores in processors running at lower clock speeds to reduce heat is expected to see diminishing returns as these chips move to eight or more cores, Levy explains.

In traditional architectures the use of additional cores will not necessarily help applications that require specific optimisation, he says, adding to the need for hardware-enabled assists. "When you are trying to do a specific function like security acceleration, adding another processor core can be an expensive piece of hardware, compared to enabling that capability by using only 100,000 or so gates inside the existing chip," Levy says.

Determining the level of performance enhancement associated with these hardware-assisted hooks and accelerators is a task the industry is only beginning to tackle.

"We're going to have to have benchmarks that are specifically tailored towards the use of those features," Levy says. "It is also going to require that we think of performance in a different way. It is going to be pretty challenging to develop a benchmark suite that will work on everybody's platform as they become increasingly custom."

Management by hardware

The past year has seen the advent of hardware-assisted features in mainstream x86-based microprocessors from Intel and AMD. Even as chip vendors have turned to multicore implementations as the primary source for boosting performance, they are adding hardwired features into their processors and associated chipsets.

These features were previously left solely to software or were not addressed at all.

"We are looking hard at what technologies are right to be moved into silicon and placed within our platforms as opposed to technologies that need to stay in software," says Margaret Lewis, director of commercial solutions at AMD. "As a result, we are on the brink of a lot interesting new concepts in performance. It's no longer simple. In many cases, it won't be necessarily be how fast you complete a task, but how satisfied you are with the result."

AMD's Trinity platform is intended to allow processors to handle virtualisation, security and management. One of the first commercialised efforts has been technology originally developed under the codename Pacifica, to allow hardware to more easily run multiple operating systems.

AMD's Torrenza platform was also introduced in the past year. Torrenza uses AMD's existing interconnect technology to allow third parties to create application-specific coprocessors that can work alongside AMD processors in multisocket systems.

Intel's embedded IT capabilities include its already released Virtualisation Technology, which like AMD's Pacifica provides a hardware-enabled ability to more effectively create virtualised infrastructure installations. Intel has also introduced Active Management Technology (AMT), which is embedded in client-side processors. AMT allows IT managers to remotely access networked computing equipment, even when a machine has been turned off or lacks a working operating system or hard drive.
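For readers who want to check whether their own x86 hardware exposes these virtualisation extensions, the sketch below reads the CPU feature flags that Linux reports: "vmx" corresponds to Intel VT and "svm" to AMD's equivalent (the Pacifica technology mentioned earlier). The reliance on /proc/cpuinfo is an assumption that the host runs Linux.

def hardware_virtualisation_flags(cpuinfo_path="/proc/cpuinfo"):
    # Collect the CPU feature flags the kernel reports and keep only the
    # hardware virtualisation extensions: "vmx" (Intel VT) and "svm" (AMD-V).
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

found = hardware_virtualisation_flags()
if found:
    print("Hardware-assisted virtualisation available: " + ", ".join(sorted(found)))
else:
    print("No VT-x or AMD-V support advertised by this CPU")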

Also in the works from Intel is I/O Acceleration Technology, a network accelerator that can break up the data-handling job between the components in a server, including the processor, chipset, network controller and software. The distributed approach reduces the workload on the processors while accelerating the flow of data, Intel says.

Intel's Trusted Execution Technology, originally codenamed LaGrande Technology, is a set of hardware extensions to processors and chipsets that enhances security. The technology is designed to prevent software-based attacks and to protect the confidentiality and integrity of data stored or created on a client PC.