No industry outside fashion and entertainment can match IT for the waves of hype it generates. Time and again technologies are hailed with the promise of revolutionising our working lives, only to fade from view.
That is why the five technologies below are examined not only on their potential to change the world of computing over the next 12 months, but also on how soon these advances will be available to everyday users, either at the enterprise or the personal level.
Ruby on Rails: faster, easier web development
Chances are you've heard the term Ruby on Rails – probably from someone on your web development team lobbying heavily to use it for some or all of your company’s web development.
Ruby on Rails (also known as RoR and Rails) is a web application framework written in Ruby, an object-oriented programming language known for its clean syntax. Released in 2004, RoR is an open source project that originally served as the foundation of a project management tool designed by web development company 37signals LLC. It is easily ported among Linux, Windows and Macintosh environments – and it can have a dramatic impact on the speed at which a web development team is able to build and maintain enterprise websites and applications.
Equal parts design philosophy and development environment, Rails offers developers a few key code-level advantages when constructing database-backed web applications. One of its central tenets, often summarised as "convention over configuration", emphasises writing less code by avoiding redundancy and letting sensible Rails defaults do the work. This means less code to write and maintain and, ideally, shorter development times.
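The convention-over-configuration idea can be illustrated in a few lines of plain Ruby. This is a toy sketch — `TinyModel` and its naive pluralisation are hypothetical and much simplified, not the real ActiveRecord API — but it shows the principle: the framework infers a database table name from the class name, so the developer writes no configuration at all.

```ruby
# Toy illustration of Rails-style "convention over configuration":
# a model class derives its table name from its own class name,
# rather than requiring the developer to declare it.
class TinyModel
  # "Invoice" -> "invoices": lowercase the class name and pluralise it
  def self.table_name
    name.downcase + "s"   # naive pluralisation, enough for the sketch
  end
end

class Invoice < TinyModel; end
class Customer < TinyModel; end

puts Invoice.table_name    # table name inferred, no configuration written
puts Customer.table_name
```

In real Rails, the same convention extends to file locations, URL routes and foreign-key names, which is where the bulk of the code savings comes from.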
Thanks to these efficiencies and the open source nature of the web development framework, Ruby on Rails is experiencing a tremendous surge in popularity. Notable apps and sites built on Rails include 37signals' own Basecamp project management tool, the Jobster job search site and Revolution Health, an interactive health information site headed by former AOL CEO Steve Case. And Apple has announced that Mac OS X 10.5 (code-named "Leopard") will ship with Rails bundled into the operating system when it is released this spring.
NAND drives: Bye-bye, HDD?
It's nice to know that 2007 will finally bring one of the most coveted advances in computing – the solid-state hard drive. The appeal of solid-state drives (SSD) is plain: They're lighter, faster, quieter and less power-hungry than conventional notebook hard disk drives (HDD), and they won't break if you drop them. NAND is the storage technology that will drive SSDs, making it one of the key technologies to watch in 2007.
NAND (short for "Not AND") is a type of flash memory that excels at writing and erasing data in large blocks, which makes it well suited to mass storage. NOR (short for "Not OR") is the other type of flash-based storage; its faster random reads make it better suited to holding and executing code on smaller devices like cell phones. NAND's strengths make it ideally suited for larger-capacity storage drives.
Recognising the appeal of solid-state mass-storage drives, a number of memory manufacturers have begun to develop flash memory drives for inclusion in laptops and other portable devices. In early 2006, Samsung announced the development of a 32GB NAND drive that it touted as a "hard drive killer", and both Samsung and Sony have released notebooks with flash-based drives in Asia. A number of other notebook manufacturers, including Toshiba and Lenovo Group, have expressed a desire to integrate memory drives into notebook computers. Recent reports have indicated that solid-state drives are being built with data throughput of up to 62MB/sec. – faster than a conventional notebook hard drive can sustain, with random access times close to 100 times shorter than those of a spinning disk.
The kicker? The 32GB drive that SanDisk claims is capable of these speeds has a 1.8in. design. Finally, because of their small size and lack of moving parts, NAND drives consume a fraction of the energy and generate a fraction of the heat of standard disk-based drives.
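A quick back-of-the-envelope calculation puts the 62MB/sec. figure in perspective. The SSD rate below is the one quoted above; the sustained rate assumed for a conventional notebook drive is an illustrative figure, not a measured one:

```ruby
# Rough comparison of the time to read a large file at the quoted SSD
# throughput versus an assumed conventional notebook-drive rate.
file_mb  = 700.0    # e.g. a CD image
ssd_rate = 62.0     # MB/sec, figure quoted for the SanDisk drive
hdd_rate = 30.0     # MB/sec, assumed sustained rate for a small HDD

ssd_secs = file_mb / ssd_rate
hdd_secs = file_mb / hdd_rate

printf("SSD: %.1f sec, HDD: %.1f sec (%.1fx faster)\n",
       ssd_secs, hdd_secs, hdd_secs / ssd_secs)
```

And that is sequential throughput, where spinning disks are at their best; for the scattered small reads of everyday use, the gap is far wider.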
The downside of NAND drives is cost: these tiny drives run upwards of $500 (£255). That's a lot of budget room to spend on a 32GB drive, which explains why the technology hasn't been implemented in more laptop configurations.
Perhaps as a short-term measure while the price per gigabyte of fixed-drive NAND storage drops – a decline helped along by industry oversupply – drive manufacturers are beginning to experiment with and embrace hybrid hard drives that use both traditional moving parts and NAND storage.
The working concept behind these drives is a NAND cache of substantial size (under 1GB, with initial sizes ranging from 128MB to 256MB) that stores the small, frequently accessed files operating systems and users work with most. Caching these files allows the main drive to spin down during standard system operation, reducing power consumption and extending battery life. In the summer of 2006, Samsung announced plans to release one such hybrid hard drive at the same time Microsoft released Windows Vista. This product is still pending.
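The caching idea can be sketched in a few lines: a small, fixed-capacity cache holds the most recently used blocks, and only a miss forces the power-hungry main drive to be touched. This is a simplified model of the concept, not any vendor's actual firmware:

```ruby
# Minimal model of a hybrid drive's NAND cache: a fixed-capacity,
# least-recently-used cache sitting in front of a slow "disk" read.
class NandCache
  def initialize(capacity)
    @capacity   = capacity
    @store      = {}        # Ruby hashes preserve insertion order
    @disk_reads = 0
  end
  attr_reader :disk_reads

  def read(block)
    if @store.key?(block)
      # Hit: refresh recency by re-inserting; the disk stays spun down
      @store[block] = @store.delete(block)
    else
      # Miss: spin up the disk, evict the least recently used block
      @disk_reads += 1
      @store.delete(@store.keys.first) if @store.size >= @capacity
      @store[block] = "data for #{block}"
    end
    @store[block]
  end
end

cache = NandCache.new(2)
%w[boot boot config boot config].each { |b| cache.read(b) }
puts cache.disk_reads   # only 2 misses despite 5 reads
```

Because typical desktop workloads revisit the same small set of files over and over, even a modest cache keeps the spindle idle most of the time.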
Intel has also been smart enough to pick up on this awkward stage of drive technology. The company's pending flash cache technology, code-named "Robson," permits faster hard-drive throughput by using a flash memory cache on the motherboard to speed up disk-based data transfers.
Microsoft also understands the importance of hybrid hard drives. ReadyDrive, one of Windows Vista's new features, was created to accommodate and enhance the performance of hybrid drives by intelligently storing the most frequently accessed files in the NAND cache. The new operating system also includes ReadyBoost, another new feature that lets Windows use removable flash memory devices as supplementary disk caches to improve performance. This is welcome relief for those of us holding on to the promise of pure solid-state drives.
Ultra-Wideband: 200x personal-area networking
As it currently stands, personal-area networking via Bluetooth is useful for telephone conversations, data syncing between mobile and stationary devices and, in extreme cases, music. But it doesn't take much to imagine a type of usefulness – think video, rich audio and large files – that transcends this wireless technology's current capabilities.
Enter Ultra-Wideband (UWB). A technology for rapidly transmitting data over radio in the 3.1GHz to 10.6GHz range, UWB is capable of data transfer rates approaching 500Mbit/sec. with relatively low power consumption. By way of contrast, Bluetooth's top speed is only 2.1Mbit/sec.
One of the underlying strengths of Ultra-Wideband is that it transmits data as rapid, data-rich pulses of radio energy with a fairly short range of about 30 feet. In contrast to most wireless systems, which transmit over a narrow band of frequencies, UWB transmissions occur across a much wider spectrum. Think of mopping a floor: the wider the mop, the more floor you cover with each pass – and the wider the band, the more data each pulse can carry. The other advantage of these short, low-power pulses over conventional wireless transmissions is that their very brevity makes them less subject to interference and cancellation effects.
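Shannon's channel-capacity formula, C = B log2(1 + S/N), makes the bandwidth trade-off concrete: capacity grows linearly with bandwidth B but only logarithmically with signal-to-noise ratio, which is why spreading low-power pulses across several gigahertz can outrun a high-power narrowband link. The bandwidth and SNR figures below are illustrative assumptions, not measured values:

```ruby
# Shannon capacity C = B * log2(1 + S/N), in Mbit/sec, for an
# illustrative narrowband link versus a UWB-style wideband link.
def capacity_mbps(bandwidth_hz, snr)
  bandwidth_hz * Math.log2(1 + snr) / 1_000_000
end

narrowband = capacity_mbps(1_000_000, 1000.0)     # 1MHz at high SNR
wideband   = capacity_mbps(7_500_000_000, 0.01)   # 7.5GHz at very low SNR

printf("narrowband: %.1f Mbit/sec, wideband: %.1f Mbit/sec\n",
       narrowband, wideband)
```

Even with a signal 100,000 times weaker relative to the noise, the wideband link comes out far ahead – the same logic that lets UWB stay below the noise floor of other radios sharing the spectrum.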
There are currently two competing UWB specifications: one proposed by the UWB Forum and another championed by the WiMedia Alliance. Neither specification has yet been ratified as "official". However, the WiMedia Alliance's UWB spec has received Intel's backing, making it the frontrunner in this classic Betamax-versus-VHS turf war. The chipmaker has a page on its website that lists some of the emerging standard's advantages, including the following:
- The ability to wirelessly connect a mobile computer or PDA to a digital projector
- The ability to play digital video from a camcorder on an HDTV without having to connect any wires
- The ability to transmit information from a PC (or any device for that matter) to a printer, scanner or any other device
Still not convinced about UWB? In 2006, the Bluetooth Special Interest Group announced that it will integrate the WiMedia Alliance's UWB specification into a future iteration of the Bluetooth standard, significantly boosting Bluetooth's capabilities. The end result will likely be a quantum leap forward in personal-area networking.
Hosted hardware: Supercomputing for the masses
Imagine a networking task for your large, small or home business that is so big you need an enterprise server to handle it. Now imagine being able to lease such a server on demand. This ability to tap into a grid of supercomputing power the same way your house taps into the municipal water supply is the premise behind the concept of hosted hardware.
Large technology players such as IBM, Sun Microsystems and HP already sell computing power to sizable corporations, typically on a large scale. But new services from the likes of Amazon.com and 3tera are bringing on-demand computing to midsize and small businesses. This concept is known as hosted hardware, or grid computing.
Not surprisingly, one of the key ingredients in this process is virtualisation. Here's how it works: on an on-demand basis, clients pay around 10 cents (5p) per virtual server per hour for access to spawned instances of virtual servers. In Amazon.com's case, each instance has the equivalent power of a server with a 1.7GHz Xeon processor, almost 2GB of RAM, a 160GB hard drive and a high-speed internet connection. As InfoWorld's Jon Udell has noted, it's cheaper to use a dedicated hosting provider for ongoing needs that don't fluctuate. But for occasional bursts of use, the on-demand model pays off for businesses that don't have a lot of computing power in house.
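The economics behind that advice are easy to check. The 10 cents (5p) per server-hour rate is the one quoted above; the dedicated-server monthly price and the burst workload are assumed figures for comparison:

```ruby
# When does on-demand beat a dedicated box? Compare the quoted
# 10-cents-per-server-hour rate against an assumed dedicated server.
ON_DEMAND_PER_HOUR  = 0.10   # $ per virtual server per hour (quoted rate)
DEDICATED_PER_MONTH = 60.00  # $ per month, assumed dedicated hosting price

# Burst scenario: 20 servers for an 8-hour load test, once a month
burst_cost = 20 * 8 * ON_DEMAND_PER_HOUR

# Always-on scenario: one virtual server running the full month
always_on_cost = 24 * 30 * ON_DEMAND_PER_HOUR

printf("burst: $%.2f, always-on: $%.2f, dedicated: $%.2f per month\n",
       burst_cost, always_on_cost, DEDICATED_PER_MONTH)
```

At these assumed prices, the burst workload costs a fraction of a dedicated server, while running an instance around the clock costs more – exactly the split Udell describes.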
Perhaps one of the most interesting aspects of hosted, grid-based computing is that it allows large corporations such as Amazon.com to lease the down cycles of their servers to smaller businesses. In fact, Amazon began selling similar services early last year. In March 2006, the company announced a Simple Storage Service (S3) that allows clients to store data on its servers at the rate of 15 cents per gigabyte per month, plus 20 cents per gigabyte of transferred data. In July of 2006, Amazon launched a Simple Queuing Service (SQS) that allows developers to move data and messages between the various components of noncentralised applications.
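At the quoted S3 rates, a month's bill is simple arithmetic. The storage and transfer volumes below are made-up workloads, not Amazon figures:

```ruby
# Monthly S3 bill at the quoted rates: 15 cents per gigabyte stored
# per month, plus 20 cents per gigabyte transferred.
STORAGE_RATE  = 0.15   # $ per GB-month (quoted rate)
TRANSFER_RATE = 0.20   # $ per GB transferred (quoted rate)

stored_gb      = 100   # hypothetical data held on S3 for the month
transferred_gb = 40    # hypothetical data moved in and out

bill = stored_gb * STORAGE_RATE + transferred_gb * TRANSFER_RATE
printf("monthly bill: $%.2f\n", bill)   # 100*0.15 + 40*0.20 = $23.00
```

With no upfront commitment, that pay-per-gigabyte pricing is what makes the service attractive to small operations that could never justify their own storage arrays.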
Grid computing has received considerable hype over the past few years, but given the increasing emphasis on enterprise efficiencies, 2007 could be its breakthrough year. How big is this potential market? Robert Rosenberg, president of analyst firm Insight Research, sees what is essentially rental-based distributed computing becoming a $24.5 billion market by 2011. (To get a free executive summary or to purchase the full report, see the Insight Research site.)
Advanced CPU architectures: Penryn, Fusion and more
If you think dual-core and quad-core processors are intriguing, wait until you see what CPU manufacturers Intel and Advanced Micro Devices have planned for 2007 and beyond. The coming 12 months will play a critical role in defining new models, architectures and materials for developing highly advanced, state-of-the-art processors.
First, consider Intel's upcoming Penryn processor architecture. Currently, Intel fabricates the vast majority of its CPUs – including the highly popular Core 2 Duo line – on a 65-nanometer process. However, in recent months, the chipmaker has successfully manufactured prototypes of a processor fabricated on a 45nm process. And both Intel and IBM have recently announced the development of new high-k insulating materials that will enable chipmakers to shrink transistor features to 45nm without losing thermal or electrical efficiency.
For Intel, the end result of this process will be a brand-new CPU architecture. With a possible release date of late 2007 or early 2008, Penryn processors will likely boast increased performance and battery life, and this architecture could lay down a foundation that would allow for eight, 16 or even 32 CPU cores on a single processor die. AMD's microprocessor plans are no less ambitious. In some ways, they're even more advanced than Intel's because they embrace a new trend in CPU design known as heterogeneous processing.
Based on recent public announcements, it appears that in the coming year, AMD will leverage its recent acquisition of graphics chip manufacturer ATI Technologies to produce a brand-new series of processors code-named "Fusion" that combine traditional CPU cores and a graphics processing unit in a single dual- or quad-core package.
In theory, this model allows a desktop PC, laptop or server to use a standard CPU core for general PC/OS functions while specialist cores tackle other tasks, such as 3D graphics or floating-point-intensive calculations. AMD's name for its take on heterogeneous processing is the Accelerated Processing Unit (APU).
One of the most interesting subplots regarding the development of advanced CPU architectures is the broad implications these new CPU designs will have on the future design and development of software, motherboards and more. An Intel research group, for instance, is exploring the greater ramifications of an 80-core CPU at the hardware and software level.
Finally, software developers are beginning to write multi-threaded applications that take advantage of multiple processing cores by distributing independent chunks of work across separate CPU cores. Without multi-threaded applications, the performance potential of multicore processors is largely wasted. As Ars Technica describes in its very interesting account of the challenges programmers face in writing multi-threaded code, it's no easy task, but one that could have huge speed and efficiency payoffs in the long run.
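The basic shape of such a program can be shown in a few lines of Ruby: the job is split into independent chunks, and each chunk is handed to its own thread. (Whether those threads actually land on separate cores depends on the runtime – Ruby's standard interpreter serialises them with a global lock – so treat this as a sketch of the structure, not a benchmark.)

```ruby
# Split a job into independent chunks and process each in its own
# thread -- the basic structure of a multicore-friendly program.
# Each chunk sums its own slice; no mutable state is shared.
numbers = (1..1_000).to_a
chunks  = numbers.each_slice(250).to_a   # four independent work units

threads = chunks.map do |chunk|
  Thread.new { chunk.sum }               # one worker per chunk
end

total = threads.sum(&:value)             # Thread#value joins and returns
puts total                               # same answer as numbers.sum
```

The hard part in real applications is what this toy avoids: finding chunks that genuinely don't depend on one another, and merging their results without the locks and races that make threaded code notoriously difficult.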
Whew! There you have it: Five technologies that will make your computing life faster and more efficient. Which means easier. Which means happier.
Now it's your turn. Which technologies do you have your eye on right now? Remember, these should be technologies that you think will have a solid impact on computing in the very near future.