Jon Stokes of Ars Technica reports on an IDC storage study, adding his own commentary to the report's executive summary.
In a nutshell, the problem in the datacenter is the same as it ever was, but with a power-aware twist: backing store (in the form of hard disk arrays) is getting too slow and too hot, while demand rises and cost-per-bit plummets. So the answer is to add a little cache, in the form of flash memory.
From my own perspective, the case for SSDs in the enterprise seems pretty straightforward, built as it is on two constraints: latency and power. Latency comes into play when solid-state memory is used as a cache for a larger pool of magnetic backing store; such a cache can improve response times for databases and Web-based apps.
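To make the latency argument concrete, here is a minimal sketch of a read-through LRU cache fronting a slower disk-backed store. The latency figures are assumptions chosen for illustration, not numbers from the article or the IDC report:

```python
from collections import OrderedDict

# Hypothetical latencies (illustrative only): flash reads in the
# sub-millisecond range, disk reads dominated by seek + rotation.
DISK_READ_MS = 8.0
FLASH_READ_MS = 0.1

class FlashCache:
    """Read-through LRU cache in front of a slower disk-backed store."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.cache = OrderedDict()

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)       # refresh LRU position
            return self.cache[block], FLASH_READ_MS
        value = self.backing[block]             # miss: go to disk
        self.cache[block] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return value, DISK_READ_MS

# A hot block pays the disk latency once, then flash latency thereafter.
disk = {b: f"data-{b}" for b in range(100)}
cache = FlashCache(capacity=10, backing_store=disk)
_, first = cache.read(7)    # miss: disk latency
_, second = cache.read(7)   # hit: flash latency
```

The payoff depends entirely on the hit rate: workloads with a hot working set (database indexes, popular Web objects) see most reads served at flash speed.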
The power factor is also compelling, though, especially in light of a recent Google study showing the hard disk to be one of a server's most poorly power-optimized components. Large storage arrays from vendors like EMC suck up major wattage and throw off a ton of heat, so there's a huge and growing appetite for hardware and software technology that either makes more efficient use of storage or cuts down on the number of times that drives must spin up. Flash-based caches can make it possible for drives to spin down during certain types of low-load conditions (functionally an active sleep state), and that could make bimodal power optimization at least feasible for servers.
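The spin-down idea can be sketched as a simple idle-timer policy: if the flash cache has absorbed all I/O for long enough, the drive drops into its low-power state. The power figures and threshold below are assumptions for illustration, not vendor specifications:

```python
# Illustrative spin-down policy. ACTIVE_W, IDLE_W, and the idle
# threshold are assumed values, not measurements from any real drive.
ACTIVE_W = 11.0           # assumed draw while platters spin
IDLE_W = 1.5              # assumed draw while spun down
SPIN_DOWN_AFTER_S = 30.0  # spin down after this much idle time

class DrivePowerModel:
    """Bimodal power model: full power while busy, low power when idle."""
    def __init__(self):
        self.spinning = True
        self.last_io = 0.0

    def on_io(self, t):
        # Any I/O that misses the flash cache forces a spin-up.
        self.spinning = True
        self.last_io = t

    def power_at(self, t):
        # Spin down once the cache has absorbed I/O past the threshold.
        if self.spinning and t - self.last_io >= SPIN_DOWN_AFTER_S:
            self.spinning = False
        return ACTIVE_W if self.spinning else IDLE_W

drive = DrivePowerModel()
drive.on_io(0.0)
busy = drive.power_at(10.0)   # within the idle window: still spinning
idle = drive.power_at(60.0)   # past the threshold: spun down
```

The catch, of course, is that a spin-up penalty hits the first cache miss after an idle period, so the policy only pays off when low-load intervals are genuinely long.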