Many data centers sit on a large amount of "cold storage": servers holding terabytes of user data that must be retained but is rarely accessed, because users seldom need it anymore. While these servers are considered cold because they are rarely utilized, their hard drives usually keep spinning at full speed even though they are not serving data. The drives must keep rotating in case a user request actually requires retrieving data from disk, as spinning up a disk from sleep can take up to 30 seconds. In RAID configurations this delay can be even longer if the HDDs in the RAID volume are staggered in their spin-up to protect the power supply. Such latencies would translate into unacceptable wait times for a user who simply wants to view a standard-resolution photo or a spreadsheet.
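To make the RAID point concrete, here is a minimal Python sketch of the worst-case latency under staggered spin-up. The 30-second single-drive figure comes from the text; the 5-second stagger interval and the 8-drive volume are hypothetical values chosen for illustration.

```python
# Worst-case latency when a RAID volume staggers drive spin-up to
# protect the power supply. The 30 s spin-up time is from the text;
# the 5 s stagger interval is an assumed, illustrative value.

def worst_case_spinup_s(num_drives: int,
                        spinup_s: float = 30.0,
                        stagger_s: float = 5.0) -> float:
    """Latency if the drive holding the data is last in the spin-up order."""
    return (num_drives - 1) * stagger_s + spinup_s

# A hypothetical 8-drive volume: the last drive is ready only after ~65 s.
print(worst_case_spinup_s(8))  # 65.0
```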
Reducing HDD RPM by half would save roughly 3-5 W per drive. Data centers today can contain tens or even hundreds of thousands of cold drives, so the savings at the data-center level can be quite significant: on the order of hundreds of kilowatts, perhaps even a megawatt. The reduced bandwidth at lower RPM would likely still be more than sufficient for most cold use cases, since a data rate of several (perhaps several dozen) MB/s should still be achievable. Most user requests fetch less than a few MB of data, so users would likely not notice the added service time introduced by the slower drives. What is critical is that the HDD's response latency does not exceed 100 ms, so the user experience is not degraded.
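As a rough sanity check on these figures, the sketch below works through the arithmetic: fleet-level savings over the 3-5 W per-drive range, and per-request service time at a reduced data rate. The fleet sizes, request size, and 20 MB/s throughput are illustrative assumptions, not measurements.

```python
# Back-of-envelope arithmetic for the claims above. The per-drive
# savings range and the 100 ms budget are from the text; fleet sizes,
# request size, and throughput are assumed for illustration.

def fleet_savings_kw(num_drives: int, watts_saved_per_drive: float) -> float:
    """Total fleet power saved, in kilowatts."""
    return num_drives * watts_saved_per_drive / 1_000.0

def service_time_ms(request_mb: float, throughput_mb_s: float) -> float:
    """Time to stream `request_mb` MB at `throughput_mb_s` MB/s."""
    return request_mb / throughput_mb_s * 1_000.0

for drives in (10_000, 100_000):
    low, high = fleet_savings_kw(drives, 3.0), fleet_savings_kw(drives, 5.0)
    print(f"{drives:>7,} drives: {low:,.0f}-{high:,.0f} kW saved")

# A ~2 MB request at a reduced rate of ~20 MB/s streams in ~100 ms,
# right at the latency budget mentioned above.
print(f"2 MB @ 20 MB/s -> {service_time_ms(2.0, 20.0):.0f} ms")
```

At 100,000 drives the savings land at 300-500 kW, consistent with the hundreds-of-kilowatts estimate; it takes a fleet approaching 200,000-300,000 cold drives before the megawatt figure is plausible.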