Heading to LV at the end of Apr, no, not Data Center World, but IBM Impact, the social, mobile, cloud conference

A lot of data center people will be heading to LV at the end of the month for Data Center World Spring.


Some friends who will be in LV asked if I could join them for an evening event, so I was looking for a reason to be in town.

Then I saw that IBM has a conference at exactly the same time, called IBM Impact, on Mobile, Social, and Cloud.  Huh, facility management professionals, or social, mobile, cloud professionals with Forest Whitaker, Tim O'Reilly, Michael Copeland (Wired), Rich Karlgaard (Forbes), and Matchbox Twenty?  Which one should I go to?  The power and cooling stuff we have talked about for the past 5 years, or social, mobile, and cloud?  On Apr 29 - May 1 I'll be at the IBM event; maybe I'll chat with some of the data center folks who are all the way down at the Mandalay Bay.


Energy inefficiency forces an early retirement of the '09 record-holding supercomputer

There is lots of press coverage of the IBM Roadrunner supercomputer at Los Alamos National Laboratory being turned off.

First Petaflop Supercomputer, 'Roadrunner,' Decommissioned

PC Magazine, by David Murphy
If you need to take a moment to think of a joke about a particular speedy bird and its coyote companion, we understand. Otherwise, it's time to raise a toast today to one of the computing world's heavyweights, the first supercomputer that ever managed to hit a ...

The one I found most useful is the Ars Technica article, which mentions the energy efficiency.

Petaflop machines aren't automatically obsolete—a petaflop is still speedy enough to crack the top 25 fastest supercomputers. Roadrunner is thus still capable of performing scientific work at mind-boggling speeds, but has been surpassed by competitors in terms of energy efficiency. For example, in the November 2012 ratings Roadrunner required 2,345 kilowatts to hit 1.042 petaflops and a world ranking of #22. The supercomputer at #21 required only 1,177 kilowatts, and #23 (clocked at 1.035 petaflops) required just 493 kilowatts.

Given the high power consumption, it seems most likely this figure is the actual IT power draw, not including the additional power for the cooling system.  A pre-2009 supercomputer would most likely add over 50% for the cooling system, so the total could easily be 3.5 MW of power.
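As a rough sketch of that arithmetic in Python: the kilowatt and petaflop figures come from the Ars Technica quote above, and the 1.5x cooling multiplier is my assumption, not a published number.

```python
# Back-of-envelope figures from the Ars Technica quote above.
# The 1.5x cooling multiplier is an assumption, not a published number.

systems = {
    "Roadrunner (#22)": {"petaflops": 1.042, "it_kw": 2345},
    "#23 system":       {"petaflops": 1.035, "it_kw": 493},
}

COOLING_MULTIPLIER = 1.5  # assume 50%+ overhead for pre-2009 cooling

for name, s in systems.items():
    # Mflops per watt of IT power: (PF * 1e9 Mflops) / (kW * 1e3 W)
    mflops_per_watt = s["petaflops"] * 1e9 / (s["it_kw"] * 1e3)
    total_mw = s["it_kw"] * COOLING_MULTIPLIER / 1e3
    print(f"{name}: {mflops_per_watt:,.0f} Mflops/W, "
          f"~{total_mw:.1f} MW total with assumed cooling")

# Roadrunner (#22): 444 Mflops/W, ~3.5 MW total with assumed cooling
# #23 system: 2,099 Mflops/W, ~0.7 MW total with assumed cooling
```

The gap is stark: the #23 system delivers nearly 5x the work per watt, which is the whole case for retiring Roadrunner early.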

Supercomputers are regularly rated on their energy use, and the author highlights the need for better performance per watt.

"Future supercomputers will need to improve on Roadrunner’s energy efficiency to make the power bill affordable," Los Alamos wrote. "Future supercomputers will also need new solutions for handling and storing the vast amounts of data involved in such massive calculations."

What's up with Small Nuclear Reactors?

There are a fair number of ex-nuclear sub staff who work in data centers, so it is possible the idea of a small nuclear plant could follow at some point far in the future.  MIT Technology Review discusses the current state of small nuclear reactors.

Nuclear option: Babcock & Wilcox’s proposed power plant is based on two small modular nuclear reactors.

Small, modular nuclear reactor designs could be relatively cheap to build and safe to operate, and there’s plenty of corporate and government momentum behind a push to develop and license them. But will they be able to offer power cheap enough to compete with natural gas? And will they really help revive the moribund nuclear industry in the United States?

Last year, the U.S. Department of Energy announced that it would provide $452 million in grants to companies developing small modular reactors, provided the companies matched the funds (bringing the total to $900 million). In November it announced the first grant winner—Babcock & Wilcox, a maker of reactors for nuclear ships and submarines—and this month it requested applications for a second round of funding. The program funding is expected to be enough to certify two or three designs.

Natural gas is so cheap it is hard to imagine a small nuclear plant being deployed any time soon for a data center.

Even if small reactors can compete with conventional nuclear power, they still might not be able to compete with natural-gas power plants, especially in the United States, where natural gas is cheap (see “Safer Nuclear Power, at Half the Price”). Their success will depend on how much utilities think they need to hedge against a possible rise in natural-gas prices over the lifetime of a plant—and how much they believe they’ll be required to reduce carbon dioxide emissions.

“At the end of the day, we’ll build the lowest-cost option for ratepayers,” Cryderman says. “If it’s too expensive, we won’t build it.” The challenge, he says, is predicting what the lowest-cost options will be over the decades new plants will operate.

One of these days natural gas will not be plentiful, and nuclear is going to be one of those options that may make sense.

Is Dropbox a cause of FedEx losing customers who want it there next day?

WSJ has an article on FedEx needing to make changes to its fleets as customers want their deliveries slower and cheaper.

Things that absolutely positively have to be there overnight don't absolutely positively have to be there overnight anymore.

Nowhere was that more evident than when FedEx Corp. on Wednesday reported that its quarterly profit plunged 31% as its international customers and shippers flocked to slower, cheaper delivery options instead of its premium-priced express service.

Now the company whose name is synonymous with overnight delivery—and has built the world's largest air express fleet—must make even deeper changes that take it farther away from its roots. It has already tweaked capacity and deferred plane purchases. Now it will shrink its global network, possibly its air fleet and direct more business via third-party alternatives like ships, commercial airlines and third-party shippers.

Besides the shift in next-day big packages, I wonder how much next-day document delivery, which I would guess is a much higher-margin business than shipping iPhones around the world, is impacted.

We have all suffered through 5 MB document limits on e-mail delivery.  Using Gmail you might be able to send a big document, but the receiver couldn't accept it in their e-mail system.  With the growth of Dropbox, Box, Google Drive, and Microsoft SkyDrive, you can now send GB-sized documents to users in a text or e-mail for little cost, with delivery in seconds.

In the WSJ article FedEx discloses the shift to deferred shipments from Asia.

Though you would think this "was a good problem to have," Dave Bronczek, CEO of FedEx Express, told analysts in an earnings call, the latest gain was driven by a 12% growth in deferred international export traffic, mainly led out of Asia and Europe. So while the company had "a lot of freight on our planes; our high load factors, quite frankly, were driven by the deferred traffic. So we had lower yields. We had more traffic, higher pounds—all in deferred traffic," Mr. Bronczek said. International export priority volume inched up only 2%.

Somewhere buried in those next-day deliveries are probably documents, which don't weigh much but push the revenue up.

2 - 5 GB of free storage isn't enough.  I have 27 GB with SkyDrive, and pushing up 1 GB of photos is something I don't even think about.

FedEx could have created a Dropbox type of product, but there is no way they could have charged the kind of money they make on a next-day delivery.  FedEx tried a fax service in 1984 that didn't work.

A new facsimile delivery service, known as ZapMail, made its debut in 1984. It guaranteed delivery of five pages or less in less than two hours for $35. That year, the firm made its first acquisition, package courier Gelco Express. Other acquisitions soon followed, including businesses in Europe and the Middle East. International expansion continued in 1985 when Federal Express established a European headquarters in Brussels, Belgium. Sales grew to $2 billion.

The company's ZapMail service proved unprofitable. As a result, Federal Express discontinued it in 1986.

Read more: Fedex Corp - Early History - Express, Federal, Firm, and Delivery: http://ecommerce.hostip.info/pages/443/Fedex-Corp-EARLY-HISTORY.html

The Perfect Data Center

Dilbert's Scott Adams has a blog post on the Perfect Room and a piece of software that could support it.

You often see rooms that can't be furnished properly because furniture placement was an afterthought. The design of a room should start with the perfect arrangement of furniture and fixtures. I would think that for every budget and set of preferences there are a few furniture arrangements that stand out as the best. How hard would it be to catalog those best arrangements?

I imagine a time when a user can design a home simply by checking boxes on a long digital form. Questions for a living room might include:

1. Do you want a TV in this room?
2. Do you want a cozy reading chair?
3. Do you want a fireplace?
4. Etc.

Once the user selects all of his preferences for each room, he clicks a "shuffle" button and it spits out a house layout complete with external windows, doors, hallways, stairs, and engineering support structures. All of that stuff is fairly rules-based. If you don't like the first design, click the shuffle button again. In every case, the rooms will have exactly the features you specified but arranged differently. And of course you can walk through your model in 3D mode.

Scott closes with points on the savings and the issues:

1. Rooms that need plumbing should be near each other to reduce costs.
2. Orientation to the sun makes a huge difference in heating/cooling/insulation.
3. Some designs require fewer hallways, which saves space.
4. Some designs require more support structures, doors, windows, etc.
5. Some designs have ductwork issues.

Those are just some obvious examples of potential savings. You'd also cut your architect expense by 80%. And you'd save on labor and materials because the building materials would be measured and cut at the factory, including everything from lumber to floor tiles to carpet.

My observation is that the building industry is slow to innovate and fairly disorganized. Builders, architects, and materials companies are all their own little silos. So my guess is that the "shuffle design" program will originate in some sort of online game environment before it gets ported to the real world.
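Scott's shuffle idea is essentially a constraint-driven layout generator. Here is a minimal toy sketch in Python; the feature names and the single rule are my own hypothetical illustrations, not anything from Scott's post:

```python
import random

# Hypothetical toy sketch of a "shuffle" layout generator; the feature
# names and the one rule below are illustrations, not Scott's design.

def fireplace_not_next_to_tv(layout):
    """Rule: keep the fireplace at least two wall slots from the TV."""
    if "fireplace" not in layout or "tv" not in layout:
        return True
    return abs(layout["fireplace"] - layout["tv"]) > 1

RULES = [fireplace_not_next_to_tv]

def shuffle_layout(selected_features, wall_slots=8):
    """Shuffle random wall positions until every rule passes."""
    while True:
        slots = random.sample(range(wall_slots), len(selected_features))
        layout = dict(zip(selected_features, slots))
        if all(rule(layout) for rule in RULES):
            return layout

# The user ticks boxes on the form, clicks "shuffle", and gets one valid
# arrangement; clicking again produces a different valid arrangement.
print(shuffle_layout(["tv", "fireplace", "reading_chair"]))
```

A real version would need far richer rules (plumbing adjacency, sun orientation, ductwork), but the rules-based core is exactly this: generate, check constraints, reshuffle.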

I think this is what Compass Data Centers is attempting to do.

If you’ve ever sat behind a foul pole at a baseball game you know what a pain columns can be. That’s why your 10,000 square feet of 36” raised floor is column free. At Compass, your data center floor accommodates anything from a tape library or a Cisco 7000 with side-to-side airflow to OpenCompute’s new larger rack sizes. This degree of flexibility even extends to cable management to support your preference whether it’s above the rack, hanging from the ceiling, or below the floor. At Compass, the only option you don’t have is to strand your IT capacity.

Speaking of your data center floor flexibility, can you control how much the software uses of the server and storage capacity? We didn’t think so. The reality of the world is that most software does not drive the full use of the server (virtualized or otherwise). As a result, it’s tough to predict what your actual usage will be from rack-to-rack. Not to mention the patch, network and storage…That’s why your Compass solution will support rack densities that cover the spectrum up to 20kW without containment (from 0 to 400 W/sf). Just imagine what you can do using ASHRAE TC 9.9 best practices including containment.
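As a back-of-envelope check on how those quoted numbers relate, here is a quick sketch; the figures come straight from the Compass quote, and the arithmetic just connects rack density to floor loading:

```python
# Rough check of how the quoted Compass figures relate: rack density
# (kW/rack) vs. floor loading (W/sq ft). Figures are from the quote.

FLOOR_SQFT = 10_000        # column-free raised floor from the quote
MAX_W_PER_SQFT = 400       # top of the quoted 0-400 W/sf range
MAX_RACK_KW = 20           # quoted max rack density without containment

# Whole-floor power at the top of the density range.
total_kw = FLOOR_SQFT * MAX_W_PER_SQFT / 1_000
print(f"Full floor at {MAX_W_PER_SQFT} W/sf: {total_kw:,.0f} kW")  # 4,000 kW

# Gross floor area each 20 kW rack needs to stay within 400 W/sf.
sqft_per_rack = MAX_RACK_KW * 1_000 / MAX_W_PER_SQFT
print(f"A {MAX_RACK_KW} kW rack implies ~{sqft_per_rack:.0f} sq ft "
      f"of gross floor per rack")                                  # 50 sq ft
```

In other words, the two quoted limits are consistent as long as each 20 kW rack is allocated roughly 50 square feet of gross floor, aisles included.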

Perfection is hard to achieve, though; once you live in the space, you find out things you didn't consider.