When will Wal-Mart green its data center? Next data center destined for high-carbon Colorado Springs

Data Center Knowledge reports on Wal-Mart's selection of Colorado Springs as a new data center site.

Wal-Mart Confirms Colorado Springs Project

July 22nd, 2011 : Rich Miller

Wal-Mart confirmed Thursday that it will build a major corporate data center in Colorado Springs, bolstering efforts by local officials to promote the city as a data center destination. Construction costs for the new data center are estimated at $100 million, and initially the data center would need 20 to 40 full-time employees with annual salaries of $30,000 to $70,000.

The Colorado Springs Gazette reports more details from local economic development officials.

City officials say based on their estimates, and using information provided by Wal-Mart, the facility will cost about $100 million to build; the company declined to disclose the cost. Wal-Mart also is expected to invest another $50 million to $100 million in machinery and equipment over the initial 15-year life of the facility, city officials say.

The center will employ about 30 people with salaries of $30,000 to $70,000, Phair said. It will be built on 24 acres Wal-Mart has contracted to buy southeast of InterQuest and Voyager parkways on the city’s far north side; construction is scheduled to begin in October and is expected to be completed in late 2012, Phair said.

Wal-Mart is following the Fight Club rule in data centers: you don't talk about your data centers.

In Wal-Mart’s case, “the new data center will have strategic importance to our business and help us serve our customers more effectively,” Rollin Ford, Wal-Mart’s executive vice president and chief information officer, said in a statement. The company declined to be more specific about what operations will take place in the Springs.

Colorado Springs officials touted how they beat North Carolina.

Among reasons Colorado Springs was chosen, Phair said: low-cost and reliable electricity, since data centers consume vast amounts of power; an available and highly educated work force; and a location that’s largely free from natural disasters. Financial incentives also were a factor, Phair said.

To land the data center, Colorado Springs beat out Charlotte, N.C., which Quimby and White said had offered incentives totaling about $25 million.

Wal-Mart states its sustainability goals on its sustainability web site.

Sustainability

At Walmart, we know that being an efficient and profitable business and being a good steward of the environment are goals that can work together. Our broad environmental goals at Walmart are simple and straightforward:

  • To be supplied 100 percent by renewable energy;
  • To create zero waste;
  • To sell products that sustain people and the environment.

When you look at CARMA.org's web site for Colorado Springs power, the carbon impact looks large.

[Image: CARMA.org carbon emissions data for Colorado Springs power]

I wonder what Wal-Mart's plans are to be supplied by 100 percent renewable energy for its new Colorado Springs data center.  Here is a greener hybrid truck Wal-Mart uses.

Walmart Hybrid Assist Truck

Walmart is working with manufacturing partners to develop new technologies to help reduce our environmental footprint, are viable for our business and provide a return on investment. This truck is a hybrid assist, which means the batteries kick in when the truck needs more power (at start up or going up a hill). Freightliner is testing a new location for the batteries – on the back axel, which is designed to put power where it’s needed and mitigate power loss. This truck represents a test for Walmart. We will learn from this vehicle and work with Freightliner to continue to enhance the technology.

Intel joins the networking industry, buys Fulcrum Microsystems for 10/40 GbE chips

Intel has a press release announcing its acquisition of Fulcrum Microsystems.

“Intel is transforming from a leading server technology company to a comprehensive data center provider that offers computing, storage and networking building blocks,” said Kirk Skaugen, Intel vice president and general manager, Data Center Group. “Fulcrum Microsystems’ switch silicon, already recognized for high performance and low latency, complements Intel’s leading processors and Ethernet controllers, and will deliver our customers new levels of performance and energy efficiency while improving their economics of cloud service delivery.”

10 Gigabit Ethernet (10GbE) networks are one of the fastest-growing market segments in the data center today. As demand for data continues to increase, there is a growing need for high-performance, low-latency network switches to support evolving cloud architectures and the growth of converged networks in the enterprise. Fulcrum Microsystems designs integrated, standards-based 10GbE and 40 Gigabit Ethernet (40GbE) switch silicon that have low latency and workload balancing capabilities while helping provide superior network speeds.

What is part of Intel's motivation?

Cloud computing is driving the convergence of server, storage and network technologies and solutions based around Intel® Xeon® processor solutions.

It points to a future with Intel Xeons at the center, and Intel setting the performance standard for converged infrastructure: server, storage, and network.

Here is a post by Rob Enderle on what he observed.

A few years ago, I attended an Intel Labs presentation and one of the more interesting segments was on a technology it was quietly developing for large network switches. Intel argued that it could do to the very expensive and high-margin switch business what it did to UNIX servers over the last two decades to cut costs dramatically.

Apparently, Intel has now started executing on that strategy with the acquisition of Fulcrum Microsystems, a fabless semiconductor and related software vendor, targeting low-cost, high-performance, high-end switches.

This will be good news for HP, but bad news for Cisco. Let me explain.

Another way to green the data center, changing the flow of the bits

I've been spending a lot of time researching ideas to green the data center, but not on the facilities side, which has seen great achievements over the past five years as PUE created a focus on the energy efficiency of the mechanical systems in the data center.  In what area, then? Consider this Intel statement regarding its acquisition of Fulcrum Microsystems.

"Our customers are looking to purchase compute, networking and storage as one unit," said Steve Schultz, director of marketing at Intel.

When talking to some senior software architects, I test the idea that the next big operating system will not run on a single server, but across servers, network, and storage devices.

Greening the data center at the server-OS level is frustrating because you can't see the bits move onto and off the server through the network and access devices like storage.  If you want to green the data center, you want to use the least amount of energy to process bits into higher-value bits.

To support better communication for the OS, there are things that can be done at the chip device level.

The key to these types of initiatives is to make servers and network components aware of each other so they can work more closely together, said Yankee Group analyst Zeus Kerravala. As applications on dedicated servers give way to virtual machines that can be moved around for greater efficiency, it's harder for IT administrators to keep network policies up to date manually, he said. Such an effort is both time-consuming and prone to human errors, which are the leading cause of network downtime, according to Kerravala.

No other vendor yet has all the pieces for the kind of tight integration Cisco has achieved, Kerravala said. If Intel makes server and switch chips that can talk to each other at that level, smaller manufacturers will be able to offer coordination with products from other vendors that also use Intel.

HP and Dell have converged infrastructure initiatives.

  • DRIVE BUSINESS GROWTH - by accelerating IT innovation and responsiveness
  • MANAGE RISKS - by accelerating security and disaster recovery
  • LOWER COSTS - by accelerating ROI and sustainability

...

Intelligent Infrastructure: Redefining Efficiency in the Virtual Era

Imagine a truly efficient data center: What if you could break down your silos of isolated IT resources? What if you could manage your entire data center as a single pool of servers, storage and networking?

2x Storage and Performance for less cost, a greener storage solution?

StorageMojo has a post on Backblaze’s most recent open source storage solution.

2 years ago Backblaze, an online backup provider, open-sourced their storage pod design: 45 drives in a box (see Build a RAID 6 array for $100/TB). Now they’re back with v2: 45 3TB drives in a box with higher performance.

Backblaze uses this device for its storage service and compares its costs with Amazon's S3.

And the savings over renting cloud storage can be substantial as this Backblaze chart suggests:

True, Amazon provides many more services, but if you need petabytes for mini-bucks, this is hard to beat.

The Backblaze blog discusses its own savings.

Density Matters – Double the Storage in the Same Enclosure

We upgraded the hard drives inside the 4U sheet metal pod enclosure to store twice as much data in the same space. After the cost of filling a rack with pods, one datacenter rack containing 10 pods costs Backblaze about $2,100 per month to operate, roughly divided equally into thirds for physical space rental, bandwidth, and electricity. Doubling the density saves us half of the money spent on both physical space and electricity. The picture below is from our datacenter, showing 15 petabytes racked in a single row of cabinets. The newest cabinets squeeze one petabyte into three-quarters of a single cabinet for $56,696.

Backblaze Storage Servers in Datacenter

Our online backup cloud storage is our largest cost, and we are obsessed with providing a service that remains secure, reliable and, above all, inexpensive. We’ve seen competitors unable to react to these demands who were forced to exit the market, like Iron Mountain, or raise prices, like Mozy and Carbonite. Controlling the hardware design has allowed us to keep prices low.
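Those figures allow a quick back-of-the-envelope check. Here is a minimal sketch in Python using only the numbers from the quotes above; the per-terabyte breakdown is my own arithmetic (assuming 1,000 TB per petabyte), not Backblaze's:

```python
# Back-of-the-envelope check of the Backblaze figures quoted above.
# All inputs come from the quotes; nothing here is measured.
drives_per_pod = 45
tb_per_drive = 3                       # v2 pods use 3 TB drives
pods_per_rack = 10
rack_opex_per_month = 2100.0           # USD: space + bandwidth + power, split roughly in thirds

pod_tb = drives_per_pod * tb_per_drive              # 135 TB per pod
rack_tb = pod_tb * pods_per_rack                    # 1,350 TB per rack
opex_per_tb_month = rack_opex_per_month / rack_tb   # ~$1.56 per TB per month

# "one petabyte ... for $56,696" -> hardware cost per TB
hw_cost_per_tb = 56696 / 1000.0                     # ~$56.70 per TB

print(pod_tb, rack_tb)   # 135 1350
print(round(opex_per_tb_month, 2), round(hw_cost_per_tb, 2))
```

The doubling-density claim drops straight out of this: with space and electricity each roughly a third of that $2,100, twice the terabytes in the same rack cuts those two thirds in half on a per-terabyte basis.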

Is the SeaMicro SM10000 series the mainframe of Web 2.0 data centers?

There has been a bunch of press on SeaMicro's latest SM10000 product.  The press release is here.

SeaMicro Introduces the SM10000-64HD, Setting Industry Record for Energy Efficiency and Compute Density

July 17, 2011

With 384 Intel® Atom™ Dual-core 1.66 GHz Processors; 768 64-bit Cores and 1,275 GHz in a 10 Rack Unit System

SUNNYVALE, Calif., July 18, 2011 – SeaMicro™, the Silicon Valley pioneer of low power server technology, today announced the immediate availability of the world’s most energy efficient 64-bit x86 server: the SM10000-64HD™. SeaMicro has once again defined best in class by improving its own compute density record by 150 percent and increasing its own industry leading compute per-watt metric by 20 percent. The new SM10000-64HD replaces 60 traditional servers, four top of rack switches, four terminal servers and a load balancer while using one-fourth the power and taking one-sixth space—all without requiring any changes to software.
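The headline numbers in the press release are easy to sanity-check. A minimal sketch, using only figures quoted above and assuming the 60 "traditional servers" being replaced are 1U boxes:

```python
# Sanity check of the SM10000-64HD headline numbers from the press release.
processors = 384            # dual-core Intel Atom processors
cores = processors * 2      # 64-bit cores
ghz_per_core = 1.66
aggregate_ghz = cores * ghz_per_core   # aggregate clock across all cores

# The 10U system replaces 60 traditional servers; against 60 1U boxes
# (an assumption, not stated in the release) that is one-sixth the rack space.
rack_units = 10
replaced_1u_servers = 60
space_fraction = rack_units / replaced_1u_servers

print(cores)                 # 768
print(round(aggregate_ghz))  # 1275 -- matches the quoted "1,275 GHz"
```

So the 768-core and 1,275 GHz figures are internally consistent, as is the one-sixth-space claim under the 1U assumption.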

I have been talking to a couple of people who have hands-on experience with the SeaMicro boxes.  One way to think about the SeaMicro box is as a different type of mainframe.

Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual. Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster. Other server families also offload I/O processing and emphasize throughput computing.

Thanks to the processor and high-volume server wars, the data center is dominated by Intel Xeon-based servers, typically running in two-processor configurations.  Few 4-, 8-, or 16-processor configurations are sold.  For big iron for Amazon.com and eBay types of loads there are the Sun servers for big Oracle databases, but Sun no longer has the presence in data centers it used to.

SPARC Enterprise M9000 Server

Oracle's most scalable mission-critical server for the largest, most demanding workloads

Designed for mission-critical environments, Oracle's SPARC Enterprise M9000 server delivers massive scalability, with up to 64 processors and 256 cores for the most demanding virtualization, consolidation, and multi-hosting deployments.

SPARC Enterprise M9000

Now the problem with the mainframe metaphor is that mainframes are considered a dying breed, but look at how IBM has extended the life of the mainframe.

The safe thing to do for a Web 2.0 company is to continue down the path of low cost dual processor servers, networked with top of rack gigabit switches.

Even though some look at SeaMicro in terms of the Intel Atom processor, I pay more attention to how SeaMicro is solving the I/O and networking issues that are typical bottlenecks for throughput.  Consider this job posting at SeaMicro.

SeaMicro is looking for an experienced Senior Hardware Design Engineer to architect and implement a flexible and scalable networking solution for its next generation data center products. This is an excellent opportunity for high-energy candidates who can take a complex networking solution from conception, through execution, to first customer shipment.

Qualifications:

  • 10+ years of experience in high-performance/high-bandwidth micro-architecture
  • 10+ years of experience in Verilog RTL development, with some experience in design/development of networking chips
  • Experience with designs based on network processors, 10G interfaces, and DDR memory controllers desirable
  • Solid understanding of L2 Ethernet switching protocols including VLAN, Broadcast/Multicast, and LACP is a plus
  • Working knowledge of IPv4, IPv6, ACLs, and QOS