HP Announces Virtualization Blade - BL495c

HP (NYSE: HPQ) announced its BL495c, claiming it is the first server blade designed specifically for virtualization.

Overview

The HP ProLiant BL495c virtualization blade is the world’s first server blade designed specifically to host virtual machines. The BL495c is the ideal platform for virtualized environments that require significant memory, data storage and network connections to optimize server performance.


The BL495c eliminates performance bottlenecks by accelerating virtual server speed with twice the memory capacity, solid state drives that use significantly less power, and up to two more Ethernet network connections than competitive offerings.(1) Based on the typical configuration of 4 gigabytes (GB) per virtual machine (VM), the BL495c can support a minimum of 32 VMs. Customers filling an HP BladeSystem c7000 enclosure with 16 BL495c server blades can utilize up to 512 VMs versus 256 and 112 from the closest competitors.(2)
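HP's VM math is easy to check (a minimal sketch in Python; the 4 GB-per-VM assumption and the blade counts come from the announcement, and the 128 GB per blade is implied by 32 VMs at 4 GB each):

```python
# Back-of-the-envelope check of HP's VM-density claims.
GB_PER_VM = 4              # HP's "typical configuration" assumption
BLADE_MEMORY_GB = 128      # implied by 32 VMs x 4 GB on one BL495c
BLADES_PER_ENCLOSURE = 16  # BL495c blades filling a c7000 enclosure

vms_per_blade = BLADE_MEMORY_GB // GB_PER_VM
vms_per_enclosure = vms_per_blade * BLADES_PER_ENCLOSURE

print(f"VMs per blade:     {vms_per_blade}")      # 32
print(f"VMs per enclosure: {vms_per_enclosure}")  # 512
```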


“Customers looking to maximize the performance of their virtualized infrastructure have the answer with the BL495c. It was architected and optimized specifically for virtual machines,” said Mark Potter, vice president, HP BladeSystem. “With its memory, storage and I/O capabilities, the BL495c is unmatched in the industry and redefines server blades for virtualization.”

The BL495c ships with 10Gb NICs. In a video on this page, HP specifically calls out comparisons against IBM's and Dell's use of 1Gb NICs and half the DIMM slots. HP's support for SSDs is another differentiator.

10Gb vs. 1Gb. Twice the DIMMs. Support for SSDs.

It is interesting that HP has added all this I/O capability to a dual-processor, quad-core AMD blade. This supports the idea that I/O is the big issue in virtualization hardware.
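A rough per-VM bandwidth comparison shows why the NICs matter (a sketch only; it assumes two onboard 10Gb ports on the BL495c versus two 1Gb ports on a rival blade, shared evenly across the 32 VMs from the calculation above):

```python
# Per-VM network bandwidth: 10Gb NICs vs. 1Gb NICs, evenly shared.
VMS_PER_BLADE = 32

bl495c_mbps = 2 * 10_000 / VMS_PER_BLADE  # two 10Gb ports
rival_mbps = 2 * 1_000 / VMS_PER_BLADE    # two 1Gb ports

print(f"BL495c: {bl495c_mbps:.1f} Mb/s per VM")  # 625.0
print(f"Rival:  {rival_mbps:.1f} Mb/s per VM")   # 62.5
```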

This announcement was part of a bigger HP Virtualization initiative.

HP Encourages CIOs to Rethink Virtualization in Business Terms

HP has expanded its industry-leading virtualization portfolio to deliver better business outcomes for customers. To meet the growing demands of technology environments, HP is providing businesses with the virtualization tools and strategies needed to rethink infrastructure management, client architectures and barriers to building and organizing infrastructures.

HP's latest hardware, software and services offerings enable enterprises to blend physical and virtual assets to expand virtualization beyond the data center, providing new ways for customers to accelerate growth, lower cost and mitigate risk.

Read more

Dell Trading Up from Manufacturing Hardware to Manufacturing Cloud Computing

There is a lot of coverage of Dell selling its manufacturing operations.

GigaOm has a piece speculating that Dell is getting out of the hardware manufacturing business in preparation for building cloud computing.

Just a day after Dell launched its own line of mini Inspirons, and after CEO Michael Dell said carriers would likely subsidize such netbooks, creating smaller price tags, the Wall Street Journal speculates that Dell will sell its manufacturing plants, shrinking its operations. This would be good for Dell because it would give it a chance to ditch an aspect of its business with diminishing returns and go after a growth area, like cloud computing.

...

The hardware and operations that comprise a computing cloud will be a low-margin business for those offering it. If Dell can take the lessons of squeezing the costs from a low-margin business like building computers and translate that into helping build, deliver and operate clouds most efficiently, it could win. By tying its range of consumer and corporate devices back into such clouds, it could become a powerful business generator for cloud providers.

When it comes to the utility industry (which is what many cloud providers like to compare their business to), GE sells billions in equipment and services to providers. Dell has made some acquisitions that get it started down that road offering both services and equipment to clouds, but it also has a company culture of exactitude and discipline that can’t be bought. I think if Dell can dump its manufacturing plants, it will head for the clouds.

If there is a Dell-built cloud, can Dell build better devices for it?

Read more

Sun's Energy Efficient Data Center: The Role of Modularity in Datacenter Design

Sun has a PDF, Energy Efficient Datacenters: The Role of Modularity in Datacenter Design, and a wiki post.

Energy Efficient Datacenters: The Role of Modularity in Datacenter Design

by Dean Nelson, Michael Ryan, Serena DeVito, Ramesh KV, Petr Vlasaty, Brett Rucker, and Brian Day
June, 2008

Virtually every Information Technology (IT) organization and the clients that they serve have dramatically different requirements that impact their datacenter designs. Sun is no exception to this rule. As an engineering company, Sun has cross-functional organizations that manage the company's corporate infrastructure portfolio including engineering, services, sales, operations, and IT.

On the surface, the datacenters supporting these different organizations look as different as night and day - one looks like a computer hardware laboratory and another looks like a lights-out server farm. One has employees entering and leaving constantly, and another is accessed remotely and could be anywhere. One may be housed in a building, and another may be housed within an enhanced shipping container. Beneath the surface, however, our datacenters have similar underlying infrastructure including physical design, power, cooling, and connectivity.

At first I thought the document was going to be about Sun's containers, but as I continued through it, more details emerged on modular power, cooling, and cabling. The following is from the summary.

The last thing that a datacenter design should do is get in the way of a company’s ability to conduct business. Traditional datacenter designs can do just that. Cooling via raised floors and perimeter CRAC units limit the ability to increase density and achieve energy efficiency. Power distribution units and under-floor whips limit flexibility and require downtime for reconfiguration. Home-run, under-floor cabling makes growth difficult, impacts cooling and raises costs.


Datacenter designs that facilitate — rather than limit — growth, density, flexibility and rapid change can be a company’s competitive weapon. At Sun, our modular, pod-based datacenters can turn on a dime whenever business directions change, from accommodating new equipment in our pods to expanding our rack footprint by deploying additional Sun Modular Datacenters. We can accommodate growth and increases in density because three key datacenter functions — power, cooling, and cabling — are prepared from day one to support an overall doubling in each area.
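The "doubling" rule from that summary is simple enough to state as a provisioning formula (a minimal sketch; the pod numbers below are invented, and only the 2x day-one headroom rule comes from the paper):

```python
# Sun's day-one rule from the paper: provision power, cooling, and
# cabling to support a doubling of the initial design load.
HEADROOM = 2.0

def day_one_provisioning(design_load):
    """Capacity to install on day one, per datacenter function."""
    return {area: load * HEADROOM for area, load in design_load.items()}

pod = {"power_kw": 200, "cooling_kw": 200, "cable_runs": 512}  # hypothetical pod
print(day_one_provisioning(pod))
# {'power_kw': 400.0, 'cooling_kw': 400.0, 'cable_runs': 1024.0}
```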

The document is 58 pages with the following content.

  • The Role of Modularity in Datacenter Design
    • Choosing Modularity
    • Defining Modular, Energy-Efficient Building Blocks
    • Buildings Versus Containers
    • Cost Savings
    • About This Article
  • The Range of Datacenter Requirements
    • Power and Cooling Requirements
      • Using Racks, Not Square Feet, as the Key Metric
      • Temporal Power and Cooling Requirements
      • Equipment-Dictated Power and Cooling Requirements
    • Connectivity Requirements
    • Equipment Access Requirements
    • Choosing Between Buildings and Containers
    • Living Within a Space, Power, and Cooling Envelope
      • Space
      • Power
      • Cooling
    • Calculating Sun's Santa Clara Datacenters
      • Efficiency in Sun's Santa Clara Software Datacenter
  • Sun's Pod-Based Design
    • Modular Components
    • Pod Examples
      • Hot-Aisle Containment and In-Row Cooling
      • Overhead Cooling for High Spot Loads
      • A Self-Contained Pod - the Sun Modular Datacenter
  • Modular Design Elements
    • Physical Design Issues
      • Sun Modular Datacenter Requirements
      • Structural Requirements
      • Raised Floor or Slab
      • Racks
      • Future Proofing
    • Modular Power Distribution
      • The Problem with PDUs
      • The Benefits of Modular Busway
    • Modular Spot Cooling
      • Self-Contained, Closely Coupled Cooling
      • In-Row Cooling with Hot-Aisle Containment
      • Overhead Spot Cooling
      • The Future of Datacenter Cooling
    • Modular Cabling Design
      • Cabling Best Practices
  • The Modular Pod Design at Work
    • Santa Clara Software Organization Datacenter
    • Santa Clara Services Organization Datacenter
    • Sun Solution Center
    • Guillemont Park Campus, UK
    • Prague, Czech Republic
    • Bangalore, India
    • Louisville, Colorado: Sun Modular Datacenter
  • Summary
    • Looking Toward the Future

Read more

Technical Details Behind Intel’s Power Gating & Turbo Mode

Ars Technica has technical details on Intel’s power gating and turbo mode.

First, an overview:

Power gating, turbo mode, and the PCU

Most of the Nehalem disclosures in Gelsinger's keynote, and in the subsequent technical session on Nehalem, had to do with the new microarchitecture's power management capabilities. With Nehalem, Intel is introducing a technology that it calls "power gating." Traditionally, Intel has been able to shut down an unused core by cutting its active power, but even though it's in a sleep state, that core is still dissipating plenty of power because of leakage current. Intel's power gating technique involves a new transistor design, and it lets Intel cut the leakage current, as well, so that the sleeping core's power dissipation drops to near zero.

When one or more of the cores on a Nehalem chip are powered down, the processor can divert extra power to the cores that are in use by increasing their clockspeed and voltage. (This is kind of like the "divert power to the main thrusters" thing that Scotty would always do in Star Trek.) This gives the active cores extra performance headroom while permitting the overall processor to remain in the same power envelope, and Gelsinger noted that it effectively gives each core two extra speed bins worth of performance.
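Gelsinger's "two extra speed bins" comment translates into simple arithmetic (an illustrative sketch, not Intel's actual algorithm; the 133 MHz bin size is Nehalem's clock increment, and the base frequency here is made up):

```python
# Illustrative turbo-mode math for a quad-core Nehalem-class chip.
BIN_MHZ = 133          # one Nehalem "speed bin" (bus-clock increment)
BASE_MHZ = 2666        # hypothetical base frequency
MAX_EXTRA_BINS = 2     # Gelsinger: up to two extra bins per core

def turbo_mhz(active_cores, total_cores=4):
    """Clockspeed of the active cores once idle cores are power-gated."""
    idle = total_cores - active_cores
    extra_bins = min(idle, MAX_EXTRA_BINS)  # capped by the power envelope
    return BASE_MHZ + extra_bins * BIN_MHZ

for n in range(1, 5):
    print(f"{n} active core(s): {turbo_mhz(n)} MHz")
# 1 or 2 active: 2932 MHz; 3 active: 2799 MHz; 4 active: 2666 MHz
```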

Here are technical details on what is on the chip.

Nehalem's power gating, turbo mode, and other dynamic power saving features involve a complex coordination of a huge array of on-die sensors and circuit blocks on different parts of the chip, a feat that would have been impossible to pull off on a four- or eight-core processor without some centralized means of processing sensor data and orchestrating the power changes. So for Nehalem, Intel's designers have introduced a brand new major functional block, the power control unit (PCU).

Nehalem's PCU is a relatively large programmable microcontroller that uses firmware to implement various sophisticated power optimization algorithms, and committing such a large amount of die space to such a complex block is a fairly dramatic move, especially in the name of power savings. I don't remember the exact transistor count (it's in the millions, and Gelsinger remarked that it was the size of a 486), but Intel is paying a hefty price in transistors and power for this unit. This being the case, it must have a dramatic impact on Nehalem's overall power consumption under load if it's able to more than pay for itself in net power savings.
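The description suggests the PCU runs a sense-decide-act loop over the cores. Here is a purely illustrative sketch of one control interval (Intel has not published the PCU's firmware or algorithms; every name and threshold below is invented):

```python
# Illustrative single pass of a PCU-style control loop: read on-die
# sensor data, gate idle cores, and spend the freed power budget on
# speed bins for the busy ones. All names and thresholds are invented.

BIN_COST_W = 10  # assumed power cost of one extra speed bin

def pcu_step(telemetry, tdp_watts):
    """telemetry: {core_id: {"utilization", "watts", "temp_c"}}.
    Returns {core_id: action} for one control interval."""
    actions = {}
    active = {c: t for c, t in telemetry.items() if t["utilization"] > 0.0}
    for core in telemetry:
        if core not in active:
            actions[core] = "power_gate"          # cut leakage to near zero
    budget = tdp_watts - sum(t["watts"] for t in active.values())
    for core, t in active.items():
        if budget >= BIN_COST_W and t["temp_c"] < 85:
            actions[core] = "raise_speed_bin"     # turbo within the envelope
            budget -= BIN_COST_W
        else:
            actions[core] = "hold"
    return actions

# One interval on a quad-core part: two busy cores, two idle ones.
print(pcu_step(
    {0: {"utilization": 0.9, "watts": 30, "temp_c": 70},
     1: {"utilization": 0.8, "watts": 28, "temp_c": 72},
     2: {"utilization": 0.0, "watts": 2, "temp_c": 40},
     3: {"utilization": 0.0, "watts": 2, "temp_c": 41}},
    tdp_watts=95))
```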

It will be interesting to see what real-world performance tests show.

Read more

More News on Microsoft’s Container Data Center, But What is Inside?

News.com has an article on Microsoft’s Container Data Centers.

Microsoft's data centers growing by the truckload

Posted by Ina Fried

Once upon a time, Microsoft used to fill its data centers one server at a time. Then it bought them by the rack. Now it's preparing to load up servers by the shipping container.

Starting with a Chicago-area facility due to open later this year, Microsoft will use an approach in which servers arrive at the data center in a sealed container, already networked together and ready to go. The container itself is then hooked up to power, networking and air conditioning.

"The trucks back 'em in, rack 'em and stack 'em," Chief Software Architect Ray Ozzie told CNET News. And the containers remain sealed, Ozzie said. Once a certain number of servers in the container have failed, it will be pulled out and sent back to the manufacturer and a new container loaded in.

It's just one way that Microsoft is trying to cope in a world where it adds roughly 10,000 servers a month.

As much talk as there is about the containers, one thing we do not know is what is inside them. Here are hints that Microsoft is pushing for higher efficiency server hardware.

Gone are the days in which Microsoft settled for off-the-shelf hardware to fill its server farms. These days, Microsoft is looking for servers designed to its exact needs. It's not just that Microsoft doesn't want servers that have keyboard or USB ports--it wants motherboards that don't even have the added wiring necessary to support those things that it will never use. Such moves eliminate cost, space and power consumption.

"We are not physically building our servers, but there is very deep engagement (with the computer makers)," Josefsberg said.

Even a 1 percent or 2 percent reduction in power consumption makes a big difference, Josefsberg said. As it is, Microsoft is trying to cram a whole lot of gear in a small space. While server racks at a Web hosting facility might have power densities of 70 to 100 watts per square foot, things are packed far more tightly in the containers, which might be consuming thousands of watts of power per square foot.
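Those density figures are easy to sanity-check (a rough sketch; the container footprint, server count, and per-server draw are my assumptions, not Microsoft's):

```python
# Rough power-density comparison: hosted racks vs. a packed container.
CONTAINER_SQ_FT = 8 * 40  # footprint of a standard 40-foot container
SERVERS = 2500            # ballpark servers per container (assumed)
WATTS_PER_SERVER = 250    # assumed average draw per server

density = SERVERS * WATTS_PER_SERVER / CONTAINER_SQ_FT
print(f"Container: {density:,.0f} W/sq ft")  # ~1,953
print("Hosted rack floor: 70-100 W/sq ft (per the article)")
```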

Watch for the Server OEMs making noise about new server SKUs. I wrote about how Microsoft is influencing the industry with its purchasing power.

Given that purchasing by Microsoft's data center properties (search, Hotmail, maps, etc.) is now driving Server OEMs with custom RFPs like the CBlox RFP, OEMs are building exactly what Microsoft wants to run a more efficient data center. And, versus Google's model of requiring exclusive designs no one else in the industry can purchase, the Microsoft SKUs spill into the rest of the market.

People can argue the benefits of containers, but listening to Mike Manos and Christian Belady, part of what the containers give Microsoft is a method to determine the compute-per-watt efficiency of what is in the container.

Read more