
    IBM partners with APC to reduce up front capital costs and TCO

    Data center construction is typically expensive and has long lead times.  Modularity and containers are often discussed as ways to address these issues.  IBM’s partnership with APC is one effort to change the data center construction industry.



    The official press release is here.

    APC and IBM Announce Availability of the IBM Portable Modular Data Center Solution Based on APC’s Award-Winning InfraStruxure® Architecture

    West Kingston, RI, January 11, 2010 – APC by Schneider Electric, a global leader in integrated critical power and cooling services, today announced an expanded relationship with IBM to offer an IBM Portable Modular Data Center container version based on APC’s award-winning InfraStruxure® architecture and IBM’s global services capabilities. IBM’s PMDC provides a fully functional data center in a shipping container with a complete physical infrastructure including power and cooling systems and remote monitoring. By integrating APC InfraStruxure products into the container, it builds on the global alliance between APC and IBM announced in 2006, when APC was selected as a key data center physical infrastructure provider to IBM's Scalable Modular Data Center (SMDC), and later when APC solutions were chosen as the foundation for the IBM High Density Zone (HDZ) solution, which allows customers to deploy a high density environment rapidly within an existing data center.

    With HP’s acquisitions of EYP and EDS, IBM needs to work on end-to-end solutions in data centers.

    The partnership enables clients to quickly design and build a data center in nearly any working environment using IBM Global Services’ capabilities and a standardized data center architecture, reducing up-front capital and ongoing operational costs.

    One of the biggest obstacles to this approach will be entrenched IT and facilities organizations that are used to the status quo of data center construction and operation.  But if anyone has the ability to reach the ears of the CIO and CFO, it is IBM.

    I am currently evaluating whether I’ll attend IBM’s Pulse 2010 event in Las Vegas, Feb 21-24.




    Fuel Cell facts published by US DOE

    Fuel cells are getting more news coverage and have potential for use in data centers.  Here is a DOE site that provides some good facts on the current and projected numbers for operating fuel cells.

    One set of numbers is for natural gas, and another is for diesel.


    What I didn’t know was the current standard for start-up time from 20 degrees C: 60 minutes for a natural gas CHP solution.


    UTC Power explains two different fuel cell technologies that illustrate the start-up time issue.

    Phosphoric Acid fuel cells (PAFCs):  Phosphoric acid fuel cells use liquid phosphoric acid as the electrolyte. UTC Power's family of stationary power plants, produced since 1991, are PAFC power plants and highly efficient: total efficiency of 90 percent is achievable when waste heat produced by the fuel cell is used for co-generation. PAFC power plants are usually large, heavy and require warm-up time. Because of this, PAFCs are used mainly for stationary applications.

    Proton Exchange Membrane fuel cells:  PEM fuel cells, also known as polymer electrolyte fuel cells, are a type of fuel cell currently under development at most fuel cell companies. PEM fuel cells use a thin solid membrane as an electrolyte. These fuel cells deliver high power density and offer the advantages of low weight and volume, compared to other fuel cells. These fuel cells also operate at relatively low temperatures, around 175°F. Low temperature operation allows them to start quickly (less warm-up time), which makes them particularly well suited for transportation applications such as automobiles and fleet vehicles.
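    The two passages quote temperatures in different units (20 degrees C for start-up, 175°F for PEM operation).  A one-line conversion puts the PEM figure alongside the Celsius start point:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# PEM operating temperature from the UTC Power description above
print(round(f_to_c(175), 1))   # 79.4 degrees C, vs. the 20 C ambient start point
```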

    So, if you’re thinking of fuel cells in the data center, you may want to use them as the primary power source and have the grid as back-up.  DataCenterKnowledge referenced customers who currently take this approach.

    Hydrogen Fuel Cells Power Bank Data Center

    December 30th, 2008 : Rich Miller

    The First National Bank of Omaha wanted to ensure that its data center couldn’t be knocked offline by a tornado or power outage. So in 1999 it built a new data center in the underground levels of its Omaha building, encased in concrete walls that can withstand winds of 260 miles per hour. The facility is also powered by hydrogen fuel cells, operating completely “off the grid.”

    Both the Verizon and Fujitsu facilities use fuel cell systems from UTC. APC recently introduced Fuel Cell Extended Run (FCXR), a hydrogen-based fuel cell backup solution that integrates with the company’s InfraStruXure racks and enclosures. MGE and Siemens also tested fuel cell solutions for data centers, but later discontinued the programs, according to SearchDataCenter.



    When will solid state memory servers be an option in AWS instances?

    I was having another stimulating conversation in Silicon Valley last night, and one of the ideas that made sense is for solid state memory servers to be part of the cloud computing options.  It’s just a matter of time.  Amazon’s current instance offerings are divided by performance and memory.

    Standard Instances

    Instances of this family are well suited for most applications.

    Small Instance (default)*

    1.7 GB memory
    1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit)
    160 GB instance storage (150 GB plus 10 GB root partition)
    32-bit platform
    I/O Performance: Moderate

    Large Instance

    7.5 GB memory
    4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
    850 GB instance storage (2×420 GB plus 10 GB root partition)
    64-bit platform
    I/O Performance: High

    Extra Large Instance

    15 GB memory
    8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
    1,690 GB instance storage (4×420 GB plus 10 GB root partition)
    64-bit platform
    I/O Performance: High

    High-Memory Instances

    Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.

    High-Memory Double Extra Large Instance

    34.2 GB of memory
    13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
    850 GB of instance storage
    64-bit platform
    I/O Performance: High

    High-Memory Quadruple Extra Large Instance

    68.4 GB of memory
    26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
    1690 GB of instance storage
    64-bit platform
    I/O Performance: High

    High-CPU Instances

    Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

    High-CPU Medium Instance

    1.7 GB of memory
    5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)
    350 GB of instance storage
    32-bit platform
    I/O Performance: Moderate

    High-CPU Extra Large Instance

    7 GB of memory
    20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    1690 GB of instance storage
    64-bit platform
    I/O Performance: High
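    One way to read this lineup is as a lookup problem: pick the smallest instance that meets your memory and compute floors.  A minimal sketch, using the specs from the list above (the short type names like `m1.small` are my assumed labels for the marketing names, not taken from the text):

```python
# (memory GB, EC2 Compute Units, instance storage GB) per the list above
INSTANCES = {
    "m1.small":   (1.7, 1, 160),     # Small Instance
    "m1.large":   (7.5, 4, 850),     # Large Instance
    "m1.xlarge":  (15.0, 8, 1690),   # Extra Large Instance
    "m2.2xlarge": (34.2, 13, 850),   # High-Memory Double Extra Large
    "m2.4xlarge": (68.4, 26, 1690),  # High-Memory Quadruple Extra Large
    "c1.medium":  (1.7, 5, 350),     # High-CPU Medium
    "c1.xlarge":  (7.0, 20, 1690),   # High-CPU Extra Large
}

def smallest_fit(min_mem_gb, min_ecu=0):
    """Return the instance with the least memory that still meets both floors."""
    candidates = [(mem, name) for name, (mem, ecu, _) in INSTANCES.items()
                  if mem >= min_mem_gb and ecu >= min_ecu]
    return min(candidates)[1] if candidates else None

print(smallest_fit(30))      # m2.2xlarge: a 30 GB working set needs High-Memory
print(smallest_fit(4, 10))   # c1.xlarge: 10 ECUs pushes you to High-CPU XL
```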

    But, as with Virident’s offering, you can get higher performance with high memory addressing if you are running MySQL or memcached, resulting in higher performance per watt, which should translate into higher performance per dollar.

    GreenCloud Server for MySQL

    The GreenCloud Server for MySQL delivers extreme performance improvement over industry standard servers using disk arrays or SSDs, including high-performance PCIe SSDs, on Web 2.0 workloads. Virident optimized versions of MyISAM and InnoDB storage engines directly access datasets stored in the storage class memory tier to eliminate I/O bottlenecks. GreenCloud servers sustain significantly higher query rates, dramatically lower the cost of scaling to larger datasets, and simplify the replication and sharding processes usually employed for scaling. The extreme performance additionally makes it possible to obtain new insights into data and deliver new services by running complex operations such as multi-table joins, which are beyond the reach of traditional servers.

    • 50-70x performance versus Industry Standard Servers with hybrid disk/DRAM configuration on third party benchmarks.
    • 5-7x versus fastest PCIe-based SSD systems.
    • Binary compatible with existing InnoDB and MyISAM databases.
    • 30-35x Improvement in QPS/Watt.
    • 10-15x Improvement in QPS/$.
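    The claimed multipliers above also imply something the datasheet doesn’t state: if queries per second improve 50-70x but QPS/Watt only 30-35x, the GreenCloud box must draw more power than the baseline server.  A back-of-envelope check:

```python
# Virident's claimed multipliers, taken from the bullets above
qps_gain = (50, 70)             # raw query throughput vs. standard servers
qps_per_watt_gain = (30, 35)    # efficiency gain
qps_per_dollar_gain = (10, 15)  # cost-efficiency gain

# Implied relative power draw = (QPS gain) / (QPS/W gain)
power_ratio = (qps_gain[0] / qps_per_watt_gain[1],
               qps_gain[1] / qps_per_watt_gain[0])
print(f"implied power draw: {power_ratio[0]:.2f}x-{power_ratio[1]:.2f}x baseline")

# Implied relative cost = (QPS gain) / (QPS/$ gain)
cost_ratio = (qps_gain[0] / qps_per_dollar_gain[1],
              qps_gain[1] / qps_per_dollar_gain[0])
print(f"implied price: {cost_ratio[0]:.2f}x-{cost_ratio[1]:.2f}x baseline")
```

    So the server draws roughly 1.4-2.3x the power and costs roughly 3.3-7x the price of a standard server, which is the trade you would expect for storage-class memory capacity.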

    GreenCloud Server for Memcached

    The Virident GreenCloud Server for Memcached delivers a new standard of high-performance and cache size scaling for the popular distributed caching application. These servers can deliver 250K object gets per second with low and predictable latencies and support caches with up to 3 billion objects, increasing performance by up to 4x and the available cache memory by up to 8x versus industry standard servers. These performance and scaling benefits permit larger key spaces to be supported by a single server and decrease cache miss rates thereby reducing load on backend database servers.

    • Industry–leading performance
      ▫ Up to 250K object gets per second w/ average size of 200-300 bytes
      ▫ Supports a larger object cache – up to 3 Billion objects
    • Higher cache hit rates due to larger caches – up to 8x versus industry standard servers
      ▫ Lower the backend database load up to 50%
    • 50-70% decrease in TCO
      ▫ GreenCloud servers can replace 4 or more traditional servers in a server consolidation project
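    The consolidation claim is easy to sanity-check against the other bullets.  Only the 250K gets/sec, 3-billion-object, 4x, and 8x figures come from the text; the per-server baseline numbers below are back-derived from those multipliers, so treat them as illustrative:

```python
# Back-derived baseline: a standard server at 1/4 the throughput and 1/8 the cache
std_gets_per_sec = 250_000 // 4         # 62,500 gets/sec per standard server
std_objects = 3_000_000_000 // 8        # 375M cached objects per standard server

cluster = {"servers": 4,
           "gets": 4 * std_gets_per_sec,
           "objects": 4 * std_objects}
greencloud = {"servers": 1, "gets": 250_000, "objects": 3_000_000_000}

# One GreenCloud box matches the 4-server cluster's throughput...
print(greencloud["gets"] >= cluster["gets"])          # True
# ...while still holding twice the cluster's total cache, which is where the
# higher hit rates and lower backend database load come from.
print(greencloud["objects"] / cluster["objects"])     # 2.0
```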

    I would expect AWS is evaluating this, and it will be here by the summer.



    Nielsen says Xbox 360 is the most-used console; maybe an online experience requiring lots of data center resources is a factor

    Our family has an Xbox 360 and a Wii, and I would agree with the CNET News article.

    Xbox 360 is most-used game console, Nielsen says

    by Don Reisinger

    Nintendo Wii

    The Wii is not the most-used console, but it has attracted female gamers.

    (Credit: Nintendo)

    As the game console wars rage on, new findings from Nielsen may give Xbox 360 fans a little more fodder for their bragging rights.

    According to the market researcher, Microsoft's Xbox 360 is the most-used console when measured by its share of total usage minutes, capturing 23.1 percent of gaming time. It is followed by the PlayStation 2 with 20.4 percent of usage time and the Nintendo Wii with 19 percent. Surprisingly, the PlayStation 3 didn't make the top-three list.

    My kids actually play games on their iPod Touches more than the Wii, and the cost is significantly less for games on the iPod, so I am not complaining. 

    One of the comments on the cnet article made me think of data centers.

    This comes as no real surprise to me. I'm a PC gamer primarily but I also own a Wii. It sits in the corner and gathers dust. The superior online features of the 360 keep people coming back. Much more than can be said about the Wii's garbage online multiplayer.

    I have a friend who works in Xbox 360 online operations, and I don’t think the Nintendo or Sony data center operations teams come close to the scale of Xbox 360.  I don’t recall running into any big news on Nintendo’s or Sony’s data centers, so details are hard to find; in general, data center operations are probably an overhead cost for Nintendo and Sony, as opposed to a revenue stream as they are for Xbox 360.



    Marvell Plug Computer 3.0, Linux Microserver, ARM chip, WiFi, Bluetooth, HD – always-on home server platform

    There is a post on the Marvell Plug Computer 3.0.

    Marvell Plug Computer 3.0: The Tiny Linux Brick

    By Jesus Diaz, Tue, 05 Jan 2010


    If I had $1,000,000 I would buy 10,000 of these Marvell Plug Computer 3.0, with a 2GHz Armada 300 CPU, Wi-Fi, and Linux 2.6, and build myself a supercomputer. It's either that or cocktails.

    But is this a computer, or what others would think of as a server?  A server, as defined by Wikipedia, is any combination of hardware and software designed to provide services to clients.
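    That definition is broad enough that a few lines of code meet it.  As a toy illustration (my own sketch, not anything shipped with the device), here is a minimal echo service of the kind a Plug Computer’s standard Linux distribution could run:

```python
# A minimal "server" in the Wikipedia sense: software that provides a
# service (here, echoing bytes) to a client over the network.
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one client, echo its bytes back, then shut down. Returns the port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))     # port=0 lets the OS pick a free port
    srv.listen(1)

    def _run():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))   # the "service": echo the request
        conn.close()
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return srv.getsockname()[1]

# The client side: connect to the service and use it
port = serve_once()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello plug")
reply = cli.recv(1024).decode()
cli.close()
print(reply)   # hello plug
```

    By that yardstick the Plug Computer is a server; the interesting question is only whether an always-on device in a wall socket changes what we expect one to look like.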

    Marvell Unveils Plug Computer 3.0 With Integrated Wireless and Built-in Hard Drive

    Powerful Microserver is bolstered with 2 GHz ARMADA Processor to drive the "Always-On Lifestyle"

    A cooler picture of a plug computer 3.0 is on CES cnet live.

    Marvell super-upgrades its Plug Computer

    by Dong Ngo

    The Plug Computer 3.0

    (Credit: Marvell)

    It's been just half a year since the first plug computer, the SheevaPlug, or the Plug Computer 1.0, was introduced, but Marvell is now ready to release the third generation of the product.

    The company announced Tuesday at CES 2010 the Plug Computer 3.0, which it believes to be such an upgrade over the first one that it decided to designate it as the third (3.0) generation of the product, even though it's really the second.

    The naming aside, the Plug Computer 3.0 seems indeed impressive. Sleek-looking and smaller than a deck of playing cards, the new mini computer is now much more powerful than the first generation. It's equipped with Marvell's brand-new ARMADA 300 processor, running at 2.0GHz (as opposed to only 1.2GHz of the Marvell Kirkwood processor that powers the SheevaPlug).

    The new processor is also designed to use less energy and at the same time has better support for plug and play and streaming media. The Plug Computer 3.0 also offers many more options than the previous generation, including built-in storage and support for wireless networking and Bluetooth. And like the previous generation, it also has a built-in USB port and a Gigabit Ethernet port. The machine supports multiple standard Linux 2.6 kernel distributions, making it a great platform for application development.
