Nielsen says Xbox 360 is the most-used console; maybe an online experience requiring lots of data center resources is a factor

Our family has an Xbox 360 and a Wii, and I would agree with the CNET News article.

Xbox 360 is most-used game console, Nielsen says

by Don Reisinger

Nintendo Wii

The Wii is not the most-used console, but it has attracted female gamers.

(Credit: Nintendo)

As the game console wars rage on, new findings from Nielsen may give Xbox 360 fans a little more fodder for their bragging rights.

According to the market researcher, Microsoft's Xbox 360 is the most-used console when measured by its share of total usage minutes, capturing 23.1 percent of gaming time. It is followed by the PlayStation 2 with 20.4 percent of usage time and the Nintendo Wii with 19 percent. Surprisingly, the PlayStation 3 didn't make the top-three list.

My kids actually play games on their iPod Touches more than on the Wii, and games cost significantly less on the iPod, so I am not complaining.

One of the comments on the CNET article made me think of data centers.

This comes as no real surprise to me. I'm a PC gamer primarily but I also own a Wii. It sits in the corner and gathers dust. The superior online features of the 360 keep people coming back. Much more than can be said about the Wii's garbage online multiplayer.

I have a friend who works in Xbox 360 online operations, and I don’t think the Nintendo or Sony data center operations teams come close to the scale of Xbox 360.  I don’t recall running into any big news on Nintendo’s or Sony’s data centers, so information is hard to find; in general, data center operations are probably an overhead cost for Nintendo and Sony, as opposed to a revenue stream as they are for Xbox 360.

Read more

Marvell Plug Computer 3.0, Linux Microserver, ARM chip, WiFi, Bluetooth, HD – always on home server platform

Gizmodo.com has a post on Marvell Plug Computer 3.0.

Marvell Plug Computer 3.0: The Tiny Linux Brick

By Jesus Diaz, on Tue, 05 Jan 2010

image

If I had $1,000,000 I would buy 10,000 of these Marvell Plug Computer 3.0s, with a 2GHz Armada 300 CPU, Wi-Fi, and Linux 2.6, and build myself a supercomputer. It's either that or cocktails.

But is this a computer, or what others would think of as a server?  A server, as defined by Wikipedia, is any combination of hardware and software designed to provide services to clients.

Marvell Unveils Plug Computer 3.0 With Integrated Wireless and Built-in Hard Drive

Powerful Microserver is bolstered with 2 GHz ARMADA Processor to drive the "Always-On Lifestyle"

A cooler picture of the Plug Computer 3.0 is on CNET's live CES coverage.

Marvell super-upgrades its Plug Computer

by Dong Ngo

The Plug Computer 3.0

(Credit: Marvell)

It's been just half a year since the first plug computer, the SheevaPlug, or the Plug Computer 1.0, was introduced, but Marvell is now ready to release the third generation of the product.

The company announced Tuesday at CES 2010 the Plug Computer 3.0, which it believes to be such an upgrade over the first one that it decided to designate it as the third (3.0) generation of the product, even though it's really the second.

The naming aside, the Plug Computer 3.0 seems indeed impressive. Sleek-looking and smaller than a deck of playing cards, the new mini computer is now much more powerful than the first generation. It's equipped with Marvell's brand-new ARMADA 300 processor, running at 2.0GHz (as opposed to only 1.2GHz of the Marvell Kirkwood processor that powers the SheevaPlug).

The new processor is also designed to use less energy and at the same time has better support for plug and play and streaming media. The Plug Computer 3.0 also offers many more options than the previous generation, including built-in storage and support for wireless networking and Bluetooth. And like the previous generation, it also has a built-in USB port and a Gigabit Ethernet port. The machine supports multiple standard Linux 2.6 kernel distributions, making it a great platform for application development.

Read more

Architecture of Internet Datacenters

How many of you would like to attend a course on the architecture of Internet data centers?  This course came out of the RAD Lab, which wrote the Above the Clouds paper.

image

Well, in the fall of 2007, UC Berkeley (my alma mater) offered the following course for graduate students.

CS 294-14: Architecture of Internet Datacenters (RADLab Research Seminar 2.0)

Instructor: Randy H. Katz
Time: MW 2:30-4:00 PM
Place: 310 Soda
Units: 3 (2-4, but you had better sign up for 3!)

Course Description

Internet Datacenters have recently emerged as a significant new computing platform, designed to provide high capacity processing for large numbers of web clients. Major web properties like Google have designed their own building-scale computer facilities, integrating processing, storage, internal and external networking, along with integral power and cooling infrastructures. The resulting datacenters typically deploy 100,000 to 1,000,000 computers within a single facility.

In this research seminar, we will read and discuss the very recent literature on the design and implementation of processor clusters, virtual machines, virtual storage, and datacenter networking organization. Architectural approaches to deal with failures, effective sharing of processing/storage/network resources, and efficient management of power across the systems stack will be considered. Some class meetings will be dedicated to meeting with and discussing issues with industrial leaders from Google, IBM, Cisco, and Network Appliances.

Here are the first two weeks.

Week 1: Course Organization, Overview, and Technology Trends

  • Monday, August 27
    1. [Randy] Randy H. Katz, “Internet-scale Computing: The Berkeley RADLab Perspective,” IWQoS 2007, Evanston, IL, (June 2007). [pdf]
    2. [Randy] Stephen Alan Herrod, VMWare, “The Future of Virtualization Technology,” ISCA 2006. [pdf]
  • Wednesday, August 29
    1. [Randy] Raj Yavatkar, Intel, “Platforms Design Challenges with Many Cores,” HPCA-12, 2006. [pdf]
    2. [Randy] Renato Recio, IBM, “System IO Network Evolution: Closing the Requirement Gaps,” HPCA-12, 2006. [pdf]
    3. [Randy] Steve Kleiman, NetApp, “Trends in Managing Data at the Petabyte Scale,” FAST 2007, San Jose, CA, (February 2007). [pdf]
Week 2: Applications Software Infrastructure
  • Monday, September 3: Labor Day Holiday
  • Wednesday, September 5
    2:30-4:00
    1. [Matei] S. Ghemawat, H. Gobioff, S.-T. Leung, “The Google File System,” Proc. SOSP’03, 2003. [pdf] [Notes].
    2. [Kuang] J. Dean, S. Ghemawat, “Mapreduce: Simplified Data Processing on Large Clusters,” Proc. OSDI’04, pages 137 – 150, (December 2004). [pdf] [Notes].
    3. [Michael] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, R. E. Gruber, “Bigtable: A Distributed Storage System for Structured Data,” Proc. OSDI'06, 2006. [pdf]

    6:00-7:30
    1. [Randy] Intel and Sun White Papers on Multicore Architectures [Notes]
      • Intel, "Intel Multi-Core Processors: Making the Move to Quad-Core and Beyond." [pdf]
      • Intel, "Inside Intel Core Microarchitecture: Setting New Standards for Energy-Efficient Performance." [pdf]
      • Intel, "Preparing for Peta-scale." [pdf]
      • Harlan McGhan, "Niagara 2 Opens the Flood Gates," Microprocessor, 11/6/2006. [pdf]
    2. [Ari] L. A. Barroso, J. Dean, U. Holzle, “Web Search for a Planet: The Google Cluster Architecture,” IEEE Micro, 23(2):22–28, March/April 2003. [pdf] [Notes]
    3. [Henry] L. A. Barroso, “The Price of Performance: An Economic Case for Chip Multiprocessing," ACM Queue, 3(7), September 2005. [html] [pdf].
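The MapReduce paper on the Week 2 list is simple enough to sketch in a few lines. Below is a minimal, hypothetical word-count example in Python (my own illustration, not from the course materials) showing the two phases the paper describes, a map that emits key/value pairs and a reduce that groups by key and aggregates:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group pairs by key and sum the counts
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(map_phase(docs)))  # 'the' counted 3 times, 'fox' 2 times
```

In the real system these phases run in parallel across thousands of machines with the framework handling partitioning and fault tolerance; the single-process sketch only shows the programming model.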

What I found interesting was the student project proposals.

Student Project Proposal Presentations

Read more

Site Selection Data Center Report for Columbia, MO

I found this site selection report, Data Center Analysis & Evaluation for Columbia, Missouri, by Angelou Economics, from October 2007.  The report is a reverse site analysis comparing the Columbia region to other areas in the US.

image

As part of the analysis, they used Quincy, WA as a case study for economic development.

image

For competitive analysis, they compared Quincy, WA; Lenoir, NC; Pryor, OK; and Goose Creek, SC.

image

The results of the evaluation were:

image

The following issues were raised in the 2007 report, two years ago.

image

This gave the Columbia Regional Economic Development group time to respond:

A local development group has acquired over 200 acres of land located adjacent to the convergence of multiple transmission lines, electrical substation, and a gas peaking plant. The Ewing Property offers redundancy of electric and broadband, along with looped water supply. The development group submitted the site as the first application to the Missouri Department of Economic Development’s Certified Site Program, and the site has met all of the requirements for certification. Access information on Missouri's First Certified Site, Columbia's Ewing Industrial Park.

Columbia also has other positive attributes for data centers. The University of Missouri College of Engineering's Computer Science and Electrical and Computer Engineering departments can ensure a qualified workforce. The National Security Agency recently named the University of Missouri as a Center for Academic Excellence in Information Assurance Education. The County of Boone has also developed an incentive package for qualifying data center projects based on the Chapter 100 Revenue Bond Program to encourage new data center investment in Columbia.

Read more

Sewage treatment plants using methane for fuel cell power generation

Note:  One lesson from the later part of this post is to plan on a hybrid of natural gas and digester gas to run the fuel cell.  Don’t assume you will run only on sewage-produced methane.  The manufacturer, FuelCell Energy, says this too.

In many applications digester gas production volume is variable. In such applications, the plant can be designed to operate with automatic blending with natural gas.  Over 30% of FuelCell Energy's Direct FuelCell® (DFC®) installations operate on renewable biogas.
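The blending idea is easy to sketch. The function and numbers below are hypothetical (not from FuelCell Energy): given a variable digester gas supply and a fixed fuel demand for full output, it computes how much natural gas must make up the shortfall.

```python
def natural_gas_makeup(fuel_demand_scfm, digester_gas_scfm):
    """Return (digester_used, natural_gas_needed) in scfm.

    Simplifying assumption: the two gases are interchangeable on a
    volume basis; a real blending system accounts for the lower
    heating value of digester gas versus pipeline natural gas.
    """
    digester_used = min(digester_gas_scfm, fuel_demand_scfm)
    return digester_used, fuel_demand_scfm - digester_used

# Hypothetical day: the fuel cell needs 300 scfm, digester output varies
for supply in (350, 300, 180):
    used, makeup = natural_gas_makeup(300, supply)
    print(f"digester supply {supply}: burn {used} digester + {makeup} natural gas")
```

The point of the sketch is the lesson above: natural gas makeup goes to zero only when the digesters keep up, so the plant design has to assume blending.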

Economist.com has an article on sewage (waste) treatment plants as a source of energy.

Renewable energy

The seat of power

Dec 30th 2009
From The Economist print edition

Better sewage treatment is the latest thing in clean energy

Illustration by David Simonds

WHERE there’s muck, there’s brass—or so the old saying has it. The cynical may suggest this refers to the question of who gets what, but thoughtful readers may be forgiven for wondering, while they are recovering from the excesses of Christmas in the smallest room in the house, what exactly happens when they flush the toilet.

The answer is encouraging. Less and less waste, these days, is actually allowed to go to waste. Instead, it is used to generate biogas, a methane-rich mixture that can be employed for heating and for the generation of electricity. Moreover, in an age concerned with the efficient use of energy, technological improvements are squeezing human fecal matter to release every last drop of the stuff. Making biogas means doing artificially to faeces what would happen to them naturally if they were simply dumped into the environment or allowed to degrade in the open air at a traditional sewage farm—namely, arranging for them to be chewed up by bacteria. Capturing the resulting methane has a double benefit. As well as yielding energy, it also prevents what is a potent greenhouse gas from being released into the atmosphere.

MSNBC.com had an article in 2004 on fuel cells and methane for power generation.

Poop power? Sewage turned into electricity

Fuel cells and waste sludge mix to power treatment plant

Miguel Llanos

Reporter

msnbc.com

updated 9:31 a.m. PT, Mon., July 19, 2004

RENTON, Wash. - It's not as neat as spinning straw into gold, but what Greg Bush gets to do in the world of sewage treatment is pretty magical: making electricity from what's flushed down the sewer. And he does it using fuel cells, technology that's cleaner and more efficient than traditional power generation.

How it works
The largest project of its type in the world, the process goes like this: Biodegradable solid waste is sent to large tanks, called digesters, that provide a home for three to four weeks. There bacteria eat away at the waste, releasing methane gas and further reducing the amount of solid waste.

FUEL CELL PLANT AND DIGESTER TANKS

James Cheng / MSNBC.com

Four large digester tanks sit behind the fuel cell power plant at the wastewater treatment plant in Renton, Wash.


The fuel cells in King County mentioned by MSNBC are no longer in use.  But here is the executive summary of the results, published in April 2009.

Final Report, King County Fuel Cell Demonstration Project

Issued April 2009

Executive Summary

Increasing energy costs, more stringent air emission regulations, and an interest in exploring emerging energy technologies prompted King County, Washington, to search for new and innovative ways to provide electricity for its wastewater treatment plants. In June 2004, the county began a two-year demonstration of a fuel cell power plant to be fueled by gas produced through anaerobic digestion of solids produced at its South Treatment Plant. The project was the first application in the country to use digester gas to fuel a molten carbonate fuel cell.

King County’s fuel cell power plant was sized to produce 1 megawatt (MW) of electricity and was designed to capture waste heat from the fuel cell exhaust and return it to the treatment plant. Two project goals were established:

  • Demonstrate that the molten carbonate fuel cell technology can be adapted to use anaerobic digester gas as a fuel source.
  • Achieve a nominal plant power output target of 1 MW using either digester gas or natural gas.

Both goals were achieved during the two‑year demonstration period. A number of secondary objectives (performance goals) were also met.
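To put the 1 MW target in perspective, here is a back-of-the-envelope sketch. The arithmetic is mine, not from the report, and the capacity factor is an assumption:

```python
def annual_energy_mwh(nameplate_mw, capacity_factor):
    # Energy = power x hours in a year x fraction of time at full output
    return nameplate_mw * 8760 * capacity_factor

# A 1 MW plant at an assumed 90% capacity factor
print(annual_energy_mwh(1.0, 0.9))  # 7884.0 MWh per year
```

At an assumed 90 percent capacity factor, a 1 MW plant would deliver roughly 7,900 MWh a year, enough to offset a meaningful share of a treatment plant's electricity bill, which is presumably why the county bothered with the demonstration.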

The final pdf is here.

Read more