Nvidia says its strategy is ARM

With the success of the iPhone and Android as smartphone platforms, developer focus has shifted to ARM vs. x86.  Many will scoff at the ARM processor as unable to do the work in the data center, but when you look at price/performance and performance per watt, the ARM chip is competitive.

CNET has an interview with Nvidia's CEO, Jen-Hsun Huang.

I also asked Huang about the company's strategy for central processing units, or CPUs, used in smartphones and tablets. Nvidia has been supplying its first-generation Tegra chip to portable music device makers such as Microsoft, which used Tegra in the Zune HD. The second-generation Tegra 2 is targeted at smartphones and tablets but has yet to make an appearance in a product from a first-tier device maker. All Tegra chips are based on a design from United Kingdom-based ARM.

"Our CPU strategy is ARM," Huang said, referring to the fact that Nvidia was, until last year, only a supplier of GPUs. "ARM is the fastest growing processor architecture in the world today. ARM supports (Google's) Android best. And Android is the fastest growing OS in the world today," Huang said.

Huang said that its dual-core Tegra 2 chips currently come in two flavors, the AP20 for smartphones and the T20 for tablets. "And both of them are being designed into products," Huang said.

Smooth-Stone is preparing a product line for data center performance with cell phone power.

A long time ago, x86 processors were laughed at as incapable of running data center IT.  It was a world of mainframes and minis, dominated by IBM, Digital, and others.  Where are those companies now in the server business?  Meanwhile, Intel was selling tons of x86 processors in desktops, and with Microsoft's help, Intel's x86 made inroads into servers.

Why can't ARM processors move from smartphones to the server business as well?  HP, Dell, IBM, and the other dominant server vendors will help fuel anti-ARM-server sentiment.  Meanwhile, ARM processor growth is fueled by smartphones.

There are technical issues, like ARM processors not being 64-bit, but people have figured out how to get around this, for example by scaling out across many small nodes rather than scaling up.  Note: some supercomputers use 32-bit low-power processors to keep their power footprint lower.

Is the future green data center going to have ARM servers?

How can it not?


Seagate and Samsung to co-develop SSD for enterprise storage

SSDs use much less power than hard drives and deliver higher performance under some conditions, but uptake has been slow in enterprises and in data centers that want to be green.

CNET has an article on Seagate and Samsung co-developing SSD for enterprise storage.

Seagate and Samsung to co-develop SSD controller

by Dong Ngo

Seagate and Samsung, the two major makers of hard drives and system memory, announced Thursday that they have entered into a joint development and licensing agreement.

Under this agreement, the two companies will develop and cross-license related controller technologies for solid state drives.

Seagate is leveraging its enterprise storage expertise.

Seagate says that the joint development will build on the existing SSD capabilities of each company while combining Seagate's enterprise storage technology with Samsung's 30 nanometer-class MLC NAND flash memory technology. Seagate will then use the jointly developed controller for its enterprise-class SSDs.

Seagate's blog says the partnership is meant to address SSD memory errors and lower costs.

Each company brings something unique to the table besides its market leadership. While companies in any technology field when marketing will tend to focus on the positive aspects of the technology they are producing or selling, we know that behind the scenes all technologies have challenges and hurdles that must be overcome. In the case of storage, it doesn’t matter whether we’re discussing SSDs or HDDs; engineers working with both technologies are most often tasked with limiting the number of data errors produced at the media. Think of it as the game of always looking to make perfect something that will always be imperfect to start with.  Seagate has great expertise in minimizing errors on its media and its current enterprise HDDs are best-in-class in the area of error recovery.

So that is at the heart of the collaboration from a technical perspective: error recovery and management. Samsung brings its flash technology expertise while Seagate brings its error recovery expertise to the table. Between them, the companies will look to produce a controller for SSDs that can attain the high levels of performance, reliability, and endurance demanded by enterprise storage applications.

Another interesting technical piece is the fact that today’s announcement references the use of  Samsung’s 30 nanometer-class MLC (Multi-Level Cell) NAND as the technology base for the collaborative project. MLC NAND enables higher capacities at a lower cost, but it has not typically been a target technology for enterprise use due to having lower endurance. However, the controller technology that Seagate and Samsung develop together with its advanced error recovery and flash management, will enable more cost-effective and long-life products for the enterprise space.

Seagate's press release is here.

"Seagate has long recognized that solid state technology has an important role to play in the comprehensive solutions the storage industry will deliver today and in the future, particularly in the enterprise market," said Steve Luczo, Seagate chairman, president and CEO. "Today's agreement with Samsung will help us bring a compelling set of SSD innovations to the enterprise storage market, with benefits that range from enhanced performance, endurance and reliability to cost and capacity improvements. Overall, this agreement with Samsung strengthens our SSD solutions strategy, and positions Seagate well as global demand for storage continues on its strong growth path."

"We are pleased to be jointly developing a high-performance SSD controller with Seagate for the enterprise storage market," said Dr. Changhyun Kim, senior vice president and Samsung Fellow, Memory product planning & application engineering, Semiconductor Business, Samsung Electronics. "Our green memory solution is designed to enable more energy-efficient server applications, which is expected to increase the use of NAND-based SSD storage in enterprise applications."


Technique for Changing Data Center Behavior, focusing on people's thinking

After spending two days in The Pacific Institute seminar, I came across some interesting ideas that identify a problem in implementing new data center projects that support a green data center.  One is the concept of sabotage.

Many Six Sigma implementations fail because of sabotage. Not overt resistance, but the silent, subtle, “so maybe this program will go away” kind of resistance. People won’t likely be aggressive, but will instead display what we call passive-aggressive behavior. For example, a Six Sigma implementation imposed exclusively top-down can create a counter-force – a bottom-up, nonproductive “push-back.” If this happens, there can be much waste, frustration, and many false starts.

The following article was written by Ron Mevdev on Six Sigma projects, and much of what he discusses applies to the problems of implementing changes in data centers.  Rarely do you hear someone speaking on the people issues in data centers. Why?  Because the data center speaker circuit is dominated by vendors who sell products to data center builders and operators.  In general, these vendors are selling ways to solve problems by buying things, and the people factor is rarely discussed. Yet some of the top data center executives are excellent people managers who build teams.  Urs Hoelzle and Olivier Sanche are two examples of people who have a loyal following.

I think Urs and Olivier understand this paragraph well.

SOLVING PROBLEMS AT THE ROOT CAUSE
But sometimes problems can be tenacious. Often there’s a complexity that includes the most interesting variable of all – people. If we want to change for the better, a fundamental understanding of how people think and how their beliefs affect their performance must be factored into the equation. An appreciation of the basics of human behavior and performance enhances analysis. Better yet, it helps managers and employees solve their own people problems.

Greening the data center requires people to change their behaviors and thinking.

Which is why it is so hard.

Do your people sometimes feel like this crowd living in the past?  So many people are comfortable staying with what has worked before. How far back in history should you go and still feel safe copying?  Last year? Five years? Ten?


Thought Patterns for High Performance

I am spending the next 2 days in a seminar at The Pacific Institute on Thought Patterns for High Performance.


The Nick Saban story is a great example of how The Pacific Institute helped the Alabama football team.  If this is good enough for the #1 football team, there is something there to learn.

Psychology of the Data Center, learning from the Science of Football - Alabama’s Nick Saban

I just had a conversation with Tom Roth, who introduced me to some of the work that The Pacific Institute does.  Tom and I had an interesting discussion on data centers, as he is familiar with real estate development in Eastern Washington’s recent data center build-out, which has used up hydroelectric power while providing little of the employment impact the original dam developers intended.

image

One of the points well made is on what Adult Learners Learn and Retain:

  1. 10% of what we HEAR
  2. 15% of what we SEE
  3. 20% of what we both SEE and HEAR
  4. 40% of what we DISCUSS with OTHERS
  5. 80% of what we EXPERIENCE
  6. 90% of what we ATTEMPT to TEACH OTHERS

This is an excellent point on why some of the top data center industry people are quite social and interact with others.

For those of you who lock your data center people in the building, consider letting them out so they can achieve higher performance.


IDS launching sea-based green data center space

Rich Miller of Data Center Knowledge reports on IDS's port-based data center ship.

IDS Readies Data Centers on Ships

August 9th, 2010 : Rich Miller

In early 2008, startup International Data Security revealed plans to build a fleet of data centers on cargo ships docked at ports in the San Francisco Bay. After an initial flurry of publicity, the company receded from the spotlight amid industry chatter of funding challenges.

Now IDS is back, and the company says it has lined up funding and an anchor tenant for a proof-of-concept “maritime data center” that will dock at Redwood City, Calif. The first vessel is a former training ship for the California Maritime Academy that IDS has acquired and is prepping for renovation. IDS representatives say the company has lined up $15 million for an initial deployment of 500 racks of servers.

One of the points is the concern about salt water.

Concept Brings Curiosity, Skepticism
IDS said it has experienced the same mix of curiosity and skepticism. “A lot of the conversations we’ve had with data center operators have been around questions like ‘do you want to put this kind of equipment close to salt water’ and ‘is the rolling motion of the ship a problem,’” said Prince. “The reality is that the Navy has had data centers on war-fighting ships for 20 years or more.”

I've been on the USS Abraham Lincoln aircraft carrier looking at some of their IT systems.


The data center space is isolated, as IDS says, but the cost of all the other systems supporting that space is beyond most budgets.

I am curious how much lower IDS's costs would be if they were able to set up a floating data center in a fresh-water port.
