Asia's Data Center Power Infrastructure

I had been staring at this post on DCD for a while, planning to write my own post, when I realized it was written by Schneider Electric's SVP for APAC, Philippe Arsonneau.

 

ASIA PACIFIC’S POWER CHALLENGE

 

Will power take on new importance in 2014?

 

10 February 2014 by Schneider Electric SVP for APAC, Philippe Arsonneau

[Image: burning power, Singapore at night]

One of the major trends we see for businesses moving into the New Year is the need for green, efficient IT, especially in Asia Pacific.  As IT demands increase, so too does the data center’s power expenditure. Analyst firm Frost & Sullivan notes that more than 80% of the major data centers in Asia Pacific are running at close to 90% capacity. Companies across the region are struggling to cope with changes while data center capacity is constrained by inefficient equipment and stranded power.

 


I found this post useful because it provides information on the current state of power in Asia, something I had only heard about from friends. Until I found a public disclosure, I didn't feel comfortable writing about the situation.

Here are some good facts.

In terms of energy efficient data centers, parts of Asia – particularly the developing South East Asian countries – are falling behind due to a combination of factors including poor internet connectivity infrastructure (Indonesia), unstable power and inadequate power supply (Malaysia), developing standards (Vietnam). The more established countries include the likes of Singapore, Australia and Hong Kong. But that’s not to say they don’t also face challenges.

Part of why companies like Google, Amazon, and Microsoft have built data centers in Singapore is the stable power infrastructure.

Singapore is a very mature market in terms of technology compared to emerging countries such as Thailand and Indonesia. The Singapore IDA initiated its iN2015 master plan in 2005 to grow the infocomm sector and build up IT infrastructure. This initiative encouraged many major players to set up their data centers as early as almost a decade ago.

The call to action is good. You can't just think of the data center in isolation from the IT load. The opportunity is to think about the synergy between the facilities and the IT load.

What is required in 2014? 
Effective and comprehensive energy management goes beyond IT. As such, senior IT executives will need to work closely with their facility management colleagues to put in place a comprehensive energy management strategy.  They will also need to develop a more holistic and end-to-end approach towards their data center strategy and energy management as opposed to seeking piecemeal solutions such as server virtualization or DCIM.

To operate energy efficient and reliable data centers that are able to cope with the exponential growth of data brought on by smart cities, it is important that businesses take a holistic and end-to-end approach towards their data center strategy.



Wouldn't it be so much Easier if Google had a map of where their Barges are?

CNET writes about the Google Barge moving to Stockton.  Wow isn’t that exciting.  :-)

It's official: Google Barge moving to Stockton

The floating showroom is expected to set sail for its new home as early as next week. Now, maybe Google will finally tell us what's behind all the black netting and scaffolding.

 February 26, 2014 5:04 PM PST

Google Barge is said to be moving to Stockton, Calif., as early as next week, weather permitting.

(Credit: Josh Miller/CNET)
It would be so much easier if Google would just share the location. Like, where is James Hamilton's boat?

The Three Rules of Obamacare's Trauma Team

Time has an article on the Trauma Team that rescued Obamacare.

Monday, Mar. 10, 2014

Obama’s Trauma Team

 

Last Oct. 17—more than two weeks after the launch of HealthCare.gov—White House chief of staff Denis McDonough came back from Baltimore rattled by what he had learned at the headquarters of the Centers for Medicare and Medicaid Services (CMS), the agency in charge of the website.

McDonough and the President had convened almost daily meetings since the Oct. 1 launch of the website with those in charge—including Health and Human Services Secretary Kathleen Sebelius, CMS administrator Marilyn Tavenner and White House health-reform policy director Jeanne Lambrew. But they couldn’t seem to get what McDonough calls “actionable intel” about how and why the website was failing in front of a national audience of stunned supporters, delirious Republican opponents and ravenous reporters.

One excellent point is the three rules that most of us know work well for an effective team. In this case, it was the Trauma Team that made Obamacare work.

Dickerson quickly established the rules, which he posted on a wall just outside the control center.

Rule 1: “The war room and the meetings are for solving problems. There are plenty of other venues where people devote their creative energies to shifting blame.”

Rule 2: “The ones who should be doing the talking are the people who know the most about an issue, not the ones with the highest rank. If anyone finds themselves sitting passively while managers and executives talk over them with less accurate information, we have gone off the rails, and I would like to know about it.” (Explained Dickerson later: “If you can get the managers out of the way, the engineers will want to solve things.”)

Rule 3: “We need to stay focused on the most urgent issues, like things that will hurt us in the next 24—48 hours.”

Can you imagine the disruption of the chain of command? An example is the executive Zients, who chose to use the Apollo 13 analogy.

Zients isn’t a techie himself. He’s a business executive, one of those people for whom control—achieved by lists, schedules, deadlines and incessant focus on his targeted data points—seems to be everything. He began an interview with me by reading from a script crowning the team’s 10-week rescue mission as the White House’s “Apollo 13 moment,” as if he needed to hype this dramatic success story. And he bristled because a question threatened not to make “the best use of the time” he had allotted. So for him, this Apollo 13 moment must have been frustrating—because in situations like this the guy in the suit is never in control.


How Network Functions Virtualization (NFV) Greens the Data Center by Using Standard Server Power Management

Part of the point of NFV is the lower power use vs. the current state of equipment.  How is this done?

In the first paper on NFV, here is the part that explains how the power savings will be achieved:

Reduced energy consumption by exploiting power management features in standard servers and storage, as well as workload consolidation and location optimisation. For example, relying on virtualisation techniques it would be possible to concentrate the workload on a smaller number of servers during off-peak hours (e.g. overnight) so that all the other servers can be switched off or put into an energy saving mode.
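
The consolidation idea in that excerpt is easy to sketch. Here is a toy model with made-up wattages, capacity, and fleet size; none of these numbers come from the paper:

```python
# Toy model of the NFV consolidation idea: pack the off-peak workload
# onto few servers so the rest can sleep. All numbers are assumptions.
IDLE_W, PEAK_W, SLEEP_W = 150, 300, 10   # assumed per-server draw (watts)
CAPACITY = 100    # workload units one server can host
FLEET = 40        # servers in the pool

def servers_needed(load_units: int) -> int:
    """Minimum servers required to host the current workload."""
    return -(-load_units // CAPACITY)    # ceiling division

def fleet_power(load_units: int, consolidate: bool) -> float:
    """Total fleet draw under a simple linear utilisation-to-power model."""
    active = servers_needed(load_units) if consolidate else FLEET
    util = load_units / (active * CAPACITY)
    per_server = IDLE_W + (PEAK_W - IDLE_W) * util
    return active * per_server + (FLEET - active) * SLEEP_W

overnight = 600   # off-peak load, ~15% of total fleet capacity
print(f"no consolidation: {fleet_power(overnight, False):,.0f} W")
print(f"consolidated:     {fleet_power(overnight, True):,.0f} W")
```

The linear idle-to-peak power model is a common simplification; the takeaway is that sleeping servers draw almost nothing, so consolidation wins even though the remaining active servers run hotter.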

The amount of telecom equipment that ever goes into a lower power mode today is probably amazingly low.

Besides saving power, this same feature makes maintenance operations easier.

Option to temporarily repair failures by automated re-configuration and moving network workloads onto spare capacity using IT orchestration mechanisms. This could be used to reduce the cost of 24/7 operations by mitigating failures automatically.
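
Here is a hedged sketch of what that orchestration step might look like. The node names, capacities, and the greedy re-placement policy are all my own illustration, not the paper's mechanism; a real orchestrator would live-migrate the virtual machines rather than update a dict:

```python
# Toy orchestration sketch: when a node fails, re-place its network
# workloads onto nodes with spare capacity. Data model is assumed.
capacity = {"node-a": 10, "node-b": 10, "node-c": 10}   # units per node
placement = {"vnf-1": "node-a", "vnf-2": "node-a", "vnf-3": "node-b"}
demand = {"vnf-1": 4, "vnf-2": 3, "vnf-3": 5}

def free_units(node: str) -> int:
    """Spare workload units left on a node."""
    used = sum(demand[v] for v, n in placement.items() if n == node)
    return capacity[node] - used

def mitigate_failure(failed: str) -> None:
    """Greedily move every workload off the failed node onto spare room."""
    for vnf in [v for v, n in placement.items() if n == failed]:
        target = max((n for n in capacity if n != failed), key=free_units)
        if free_units(target) < demand[vnf]:
            raise RuntimeError(f"no spare capacity for {vnf}")
        placement[vnf] = target   # in reality: live-migrate the VM

mitigate_failure("node-a")
print(placement)   # vnf-1 and vnf-2 end up on node-c
```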


A New Option for moving out of AWS, Forsythe's Data Center in 1,000 sq ft increments

Moving out of AWS can save a lot of money. Huh?  Yes, here is one public disclosure from Moz’s CEO.

Building our private cloud

We spent part of 2012 and all of 2013 building a private cloud in Virginia, Washington, and mostly Texas.

This was a big bet with over $4 million in capital lease obligations on the line, and the good news is that it's starting to pay off. On a cash basis, we spent $6.2 million at Amazon Web Services, and a mere $2.8 million on our own data centers. The business impact is profound. We're spending less and have improved reliability and efficiency.

Our gross profit margin had eroded to ~64%, and as of December, it's approaching 74%. We're shooting for 80+%, and I know we'll get there in 2014.
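
A quick back-of-the-envelope on the figures quoted above. All numbers come from the post; keep in mind the AWS vs. own-data-center comparison is stated on a cash basis, so the periods may not line up exactly:

```python
# Sanity-check of the Moz numbers quoted above (figures from the post).
aws_spend = 6.2e6        # cash spent at Amazon Web Services
own_dc_spend = 2.8e6     # cash spent on their own data centers

cash_savings = aws_spend - own_dc_spend
print(f"cash savings: ${cash_savings / 1e6:.1f}M")   # $3.4M

# Gross margin moved from ~64% to ~74%, with a goal of 80%+.
gained = (0.74 - 0.64) * 100
remaining = (0.80 - 0.74) * 100
print(f"margin gained: {gained:.0f} pts, {remaining:.0f} pts to the goal")
```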


So you want to move out of AWS, but you dread the task of finding something the right size. A cage in a colocation facility? Seems too old school. A wholesale pod? Too big, and you aren't ready to jump into managing your own electrical and mechanical infrastructure.

How about 1,000 sq ft of data center space configured exactly the way you want? Need more? Get another 1,000 sq ft. This is what Forsythe Data Centers has announced with its latest data center, offering the solution in the middle of the table below.

[Table image]



“Forsythe’s facility offers the flexibility and agility of the retail data center market, in terms of size and shorter contract length, with the privacy, control and density of large-scale, wholesale data centers,” said Albert Weiss, president of Forsythe Data Centers, Inc., the new subsidiary managing the center. He is also Forsythe Technology’s executive vice president and chief financial officer.

I got a chance to talk to Steve Harris, and the flexibility for customers to have multiple suites designed exactly to support their gear is a dream come true for those who know that one-size-fits-all usually means you're wasting money somewhere. You could have one suite just for storage, tape backup, and other gear that is more sensitive to heat. The high temperature gear could be right next to the storage suite. You could have a higher level of redundancy for some equipment in one suite and less for others in another suite.

And just like the cloud, adding capacity is so much easier than "I need to move to a bigger cage." Just add another suite.

How much power do you want per rack?  What’s a suite look like?

[Image]
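
For a rough sense of the sizing math, here is a hypothetical example; the design density and per-rack draw are my assumptions, not Forsythe's published specs:

```python
# Hypothetical sizing math for a 1,000 sq ft suite. Both density
# figures below are assumptions for illustration only.
suite_sqft = 1000
watts_per_sqft = 150          # assumed design density
kw_per_rack = 6               # assumed per-rack draw

suite_kw = suite_sqft * watts_per_sqft / 1000
racks_supported = int(suite_kw // kw_per_rack)
print(f"suite budget: {suite_kw:.0f} kW, ~{racks_supported} racks "
      f"at {kw_per_rack} kW each")
```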

Oh yeah, and the data center is green too.

The facility is being designed to comply with U.S. Green Building Council LEED certification standards for data centers and to obtain Tier III certification from the Uptime Institute, a certification currently held by approximately 60 data centers in the U.S. and 360 worldwide, few of which are colocation facilities.