The Three Rules of Obamacare's Trauma Team

Time has an article on the Trauma Team that rescued Obamacare.

Monday, Mar. 10, 2014

Obama’s Trauma Team


Last Oct. 17—more than two weeks after the launch of HealthCare.gov—White House chief of staff Denis McDonough came back from Baltimore rattled by what he had learned at the headquarters of the Centers for Medicare and Medicaid Services (CMS), the agency in charge of the website.

McDonough and the President had convened almost daily meetings since the Oct. 1 launch of the website with those in charge—including Health and Human Services Secretary Kathleen Sebelius, CMS administrator Marilyn Tavenner and White House health-reform policy director Jeanne Lambrew. But they couldn’t seem to get what McDonough calls “actionable intel” about how and why the website was failing in front of a national audience of stunned supporters, delirious Republican opponents and ravenous reporters.

One excellent point is the three rules that most of us know work well for an effective team. In this case, the team was the Trauma Team brought in to make Obamacare work.

Dickerson quickly established the rules, which he posted on a wall just outside the control center.

Rule 1: “The war room and the meetings are for solving problems. There are plenty of other venues where people devote their creative energies to shifting blame.”

Rule 2: “The ones who should be doing the talking are the people who know the most about an issue, not the ones with the highest rank. If anyone finds themselves sitting passively while managers and executives talk over them with less accurate information, we have gone off the rails, and I would like to know about it.” (Explained Dickerson later: “If you can get the managers out of the way, the engineers will want to solve things.”)

Rule 3: “We need to stay focused on the most urgent issues, like things that will hurt us in the next 24-48 hours.”

Can you imagine the disruption of the chain of command? An example is the executive Jeff Zients, who chose to use the Apollo 13 analogy.

Zients isn’t a techie himself. He’s a business executive, one of those people for whom control—achieved by lists, schedules, deadlines and incessant focus on his targeted data points—seems to be everything. He began an interview with me by reading from a script crowning the team’s 10-week rescue mission as the White House’s “Apollo 13 moment,” as if he needed to hype this dramatic success story. And he bristled because a question threatened not to make “the best use of the time” he had allotted. So for him, this Apollo 13 moment must have been frustrating—because in situations like this the guy in the suit is never in control.


How Network Functions Virtualization (NFV) Greens the Data Center by Using Standard Server Power Management

Part of the point of NFV is lower power use compared to today's purpose-built network equipment. How is this done?

In the first white paper on NFV, here is the part that explains how the power savings will be achieved.

Reduced energy consumption by exploiting power management features in standard servers and storage, as well as workload consolidation and location optimisation. For example, relying on virtualisation techniques it would be possible to concentrate the workload on a smaller number of servers during off-peak hours (e.g. overnight) so that all the other servers can be switched off or put into an energy saving mode.
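To make the consolidation idea concrete, here is a minimal sketch in Python. The bin-packing policy, workload numbers, and capacity threshold are my own illustration, not anything specified in the paper.

# A toy version of off-peak workload consolidation: pack VNF workloads
# (expressed as fractions of one server's capacity) onto as few servers as
# possible, then count how many machines can be powered down or put to sleep.

def consolidate(workloads, server_capacity=1.0):
    """First-fit-decreasing bin packing of workloads onto servers."""
    loads = []  # current load on each active server
    for w in sorted(workloads, reverse=True):
        for i in range(len(loads)):
            if loads[i] + w <= server_capacity:
                loads[i] += w
                break
        else:
            loads.append(w)  # no room anywhere: wake up another server
    return loads

# Overnight, the same workloads fit on far fewer machines.
overnight_workloads = [0.2, 0.1, 0.3, 0.15, 0.05, 0.25]
fleet_size = 6
active = consolidate(overnight_workloads)
print(f"{len(active)} of {fleet_size} servers active; "
      f"{fleet_size - len(active)} can be switched off or put in a low-power mode")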

The amount of telecom equipment that goes into a lower-power mode today is probably amazingly low.

Besides saving power, this same capability makes maintenance operations easier.

Option to temporarily repair failures by automated re-configuration and moving network workloads onto spare capacity using IT orchestration mechanisms. This could be used to reduce the cost of 24/7 operations by mitigating failures automatically.
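A rough sketch of what that automated mitigation could look like. The node names, loads, and placement policy below are made-up illustrations; real IT orchestration stacks are far more involved.

# Toy orchestrator: when a node fails, re-place its workloads onto nodes
# that still have headroom, and only page a human if nothing fits.

capacity = {"node-a": 1.0, "node-b": 1.0, "spare-1": 1.0}
placement = {"firewall-vnf": "node-a", "router-vnf": "node-b"}
load = {"firewall-vnf": 0.4, "router-vnf": 0.6}

def used(node):
    return sum(load[v] for v, n in placement.items() if n == node)

def handle_failure(failed_node):
    for vnf in [v for v, n in placement.items() if n == failed_node]:
        for candidate in capacity:
            if candidate == failed_node:
                continue
            if used(candidate) + load[vnf] <= capacity[candidate]:
                placement[vnf] = candidate
                print(f"moved {vnf}: {failed_node} -> {candidate}")
                break
        else:
            print(f"ALERT: no spare capacity for {vnf}; escalate to an operator")

handle_failure("node-a")  # firewall-vnf is re-placed without a 3 a.m. page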


A New Option for Moving Out of AWS: Forsythe's Data Center in 1,000 sq ft Increments

Moving out of AWS can save a lot of money. Huh?  Yes, here is one public disclosure from Moz’s CEO.

Building our private cloud

We spent part of 2012 and all of 2013 building a private cloud in Virginia, Washington, and mostly Texas.

This was a big bet with over $4 million in capital lease obligations on the line, and the good news is that it's starting to pay off. On a cash basis, we spent $6.2 million at Amazon Web Services, and a mere $2.8 million on our own data centers. The business impact is profound. We're spending less and have improved reliability and efficiency.

Our gross profit margin had eroded to ~64%, and as of December, it's approaching 74%. We're shooting for 80+%, and I know we'll get there in 2014.
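A quick back-of-envelope on those numbers, sketched in Python. Treating the spend figures as annual is my assumption; the post compares cash spend over roughly the same period.

# Rough payback math using the figures from Moz's disclosure.
aws_spend = 6.2e6           # cash spent at AWS
private_dc_spend = 2.8e6    # cash spent on their own data centers
lease_obligations = 4.0e6   # capital lease obligations for the build-out

savings = aws_spend - private_dc_spend
payback_years = lease_obligations / savings

print(f"cash savings: ${savings / 1e6:.1f}M")               # $3.4M
print(f"payback on the leases: ~{payback_years:.1f} years")  # ~1.2 years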


So you want to move out of AWS, but you dread the task of finding something the right size. A cage in a colocation facility? Seems too old school. A wholesale pod? Too big, and you aren't ready to jump into managing your own electrical and mechanical infrastructure.

How about 1,000 sq ft of data center space configured exactly the way you want? Need more? Get another 1,000 sq ft. This is what Forsythe Data Centers has announced with its latest data center, offering the solution in the middle of this table.

[Image: table of data center options, with Forsythe's offering in the middle]



“Forsythe’s facility offers the flexibility and agility of the retail data center market, in terms of size and shorter contract length, with the privacy, control and density of large-scale, wholesale data centers,” said Albert Weiss, president of Forsythe Data Centers, Inc., the new subsidiary managing the center. He is also Forsythe Technology’s executive vice president and chief financial officer.

I got a chance to talk to Steve Harris, and the flexibility for customers to have multiple suites designed exactly to support their gear is a dream come true for anyone who knows that one size fits all usually means you are wasting money somewhere. You could have one suite just for storage, tape backup, and other gear that is more sensitive to heat. Gear that tolerates higher temperatures could be right next to the storage suite. You could have a higher level of redundancy for some equipment, and less for the equipment in another suite.

And just like the cloud, adding capacity is so much easier than saying "I need to move to a bigger cage." Just add another suite.

How much power do you want per rack?  What’s a suite look like?

[Image: suite layout and power-per-rack options]
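While we wait to see the actual spec sheet, here is a back-of-envelope sizing sketch for one 1,000 sq ft suite. The footprint and density figures below are common rules of thumb, not numbers from Forsythe's announcement.

# Rough suite sizing: how many racks and how much critical load fit in
# 1,000 sq ft at a chosen power density.
suite_sq_ft = 1000
sq_ft_per_rack = 30   # rack footprint plus aisle and clearance space
kw_per_rack = 8       # pick the density your gear actually needs

racks = suite_sq_ft // sq_ft_per_rack
total_kw = racks * kw_per_rack
print(f"~{racks} racks, ~{total_kw} kW of critical load per suite")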

Oh yeah, and the data center is green too.

The facility is being designed to comply with U.S. Green Building Council LEED certification standards for data centers and to obtain Tier III certification from the Uptime Institute, a certification currently held by approximately 60 data centers in the U.S. and 360 worldwide, few of which are colocation facilities.

The Transformation from Hardware to Software-Based Operations: AT&T's Network Transformation

I started my career in manufacturing and distribution logistics, then moved to hardware, and eventually to operating systems and other software. Most of what drove the changes in what I do is that I got bored and was looking to learn new things. But most people don't like change; they like the predictability of knowing what needs to be done.

In AT&T’s Domain 2.0 document is a long list of transitions they plan to make, going from a hardware approach to a software approach.

I don’t know about you, but I like the right side of the list much better than the left side. The left side is easier from a micromanagement perspective, but it misses the customer focus that dominates the right side.

[Image: AT&T Domain 2.0 list of transitions from hardware-based to software-based operations]

AT&T Announces It Is Embracing Cloud Principles for Its Network

AT&T announced its User-Defined Network Cloud, a name which is kind of puzzling. So the current network is a non-user-defined environment of specialized equipment, where people (mostly men) picked their favorite gear from self-serving perspectives, thinking of their own jobs, and users were supposed to trust these people to deliver the experience they wanted? That was the old way, but technology is moving too fast and users' expectations keep growing. Here is a graphic that illustrates the change AT&T is making.

[Image: graphic illustrating AT&T's shift to the User-Defined Network Cloud]

NFV aims to address these problems by evolving standard IT virtualization technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage that can be located in data centers, network PoPs or on customer premises. As shown in Figure 2, this involves the implementation of network functions in software, called VNFs, that can run on a range of general purpose hardware, and that can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new equipment.

The document that has this graphic is here.
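To get a feel for what "network functions in software that can be moved to, or instantiated in, various locations" means, here is a toy sketch from an orchestrator's point of view. The class, image names, and locations are hypothetical; real NFV orchestration is far richer.

from dataclasses import dataclass

@dataclass
class VNF:
    name: str
    image: str      # VM or container image implementing the network function
    location: str   # data center, network PoP, or customer premises

def instantiate(vnf):
    print(f"booting {vnf.image} as '{vnf.name}' at {vnf.location}")

def migrate(vnf, new_location):
    # No truck roll and no new hardware: boot the software somewhere else.
    print(f"moving '{vnf.name}': {vnf.location} -> {new_location}")
    vnf.location = new_location

edge_firewall = VNF("edge-firewall", "vfw-image-1.2", "network-pop-chicago")
instantiate(edge_firewall)
migrate(edge_firewall, "customer-premises-4471")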

Here is another graphic that shows the change.

[Image: another graphic showing the change]

Here is the blog post from AT&T’s John Donovan.  I think if John had added these simple graphics to his blog post it would have communicated much more clearly what AT&T is doing.

I found this information thanks to a post by GigaOm's Kevin Fitchard.

Software is eating the mobile network, too, as AT&T begins its journey into the cloud


AT&T is taking the first steps toward transforming its network into a data center. It’s not touching the cellular network — at least not yet — but it will start virtualizing its mobile core and application infrastructure.