Slate.com dives into the details on Healthcare.gov issues, discusses back-end server issues

Healthcare.gov's availability and usability have been in the news since the Oct 1 launch.

Slate.com has a post on what is behind the problems.

The author found Oracle DB errors like this one:

“Error from: https%3A//www.healthcare.gov/oberr.cgi%3Fstatus%253D500%2520errmsg%253DErrEngineDown%23signUpStepOne.”

To translate, that’s an Oracle database complaining that it can’t do a signup because its “engine” server is down. So you can see Web pages with text and pictures, but the actual meat-and-potatoes account signup “engine” of the site was offline.

The post also identifies the contractors behind the client web front end and the back end.

This failure points to the fundamental cause of the larger failure, which is the end-to-end process. That is, the front-end static website and the back-end servers (and possibly some dynamic components of the Web pages) were developed by two different contractors. Coordination between them appears to have been nonexistent, or else front-end architect Development Seed never would have given this interview to the Atlantic a few months back, in which they embrace open-source and envision a new world of government agencies sharing code with one another. (It didn’t work out, apparently.) Development Seed now seems to be struggling to distance themselves from the site’s problems, having realized that however good their work was, the site will be judged in its totality, not piecemeal. Back-end developers CGI Federal, who were awarded a much larger contract in 2010 for federal health care tech, have made themselves rather scarce, providing no spokespeople at all to reporters. Their source code isn’t available anywhere, though I would dearly love to take a gander (and so would Reddit). I fear the worst, given that CGI is also being accused of screwing up Vermont’s health care website.

Part of the reason this post makes sense and is well researched is that it was written by a software developer.

 

About

I am a writer and software engineer. I’ve worked for Google and Microsoft. I live in New York with several thousand books. I have contributed to Slate, the Times Literary Supplement, The Nation, n+1, Bookforum, Triple Canopy, The Quarterly Conversation, and elsewhere.

The closing remarks are proof the author knows what he is talking about.

Bugs can be fixed. Systems can even be rearchitected remarkably quickly. So nothing currently the matter with healthcare.gov is fatal. But the ability to fix it will be affected by organizational and communication structures. People are no doubt scrambling to get healthcare.gov into some semblance of working condition; the fastest way would be to appoint a person with impeccable engineering and site delivery credentials to a government position. Give this person wide authority to assign work and reshuffle people across the entire project and all contractors, and keep his schedule clean. If you found the right person—often called the “schedule asshole” on large software projects—things will come together quickly. Sufficient public pressure will result in things getting fixed, but the underlying problems will most likely remain, due to the ossified corporatist structure of governmental contracts.

Have you noticed you don't need to restart your smartphone as much as you used to?

Isn't it sad that the device you most often restart at home is your internet connection and/or router?

When you first got a cell phone, you didn't think about restarting it.  Then with the iPhone, Windows Phone, and Android, you got used to it.  At least I got in the habit of regularly restarting my phone.

The one time I know I need to restart my phone is during an update, and those are usually only once every 3 months.

Hopefully this is a trend and the smartphone will just work all the time.  Huh, I wonder what the availability % of a phone is.  99.7%?  99.8%?  99.9%?
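To get a feel for what those percentages would mean, here is a quick sketch that converts an availability percentage into allowed downtime per year. The percentages are just my guesses from above, not measured numbers:

```python
# Rough sketch: translate an availability percentage into allowed
# downtime per year. The percentages below are guesses, not measurements.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours_per_year(availability_pct: float) -> float:
    """Hours per year a device can be down at a given availability %."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.7, 99.8, 99.9):
    print(f"{pct}% availability -> {downtime_hours_per_year(pct):.2f} hours down per year")
```

So even a "99.9% available" phone could be unusable for almost nine hours a year.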

iOS 7 creates an iMessage bug; poor users have no idea

I have an iPhone 5 and am surrounded by iPhone users.  With iOS 7 there is an iMessage bug that prevents messages from being sent.

Last night I tried this workaround and it worked.  The only hassle is that I needed to re-enter my Wi-Fi passwords.

How to fix iMessage not working in iOS 7 - three simple steps

iMessage texts not sending? Apple promises fix but here's a simple fix.

Apple has now admitted that there's a bug in its new iOS 7 operating system for iPhone and iPad that stops iMessage sending text messages. Here's a simple three-step fix that seems to be working for many of those affected.


Unfortunately, I need to apologize to a few people for my texts not getting through.

After the above workaround I could see which text messages were not sent.

I'm sure there will be a bug fix for this soon, but until then the workaround worked for me.

FUBAR & SNAFU are words for NSA's Utah Data Center's bad habit of frequent Arc Flash events

WSJ has an article that covers the electrical problems that the NSA data center is having.

What comes to mind are the military acronyms - FUBAR and SNAFU.  

SNAFU is a military slang acronym meaning "Situation Normal: All Fucked Up".

FUBAR stands for "fucked up beyond all recognition/repair/reason" and, like SNAFU and SUSFU, dates from World War II. The Oxford English Dictionary lists Yank, the Army Weekly magazine (1944, 7 Jan. p. 8) as its earliest citation: "The FUBAR squadron. ‥ FUBAR? It means 'Fouled Up Beyond All Recognition.'" NFG is equipment that is not functional but may or may not be repairable; FUBAR is beyond repair.

Here are some points from the WSJ article below.  Can you imagine the size of the analysis documents for these outages?  It would probably take weeks to read them, and they would make your brain hurt.

Meltdowns Hobble NSA Data Center

Investigators Stumped by What's Causing Power Surges That Destroy Equipment

 

Chronic electrical surges at the massive new data-storage facility central to the National Security Agency's spying operation have destroyed hundreds of thousands of dollars worth of machinery and delayed the center's opening for a year, according to project documents and current and former officials.


There have been 10 meltdowns in the past 13 months that have prevented the NSA from using computers at its new Utah data-storage center, slated to be the spy agency's largest, according to project documents reviewed by The Wall Street Journal.


Sounds like there is a lot of ass-covering and finger-pointing going on.

But another government assessment concluded the contractor's proposed solutions fall short and the causes of eight of the failures haven't been conclusively determined. "We did not find any indication that the proposed equipment modification measures will be effective in preventing future incidents," said a report last week by special investigators from the Army Corps of Engineers known as a Tiger Team.

The architectural firm KlingStubbins designed the electrical system. The firm is a subcontractor to a joint venture of three companies: Balfour Beatty Construction, DPR Construction and Big-D Construction Corp. A KlingStubbins official referred questions to the Army Corps of Engineers.

The joint venture said in a statement it expected to submit a report on the problems within 10 days: "Problems were discovered with certain parts of the unique and highly complex electrical system. The causes of those problems have been determined and a permanent fix is being implemented."

There have been 10 arc flash events since Aug 2012.

The first arc fault failure at the Utah plant was on Aug. 9, 2012, according to project documents. Since then, the center has had nine more failures, most recently on Sept. 25. Each incident caused as much as $100,000 in damage, according to a project official.

It took six months for investigators to determine the causes of two of the failures. In the months that followed, the contractors employed more than 30 independent experts that conducted 160 tests over 50,000 man-hours, according to project documents.
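A quick back-of-the-envelope on those figures. Note these are upper bounds, not exact numbers: the article only says "as much as $100,000" per incident and "more than 30" experts:

```python
# Back-of-the-envelope on the WSJ figures. Upper bounds, not exact numbers:
# the article says "as much as $100,000" per incident.
incidents = 10
max_damage_per_incident = 100_000  # dollars
worst_case_damage = incidents * max_damage_per_incident
print(f"Worst-case damage: ${worst_case_damage:,}")

experts = 30       # "more than 30 independent experts"
man_hours = 50_000
hours_per_expert = man_hours / experts
print(f"Roughly {hours_per_expert:,.0f} hours per expert")
```

That works out to roughly a million dollars of damage and, spread evenly, well over a thousand hours of investigation per expert.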

 

Ouch, 572 total views for 6 Microsoft GFS videos over the past 4 months

I ran across some of the Microsoft GFS videos.  Some have had a lot of views.


Then I saw the list of the last 6 videos over the past 4 months, and the total views = 109 + 56 + 69 + 131 + 47 + 160 = 572.  I've posted videos of my kids at a school play that have more views. :-)
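The arithmetic is easy to sanity-check with a one-liner (the counts are taken straight from the video listing):

```python
# View counts for the 6 most recent MSGFSTeam videos, from the listing.
views = [109, 56, 69, 131, 47, 160]
print(sum(views))  # 572
```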

  • MSGFSTeam uploaded and posted

    Business Impact

    • 4 months ago
    • 109 views
    Cloud computing reduces business costs by delivering the software, platform, and IT infrastructure services via the Internet. It not only reduces the need for server and storage ca…
  • MSGFSTeam uploaded and posted

    Sustainability

    • 4 months ago
    • 56 views
    Microsoft is committed to driving software and hardware innovations that help people and organizations reduce their impact on the environment. Consumers and business customers…
  • MSGFSTeam uploaded and posted

    Customize your message

    Cloud-Scale Reliability

    • 4 months ago
    • 69 views
    Cloud computing reduces business costs by delivering the software, platform, and IT infrastructure services via the Internet. Delivering these integrated services at cloud-scale re…
  • MSGFSTeam uploaded and posted

    Securing the Microsoft Cloud

    • 4 months ago
    • 131 views
    Watch Pete Boden, General Manager for Online Services Security and Compliance, as he discusses top concerns we hear from customers around security, privacy and compliance. P…
  • MSGFSTeam uploaded and posted

    Sustainability in the Microsoft Cloud

    • 4 months ago
    • 47 views
    Join Rob Bernard, Microsoft's Chief Environmental Strategist, as he discusses our commitment to sustainability, and how technology helps Microsoft and our customers achieve more susta…
  • MSGFSTeam uploaded and posted

    Server Design Strategy

    • 4 months ago
    • 160 views
    Watch Kushagra Vaid, General Manager of Data Center Compute Infrastructure, as he discusses how we engineer the array of servers and storage devices that our process cloud-…

Some of the older videos have way more views.  I wonder what happened to decrease the viewership.

  • MSGFSTeam uploaded a video

    Microsoft GFS Datacenter Tour

    • 2 years ago
    • 305,251 views
    This video will provide a deeper look at how Microsoft uses secure, reliable, scalable and efficient best practices to deliver over 200 cloud services to more than a billion customers a…
  • MSGFSTeam uploaded a video

    Future Datacenter Sustainability

    • 2 years ago
    • 675 views
    How should cloud service providers and enterprises be thinking about efficiency and sustainability five to ten years out? Microsoft is researching new ways to measure efficiency …
  • MSGFSTeam uploaded a video

    Microsoft's Modular Datacenter

    • 2 years ago
    • 7,107 views
    Microsoft's Modular Datacenter Overview of the Quincy, WA Facilit…
  • MSGFSTeam uploaded a video

    ITPAC VIDEO

    • 2 years ago
    • 7,308 views
    A video describing the Generation 4 Modular Data Center plans. This is our vision and will be the foundation of our cloud data center infrastructure in the next five years. We believe it is o…
  • MSGFSTeam uploaded a video

    Chicago Container Bay

    • 2 years ago
    • 2,052 views
    Container Bays at the Chicago Datacenter…