3 Forces for the Magic of Insight

I posted about the book Seeing What Others Don't, which focuses on developing insights.

The last paragraph in the book is:

The magic of insights stems from the force for noticing connections, coincidences, and curiosities; the force for detecting contradictions; and the force of creativity unleashed by desperation.  That magic lives inside us, stirring restlessly.

That's too many words.  I like to think of it this way:

The three secrets for achieving the magic of insight are seeing patterns, recognizing anomalies, and tapping the source of trying what others haven't.

Slate.com dives into the details of Healthcare.gov's problems, discussing back-end server issues

Healthcare.gov's availability and usability have been in the news since its Oct 1 launch.

Slate.com has a post on what is behind the problems.

They are finding Oracle DB errors.

“Error from: https%3A//www.healthcare.gov/oberr.cgi%3Fstatus%253D500%2520errmsg%253DErrEngineDown%23signUpStepOne.”

To translate, that’s an Oracle database complaining that it can’t do a signup because its “engine” server is down. So you can see Web pages with text and pictures, but the actual meat-and-potatoes account signup “engine” of the site was offline.
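You can see the same thing yourself: the error URL in the quote is percent-encoded twice (e.g. %253D decodes to %3D, which decodes to "="), so running it through a standard URL decoder twice reveals the underlying error. A minimal sketch using Python's urllib:

```python
# The error URL from the Slate piece is percent-encoded twice,
# so decoding it two times reveals the actual error message.
from urllib.parse import unquote

raw = ("https%3A//www.healthcare.gov/oberr.cgi%3Fstatus%253D500"
       "%2520errmsg%253DErrEngineDown%23signUpStepOne")

decoded = unquote(unquote(raw))
print(decoded)
# -> https://www.healthcare.gov/oberr.cgi?status=500 errmsg=ErrEngineDown#signUpStepOne
```

The decoded string shows a status of 500 (server error) and errmsg=ErrEngineDown, which is exactly the "engine server is down" condition the article describes.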

The post also covers who the contractors are for the client web front end and the back end.

This failure points to the fundamental cause of the larger failure, which is the end-to-end process. That is, the front-end static website and the back-end servers (and possibly some dynamic components of the Web pages) were developed by two different contractors. Coordination between them appears to have been nonexistent, or else front-end architect Development Seed never would have given this interview to the Atlantic a few months back, in which they embrace open-source and envision a new world of government agencies sharing code with one another. (It didn’t work out, apparently.) Development Seed now seems to be struggling to distance themselves from the site’s problems, having realized that however good their work was, the site will be judged in its totality, not piecemeal. Back-end developers CGI Federal, who were awarded a much larger contract in 2010 for federal health care tech, have made themselves rather scarce, providing no spokespeople at all to reporters. Their source code isn’t available anywhere, though I would dearly love to take a gander (and so would Reddit). I fear the worst, given that CGI is also being accused of screwing up Vermont’s health care website.

Part of the reason this post makes sense and is well researched is that it was written by a software developer.

 

About

I am a writer and software engineer. I’ve worked for Google and Microsoft. I live in New York with several thousand books. I have contributed to Slate, the Times Literary Supplement, The Nation, n+1, Bookforum, Triple Canopy, The Quarterly Conversation, and elsewhere.

The closing remarks are proof the author knows what he is talking about.

Bugs can be fixed. Systems can even be rearchitected remarkably quickly. So nothing currently the matter with healthcare.gov is fatal. But the ability to fix it will be affected by organizational and communication structures. People are no doubt scrambling to get healthcare.gov into some semblance of working condition; the fastest way would be to appoint a person with impeccable engineering and site delivery credentials to a government position. Give this person wide authority to assign work and reshuffle people across the entire project and all contractors, and keep his schedule clean. If you found the right person—often called the “schedule asshole” on large software projects—things will come together quickly. Sufficient public pressure will result in things getting fixed, but the underlying problems will most likely remain, due to the ossified corporatist structure of governmental contracts.

Have you noticed you don't need to restart your smartphone as much as you used to?

Isn't it sad that the device you restart most often at home is your internet connection and/or router?

When you first got a cell phone you didn't think about restarting it.  Then with the iPhone, Windows Phone, and Android you got used to it.  At least I got in the habit of regularly restarting my phone.

The one time I know I need to restart my phone is during an update, and those are usually only once every 3 months.

Hopefully this is a trend and the smartphone will just work all the time.  Huh, I wonder what the availability percentage of a phone is.  99.7%?  99.8%?  99.9%?
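As a rough sketch of what those percentages would mean in practice, here is the back-of-the-envelope arithmetic converting each availability level into downtime per month, assuming a 30-day month:

```python
# Back-of-the-envelope: how much downtime per month each
# availability level allows, assuming a 30-day month (43,200 minutes).
MINUTES_PER_MONTH = 30 * 24 * 60

for pct in (99.7, 99.8, 99.9):
    downtime = MINUTES_PER_MONTH * (1 - pct / 100)
    print(f"{pct}% uptime -> ~{downtime:.0f} min of downtime/month")
# 99.7% -> ~130 min, 99.8% -> ~86 min, 99.9% -> ~43 min
```

So even at 99.9%, a phone would be allowed the equivalent of one reboot-and-restore cycle of about 43 minutes a month.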

iOS7 creates an iMessage bug, poor users have no idea

I have an iPhone 5 and am surrounded by iPhone users.  With iOS7 there is an iMessage bug that prevents messages from being sent.

Last night I tried this workaround and it worked.  The only hassle is I needed to re-enter my Wi-Fi passwords.

How to fix iMessage not working in iOS 7 - three simple steps

iMessage texts not sending? Apple promises a fix, but here's a simple fix in the meantime.

Apple has now admitted that there's a bug in its new iOS 7 operating system for iPhone and iPad that stops iMessage sending text messages. Here's a simple three-step fix that seems to be working for many of those affected.


Unfortunately, I need to apologize to a few people for my texts not getting through.

After applying the workaround I could see which text messages had not been sent.

I'm sure there will be a bug fix for this soon, but until then the workaround works for me.

FUBAR & SNAFU are words for NSA's Utah Data Center's bad habit of frequent Arc Flash events

WSJ has an article that covers the electrical problems that the NSA data center is having.

What comes to mind are the military acronyms - FUBAR and SNAFU.  

SNAFU is a military slang acronym meaning "Situation Normal: All Fucked Up".

FUBAR stands for "fucked up beyond all recognition/repair/reason" and, like SNAFU and SUSFU, dates from World War II. The Oxford English Dictionary lists Yank, the Army Weekly magazine (1944, 7 Jan. p. 8) as its earliest citation: "The FUBAR squadron. ‥ FUBAR? It means 'Fouled Up Beyond All Recognition.'" NFG is equipment that is not functional, but may or may not be repairable; FUBAR is beyond repair.

Here are some points made in the WSJ article below.  Can you imagine the size of the analysis documents for these outages?  They would probably take weeks to read and make your brain hurt.

Meltdowns Hobble NSA Data Center

Investigators Stumped by What's Causing Power Surges That Destroy Equipment

 

Chronic electrical surges at the massive new data-storage facility central to the National Security Agency's spying operation have destroyed hundreds of thousands of dollars worth of machinery and delayed the center's opening for a year, according to project documents and current and former officials.


There have been 10 meltdowns in the past 13 months that have prevented the NSA from using computers at its new Utah data-storage center, slated to be the spy agency's largest, according to project documents reviewed by The Wall Street Journal.

Sounds like there is a lot of ass-covering and finger-pointing going on.

But another government assessment concluded the contractor's proposed solutions fall short and the causes of eight of the failures haven't been conclusively determined. "We did not find any indication that the proposed equipment modification measures will be effective in preventing future incidents," said a report last week by special investigators from the Army Corps of Engineers known as a Tiger Team.

The architectural firm KlingStubbins designed the electrical system. The firm is a subcontractor to a joint venture of three companies: Balfour Beatty Construction, DPR Construction and Big-D Construction Corp. A KlingStubbins official referred questions to the Army Corps of Engineers.

The joint venture said in a statement it expected to submit a report on the problems within 10 days: "Problems were discovered with certain parts of the unique and highly complex electrical system. The causes of those problems have been determined and a permanent fix is being implemented."

There have been 10 arc flash events since Aug 2012.

The first arc fault failure at the Utah plant was on Aug. 9, 2012, according to project documents. Since then, the center has had nine more failures, most recently on Sept. 25. Each incident caused as much as $100,000 in damage, according to a project official.

It took six months for investigators to determine the causes of two of the failures. In the months that followed, the contractors employed more than 30 independent experts that conducted 160 tests over 50,000 man-hours, according to project documents.