Apple's $1 billion Reno Data Center is 1/66th built, $14.85 mil tax assessment

The media was all over Apple’s $1 bil data center in Reno.

Apple’s largest project in the area, however, is its new $1 billion iCloud data center that includes both a large, rural parcel of land within the nearby Reno Technology Park and plans to build new supporting facilities on the edge of downtown.

I don’t know about you, but my friends had a good laugh at speculation that Apple was building $1 billion worth of data centers in Reno.

Well, here is a progress check based on RGJ reporting on the property taxes for the Reno data center.

Meanwhile, the data center is projected to generate more than $142,000 in property taxes for the 2013-14 fiscal year, according to the Department of Taxation. The amount is generated from a total taxable value of $14.85 million based on Washoe County Assessor data.

This construction cost fits with a 2.5 MW data center build.

Today, thanks to its high-profile tenant, the Reno Technology Park is starting to take shape. Earlier this summer, Apple completed its first phase — a 20,000-square-foot, 2.5-megawatt data center.

That is the wonderful power of the Apple brand.  Who else could get that amount of press coverage for a 2.5 MW, $15 mil data center building?  If you say this is the first of four phases, Reno could be 10 MW and $60 mil.  If this is one tenth, then you get 25 MW and $150 mil.  To get to a $1 bil data center you need to multiply this first building by 66.
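For the curious, here is a minimal back-of-the-envelope sketch of that scaling. It assumes the roughly $15 mil first building is representative of every later building on the site, which is my assumption, not anything Apple has said.

```python
# Back-of-the-envelope scaling for the Reno site (my assumptions, not
# Apple's plans): the first building is 2.5 MW and roughly $15 mil, and
# every later building is about the same size and cost.
first_building_mw = 2.5
first_building_cost_mil = 15  # ~ the $14.85 mil taxable value, rounded up

for buildings in (4, 10, 66):
    mw = buildings * first_building_mw
    cost_mil = buildings * first_building_cost_mil
    print(f"{buildings} buildings -> {mw} MW, ${cost_mil} mil")

# 4 buildings -> 10.0 MW, $60 mil
# 10 buildings -> 25.0 MW, $150 mil
# 66 buildings -> 165.0 MW, $990 mil
```

Under those assumptions, $1 bil of buildings would also mean roughly 165 MW of capacity, which puts the size of the claim in perspective.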

Wouldn't it be great if IT services behaved like well-mannered house guests, respecting territorial boundaries

The holiday season is a time of travel, which means either you are a guest at someone’s house or you are the host of guests.  Benjamin Franklin is cited as the originator of the classic quote, and here is an article in Psychology Today that builds on the concept.

Benjamin Franklin famously said that guests, like fish, begin to smell after three days. Many of us are inclined to agree. I myself recently struggled to share my space and resources with a houseguest. I wanted to be hospitable yet I experienced an unexpectedly inhospitable reaction to my mackerel-like guest (herein known as “Mack”).  The dissonance was intense. What was up with that? Fortunately, my psychology arsenal includes tools from the psychology subdiscipline of environmental psychology. It is there we find theories and research on human territoriality that explain the trouble with houseguests (at least some of it!). 

What comes to mind, though, is: wouldn’t life be better if IT services behaved like well-mannered house guests and respected the territorial boundaries of the host?

How many of you are frustrated when a new IT service changes your routine?

Houseguests then, are stressful to the extent that they disrupt our routines and usurp the high amount of control we normally enjoy in this personal territory. If their routines interfere with ours or if their presence restricts our normal uses of home spaces, stress is likely. 

Unfortunately, new IT services don’t leave the way a house guest does, so their habits now influence yours.

How many of you think some of the IT services that come in leave a bad smell in your clean operations?

Of course, territoriality isn’t the whole picture. Among other things, increased household labor also makes guests “smelly” (often more of an issue for women in traditionally gendered households where they bear the brunt of cooking and cleaning). The moral of this story: if you want to stay a welcome houseguest, it probably pays to respect your host’s home as a primary territory, and to keep your visit short.

Danger, Yahoo Mail is having the T-Mobile Sidekick experience that sank the service

If you hang around the hot things in technology, it is easy to believe that email is dead.  I don’t know about you, but e-mail is part of how I communicate.  Many young people have dropped their e-mail accounts as their friends use social media.  Yahoo is finding out how important mail is with days of outages that appear to have no end in sight.

This event has the possibility of being as big a disaster as Microsoft’s Danger T-Mobile Sidekick outage/data loss that caused users to drop the service.

October 2009 data loss

In early October 2009, a server malfunction or technician error at Danger's data centers resulted in the loss of all Sidekick user data. As Sidekicks store users' data on Danger's servers—versus using local storage—users lost contact directories, calendars, photos, and all other media not locally backed up. Local backup could be accomplished through an app ($9.99 USD) which synchronized contacts, calendar, and tasks, but not notes, between the web and a local Windows PC. In an October 10 letter to subscribers, Microsoft expressed its doubt that any data would be recovered.[6]

The customer's data that was lost was being hosted in Microsoft's data centers at the time.[7] Some media reports have suggested that Microsoft hired Hitachi to perform an upgrade to its storage area network (SAN), when something went wrong, resulting in data destruction.[8] Microsoft did not have an active backup of the data and it had to be restored from a month-old copy of the server data, totalling 800GB in size, from offsite backup tapes. The entire restoration of data took over 2 months for customer data and full functionality to be restored.[9]

The Danger/Sidekick episode is one in a series of cloud computing mishaps that have raised questions about the reliability of such offerings.[10]

When you look at the causes of a major outage, you will eventually trace them back to operations.  The initial Yahoo Mail outage was caused by a hardware failure.  Marissa Mayer has posted the latest update as of 5 p.m. today.

The initial failure was in a storage system.

On Monday, December 9th at 10:27 p.m. PT, our network operating center alerted the Mail engineering team to a specific hardware outage in one of our storage systems serving 1% of our users. The Mail team immediately started working with the storage engineers to restore access and move to our back-up systems, estimating that full recovery would be complete by 1:30 p.m. PT on Tuesday.

So, Yahoo fixes the problem, but restoring service is not simple because users are affected in a wide range of ways.

However, the problem was a particularly rare one, and the resolution for the affected accounts was nuanced since different users were impacted in different ways. Some of the affected users were unable to access their accounts, instead seeing an outdated “scheduled maintenance” page which was a confusing and incorrect message (this has since been corrected and updated). Further, messages sent to those accounts during this time were not delivered, but held in a queue.

Now the service is running unless you use IMAP.  What is IMAP?  It is the way many mail clients, mobile and desktop, download mail, but it is not as simple as POP.

While IMAP remedies many of the shortcomings of POP, this inherently introduces additional complexity. Much of this complexity (e.g. multiple clients accessing the same mailbox at the same time) is compensated for by server-side workarounds such as Maildir or database backends.

The IMAP specification has been criticised for being insufficiently strict and allowing behaviours that effectively negate its usefulness. For instance, the specification states that each message stored on the server has a "unique id" to allow the clients to identify the messages they have already seen between sessions. However, the specification also allows these UIDs to be invalidated with no restrictions, practically defeating their purpose.[13]

Unless the mail storage and searching algorithms on the server are carefully implemented, a client can potentially consume large amounts of server resources when searching massive mailboxes.
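To make the UID issue concrete, here is a minimal sketch of how a typical mail client syncs over IMAP, using Python's standard imaplib. The hostname, credentials, and cached last-seen UID are placeholders, not Yahoo specifics. A client keys its local cache on (UIDVALIDITY, UID); if the server invalidates UIDVALIDITY, the cache is useless and the mailbox has to be re-downloaded, which is the fragility the criticism above describes.

```python
import imaplib

# Placeholder connection details -- not Yahoo's actual settings.
HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "app-password"
last_seen_uid = 1000  # would normally come from the client's local cache

imap = imaplib.IMAP4_SSL(HOST)
imap.login(USER, PASSWORD)
imap.select("INBOX", readonly=True)

# Clients cache messages keyed by (UIDVALIDITY, UID). If the server changes
# UIDVALIDITY, every cached UID is invalid and the mailbox must be re-synced.
typ, data = imap.status("INBOX", "(UIDVALIDITY)")
print("Mailbox status:", data[0])

# Ask only for messages newer than the last UID seen in a previous session.
typ, data = imap.uid("SEARCH", None, f"UID {last_seen_uid + 1}:*")
new_uids = data[0].split()
print(f"{len(new_uids)} new messages since UID {last_seen_uid}")

# Download just the subject headers for the new messages.
for uid in new_uids:
    uid_str = uid.decode()
    typ, parts = imap.uid("FETCH", uid_str, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
    print(uid_str, parts[0])

imap.logout()
```

The incremental sync is exactly what makes IMAP heavier on the server side than POP: the server has to track flags, UIDs, and concurrent clients rather than just hand over the mailbox and forget about it.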

Users don’t care about these details of IMAP.  Marissa closes her status with the following.  Will that make the users who still can’t get their mail through IMAP feel better?

Above all else, we’re going to be working hard on improvements to prevent issues like this in the future. While our overall uptime is well above 99.9%, even accounting for this incident, we really let you down this week.

We can, and we will, do better in the future.
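For a sense of scale, here is the plain arithmetic behind an uptime percentage like "well above 99.9%" (generic numbers, not Yahoo's actual figures):

```python
# Convert an uptime percentage into the downtime it allows per year.
HOURS_PER_YEAR = 24 * 365

for uptime_pct in (99.9, 99.99, 99.999):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime allows about {downtime_hours:.1f} hours of downtime a year")

# 99.9%   -> about 8.8 hours a year
# 99.99%  -> about 0.9 hours (~53 minutes) a year
# 99.999% -> about 0.1 hours (~5 minutes) a year
```

By that yardstick, an outage that drags on for days consumes years' worth of a 99.9% downtime budget for the affected accounts, which is probably why the aggregate uptime figure offers little comfort to them.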

It’s still not clear what is going to happen for those users whose email is accessed through IMAP.

Here's a bit of an odd point: why is it inevitable that Google will build a data center in Hong Kong?

Many have blogged about Google choosing not to build a data center in Hong Kong.

Then I saw this post on why this event is odd.

Google sets aside Hong Kong data center plans. Here’s why that’s a bit odd


An aisle of servers in a Google data center

The point made is that Google will be in every country, so they will return to Hong Kong.

And because high-speed service has been a priority for Google product managers over the years, it’s not hard to imagine that one day a Google data center could be located in every country, or every other country. Google’s got the money and the demand, so while a Hong Kong data center might be off the roadmap for the immediate future, it will probably open someday.

 

I guess you can imagine that there are users in every country in the world using Google.  Oh, that is pretty much true except where Russia, China, and other countries may make it difficult for Google to do business.  But, given Google has users in every country, does the company need to have data centers in every country?

What's wrong with so many operations: not seeing the flaws of human intervention

I just spent some time in an operations discussion, and I quickly realized the path the team was taking was wrong.  It was a classic enterprise IT system approach: collect all the requirements, get all the people on board, hold lots of meetings, and create an enterprise IT solution that meets the requirements.  Spend millions of dollars on the system and pray that it will deliver.

What is wrong?

Lots of process.  More people adding more errors.  Everything moves at the pace of meetings, limited by how fast people can type and review.

Another example of how things don’t work is in hospital care.  Here is a New York Times op-ed piece.

 

More Treatment, More Mistakes

 

 


DOCTORS make mistakes. They may be mistakes of technique, judgment, ignorance or even, sometimes, recklessness. Regardless of the cause, each time a mistake happens, a patient may suffer. We fail to uphold our profession’s basic oath: “First, do no harm.”

The piece closes with a possible solution to the problem.

Hospitals are supposed to take care of the sickest members of our society and uphold the highest standards of patient care. But hospitals are also charged with teaching doctors, and every doctor has a first mistake. The only thing we can do is learn each time one happens, and reduce future errors in the process. Having a consistent gathering to talk about the mistakes goes a long way toward that goal, and just about any institution, public or private, could benefit from a tradition like M and M. It is not enough to stop the practice of defensive medicine, but when doctors are asked by their colleagues to justify the tests they ordered and the procedures they performed, perhaps they will be reminded that more is not always better.

It is amazing how so many systems are not focused on catching errors and addressing them.  The #1 mistake I see is when people can’t see that the system itself is full of human errors.  How can you run operations with an IT system that introduces more errors on top of the problems you are trying to fix?

The bureaucracy of the Vietnam War comes to mind as something that introduced more problems than it solved.


Being closer to the problem and understanding the impact is something that I think works better.
