Media’s Creative Destruction – competition benefitting consumers

WSJ has an opinion article on the media's challenge of adapting to the forces that are putting more power in the hands of consumers.

Media Moguls and Creative Destruction

Competition works to the benefit of consumers, not producers.

  • By L. GORDON CROVITZ


For media, this is the best of times and the worst. The best because the cost to publish news, make a video or distribute a song has never been lower. But also the worst because it's hard to find a company, new or old media, that has emerged with a sustainable business model. Consumers are left wondering how much longer their favorite sources of news and entertainment will be around.

    The most recent stark contrast was between the $1 billion valuation for pre-revenue startup Twitter and the shutdown of the iconic Gourmet magazine. A new book provocatively entitled "The Curse of the Mogul: What's Wrong with the World's Leading Media Companies" explains how digital technology has undermined almost all media and why this creative destruction is set to continue.


    The book, by investment banker Jonathan Knee and business strategists Bruce Greenwald and Ava Seave, details the dismal performance of almost all large media companies going back to the 1990s. It documents how and why the Internet wasn't the hoped-for savior of newspapers, magazines or broadcasters, and why almost all digital media executives have also found it hard to build companies that last.

    One beneficiary of this competition has been the data center market.

    We should keep in mind that people are consuming more media than ever, all day and in real time through many new outlets. Content creators from musicians to authors can sidestep the middlemen who were once required to package and deliver the content. This means that as consumers, we have unprecedented choice in many areas.

    Media companies also have options. They can become more efficient, find new revenue streams from their most engaged consumers, and add new services.

    Still, no one knows which brands will survive in a world where the traditional advantages are the new disadvantages and where so many new-media companies don't survive the pace of change they helped accelerate. The challenge for all media—old and new—is the same, even if the difficulty level is higher than ever before: Focus on what makes each brand different and more valuable than the ever-increasing number of alternatives that technology makes inevitable.

    How many media companies own big rooms with printing presses that dwarf their data centers?

    The media have gone through periods of disruption before—from the development of the telegraph, telephone, radio and then television, but never at this pace. "Sometimes differences in degree become differences in kind," Mr. Knee says. "Never has there been this much fundamental change across so many sectors in such a short period of time." Moreover, the former competitive advantages of printing presses and unique access to large audiences have become costly liabilities.


    Cloud Computing PR disaster - Failure Sinks the Server in Microsoft/Danger’s Client/Server Model – Client Data unrecoverable

    In spite of all the effort spent on disaster recovery and redundancy, it is amazing how fragile IT systems can be. The latest disaster is the T-Mobile Sidekick, built on the Microsoft-acquired Danger client/server platform.

    If T-Mobile were smart, they'd offer free account transfers to Google Android or RIM smartphones for anyone who wants to dump their Sidekick device. How a company handles an outage and survives it is a sign of maturity. One example of handling a crisis situation is Johnson & Johnson's Tylenol crisis.

    "The PR industry has an important role to play in helping companies identify and manage risks that could damage their reputation." Nick Purdom of PR Week

    THE TYLENOL CRISIS, 1982

    Johnson & Johnson survived based on the corporate credo it defined in 1943. We'll see whether T-Mobile, the Sidekick, or Microsoft's Danger unit will survive.

    The Danger platform is a client/server model.

    [Diagram: The Danger client/server model – view PDF]

    A powerful client-server architecture


    IP-based communications allow you to develop powerful web services, real-time information, and networked applications.

    • Guaranteed delivery of data
    • Powerful HTTP library
    • Device-to-device communications
    • Asynchronous network communications
    • REST, XML/RPC and SOAP
    • All application data is backed up to the Danger Service
    • Encryption and authentication are managed by Danger
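
    To make the client/server model above concrete, here is a minimal sketch of the kind of sync call that feature list implies: the handheld pushes its PIM data to the service over HTTP/REST and treats the server-side copy as authoritative. The endpoint, payload shape, and auth header are my own illustrative assumptions, not Danger's actual API.

```python
# Hypothetical sketch of a Sidekick-style device backing up its contacts to a
# cloud service over REST. The endpoint and auth scheme are made up for
# illustration; they are not Danger's actual API.
import json
import urllib.request


def backup_to_service(device_id: str, contacts: list, token: str) -> int:
    """Push the device's contact list to a (hypothetical) backup endpoint."""
    payload = json.dumps({"device_id": device_id, "contacts": contacts}).encode()
    req = urllib.request.Request(
        url=f"https://backup.example.com/v1/devices/{device_id}/contacts",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="PUT",  # idempotent: re-sending the same snapshot is safe
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # a 2xx status means the service stored the snapshot
```

    The architectural point, and the root of this outage, is that the device treats the service copy as the master; if the server-side data is lost and the device resets, there is nothing left to restore from.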

    It will be interesting to see whether we ever learn the full story. One speculation is that the data loss was caused by an attempt to upgrade the storage array without backing it up first.

    In the Danger case, it appears from initial speculation that the data was lost because they attempted to upgrade a storage array without backing it up first. Here is a case of smart and rational people who do this for a living at one of the best companies in the world, and they didn't even bother making a backup -- so what hope do we have? Relying on the cloud as a backup didn't work, because somebody forgot to backup the backup.
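
    Whatever the actual sequence of events, the lesson is operational. Below is a tiny, purely illustrative pre-change gate for the discipline the speculation says was skipped: refuse to start a storage array upgrade unless a recent, verified backup exists. The function and thresholds are hypothetical stand-ins, not any vendor's tooling.

```python
# Illustrative pre-change gate: abort a destructive maintenance step unless a
# fresh, verified backup exists. Names and thresholds are hypothetical.
import time

MAX_BACKUP_AGE_SECONDS = 24 * 3600  # require a backup newer than one day


def safe_to_upgrade(last_backup_time: float, backup_verified: bool) -> bool:
    """Return True only if the most recent backup is fresh and restore-tested."""
    fresh = (time.time() - last_backup_time) < MAX_BACKUP_AGE_SECONDS
    return fresh and backup_verified


if not safe_to_upgrade(last_backup_time=0.0, backup_verified=False):
    raise SystemExit("No fresh, verified backup found; aborting the array upgrade.")
```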

    T-Mobile has started to manage the PR nightmare.

    Sidekick customers, during this service disruption, please DO NOT remove your battery, reset your Sidekick, or allow it to lose power.

    Updated: 10/10/2009 12:35 PM PDT

    T-MOBILE AND MICROSOFT/DANGER STATUS UPDATE ON SIDEKICK DATA DISRUPTION

    Dear valued T-Mobile Sidekick customers:

    T-Mobile and the Sidekick data services provider, Danger, a subsidiary of Microsoft, are reaching out to express our apologies regarding the recent Sidekick data service disruption.

    We appreciate your patience as Microsoft/Danger continues to work on maintaining platform stability, and restoring all services for our Sidekick customers.

    Regrettably, based on Microsoft/Danger's latest recovery assessment of their systems, we must now inform you that personal information stored on your device - such as contacts, calendar entries, to-do lists or photos - that is no longer on your Sidekick almost certainly has been lost as a result of a server failure at Microsoft/Danger. That said, our teams continue to work around-the-clock in hopes of discovering some way to recover this information. However, the likelihood of a successful outcome is extremely low. As such, we wanted to share this news with you and offer some tips and suggestions to help you rebuild your personal content. You can find these tips in our Sidekick Contacts FAQ. We encourage you to visit the Forums on a regular basis to access the latest updates as well as FAQs regarding this service disruption.

    In addition, we plan to communicate with you on Monday (Oct. 12) the status of the remaining issues caused by the service disruption, including the data recovery efforts and the Download Catalog restoration which we are continuing to resolve. We also will communicate any additional tips or suggestions that may help in restoring your content.

    We recognize the magnitude of this inconvenience. Our primary efforts have been focused on restoring our customers' personal content. We also are considering additional measures for those of you who have lost your content to help reinforce how valuable you are as a T-Mobile customer.

    We continue to advise customers to NOT reset their device by removing the battery or letting their battery drain completely, as any personal content that currently resides on your device will be lost.

    Once again, T-Mobile and Microsoft/Danger regret any and all inconvenience this matter has caused.


    Mike Manos and Olivier Sanche Conversation – Is Data Center Design a Popularity Contest?

    Mike Manos has a post discussing a conversation he had with Olivier Sanche, Apple’s Global DC Director.

    Opinion Polls and the End of Times

    October 9, 2009 by mmanos

    I recently had an interesting e-mail exchange with Olivier Sanche, the chief DC architect at Apple.  As you probably know this is a very small industry and Olivier and I have enjoyed a long professional working relationship.   He remarked that we are approaching the end of times, as we were both nominated for a Data Center Dream Team in an industry magazine.  I agreed with him wholeheartedly.

    We were referring to a poll being conducted by the Web Hosting Industry Review (WHIR) to see who would represent the industry's best Data Center Dream Team.  While it's a definite honor to be mentioned, it definitely signals the end of times.  :)

    To me the phrase “Dream Team” conjures images of people with a long list of accomplishments.   It's a bit strange to think of the data center industry at large as having made significant movement forward.  There has been a tremendous amount of innovation in the last few years, and I definitely believe we are at the start of something truly revolutionary in our industry, but I think it's probably way too early in our steps forward to start defining success like this.

    For those of you interested the poll is located below.  Please keep in mind that you cannot see the results without actually taking the poll itself.

    http://www.thewhir.com/Poll/vote

    \Mm

    Here are the poll results so far.

    [Image: WHIR Data Center Dream Team poll results]

    I totally agree with Mike's point that it is too early to celebrate progress, but this is a marketing stunt by WHIR, not a reflection of true design leadership.

    There are so many other people out there pushing the edge of design, like Microsoft's Daniel Costello, MegaWatt Consulting's KC Mares, and plenty of people at Google and Amazon.

    You could vote, but Rob Roy has already rallied his supporters to vote for him in this popularity contest. You know Rob Roy is going to market winning the WHIR data center designer title.

    What would be interesting to see is the list of write-in candidates.

    For perspective, check out the comments on Mike's blog entry.  Manos and Sanche are mentioned in the last comment, and I agree with the commenter's points on the innovation from these two.

    3 Responses
    1. Gerald Downs, on October 9, 2009 at 8:46 am

      I just took this poll and I have to say I was shocked! Rob Roy from SwitchNap is leading the data center design category? Please! That man is a total joke. He probably voted for himself a million times. You and Olivier have 1000 times more experience than he does. They should have put him in the self-promoter category or marketing.

    2. m00sh00, on October 9, 2009 at 5:40 pm

      Wow Gerald you are spectacularly uninformed. Apparently you have never been to see the SuperNAP or you would know it for the engineering triumph that it is. Rob Roy invented something that outperforms anything built by anyone else on that list, or in the data center industry for that matter. You might want to check your facts.

    3. Gerald Downs, on October 10, 2009 at 9:05 am

      m00Sh00,

      I have been on his dog and pony tour through that facility and I can tell you that there is absolutely nothing new. The whole tour was seriously a conversation in a cult of personality around Roy. I would also be curious as to how easily names get dropped as far as other customers in the building. As far as I am concerned his modularized approach and mechanical designs have been present in the military, oil and gas, and other industries for a long time. But you don't really have to go that far. You can easily look to the work being done by Google and Microsoft and a ton of others to see this same kind of thing. Not to plug Manos, but he has done the same thing on a much bigger, global scale than Roy. Additionally, Olivier Sanche, who is mentioned, is another true innovator in the data center industry. Additionally, both Sanche and Manos are out there talking to the industry. I have yet to see Roy show up to ANY industry events. Perhaps he is too busy playing with the action figures in his office.

      I think it might be you who needs to check your facts.


    Data Center Myth – Thermal/Temperature Shock

    Mike Manos has a post calling out what he labels “data center junk science”: the data center thermal shock requirement.

    Mike’s post got my curiosity up, and I spent time researching to build on it. This is my 956th post in less than two years, and people often assume I have a journalism background. Well, fooled you: I am an Industrial Engineering and Operations Research graduate from Cal Berkeley. So even though I write a lot, what you are reading is my notebook of things I discover and want to share with others. For those of you who don't know what industrial engineers do:

    Industrial engineering is a branch of engineering that is concerned with the development, improvement, implementation and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material and process. It also deals with designing new prototypes to save money and make them better. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as the mathematical, physical and social sciences, together with the principles and methods of engineering analysis and design, to specify, predict and evaluate the results to be obtained from such systems. In lean manufacturing systems, industrial engineers work to eliminate waste of time, money, materials, energy, and other resources.

    This background all helps me think of how to green the data center.

    And operations research helps me think about the technical methods and software to do this.

    Operations research is an interdisciplinary branch of applied mathematics that uses methods such as mathematical modeling, statistics, and algorithms to arrive at optimal or near-optimal solutions to complex problems. It is typically concerned with determining the maxima (of profit, assembly line performance, crop yield, bandwidth, etc.) or minima (of loss, risk, etc.) of some objective function. Operations research helps management achieve its goals using scientific methods.
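
    As a quick illustration of that operations-research mindset applied to the green data center, here is a minimal sketch (my own, with made-up numbers) that uses linear programming to split an IT load across two hypothetical sites so that total facility power, load times PUE, is minimized.

```python
# Minimal linear-programming sketch: place an IT load across two hypothetical
# data centers to minimize total facility power. All figures are made up.
from scipy.optimize import linprog

pue = [1.2, 1.6]           # facility power per unit of IT load at each site
capacity_mw = [6.0, 10.0]  # IT capacity of each site, in MW
demand_mw = 9.0            # total IT load that must be placed

result = linprog(
    c=pue,                                # minimize sum(load_i * PUE_i)
    A_eq=[[1.0, 1.0]], b_eq=[demand_mw],  # the site loads must add up to demand
    bounds=[(0, capacity_mw[0]), (0, capacity_mw[1])],
)

print("IT load per site (MW):", result.x)        # fills the efficient site first
print("Total facility power (MW):", result.fun)  # 6*1.2 + 3*1.6 = 12.0 MW
```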

    Mike’s post got me thinking because one of my summer internships was at HP, where I worked as a reliability/quality engineer figuring out how to build better-quality HP products. The team I worked on was an early innovator in thermal cycling and stressing components back in the early 1980s.

    Data Center Junk Science: Thermal Shock \ Cooling Shock

    October 1, 2009 by mmanos

    I recently performed an interesting exercise where I reviewed typical co-location/hosting/ data center contracts from a variety of firms around the world.    If you ever have a few long plane rides to take and would like an incredible amount of boring legalese documents to review, I still wouldn’t recommend it.  :)

    I did learn quite a bit from going through the exercise but there was one condition that I came across more than a few times.   It is one of those things that I put into my personal category of Data Center Junk Science.   I have a bunch of these things filed away in my brain, but this one is something that not only raises my stupidity meter from a technological perspective it makes me wonder if those that require it have masochistic tendencies.

    I am of course referring to a clause for Data Center Thermal Shock and, as I discovered, its evil, lesser known counterpart “Cooling” Shock.    For those of you who have not encountered this before, it's a provision between hosting customer and hosting provider (most often required by the customer)  that usually looks something like this:

    If the ambient temperature in the data center rises 3 degrees over the course of 10 (sometimes 12, sometimes 15) minutes, the hosting provider will need to remunerate (reimburse) the customer for thermal shock damages experienced by the computer and electronics equipment.  The damages range from flat-fee penalties to graduated penalties based on the value of the equipment.

    As Mike points out, there is also the issue of duration.

    Which brings up the next component which is duration.   Whether you are speaking to 10 minutes or 15 minutes intervals these are nice long leisurely periods of time which could hardly cause a “Shock” to equipment.   Also keep in mind the previous point which is the environment has not even violated the ASHRAE temperature range.   In addition, I would encourage people to actually read the allowed and tested temperatures in which the manufacturers recommend for server operation.   A 3-5 degree swing  in temperature would rarely push a server into an operating temperature range that would violate the range the server has been rated to work in or worse — void the warranty.

    Here is the military specification, MIL-STD-810G, typically used by vendors to define temperature/thermal shock.

    MIL-STD-810G
    METHOD 503.5
    TEMPERATURE SHOCK

    1. SCOPE.
    1.1 Purpose.
    Use the temperature shock test to determine if materiel can withstand sudden changes in the temperature of the surrounding atmosphere without experiencing physical damage or deterioration in performance. For the purpose of this document, "sudden changes" is defined as "an air temperature change greater than 10°C (18°F) within one minute."
    1.2 Application.
    1.2.1 Normal environment.
    Use this method when the requirements documents specify the materiel is likely to be deployed where it may experience sudden changes of air temperature. This method is intended to evaluate the effects of sudden temperature changes of the outer surfaces of materiel, items mounted on the outer surfaces, or internal items situated near the external surfaces. This method is, essentially, surface-level tests. Typically, this addresses:
    a. The transfer of materiel between climate-controlled environment areas and extreme external ambient conditions or vice versa, e.g., between an air conditioned enclosure and desert high temperatures, or from a heated enclosure in the cold regions to outside cold temperatures.
    b. Ascent from a high temperature ground environment to high altitude via a high performance vehicle (hot to cold only).
    c. Air delivery/air drop at high altitude/low temperature from aircraft enclosures when only the external material (packaging or materiel surface) is to be tested.

    As Mike says, the surprising part is that the requirement for thermal shock is coming from technical people, most likely those with military backgrounds.

    Even more surprising to me was that these were typically folks on the technical side of the house more than the lawyers or business people.  I mean, these are the folks that should be more in tune with logic than say business or legal people who can get bogged down in the letter of the law or dogmatic adherence to how things have been done.  Right?  I guess not.

    I can’t imagine any business person or attorney thinking a thermal shock is a 3-degree change in 15 minutes. If an attorney were involved, they would go to the MIL-STD-810G definition of temperature shock: a change greater than 10°C (18°F) within one minute.
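
    To put the numbers side by side, here is a small sketch (mine, not from Mike's post) that normalizes the three thresholds to degrees C per minute. The contract clause does not say whether its "3 degrees" is Fahrenheit or Celsius; I am assuming Fahrenheit, roughly 1.7°C.

```python
# Normalize the three "thermal shock" thresholds discussed above to C/minute.
# The contract clause's units are not stated; 3 F (about 1.7 C) is my assumption.

def rate_c_per_min(delta_c: float, minutes: float) -> float:
    """Temperature change expressed as degrees C per minute."""
    return delta_c / minutes

contract = rate_c_per_min(1.7, 15)   # ~3 F over 15 minutes (contract clause)
ashrae   = rate_c_per_min(5.0, 60)   # ASHRAE: max 5 C (9 F) per hour
mil_std  = rate_c_per_min(10.0, 1)   # MIL-STD-810G "sudden change": >10 C in 1 minute

print(f"contract clause : {contract:.3f} C/min")
print(f"ASHRAE guideline: {ashrae:.3f} C/min")
print(f"MIL-STD-810G    : {mil_std:.3f} C/min")
print(f"MIL-STD shock is roughly {mil_std / contract:.0f}x the contract threshold")
```

    Under that assumption, the contract threshold is in the same ballpark as ASHRAE's gradual rate-of-change guideline and nearly two orders of magnitude below what MIL-STD-810G actually calls a temperature shock.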

    So where does this myth come from? Most likely there is a social-network effect of people who consider themselves smarter than others and have added thermal shock to their requirements. One of the comments on Mike's blog documents the possible source.

    Dave Kelley, Liebert Precision Cooling

    The only place where something like this is “documented” in any way is in the ASHRAE Thermal Guidelines book. Since the group that wrote this book included all of the major server vendors, it must have been created with some type of justifiable reason. It states that the "maximum rate of temperature change is 5 degrees C (9 degrees F) per hour."

    And as Mike closes, this has unintended consequences.

    But this brings up another important point.  Many facilities might experience a chiller failure, or a CRAH failure or some other event which might temporarily have this effect within the facility.    Let's say it happens twice in one year that you would potentially trigger this event for the whole or a portion of your facility (you're probably not doing preventative maintenance – bad you!).  So the contract language around thermal shock now claims monetary damages.   Based on what?   How are these sums defined?   The contracts I read through had some wild oscillations on damages with different means of calculation, and a whole lot more.   So what is the basis of this damage assessment?   Again, there are no studies that say each event takes off .005 minutes of a server's overall life, or anything like that.   So the cost calculations are completely arbitrary and negotiated between provider and customer.

    This is where the true foolishness then comes in.   The providers know that these events, while rare, might happen occasionally.   While the event may be within all other service level agreements, they still might have to award damages.   So what might they do in response?   They increase the costs of course to potentially cover their risk.   It might be in the form of cost per kw, or cost per square foot, and it might even be pretty small or minimal compared to your overall costs.  But in the end, the customer ends up paying more for something that might not happen, and if it does there is no concrete proof it has any real impact on the life of the server or equipment, and really only salves the whim of someone who really failed to do their homework.  If it never happens the hosting provider is happy to take the additional money.
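
    To see how that risk premium might get baked in, here is a back-of-the-envelope sketch (my own illustration with entirely made-up numbers, not from Mike's post).

```python
# Rough sketch of how a provider could fold a thermal-shock damages clause
# back into per-kW pricing. Every number here is a made-up assumption.
events_per_year = 0.5         # expected qualifying temperature excursions per year
penalty_per_event = 50_000.0  # negotiated damages per event, in dollars
contracted_kw = 500.0         # the customer's contracted IT load, in kW

# Expected annual payout, spread over the contracted kW and twelve months,
# quietly becomes a surcharge the customer pays whether or not an event occurs.
surcharge = events_per_year * penalty_per_event / (contracted_kw * 12)
print(f"risk premium: ${surcharge:.2f} per kW per month")  # about $4.17/kW-month
```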

    Temperature/thermal shock is a term that doesn't apply to normal data center operations. Hopefully you'll now know to call temperature/thermal shock requirements in data center contracts what they are: a myth.

    Thanks Mike for taking the time to write on this.


    Dark Side of Smart Grid – Privacy and Exponential Data Growth

    MSNBC has a post on an impact of smart power meters that most don't talk about: the meters provide data on what you do in your house, and all that data is going to mean petabytes of storage.

    What will talking power meters say about you?

    Posted: Friday, October 9 2009 at 05:00 am CT by Bob Sullivan

    Would you sign up for a discount with your power company in exchange for surrendering control of your thermostat? What if it means that, one day, your auto insurance company will know that you regularly arrive home on weekends at 2:15 a.m., just after the bars close?

    Welcome to the complex world of the Smart Grid, which may very well pit environmental concerns against thorny privacy issues. If you think such debates are purely philosophical, you’re behind the times.

    The government is bringing up privacy concerns.

    Pepco’s discount plan is among the first signs that the futuristic “Smart Grid” has already arrived. Up to three-fourths of the homes in the United States are expected to be placed on the “Smart Grid” in the next decade, collecting and storing data on the habits of their residents by the petabyte. And while there’s no reason to believe Pepco or other utilities will share the data with outside firms, some experts are already asking the question: Will saving the planet mean inviting Big Brother into the home? Or at least, as Commerce Secretary Gary Locke recently warned, will privacy concerns be the “Achilles’ heel” of the Smart Grid?

    The article then discusses the dark side of what could be.

    Dark side of a bright idea
    But others see a darker side. Utility companies, by gathering hundreds of billions of data points about us, could reconstruct much of our daily lives -- when we wake up, when we go home, when we go on vacation, perhaps even when we draw a hot bath. They might sell this information to marketing companies -- perhaps a travel agency will send brochures right when the family vacation is about to arrive. Law enforcement officials might use this information against us ("Where were you last night? Home watching TV? That's not what the power company says … ”). Divorce lawyers could subpoena the data ("You say you're a good parent, but your children are forced to sleep in 61-degree rooms. For shame ..."). A credit bureau or insurance company could penalize you because your energy use patterns are similar to those of other troublesome consumers. Or criminals could spy the data, then plan home burglaries with fine-tuned accuracy.

    How big is the data growth?

    According to a recent discussion by experts at Smart Grid Security, here’s a quick explanation of the sudden explosion in data. In the United Kingdom, for example, 44 million homes had been creating 88 million data entries per year. Under a new two-way, smart system, new meters would create 32 billion data entries. Pacific Gas & Electric of California says it plans to collect 170 megabytes of data per smart meter, per year. And if about 100 million meters are installed as expected in the United States by 2019, 100 petabytes (a million gigabytes) of data could be generated during the next 10 years.
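
    As a back-of-the-envelope check on those figures (my own arithmetic, not from the MSNBC piece), PG&E's 170 MB per meter per year across 100 million meters works out to roughly 17 PB per year at full deployment, which makes the article's ~100 PB over a decade look like an assumption that the meter fleet ramps up over the period.

```python
# Back-of-the-envelope check of the smart-meter data figures quoted above.
MB_PER_METER_PER_YEAR = 170  # PG&E's stated figure
METERS = 100_000_000         # ~100 million US meters expected by 2019

pb_per_year = MB_PER_METER_PER_YEAR * METERS / 1e9  # MB -> PB (1 PB = 1e9 MB)
print(f"Full deployment: {pb_per_year:.0f} PB per year")  # ~17 PB/year

# The article's ~100 PB over 10 years implies the fleet ramps in gradually:
print(f"100 PB / {pb_per_year:.0f} PB per year = {100 / pb_per_year:.1f} full-fleet years")
```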

    And you can expect storage vendors and enterprise consultants to encourage the mindset of leveraging all that data.

    "Once a company monetizes data it collects, even if the amount is small, it is very reluctant to give it up," he said. Many companies he audits have robust privacy policies but end up using information in ways that frustrate or cost consumers, he said. "They talk a good game, but I'm sure (utility companies) will find ways to use the data, and not necessarily to benefit people but to harm people."

    This complexity of privacy and data storage is part of why I have not focused much effort on the consumer smart meter market. Applying the concepts of smart metering to data centers and commercial buildings has the potential to be adopted much more quickly, and it supports energy efficiency and the green data center.
