IBM’s Smarter Planet is the mass marketing of Systems Engineering

I just spent two full days at IBM’s Pulse 2010 conference and got a chance to meet some interesting people.  The one piece of irony I realized is that when I graduated from college I had a choice of working for IBM or HP as an Industrial Engineer.  I chose HP 30 years ago.  I wonder what my life would be like if I had chosen IBM? 

Then again, I probably wouldn’t have lasted too long at IBM, and I actually helped an IBM engineer leave to join Apple.  Why is it relevant to talk about engineers?  Because part of what I figured out is that IBM’s Smarter Planet is a rebranding and repositioning of systems engineering.

So what is a Systems Engineer?  I found this description from the University of Arizona.

Systems Engineering is an interdisciplinary process that ensures that the customer's needs are satisfied throughout a system's entire life cycle. This process is comprised of the following seven tasks.

  1. State the problem. Stating the problem is the most important systems engineering task. It entails identifying customers, understanding customer needs, establishing the need for change, discovering requirements and defining system functions.
  2. Investigate alternatives. Alternatives are investigated and evaluated based on performance, cost and risk.
  3. Model the system. Running models clarifies requirements, reveals bottlenecks and fragmented activities, reduces cost and exposes duplication of efforts.
  4. Integrate. Integration means designing interfaces and bringing system elements together so they work as a whole. This requires extensive communication and coordination.
  5. Launch the system. Launching the system means running the system and producing outputs -- making the system do what it was intended to do.
  6. Assess performance. Performance is assessed using evaluation criteria and technical performance measures -- measurement is the key. If you cannot measure it, you cannot control it. If you cannot control it, you cannot improve it.
  7. Re-evaluation. Re-evaluation should be a continual and iterative process with many parallel loops.
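The last task is the important one: the seven tasks are a loop, not a waterfall. As a rough illustration (my own sketch, not part of the Arizona material), here is the cycle expressed in code, with re-evaluation closing the loop and a measured performance target deciding when to stop. The performance numbers and stopping rule are invented for illustration.

```python
# Sketch of the seven systems-engineering tasks as an iterative cycle.
# Task names come from the University of Arizona description quoted above;
# the performance model and stopping rule are illustrative only.

TASKS = [
    "state the problem",
    "investigate alternatives",
    "model the system",
    "integrate",
    "launch the system",
    "assess performance",
    "re-evaluate",
]

def run_systems_engineering_cycle(max_passes=3, good_enough=0.9):
    """Walk the seven tasks repeatedly until measured performance is acceptable."""
    performance = 0.0
    for n in range(1, max_passes + 1):
        for task in TASKS:
            print(f"pass {n}: {task}")
        performance += 0.4  # pretend each pass improves the measured result
        if performance >= good_enough:  # "if you cannot measure it, you cannot control it"
            print(f"performance {performance:.1f} meets the target after {n} pass(es)")
            return
    print("target not met; keep re-evaluating")

run_systems_engineering_cycle()
```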

The seven steps are too complex for most, and IBM has done an excellent job of simplifying them for IT executives – Instrument, Interconnect, and Make them intelligent.  I would argue IBM’s approach is too simple, but marketing messages need to be simple and contain at most three points.

Start here. Three big ideas. 1. Instrument the world's systems. 2. Interconnect them. 3. Make them intelligent. Introduce yourself to a smarter planet.

This idea clicked when I was interviewing District of Columbia Water and Sewer Authority CIO Mujib Lodhi and listened to him describe his approach to managing the water and sewer infrastructure as a system using IBM’s asset management tools.  In the short time I had with Mujib, he described what a systems engineer would do to manage an aging water infrastructure.

IBM has an excellent section on smarter water management, which is good if you are a Tivoli user.  If not, there are many other vendors that have been in the water management business for decades.
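To connect the three I’s back to the water example, here is a minimal sketch, entirely hypothetical, of what “instrument, interconnect, make intelligent” might look like for one segment of a water network: meter readings (instrument) are brought together over a shared feed (interconnect), and a simple rule flags a probable leak (intelligent). The meter names, numbers, and threshold are invented; this is not IBM’s or DC WASA’s actual system.

```python
# Hypothetical sketch of "instrument, interconnect, make intelligent" applied to
# an aging water network. Sensor IDs, readings, and the leak rule are invented.

from statistics import mean

# 1. Instrument: flow readings (liters/min) from meters along one pipe segment.
readings = {
    "meter_upstream":   [510, 515, 512, 508],
    "meter_downstream": [471, 468, 465, 470],
}

# 2. Interconnect: bring the readings together so they can be compared.
upstream = mean(readings["meter_upstream"])
downstream = mean(readings["meter_downstream"])

# 3. Make it intelligent: flag a probable leak when the downstream meter sees
# noticeably less flow than the upstream one.
LEAK_THRESHOLD = 0.05  # flag if more than 5% of the flow disappears in the segment

loss_ratio = (upstream - downstream) / upstream
if loss_ratio > LEAK_THRESHOLD:
    print(f"probable leak: {loss_ratio:.1%} of flow lost between meters")
else:
    print("segment looks healthy")
```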

image

In other conversations with IBM technical staff, I kept hearing the recurring methods of systems engineers.

Here are slides from presentations.

image

Alcatel-Lucent threw this slide up.

image

The system needs to be designed to be lean, mean, and green.

image


Air Force and IBM partner to prove Cloud Computing works for Defense and Intelligence services

One of the top concerns about Cloud Computing is the security of data in the cloud.  IBM has a press announcement on the partnership here.

U.S. Air Force Selects IBM to Design and Demonstrate Mission-Oriented Cloud Architecture for Cyber Security

Cloud model will introduce advanced cyber security and analytics technologies capable of protecting sensitive national data

ARMONK, N.Y. - 04 Feb 2010: The U.S. Air Force has awarded IBM (NYSE:IBM) a contract to design and demonstrate a secure cloud computing infrastructure capable of supporting defense and intelligence networks. The ten-month project will introduce advanced cyber security and analytics technologies developed by IBM Research into the cloud architecture.

There are press articles too.

CNet News

Air Force taps IBM for secure cloud

by Lance Whitney

IBM has a tall order from the U.S. Air Force--create a cloud network that can protect national defense and military data.

Big Blue announced Thursday a contract from the Air Force to design and demonstrate a cloud computing environment for the USAF's network of nine command centers, 100 military bases, and 700,000 personnel around the world.

The challenge for IBM will be to develop a cloud that can not only support such a massive network, but also meet the strict security standards of the Air Force and the U.S. government. The project will call on the company to use advanced cybersecurity technologies that have been developed at IBM Research.

and Government Computer News.

What I find interesting is how few authors reference the IBM press release.  The goal of the project is a technical demonstration.

"Our goal is to demonstrate how cloud computing can be a tool to enable our Air Force to manage, monitor and secure the information flowing through our network," said Lieutenant General William Lord, Chief Information Officer and Chief, Warfighting Integration, for the U.S. Air Force. "We examined the expertise of IBM's commercial performance in cloud computing and asked them to develop an architecture that could lead to improved performance within the Air Force environment to improve all operational, analytical and security capabilities."

This quote is cut and pasted into the CNet News article as well.

On the other hand, there are some good insights by Larry Dignan on his ZDnet blog.

What’s in it for IBM? Cloud computing has a lot of interest, but security remains a worry for many IT buyers. If Big Blue can demonstrate cloud-based cyber security technologies that’s good enough for the military it would allay a lot of those worries.

The advanced cyber security and analytics technologies that will be used in the Air Force project were developed by IBM Research (statement).

According to IBM the project will show a cloud computing architecture that can support large networks and meet the government’s security guidelines. The Air Force network includes almost 100 bases and 700,000 active military personnel.

And Larry continues with the key concepts of what will be shown.  Models!!! Yea! (A rough sketch of the lockdown idea follows the list.)

  • The model will include autonomic computing;
  • Dashboards will monitor the health of the network second-by-second;
  • If Air Force personnel don’t shift to a “prevention environment” in a cyber attack, the cloud will have automated services to lock the network down.
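None of the implementation details are public in these articles, but at its simplest the third bullet is a health-check loop that watches an attack indicator and flips the network into a restricted mode when a threshold is crossed. The sketch below is invented for illustration only; the metric, threshold, and lockdown action are not drawn from the IBM/Air Force design.

```python
# Illustrative only: a toy version of second-by-second health monitoring with an
# automated lockdown response. Metric, threshold, and actions are made up.

import random
import time

ATTACK_THRESHOLD = 0.8  # hypothetical "attack likelihood" score that triggers lockdown

def read_attack_score():
    """Stand-in for real analytics; returns a score between 0 and 1."""
    return random.random()

def lock_down_network():
    print("LOCKDOWN: switching to prevention environment, restricting traffic")

def monitor(seconds=5):
    for _ in range(seconds):
        score = read_attack_score()
        print(f"health check: attack score {score:.2f}")
        if score >= ATTACK_THRESHOLD:
            lock_down_network()
            break
        time.sleep(1)  # "second-by-second" monitoring cadence

monitor()
```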

    Symbian Mobile OS goes open source, is data center design the next open source opportunity?

    Symbian OS went open source today.

    Symbian Is Open

As of now, the Symbian platform is completely open source.  And it is Symbian^3, the latest version of the platform, which will soon be feature complete.

    Open sourcing a market-leading product in a dynamic, growing business sector is unprecedented.  Over 330 million Symbian devices have been shipped worldwide, and it is likely that a further 100 million will ship in 2010 with more than 200 million expected to ship annually from 2011 onwards.


Now the platform is free for anyone to use and to contribute to.  It is not only a sophisticated software platform, it is also the focal point of a community. And a lot of the foundation’s effort going forward will be to ensure the community grows and is supported in bringing great innovations to the platform and future devices.

PCWorld writes on the five benefits of open sourcing Symbian.

    Five Benefits of an Open Source Symbian

    By Tony Bradley

    The Symbian mobile operating system is getting a second life as the Symbian Foundation makes the smartphone platform open source. The lifeline will revitalize the platform, and has benefits for Nokia, smartphone developers, Symbian handsets, and smartphone users.

With open source hitting all aspects of IT, including mobile, when will data center designs go open source?  Don’t hold your breath.  Few data center designers are software people, so open source is still a foreign concept for many; designs are protected, and transparency of what goes on is heresy to their thinking and business models.

    But, maybe as Cloud Computing goes open source with companies like Eucalyptus, people will not see the value in much of how data centers have been built in the past.

    Eucalyptus open-sources the cloud (Q&A)

    It's reasonably clear that open source is the heart of cloud computing, with open-source components adding up to equal cloud services like Amazon Web Services. What's not yet clear is how much the cloud will wear that open source on its sleeve, as it were.

    Eucalyptus, an open-source platform that implements "infrastructure as a service" (IaaS) style cloud computing, aims to take open source front and center in the cloud-computing craze. The project, founded by academics at the University of California at Santa Barbara, is now a Benchmark-funded company with an ambitious goal: become the universal cloud platform that everyone from Amazon to Microsoft to Red Hat to VMware ties into.

    Or, rather, that customers stitch together their various cloud assets within Eucalyptus.
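One concrete reason Eucalyptus can sit front and center is that it exposes an EC2-compatible API, so the same client code can talk to Amazon or to a private Eucalyptus cloud just by changing the endpoint. Here is a minimal sketch of that idea; the endpoint URL and credentials are placeholders, and boto3 is used simply as a modern EC2-compatible client for illustration, not as the tooling of the era.

```python
# Sketch: the same EC2-style client code pointed at either AWS or a private
# EC2-compatible cloud such as Eucalyptus. Endpoint and credentials are placeholders.

import boto3

def list_instances(endpoint_url=None):
    """List instance IDs from whichever EC2-compatible cloud the endpoint names."""
    ec2 = boto3.client(
        "ec2",
        region_name="us-east-1",
        endpoint_url=endpoint_url,            # None -> Amazon; URL -> private cloud front end
        aws_access_key_id="EXAMPLE_KEY",       # placeholder credentials
        aws_secret_access_key="EXAMPLE_SECRET",
    )
    reservations = ec2.describe_instances()["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

# Against Amazon (default endpoint):
# print(list_instances())
# Against a hypothetical private Eucalyptus endpoint:
# print(list_instances("https://cloud.example.internal:8773/services/compute"))
```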

    Is open source a threat to data center design?  For some maybe, for others it is an opportunity.

    For compliance and regulatory issues, eventually cloud computing providers will need to provide some level of transparency on their data center infrastructure.  Enough to meet the needs of governments and other regulatory agencies.  Will this be a driving issue for opening more details on data center infrastructure?

There are those who argue that, for security reasons, they stay opaque to reduce their risks.  But open source software believers say systems are more secure by being transparent and allowing peer review.


    What does a Cloud Computing Data Center look like? Comparison version 1

There is a flood of cloud computing content out there.  As a thought experiment, I started comparing conceptually what cloud computing is versus the existing data centers.  Many take the approach of building data centers to be solid as a rock, which interestingly enough is the opposite of clouds.  Rock is earth.  Clouds are water and air, and electricity (lightning).

Below is a first version of my thinking about the differences between a cloud computing data center and a rock data center.

    When you start thinking about Cloud Computing as the future, what kind of data center fits business needs? 

I am having some conversations with data center designers on this concept.  Cover up the right side and only look at the left side.  When I look at the left, who doesn’t want this?  Except maybe those who make their money on the right side.  (A back-of-the-envelope cost sketch follows the table.)

     

Cloud Data Center | Rock Data Center
Water + Air + Energy = Clouds with lightning | Earth = a building built in a capital-intensive, redundant manner
Business alignment to current conditions | Over-provisioned for the unknown future, yet ironically often limits the business
Speed is an advantage with fewer resources and a changing business (minutes) | You have no choice, so you move at our pace (weeks/months)
Systems integrated to reduce costs for business services | Silos of self-optimization used to prove efficiency
Pay-as-you-go service use | Costs are not transparent or directly related to what you use
Virtualized servers, storage, and network abstract discussions to capabilities for the business | Staff discusses specifications of servers, storage, and networking
Energy efficiency and high utilization are standard discussions | Energy is viewed as a small cost paid for by someone else
Commodity hardware | Specialized hardware
Healthy, growing vendor ecosystem | Static ecosystem that is growing slowly, maybe even declining
Exponential growth and innovation | Declining as users migrate to the cloud; maintenance mode, cost reduction
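To put one row of the table in numbers, here is the back-of-the-envelope sketch of “pay as you go” versus “over-provisioned for the unknown future.” Every figure (hourly rate, server cost, utilization) is a hypothetical assumption chosen only to show the shape of the comparison, not a quote of any vendor’s pricing.

```python
# Back-of-the-envelope comparison of the "pay as you go" and "over-provisioned"
# rows of the table above. Every number here is a hypothetical assumption.

HOURS_PER_YEAR = 24 * 365

# Cloud side: pay only for the capacity actually used.
cloud_rate_per_server_hour = 0.50      # assumed blended $/server-hour
servers_needed_on_average = 40         # capacity actually used
cloud_cost = cloud_rate_per_server_hour * servers_needed_on_average * HOURS_PER_YEAR

# Rock side: buy for the unknown future, so capacity sits idle.
servers_purchased = 100                # over-provisioned for peak plus growth
cost_per_server_per_year = 3000        # assumed amortized capital + power + space
rock_cost = servers_purchased * cost_per_server_per_year
rock_utilization = servers_needed_on_average / servers_purchased

print(f"cloud, pay-as-you-go  : ${cloud_cost:,.0f}/year")
print(f"rock, over-provisioned: ${rock_cost:,.0f}/year at {rock_utilization:.0%} utilization")
```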

    Where is Data Center Innovation developing? Facebook as an example

I’ve bounced around many parts of the data center ecosystem: the big data center operators, the construction companies, the engineering companies, the outsourced maintenance companies, the data center equipment companies, the IT equipment companies, and the software companies.

    So, where is the innovation coming from?

    Is it coming from the people who design and build data centers?

    Is it coming from the equipment vendors?

    Or is it coming from the customers who have gotten tired of the way the data centers have been designed and built?

Data centers are high-profit-margin buildings compared to the rest of the construction industry.  Why?  Because they are so complex and feature creep is prevalent.  With this complexity come big budgets and the prestige of being in charge of the data center construction, so territorial battles exist over who is responsible for the build.  The majority of these projects are run by the real estate and facilities departments in companies.

But look at the big data center operators, and the standard is to have data center construction integrated with the data center operations team.  If you could see the organizations inside Microsoft, Google, eBay, Amazon, Facebook, and Yahoo, you would find that data center construction is integrated mainly with data center operations, not real estate and facilities.

Why is this important?  Because as much as real estate/construction people want to own the job, they have almost no idea how their data center designs interact with IT services.  They barely know the IT hardware, let alone the software running to provide customer services.  What sane person puts a group responsible for designing and building commercial office space in charge of the place that hosts information services?  Well, almost everyone does, except the enlightened companies.

As an example of data center innovation coming from the IT group, DataCenterKnowledge references the efficiency of Facebook’s data center design.

    Designed for Efficiency
The new design foregoes traditional uninterruptible power supply (UPS) and power distribution units (PDUs) and adds a 12 volt battery to each server power supply. This approach was pioneered by Google, which last year revealed a custom server that integrates a 12 volt battery; the company cited this design as a key factor in the exceptional energy efficiency of its data centers.
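The efficiency argument behind dropping the central UPS is easy to see with rough numbers: a traditional UPS and PDU chain loses a few percent at every conversion stage, while a 12 volt battery on the server power supply removes those stages and only carries ride-through. The per-stage efficiencies below are generic assumptions for illustration, not Google’s or Facebook’s published figures.

```python
# Rough illustration of why replacing a central UPS/PDU chain with a 12 V battery
# at each power supply can help. Stage efficiencies are generic assumptions,
# not measured numbers from Google or Facebook.

def chain_efficiency(stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Traditional path: utility -> double-conversion UPS -> PDU/transformer -> server PSU
traditional = chain_efficiency([0.92, 0.98, 0.90])   # UPS, PDU, PSU (assumed)

# Battery-on-board path: utility -> server PSU (battery only provides ride-through)
battery_backed = chain_efficiency([0.90])            # PSU only (assumed)

print(f"traditional UPS/PDU chain : {traditional:.1%} of utility power reaches the server")
print(f"12 V battery per PSU      : {battery_backed:.1%} of utility power reaches the server")
```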

    Facebook will most likely shortly announce its data center in Prineville, OR.

    Facebook to Build Its Own Data Centers

    January 20th, 2010 : Rich Miller

    A look at the fully-packed racks inside a Facebook data center facility.


    Facebook has decided to begin building its own data centers, and may announce its first facility as soon as tomorrow. The fast-growing social network has previously leased server space from wholesale data center providers, but has grown to the point where the economics favor a shift to a custom-built infrastructure.

“Facebook is always looking at ways to scale our infrastructure and better serve our users,” Facebook spokesperson Kathleen Loughlin said last week. “It should come as no surprise that, at some point, building a customized data center will be the most efficient and cost-effective way to do this. However, we have nothing further to announce at this time.”

One of the data center engineers at Facebook is ex-Google: Amir Michael.

Amir Michael

    Hardware and Data Center Engineer at Facebook

    San Francisco Bay Area
    Computer Hardware
Current
• Hardware and Datacenter Engineer at Facebook

Past
• Hardware Engineer, Google Inc., Mountain View, CA
Responsible for data center electronics including cooling systems, electrical distribution, and monitoring. Wrote specifications and requirements in cooperation with mechanical engineers for various data center control projects. Managed vendors and coordinated with manufacturing engineers and commodity management teams to deploy finished products.
Embedded power measurement device for servers: designed electrical schematics and printed circuit boards, and wrote the software. Hired and managed two interns to advance project development. The project resulted in savings of several million dollars in energy costs.

What is causing more change/innovation in the industry: the real estate/construction data center consortium, or guys like Amir at Facebook networking with the other data center innovators at Google, eBay, and Yahoo in the Bay Area?

       
