Nebula launches a hardware appliance to run the cloud, but will users want the HW or the SW?

The cloud is about virtualized environments. So, it is a bit ironic that Nebula's first product is a physical hardware appliance when the solution could just be downloadable bits.

Nebula Cloud Appliance

What they’re all working on is fairly fascinating: A hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.

...

Kemp wasn’t planning to do an appliance, he admits, but initial investor Bechtolsheim convinced him it was the right approach. It lets Nebula provide a turnkey product for deploying OpenStack, Kemp explained, by optimizing and locking down some of the variables that might make deploying a private cloud more difficult.

Nebula's team didn't like the Eucalyptus product and chose OpenStack.

However, even with all the specialization, Nebula is very committed to building the core OpenStack code base. “OpenStack exists because Eucalyptus didn’t work at NASA,” Kemp acknowledged, so he understands the importance of solid, customizable, open-source code.

Ultimately, he said, a better OpenStack means a better Nebula, because Nebula can focus on filling in the gaps and not on reinventing the wheel, much like Bechtolsheim was successful at Sun Microsystems by building atop Unix and at Arista by using standard hardware components.

Here is a question, prompted by Nebula's own description of the appliance:

Elastic Infrastructure

The Nebula appliance dynamically provisions and destroys virtual infrastructure and storage as workloads fluctuate.
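Note that the elasticity described above is delivered through the OpenStack APIs, which are reachable from plain software too. As a rough illustration, here is a minimal sketch of scaling a pool of instances up and down against an OpenStack Compute endpoint with python-novaclient; the credentials, auth URL, image, and flavor names are hypothetical placeholders, not Nebula specifics.

```python
# Minimal sketch of elastic provisioning against an OpenStack Compute
# API using python-novaclient. The auth URL, credentials, image and
# flavor names below are hypothetical placeholders, not Nebula specifics.
from novaclient import client

nova = client.Client(2, "demo-user", "demo-password", "demo-project",
                     "http://cloud-controller:5000/v2.0/")

def scale_workers(target_count, prefix="worker"):
    """Grow or shrink the pool of worker VMs to target_count."""
    workers = [s for s in nova.servers.list() if s.name.startswith(prefix)]
    image = nova.images.find(name="ubuntu-11.04")
    flavor = nova.flavors.find(name="m1.small")
    # Provision new instances while the pool is below target.
    for i in range(len(workers), target_count):
        nova.servers.create(name="%s-%d" % (prefix, i),
                            image=image, flavor=flavor)
    # Destroy surplus instances when the workload shrinks.
    for server in workers[target_count:]:
        server.delete()

scale_workers(8)  # e.g., workload spiked: ensure eight workers exist
```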

Why wouldn't you run the Nebula SW on multiple Open Compute servers in your cloud environment? It seems like the Nebula appliance is a single point of failure unless you have multiple instances running in your cloud environment, which should be easy if you buy a few more Open Compute servers.

Nebula was announced at OSCON, but who would let their cloud environment stay down while waiting on FedEx, and who would ship their cloud data outside the company inside a failed Nebula appliance?

Nebula will supply the appliance. "If it fails, FedEx it back to us, and we'll send you another one," Kemp said. "Our little box has a 10 gigabit ethernet switch built into it. You can plug cheap commodity servers into the rack. You don't have to turn them on. It will do that. The interface is like Amazon Services." These servers are then monitored by the appliance, including their log files and flow data. "What we do is create interface points to all of the common CMDB tools, managing tools, security tools, like ArcSight or Splunk," said Kemp. "We will create integration points for those particular products."

I am sure Nebula has a high-availability architecture, but why buy multiple Nebula appliances when the same hardware, the Open Compute servers, is already in your environment? Because an investor convinced the Nebula founders it was a better revenue model?

Kemp wasn’t planning to do an appliance, he admits, but initial investor Bechtolsheim convinced him it was the right approach.

Would you want an appliance, or the software you can run on an Open Compute server?

BTW, given the SW runs on Open Compute servers, the Nebula software should run on almost any hardware, unless Nebula modified it to be hardware-specific.
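If you did run it as pure software, the high-availability story becomes the familiar one: several controller instances on commodity servers, with a health check deciding which one is active. Here is a generic sketch of that pattern; the hostnames, port, and health endpoint are made up for illustration, and this is not Nebula's actual HA design.

```python
# Generic active/standby failover sketch for a software cloud controller
# running on commodity servers. Hostnames, port, and the /health URL are
# hypothetical; this does not describe Nebula's real architecture.
import time
import urllib2

CONTROLLERS = ["controller-1", "controller-2", "controller-3"]

def healthy(host):
    """True if the controller answers its (assumed) health-check URL."""
    try:
        urllib2.urlopen("http://%s:8080/health" % host, timeout=2)
        return True
    except Exception:
        return False

while True:
    # First healthy controller wins; the others are warm standbys.
    active = next((h for h in CONTROLLERS if healthy(h)), None)
    print("active controller: %s" % (active or "NONE (page someone)"))
    time.sleep(10)
```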

 

MapR, 1/2 the HW and faster performance for Apache Hadoop

MapR Technologies came out of stealth mode in June, and their solution is available for download.


MapR is the Next Generation for Apache Hadoop

Here is a presentation that you can watch to learn more.

The Design, Scale and Performance of MapR’s Distribution for Apache Hadoop


Check out M.C. Srivas’ Hadoop Summit presentation. Srivas, the CTO and co-founder of MapR, outlines the architectural details behind MapR’s performance advantages. This technical discussion also describes the scale advantages of the MapR distributed NameNode and provides comparisons to HDFS.


Pretty cool that you can save half the power for your Apache Hadoop system: the same throughput on half the hardware means roughly half the electrical load. Software can save a lot of power in support of a green data center.
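As a back-of-the-envelope check on what half the hardware means in power terms, here is a tiny sketch; every figure in it (node count, per-node draw, PUE, electricity price) is an assumption for illustration, not a MapR benchmark number.

```python
# Back-of-the-envelope power savings from halving a Hadoop cluster.
# Every figure here is an assumption for illustration, not a MapR number.
NODES_SAVED = 500        # assume a 1,000-node cluster cut to 500 nodes
WATTS_PER_NODE = 300     # assumed average draw of a commodity server
PUE = 1.8                # assumed facility overhead (cooling, power losses)
PRICE_PER_KWH = 0.10     # assumed utility rate, $/kWh

kw_saved = NODES_SAVED * WATTS_PER_NODE * PUE / 1000.0
dollars_per_year = kw_saved * 24 * 365 * PRICE_PER_KWH
print("%.0f kW saved at the meter, roughly $%.0f per year"
      % (kw_saved, dollars_per_year))
```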

It is funny to think about the reality of a lights-out data center. Dilbert has a cartoon on the subject:

Dilbert.com

Human error in the data center is a reality, and the funny part is that one answer, just as in the cartoon, is to not allow employees in the data center. Especially if the data center is self-aware.

What is potentially worse than employees are the vendor support technicians who are not tracked. Do you know exactly what the warranty service technician did during his service call in your data center?

How important is AWS to Jeff Bezos if it is listed last in the highlights of the quarterly financial results?

Amazon.com posted its quarterly press release.

Amazon.com Announces Second Quarter Sales up 51% to $9.91 Billion

SEATTLE, Jul 26, 2011 (BUSINESS WIRE) —

Amazon.com, Inc. (NASDAQ:AMZN) today announced financial results for its second quarter ended June 30, 2011.

Operating cash flow increased 25% to $3.21 billion for the trailing twelve months, compared with $2.56 billion for the trailing twelve months ended June 30, 2010. Free cash flow decreased 8% to $1.83 billion for the trailing twelve months, compared with $1.99 billion for the trailing twelve months ended June 30, 2010.

The market reacted positively.

Amazon.com Tops Profit, Sales Estimates

Amazon.com Inc. (AMZN), the world’s largest online retailer, reported profit and sales that beat analysts’ estimates after its Kindle e-reader and digital-media services helped fuel growth. The shares jumped in late trading.

One interesting fact is that AWS comes last in the highlights of the quarter, though it does get three bullet items.

  • Amazon Web Services (AWS) and SAP announced that AWS has been certified as a global technology partner of SAP. Customers can now deploy a variety of SAP solutions in full production environments including SAP(R) Rapid Deployment and SAP(R) BusinessObjects(TM).
  • AWS announced the availability of Amazon Relational Database Service (RDS) for Oracle databases, allowing customers to easily set up, operate and scale fully managed Oracle databases in the cloud.
  • AWS lowered prices for the fifteenth time in four years by eliminating inbound Internet data transfer costs and reducing outbound data transfer costs.

What was #1, with four bullet items? The Kindle.

  • Sales growth of Kindle devices accelerated in second quarter 2011 compared to first quarter 2011.
  • Since AT&T agreed to sponsor screensavers, Kindle 3G with Special Offers is now our bestselling Kindle device - at only $139. With Kindle 3G, there’s no wireless set up and no paying or hunting for Wi-Fi hotspots. Kindle 3G’s always-on global wireless connectivity means that wherever you are, you can download books and periodicals in less than 60 seconds and start reading instantly. Amazon pays for Kindle’s 3G wireless connectivity, which means the convenience of 3G comes with no monthly fees, data plans or annual contracts.
  • Amazon.com announced the launch of Kindle Textbook Rental, offering students savings of up to 80% off textbook list prices. Tens of thousands of textbooks are available for the 2011 school year. In addition, Kindle Textbook Rental offers the ability to customize rental periods to any length between 30 and 360 days, so students only pay for the specific amount of time they need a book.
  • The U.S. Kindle Store now has more than 950,000 books, including New Releases and 110 of 111 New York Times Bestsellers. Over 800,000 of these books are $9.99 or less, including 65 New York Times Bestsellers. Millions of free, out-of-copyright, pre-1923 books are also available to read on Kindle.
So how important is AWS to Jeff Bezos? You know his top priority is the Kindle.

One way to look at green data center start-ups: are they founded by engineers and scientists, or by VCs?

Two of my cloud computing engineering friends and I are having a blast working on a technology solution that can be used in data centers as well as many other areas. I ran across Steve Blank's post on

How Scientists and Engineers Got It Right, and VC’s Got It Wrong

There are many parts of Steve's post that resonate with our team.

Startups are not smaller versions of large companies. Large companies execute known business models. In the real world a startup is about the search for a business model or more accurately, startups are a temporary organization designed to search for a scalable and repeatable business model.

...

Scientists and engineers as founders and startup CEOs is one of the least celebrated contributions of Silicon Valley.

It might be its most important.

We all worked in Silicon Valley, so we have a bunch of these methods ingrained in our thinking.

Why It’s “Silicon” Valley
In 1956 entrepreneurship as we know it would change forever. At the time it didn’t appear earthshaking or momentous. Shockley Semiconductor Laboratory, the first semiconductor company in the valley, set up shop in Mountain View. Fifteen months later eight of Shockley’s employees (three physicists, an electrical engineer, an industrial engineer, a mechanical engineer, a metallurgist and a physical chemist) founded Fairchild Semiconductor. (Every chip company in Silicon Valley can trace their lineage from Fairchild.)

The history of Fairchild was one of applied experimentation. It wasn’t pure research, but rather a culture of taking sufficient risks to get to market. It was learning, discovery, iteration and execution. The goal was commercial products, but as scientists and engineers the company’s founders realized that at times the cost of experimentation was failure. And just as they don’t punish failure in a research lab, they didn’t fire scientists whose experiments didn’t work. Instead the company built a culture where when you hit a wall, you backed up and tried a different path. (In 21st century parlance we say that innovation in the early semiconductor business was all about “pivoting” while aiming for salable products.)

The Fairchild approach would shape Silicon Valley’s entrepreneurial ethos: In startups, failure was treated as experience (until you ran out of money.)

Conveniently, our idea does not need VC money or MBAs.

Scientists and Engineers = Innovation and Entrepreneurship
Yet when venture capital got involved they brought all the processes to administer existing companies they learned in business school – how to write a business plan, accounting, organizational behavior, managerial skills, marketing, operations, etc. This set up a conflict with the learning, discovery and experimentation style of the original valley founders.

Yet because of the Golden Rule, the VC’s got to set how startups were built and managed (those who have the gold set the rules.)

I have been reading Steve Blank and following his ideas as he experiments with business models.

Earlier this year we developed a class in the Stanford Technology Ventures Program, (the entrepreneurship center at Stanford’s School of Engineering), to provide scientists and engineers just those tools – how to think about all the parts of building a business, not just the product. The Stanford class introduced the first management tools for entrepreneurs built around the business model / customer development / agile development solution stack. (You can read about the class here.)

Some of the best data center conversations I have are about new business models, not technology. Give it a try sometime. It is much more fun.