Adobe Greens Content Publishing, Closes Loop with Omniture Acquisition

Adobe announced its acquisition of Omniture, a move aimed at measuring the effectiveness of published content.

Adobe acquires Omniture

On Sept. 15, 2009, Adobe Systems Incorporated (Nasdaq:ADBE) and Omniture, Inc. (Nasdaq:OMTR) announced the two companies have entered into a definitive agreement for Adobe to acquire Omniture in a transaction valued at approximately $1.8 billion on a fully diluted equity-value basis. Under the terms of the agreement, Adobe will commence a tender offer to acquire all of the outstanding common stock of Omniture for $21.50 per share in cash.

Adobe’s acquisition of Omniture furthers its mission to revolutionize the way the world engages with ideas and information.

Adobe and Omniture

For the data center, the significance is in closing the loop on published content.

By combining Adobe’s content creation tools and ubiquitous clients with Omniture’s Web analytics, measurement and optimization technologies, Adobe will be well positioned to deliver solutions that can transform the future of engaging experiences and e-commerce across all digital content, platforms and devices.

This is an important part of a strategy to understand the value of information in the data center.

Greening the data center is easier if you can understand which information systems are high value and which are low value.

The more advanced companies are building a strategy for greening the data center that accounts for the business value of their systems.  You can save more by segmenting the IT loads by value than by treating them all the same.
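To make the segmentation idea concrete, here is a minimal sketch with made-up numbers (my illustration, not any particular company's methodology): lower-value loads can run with relaxed availability and cooling requirements, and the savings come from applying the cheaper treatment only where the business value allows it.

# Illustrative only: savings from segmenting IT loads by business value instead of
# running every workload at the most expensive availability tier. Numbers are made up.

loads_kw = {
    "revenue-critical": 2000,   # transaction systems: 2N power, tight cooling
    "business-support": 3000,   # email, reporting: N+1, wider temperature range
    "low-value-batch":  5000,   # archives, test/dev: no UPS, free cooling
}

# Assumed effective PUE for each segment once requirements are relaxed (assumptions).
pue_by_segment = {"revenue-critical": 1.6, "business-support": 1.4, "low-value-batch": 1.15}
pue_uniform = 1.6  # treating every load as if it were revenue-critical

segmented = sum(kw * pue_by_segment[name] for name, kw in loads_kw.items())
uniform = sum(loads_kw.values()) * pue_uniform

print(f"Uniform treatment: {uniform:,.0f} kW facility draw")
print(f"Segmented loads:   {segmented:,.0f} kW facility draw")
print(f"Savings:           {1 - segmented / uniform:.0%}")   # roughly 18% with these numbers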

Read more

KC Mares Asks The Tough Questions, Rewarded by PUE of 1.04

DataCenterKnowledge has a post on Ultra-Low PUE.

Designing for ‘Ultra-Low’ Efficiency and PUE

September 10th, 2009 : Rich Miller

The ongoing industry debate about energy efficiency reporting based on the Power Usage Effectiveness (PUE) metric is about to get another jolt. Veteran data center specialist KC Mares reports that he has worked on three projects this year that used unconventional design decisions to achieve “ultra-low PUEs” of between 1.046 and 1.08. Those PUE numbers are even lower than those publicly reported by Google, which has announced an average PUE of 1.20 across its facilities, with one facility performing at a 1.11 PUE in the first quarter of 2009.

KC’s post has more details.

Is it possible, a data center PUE of 1.04, today?

I’ve been involved in the design and development of over $6 billion of data centers, maybe about $10 billion now, I lost count after $5 billion a few years ago, so I’ve seen a few things. One thing I do see in the data center industry is more or less, the same design over and over again. Yes, we push the envelope as an industry, yes, we do design some pretty cool stuff but rarely do we sit down with our client, the end-user, and ask them what they really need. They often tell us a certain Tier level, or availability they want, and the MWs of IT load to support, but what do they really need? Often everyone in the design charrette assumes what a data center should look like without really diving deep into what is important.

And KC asks the tough questions.

Rarely did I get the answers from the end-users I wanted to hear, where they really questioned the traditional thinking and what a data center should be and why, but we did get to some unconventional conclusions about what they needed instead of automatically assuming what they needed or wanted.

We questioned what they thought a data center should be: how much redundancy did they really need? Could we exceed ASHRAE TC9.9 recommended or even allowable ranges? Did all the IT load really NEED to be on UPS? Was N+1 really needed during the few peak hours a year or could we get by with just N during those few peak hours each year and N+1 the rest of the year?

KC provides background we wish others would share.

Now, you ask, how did we get to a PUE of 1.05? Let me hopefully answer a few of your questions: 1) yes, based on annual hourly site weather data; 2) all three have densities of 400-500 watts/sf; 3) all three are roughly Tier III to Tier III+, so all have roughly N+1 (I explain a little more below); 4) all three are in climates that exceed 90F in summer; 5) none use a body of water to transfer heat (i.e. lake, river, etc); 6) all are roughly 10 MWs of IT load, so pretty normal size; 7) all operate within TC9.9 recommended ranges except for a few hours a year within the  allowable range; and most importantly, 8) all have construction budgets equal to or LESS than standard data center construction. Oh, and one more thing: even though each of these sites have some renewable energy generation, this is not counted in the PUE to reduce it; I don’t believe that is in the spirit of the metric.
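KC’s first point, that the numbers come from annual hourly site weather data, is the key to the arithmetic. A back-of-the-envelope way to see how a design lands near 1.05 is to weight a free-cooling PUE and a mechanical-cooling PUE by the hours the local climate allows each. The threshold and per-mode values below are my own illustrative assumptions, not figures from KC’s projects.

# Illustrative sketch: annualized PUE for an economizer-based design, weighted by
# hourly weather data. Threshold and per-mode PUE values are assumptions, not
# data from KC Mares's projects.
import csv

ECONOMIZER_MAX_F = 75.0   # assumed outside-air limit for free cooling
PUE_FREE_COOLING = 1.04   # fans and electrical losses only (assumption)
PUE_MECHANICAL   = 1.35   # hours when compressors must run (assumption)

def annualized_pue(weather_csv_path):
    """weather_csv_path: hourly weather file with a 'dry_bulb_f' column (e.g. TMY data)."""
    free = mech = 0
    with open(weather_csv_path) as f:
        for row in csv.DictReader(f):
            if float(row["dry_bulb_f"]) <= ECONOMIZER_MAX_F:
                free += 1
            else:
                mech += 1
    return (free * PUE_FREE_COOLING + mech * PUE_MECHANICAL) / (free + mech)

# A site with 8,300 free-cooling hours out of 8,760 comes out around
# (8300*1.04 + 460*1.35) / 8760 = ~1.06, in the "ultra-low" range described above.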

If you want higher efficiencies and lower costs, you need to be ready to ask the tough questions.

The easy thing to do is to collect the requirements of the various stakeholders and say, “this is what we need built,” without ever asking how much each requirement costs.

I know KC’s blog entry has others curious, and he has lots more appointments.

Hopefully this will wake up many others to ask the tough question: “how much does that data center requirement cost?”

Read more

Who Owns Data from Tax-Funded Projects, 3 Examples of Transit Systems

News.com has a post on the issue of data access in three different transit systems – NY, SF, and Portland. This is something to think about when considering smart grid projects and other projects that could affect data centers. 

Who owns transit data?

by Rafe Needleman

Commuters on public transit want to know two fundamental things: when can I expect the bus or train to pick me up? And when will it drop me off at my destination?

Nowadays, they may also be wondering whether their local transit agency is willing to share that data with others to put it into new and helpful formats.

How likely is it that the arrival and departure information will be available on a site or service other than the official one? That depends on how open your local agency is. In some metro areas, transit agencies make data--routes, schedules, and even real-time vehicle location feeds--available to developers to mash into whatever applications they wish. In others, the agencies lock down their information, claiming it may not be reused without permission or fee.

When taxpayers’ money is involved, there are interesting views on what should happen.

In local blogs and on transit sites, outrage over agencies and companies that claim ownership of the data is growing. The core argument against locking down such data is that it's collected by or paid for by public, taxpayer-funded agencies and thus should be open to all citizens, and that schedule data by itself is not protectable content. The argument against is that the agencies might be able to profit from using the data if they can maintain control of it. The counter to that is the belief that if the data is open, clever developers will create cool apps that make transit systems more usable, thus increasing ridership and helping transit agencies live up to their charters of moving people around and getting as many private cars as possible off the roads.

StationStops gives New York metro rail commuters a timetable in their iPhones. (Credit: StationStops)

Each city and metro area with a transit system is unique, but there are three cases in the U.S. that highlight the way the transit data drama can play out.

NY’s view treats data as copyrighted work.

New York locks down subway schedules
As reported last week at ReadWriteWeb and elsewhere, the New York Metropolitan Transportation Authority believes its public train schedules fall under copyright law and thus applied an interpretation of the Digital Millennium Copyright Act (DMCA) to send a takedown notice to the developer of StationStops, an iPhone app that gives people access to train schedules on the Metro-North lines.

SF is taking a more open approach, but has its hiccups.

San Francisco writes data accessibility into contracts

The Routesy iPhone app uses NextBus data to predict transit arrival times. (Credit: Routesy)

In San Francisco last week, Mayor Gavin Newsom unveiled (via TechCrunch) the Datasf.org initiative, which aims to put all the city's data online for open access. Included in the program is the San Francisco Municipal Transit Agency's schedule data. There's no question that this is a positive development for San Francisco Bay Area transit app developers and that it sets a good precedent for developers elsewhere. However, static schedule data is not the whole story for transit apps, especially on systems where route schedules are poorly adhered to (on New York's Metro-North lines, the schedules are somewhat reliable; for San Francisco's MUNI buses, they are not). The most useful new apps collect real-time vehicle location data, and access to that information is not yet available from SFData.

In many cities, a company called NextBus gathers location data from vehicles and then makes that information available to the subscribing cities as well as on its own Web site. Developers of real-time transit iPhone apps, such as San Francisco's Routesy and iCommute, have had mixed results in getting access to that data.

Portland is the best.

Visit Portland for the best in transit apps
In Portland, Ore., openness on the part of the local transit agency has been a blessing for transit app developers. There are more than 25 apps that use the public TriMet data stream. Many of the apps duplicate others' functions and features, but it's just this kind of competition that makes apps better over time. When companies control data about their services and are the only ones to provide the apps that use the data, users do not get the same benefit of rapid application evolution.

Then there is Google, working with all three.

Google drives the bus
Google is the most aggressive company in the transit planning business. If you ask Google Maps for directions, by default it will route you by car, but you can also ask it to give you directions by public transit. In many metro areas, it will even direct you among different transit systems (from a local bus line to a commuter rail system, for example).

I travel to all three cities, and I agree Portland has the best system and is the most enjoyable to visit for public transit.  The Portland TriMet system has a site listing transit apps, as well as a developer center.
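For a sense of what developers build against an open feed, here is a minimal sketch of polling a real-time arrivals service. The endpoint, parameters, and response shape are hypothetical placeholders, not TriMet’s actual API; check the agency’s developer center for the real interface and key registration.

# Hypothetical sketch of polling a transit agency's real-time arrivals feed.
# The URL, parameters, and JSON shape are placeholders, not TriMet's real API.
import json
import urllib.request

FEED_URL = "https://developer.example-transit.org/arrivals"  # placeholder endpoint

def next_arrivals(stop_id, api_key):
    url = f"{FEED_URL}?stop={stop_id}&key={api_key}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Assumes the feed returns {"arrivals": [{"route": ..., "minutes_away": ...}, ...]}.
    return sorted(data.get("arrivals", []), key=lambda a: a["minutes_away"])

# for arrival in next_arrivals("12345", "YOUR_KEY"):
#     print(arrival["route"], arrival["minutes_away"], "min")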

Read more

Follow the Cheap Energy, Software Routes Internet Traffic to Slash Costs

MIT’s Technology Review has an article on an Internet-routing algorithm that adapts to energy prices.

Energy-Aware Internet Routing

Software that tracks electricity prices could slash energy costs for big online businesses.

By Will Knight

MONDAY, AUGUST 17, 2009

An Internet-routing algorithm that tracks electricity price fluctuations could save data-hungry companies such as Google, Microsoft, and Amazon millions of dollars each year in electricity costs. A study from researchers at MIT, Carnegie Mellon University, and the networking company Akamai suggests that such Internet businesses could reduce their energy use by as much as 40 percent by rerouting data to locations where electricity prices are lowest on a particular day.

Data beast: Google maintains a huge datacenter in The Dalles, OR. (Credit: John Nelson)

Modern datacenters gobble up huge amounts of electricity and usage is increasing at a rapid pace. Energy consumption has accelerated as applications move from desktop computers to the Internet and as information gets transferred from ordinary computers to distributed "cloud" computing services. For the world's biggest information-technology firms, this means spending upwards of $30 million on electricity every year, by modest estimates.

The researchers worked with Akamai to test their ideas.

Asfandyar Qureshi, a PhD student at MIT, first outlined the idea of a smart routing algorithm that would track electricity prices to reduce costs in a paper presented in October 2008. This year, Qureshi and colleagues approached researchers at Akamai to obtain the real-world routing data needed to test the idea. Akamai's distributed servers cache information on behalf of many large Web sites across the US and abroad, and process some 275 billion requests per day; while the company does not require many large datacenters itself, its traffic data provides a way to model the demand placed on large Internet companies.

The researchers first analyzed 39 months of electricity price data collected for 29 major US cities. Energy prices fluctuate for a variety of reasons, including seasonal changes in supply, fuel price hikes, and changes in consumer demand, and the researchers saw a surprising amount of volatility, even among geographically close locations.

The interesting insight is that no single site was always the cheapest.

"The thing that surprised me most was that there was no one place that was always cheapest," says Bruce Maggs, vice president of research at Akamai, who contributed to the project while working as a professor at Carnegie Mellon and is currently a professor at Duke University. "There are large fluctuations on a short timescale."
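To make the idea concrete, here is a toy sketch (my own illustration, not the MIT/CMU/Akamai algorithm) of routing requests to the cheapest-energy site that still meets a latency budget. In practice the prices would be refreshed continuously from wholesale market feeds, which is where the “no one place is always cheapest” observation matters.

# Toy illustration of price-aware routing: pick the cheapest-energy site that still
# meets a latency budget. Prices and latencies are made up; this is not the
# researchers' actual algorithm.

DATA_CENTERS = {
    "the-dalles-or": {"price_usd_per_mwh": 31.0, "latency_ms": 48},
    "ashburn-va":    {"price_usd_per_mwh": 55.0, "latency_ms": 22},
    "chicago-il":    {"price_usd_per_mwh": 42.0, "latency_ms": 30},
}

def pick_site(sites, max_latency_ms=60):
    """Return the site with the lowest current electricity price within the latency budget."""
    eligible = {n: s for n, s in sites.items() if s["latency_ms"] <= max_latency_ms}
    if not eligible:
        # Fall back to the lowest-latency site if nothing meets the budget.
        return min(sites, key=lambda n: sites[n]["latency_ms"])
    return min(eligible, key=lambda n: eligible[n]["price_usd_per_mwh"])

print(pick_site(DATA_CENTERS))  # -> "the-dalles-or" with the numbers above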

Keep in mind this is cost reduction, not energy reduction.

Maggs cautions that the idea is not guaranteed to reduce energy usage or pollution, only energy costs. "The paper is not about saving energy but about saving cost, although there are some ways to do both," he says. "You have to hope that those are aligned."

And they reached out to Digital Realty Trust’s Mike Manos to get his view.

Michael Manos, senior vice president of Digital Realty Trust, a company that designs, builds, and manages large datacenters, believes that the lack of elasticity currently built into modern hardware makes it impossible to achieve the improvements suggested.

"It is great research but there are some base fundamental problems with the initial assumptions, which would prevent the type of savings they present," Manos says. Because most servers aren't used to capacity, he says, "you just can't get there."

However, Manos does see plenty of room for improvement in datacenter designs. "I believe the datacenter industry is just beginning to enter into a Renaissance of sorts," he says. "Technology, economic factors, and a new breed of datacenter managers are forcing change into the industry. It's a great time to be involved."

Read more

Google’s Data Center Efficiency Implies Greener Email

Here is an article where the writer draws on DataCenterKnowledge and Google’s PUE calculations to argue that Gmail is greener than most other email systems.

Let’s start with the DataCenterKnowledge content.

So how do you, a single consumer use Gmail to lower your carbon footprint?  Well Google has achieved extremely good Power Usage Effectiveness (PUE) levels (see: the site “Data Center Knowledge”).  PUE is the ratio of total energy in to your data center, divided by the energy used by your IT technology.  A perfect PUE ratio would be 1, this would indicate that all of the energy used by your data center is used only to drive the IT.  Nothing to cool it, light it, etc.  Since that is about as realistic as perpetual motion, Google has set the standard of excellence right now at a PUE of about 1.1.  They measure PUE daily, and have some data centers down to ratios of 1.11.  They are not the only ones striving for efficiencies.  Microsoft’s newest data center approaches are also getting very efficient.  They report a global average PUE of 1.60.

So what does this really mean?  According to Google, their data centers are roughly 2x as energy efficient as other data centers (imagine one you run yourself, Google’s will be twice as energy efficient).  In addition to their energy efficiencies, they also recycle the water used for cooling their components.  Re-using their cooling water makes Google’s data centers even more environmentally friendly – and better re-use of water is another one of Google’s priorities as they evolve their data centers.
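For reference, the arithmetic behind those ratios is simple. A quick sketch with illustrative numbers (not Google’s or Microsoft’s actual measurements):

# PUE = total facility energy / IT equipment energy, measured over the same period.
# The numbers below are illustrative only.

def pue(total_facility_kwh, it_kwh):
    return total_facility_kwh / it_kwh

it_load_kwh = 1_000_000
print(pue(1_110_000, it_load_kwh))  # 1.11 -> ~11% overhead for cooling, power losses, lighting
print(pue(1_600_000, it_load_kwh))  # 1.60 -> 60% overhead, near the reported industry average
print(pue(2_200_000, it_load_kwh))  # 2.20 -> a less efficient in-house data center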

Then there is the web-based email perspective.

Lifehacker recently announced the results of a survey of their readers regarding e-mail.  54% indicated they prefer to manage e-mail on the web, versus just 24% who prefer a desktop email client.  20% prefer a hybrid approach (see “Web-based e-mail slaughters Its Desktop Counterparts“).

Web-based e-mail is preferred for lots of reasons.  With this note I’ll highlight one you might not have considered yet.  Web-based e-mail can be better for the environment.

The author credits Google with driving data center efficiency.

Google has been the driving force behind Data Center and Server efficiency and optimization for the last few years.  They have been using a variety of means to lower data center energy use (detailed at Google’s Going Green site).   The gist of it is that they use water evaporation, streamlined electrical infrastructure, and minimized technology requirements to curb their energy desires.  They even take the Graphics Processing Units (GPUs) out of their servers to lessen energy requirements.

You can argue with various points in this, but the author is entitled to write what he thinks is true, and 600 people have read his post.

One good question he poses:

Power Usage Effectiveness

Ask your ISP for their PUE

Read more