Carbon footprint of a Tweet: Energy/Tweet approx. 100 J, CO2 approx. 0.02 grams

Earth2Tech reports on the Energy per Tweet.

How Much Energy Per Tweet?

By Katie Fehrenbacher Apr. 19, 2010, 12:00am PDT

Every time you send out 140 characters over the social application Twitter, how much energy does that consume? According to some back-of-the-napkin calculations from Raffi Krikorian, a developer for Twitter’s Platform Team, each tweet sent consumes about 90 joules. That means each tweet emits about 0.02 grams of CO2 into the atmosphere.

For the roughly 50 million tweets sent on average per day, that’s the equivalent of 1 metric ton of CO2 per day. (1 metric ton of CO2 looks kinda like this).
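As a sanity check on those figures, here is a minimal back-of-the-napkin sketch of the arithmetic. The grid carbon intensity of roughly 0.8 kg CO2 per kWh is my own assumption, not a number from the article or from Raffi's talk:

```python
# Back-of-the-napkin check of the per-tweet figures quoted above.
# The grid carbon intensity is an assumed round number, not a figure
# from the Earth2Tech article or Raffi's talk.

JOULES_PER_TWEET = 90           # from the article above
TWEETS_PER_DAY = 50_000_000     # roughly 50 million tweets per day
CO2_KG_PER_KWH = 0.8            # assumed average grid carbon intensity

kwh_per_tweet = JOULES_PER_TWEET / 3_600_000          # 1 kWh = 3.6 MJ
co2_grams_per_tweet = kwh_per_tweet * CO2_KG_PER_KWH * 1000
daily_co2_metric_tons = co2_grams_per_tweet * TWEETS_PER_DAY / 1_000_000

print(f"CO2 per tweet: {co2_grams_per_tweet:.3f} g")              # ~0.02 g
print(f"CO2 per day:   {daily_co2_metric_tons:.1f} metric tons")  # ~1 t
```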

Raffi Krikorian's passionate talk on the energy use of a tweet starts at 2:50 in this video.  It is only 5 minutes long.


Apr 15

From #chirp: Energy / Tweet ≈ 100 J ±  something / Tweet

Last night at Chirp, I gave an Ignite talk entitled "Energy / Tweet".  Taking a few liberties, some assumptions, and running all of Twitter in development mode on my laptop, "energy per tweet" comes out to about 100 J / Tweet.

You can catch me talking (and introduced by @brady) starting at 2:50 in this video:

You can also just get the slides here:
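Raffi's exact methodology isn't spelled out here, but a measurement like the one he describes, running the service on his laptop, generally reduces to average power draw times run time, divided by tweets handled. A hypothetical sketch with made-up numbers:

```python
# Hypothetical sketch of the "energy per tweet" measurement idea:
# estimate the machine's average power draw while serving, multiply by
# the length of the run, and divide by tweets processed. The numbers
# below are invented for illustration only.

avg_power_watts = 50.0      # assumed laptop power draw during the test
run_time_seconds = 600.0    # length of the test run
tweets_processed = 300      # tweets handled during the run

total_energy_joules = avg_power_watts * run_time_seconds
joules_per_tweet = total_energy_joules / tweets_processed

print(f"{joules_per_tweet:.0f} J per tweet")   # 100 J with these numbers
```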

Excuse the language in this comment, but it illustrates Raffi's passion: at 8:10 he says we can be less of a "planet fucker."

This is the kind of thinking that gets people asking what the carbon impact of their code is, much like the Microsoft post from last week.

eBay understands the energy per listing.  Google understands the energy per search.  Twitter understands the energy per tweet.

Do you have energy consumption for your IT services?

Twitter knows it has to be more energy efficient; just look at its growth.

The new numbers blow past Pingdom’s stats. Some of the highlights:

- In 2007, around 5,000 tweets were sent per day.

- By 2008, the number grew to 300,000 tweets per day.

- By 2009, around 2.5 million tweets were sent through Twitter every single day.

- Tweet growth shot up by 1,400% in 2009, reaching 35 million tweets per day by the end of the year.

- As of now, Twitter sees 50 million tweets created per day.

Great job, Raffi, for waking up your development community to the energy per tweet.

Read more

What is behind the Adobe vs. Apple feud?

I have been meaning to write this post, and the story just got hotter with the latest news that Adobe is preparing to sue Apple.

Adobe vs. Apple is going to get uglier

You think things are bad now between Apple and Adobe? Just wait until the lawsuit.


April 12, 2010, 05:36 PM —

Usually I write about security here, but Apple's iron-bound determination to keep Adobe Flash out of any iWhatever device is about to blow up in Apple's face. Sources close to Adobe tell me that Adobe will be suing Apple within a few weeks.

It was bad enough when Apple said, in effect, that Adobe Flash wasn't good enough to be allowed on the iPad. But the final straw was when Apple changed its iPhone SDK (software development kit) license so that developers may not submit programs to Apple that use cross-platform compilers.

Below is more information, but I want to put my thoughts up here on what is behind the Adobe vs. Apple feud.  Apple's business is built on a closed-system approach to its hardware and software.  Adobe's approach is cross-platform technologies.  Adobe's history of PostScript, Type 1 fonts, Acrobat, and Mac/Win app development has targeted customers who need to work across different systems.  PostScript and Type 1 fonts worked because you could use the same PostScript on Mac/Windows desktops, then on your high-end printers.  This strategy also meant Adobe owned the IP and could sell it to others who wanted to be compatible with Apple printers and hardware, eventually bringing an end to the Apple LaserWriter, as Apple couldn't compete.

Building on the success of the original LaserWriter, Apple developed many further models. Later LaserWriters offered faster printing, higher resolutions, Ethernet connectivity, and eventually color output. To compete, many other laser printer manufacturers licensed Adobe PostScript for inclusion into their own models. Eventually the standardization on Ethernet for connectivity and the ubiquity of PostScript undermined the unique position of Apple’s printers: Macintosh computers functioned equally well with any Postscript printer. After the LaserWriter 8500, Apple discontinued the LaserWriter product line.

Steve Jobs has learned this lesson well, and knows what happens if he lets Adobe's cross-platform technologies into Apple products.  Steve has made his decision: Adobe Flash will not ship on the iPhone, because he can see what happened with PostScript.  In addition, Steve has provided technical reasons why Adobe Flash is not appropriate for the iPhone, but if he really wanted Flash, couldn't he work with Adobe to address the technical issues?

Steve Jobs has slammed Flash.

by Erica Ogg

Jobs using the iPad, sans any support for Adobe Flash. (Credit: James Martin/CNET)

Apple CEO Steve Jobs has reportedly continued his campaign against Adobe's Flash video technology, this time at a meeting with The Wall Street Journal, according to a report in Valleywag.

People who were at a recent meeting Jobs had with some of the paper's executives told the Gawker-owned site that Jobs dismissed Flash as "a CPU hog," full of "security holes," and "old technology" and would therefore not be including the technology on the iPad, or presumably, the iPhone. (Adobe did recently promise to make the Mac version of its browser plug-in faster.)

It's not the first time we've heard this. At an Apple shareholder meeting two years ago Jobs explained why Flash wouldn't be on the iPhone any time soon. He told those present that the full-blown PC Flash version "performs too slow to be useful" on the iPhone, and that the mobile version--Flash Lite--"is not capable of being used with the Web."

The following reminds me of the frustration at Apple and Microsoft when we cussed about Adobe Type Manager (ATM) crashing the OS. (I worked on both the Mac and Windows OS while at Apple and Microsoft.)

More recently, word leaked out from Apple's employee-only meeting after the iPad introduction that Jobs had slammed Flash. According to a report on Wired, he responded to an employee question that "whenever a Mac crashes, more often than not, it's because of Flash," and that "no one will be using Flash. The world is moving to HTML5."

The little piece of irony is that Google believes HTML5 is key to mobile growth as well.

This is starting to feel like a feud similar to the Hatfields vs. McCoys.  Although a more modern term is a smackdown.


Steve Jobs hates Adobe Flash: iPhone 4.0 SDK lockdown smackdown

Ouch. Looks like writing apps in Flash is verboten, according to the latest iPhone OS 4.0 SDK legal language. CS5 and other cross-compilers could be dead in the water. In IT Blogwatch, bloggers uncover more proof that Steve Jobs hates Adobe.

By Richi Jennings. April 9, 2010.
(AAPL) (ADBE)

He's back: your humble blogwatcher selected these bloggy morsels for your enjoyment. Not to mention Michelle Obama's biggest fear...
Cade Metz has met the enemy, and it's Adobe, apparently:

Apple's new SDK for the iPhone ... will likely prevent ... Adobe's upcoming Flash ... development suite from converting Flash scripts into native Jesus Phone apps. ... Apple's iPhone SDK has always said that "applications may only use Documented APIs." ... But Steve Jobs and company have now tacked on a few additional sentences.
...
It would appear that Steve Jobs has landed another blow against ... Adobe. ... Steve Jobs has already barred untranslated Flash from the iPhone and the iPad, calling it "buggy," littered with security holes, and a "CPU hog."

John Gruber is widely credited with breaking the news:

My reading of this new language is that cross-compilers ... are prohibited. This also bans apps compiled using MonoTouch. ... The folks at Appcelerator realize ... that they might be out of bounds with Titanium. Ansca’s Corona SDK ... strikes me as out of bounds.
...
The language in the agreement doesn’t leave much wiggle room for Flash. ... Wonder what Adobe does now? ... They’re pretty much royally ****ed.

Hank Williams calls it an "insane restraint of trade":

3.3.1 not only bans cross platform tools, it bans everything that is written in other languages and are ported to C. This, obviously, includes libraries. ... [It's] an insidious concept and strikes at the core of product development and of computer science in general. Everything is built on other stuff. ... This language is fundamentally unreasonable.
...
Some may say my interpretation is too pedantic. But the point is that in order for Apple to limit people in the way that they want to ... they are inflicting collateral damage. ... There is a reasonable risk that not only is 3.3.1 restraint of trade, but that the entire ... App Store concept ... is found to be restraint of trade. ... Adobe, and/or class action lawyers start your engines!
Read more

Microsoft writes humorous blog post to educate SW developers about the power cost of running their code

Microsoft has a blog post on "How Much Does Your Code Cost?", interjecting humor into a typically dry topic.

The big difference is that with cloud computing, you’re renting computing power in a data center somewhere. As far as you’re concerned, it could be on Saturn. Except that the latency figures might be a bit excessive. If you’ve accidentally opened one of those magazines your network administrator takes with him to the bathroom, you might know that these data centers contain racks and racks of servers, all with lots of twinkling lights. If you’ve ever been to a data center, you’ll know that they can be very hot near the server fans, much colder around the cooling vents, and noisy everywhere. All this activity results from removing the heat that the servers produce. But that heat doesn’t get there all by itself – the servers create it from the electricity they use. What’s more, it requires even more electricity to remove that heat.

Consider sending this post on to those who are involved in SW decisions to get them thinking about the impact of their code.

When you’re up against deadlines to turn in a software project, you probably are focused on ensuring that you meet the functionality requirements set out in the design specification. If you have enough time, you might consider trying to maximize performance. You might also try to document your code thoroughly so that anyone taking over the project doesn’t need to run Windows for Telepaths to work out what your subroutines actually do. But there is probably one area that you don’t consider: the cost of your code.

You mean what it costs to write the code, right? No.

Er, how about what it costs to compile? You’re getting warmer...

What it costs to support? No, colder again.

OK, you win. What costs do you mean?

I mean what it costs to run your code. In the bad old days, when clouds were just white fluffy things in the sky and all applications ran on real hardware in a server room somewhere or on users’ PCs, then cost simply wasn’t a factor. Sure, you might generate more costs if your application needed beefier hardware to run, but that came out of the cable-pluggers’ capital budget, and we all know that computer hardware needs changing every other year, so the bean-counters didn’t twig. A survey by Avanade showed that 50% of IT departments don’t even budget for the cost of electricity to run their IT systems. For more information, see this Avanade News Release, at http://www.avanade.com/_uploaded/pdf/pressrelease/globalenergyfindingsrelease541750.pdf

Life would be so much easier in the data center if SW developers and others thought about the direct relationship between data center infrastructure costs and the code they write and how it is architected.

The good thing is that cloud computing is helping to get SW developers to think about the cost of running their code.

If you deploy applications into the cloud, it is highly likely that your service provider will be charging you based on the energy that you use. Although you don’t see electricity itemized as kw/hr, you are billed for CPU, RAM, storage and network resources, all of which consume electricity. The more powerful processor with more memory costs more, not just because the cost of the components, but because they consume more electricity. In many ways, this is an excellent business model, as you don’t have to buy the hardware, maintain it, depreciate it, and finally, replace it. You simply pay for what you use. Or putting it another way, you pay for the resources you use. And this is the point at which you need to ask yourself: How much does my code cost? When power usage directly affects the cost of running your applications, a power-efficient program is more likely to be a profitable one.
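To make the "how much does my code cost" question concrete, here is a minimal sketch of my own, not from the Microsoft post, that estimates the electricity behind a month of always-on compute, including the cooling overhead captured by PUE. Every rate in it is an assumption for illustration:

```python
# Minimal sketch: electricity consumed, and its cost, for a month of
# always-on compute. Every rate here is an assumption for illustration.

avg_server_power_watts = 200.0    # assumed average draw attributable to the workload
pue = 1.8                         # assumed Power Usage Effectiveness (cooling overhead)
hours_per_month = 730
price_per_kwh = 0.10              # assumed electricity price in $/kWh

it_kwh = avg_server_power_watts * hours_per_month / 1000
facility_kwh = it_kwh * pue       # IT load plus cooling and distribution losses
monthly_electricity_cost = facility_kwh * price_per_kwh

print(f"IT load:      {it_kwh:.0f} kWh/month")
print(f"With PUE {pue}: {facility_kwh:.0f} kWh/month, about ${monthly_electricity_cost:.2f}")
```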

The blog post references Visual Studio and Intel resources to help SW developers.

It is possible that future versions of Visual Studio will include options for checking your code for power usage. Until that time, following these recommendations should help minimize the running costs of your applications within a cloud-based environment.

  1. Reduce or eliminate accesses to the hard disk. Use buffering or batch up I/O requests.
  2. Do not use timers and polling to check for process completion. Each time the application polls, it wakes up the processor. Use event triggering to notify completion of a process instead.
  3. Make intelligent use of multiple threads to reduce computation times, but do not generate threads that the application cannot use effectively.
  4. With multiple threads, ensure the threads are balanced and one is not taking all the resources.
  5. Monitor carefully for memory leaks and free up unused memory.
  6. Use additional tools to identify and profile power usage.
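As a rough illustration of recommendation 2 in the list above, here is a minimal Python sketch of my own (not code from the Microsoft post) contrasting wasteful polling with event-based notification:

```python
# Illustration of recommendation 2: use event triggering instead of
# polling, so the waiting thread sleeps until the work is done rather
# than waking the processor over and over. My own sketch, not code
# from the Microsoft post.
import threading
import time

done = threading.Event()

def worker():
    time.sleep(2)      # stand-in for real work
    done.set()         # signal completion instead of being polled

threading.Thread(target=worker).start()

# Wasteful pattern (avoid): wakes the CPU every 10 ms.
#   while not done.is_set():
#       time.sleep(0.01)

# Efficient pattern: block on the event until the worker signals it.
done.wait()
print("work finished")
```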

For more ideas on how to reduce power usage, check out the following resources and tools:

Energy-Efficient Software Checklist, at http://software.intel.com/en-us/articles/energy-efficient-software-checklist/.

Creating Energy-Efficient Software, at http://software.intel.com/en-us/articles/creating-energy-efficient-software-part-1/.

Intel PowerInformer, at http://software.intel.com/en-us/articles/intel-powerinformer/.

Application Energy Toolkit, at http://software.intel.com/en-us/articles/application-energy-toolkit/.

Read more

Open Source Data Center Initiative Story by Mike Manos

I wrote a post announcing GreenM3's partnership with the University of Missouri and ARG Investments, with Mike Manos as an industry advisor.  I spent a few paragraphs explaining the use of an Open Source Software model applied to data centers.

Mike Manos took the time to write his own post in response to mine, and it is a well-written story that explains why we are taking this approach.

My first reaction was to cut and paste relevant parts and add comments, but the whole story makes sense. So for a change, I am going to copy his whole post below to make sure we have it in two places.

Open Source Data Center Initiative

March 3, 2010 by mmanos

There are many in the data center industry that have repeatedly called for change in this community of ours.  Change in technology, change in priorities, Change for the future.  Over the years we have seen those changes come very slowly and while they are starting to move a little faster now, (primarily due to the economic conditions and scrutiny over budgets more-so than a desire to evolve our space) our industry still faces challenges and resistance to forward progress.   There are lots of great ideas, lots of forward thinking, but moving this work to execution and educating business leaders as well as data center professionals to break away from those old stand by accepted norms has not gone well.

That is why I am extremely happy to announce my involvement with the University of Missouri in the launch of a Not-For-Profit Data Center specific organization.   You might have read the formal announcement by Dave Ohara, who launched the news via his industry website, GreenM3.   Dave is another of those industry insiders who has long been perplexed by the lack of movement and initiative we have had on some great ideas and stand outs doing great work.  More importantly, it doesn’t stop there.  We have been able to put together quite a team of industry heavy-weights to get involved in this effort.  Those announcements are forthcoming, and when they come, I think you will get a sense of the type of sea-change this effort could potentially have.

One of the largest challenges we have with regard to data centers is education.   Those of you who follow my blog know that I believe that some engineering and construction firms are incented ‘not to change’ or not to implement new approaches.  The cover of complexity allows customers to remain in the dark while innovation is stifled. Those forces who desire to maintain an aura of black box complexity around this space, and who repeatedly speak to the arcane arts of building out data center facilities, have been at this a long time.  To them, the interplay of systems requiring one-off monumental temples to technology on every single build is the norm.  It's how you maximize profit, and keep yourself in a profitable position.

When I discussed this idea briefly with a close industry friend, his first question naturally revolved around how this work would compete with that of the Green Grid, or Uptime Institute, Data Center Pulse, or the other competing industry groups.  Essentially, was this going to be yet another competing thought-leadership organization?  The very specific answer to this is no, absolutely not.

These groups have been out espousing best practices for years.  They have embraced different technologies, they have tried to educate the industry.  They have been pushing for change (for the most part).  They do a great job of highlighting the challenges we face, but for the most part have waited around for universal good will and monetary pressures to make them happen.  It dawned on us that there was another way.   You need to ensure that you build something that gains mindshare, that gets the business leadership's attention, that causes a paradigm shift.   As we put the pieces together we realized that the solution had to be credible, technical, and above all have a business case around it.   It seemed to us the parallels to the Open Source movement and the applicability of the approach were a perfect match.

To be clear, this Open Source Data Center Initiative is focused around execution.   It's focused around putting together an open and free engineering framework upon which data center designs, technologies, and the like can be quickly put together, and moreover standardize the approaches that both end-users and engineering firms take to the data center industry.

Imagine, if you will, a base framework upon which engineering firms, or even individual engineers, can propose technologies and designs, and specific solution vendors can pitch technologies for inclusion and highlight their effectiveness; more than all of that, it will remove much of the mystery behind the work that happens in designing facilities and normalize conversations.

If you think of the Linux movement, and all of those who actively participate in submitting enhancements, features, even pulling together specific build packages for distribution, one could even see such things emerging in the data center engineering realm.   In fact, with the myriad of emerging technologies assisting in more energy efficiency, greater densities, differences in approach to economization (air or water), and use or non-use of containers, it's easy to see the potential for this component-based design.

One might think that we are effectively trying to put formal engineering firms out of business with this kind of work.  I would argue that this is definitely not the case.  While it may have the effect of removing some of the extra-profit that results from the current ‘complexity’ factor, this initiative should specifically drive common requirements, and lead to better educated customers, drive specific standards, and result in real world testing and data from the manufacturing community.  Plus, as anyone knows who has ever actually built a data center, the devil is in the localization and details.  Plus as this is an open-source initiative we will not be formally signing the drawings from a professional engineering perspective.

Manufacturers could submit their technologies and sample applications of their solutions, and have those designs plugged into a ‘package’ or ‘RPM’, if I could steal a term from the Red Hat Linux nomenclature.  Moreover, we will be able to start driving true visibility of costs, both upfront and operating, and associate those costs with the set designs, with differences and trending from regions around the world.  If it's successful, it could be a very good thing.

We are not naive about this however.  We certainly expect there to be some resistance to this approach out there and in fact some outright negativity from those firms that make the most of the black box complexity components.

We will have more information on the approach and what it is we are trying to accomplish very soon. 

\Mm
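To make Mike's 'package' or 'RPM' analogy a little more concrete, one could imagine each design component shipping with a small machine-readable descriptor of its costs and claimed efficiency. Here is a hypothetical sketch; the fields, names, and numbers are mine, not part of the initiative:

```python
# Hypothetical sketch of an open data center design "package", in the
# spirit of an RPM: a reusable component carrying its cost and
# efficiency data. All fields and numbers are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignPackage:
    name: str
    category: str                 # e.g. "cooling", "power", "containment"
    capex_per_kw: float           # upfront cost, $ per kW of IT load
    opex_per_kw_year: float       # operating cost, $ per kW per year
    design_pue: float             # claimed efficiency of the design
    regions_tested: List[str] = field(default_factory=list)

air_economizer = DesignPackage(
    name="air-side-economizer-v1",
    category="cooling",
    capex_per_kw=450.0,
    opex_per_kw_year=60.0,
    design_pue=1.25,
    regions_tested=["Pacific Northwest", "Northern Europe"],
)

print(air_economizer)
```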

Read more

Symbian Mobile OS goes open source, is data center design the next open source opportunity?

Symbian OS went open source today.

Symbian Is Open

As of now, the Symbian platform is completely open source.  And it is Symbian^3, the latest version of the platform, which will soon be feature complete.

Open sourcing a market-leading product in a dynamic, growing business sector is unprecedented.  Over 330 million Symbian devices have been shipped worldwide, and it is likely that a further 100 million will ship in 2010 with more than 200 million expected to ship annually from 2011 onwards.


Now the platform is free for anyone to use and to contribute to.  It is not only a sophisticated software platform; it is also the focal point of a community. And a lot of the foundation’s effort going forward will be to ensure the community grows and is supported in bringing great innovations to the platform and future devices.

PCWorld writes on the five benefits of open-sourcing Symbian.

Five Benefits of an Open Source Symbian

By Tony Bradley

The Symbian mobile operating system is getting a second life as the Symbian Foundation makes the smartphone platform open source. The lifeline will revitalize the platform, and has benefits for Nokia, smartphone developers, Symbian handsets, and smartphone users.

With open source hitting all aspects of IT, including mobile, when will data center designs go open source?  Don’t hold your breath: few data center designers are software people, so open source is still a foreign concept for many; designs are protected, and transparency about what goes on is heresy to their thinking and business models.

But, maybe as Cloud Computing goes open source with companies like Eucalyptus, people will not see the value in much of how data centers have been built in the past.

Eucalyptus open-sources the cloud (Q&A)

It's reasonably clear that open source is the heart of cloud computing, with open-source components adding up to equal cloud services like Amazon Web Services. What's not yet clear is how much the cloud will wear that open source on its sleeve, as it were.

Eucalyptus, an open-source platform that implements "infrastructure as a service" (IaaS) style cloud computing, aims to take open source front and center in the cloud-computing craze. The project, founded by academics at the University of California at Santa Barbara, is now a Benchmark-funded company with an ambitious goal: become the universal cloud platform that everyone from Amazon to Microsoft to Red Hat to VMware ties into.

Or, rather, that customers stitch together their various cloud assets within Eucalyptus.

Is open source a threat to data center design?  For some maybe, for others it is an opportunity.

For compliance and regulatory reasons, cloud computing providers will eventually need to provide some level of transparency about their data center infrastructure, enough to meet the needs of governments and other regulatory agencies.  Will this be a driving issue for opening up more details on data center infrastructure?

There are those who argue that, for security reasons, we are not transparent in order to reduce our risks.  But open source software believers say systems are more secure when they are transparent and open to peer review.

Read more