Google takes available space at 111 8th in NYC off the market

DatacenterDynamics reports on Google's move to take 111 8th Ave. data center space off the market.

Google takes all available space at key NYC carrier hotel off market

Future of data center providers at 111 8th Ave. uncertain

Published 20th May, 2011 by Yevgeniy Sverdlik

111 8th Avenue in New York

Following its acquisition of one of the East Coast's largest carrier hotels at 111 8th Ave. in New York City, Google has taken all the space that was available in the building off the market. The building is home to a number of commercial data center providers, including Digital Realty Trust, Telx and Internap, among others.

I speculated that Google could use the space in 111 8th for carrier negotiations.

Google now owns a premium networking access point in NYC, the biggest concentration of money in the USA, with its financial firms, stock exchanges, and other businesses.

As Google negotiates carrier access in various markets, it can offer a presence in 111 Eighth Ave. This can change price points as well as guarantees of service and access.

If an emerging-market telecom sets up a relationship with Google and agrees to a presence in 111 Eighth Ave., then the more that telecom comes to need the location, for a variety of economic and technical reasons, the more the arrangement works in Google's favor.

Did Google just buy one of the biggest bargaining chips it could have to negotiate access to worldwide telcos?

With CenturyLink's purchase of Savvis and Verizon's purchase of Terremark, could they do what Google is thinking? It is interesting to think one building could be more valuable than Savvis or Terremark.

Verizon publishes Carbon per Terabyte metric, 15% improvement 2009 - 2010

Verizon has a press release on its new carbon metric.

Verizon Develops New Metric to Accurately Measure Company's Carbon Efficiency as Broadband, IP, Wireless and Video Services Grow

Metric Will Help Company to Continue Its Energy-Conservation Improvements Amid Increasing Demands on Its Network

NEW YORK – April 28, 2011 –

Verizon has developed a new metric for measuring carbon efficiency, enabling the company, for the first time, to accurately quantify the impact of all of its green initiatives.

The metric will help Verizon continue to make improvements in energy conservation and efficiency, as the rapid increase in demand for broadband, IP network services, wireless data and video increases the demands on the company's network - and the amount of energy needed to operate the network.

Called the "carbon intensity metric," the new measurement was developed by Verizon's Sustainability Office and tested over the past 12 months. The tests showed an improvement of approximately 15 percent in the company's carbon efficiency, from 2009 to 2010.

The metric is derived by first combining Verizon's total carbon emissions (in metric tons) from the electricity, building fuels and vehicle fuels used to run the company's business. Then, that total is divided by the number of terabytes of data that the company transports across its network. (One terabyte equals about 300 feature-length movies.) Verizon transported 78.6 million terabytes across its global network in 2010 - an increase of about 16 percent, compared with 2009.
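The arithmetic behind the metric is straightforward division. Here is a minimal sketch in Python; the press release gives the traffic figures but not the emissions totals, so the emissions numbers below are placeholders I picked so the output lands near the reported 15 percent improvement.

```python
# Minimal sketch of the carbon intensity metric as described:
# total carbon emissions (metric tons) divided by terabytes transported.

def carbon_intensity(emissions_tons: float, terabytes: float) -> float:
    """Metric tons of CO2e per terabyte of network traffic."""
    return emissions_tons / terabytes

# Traffic figures from the press release.
tb_2010 = 78.6e6            # terabytes transported in 2010
tb_2009 = tb_2010 / 1.16    # back out 2009 traffic from the ~16% growth

# Placeholder emissions totals (the release does not publish these),
# chosen only so the result lands near the reported 15% improvement.
emissions_2009 = 7.0e6      # metric tons, hypothetical
emissions_2010 = 6.9e6      # metric tons, hypothetical

ci_2009 = carbon_intensity(emissions_2009, tb_2009)
ci_2010 = carbon_intensity(emissions_2010, tb_2010)
improvement = (ci_2009 - ci_2010) / ci_2009

print(f"2009: {ci_2009:.4f} t/TB  2010: {ci_2010:.4f} t/TB")
print(f"Carbon intensity improvement: {improvement:.1%}")  # ~15%
```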

We’ll see if others adopt this method.

One area I found deceptive is this graph on the Verizon environmental site.

[Image: Verizon carbon emissions graph, 2007 - 2009, with a truncated vertical axis]

Note how the scale is not provided. From 2007 to 2009 there is a 10% reduction, but the graph deception makes the reduction look like over 33%. It's too bad you can't make money by finding deceptive graphing techniques; it would keep the marketing folks a lot more honest.
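Here is a quick way to see the trick for yourself: a minimal matplotlib sketch, with made-up numbers, that plots the same 10% drop against a zero-based axis and against a truncated one.

```python
# Made-up numbers showing how a truncated axis exaggerates a 10% drop.
import matplotlib.pyplot as plt

years = ["2007", "2008", "2009"]
emissions = [100, 95, 90]  # hypothetical index values: a 10% decline

fig, (honest, deceptive) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(years, emissions)
honest.set_ylim(0, 110)        # zero-based axis: the drop reads as ~10%
honest.set_title("Zero-based axis")

deceptive.bar(years, emissions)
deceptive.set_ylim(85, 102)    # truncated axis: the same drop looks huge
deceptive.set_title("Truncated axis")

fig.tight_layout()
plt.show()
```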

Verizon promotes itself as a Cloud Computing alternative

Verizon's Jeff Deacon gets on a soapbox to promote Verizon as a top choice for cloud computing.

Connected Planet: What differentiates a telecom like Verizon from other cloud providers today?

Jeff Deacon: To start, the end-to-end SLAs on application performance are a key differentiator for telcos that can boast core assets such as hundreds of data centers around the world—necessary as a strong foundation for cloud service infrastructure. Also, owning the global IP networks means telecom operators can help enterprises migrate applications to the cloud with real SLA guarantees. Only a cloud provider that has control of the data center and everything in the data center (as well as the networks underlying those data centers and connecting the enterprises) can offer end-to-end SLA capabilities for mission-critical, heavy-duty applications critical to enterprise businesses.

Additionally, for decades, it is the telcos that do the metering and billing that now enable the usage-based capabilities necessary for measuring, charging and billing for what is actually used in a cloud environment. The back-office systems have to recognize the different types of cloud consumption and move the necessary information into billing and charging systems so customers know exactly what they used and for what they are being charged.

For example, today, we monitor usage on a daily basis when it comes to compute memory and storage, and in a couple of months, we will actually take that down to an hourly level so customers can see what they used on a more granular level.
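The hourly metering Deacon describes boils down to rolling raw usage samples up into per-hour line items. A minimal sketch, with hypothetical record fields and made-up rates, not Verizon's actual billing system:

```python
# Roll raw meter samples up into hourly line items a customer can inspect.
from collections import defaultdict
from datetime import datetime

RATES = {"compute": 0.08, "memory": 0.02, "storage": 0.01}  # $/unit-hour, made up

samples = [  # (timestamp, resource, units consumed during the sample)
    (datetime(2011, 5, 20, 9, 10), "compute", 2.0),
    (datetime(2011, 5, 20, 9, 40), "compute", 2.0),
    (datetime(2011, 5, 20, 9, 15), "storage", 500.0),
    (datetime(2011, 5, 20, 10, 5), "compute", 4.0),
]

# Aggregate into (hour, resource) buckets for hourly-granularity billing.
hourly = defaultdict(float)
for ts, resource, units in samples:
    hourly[(ts.replace(minute=0, second=0, microsecond=0), resource)] += units

for (hour, resource), units in sorted(hourly.items()):
    print(f"{hour:%Y-%m-%d %H:00}  {resource:8s} {units:8.1f} units  "
          f"${units * RATES[resource]:.2f}")
```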

Verizon closes with the point that it is a competitor of Google, Microsoft, and Amazon.

Rather than focus on one niche, we want to compete against the platform players like Microsoft and Google, as well as against the unmanaged infrastructure players like Amazon. And, we will layer platform applications on top of what we have so that enterprises have a full range of options, as we have a very diverse base of customers that have different needs at different phases of maturity.

Facebook and Google influencing Networking Technology faster than the Telco ecosystem can react

The landscape of Telco is changing. Fast. In 2007, Internet traffic looked like this.

[Image: distribution of Internet traffic sources, 2007]

In a Google paper, they said: hey, look at 2009.

[Image: distribution of Internet traffic sources, 2009, from the Google paper]

In 2011, Facebook and Google are both in the top 10 and growing faster than the rest.  Light Reading has an article on this trend.

OSA 2011: Optical World Faces Hipster Challenge

MARCH 8, 2011 | Craig Matsumoto

LOS ANGELES -- OSA Executive Forum 2011 -- The needs of content providers such as Facebook and Google (Nasdaq: GOOG) are starting to influence roadmaps in optical components and systems, and that's accentuating the differences between telcos and new network owners.

Google makes the case that the traditional Telecom cycle is too slow, which reminds me of the old way of product design.

Google's case was particularly spotlighted, partly because the company landed two panel spots (both filled by Senior Network Architect Bikash Koley, due to a colleague's illness). Koley made it clear that he thinks the traditional telecom cycles are too slow for Google and don't produce the right kinds of products anyway.

Google makes the point that the power consumption is too big. How many network guys/gals do you know who discuss the power consumption of the network? Networks have a big role in a green data center.

That's because standards too often aren't developed with the input of the ultimate buyers, he said. By the time products arrive, they're too big and/or power-hungry to suit the next-generation needs that the user is building for. Koley stepped through an example of how it all goes wrong, drawn not from Google but from his experience at a large equipment vendor. (He didn't specify which one, but his resume includes time at Ciena Corp. (Nasdaq: CIEN))

The users (Google and Facebook) are frustrated.

The problem stems from each vendor taking its cues from a neighboring step in the supply chain, rather than going to the source. The result, in Koley's example: A product that arrived years late. "Not talking to a user gives you the wrong time horizon," Koley said.

And, they are doing something about it.

One alternative is to bypass the standards bodies and develop a multisource agreement (MSA), a tactic that's worked for transceiver modules.

"Any time there's a void and there's enough need, especially if there's enough bandwidth that needs to be deployed, there's a vehicle" thanks to MSAs, said Donn Lee, a senior network engineer at Facebook. (Lee appeared on a panel separate from either of Koley's.)

Need evidence of how this might work? Koley pointed to the recently ratified 10x10 multisource agreement (MSA), which was created with input from a spectrum of companies including not just module suppliers, but also cable operators, telcos, Ethernet service providers and Google.

Here is the 10x10 MSA referred to above.

The 10x10 solution is designed to meet the needs of users who need to go beyond 100 meters but less than 2 km. Many data centers have link requirements beyond 100 meters but don't need to go much more than a few hundred meters. The 10 km solution is overkill for these applications; the 10x10 solution can meet the link requirements at less than half the cost of 100GBASE-LR4 and about 70% of the power (14W for 10x10 vs. 20W for 100GBASE-LR4). Since the 10x10 is CFP-compatible and can fit in the same port as 100GBASE-LR4, customers will see the benefits of the 10x10 over 100GBASE-LR4 for link distances over 100 meters but under 2 km.

Responding to the call by end-users and equipment manufacturers, the 10X10 MSA is established to deliver the industry’s lowest cost 100GbE solution over single-mode fiber.

The 10X10 MSA is defining a new price and performance trajectory for 100GbE that will significantly accelerate the adoption and economics of 100G deployments.

The 10X10 MSA group is backed by a robust ecosystem of end-users, system manufacturers and optical module suppliers including Google, Brocade, JDSU and Santur.
Member companies include Google, Brocade, MRV, Enablence, Cyoptics, JDSU, AFOP, Santur, Oplink, Hitachi Cable America, AMS-IX, EXFO, Huawei, Kotura, FaceBook, Effdon, Cortina Systems and BTI systems… Read More >>
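To see why the 10x10 positioning matters, here is a toy optic selector built from the figures above. The reach and power numbers for the 10x10 and 100GBASE-LR4 come from the post; the short-reach row and the relative costs are illustrative assumptions.

```python
# Toy 100GbE optic selector: pick the cheapest module whose reach covers the link.
OPTICS = [
    # (name, reach_m, power_w, relative_cost vs. LR4)
    ("100GBASE-SR10", 100, 12, 0.3),    # multimode short reach; illustrative row
    ("10x10 MSA", 2_000, 14, 0.5),      # ~2 km, 14W; "less than half the cost" of LR4
    ("100GBASE-LR4", 10_000, 20, 1.0),  # 10 km single-mode baseline, 20W
]

def pick_optic(link_m: int):
    """Cheapest optic whose reach covers the link, or None if none does."""
    candidates = [o for o in OPTICS if o[1] >= link_m]
    return min(candidates, key=lambda o: o[3]) if candidates else None

for link in (80, 400, 1_500, 9_000):
    name, reach, power, cost = pick_optic(link)
    print(f"{link:>6} m link -> {name} ({power} W, {cost:.1f}x LR4 cost)")
```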

Softbank Telecom joins VMware vCloud data center services group of 7

VMware has a press release on Softbank Telecom joining a group that includes Singapore-based telecoms operator SingTel and its Australian subsidiary Optus, plus Verizon and Colt, as well as Terremark, BlueLock and CSC.

SOFTBANK TELECOM Joins the VMware vCloud® Datacenter Services Program

Organizations in Japan will enjoy the business transformation benefits of true enterprise hybrid cloud computing

SAN FRANCISCO, Calif., – February 21, 2011 – VMware (NYSE: VMW), the global leader in virtualization and cloud infrastructure, today announced that major Japan-based telecommunications operator, SOFTBANK TELECOM Corp., has joined the VMware vCloud® Datacenter Services Program, which delivers globally consistent enterprise-class hybrid cloud computing infrastructure services.

VMware's strategy for migration between private and public clouds looks like it is working with the Telco sector.

Through the partnership, SOFTBANK TELECOM’s private and public sector customers will be able to move workloads seamlessly between their private clouds and SOFTBANK TELECOM’s public cloud. By offering a common infrastructure, VMware vCloud Datacenter Services place full and rapid control in the hands of IT departments, enabling them to monitor, manage and secure their applications across environments.

It looks like VMware is targeting those who find that public clouds like AWS and Rackspace don't fit the enterprise.

In contrast to the IT industry’s widely-held belief in a “one-cloud-fits-all” approach, VMware’s customer-centric philosophy is that a cloud environment must be tailored to the unique business needs of each enterprise, while enabling organizations to leverage investment in existing IT resources. In addition, migration to cloud should represent incremental and not wholesale change, while offering the flexibility to use on- and off-premise resources in a totally secure fashion.

An example of a feature that enterprises need that they don’t get from public clouds is security.

To help ensure cloud security, SOFTBANK TELECOM’s cloud infrastructure will be built on the secure VMware stack with VMware vShield, Layer 2 isolation, role-based access control and LDAP integration. In addition, the service will add its own auditable security for enterprises through the ISO 27001 security management framework and SAS 70 Type II security compliance.
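The vShield stack itself isn't something to reproduce here, but the role-based access control idea it bundles is simple to sketch. The roles, permissions and users below are hypothetical; in a real deployment, the user-to-role mapping would come from the LDAP integration the release mentions.

```python
# Minimal role-based access control: a permission is allowed if any of the
# user's roles grants it. Roles, permissions and users are hypothetical.
ROLE_PERMISSIONS = {
    "cloud_admin": {"vm.create", "vm.delete", "network.configure"},
    "app_owner": {"vm.create", "vm.restart"},
    "auditor": {"logs.read"},
}

# Hard-coded here for the sketch; normally sourced from LDAP group membership.
USER_ROLES = {
    "alice": {"cloud_admin"},
    "bob": {"app_owner", "auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """True if any of the user's roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert is_allowed("alice", "vm.delete")
assert is_allowed("bob", "logs.read")
assert not is_allowed("bob", "network.configure")
```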