Greening Security Services in a Data Center by Virtualization

Found this article on echannelline.com, which is one of the first I've seen discussing Green IT and security in the same article.

Security platform provider Crossbeam Systems has waded into the Green IT discussion by releasing a white paper outlining steps that CIOs can take to achieve more efficient data centers by reducing the energy consumption of security-related equipment through the adoption of virtualization.

"Security has been a particularly egregious contributor to increasing data center energy consumption," said Peter Doggart, Crossbeam's Director of Product Marketing and the author of the white paper. "The good news is that the advances in security virtualization that Crossbeam has pioneered are helping customers consolidate their security-related hardware on the order of 20-to-one. The dramatic savings, ease of management and reduction of energy benefits to be realized resonate strongly among IT organizations."

The white paper is written in a somewhat advertorial format, but it has some interesting parts, such as the argument that a green data center can be achieved by consolidating network security services into fewer devices, given the device sprawl that has taken hold in the data center.

In the traditional, non-virtualized environment, companies address their security issues by deploying special-purpose appliances built to run a host of security applications, from firewalls and content gateways to IDS devices and URL filters. Connecting this array of appliances is an excess of additional switching equipment, patch cabling, and load balancers. In this environment, network security has been in favor of the security vendors, with their response to each new threat being, “Have I got a box for you, and by the way, you are going to need a lot of them.”

The good news is there are numerous innovative companies focusing on a particular security threat area. That focus is a big plus for customers. The downside is that these focused companies typically require that another box be added in order to deploy their solution. The requirement for redundancy and ever-increasing traffic demands accelerate growth in the number of appliances deployed. This phenomenon is known as “appliance sprawl.”

Appliance sprawl yields extraordinarily complex data center architectures, leading to wasted space, growing energy usage, and difficulty in fault diagnosis. Moreover, because these devices require connections to Layer 2 and 3 network switches plus load balancers, and have limited networking and application processing power, they essentially become embedded, single-purpose elements in the network. This means that when the security services need to be expanded or upgraded, so does the network – an expensive and inefficient use of IT and security resources.
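The 20-to-1 consolidation figure Crossbeam cites is easy to turn into rough energy numbers. Below is a minimal back-of-the-envelope sketch; the per-appliance wattage, the consolidated platform's draw, the cooling overhead, and the electricity rate are all my own illustrative assumptions, not figures from the white paper.

    # Back-of-the-envelope savings from consolidating security appliances.
    # All inputs below are illustrative assumptions, not Crossbeam figures.
    APPLIANCES_BEFORE = 20       # boxes replaced, per the 20-to-1 claim
    WATTS_PER_APPLIANCE = 300    # assumed draw of a 1U security appliance
    WATTS_CONSOLIDATED = 1200    # assumed draw of one consolidated platform
    COOLING_OVERHEAD = 2.0       # assumed facility watts per IT watt (PUE-like)
    DOLLARS_PER_KWH = 0.10       # assumed blended electricity rate
    HOURS_PER_YEAR = 24 * 365

    def annual_kwh(it_watts):
        """Facility energy (kWh/year) for a given IT load, cooling included."""
        return it_watts * COOLING_OVERHEAD * HOURS_PER_YEAR / 1000.0

    before = annual_kwh(APPLIANCES_BEFORE * WATTS_PER_APPLIANCE)
    after = annual_kwh(WATTS_CONSOLIDATED)
    saved = before - after

    print(f"Before: {before:,.0f} kWh/yr   After: {after:,.0f} kWh/yr")
    print(f"Saved:  {saved:,.0f} kWh/yr (~${saved * DOLLARS_PER_KWH:,.0f}/yr "
          f"per 20-box cluster)")

With these assumptions, a single 20-to-1 consolidation works out to tens of thousands of kilowatt-hours a year once cooling is counted, which is why the sprawl argument resonates.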

This article did serve another purpose: it reminded me to contact Guy Brunsdon, giving us an excuse to talk technical stuff while our wives watch the kids. Guy and I get along even though he is a Nikon user and I am a Canon dSLR user.

Technical Marketing Director, VMware

Guy Brunsdon is responsible for Technical Marketing of Networking for VMware Infrastructure at VMware. Prior to VMware he spent eight years at Cisco Systems in Australia and the United States in a variety of technical marketing and product marketing roles. Most recently he was responsible for product marketing of the Catalyst product lines in High Performance Computing. Prior to Cisco, Guy was Chief Network Architect for the Telstra enterprise network in Australia. Guy currently resides in Bellevue, WA.

Here is a blog post about one of Guy’s presentations.

Today I attended Guy Brunsdon's talk on best practices for configuring networking.
He started with the basics of the vSwitch, explaining that it behaves like a physical Layer 2 switch (so no Layer 3 routing).
He stressed the importance of NIC teaming to achieve:

  • better use of bandwidth
  • enhanced availability and performance

Another important feature is VLAN tagging (which requires 802.1Q-capable hardware), especially when the virtual infrastructure is deployed on blade systems with a limited number of ports.
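To make the teaming point a little more concrete, here is a small conceptual sketch of how a vSwitch-style NIC team can spread virtual machines across its uplinks and keep them connected when a link fails. The port-ID-based selection below is a simplified illustration of the idea, not VMware's actual implementation.

    # Conceptual sketch of uplink selection in a vSwitch-style NIC team.
    # Each virtual port (VM vNIC) is pinned to one physical uplink, so load
    # spreads across the team and survives a link failure. This illustrates
    # the idea only; it is not VMware's implementation.
    class NicTeam:
        def __init__(self, uplinks):
            self.uplinks = list(uplinks)      # e.g. ["vmnic0", "vmnic1"]
            self.failed = set()

        def active_uplinks(self):
            return [u for u in self.uplinks if u not in self.failed]

        def uplink_for_port(self, port_id):
            """Pick an uplink for a virtual port (simple modulo selection)."""
            active = self.active_uplinks()
            if not active:
                raise RuntimeError("no uplinks available")
            return active[port_id % len(active)]

        def fail(self, uplink):
            """Simulate a link failure; ports re-pin to surviving uplinks."""
            self.failed.add(uplink)

    team = NicTeam(["vmnic0", "vmnic1"])
    print([team.uplink_for_port(p) for p in range(4)])  # VMs spread over both
    team.fail("vmnic0")
    print([team.uplink_for_port(p) for p in range(4)])  # everything on vmnic1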

Greening a network while meeting security SLAs requires a focus on efficiency and cost effectiveness, and I know Guy would be a great resource on this.


Amazon Web Services provides resizable compute capacity

The AWS blog posted an entry announcing that they have added two new "High-CPU" instance types.

Amazon EC2 users now have access to a pair of new "High-CPU" instance types. The new instance types have proportionally more CPU power than memory, and are suitable for CPU-intensive applications. Here's what's now available:

The High-CPU Medium Instance is billed at $0.20 (20 cents) per hour. It features 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), and 350 GB of instance storage, all on a 32-bit platform.

The High-CPU Extra Large Instance is billed at $0.80 (80 cents) per hour. It features 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), and 1,690 GB of instance storage, all on a 64-bit platform.
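A quick way to see what "proportionally more CPU" means is to normalize the price by EC2 Compute Units. The short sketch below uses the prices and ECU counts quoted above; the standard Small instance line (1 ECU at $0.10 per hour, API name m1.small) is Amazon's list pricing at the time and is included only for comparison.

    # Cost per EC2 Compute Unit hour for the instance types discussed above.
    # High-CPU figures come from the AWS announcement quoted in this post;
    # the standard Small instance (1 ECU, $0.10/hr) is Amazon's then-current
    # list price, shown for comparison.
    instance_types = {
        "m1.small  (standard Small)":  {"price": 0.10, "ecus": 1},
        "c1.medium (High-CPU Medium)": {"price": 0.20, "ecus": 5},
        "c1.xlarge (High-CPU XL)":     {"price": 0.80, "ecus": 20},
    }

    for name, spec in instance_types.items():
        per_ecu = spec["price"] / spec["ecus"]
        print(f"{name:30s} ${spec['price']:.2f}/hr  "
              f"{spec['ecus']:2d} ECU  ${per_ecu:.3f} per ECU-hour")

By that measure the new types deliver CPU at $0.04 per ECU-hour versus $0.10 for the standard Small instance, which is exactly the point of calling them "High-CPU."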

Behind the scenes, Amazon uses the Xen hypervisor for virtualization.

Amazon Elastic Compute Cloud, also known as "EC2", is a commercial web service which allows paying customers to rent computers to run computer applications on. EC2 allows scalable deployment of applications by providing a web services interface through which customers can request an arbitrary number of Virtual Machines, i.e. server instances, on which they can load any software of their choice. Current users are able to create, launch, and terminate server instances on demand, hence the term "elastic". The Amazon implementation allows server instances to be created in zones that are insulated from correlated failures.[1] EC2 is one of several Web Services provided by Amazon.com under the blanket term Amazon Web Services (AWS).

EC2 uses Xen virtualization. Each virtual machine, called an instance, is a virtual private server and can be one of three sizes: small, large, or extra large. Instances are sized in EC2 Compute Units, which express the equivalent CPU capacity of physical hardware.

One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. The three standard instance sizes are as follows:

Small Instance

The small instance (default) is the "equivalent of a system with 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform." [1]

Large Instance

The large instance is the "equivalent of a system with 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform"

Extra Large Instance

The extra large instance is the "equivalent of a system with 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform."
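The "elastic" part is that starting and stopping one of these instances is just a couple of API calls. Here is a minimal sketch using the boto Python library; the AMI ID and credentials are placeholders you would replace with your own.

    # Minimal sketch: launch and terminate an EC2 instance with boto.
    # The AMI ID and credentials below are placeholders, not real values.
    import boto

    conn = boto.connect_ec2(aws_access_key_id="YOUR_ACCESS_KEY",
                            aws_secret_access_key="YOUR_SECRET_KEY")

    # Request one High-CPU Medium instance from a (placeholder) machine image.
    reservation = conn.run_instances("ami-00000000", instance_type="c1.medium")
    instance = reservation.instances[0]
    print("launched", instance.id, instance.state)   # typically 'pending'

    # ...run the CPU-bound workload, then stop paying for the capacity:
    conn.terminate_instances([instance.id])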

Wouldn't it be great if enterprise IT were run this way? Amazon is figuring out how to sell compute better than anyone else, and selling is, after all, its core business as a retailer.


What is Live Mesh? Ray Ozzie's Live Mesh Services Strategy Document

There has been lots of news on Microsoft's Live Mesh. A good read to understand the strategy for Live Mesh is Ray Ozzie's Services Strategy Update. Here are a few excerpts:

At the back-end, developers will need to contend with new programming models in the cloud. Whether running on an enterprise grid, or within the true utility computing environment of cloud-based infrastructure, the way a developer will write code, deploy it, debug it, and maintain it will be transformed. The cloud-based environment consists of vast arrays of commodity computers, with storage and the programs themselves being spread across those arrays for scale and redundancy, and loose coupling between the tiers. Independent developers and enterprises alike will move from “scale up” to “scale out” back-end design patterns, embracing this model for its cost, resiliency, flexible capacity, and geo-distribution.

CONNECTED BUSINESS – We will extend the benefits of high-scale cloud-based infrastructure and services to enterprises, in a way that gives them choice and flexibility in intermixing on-premises deployment, partner hosting, or cloud-based service delivery. Businesses large and small will benefit from services that make it easy to dynamically connect and collaborate with partners and customers, using the web to enable a business mesh. Business customers of all sizes will benefit from web-based business services. This vision is being realized today through the likes of Office Live Small Business. For enterprises, our new Microsoft Online Services provide managed, service-based infrastructure through offerings including SharePoint, Exchange, OCS, and Dynamics CRM. Our enterprise solution platform extends to the cloud through SQL Server Data Services, BizTalk Services, and many more services to come. At the lowest level within the enterprise data center, we've begun to deliver on our utility computing vision, with Windows Server 2008 and Hyper-V, and through our System Center products including Virtual Machine Manager.

Microsoft's container strategy makes sense as a way for Microsoft to deploy and scale infrastructure at a rate higher than the rest of the industry. Ray has pushed Microsoft's Mike Manos to build innovative data centers that go beyond what the rest of the industry is doing. Talk about executive support for data centers. Mike is trying to apply Moore's Law to data centers, as he needs to in order to support Ray Ozzie's strategy.


Oracle 11g Goes Green, a database?

ChannelWeb writes about Oracle 11g being a "green solution."

Mention "green solution" and databases aren't usually what jump to mind first. Yet Oracle (NSDQ:ORCL), the industry's market-share leading enterprise database vendor, has developed the latest 11g database as a more streamlined, greener product with new features, such as advanced data compression and Partition Advisor.

Though Redwood Shores, Calif.-based Oracle officially launched 11g last summer, the Test Center opted to examine the database in light of news from archrival Microsoft that its SQL Server 2008 would be delayed by several months.

Ah, a way for Oracle to generate some news to compete against Microsoft's SQL Server 2008.

Where is the Green?

Test Center reviewers also took a look at the advanced data compression capabilities, and here's where the "green" comes in: Compression in 11g goes across all databases -- not just production -- which decreases the level of real estate needed in the data center. Compressing large volumes of data can significantly decrease the need for additional disk space.

In 11g, unstructured documents can be compressed with binary compression and structured documents are compressed by way of duplicate data values. There is less overhead with data compression because of an improved algorithm. Compression is up to four times greater than in previous versions.
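To connect the compression claim back to the green angle, here is a rough sketch of what a 4x table-compression ratio could mean in disk and power terms. The 4x ratio is the article's number; the starting footprint, the watts per terabyte, and the cooling overhead are my own assumptions.

    # Rough storage and energy impact of compressing database volumes.
    # The 4x ratio comes from the review above; everything else is assumed.
    RAW_TB = 40.0            # assumed database footprint before compression
    COMPRESSION_RATIO = 4.0  # "up to four times" per the 11g review
    WATTS_PER_TB = 12.0      # assumed spinning-disk watts per usable TB
    COOLING_OVERHEAD = 2.0   # assumed facility watts per IT watt

    compressed_tb = RAW_TB / COMPRESSION_RATIO
    tb_avoided = RAW_TB - compressed_tb
    watts_avoided = tb_avoided * WATTS_PER_TB * COOLING_OVERHEAD
    kwh_per_year = watts_avoided * 24 * 365 / 1000.0

    print(f"Footprint: {RAW_TB:.0f} TB -> {compressed_tb:.0f} TB "
          f"({tb_avoided:.0f} TB of disk avoided)")
    print(f"~{watts_avoided:.0f} W of load avoided, "
          f"~{kwh_per_year:,.0f} kWh per year")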

Looking a little further at Oracle's web site turned up a Green Data Center article, part 1 and part 2.

A central plank of green IT is server consolidation. According to OnStor's statistics, fifty-five per cent of respondents stated that storage consolidation would be a central element of their green policy, while an even more upbeat Gartner survey found that 92 per cent of respondents had a data centre consolidation planned, in progress, or completed.

Storage consolidation is really important, although equally essential to reducing energy consumption is ensuring companies have streamlined their applications and data. Duplicated data and applications are a major problem in many organizations, and they cause a range of operational inefficiencies, including demand for more storage space. Most companies know that at the data and applications levels they are far from efficient, but the problem has been that the risk, cost and time to consolidate applications has put them off. Celona recently conducted a survey amongst telecoms executives and 59% said they'd been so discouraged by an application migration that they decided not to go ahead with it. The new generation of migration technology overcomes these problems, making the long-awaited benefits of application consolidation a reality.

Many vendors have cottoned on to the fact that there is a sea change in the air, and this is not the oceanic smell of green altruism – there is a distinct whiff of hard business reality about it. "Environmental sentiment is all well and good, and it helps that environmental issues currently enjoy a high media profile, but few companies have the financial freedom to go green overnight" says Simon Sherrington. "They simply can't justify decommissioning equipment unless there is a clear cost benefit in terms of saved opex, or unless the kit is becoming obsolete anyway. That is why companies with comparatively high energy costs, and companies in markets with high rates of technology obsolescence, have been swifter out of the blocks than peers in other industry sectors."


Green Lab in the Cloud: Skytap provides virtual lab as a service

One of the areas I evangelize as the place to start Green efforts in IT and the data center is a company's performance and test labs. Even though many companies want to sell into data center operations, they do not see the opportunities in the labs. Labs are power constrained and underutilized just like data centers, but there is usually only one lab manager you need to sell to get your energy-saving technology in place. Also, there are many more labs in a company than data centers, so it is much easier to make inroads. Once in place, the technology can spread virally.

To make it even easier to green a development/test environment, a company called Skytap has announced Virtual Lab as a Service:

SEATTLE – April 10, 2008 – Skytap, the leading provider of cloud-based virtualization solutions, today revealed its vision for the next generation of offerings that harness the power of virtualized infrastructure. The company has announced limited availability of its first product offering, Skytap Virtual Lab, a virtual lab automation solution available as an on-demand service over the Web. The company, formerly known as illumita, has also changed its name to Skytap.

“Skytap provides customers with cloud-based services that enable them to capitalize on the wave of virtualization technology sweeping the industry,” said Scott Roza, chief executive officer of Skytap. “Cloud computing is gaining traction because a growing percentage of companies are demanding solutions that deliver value quickly, scale with business need, and don’t have the risk of an in-house implementation. Skytap’s Virtual Lab, which combines cloud-based virtualized infrastructure with an industry leading lab automation application, has tremendous potential to improve the timely delivery of quality applications to the business while increasing lab efficiency and lowering cost.”

Skytap’s first product, Skytap Virtual Lab, is a ground-breaking virtual lab solution available as a service over the Web. It enables application development and test teams to provision lab infrastructure on demand (including servers, software, networking and storage) and utilize a powerful virtual lab management application to automate the set-up, testing and tear down of complex, multi-tiered environments. It also gives distributed teams the capability to collaborate and rapidly resolve software defects using a virtual lab and virtual project environment.

The Industry Standard has an article that points out how this service is a good match for smaller shops.

Skytap's offering provides users with a hosted platform for setting up and managing virtual labs. The concept of centralizing and automating virtual labs is not new. Companies such as Surgient have had offerings in this space for years.

But that vendor is "going after the higher enterprise play," while Skytap's SaaS offering might be able to gain inroads with smaller shops, said Theresa Lanowitz, an analyst with Voke.

Initially, Skytap's offering is more likely to appeal to customers with an existing setup, said company CEO Scott Roza. "You usually don't get to a customer who has no labs. It's usually someone who has labs, but [the setup is] too small for the requirements," he said. "In that case, rather than replace their labs lock, stock and barrel, we can augment them."

Surgient is another alternative and has been in business since 2003.

Virtualization leader VMware has its similar Lab Manager, launched in November 2006, and another startup, VMLogix, has offered its LabManager since July 2007. But Surgient was first in the field, having been founded in 2003 to pursue creating infrastructure snapshots as the ideal test lab for new software. By testing prospective applications in their future production environments, developers learn more about their dependencies and performance in what will be their real-world setting.
