
    OpenSolaris Green Home Server – low power and small

    Sun employee Constantin Gonzalez Schmitz has a post on his technical decisions for a green OpenSolaris home server. His requirements for ECC memory and power efficiency make sense for a reliable, low-power server.

    A Small and Energy-Efficient OpenSolaris Home Server

    In an earlier entry, I outlined my most important requirements for an optimal OpenSolaris Home Server. It should:

    1. Run OpenSolaris in order to fully leverage ZFS,
    2. Support ECC memory, so data is protected at all times,
    3. Be power-efficient, to help the environment and control costs,
    4. Use a moderate amount of space and be quiet, for some extra WAF points.

    He admits his wife works for AMD, but he justifies his choice of an AMD processor based on price, performance, and energy efficiency.

    Disclosure: My wife works for AMD, so I may be slightly biased. But I think the following points are still very valid.

    AMD on the other hand has a number of attractive points for the home server builder:

    • AMD consumer CPUs use the same microarchitecture as their professional CPUs (currently, it's the K10 design). They only vary by number of cores, cache size, number of HT channels, TDP and frequency, which are all results of the manufacturing process. All other microarchitecture features are the same. When using an AMD consumer CPU, you essentially get a "smaller brother" of their high-end CPUs.
    • This means you'll also get a built-in memory controller that supports ECC.
    • This also means fewer chips are needed to build a system (no Northbridge needed) and thus lower power consumption.
    • AMD has been using the HyperTransport interconnect for quite a while now. This is a fast, scalable interconnect technology that has been on the market for quite a while, so chipsets are widely available, proven and low-cost.

    So it was no surprise that even low-cost AMD motherboards at EUR 60 or below are perfectly capable of supporting ECC memory, which gives you an important server feature at an economical cost.

    My platform conclusion: Due to ECC support, low power consumption and good HyperTransport performance at low cost, AMD is an excellent platform for building a home server.

    To keep things small, he uses 2.5” drives.

    While looking for alternatives, I found a nice solution: The Scythe Slot Rafter fits into an unused PCI slot (taking up the breadth of two) and provides space for mounting four 2.5" disks at just EUR 5. These disks are cheap, good enough and I had an unused one lying around anyway, so that was a perfect solution for me.

    And, being concerned about reliability, he adds a second NIC.

    Extra NIC: The Asus M3A78-CM comes with a Realtek NIC and some people complained about driver issues with OpenSolaris. So I followed the advice on the aforementioned Email thread and bought an Intel NIC which is well supported, just in case.

    Constantin was able to achieve a 45W idle power consumption.

    The Result

    And now for the most important part: How much power does the system consume? I did some testing with one boot disk and 4GB of ECC RAM and measured about 45W idle. While stressing CPU cores, RAM and the disk with multiple instances of sysbench, I could not get the system to consume more than 80W. All in all, I'm very pleased with the numbers, which are about half of what my old system used to consume. I didn't do any detailed performance tests yet, but I can say that the system feels very responsive and compile runs just rush along the screen. CPU temperature won't go beyond the low 50Cs on a hot day, despite using the lowest fan speed, so cooling seems to work well, too.
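    The 45W idle figure translates into a modest annual energy bill. A quick sketch of the arithmetic (the EUR 0.20/kWh electricity price is my assumption, not a figure from the post):

```python
# Annual energy use and cost at a given steady power draw.
# The EUR 0.20/kWh rate is an assumed price, not a figure from the post.
def annual_energy(watts, eur_per_kwh=0.20):
    kwh_per_year = watts * 24 * 365 / 1000  # watts -> kWh over one year
    return kwh_per_year, kwh_per_year * eur_per_kwh

kwh, cost = annual_energy(45)           # the new server, idle
old_kwh, old_cost = annual_energy(90)   # roughly double, like the old system
print(f"new: {kwh:.0f} kWh/yr (~EUR {cost:.0f}), "
      f"old: {old_kwh:.0f} kWh/yr (~EUR {old_cost:.0f})")
# new: 394 kWh/yr (~EUR 79), old: 788 kWh/yr (~EUR 158)
```

    Halving the idle draw roughly halves the yearly cost, which matches Constantin's "about half of what my old system used to consume."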

    It will be interesting to see what follow up posts Constantin writes.



    Google’s Secret to efficient Data Center design – ability to predict performance

    DataCenterKnowledge has a post on Google's (NASDAQ:GOOG) vision of a future with 10 million servers.

    Google Envisions 10 Million Servers

    October 20th, 2009 : Rich Miller

    Google never says how many servers are running in its data centers. But a recent presentation by a Google engineer shows that the company is preparing to manage as many as 10 million servers in the future.

    Google’s Jeff Dean was one of the keynote speakers at an ACM workshop on large-scale computing systems, and discussed some of the technical details of the company’s mighty infrastructure, which is spread across dozens of data centers around the world.

    In his presentation (link via James Hamilton), Dean also discussed a new storage and computation system called Spanner, which will seek to automate management of Google services across multiple data centers. That includes automated allocation of resources across “entire fleets of machines.”

    Reading through Jeff Dean’s presentation, I found a Google secret.


    Designs, Lessons and Advice from Building Large
    Distributed Systems

    Designing Efficient Systems
    Given a basic problem definition, how do you choose the "best" solution?
    • Best could be simplest, highest performance, easiest to extend, etc.
    Important skill: ability to estimate performance of a system design
    – without actually having to build it!
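    Dean makes that estimation skill concrete in the talk with a worked example: rendering a page of 30 image thumbnails read off disk, comparing a serial design against issuing all the reads in parallel. A sketch of that arithmetic (the 10 ms seek and ~30 MB/s transfer rate are the approximate figures used in the slides):

```python
# Back-of-the-envelope estimate: render a page of 30 thumbnails,
# each a ~256 KB read from disk (10 ms seek, ~30 MB/s transfer).
SEEK_S = 0.010                 # one disk seek
READ_S = 256 * 1024 / 30e6     # one 256 KB transfer at 30 MB/s

serial = 30 * (SEEK_S + READ_S)  # design 1: one read after another
parallel = SEEK_S + READ_S       # design 2: all 30 reads issued at once

print(f"serial: {serial*1000:.0f} ms, parallel: {parallel*1000:.0f} ms")
# serial: 562 ms, parallel: 19 ms
```

    A ~30x difference between designs, found without building either one — which is exactly the skill the slide is advocating.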

    What is Google’s assumption about where computing is going?


    Thinking like an information factory: Google describes the machinery as servers, racks, and clusters.  This approach supports the idea of information production.  Google introduces the idea of the data center as a computer, but I find a more accurate analogy is to think of data centers as information factories.  The IT equipment is the machinery in the factory, consuming large amounts of electricity to power and cool the IT load.


    Located in data centers like the one in The Dalles, OR.


    With all that equipment, things must break.  And yes, they do.

    Reliability & Availability
    • Things will crash. Deal with it!
    – Assume you could start with super reliable servers (MTBF of 30 years)
    – Build computing system with 10 thousand of those
    – Watch one fail per day
    • Fault-tolerant software is inevitable
    • Typical yearly flakiness metrics
    – 1-5% of your disk drives will die
    – Servers will crash at least twice (2-4% failure rate)
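    The slide's arithmetic is easy to verify: even if every server had a 30-year MTBF, a fleet of 10,000 of them sees roughly one failure a day.

```python
# Dean's reliability arithmetic: per-machine MTBF of 30 years still
# means near-daily failures across a 10,000-machine fleet.
servers = 10_000
mtbf_days = 30 * 365
failures_per_day = servers / mtbf_days
print(f"{failures_per_day:.2f} failures/day")  # 0.91 failures/day

# The "typical yearly flakiness" numbers scale the same way:
dead_disks = (0.01 * servers, 0.05 * servers)  # 1-5% -> 100-500 drives/year
```

    At that rate, hardware failure is a steady-state operating condition, not an exception — hence "fault-tolerant software is inevitable."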

    The Joys of Real Hardware
    Typical first year for a new cluster:
    ~0.5 overheating (power down most machines in <5 mins, ~1-2 days to recover)
    ~1 PDU failure (~500-1000 machines suddenly disappear, ~6 hours to come back)
    ~1 rack-move (plenty of warning, ~500-1000 machines powered down, ~6 hours)
    ~1 network rewiring (rolling ~5% of machines down over 2-day span)
    ~20 rack failures (40-80 machines instantly disappear, 1-6 hours to get back)
    ~5 racks go wonky (40-80 machines see 50% packetloss)
    ~8 network maintenances (4 might cause ~30-minute random connectivity losses)
    ~12 router reloads (takes out DNS and external vips for a couple minutes)
    ~3 router failures (have to immediately pull traffic for an hour)
    ~dozens of minor 30-second blips for dns
    ~1000 individual machine failures
    ~thousands of hard drive failures
    slow disks, bad memory, misconfigured machines, flaky machines, etc.
    Long distance links: wild dogs, sharks, dead horses, drunken hunters, etc.


    Monitoring is how you know your estimates are correct.

    Add Sufficient Monitoring/Status/Debugging Hooks
    All our servers:
    • Export HTML-based status pages for easy diagnosis
    • Export a collection of key-value pairs via a standard interface
    – monitoring systems periodically collect this from running servers
    • RPC subsystem collects sample of all requests, all error requests, all
    requests >0.0s, >0.05s, >0.1s, >0.5s, >1s, etc.
    • Support low-overhead online profiling
    – cpu profiling
    – memory profiling
    – lock contention profiling
    If your system is slow or misbehaving, can you figure out why?
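    The slide describes two export styles per server: machine-readable key-value pairs for the monitoring system, and an HTML status page for humans. A minimal sketch of the pattern (illustrative only; this is not Google's actual interface, and the counter names are made up):

```python
# Illustrative sketch of the two export styles -- not Google's actual
# monitoring interface; counter names are hypothetical.
counters = {"requests_total": 1042, "errors_total": 3, "latency_p99_ms": 87}

def export_kv(metrics):
    """Key-value lines a monitoring collector could scrape periodically."""
    return "\n".join(f"{k} {v}" for k, v in sorted(metrics.items()))

def export_status_html(metrics):
    """A human-readable status page for quick diagnosis in a browser."""
    rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>"
                   for k, v in sorted(metrics.items()))
    return f"<html><body><h1>Server status</h1><table>{rows}</table></body></html>"

print(export_kv(counters))
```

    The same counters feed both views, so operators and automated collectors always agree on what the server is reporting.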

    Many people have quoted the idea that “you can’t manage what you don’t measure.”  But a more advanced concept that Google discusses is: “If you don’t know what’s going on, you can’t do decent back-of-the-envelope calculations!”

    Know Your Basic Building Blocks
    Core language libraries, basic data structures,
    protocol buffers, GFS, BigTable,
    indexing systems, MySQL, MapReduce, …
    Not just their interfaces, but understand their
    implementations (at least at a high level)
    If you don’t know what’s going on, you can’t do
    decent back-of-the-envelope calculations!

    These ideas come from a software architect, but they apply just as much to data center design.  And the benefit Google has is that all of its IT and development staff think this way.


    And here is another secret to great design: say no to features.  But the data center design industry wants to get you to say yes to everything, because saying yes makes the data center building more expensive, increasing their profits.


    So what is the big design problem Google is working on?


    Jeff Dean did a great job of putting a lot of good ideas in his presentation, and it was nice Google let him present some secrets we could all learn from.



    CA about to launch EcoSoftware for Green IT

    CNET News reports that CA will be releasing an EcoSoftware solution.

    CA jumps into eco-software market

    by Larry Dignan

    CA next week will unveil an integrated sustainability suite designed to track carbon emissions, environmental assessments, metering, and compliance to policies in one dashboard.

    CA calls the suite EcoSoftware and will launch it Monday, according to Christopher Thomas, vice president of energy and sustainability. I ran into Thomas at the Gartner IT Symposium, where the carbon-monitoring software caught my eye.

    There are other efforts designed to track carbon emissions. For instance, Hara and SAP have various applications and others use metering to measure sustainability efforts.

    Read more of "CA jumps into eco software market; Plans to launch carbon tracking suite" at ZDNet's Between the Lines.

    I have written in the past that it was natural for management tool vendors – Tivoli, OpenView, and CA – to add Green IT management, so this is no surprise.

    We’ll get more details next week as the launch is scheduled for Oct 26.



    Watching a person read my blog – Gartner employee

    I use TypePad for my blog, and I go through the site statistics to see what people are reading, checking which Google search words they used to find my entries.  This gives me an idea of what people are searching for and who my readers are.  The following, I am pretty sure, is a Gartner employee.

    This morning at 9:30 PST I wrote a blog post on Gartner’s recommendation for Pattern-Based Strategy.  At 2:45 and 2:48 pm I got the following hits.


    The first was a Google search for “gartner advanced analytics.”  My entry from five hours earlier is result #6, beating out NetworkWorld.


    The second was a Google search for “reshaping the data center gartner”.  I have the #1 search result, behind Google News, but beating Gartner’s blog.


    I was visiting a friend who works at Google yesterday, and we chatted briefly about how well my blog works with Google search, but I swear I have no insider information.

    All I know is keep on writing, and keep on looking at my results.

    Thanks for reading my blog.



    Gartner says companies must implement a Pattern-Based Strategy

    In my day job, I help clients be innovative leaders, constantly looking for what it takes to be better than the rest. Gartner has recently announced a new initiative called Pattern-Based Strategy.

    It is a pleasant surprise to have nine Gartner analysts arrive at a recommendation I’ve been using for over five years in IT infrastructure.

    Introducing Pattern-Based Strategy

    7 August 2009

    Yvonne Genovese, Valentin T. Sribar, Stephen Prentice, Betsy Burton, Tom Austin, Nigel Rayner, Jamie Popkin, Michael Smith, David Newman

    The environment after the recession means business leaders must be more proactive in seeking patterns from conventional and unconventional sources that can positively or negatively impact strategy or operations, and set up a consistent and repeatable response by adjusting business patterns.

    One of the best groups I worked with at Microsoft, and one where I still have many friends, is the patterns & practices group, and I still have regular discussions about how data centers and IT could and should be using a patterns-based approach.

    You’ve probably guessed from the first half of our name that we’re rather enthusiastic about design patterns.  Design patterns describe solutions to common issues that occur in application design and development. A large part of what we do involves identifying these common issues and figuring out solutions to them that can be used across different applications or scenarios. Once we have the patterns, we typically package them up in what we call an application block.

    Software people have been some of the early adopters of patterns, but the history of patterns comes from Christopher Alexander, a building architect.

    A pattern must explain why a particular situation causes problems, and why the proposed solution is considered a good one. Christopher Alexander describes common design problems as arising from "conflicting forces" -- such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern.

    A pattern must also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time." The context must be documented within the pattern.

    For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both part of the context for the pattern "A PLACE TO WAIT."

    I’ve spent most of my career working on the Mac OS/hardware and Windows OS/hardware.  The use of patterns seemed like a natural thing to do, but it is not intuitive for the people who deploy IT infrastructure.  With Gartner’s Pattern-Based Strategy, my persuasion challenge is dramatically decreased.

    So, what is good about Gartner’s Pattern-Based announcement?  Their first two paragraphs identify the need well.

    Gartner Says Companies Must Implement a Pattern-Based Strategy™ to Increase Their Competitive Advantage

    Analysts Discuss the Framework for Implementing a Pattern-Based Strategy During Gartner Symposium/ITxpo, October 18-22, in Orlando

    STAMFORD, Conn., October 8, 2009 —

    The economic environment rapidly emerging from the recession will force business leaders to look at their opportunities for growth, competitive differentiation, and cost controls in a new way. A Pattern-Based Strategy will help leaders harness and drive change, rather than simply react to it, according to Gartner, Inc.

    A Pattern-Based Strategy provides a framework to proactively seek, model and adapt to leading indicators, often-termed "weak" signals that form patterns in the marketplace. Not only will leading organizations excel at identifying new patterns and exploiting them for competitive advantage, but their own innovation will create new patterns of change within the marketplace that will force others to react.

    They identify the need for closed loop feedback systems to measure the effectiveness of change.


    Most business strategy approaches have long emphasized the need to seek better information and insights to inform strategic decisions and the need for scenario planning and robust organizational change management. Few have connected this activity directly to the execution of successful business outcomes. According to Gartner, successful organizations can achieve this by establishing the following disciplines and proactively using technology to enable each of these activities:

    For the same reason I added modeling and social networking to the list of things I discuss and blog about, Gartner explains.

    Modeling for pattern analysis — Once new patterns are detected or created, business and IT leaders must use collaborative processes, such as scenario planning, to discuss the potential significance, impact and timing of them on the organization's strategy and business operations. The purpose of modeling is to determine which patterns represent great potential or risk to the organization by qualifying and quantifying the impact.

    "Successful organizations will focus their pattern-seeking activities on areas that are most important to their organization," said Ms. Genovese. "Using models to do scenario planning will be critical to fact-based decisions and the transparency of the result."

    I have my black belt in Aikido, and one of the most important things I learned about getting better is that you must develop the ability to change.  Gartner adds this as well.

    Adapting to capture the benefits — Identifying a pattern of change and qualifying the potential impact are meaningless without the ability to adapt and execute to a successful business outcome. Business and IT leaders must adapt strategy, operations and their people's behaviors decisively to capture the benefits of new patterns with a consistent and repeatable response that is focused on results.

    Clients – I told you that taking a modeling-based approach to discovering patterns with real-time monitoring systems would allow you to be ahead of the competition.  And what better proof than Gartner now promoting the same ideas.  :-)
