
    Been super busy; I have a large queue of things I want to write about, but I'm short on time.

    Tomorrow is DatacenterDynamics Seattle, where I'll be presenting.  I have been researching a bunch of cool ideas but haven't had the time to write.  Next week I'll be going to the Intel Developer Forum to see the data center presentations.

    Sorry for not posting; I hope to find some time soon.

    Thanks for reading.


    Being more efficient in the data center by changing how you organize the hardware

    So much of IT is governed by self-optimizing behavior within different fiefdoms.  Security, web, data, apps, mobile, marketing, finance, and manufacturing groups most of the time own their own gear.  I had a chance to talk to Gigaom's Jonathan Vanian about how data center clusters (sometimes called cores or pods) are used to organize IT resources efficiently versus a departmental approach.  This approach has been used for so long that it's well proven.

    Here is Jonathan’s post.

    Want a more efficient data center? Maybe it’s time you tried a core and pod setup


    AUG. 27, 2014 - 5:00 AM PDT



    In order for companies to improve their internal data centers’ efficiency and improve their applications’ performance, many are turning to using a “core and pod” setup. In this type of arrangement, data center operators figure out the best configuration of common data center gear and software to suit their applications’ needs.

    The fundamental idea is to let system architects define systems as resources for departments.

    “You need to have some group that is looking at the bigger picture,” Ohara said. “A group that looks at what is acquired and makes it work all together.”

    If more companies used this approach, they would be more efficient, and many times they could get better price/performance, making them competitive with public clouds.

    Jonathan wrote about my perspective.

    How pod and core configurations boost performance

    Google, Facebook and eBay are all examples of major tech companies that have been using the pod and core setup, said Dave Ohara, a Gigaom Research analyst and founder of GreenM3. With massive data centers that need to serve millions of users on a daily basis, it’s important for these companies to have the ability to easily scale their data centers with user demand.

    Using software connected to the customized hardware, companies can now program their gear to take on specific tasks that the gear was not originally manufactured for, like analyzing data with Hadoop, ensuring that resources are optimized for the job at hand and not wasted, Ohara said.

    It used to be that the different departments within a company — such as the business unit or the web services unit — directed the purchases of rack gear as opposed to a centralized data center team that can manage the entirety of a company’s infrastructure.

    Because each department may have had different requirements from each other, the data center ended up being much more bloated than it should have been and resulted in what Ohara referred to as “stranded compute and storage all over the place.”

    And Jonathan was able to talk with Redapt, a systems integrator that has built many clusters/pods/cores.

    Working with companies to build their data centers

    At Redapt, a company that helps organizations configure and build out their data centers, the emergence of the pod and core setup has come out of the technical challenges companies like those in the telecommunications industry face when having to expand their data centers, said Senior Vice President of Cloud Solutions Jeff Dickey.

    By having the basic ingredients of a pod figured out per your organization’s needs, it’s just a matter of hooking together more pods in case you need to scale out, and now you aren’t stuck with an excess amount of equipment you don’t need, explained Dickey.

    Redapt consults with customers on what they are trying to achieve in their data center and handles the legwork involved, such as ordering equipment from hardware vendors and setting up the gear into customized racks. Dickey said that Redapt typically uses an eight-rack-pod configuration to power up application workloads of a similar nature (like multiple data-processing tasks).
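    The pod idea described above — settle on one proven configuration, then scale out by adding whole pods rather than one-off departmental gear — can be sketched in a few lines. This is a minimal illustration; the rack count echoes Redapt's eight-rack pod, but the server density and workload names are assumptions, not figures from the article.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Pod:
        """A repeatable unit of data center capacity (numbers are illustrative)."""
        racks: int = 8               # Redapt's typical eight-rack pod
        servers_per_rack: int = 40   # assumed density; varies by design
        workload: str = "data-processing"

        @property
        def servers(self) -> int:
            return self.racks * self.servers_per_rack

    def scale_out(pods: list[Pod], demand_servers: int) -> list[Pod]:
        """Add whole pods until capacity meets demand, instead of
        buying one-off gear per department."""
        while sum(p.servers for p in pods) < demand_servers:
            pods.append(Pod())  # clone the proven configuration
        return pods

    pods = scale_out([Pod()], demand_servers=1000)
    print(len(pods), sum(p.servers for p in pods))  # 4 pods, 1280 servers total
    ```

    The point of the model is that capacity grows in known, pre-validated increments, which is what keeps a growing data center from accumulating the excess equipment Dickey describes.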


    Ending Summer with An Awesome Event, 1st time Skurfing

    It’s been a great summer here in Seattle/Redmond.  One kid is back in school and another goes back next week.

    What better way to end the summer season (which technically isn't over until Sept 21, though for kids summer ends when they go back to school) than with something memorable.


    Mainframes, Mini's, Servers, and Mobiles Oh My, Journey through Computer Development

    Taking a journey through the technology world can be scary to many, causing users to panic.  In the beginning, users couldn’t get to mainframes or minis.  Millions of servers exposed many more users to the scary side of technology.  Mobile is now in the hands of billions.

    Dileep Bhandarkar gave a presentation at the Computer History Museum on Aug 21, 2014.  Dileep’s slide deck is here.


    As a product developer you may think you don’t need to be scared.  Well, there were wars going on, fights for survival.  Below, Dileep covered the RISC vs. CISC wars.


    And Intel won the battle with the RISC killer chip.


    One of the more peaceful battles was the fight for energy efficiency.  The path of clock rate was consuming too many watts.


    Dileep covered the Data Center as a computer where the battle was/is between Microsoft, Google, and Amazon.


    One overall theme in Dileep’s talk is how volume was behind so many of the wins.  And Mobile has the highest volumes now.



    The Inefficient Data Centers Are Fading vs. the Rapid Growth of the Efficient - Google, AWS, Facebook, Microsoft

    The NRDC fired off a blog post this morning that got a bunch of other media to discuss the waste in the data center industry.

    Our study shows that many small, mid-sized, corporate and multi-tenant data centers still waste much of the energy they use. One of the key issues is that many of the roughly 12 million U.S. servers operate most of the time doing little or no work but still drawing power – up to 30 percent of servers are “comatose” and no longer needed but still drawing significant amounts of power, many others are grossly underutilized. However, opportunities abound to reduce energy waste in the data center industry as a whole. The technology exists, but systemic measures are needed to remove the barriers limiting its broad adoption across the industry. 


    Over the last 5 years, the data center growth of Google, Amazon, Facebook, Microsoft and many others has been at a pace that blows away the rest.  The old dominant financials have grown slowly or even declined, with the exception of the analytics groups that have built huge data farms.

    Even though the NRDC raises concerns about the waste, the reality is the cloud is helping to put many of these old servers out of business.  But the top issue is that IT asset management is too often poorly executed, making it difficult to identify the servers that can be turned off.

    Gigaom Research analyst Dave Ohara said the report brings up valid points, but more factors need to be considered. CPU utilization is just one metric, Ohara said via email. “RAM and hard-disks also use up energy … and can be just as underutilized …  The problem is that IT asset management is mostly done as a bookkeeping exercise, not as part of a technical IT operations team who purchases, owns and operates the servers.”

    Too many times the people who operate the servers can’t find the history of who owns the assets and what is on them.
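    The point about looking beyond CPU utilization can be made concrete. The sketch below flags "comatose" candidates only when a server is idle on CPU, RAM, and disk and has no identifiable owner in the asset records — the situation described above. All field names, thresholds, and inventory data here are hypothetical, not from any real asset-management tool.

    ```python
    # Hypothetical inventory records; field names and numbers are illustrative.
    servers = [
        {"name": "web-01", "cpu_util": 0.45, "ram_util": 0.60, "disk_util": 0.50, "owner": "web"},
        {"name": "app-17", "cpu_util": 0.02, "ram_util": 0.05, "disk_util": 0.10, "owner": None},
        {"name": "db-09",  "cpu_util": 0.01, "ram_util": 0.03, "disk_util": 0.02, "owner": None},
    ]

    def is_comatose(s, threshold=0.05):
        """A server is a decommissioning candidate only if it is idle on
        every resource (not just CPU) and has no recorded owner."""
        idle = all(s[k] < threshold for k in ("cpu_util", "ram_util", "disk_util"))
        return idle and s["owner"] is None

    candidates = [s["name"] for s in servers if is_comatose(s)]
    print(candidates)  # ['db-09']
    ```

    Note that `app-17` is not flagged even though it looks idle: its RAM use sits right at the threshold, which is exactly why single-metric rules miss the real picture — and why the ownership record matters before anything gets powered off.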
