    Tuesday
    Oct 14, 2014

    Are you in an incompatible relationship with your boss? Fixed vs. growth mindsets

    I’ve enjoyed reading Carol Dweck’s work on mindsets and found this article that explains the fixed vs. growth mindset.  This graphic (courtesy of Brain Pickings) illustrates the main differences between a fixed mindset and a growth mindset:

    [Graphic: fixed vs. growth mindset]

    I read the section on relationships and substituted “employee” where the original said “mate.”

     People harboring a fixed mindset held on to the belief that their ideal employee would set them on a pedestal and make them feel perfect.

    Do you have a boss like the quote above?  Or is your boss more like this?

    those possessing a growth mindset would opt for an employee that would point out their flaws and help them improve as a person.

    The mistake I made was working for growth-mindset bosses for so long that I didn’t recognize when I was working for a fixed-mindset boss, one who cared only about being put on a pedestal and made to feel perfect.  That’s why the brown-nosers and ass-kissers were getting rewarded.  Now I get it: those bosses had fixed mindsets.

    The problem is that in IT and data centers, I think there are many more people with a fixed mindset than a growth mindset.  Do you see this battle going on in your organization?

    For people with a growth mindset, personal success occurs when they work as hard as they can to be their best, whereas for those with a fixed mindset, success is all about building their superiority over others. For the former, setbacks are motivating and informative input they can use to become better. For the latter, they’re a label and a sentence.

    Tuesday
    Oct 14, 2014

    GE IT obsoletes the low-level IT role, goes all-in for Public Cloud

    InfoWorld has an interview with GE’s COO of IT, Chris Drumgoole.  Bottom line: GE is shutting down its own data centers and infrastructure of servers, networking, and storage, transitioning to the public cloud.  Below is the closing of the article, where people are told to go up the stack or else.

    InfoWorld: The obvious cultural question that everyone asks about moving toward the cloud is the effect on morale. Is my job being outsourced? Am I going to be a victim? How have you dealt with that?

    Drumgoole: It’s a good question. I get asked it any time I speak, especially to our own employees. It’s going to sound like a canned answer, but in our case, it’s true: With our growth rate and the shift that we’re making to software in all of our businesses, there’s no shortage of opportunity to do things up the stack.

    The way I answer that question when my own people ask is that the world is your oyster if you’re willing to make the cultural shift. We’ll gladly teach you [to work on things higher up in the stack] -- we want to invest in you. If you want to make that jump as an individual and you can challenge the status quo and be part of that, we have thousands of openings for you to go do stuff.

    If you’re not willing to make that shift, then yes, you’re going to have to look at yourself in the mirror and have hard conversations around what your career looks like in IT going forward. We’re lucky enough to be so big and of such scale that we can put the choice on the people and say: It’s on you.

    Part of GE’s effort is to break down silos and change how IT provides a service.  In the past, IT organizations were a monopoly with complete control over how services were delivered.  GE keeps a role in public cloud efforts by providing security, regulatory compliance, data privacy, and other things that business units tend to overlook.

    InfoWorld: Another risk factor when you go to public cloud services is reinventing the siloed organization. Different companies give different levels of freedom to individual business units to go out and get their own cloud service. How do you avoid creating silos?

    Drumgoole: To the point I made earlier, we really view ourselves to be a service provider to our businesses, so our businesses can buy from us or they can buy from others. The best way to think about it is if you’re my oil and gas division you can come to me, as corporate IT, and buy Amazon in order to deploy your applications or you can go to Amazon directly or you can go to Azure directly.

    The way we enforce that is we say: OK, if you want to come through me, by definition, you’re going to live and operate in this safe environment. I have already taken care of the things that GE holds dear and our requirements around regulation, security, data privacy and so on. I pre-built and pre-instrumented the environment so that those things are not something you have to worry about. That’s the benefit of coming to me.

    If you decide to go on your own, you certainly can. We’re never going to stop you, but understand that now those things are on you and you have to take care of them. I’ll tell you, in practice ... we’ve made that a losing proposition. That’s where scale comes into play. If we ask what it’s going to cost a business unit to go it alone, we truly are cheaper. So no one ever ends up making that decision, ever. We kind of let the market power enforce that as opposed to trying to put a process in place.
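    To make the “pre-built, pre-instrumented environment” idea concrete, here is a minimal sketch of a provisioning wrapper that enforces corporate guardrails before a business unit gets an environment.  This is my own illustration, not GE’s tooling; the policy names, checks, and functions are hypothetical.

    # Hypothetical sketch of a corporate-IT provisioning wrapper that bakes in
    # guardrails (regions, encryption, tagging) before a business unit gets a
    # cloud environment. Illustrative only -- not GE's actual tooling.

    APPROVED_REGIONS = {"us-east-1", "eu-west-1"}      # assumed policy
    REQUIRED_TAGS = {"business_unit", "data_class", "owner"}

    class PolicyViolation(Exception):
        pass

    def validate_request(request: dict) -> None:
        """Reject requests that don't meet the pre-built compliance baseline."""
        if request["region"] not in APPROVED_REGIONS:
            raise PolicyViolation(f"region {request['region']} not approved")
        missing = REQUIRED_TAGS - request.get("tags", {}).keys()
        if missing:
            raise PolicyViolation(f"missing required tags: {sorted(missing)}")
        if not request.get("encrypt_at_rest", False):
            raise PolicyViolation("encryption at rest is mandatory")

    def provision(request: dict) -> dict:
        """Validate, then hand back a pre-instrumented environment description."""
        validate_request(request)
        # A real system would call the cloud provider's API here; this sketch
        # just returns the environment with corporate controls pre-attached.
        return {**request, "logging": "central-audit", "network": "vetted-vpc"}

    if __name__ == "__main__":
        env = provision({
            "region": "us-east-1",
            "tags": {"business_unit": "oil-and-gas", "data_class": "internal",
                     "owner": "app-team"},
            "encrypt_at_rest": True,
        })
        print(env)

    The point of the pattern is the default: the paved road is cheaper and safer than going it alone, so the market does the enforcing.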

    Monday
    Oct 13, 2014

    What content gets presented at analyst-hosted events

    Here is an observation.  What characterizes the content presented at a conference run by an analyst company?  Out of all the ideas that could be discussed and presented at a conference, you could create two groupings: the ideas the analysts think they can make money on, and the ideas they don’t make money on.  Oh, and there is a third group: ideas that not only don’t make money for the analyst company, they disrupt its business model.

    What gets presented at the conference?  Yep, the ideas they make money on.  I have way more fun discussing the disruptive ideas and figuring out how to make money with them.  Don’t analysts present disruptions?  Yes, after they have figured out how they will make money from the disruption.

    Sunday
    Oct 12, 2014

    An OS that scares the Linux vendors: CoreOS, designed for a modern data center

    Being an old-time OS guy, I once made the observation: “I think people would pay money to just have the drivers and kernel of the OS updated and leave the new features as options.”

    A buddy told me to check out CoreOS.  Why?  Because it has the security, service discovery, clustering, and updating features that companies like AWS haven’t made a priority.  I was surprised at Gigaom Structure when AWS’s Werner Vogels said that security was something developers need to handle when building their apps.  Google’s Urs Hoelzle said Google thinks there are things they can do to make building secure services easier.

    CoreOS makes security its #1 priority, along with many other things a modern data center group wants.

    CoreOS is a server OS built from the ground up for the modern datacenter. CoreOS provides tools and guidance to ensure your platform is secure, reliable, and stays up to date.

    Small Footprint

    CoreOS utilizes 40% less RAM than typical Linux server installations. We provide a minimal, stable base for you to build your applications or platform on.

    Reliable, Fast Patching and Updates

    CoreOS machines are patched and updated frequently with system patches and new features.

    Built for Scale

    CoreOS is designed for very large scale deployments. PXE boot and diskless configurations are fully supported.

    InfoWorld posted on how CoreOS is a threat to the Linux vendors.

    Indeed, by changing the very definition of the Linux distribution, CoreOS is an "existential threat" to Red Hat, Canonical, and Suse, according to some suggestions. The question for Red Hat in particular will be whether it can embrace this new way of delivering Linux while keeping its revenue model alive.

    ...

    When I pressed him on what he meant by that last sentence, he elaborated:

    CoreOS is the first cloud-native OS to emerge. It is lightweight, disposable, and tries to embed devops practices in its architecture. RHEL has always been about adding value by adding more. CoreOS creates value by giving you less [see the cattle vs. pets analogy]. If the enterprise trend is toward webscale IT, then CoreOS will become more popular with ops too.

    Project Atomic is a competitor of CoreOS.  You can probably look for more choices built on the idea of an OS service that just keeps the OS updated.  Updated with what?  Bug fixes, performance improvements, and better security.  That’s worth a lot.
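    To give a taste of the service discovery piece mentioned above, here is a minimal sketch of registering and looking up a service through etcd, the distributed key-value store that ships with CoreOS, using its v2 HTTP keys API.  The endpoint, key names, and TTL are illustrative assumptions, and a real deployment would handle errors and keep refreshing the registration.

    # Minimal sketch of service registration/discovery against etcd's v2 HTTP
    # keys API (etcd ships with CoreOS). Host, port, and key names are
    # illustrative; real code would retry and periodically refresh the TTL.
    import json
    import urllib.parse
    import urllib.request

    ETCD = "http://127.0.0.1:2379"   # assumed local etcd endpoint

    def register(service: str, address: str, ttl: int = 60) -> None:
        """Publish this instance under /services/<service> with a TTL, so the
        entry disappears if the instance stops refreshing it."""
        body = urllib.parse.urlencode({"value": address, "ttl": ttl}).encode()
        req = urllib.request.Request(
            f"{ETCD}/v2/keys/services/{service}", data=body, method="PUT")
        req.add_header("Content-Type", "application/x-www-form-urlencoded")
        urllib.request.urlopen(req)

    def discover(service: str) -> str:
        """Look up the currently registered address for a service."""
        with urllib.request.urlopen(f"{ETCD}/v2/keys/services/{service}") as resp:
            return json.loads(resp.read())["node"]["value"]

    if __name__ == "__main__":
        register("web", "10.0.0.5:8080")
        print(discover("web"))       # -> 10.0.0.5:8080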

    Sunday
    Oct 12, 2014

    Two Ways to Save Server Power - Google (Tune to Latency) vs. Facebook (Efficient Load Balancing)

    Saving energy in the data center is more than a low PUE.  Using 100% renewable power while wasting energy is not a good practice.  I’ve been meaning to post on what Google and Facebook have done in this area, and have been staring at the open browser tabs for a while.

    First, in June 2014 Google shared its method of turning a server’s power consumption down as low as possible while still meeting its latency targets.  The Register covered this method.

    Google has worked out how to save as much as 20 percent of its data-center electricity bill by reaching deep into the guts of its infrastructure and fiddling with the feverish silicon brains of its chips.

    In a paper to be presented next week at the ISCA 2014 computer architecture conference entitled "Towards Energy Proportionality for Large-Scale Latency-Critical Workloads", researchers from Google and Stanford University discuss an experimental system named "PEGASUS" that may save Google vast sums of money by helping it cut its electricity consumption.


    The Google paper is here.

    We presented PEGASUS, a feedback-based controller that implements iso-latency power management policy for large-scale, latency-critical workloads: it adjusts the power-performance settings of servers in a fine-grain manner so that the overall workload barely meets its latency constraints for user queries at any load. We demonstrated PEGASUS on a Google search cluster. We showed that it preserves SLO latency guarantees and can achieve significant power savings during periods of low or medium utilization (20% to 40% savings). We also established that overall workload latency is a better control signal for power management compared to CPU utilization. Overall, iso-latency provides a significant step forward towards the goal of energy proportionality for one of the challenging classes of large-scale, low-latency workloads.
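    The iso-latency idea is easy to see as a control loop.  Here is a toy Python sketch of the technique as I read it from the abstract, not Google’s code: measure tail latency, lower a per-server power cap while there is headroom below the SLO, and back off when the SLO is threatened.  measure_p99_latency_ms() and set_power_cap_watts() are hypothetical stand-ins for real telemetry and an actuator such as RAPL power capping.

    # Toy sketch of an iso-latency power controller in the spirit of PEGASUS:
    # keep tail latency just under the SLO by adjusting a server power cap.
    # The measurement and actuation functions below are hypothetical stand-ins.
    import random
    import time

    SLO_MS = 100.0                    # latency target for user queries
    GUARD_BAND = 0.9                  # act only when below 90% of the SLO
    MIN_CAP, MAX_CAP = 60.0, 150.0    # watts, illustrative limits
    STEP = 5.0                        # watts adjusted per control cycle

    def measure_p99_latency_ms() -> float:
        """Stand-in for cluster telemetry; replace with real measurements."""
        return random.uniform(60.0, 120.0)

    def set_power_cap_watts(cap: float) -> None:
        """Stand-in for a power actuator (e.g., RAPL power capping)."""
        print(f"power cap -> {cap:.0f} W")

    def control_loop(cycles: int = 10) -> None:
        cap = MAX_CAP
        for _ in range(cycles):
            p99 = measure_p99_latency_ms()
            if p99 > SLO_MS:
                cap = MAX_CAP                   # SLO violated: back off fully
            elif p99 < GUARD_BAND * SLO_MS:
                cap = max(MIN_CAP, cap - STEP)  # headroom: save power
            set_power_cap_watts(cap)            # otherwise hold the cap
            time.sleep(0.1)                     # control period (shortened)

    if __name__ == "__main__":
        control_loop()

    The key design point the paper calls out is the control signal: workload latency tracks what users actually experience, while CPU utilization alone can leave power savings on the table.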

    Second, in August 2014 Facebook shared Autoscale, its method of using load balancing to reduce energy consumption.  Gigaom covered this idea.

    The social networking giant found that when its web servers are idle and not taking user requests, they don’t need that much compute to function, thus they only require a relatively low amount of power. As the servers handle more networking traffic, they need to use more CPU resources, which means they also need to consume more energy.

    Interestingly, Facebook found that during relatively quiet periods like midnight, while the servers consumed more energy than they would when left idle, the amount of wattage needed to keep them running was pretty close to what they need when processing a medium amount of traffic during busier hours. This means that it’s actually more efficient for Facebook to have its servers either inactive or running like they would during busier times; the servers just need to have network traffic streamed to them in such a way so that some can be left idle while the others are running at medium capacity.
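    To see why packing wins, take some illustrative numbers (mine, not Facebook’s): suppose a server draws 60 W idle, 90 W at low load, and 100 W at medium load.  Spreading a light workload across two servers at low load costs 180 W; packing it onto one server at medium load and leaving the other idle costs 160 W.  The flatter the power curve between low and medium load, the bigger the savings from consolidation.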

    Facebook’s post on Autoscale is here.

    Overall architecture

    In each frontend cluster, Facebook uses custom load balancers to distribute workload to a pool of web servers. Following the implementation of Autoscale, the load balancer now uses an active, or “virtual,” pool of servers, which is essentially a subset of the physical server pool. Autoscale is designed to dynamically adjust the active pool size such that each active server will get at least medium-level CPU utilization regardless of the overall workload level. The servers that aren’t in the active pool don’t receive traffic.

    Figure 1: Overall structure of Autoscale

    We formulate this as a feedback loop control problem, as shown in Figure 1. The control loop starts with collecting utilization information (CPU, request queue, etc.) from all active servers. Based on this data, the Autoscale controller makes a decision on the optimal active pool size and passes the decision to our load balancers. The load balancers then distribute the workload evenly among the active servers. It repeats this process for the next control cycle.
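    A compressed sketch of that control loop might look like this (my own illustration of the described idea, not Facebook’s code): size the active pool so that each active server sits at or above a medium utilization target, and route traffic only to that pool.  The 50% target and the helper names are assumptions.

    # Toy sketch of an Autoscale-style active-pool controller: choose the
    # smallest pool of servers that keeps each active server at or above a
    # medium CPU-utilization target, then route traffic only to that pool.
    # The 50% target and the traffic model are illustrative assumptions.
    import math

    TARGET_UTIL = 0.50   # desired per-server utilization for active servers

    def choose_active_pool_size(total_util: float, pool_size: int) -> int:
        """total_util is the workload expressed in 'whole servers' of CPU
        (e.g., 3.2 means the load would fully busy 3.2 servers)."""
        needed = math.ceil(total_util / TARGET_UTIL)
        return max(1, min(pool_size, needed))

    def control_cycle(per_server_util: list[float]) -> list[int]:
        """One cycle: measure, size the active pool, return active indices."""
        total = sum(per_server_util)
        active = choose_active_pool_size(total, len(per_server_util))
        return list(range(active))   # load balancer sends traffic only here

    if __name__ == "__main__":
        # 10 servers, each lightly loaded at ~12% CPU -> pack onto 3 servers.
        utils = [0.12] * 10
        print(control_cycle(utils))   # -> [0, 1, 2]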