Twitter follows Facebook, both are looking for Server HW Supply Chain Managers

I posted about Facebook looking for a Manager of supply chain operations for server hardware.  The job posting is still open.

But, there is a bit of competition as Twitter also has a job posting open for a Technical Supplier Negotiator (Datacenter Hardware).

About This Job
Do you like to build data center infrastructure? Hardware Operations is looking for an experienced and self-motivated person to help build out Twitter’s growing data center compute power. We’re looking for someone to lead the supply purchasing processes. You should be comfortable working at both the tactical and strategic levels.
Responsibilities

  • Lead the purchasing strategy and execution for hardware
  • Own the total supplier relationship for Twitter’s supply base  
  • Lead all supplier negotiations and execute supplier contract agreements
  • Drive continuous supplier improvement performance programs

If you were qualified for this job would you rather work at Facebook or Twitter? 

One of the biggest problems with taking the job at Facebook or Twitter is that you would most likely be the only person doing it, with no peers to work with.  There will be support people and internal customers, but logistics expertise is not common in Internet companies.

Why is this an issue?  In my early days I was a distribution logistics engineer at HP, but the only one in the division.  The others I could discuss ideas with were in other divisions, not my own.  Compare this to being a distribution logistics engineer at UPS or FedEx, where dozens of people share that job role.  For the same reason groups can out-innovate the most experienced individual, there are risks to being the only person working in a role.

This role should possibly be outsourced in addition to an internal hire.  Where?  A company that could help out a supply chain manager is Redapt.  I think it is time to go have beers with the Redapt folks at Black Raven Brewing and discuss this idea.

Someone asked, who is Redapt?  Here are quotes from some of their customers.

Zynga

From start to finish, Redapt made our project a success by introducing innovative technologies and delivering fully integrated racks ahead of schedule.

Mark Stockford, VP of Production Operations, Zynga

Online "E"tailer

Redapt addresses our needs and finds the best solution to our problem. They have proven to be a true partner who really cares about our needs and expectations. We look forward to partnering with them in the future.

Client, National Sporting Goods Retailer

IBM

Redapt is fearless in finding new ways to identify, work and close complex opportunities.  I am always impressed with their commitment to educating their team on leading edge technology, competition and what's new and hot in the industry.

Leslie Johnston, IBM

Who will ship the first Thunderbolt Server? For now use a MacBook Pro as a server to test performance

10 Gb Ethernet is expensive due to low volumes.  Fibre Channel is lower cost, but still not high volume and not cheap enough for mass deployments.  Now that Apple and Intel have announced Thunderbolt, 10 Gbps I/O connections will be low cost.
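For rough context, here is a back-of-the-envelope comparison of usable data rates for the links mentioned above. This is a sketch, not a spec: the line rates and encoding efficiencies are the commonly published figures circa 2011, and Thunderbolt's 10 Gbps per channel is taken at face value from the announcement.

```python
# Rough comparison of usable (post-encoding) data rates for the
# interconnects discussed above. Figures are illustrative assumptions.
links = {
    # name: (line rate in Gbaud, encoding efficiency)
    "1 GbE":            (1.25,    0.80),    # 8b/10b encoding
    "8G Fibre Channel": (8.5,     0.80),    # 8b/10b encoding
    "10 GbE":           (10.3125, 64 / 66), # 64b/66b encoding
    "Thunderbolt":      (10.0,    1.0),     # 10 Gbps/channel as announced
}

for name, (gbaud, eff) in links.items():
    print(f"{name:>18}: ~{gbaud * eff:.1f} Gbps usable")
```

The point is simply that a Thunderbolt channel lands in the same performance class as 10 GbE, but rides on consumer-volume silicon.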

Why not use Thunderbolt for SAN and network connectivity?  Look at the difference between these two designs.

Figure 1 illustrates a typical topology of building out a server cluster today, in which, while the form factors may change, the basic configuration follows a similar pattern. Given the widespread availability of open-source software and off-the-shelf hardware, companies have successfully built large topologies for their internal cloud infrastructure using this architecture.

Figure 1: Typical Data Center I/O interconnect

Figure 2 illustrates a server cluster built using a native PCIe fabric. As is evident, the usage of numerous adapters and controllers is significantly reduced and this results in a tremendous reduction in power and cost of the overall platform, while delivering better performance in terms of lower latency and higher throughput.

Figure 2: PCI Express-based Server Cluster

We'll see which server vendor is first with Thunderbolt support.  For now, some innovative users could use a bunch of MacBook Pros.

Apple/Intel Thunderbolt enables low-cost 10 Gbps connections

Apple and Intel are announcing Thunderbolt support in Apple's new notebooks.

Intel's Light Peak event, Thursday 10 a.m. PT (live blog)

by Josh Lowensohn

A photo of Intel's Light Peak technology (Credit: Intel)

Intel today is revealing some of the final details of its Light Peak technology as it makes its way into the first wave of consumer and business gadgetry.

Now officially known as Thunderbolt, the data transfer and high-definition PC connection runs at 10 gigabits per second and "can transfer a full-length HD movie in less than 30 seconds," Intel announced this morning.
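Intel's movie claim is easy to sanity-check. A minimal sketch, assuming a Blu-ray-class HD film of about 25 GB (Intel did not state a file size):

```python
# Sanity check on "a full-length HD movie in less than 30 seconds"
# over a 10 Gbps link. The 25 GB movie size is an assumption.
link_gbps = 10                      # Thunderbolt per-channel rate
movie_gb = 25                       # assumed HD movie size, in gigabytes
seconds = movie_gb * 8 / link_gbps  # gigabytes -> gigabits, then divide
print(f"{movie_gb} GB over {link_gbps} Gbps: {seconds:.0f} s")
```

At the ideal line rate that is 20 seconds, comfortably inside Intel's 30-second figure; real transfers would run slower once protocol overhead and storage speed enter the picture.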

Part of Thunderbolt is PCI Express.

Here is a white paper on PCI Express.  Part of the idea behind PCI Express is adding a switch to the I/O design.

Given Apple's and Intel's announcement, PCI Express interconnects will get cheaper.  See this HPCwire article for how a PCI Express interconnect could be used in high-performance clusters.

January 24, 2011

A Case for PCI Express as a High-Performance Cluster Interconnect

Vijay Meduri, PLX Technology


In high-performance computing (HPC), there are a number of significant benefits to simplifying the processor interconnect in rack- and chassis-based servers by designing in PCI Express (PCIe). The PCI-SIG, the group responsible for the conventional PCI and the much-higher-performance PCIe standards, has released three generations of PCIe specifications over the last eight years and is fully expected to continue this progression in the future with even newer generations, from which HPC systems will continue to see newer features, faster data throughput and improved reliability.

The latest PCIe specification, Gen 3, runs at 8Gbps per serial lane, enabling a 48-lane switch to handle a whopping 96 GBytes/sec. of full duplex peer to peer traffic. Due to the widespread usage of PCI and PCIe in computing, communications and industrial applications, this interconnect technology's ecosystem is widely deployed and its cost efficiencies as a fabric are enormous. The PCIe interconnect, in each of its generations, offers a clean, high-performance interconnect with low-latency and substantial savings in terms of cost and power. The savings are due to its ability to eliminate multiple layers of expensive switches and bridges that previously were needed to blend various standards. This article explains the key features of a PCIe fabric that now make clusters, expansion boxes and shared-I/O applications relatively easy to develop and deploy.
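The 96 GBytes/sec figure works out from the raw line rate (8 Gbps per lane × 48 lanes × 2 directions ÷ 8 bits per byte); a quick sketch that also accounts for Gen 3's 128b/130b line encoding:

```python
# Reproducing the 48-lane PCIe Gen 3 switch bandwidth quoted above.
gt_per_s = 8.0          # Gen 3 transfer rate per lane, GT/s
lanes = 48
encoding = 128 / 130    # Gen 3 uses 128b/130b line encoding

raw = gt_per_s * lanes * 2 / 8   # GB/s, both directions, before overhead
usable = raw * encoding          # GB/s after encoding overhead
print(f"raw: {raw:.0f} GB/s, usable: ~{usable:.1f} GB/s")
```

The article's 96 GBytes/sec matches the raw figure; payload throughput is slightly lower (about 94.5 GB/s) once the encoding overhead is counted, which is still a striking number for a single switch chip.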

Here is the idea to use PCI Express for cloud infrastructure: replace the many adapters and controllers of the typical data center I/O interconnect shown in Figure 1 with the native PCIe fabric shown in Figure 2, both quoted earlier from the same article.