Every year there is an international conference for High Performance Computing, or HPC as it is often called. This is a bit of a niche: it's something that many enterprises and researchers need but don't do themselves, so many don't have a grasp of all that is involved. It's a specialized, potentially expensive, and very different environment and mindset than the general sysadmin or network engineer will ever see. The compute power is rated on its own scale, and it is very competitive.
While this is a very interesting subject in and of itself, it's not really the most compelling part of this corner of the technology world. The interesting piece is the foundation needed to make all of this work. Compute power is obviously integral, and I'll never try to minimize its importance, but what happens when the huge data sets can't get to the compute resources? How do the big iron machines communicate?

You guessed it, it’s the network.

Over the course of my career I’ve done a lot of cool networking projects.  One of them, however, stands out far and away more than the others.

Building the network to support this conference is the real gem and the real challenge. A bit of history on this network, called SCinet, can be found on Wikipedia:

“SCinet is a high-performance network that is built, once a year, in support of the annual International Conference for High Performance Computing and Communications also known as the Supercomputing Conference. It is the primary network for the conference and is used by attendees to demonstrate and test high-performance distributed applications.

Originated in 1991 as an initiative within the SC conference to provide networking to attendees, SCinet has grown to become the “World’s Fastest Network” during the duration of the conference. Over the years, SCinet has been used as a platform to test networking technology and applications which have found their way into common use.

At SC|05, SCinet initiated a conference wide InfiniBand infrastructure, combining various IB hardware vendors utilizing OpenIB software.

In previous years, SCinet deployed conference wide networking technologies such as ATM, FDDI, HiPPi before they were deployed commercially.”

I've been involved since 2003; it made sense, since my employer was a major supercomputing center, after all.

Over the years I've participated in different capacities and roles: security, wireless, UNIX services, and routing. The team that builds this amazing network does it because they want to. It's a labor of love. It's also an amazing experience. This network, called "the world's fastest network" on more than one occasion, is built and torn down in less than a month. The network has more bandwidth than some small countries, and it's built specifically to support this conference and the big science that gets demonstrated. It's always built with heterogeneous equipment. It's an amazing interoperability experiment that has a who's who of big names.

This is interesting and worth mentioning because it is an amazing feat that many never even know happens. It is happening now. It will happen again next year. And it will be in a different location, with different gear and a different architecture. If that's not enough, it also has a research sandbox for all of your SDN needs.
