Is FC dead?!

SNIA Tech Center Computer Lab 2 switching hw (c) 2011 Silverton Consulting, Inc.

I was at the Pacific Crest/Mosaic annual conference cocktail hour last night, surrounded by a bunch of iSCSI/NAS storage vendors, and they made the statement that FC (Fibre Channel) is dead.

Apparently, 40GbE is just around the corner, and 10GbE cards have started a steep drop in price and are beginning to proliferate through the enterprise.  The vendors present felt that affordable 40GbE that does iSCSI and/or FCoE would be the death knell for FC as we know it.

As evidence they point to Brocade’s recent quarterly results, which show their storage business in decline, down 5-6% YoY for the quarter. In contrast, Brocade’s Ethernet business is up this quarter, 12-13% YoY (albeit from a low starting point).  Further confusing the picture, Brocade is starting to roll out 16Gbps FC (16GFC) while the storage market is still trying to digest the changeover to 8Gbps FC.

But do we need the bandwidth?

One question is whether we need 16GFC or even 40GbE in the enterprise today.  Most vendors point to server virtualization as a significant consumer of enterprise bandwidth.  But it’s unclear to me whether this is reality or just the next wave of technology needing to find a home.

Let’s consider for the moment what 16GFC and 40GbE can do for data transfer. If we assume ~10 bits on the wire per byte of data (to allow for encoding and protocol overhead), then:

  • 16GFC can provide 1.6GB/s of data transfer,
  • 40GbE can provide 4GB/s of data transfer.
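Here’s a minimal back-of-the-envelope sketch of that conversion, using the ~10 bits per byte assumption above (not an exact encoding calculation for either interface):

```python
# Rough usable bandwidth per link, assuming ~10 bits on the wire per byte
# of data to cover line encoding and protocol overhead (the approximation
# used above, not an exact figure for FC or Ethernet encoding).

def usable_bandwidth_gbps(line_rate_gbps, bits_per_byte=10):
    """Approximate data transfer rate in GB/s for a given link speed in Gbps."""
    return line_rate_gbps / bits_per_byte

print(usable_bandwidth_gbps(16))  # 16GFC -> ~1.6 GB/s
print(usable_bandwidth_gbps(40))  # 40GbE -> ~4.0 GB/s
```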

Using the Storage Performance Council’s SPC-2 results, the top data transfer subsystem (IBM DS8K) is rated at 9.7GB/s, so with 40GbE it could sustain that bandwidth over about 3 links, and with 16GFC it would need about 7 links.
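For anyone who wants to check the link math, here’s a rough sketch using the per-link estimates above:

```python
from math import ceil

def links_needed(subsystem_gbps, link_gbps):
    """Links required to sustain a subsystem's data transfer rate."""
    return ceil(subsystem_gbps / link_gbps)

# IBM DS8K SPC-2 rating of 9.7GB/s, against the per-link estimates above
print(links_needed(9.7, 4.0))  # 40GbE -> 3 links
print(links_needed(9.7, 1.6))  # 16GFC -> 7 links
```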

So there’s at least one storage system out there that can utilize the extreme bandwidth that such interfaces supply.

Now, as for the server side, nailing down the true need is a bit harder to do.  Using Amdahl’s IO law, which states there is 1 IO for every 50K instructions, and with Intel’s Core i7 Extreme Edition rated at 159 KMIPS, it should be generating about 3.2M IO/s, and at 4KB per IO this would be about 12GB/sec.  So the current crop of high-end processors seems able to consume this level of bandwidth, if present.
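A quick sketch of that server-side arithmetic, using the 1-IO-per-50K-instructions rule of thumb and the 4KB IO size assumed above:

```python
# Server-side bandwidth estimate: 1 IO per 50K instructions (Amdahl's IO law
# as stated above), a 159 KMIPS processor, and 4KB transferred per IO.

MIPS = 159_000               # Intel Core i7 Extreme Edition rating cited above
INSTRUCTIONS_PER_IO = 50_000
KB_PER_IO = 4

ios_per_sec = MIPS * 1_000_000 / INSTRUCTIONS_PER_IO  # ~3.2M IO/s
gb_per_sec = ios_per_sec * KB_PER_IO / 1_000_000      # ~12.7 GB/s

print(f"{ios_per_sec / 1e6:.1f}M IO/s, ~{gb_per_sec:.1f} GB/s")
```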

FC or Ethernet?

Now for the critical question: which interface does the data center use to provide that bandwidth?  The advantages of FC are diminishing over time as FCoE becomes more widely adopted, and any speed advantage FC had should go away with the introduction of data center 40GbE.

The other benefit that Ethernet offers is a “single data center backbone” that can handle all network and storage traffic.  Many large customers are almost salivating at the possibility of getting by with a single infrastructure for everything vs. having to purchase and support separate cabling, switches, and server cards to use FC.

On the other hand, having separate networks, segregated switching, and isolation between network and storage traffic can provide better security, availability, and reliability that are hard to duplicate with a single network.

To summarize, one would have to say that there are some substantive soft benefits to having both Ethernet and FC infrastructure, but there are hard cost and operational advantages to having a single infrastructure based on 10GbE or, hopefully someday, 40GbE.

—-

So I would have to conclude that FC’s days are numbered, especially once 40GbE becomes affordable and thereby widely adopted in the data center.

Comments?