Is FC dead?!

SNIA Tech Center Computer Lab 2 switching hw (c) 2011 Silverton Consulting, Inc.

I was at the Pacific Crest/Mosaic annual conference cocktail hour last night, surrounded by a bunch of iSCSI/NAS storage vendors, and they made the statement that FC is dead.

Apparently, 40GbE is just around the corner and 10GbE cards have started a steep drop in price and are beginning to proliferate through the enterprise.  The vendors present felt that an affordable 40GbE that does iSCSI and/or FCoE would be the death knell for FC as we know it.

As evidence they point to Brocade’s recent quarterly results, which show their storage business in decline, down 5-6% YoY for the quarter. In contrast, Brocade’s Ethernet business is up 12-13% YoY this quarter (albeit from a low starting point).  Further confusing the picture, Brocade is starting to roll out 16Gbps FC (16GFC) while the storage market is still trying to digest the changeover to 8Gbps FC.

But do we need the bandwidth?

One question is whether we need 16GFC or even 40GbE in the enterprise today.  Most vendors point to server virtualization as a significant consumer of enterprise bandwidth.  But it’s unclear to me whether this is reality or just the next wave of technology needing to find a home.

Let’s consider for the moment what 16GFC and 40GbE can do for data transfer. If we assume ~10 bits per byte then:

  • 16GFC can provide 1.6GB/s of data transfer,
  • 40GbE can provide 4GB/s of data transfer.

Using the Storage Performance Council’s SPC-2 results, the top data transfer subsystem (IBM DS8K) is rated at 9.7GB/s. To sustain that bandwidth it would need about 3 links of 40GbE, or about 7 links of 16GFC.
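As a quick sanity check, here’s a minimal sketch of that arithmetic in Python. The ~10-bits-per-byte encoding assumption and the 9.7GB/s SPC-2 figure are the ones used above; the link counts just round up to whole links.

```python
import math

def usable_gb_per_sec(line_rate_gbps, bits_per_byte=10):
    """Convert a raw line rate in Gb/s to usable GB/s of data transfer,
    assuming ~10 bits per byte on the wire (encoding overhead)."""
    return line_rate_gbps / bits_per_byte

SPC2_TOP_GBPS = 9.7  # IBM DS8K, top SPC-2 data transfer result cited above

for name, line_rate in [("16GFC", 16), ("40GbE", 40)]:
    per_link = usable_gb_per_sec(line_rate)
    links = math.ceil(SPC2_TOP_GBPS / per_link)
    print(f"{name}: {per_link:.1f} GB/s per link, ~{links} links for {SPC2_TOP_GBPS} GB/s")

# 16GFC: 1.6 GB/s per link, ~7 links for 9.7 GB/s
# 40GbE: 4.0 GB/s per link, ~3 links for 9.7 GB/s
```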

So there’s at least one storage system out there that can utilize the extreme bandwidth such interfaces supply.

Now as for the server side, nailing down the true need is a bit harder.  Using Amdahl’s IO law, which states there is 1 IO for every 50K instructions, Intel’s Core i7 Extreme edition, rated at 159 KMIPS, should be generating about 3.2M IO/s, and at 4KB per IO this works out to about 12-13GB/sec.  So the current crop of high-end processors seems able to consume this level of bandwidth, if it’s available.
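Here’s a sketch of that server-side estimate as well. All the inputs (the 1-IO-per-50K-instructions rule, the 159 KMIPS rating, the 4KB IO size) come from the paragraph above; none are measured values.

```python
# Rough server-side bandwidth demand per Amdahl's IO law.
mips = 159_000                 # Core i7 Extreme edition, ~159 KMIPS
instructions_per_io = 50_000   # Amdahl's IO law: 1 IO per 50K instructions
io_size_bytes = 4 * 1024       # 4KB per IO

ios_per_sec = mips * 1_000_000 / instructions_per_io  # ~3.2M IO/s
gb_per_sec = ios_per_sec * io_size_bytes / 1e9        # ~13 GB/s decimal (~12 GiB/s)

print(f"{ios_per_sec / 1e6:.1f}M IO/s -> {gb_per_sec:.1f} GB/s")
```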

FC or Ethernet?

Now for the critical question: which interface does the data center use to provide that bandwidth?  The advantages of FC are becoming less pronounced over time as FCoE becomes more widely adopted, and any speed advantage FC had should go away with the introduction of data center 40GbE.

The other benefit that Ethernet offers is a “single data center backbone” that can handle all network/storage traffic.  Many large customers are almost salivating at the possibility of getting by with a single infrastructure for everything versus having to purchase and support separate cabling, switches, and server cards for FC.

On the other hand, separate networks, segregated switching, and isolation between network and storage traffic can provide a level of security, availability, and reliability that is hard to duplicate with a single network.

To summarize, one would have to say there are some substantive soft benefits to maintaining both Ethernet and FC infrastructure, but there are hard cost and operational advantages to a single infrastructure based on 10GbE or, hopefully someday, 40GbE.

—-

So I would have to conclude that FC’s days are numbered, especially once 40GbE becomes affordable and thereby widely adopted in the data center.

Comments?

9 thoughts on “Is FC dead?!”

  1. I believe we've all seen that "dead" technologies from Mainframe to tape can be cash cows for many years. I agree that bandwidth is not the battle – FC has enough to keep up with bus architectures, 10GbE doesn't give an advantage, and even 40GbE will be close enough. FC is a niche solution for storage, and any replacement needs to match its scalability, management and reliability. Traditional enterprise data centers that use FC are often hundreds or thousands of nodes, while iSCSI configurations are rarely more than 100 nodes. There are plenty of use cases where 10GbE iSCSI is great, but it's also rare that a large FC customer wants to migrate off of the existing infrastructure. 40GbE is likely to take quite a few years to roll out; cost and cabling concerns are blocking items, similar to those that kept 10GbE from gaining traction for almost a decade.
    From an equipment standpoint, the line between FC and Ethernet has been blurred – there are adapters, switches and even optics that can do both. I was hopeful that the "protocol wars" were over, but I would say that if people feel the need to attack FC, it's probably not dead yet.

    1. Stu, perceptive comment as always. Yes, you're correct: tape was pronounced dead in 1979 and has had a long and healthy life even up to today, over 3 decades later. Technologies don't die, they just fade away.

       All that being said, I think the hard advantages are the things to focus on when we talk about protocol wars. And as 10GbE and 40GbE come online, the hard advantages start to shift away from FC. When that happens, enterprise data centers will be hard pressed to stand up new FC infrastructure. As for extending current FC infrastructure, it may take a long time before what's currently in place gets torn out and replaced by Ethernet alone.

       So yes, I agree, the death of a technology is a long term endeavor, but you have to look at what one would do standing up a new greenfield data center. I would conclude that in a short period 10GbE will be affordable enough to make using FCoE a no-brainer. And when and if 40GbE comes online, it's even easier to go Ethernet.

       As for the need for the bandwidth: yes, today there are few applications that need 10Gbps, let alone 40Gbps, but that never stopped vendors from pushing the technological envelope to support this segment of the marketplace. Their long term hope is that the rest of the market catches up.

       Ray

  2. Good post as always Ray. I remember when iSCSI was initially being pushed and all the "FC is going to die" talk started… years later, people still depend on FC for their mission-critical applications.
    As much as Cisco and others would love for this article to be true, sales for Cisco's FC SANs keep rising year upon year, even though they've been marketing FCoE so heavily.
    Here's a link to a whitepaper which I feel will be of interest to anyone seriously considering FC's credentials for the future: http://virtensys.com/files/Fibre_Channel_-_Destro

    1. Archie, thanks for the comment. Yes, the question in my mind is why Cisco isn't selling more FCoE. I think there were technical reasons early on which held back adoption, but those should no longer be the case. It's unclear to me how Cisco breaks out their sales w.r.t. FC vs FCoE in their reporting.

       There were some interesting stats in the paper you cited on the use of FC, something to the extent that ~40% of all FC is unused capacity. That probably speaks more to Stu's issue that bandwidth is not a concern.

       Ray

  3. Hi Ray,
    Working for Brocade offers me a perspective on this that I'd like to share.

    As Stuart pointed out, the customer value of Fibre Channel is not based on speed alone. Arguments that FCoE or iSCSI will replace Fibre Channel because you can get 40GbE in the future aren't very satisfying to the customer. Raw link rate isn't the itch they need scratched.

    Ray mentions "hard advantages" which I intepret as first cost. That's certainly important for a customer. But, removing a proven, working, efficient technology and replacing it with a different technology has "hard disadvantages" as well. In my experience, reliability of storage IO is considered mission critical so a customer's decision to change storage technology are often very deliberate ones.

    1. BRCDbreams, thanks for your comment. Yes, changing out infrastructure is a long range decision and not one taken lightly. But if a data center can get all the speed it needs from 40GbE and all the reliability, availability, serviceability and security it needs from FCoE, why would it use FC for new infrastructure installations?

       However, having the luxury of installing “all new” infrastructure is not very common. In the meantime, continuing to extend current FC infrastructure might make more sense to many while they wait for a new “do over” to come along. But I would see that FC extension activity diminishing over time, as “all new” infrastructure moves to 40GbE/FCoE.

       Ray

  4. Here is the rest of my comment…

    Why does a technology become obsolete? An example is provided by Fibre Channel. The business created an irresistible force: grow storage by 100% every year. The existing technology, parallel SCSI, couldn't provide enough distance, speed or storage connections, so it was ripe for extinction. Fibre Channel entered the storage ecosystem and dominated, since it could meet the irresistible demand of the business.

    Today, what irresistible business force places demands on Fibre Channel that it can't meet? Is there anything the business demands that Fibre Channel can't provide but FCoE can? If there isn't, then extinction is unlikely. Instead, there is adaptation and differentiation for market niches. Storage area networking technology will differentiate as market niches require. Each SAN protocol (FC, iSCSI, FCoE) is an adaptation looking to meet the needs of a particular market segment. None is likely to cause the extinction of the others, as they inhabit largely separate market niches.
