Commodity hardware loses again …

It seemed only a couple of years back that everyone was touting how hardware engineering no longer mattered.

What with Intel and others playing out Moore’s law, why invest in hardware engineering when the real smarts were all in software?

I said then that hardware-engineered solutions still had a significant role to play, but few believed me (see my posts: Commodity hardware always loses and Commodity hardware debate heats up).

Well, hardware’s back …

A few examples:

  1. EMC DSSD – at EMCWorld2015 a couple of weeks back, EMC demoed a new rack-scale flash storage system targeted at extremely high IOPS and very low latency. DSSD is a classic study in how proprietary hardware can enable new levels of performance. The solution connects to servers over a PCIe switched network, something that didn’t really exist before, and uses hardware-engineered flash modules that are extremely dense, extremely fast and extremely reliable. (See my EMCWorld post on DSSD and our Greybeards on Storage (GBoS) podcast with Chad Sakac for more info on DSSD)
  2. Diablo Memory Channel Storage (MCS)/SanDisk UltraDIMMs – Diablo’s MCS ships in SanDisk’s UltraDIMM, NAND storage that plugs into DRAM slots and provides memory-paged access to NAND. The key is that the hardware logic keeps access overhead to ~50 μsecs. (We’ve written about MCS and UltraDIMMs here.)
  3. Hitachi VSP G1000 storage and Hitachi Accelerated Flash (HAF) – recent SPC-1 results showed that a G1000 outfitted with HAF modules could generate over 2M IOPS at very low (220 μsec) latency. (See our announcement summary on the Hitachi G1000 here; a back-of-the-envelope sketch of what these numbers imply follows this list.)
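
To put those numbers in perspective, here’s a minimal back-of-the-envelope sketch. The 2M IOPS and 220 μsec figures come from the SPC-1 result above, and the ~50 μsec figure from Diablo’s MCS claim; the remaining latencies in the ladder are rough, order-of-magnitude assumptions of mine, not measurements.

```python
# Back-of-the-envelope math on the numbers cited above.
# G1000 IOPS/latency: from the SPC-1 result in item 3.
# MCS latency: the ~50 usec Diablo claim in item 2.
# All other latencies: rough assumptions, for illustration only.

# Little's Law: IOs in flight = throughput (IO/s) * latency (s)
g1000_iops = 2_000_000            # >2M IOPS per the SPC-1 result
g1000_latency_s = 220e-6          # ~220 usec per the SPC-1 result
outstanding = g1000_iops * g1000_latency_s
print(f"G1000 at peak: ~{outstanding:.0f} IOs in flight")   # ~440

# Rough latency ladder, normalized against MCS/UltraDIMM access
latency_usec = {
    "DRAM access":          0.1,     # assumption
    "MCS/UltraDIMM NAND":  50.0,     # per the Diablo claim above
    "PCIe flash":         100.0,     # assumption
    "SAS/SATA SSD":       500.0,     # assumption
    "15K RPM disk":      5000.0,     # assumption
}
for tier, lat in latency_usec.items():
    print(f"{tier:>20}: {lat:>7.1f} usec ({lat / 50.0:>6.1f}x MCS)")
```

The Little’s Law arithmetic says the G1000 only needs a few hundred IOs in flight to sustain 2M IOPS at 220 μsecs – exactly the kind of concurrency-at-low-latency trade-off that’s hard to reach without hardware engineering.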

Diablo ran into some legal problems, but that’s all behind them now, so the way forward is clear of any extraneous hurdles.

There are other examples of proprietary hardware engineering from IBM FlashSystems, networking companies, PCIe flash vendors and others, but these will suffice to make my point.

My point is that if you want orders-of-magnitude better performance, you need to seriously consider engaging in some proprietary hardware engineering. Proprietary hardware may take longer to develop than software-only solutions (although that’s somewhat a function of the resources you throw at it), but the performance gains are sometimes unobtainable any other way.

~~~~

Chad made an interesting point on our GBoS podcast: hardware innovation is somewhat cyclical. For a period of time, commodity hardware is much better than any storage solution really needs, so innovation swings to the software arena. But over time, software functionality catches up and maxes out the hardware that’s available, and then you need more hardware innovation to take performance to the next level. So the cycle swings back to hardware engineering. And it will swing back and forth many more times before storage is ever done as an IT technology.

Today, when it seems there’s a new software-defined storage solution coming out every month, we look very close to peak software innovation, with little left in the way of performance gains. But there’s still plenty of headroom if we open our eyes to proprietary hardware.

Welcome to the start of the next hardware innovation cycle – take that, commodity hardware.

Comments?

Commodity hardware always loses

Herman Miller's Embody Chair by johncantrell (cc) (from Flickr)
A recent post by Stephen Foskett has revisited a blog discussion that Chuck Hollis and I had on commodity vs. special purpose hardware. It’s clear to me that commodity hardware is a losing proposition for the storage industry and for storage users as a whole. Not sure why everybody else disagrees with me on this.

It’s all about delivering value to the end user. If one can deliver the same value with commodity hardware as with special purpose hardware, then obviously commodity hardware wins – no question about it.

But, and it’s a big BUT, when a company invests in special purpose hardware, it has an opportunity to deliver better value to its customers. Yes, that hardware is going to be more expensive on a per-unit basis, but that doesn’t mean it can’t deliver commensurate benefits to offset the cost disadvantage.

Supercar Run 23 by VOD Cars (cc) (from Flickr)

Look around – one sees special purpose hardware everywhere. For example, just check out Apple’s iPad, iPhone and iPod, to name a few. None of these would be possible without special, non-commodity hardware. Yes, if one disassembles these products, one may find some commodity chips, but I’d venture that the majority of the componentry is special purpose, one-off designs that aren’t readily purchasable from any chip vendor. And the benefits they bring, aside from the coolness factor, are significant miniaturization with advanced functionality. The popularity of these products proves my point entirely – value sells, and special purpose hardware adds significant value.

One may argue that the storage industry doesn’t need such radical miniaturization. I disagree, of course, but even so, there are other, more pressing concerns worthy of hardware specialization, such as reduced power and cooling, increased data density and higher IO performance, to name just a few. Can some of this be delivered with SBB and other mass-produced hardware designs? Perhaps. But I believe that with judicious selection of special purpose hardware, the storage value delivered along these dimensions can be 10 times more than what can be done with commodity hardware.
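
To show what scoring a design along those dimensions might look like, here’s a toy sketch. Every number in it is hypothetical, chosen only to illustrate the metrics (IOPS per watt, TB per rack unit); none of them are real product specs.

```python
# Toy scoring of two hypothetical storage designs along the
# dimensions above: IO performance per watt and data density.
# All figures are made up for illustration -- not vendor specs.

designs = {
    "commodity (SBB-style)": {"iops": 200_000,   "watts": 1_200, "tb": 100, "rack_u": 4},
    "special purpose HW":    {"iops": 2_000_000, "watts": 2_400, "tb": 400, "rack_u": 4},
}

for name, d in designs.items():
    iops_per_watt = d["iops"] / d["watts"]
    tb_per_u = d["tb"] / d["rack_u"]
    print(f"{name:>22}: {iops_per_watt:>6.0f} IOPS/W, {tb_per_u:>4.0f} TB/U")
```

The point isn’t the made-up numbers; it’s that power, density and performance are measurable axes, and special purpose hardware gets to move along all of them at once.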

Cuba Gallery: France / Paris / Louvre / ... by Cuba Gallery (cc) (from Flickr)

Special purpose HW cost and development disadvantages denied

The other argument for commodity hardware is the belief that it’s just easier to develop and deliver functionality in software than in hardware. (I disagree; software functionality can be much harder to deliver than hardware functionality – maybe a subject for a different post.) But hardware development is becoming more software-like every day. Most hardware engineers do as much coding as any software engineer I know, and then some.

Then there’s the cost of special purpose hardware, but ASIC manufacturing is getting more commodity-like every day. Several hardware design shops sell off-the-shelf processor and other logic cores one can readily incorporate into an ASIC, and fabs can be found that will manufacture any ASIC design at a moderate price in reasonable volumes. And if one doesn’t need the cost advantage of ASICs, one can use FPGAs and CPLDs to build special purpose hardware out of programmable logic. This cuts engineering and development lead-times considerably but costs commensurately more per unit than ASICs.
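
The ASIC-vs-FPGA choice boils down to a classic crossover calculation: ASICs carry large non-recurring engineering (NRE) costs but cheap units, while FPGAs are nearly the reverse. Here’s a minimal sketch; the NRE and per-unit figures are hypothetical placeholders, not quotes from any fab or vendor.

```python
# Classic ASIC-vs-FPGA total-cost crossover.
# All dollar figures are hypothetical placeholders for illustration.

asic_nre, asic_unit = 2_000_000, 50   # assumed: $2M NRE, $50/unit
fpga_nre, fpga_unit = 100_000, 400    # assumed: $100K NRE, $400/unit

# Total cost = NRE + unit cost * volume.
# Volume at which the ASIC's cheaper units pay back its higher NRE:
crossover = (asic_nre - fpga_nre) / (fpga_unit - asic_unit)
print(f"ASIC wins above ~{crossover:,.0f} units")   # ~5,429 here

for volume in (1_000, 5_000, 10_000, 50_000):
    asic_total = asic_nre + asic_unit * volume
    fpga_total = fpga_nre + fpga_unit * volume
    winner = "ASIC" if asic_total < fpga_total else "FPGA"
    print(f"{volume:>7,} units: ASIC ${asic_total:>12,} vs FPGA ${fpga_total:>12,} -> {winner}")
```

Below the crossover volume, programmable logic wins on total cost as well as lead-time, which is exactly why FPGAs and CPLDs make sense for lower-volume special purpose storage hardware.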

Do we ever stop innovating?

Probably the hardest argument to counter is that, over time, commodity hardware becomes proficient at providing the same value as special purpose hardware. Although this may be true, products don’t have to stand still. One can continue to innovate and always increase the market-delivered value of any product.

If there comes a time when further product innovation is not valued by the market, then and only then does commodity hardware win. However, chairs, cars and buildings have all been around for many years – decades, even centuries – and innovation continues to deliver added value. I can’t see why the data storage business would be any different a century or two from now…

Better storage through hardware

Apple's Xserve (from Apple.com)

Chuck Hollis from EMC wrote a post last week, Storage is software, about how hardware parts are becoming so commoditized and so highly functional that future storage differentiation will only come from software. I commented that hardware differentiation is also becoming much easier with FPGAs and their ilk. Chuck replied that yes, this may be so, but will anyone pay the cost of such differentiation?

My reply deserves a longer discussion. Chuck mentioned Apple as one company differentiating successfully in hardware but thought that this would not apply to storage.

Better storage through hardware differentiation

I am a big fan of Apple, so it’s hard for me to see why something similar could not apply to storage. IMHO, what Apple has done better than the rest is to reconstruct the user experience, in totality, from one of frustration to one of delight.

Can such a thing be done for storage, and if so, “will it sell”? I believe yes to both questions.

Will such a new storage product necessarily require hardware/FPGA development as much as software/firmware development?  Again, yes.

Will anyone create this “better” storage? No easy answers here.

Developing better storage

Such a task involves a complete remaking, from the ground up, of a new storage product from the user/admin experience perspective. But the hard part is that the O/Ss and virtualization systems govern/control much of the storage user/admin experience, not the storage itself. As such, much of the functionality will necessarily be done in software, not hardware.

However, that doesn’t mean hardware differentiation can’t help. For example, consider storage interfaces. Today it’s not unusual to have 6 or more interface types on a storage system, but it’s hard for me to see how this couldn’t be better served with two 10GbE ports, two 8Gb FC ports and WiFi as an alternate admin interface. In a similar fashion, look at internal storage interfaces – it’s hard for me to see any absolute requirement for cabling there. Ditto for power cabling. And all this just improves the out-of-the-box experience.

Could something similar be done for the normal storage configuration, monitoring and protection activities? Most certainly. Even so, much of this differentiation would come via software/firmware and the O/S APIs being used. However, perhaps some small portion can make use of hardware/packaging differentiation.

I like to think that “I will know it when I see it”. But when someone can take storage out of a box and “install, use and protect it” on any O/S or virtualization environment with nothing more than a poster with 5 to 7 blocks as a guide, such “Apple-like” storage will have arrived.

Until then, storage admins will need training, a “storage admin” will be part of a job description, and storage will be something “we do” rather than something “we use”.

So my final answer to Chuck is: will anyone do it? I don’t know.

What do you think?