EMC World 2012 part 1 – VNX/VNXe

Plenty of announcements this morning from EMC World 2012. I’ll try to group them into different posts.  Today’s post covers the Unified Storage Division’s VNX/VNXe announcements:

  • New VNXe3150, which fills out the lower end of the VNXe line and replaces the VNXe3100 (though the two will coexist for awhile). The new storage system supports 2.5″ drives, has quad-core processing, now supports SSDs as a static storage tier (no FAST yet), has a 100-drive capacity, supports 3.5″ 3TB drives, and has dual 10GbE-port frontend interface cards.  The new system provides a 50% performance and capacity increase in the same rack.
  • New VNX software now supports 256 read-writeable snapshots per LUN; the previous limit was 8 (I think). EMC has also improved storage pooling for both VNX and VNXe, which now allows multiple types of RAID groups per pool (previously they all had to be the same RAID level) and rebalancing across RAID groups for better performance, plus new larger RAID 5 & 6 groups (why?).   They now offer better storage analytics with VMware vCenter, which provide impressive integration with FAST VP and FAST Cache, supplying performance and capacity information, and allowing faster diagnosis and resolution of storage issues under VMware.

Stay tuned, more to come I’m sure

 

Securely erasing SSD and disk data

Read a couple of stories this past week or so on securely erasing data, but the one that caught my eye was about RunCore and their InVincible SSD.  It seems they have produced a new SSD with internal circuitry/mechanisms for securely erasing its data.

Securely erasing SSDs

Each InVincible SSD comes with a special cable with two buttons on it: one for overwriting the data (intelligent destruction) and the other for destroying the NAND cells (physical destruction).

  • In the erase-data mode (intelligent destruction), device data is overwritten on all NAND cells so that the original data is no longer readable.  Presumably, as this is an internal feature, even over-provisioned NAND cells are overwritten.  It’s unclear what this does to pages that are no longer programmable, but perhaps they have a way to deal with that too.   There was some claim that the device would be restored to factory-new condition, but it seems to me that NAND endurance would still have been reduced.
  • In the kill-NAND-cells mode (physical destruction), the device apparently generates a high enough voltage internally to electronically destroy all the NAND bit cells so they are no longer readable (or writeable).  Wonder if there’s any smoke that emerges when this happens.

Not sure how you insert the special cable, because the device has to be powered to do any of this.  It seems to me they would have been better served with a SATA diagnostic command to do the same thing, but maybe the special cable is a bit more apparent.  The cable comes with two buttons, one green and the other red (I would have thought yellow and red more appropriate).
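In fact, the ATA spec already defines a SECURITY ERASE UNIT command that the drive firmware executes internally, which on an SSD can reach over-provisioned cells too. On Linux, hdparm can issue it; a minimal sketch (the device path is a placeholder, and the erase irreversibly destroys all data):

```python
import subprocess

def build_secure_erase_cmds(device, password="p"):
    """Build the two hdparm invocations for an ATA Secure Erase:
    set a temporary security password, then issue the erase command,
    which the drive's own firmware carries out internally."""
    return [
        ["hdparm", "--user-master", "u", "--security-set-pass", password, device],
        ["hdparm", "--user-master", "u", "--security-erase", password, device],
    ]

def secure_erase(device):
    # WARNING: destroys all data on `device`; requires root privileges.
    for cmd in build_secure_erase_cmds(device):
        subprocess.run(cmd, check=True)
```

Whether a given SSD's controller actually erases every over-provisioned cell on this command is, of course, up to the vendor's firmware.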

But what about my other SSDs?

It’s not as useful as I first thought because what the world really needs is a device that could erase or kill NAND cells on any SSD drive.  That way we could securely erase all SSDs.

I suppose the problem with a universal SSD eraser is that it would need to somehow bypass wear leveling to get at over-provisioned NAND cells.  Also, physically destroying NAND cells would take some special circuitry. But maybe if we could come up with a standard approach across the industry, such a device could be readily available.

I suppose another approach is to encrypt the data and throw away your keys, but that seems too simple.
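A rough sketch of the idea: if every block written to media passes through a cipher keyed by a single media key, then destroying that one key renders all the ciphertext unreadable at once, over-provisioned cells included. The toy XOR stream cipher below is for illustration only; a real self-encrypting drive would use AES in hardware.

```python
import hashlib
import os

def keystream(key, length):
    """Toy SHA-256 counter-mode keystream -- illustration only,
    not cryptographically vetted."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key, data):
    """XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

media_key = os.urandom(32)                        # held in the drive controller
on_media = xor_cipher(media_key, b"sensitive data")
# Crypto-erase: forget the key and the bits on media become unreadable noise.
media_key = None
```

The erase is then instantaneous, regardless of capacity, which is a big part of the appeal.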

Or maybe just overwrite the data a half dozen or so times with random and repeating data patterns and then their complements. But this may not reach over-provisioned cells, and with wear leveling in place all these writes could conceivably go to the same, single NAND page.
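For a single file on a disk drive, the multi-pass approach is easy enough to sketch (illustrative only; as noted, on an SSD wear leveling can redirect every one of these passes to fresh NAND pages):

```python
import os

def multi_pass_overwrite(path, extra_random_passes=2):
    """Overwrite a file in place with repeating bit patterns and their
    complements, then random data, syncing each pass to media."""
    size = os.path.getsize(path)
    patterns = [b"\x55", b"\xaa", b"\x00", b"\xff"]  # 01010101, 10101010, etc.
    with open(path, "r+b") as f:
        for pat in patterns:
            f.seek(0)
            f.write(pat * size)
            f.flush()
            os.fsync(f.fileno())
        for _ in range(extra_random_passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
```

Even on disk, this only covers the file's currently allocated blocks, not earlier copies the filesystem may have relocated.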

New approaches to securely erasing disk data

On another note, at SNW earlier this year I was talking with another vendor who said that securely erasing disk drives no longer takes multiple (3-7, depending on whom you want to believe) passes of overwriting with specified data patterns (random and repeating patterns and complements of same).  He said there was research done recently which had proved this, but I could only find this article on [Disk] Data Sanitization.

And sometime this past week I had read another article (don’t know where) about a company shipping a device which degausses standard 3.5″ disk drives.  You just insert a disk inside of it and push a button or two and your data is gone.

Why all the interest in securely erasing data?

Data never really goes away on its own. No one wants their data publicly available, and securely erasing it after the fact is a simple (but lengthy) way to deal with that.

But why isn’t everyone using data encryption?  Seems like a subject for another post.

Comments

Image: Safe ‘n green by Robert S. Donovan 

 

 

ReRAM to the rescue

I was at the Solid State Storage Symposium a couple of weeks ago where Robin Harris (StorageMojo) gave the keynote presentation. In his talk, Robin mentioned a new technology on the horizon which holds the promise of replacing DRAM, SRAM and NAND called resistive random access memory (ReRAM or RRAM).

If so, ReRAM will enter the technological race alongside MRAM, graphene flash, PCM and racetrack memory as follow-ons for NAND technology.  But none of those others has any intention of replacing DRAM.

Problems with NAND

There are a few problems with NAND today, but the main one affecting future NAND technologies is that as devices shrink, they lose endurance. For instance, today’s SLC NAND technology has an endurance of ~100K P/E (program/erase) cycles, MLC NAND can endure around 5000 P/E cycles, and eMLC falls somewhere in between.  Newly emerging TLC (three bits/cell) has even less endurance than MLC.

But that’s all at 30nm or larger.  The belief is that as NAND feature size shrinks below 20nm its endurance will get much worse, perhaps orders of magnitude worse.

While MLC may be ok for enterprise storage today, much less than 5000 P/E cycles could become a problem and would require ever more sophistication to work around these limitations.    Which is why most enterprise-class, MLC NAND-based storage uses specialized algorithms and NAND controller functionality to support storage reliability and durability.
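Back-of-the-envelope numbers show why endurance matters so much. A drive's total write budget is roughly capacity × P/E cycles, consumed at the host write rate inflated by write amplification (all figures below are illustrative assumptions, not any vendor's spec):

```python
def drive_lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                         write_amp=1.5):
    """Rough NAND drive lifetime: total program/erase budget divided by
    the effective daily write load (host writes x write amplification)."""
    write_budget_gb = capacity_gb * pe_cycles
    effective_daily_gb = host_writes_gb_per_day * write_amp
    return write_budget_gb / effective_daily_gb / 365

# A 200GB MLC drive at 5000 P/E cycles with 500GB/day of host writes:
print(round(drive_lifetime_years(200, 5_000, 500), 1))  # ~3.7 years
# The same drive at 500 P/E cycles -- a plausible sub-20nm figure:
print(round(drive_lifetime_years(200, 500, 500), 2))    # ~0.37 years, ~4 months
```

An order-of-magnitude endurance drop turns a multi-year drive into a disposable one, which is exactly what the controller sophistication is fighting.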

ReRAM solves NAND, DRAM and NVRAM problems

Enter ReRAM: it has the potential to be faster than PCM, has smaller features than MRAM (which means more bits per square inch), and uses lower voltage than racetrack memory and NAND.    The other nice thing about ReRAM is that it seems readily scalable below 30nm feature geometries.  Also, as it’s a static memory, it doesn’t have to be refreshed like DRAM and thus uses less power.

In addition, it appears that ReRAM is much more flexible than NAND or DRAM and can be designed and/or tailored to support different memory requirements.   Thus, one ReRAM design can be focused on standard DRAM applications while another can be targeted at mass storage or solid state drives (SSDs).

On the negative side there are still some problems with ReRAM, namely the large “sneak parasitic current” [whatever that is] that impacts adjacent bit cells and drains power.  There are a few solutions to this problem but none yet completely satisfactory.

But it’s a ways out, isn’t it?

No it’s not. BBC and Tech-On reported that Panasonic will start sampling devices soon and plans to reach volume manufacturing next year.   Elpida-Sharp and HP-Hynix are also at work on ReRAM (or memristor) devices and expect to ship sometime in 2013.  But for the moment it appears that Panasonic is ahead of the pack.

At first, these devices will likely emerge in low power applications but as vendors ramp up development and mass production it’s unclear where it will ultimately end up.

The allure of ReRAM technology is significant in that it holds out the promise of replacing both the RAM and the NAND used in consumer devices as well as IT equipment with one and the same technology.  When you consider that the combined current market for DRAM and NAND is over $50B, people start to notice.

~~~~

Whether ReRAM will meet all of its objectives is yet TBD.  But we seldom see any one technology with this much potential.  The one remaining question is why everybody else isn’t going after ReRAM as well, like Samsung, Toshiba and Intel-Micron.

I have to thank StorageMojo and the Solid State Storage Symposium team for bringing ReRAM to my attention.

[Update] @storagezilla (Mark Twomey) said that “… Micron’s acquisition of Elpida gives them a play there.”

Wasn’t aware of that, but yes, they are definitely in the hunt now.

Comments?

Image: Memristor by Luke Kilpatrick

 

EMC buys XtremIO

Wow, $430M for a startup that has raised $25M, has been around since 2009 and hasn’t generated any revenue yet.  It probably compares well against Facebook’s recent $1B acquisition of Instagram, but still it seems a bit much.

It certainly signals a significant ongoing interest in flash storage in whatever form that takes. Currently EMC offers PCIe flash storage (VFCache), SSD options in VMAX and VNX, and has plans for a shared flash cache array (project: Thunder).  An all-flash storage array makes a lot of sense if you believe this represents an architecture that can grab market share in storage.

I have talked with XtremIO in the past, but they were pretty stealthy then (and still are as far as I can tell). Not many details about their product architecture, specs on performance, interfaces or anything substantive. The only thing they told me then was that they were in the flash storage array business.

In a presentation to SNIA’s BOD last summer I said that the storage industry is in revolution.  When a system of 20 or so solid state drives can generate ~250K or more IO/second with a single controller and simple interfaces, we’re not in Kansas anymore.

Can a million-IOPS storage system be far behind?

It seems to me that delivering enterprise storage performance has gotten much easier over the last few years.  That doesn’t cover enterprise storage reliability, availability or features, but just getting to that level of performance used to take 1000s of disk drives and racks of equipment.  Today, you can almost do it in a 2U enclosure, and that’s without breaking a sweat.

Well, that seems to be the point: with a gaggle of startups all vying for SSD storage in one form or another, the market is starting to take notice.  Maybe EMC felt that it was a good time to enter the market with their own branded product; they seem to already have all the other bases covered.

Their website mentions that XtremIO is a load-balanced, deduplicated, clustered storage system with enterprise-class services (this could mean anything). Nonetheless, a deduplicating, clustered SSD storage system built out of commodity servers could describe at least 3 other SSD startups I have recently talked with and a bunch I haven’t talked with in awhile.

Why EMC decided that XtremIO was the one to buy is somewhat of a mystery.  There was some mention of an advanced data protection scheme for the flash storage, but no real details.

Nonetheless, an enterprise SSD storage startup with a relatively low valuation and the potential to disrupt enterprise storage might be something to invest in.  Certainly EMC felt so.

~~~~

Comments?  Anyone know anything more about XtremIO?

Analyzing SPECsfs2008 flash use in NFS performance – chart-of-the-month

(SCISFS120316-002) (c) 2012 Silverton Consulting, All Rights Reserved

For some time now I have been using OPS/drive to measure storage system disk drive efficiency but have so far failed to come up with anything similar for flash or SSD use.  The problem with flash in storage is that it can be used as a cache or as a storage device.  Even when used as a storage device under automated storage tiering, SSD advantages can be difficult to pin down.

In my March newsletter as a first attempt to measure storage system flash efficiency I supplied a new chart shown above, which plots the top 10 NFS throughput ops/second/GB of NAND used in the SPECsfs2008 results.

What’s with Avere?

Something different has occurred with the (#1) Avere FXT 3500 44-node system in the chart.   The 44-node Avere system only used ~800GB of flash, as a ZIL (ZFS intent log, per the SPECsfs report).   However, it also had ~7TB of DRAM across the 44 nodes, most of which was used for file IO caching.  If we incorporated storage system memory size along with flash GB in the above chart, it would have dropped the Avere numbers by a factor of 9 while only dropping the others by a factor of ~2X, which would still give Avere a significant advantage, but not quite so stunning.  Also, the Avere system front-ends other NAS systems (this one running ZFS), so it’s not quite the same as being a direct NAS storage system like the others on this chart.
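The adjustment works out like this (the throughput number below is a made-up placeholder used only for the ratio; the ~800GB flash and ~7TB DRAM figures are the ones discussed above):

```python
def ops_per_gb(ops, flash_gb, dram_gb=0):
    """NFS throughput ops/sec per GB of cache media (flash, optionally
    plus DRAM)."""
    return ops / (flash_gb + dram_gb)

avere_flash_gb, avere_dram_gb = 800, 7_000
some_ops = 1_000_000  # hypothetical throughput, for ratio purposes only

flash_only = ops_per_gb(some_ops, avere_flash_gb)
flash_plus_dram = ops_per_gb(some_ops, avere_flash_gb, avere_dram_gb)
# (800 + 7000) / 800 = 9.75 -- the factor-of-9 drop, independent of ops:
print(round(flash_only / flash_plus_dram, 2))
```

For the NetApp systems, whose DRAM is roughly comparable to their FlashCache capacity, the same adjustment only costs a ~2X factor.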

The remainder of the chart (#2-10) belongs to NetApp and their FlashCache (or PAM) cards.  Even Oracle’s Sun ZFS Storage 7320 appliance did not come close to either the Avere FXT 3500 system or the NetApp storage on this chart.  There were at least 10 other SPECsfs2008 NFS results using some form of flash, but they were not fast enough to rank here.

Other measures of flash effectiveness

This metric still doesn’t quite capture flash efficiency.  I was discussing flash performance with another startup the other day, and they suggested that SSD drive count might be a better alternative.  Such a measure would take into consideration that each SSD can only sustain a certain performance level, not unlike disk drives.

In that case, Avere’s 44-node system had 4 drives, and each NetApp system had two FlashCache cards, representing 2 SSDs per NetApp node.  I’ll try that next time to see if it’s a better fit.

~~~~

The complete SPECsfs2008 performance report went out in SCI’s March newsletter.  But a copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the SPECsfs performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current NAS or file system storage performance covering SPECsfs2008 (Top 20) results and our new ChampionChart™ for NFS and CIFS storage systems, please see SCI’s NAS Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPECsfs2008 results or any of our other storage performance analyses.


Thoughts on Spring SNW 2012 in Dallas

Viking Technology NAND/DIMM SSD 32TB/1U demo box

[Updated photo] Well the big news today was the tornado activity in the Dallas area. When the tornado warnings were announced customers were stuck on the exhibit floor and couldn’t leave (which made all the vendors very happy). Meetings with vendors still went on but were held in windowless rooms and took some ingenuity to get to. I offered to meet in the basement but was told I couldn’t go down there.

As for technology at the show, I was pretty impressed with the Viking booth. They had 512GB flash cards that fit into spare DIMM slots, with MLC or SLC NAND flash storage on them; each card takes power from the DIMM slot and uses separate SATA cabling to connect the SSD storage together. They could easily be connected to a MegaRAID card and RAIDed together. The cards are mainly sold to OEMs, but they are looking to gain some channel partners willing to sell them directly to end users.

In addition to the MLC NAND/DIMM card, they had a demo box with just a whole bunch of DIMM slots, where they modified the DIMM connections to also support a SATA interface through their motherboard. They had on display a 1U storage box with 32TB of MLC NAND/DIMM cards and a single power supply supporting 6 lanes of SAS connectivity to the storage. It wasn’t clear what they were trying to do with this other than stimulate thought and interest from OEMs, but it was a very interesting demo.

There were a few major vendors, including Fujitsu, HDS, HP, and Oracle, exhibiting at the show, with a slew of minor ones as well. But noticeable by their absence were Dell, EMC, IBM, and NetApp, not to mention Brocade, Cisco and Emulex.

Someone noticed that a lot of the smaller SSD startups weren’t here either, e.g., no PureStorage, NexGen, SolidFire, Whiptail, etc. Even FusionIO with their bank of video streams was missing from the show. In times past, smaller startups would use SNW to get vendor and end-user customer attention. I suppose nowadays they do this at VMworld, Oracle OpenWorld, Sapphire or other vertical-specific conferences.

Marc Farley of StorSimple discussing cloud storage

In the SSD space, Nimbus Data, TMS, Micron and OCZ were here showing off their latest technology. Also, a few standard bearers like FalconStor, Veeam, Sepaton, Ultrium and Qlogic were exhibiting as well. There were a couple of pure cloud players too, like RackSpace, StorSimple and a new player, Symform.

Didn’t get to attend any technical sessions today, but I made the keynote last night, which was pretty good. That talk was all about how the CIO has to start playing offense, getting ahead of where the business is heading, rather than playing defense, catching up to where the business needed to be before.

More on SNW USA tomorrow.

SCI SPC-1 results analysis: Top 10 $/IOPS – chart-of-the-month

Column chart showing the top 10 economically performing systems for SPC-1
(SCISPC120226-003) (c) 2012 Silverton Consulting, Inc. All Rights Reserved

Lower is better on this chart.  I can’t remember the last time we showed this Top 10 $/IOPS™ chart from the Storage Performance Council SPC-1 benchmark.  Recall that we prefer our IOPS/$/GB, which factors in subsystem size, but this past quarter two new submissions ranked well on this metric.  The two new systems were the all-SSD Huawei Symantec Oceanspace™ Dorado2100 (#2) and the latest Fujitsu ETERNUS DX80 S2 (#7) storage subsystems.

Most of the winners on $/IOPS are SSD systems (#1-5 and #10), and most of these were all-SSD storage systems.  These systems normally achieve better $/IOPS by hitting high IOPS™ rates for the cost of their storage. But they often submit relatively small systems to SPC-1, reducing system cost and helping them place better on $/IOPS.

On the other hand, some disk-only storage systems do well by abandoning any form of protection, as with the two Sun J4400 (#6) and J4200 (#8) storage systems, which used RAID 0 but also had smaller capacities, coming in at 2.2TB and 1.2TB, respectively.

The other two disk-only storage systems here, the Fujitsu ETERNUS DX80 S2 (#7) and the Huawei Symantec Oceanspace S2600 (#9), also had relatively small capacities, at 9.7TB and 2.9TB respectively.

The ETERNUS DX80 S2 achieved ~35K IOPS at a cost of under $80K, generating $2.25/IOPS.  Of course, the all-SSD systems blow that away; for example, the Oceanspace Dorado2100 (#2), an all-SSD system, hit ~100K IOPS but cost nearly $90K, for $0.90/IOPS.

Moreover, the largest-capacity system here, with 23.7TB of storage, was the Oracle Sun ZFS (#10) hybrid SSD and disk system, which generated ~137K IOPS at a cost of ~$410K, hitting just under $3.00/IOPS.
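The metric itself is just the total tested configuration price divided by SPC-1 IOPS; a quick check using figures rounded from the results above (the exact submission prices differ slightly):

```python
def dollars_per_iops(total_price_usd, spc1_iops):
    """SPC-1 $/IOPS: total tested storage configuration price
    divided by reported SPC-1 IOPS."""
    return total_price_usd / spc1_iops

print(round(dollars_per_iops(78_750, 35_000), 2))    # ETERNUS DX80 S2: ~2.25
print(round(dollars_per_iops(90_000, 100_000), 2))   # Dorado2100: ~0.9
print(round(dollars_per_iops(410_000, 137_000), 2))  # Sun ZFS hybrid: ~2.99
```

Notice how capacity never enters the formula, which is why small configurations can game it.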

I still prefer our own metric for economical performance, but each has its flaws.  The SPC-1 $/IOPS metric is dominated by SSD systems and our IOPS/$/GB metric is dominated by disk-only systems.   There’s probably some better way to measure the cost of performance, but I have yet to see it.

~~~~

The full SPC performance report went out in SCI’s February newsletter.  But a copy of the full report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the full SPC performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current SAN or block storage performance covering SPC-1 (top 30), SPC-2 (top 30) and ESRP (top 20) results please see SCI’s SAN Storage Buying Guide available on our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPC results or any of our other storage performance analyses.

 

Super Talent releases a 4-SSD, RAIDDrive PCIe card

RAIDDrive UpStream (c) 2012 Super Talent (from their website)

Not exactly sure what is happening, but PCIe cards are coming out containing multiple SSD drives.

For example, the recently announced Super Talent RAIDDrive UpStream card contains 4 embedded SAS SSDs that push storage capacity up to almost a TB of MLC NAND.   There is an optional SLC version, but no specs were provided for it.

It looks like the card uses an LSI RAID controller and SandForce NAND controllers.  Unlike the other RAIDDrive cards, which support RAID 5, the UpStream can be configured with RAID 0, 1 or 1E (mirrored striping that, unlike RAID 10, also works across odd drive counts), and it currently supports total capacities of 220GB, 460GB or 960GB.
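To make the RAID 1E layout concrete, here's a hypothetical mapping sketch (not Super Talent's or LSI's actual implementation): data blocks are striped across all drives, and a mirror row follows with the same blocks rotated one drive over, so every block lives on two different drives even when the drive count is odd:

```python
def raid1e_layout(num_drives, num_stripes):
    """Return rows of (kind, block) tuples, one column per drive: each
    data row is followed by a mirror row rotated right by one drive."""
    layout = []
    block = 0
    for _ in range(num_stripes):
        blocks = list(range(block, block + num_drives))
        block += num_drives
        layout.append([("data", b) for b in blocks])
        layout.append([("copy", blocks[(i - 1) % num_drives])
                       for i in range(num_drives)])
    return layout

# Three drives, one stripe of blocks 0-2 -- drive columns hold:
#   data: 0 1 2
#   copy: 2 0 1   (block 0's copy sits on a different drive than its data)
```

As with RAID 1, usable capacity is half the raw capacity, but reads can be striped across all members.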

Just like the rest of the RAIDDrive product line, the UpStream card is PCIe x8-connected and requires host software (drivers) for Windows, NetWare, Solaris and other OSs, but not for “most Linux distributions”.  Once the software is up, the RAIDDrive can be configured and then accessed just like any other “super fast” DAS device.

Super Talent’s data sheet states UpStream performance as 1GB/sec reads and 900MB/sec writes. However, I didn’t see any SNIA SSD performance test results, so it’s unclear how well performance holds up over time or whether these levels can be independently verified.

It seems like just a year ago that I was reviewing Virident’s PCIe SSD along with a few others at Spring SNW.   At the time, I thought there were a lot of PCIe NAND cards being shown.  Given Super Talent’s and the many other vendors’ PCIe SSDs today, there will probably be a lot more this time.

No pricing information was available.

~~~~

Comments?