Cleversafe’s new hardware

Cleversafe's new dsNet(tm) Rack

Yesterday, Cleversafe announced new Slicestor(r) 2100 and 2200 hardware using 2TB SATA drives. The standard 1U 2100 package supports 8TB of raw data and the new 2U 2200 package supports 24TB. In addition, a new Accesser(r) 2100 supports 8GB of ECC RAM and two GigE or 10GbE ports for data access.

In addition to the new server hardware, Cleversafe also announced an integrated rack with up to 18 Slicestor 2200s, 2 Accesser 2100s, 1 Omnience (management node), a 48-port Ethernet switch, and PDUs. This new rack configuration comes pre-cabled and can be installed quickly to provide an immediate 432TB of raw capacity. Customers with multiple sites could order one or more racks to support a quick installation of Cleversafe storage services.
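The rack capacity falls straight out of the Slicestor count; a quick sketch (the per-node drive count is my inference from 24TB of 2TB drives, not a Cleversafe spec):

```python
# Raw capacity of the pre-cabled dsNet rack described above.
drives_per_slicestor_2200 = 12     # assumed: 24TB raw / 2TB SATA drives
slicestor_2200_raw_tb = drives_per_slicestor_2200 * 2
slicestors_per_rack = 18

rack_raw_tb = slicestor_2200_raw_tb * slicestors_per_rack
print(rack_raw_tb)  # 432, matching the announced rack capacity
```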

Cleversafe currently offers iSCSI block services, direct object storage interface and file services interfaces (over iSCSI).  They are finding some success in the media and entertainment space as well as federal and state government data centers.

The federal and state government agencies seem especially interested in Cleversafe for its data security capabilities.  They offer cloud data security via their SecureSlice(tm) technology which encrypts data slices and uses key masking to obscure the key.  With SecureSlice, the only way to decrypt the data is to have enough slices to reconstitute the data.

Also, the new Accesser and Slicestor server hardware now uses a flash drive-on-motherboard unit to hold the operating system and Cleversafe software. This leaves the data drives free to hold only customer data, reduces Accesser power requirements, and improves both Slicestor and Accesser reliability.

In a previous post we discussed EMC Atmos's GeoProtect capabilities; although not quite as sophisticated as Cleversafe's approach, EMC does offer a form of data dispersion across sites/racks.  However, GeoProtect currently appears limited to two distinct configurations.  In contrast, Cleversafe allows the user to select the number of Slicestors used to store data and the threshold required to reconstitute it.  This lets the user almost dial up or down the availability and reliability they want for their data.
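That dial-up/dial-down claim can be made concrete with a toy availability model. A sketch, assuming each Slicestor is independently available with probability p (the n, k, and p values below are illustrative, not Cleversafe defaults):

```python
from math import comb

def data_availability(n, k, p):
    """Probability that at least k of n slices are reachable,
    assuming each Slicestor is independently up with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A wider n-k margin tolerates more Slicestor failures, so availability climbs.
print(data_availability(16, 10, 0.99))  # e.g., a 10-of-16 dispersal
print(data_availability(16, 14, 0.99))  # tighter threshold, lower availability
```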

Cleversafe performs well enough to saturate a single Accesser GigE iSCSI link.  Accessers maintain a preferred routing table which indicates which Slicestors currently offer the best performance. By reading from the quickest Slicestors first when reconstituting data, performance can be optimized.  Specifically, for the typical multi-site Cleversafe implementation, knowing current Slicestor-to-Accesser performance can improve data reconstitution performance considerably.
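The preferred-routing idea can be sketched simply: rank Slicestors by observed latency and read from the fastest ones needed to reach the reconstitution threshold (names and latencies below are invented):

```python
def pick_slicestors(latencies_ms, threshold):
    """Return the `threshold` fastest Slicestors -- a toy stand-in for the
    Accesser's preferred routing table."""
    ranked = sorted(latencies_ms.items(), key=lambda kv: kv[1])
    return [name for name, _ in ranked[:threshold]]

# Hypothetical multi-site latencies as an Accesser might observe them.
observed = {"ss-nyc": 4.1, "ss-chi": 9.7, "ss-den": 6.2, "ss-sfo": 48.0}
print(pick_slicestors(observed, 3))  # ['ss-nyc', 'ss-den', 'ss-chi']
```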

Full disclosure: I have done work for Cleversafe in the past.

Google vs. National Information Exchange Model

Information Exchange Package Documents (IEPD) lifecycle

Wouldn’t national information exchange be better served by deferring the National Information Exchange Model (NIEM) and instead implementing some sort of Google-like search of federal, state, and municipal text data records?  Most federal, state, and local data resides in sophisticated databases, but the information management tools behind them all seem to support creating PDF, DOC, or other text output for their records.  Once in text form, such data could easily be indexed by Google or another search engine, and thus searched on any term in the text record.

Now this could never completely replace NIEM; e.g., it could never offer even close-to-real-time information sharing.  But true real-time sharing would be impossible even with NIEM.  And whereas NIEM is still under discussion today (years after its initial draft) and will no doubt require even more time to fully implement, text-based search could be available today with minimal cost and effort.

What would be missing from a text-based search scheme vs. NIEM:

  • “Near” real-time sharing of information
  • Security constraints on information being shared
  • Contextual information surrounding data records
  • Semantic information explaining data fields

Text-based information sharing in operation

How would something like a Google-type text search work to share government information?  As discussed above, government information management tools would need to convert data records into text.  This could be a PDF, text file, DOC file, or PPT, and more formats could be supported in the future.

Once text versions of data records were available, they would need to be uploaded to a (federally hosted) special website where a search engine could scan and index them.  Indexing such a repository would be no more complex than doing the same for the web today.  Even so, it will take time to scan and index the data, and until this is done the data won't be searchable.  However, Google and others can scan web pages in seconds and often scan websites daily, so the delay may be as little as minutes to days after data upload.
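The indexing step itself is conceptually simple. A toy inverted index over text records (record IDs and contents below are invented) might look like:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of record IDs containing it -- a toy
    stand-in for what Google-style crawling/indexing does at scale."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    "rec-001": "permit application Denver water district",
    "rec-002": "Denver police incident report",
}
idx = build_index(docs)
print(sorted(idx["denver"]))  # ['rec-001', 'rec-002'] -- both records match
```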

Securing text-based search data

Search security could be accomplished in any number of ways, e.g., with different websites or directories established for each security level.  Assuming one used different websites, then Google or another search engine could be directed to search any site at your security level and below for the information you requested.  This may take some effort to implement, but even today one can restrict a Google search to a set of websites.  It's conceivable that a script could be developed to issue a search request restricted by your security level.
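A minimal sketch of that clearance filter, assuming one website per security level (the site names and level numbers are made up):

```python
# Hypothetical repository sites, each tagged with a security level.
SITE_LEVELS = {
    "public.example.gov": 0,
    "restricted.example.gov": 2,
    "secret.example.gov": 3,
}

def searchable_sites(user_level):
    """Sites a user may search: everything at or below their clearance."""
    return sorted(s for s, lvl in SITE_LEVELS.items() if lvl <= user_level)

print(searchable_sites(2))  # public + restricted, but not secret
```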

Gaining participation

Once the upload websites/repositories are up and running, getting federal, state, and local governments to place data into them may take some persuasion.  Federal funding can be used as one means to enforce compliance.  Bootstrapping data into the searchable repository can help ensure initial usage, and once that is established, ease of access and search effectiveness can hopefully help ensure its continued use.

Interim path to NIEM

One loses all contextual and most semantic information when converting a database record into text, but that can't be helped.  What one gains is an almost immediately searchable repository of information.

For example, Google can be licensed to operate on internal sites for a fair but high fee, and we're sure Microsoft would be willing to do the same for Bing/FAST.  Setting up a website to do the uploads can take an hour or so using something like WordPress and a file management plugin like FileBase, though other alternatives exist.

Would this support the traffic for the entire nation's information repository?  Probably not.  However, it would be a quick and easy proof of concept which could go a long way toward getting information exchange started.  Nonetheless, I wouldn't underestimate the speed and efficiency of WordPress, as it supports a number of highly active websites/blogs.  Over time such a WordPress website could be optimized, if necessary, to support even higher performance.

As this takes off, perhaps the need for NIEM becomes less time-sensitive, allowing it to take a more reasoned approach.  And as the web and search engines become more semantically aware, perhaps the need for NIEM diminishes further.  Even so, there may ultimately need to be something like NIEM to facilitate increased security, real-time search, and database context and semantics.

In the meantime, a more primitive textual search mechanism such as the one described above could be up and available within a day or so.  True, it wouldn't provide real-time search and wouldn't do everything NIEM could, but it could provide viable, actionable information exchange today.

I am probably oversimplifying the complexity of providing true information sharing, but such a capability could go a long way toward integrating the governmental information sharing needed to support national security.

Atmos GeoProtect vs RAID

The Night Lights of Europe (as seen from space) by woodleywonderworks (cc) (from flickr)

Yesterday, twitterland was buzzing about EMC's latest enhancement, called GeoProtect, to their Atmos cloud storage platform.  This new capability improves cloud data protection by supporting erasure-code data protection rather than just pure object replication.

Erasure coding has been used in storage for over a decade, and some of the common algorithms are Reed-Solomon, Cauchy Reed-Solomon, EVENODD coding, etc.  All these algorithms provide a way to split customer data into data instances and parity (encoding) so that some number of data or parity instances can be erased (or lost) while still recovering customer data.  For example, an R-S encoding scheme we used in the past (called RAID 6+) had 13 data fragments and 2 parity fragments.  Such an encoding scheme supported the simultaneous failure of any two drives and could still supply (reconstruct) customer data.
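The RAID 6+ example makes the trade-off easy to quantify; a quick sketch of the arithmetic:

```python
# The 13-data + 2-parity R-S scheme (RAID 6+) mentioned above.
data_frags, parity_frags = 13, 2
total_frags = data_frags + parity_frags          # 15 fragments stored in all

overhead = parity_frags / data_frags             # extra capacity vs. raw data
tolerated_failures = parity_frags                # any 2 of 15 fragments may be lost
print(f"{overhead:.1%} overhead, survives any {tolerated_failures} failures")
```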

But how does RAID differ from something like GeoProtect?

  • RAID is typically within a storage array and not across storage arrays
  • RAID is typically limited to a small number of alternative configurations of data disks and parity disks which cannot be altered in the field, and
  • Currently, RAID typically doesn’t support more than two disk failures while still being able to recover customer data (see Are RAIDs days numbered?)

As I understand it, GeoProtect currently supports only two encoding schemes, which provide for different levels of data instance failure while still protecting customer data.  And with GeoProtect you are protecting data across Atmos nodes, and potentially across different geographic locations, not just within storage arrays.  Also, with Atmos this is all policy driven: data coming into the system can use any object replication policy or either of the two GeoProtect policies supported today.

The nice thing about R-S encoding is that it doesn't have to be fixed to two encoding schemes.  And as it's all software, new coding schemes could easily be released over time, perhaps someday becoming something a user could dial up or down at their whim.

But this would seem much more like what Cleversafe has been offering in their Slicestor product.  With Cleversafe the user can specify exactly how much redundancy they want and the system takes care of everything else.  In addition, Cleversafe has implemented a more fine-grained approach (with many more fragments), and data and parity are intermingled in each stored fragment.

It's not a big stretch for Atmos to go from two GeoProtect configurations to four or more.  It's unclear to me what the right number would be, but once you get past three or so, it might be easier to just code a generic R-S routine that can handle any configuration the customer wants, though I may be oversimplifying the mathematics here.

Nonetheless, in future versions of Atmos I wouldn't be surprised if the way data is protected could change over time through policy management. Specifically, while data is being frequently accessed, one could use object replication or a less compact encoding to speed up access; once access frequency diminishes (or time passes), data could be re-protected with a more storage-efficient encoding scheme, reducing its footprint in the cloud while still offering similar resiliency to data loss.
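That policy idea might look something like the toy selector below; the thresholds, policy names, and overhead figures are my invention, not Atmos features:

```python
def protection_policy(days_since_access, reads_per_day):
    """Pick a protection scheme by access pattern: replicate hot objects
    for speed, erasure-code cold ones for space efficiency (all illustrative)."""
    if reads_per_day > 100 or days_since_access < 7:
        return "replicate-3x"        # fast access, 200% capacity overhead
    return "erasure-10-of-14"        # slower rebuilds, 40% capacity overhead

print(protection_policy(days_since_access=2, reads_per_day=500))
print(protection_policy(days_since_access=90, reads_per_day=1))
```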

Full disclosure: I have worked for Cleversafe in the past, and although I am currently working with EMC, I have had no work from EMC's Atmos team.

ESRP 1K to 5K mailbox performance – chart of the month

ESRP 1001 to 5000 mailboxes, database transfers/second/spindle

One astute reader of our performance reports pointed out that some ESRP results could be skewed by the number of drives used during a run.  So we included a database-transfers-per-spindle chart in our latest Exchange Solution Reviewed Program (ESRP) report on 1001 to 5000 mailboxes in our latest newsletter.  The chart shown here is reproduced from that report and shows the overall database transfers per second per spindle attained (total of reads and writes) for the top 10 storage subsystems reporting in the latest ESRP results.

This cut of the system performance shows a number of diverse systems:

  • Some storage systems had 20 disk drives and others had 4.
  • Some of these systems were FC storage (2), some were SAS attached storage (3), but most were iSCSI storage.
  • Mailbox counts supported by these subsystems ranged from 1400 to 5000 mailboxes.

What’s not shown is the speed of the disk spindles. Also none of these systems are using SSD or NAND cards to help sustain their respective workloads.

A couple of surprises here:

  • One would think iSCSI systems would show up much worse than FC storage.  True, the number 1 system (NetApp FAS2040) is FC while numbers 2 & 3 are iSCSI, but the differences are not that great.  It would seem that protocol overhead is not a large determinant of spindle performance for ESRP workloads.
  • The number of drives used also doesn’t seem to matter much.  The FAS2040 had 12 spindles while the AX4-5i only had 4.  Although this cut of the data should minimize drive count variability, one would think that more drives would result in higher overall performance for all drives.
  • Such performance approaches the limit of what a 15Krpm drive can sustain.  No doubt some of this is helped by system caching, but no amount of cache can hold all the database write and read data for the duration of a Jetstress run.  It's still pretty impressive, considering typical 15Krpm drives (e.g., Seagate 15K.6) can probably do ~172 random 64KB block IOs/second.  The NetApp FAS2040 hit almost 182 database transfers/second/spindle; perhaps not 64KB blocks and maybe not completely random, but impressive nonetheless.
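A quick sanity check of those spindle numbers:

```python
# Back-of-envelope check on the per-spindle figures above.
drive_iops_limit = 172           # ~ random 64KB IOs/sec for a 15Krpm drive
fas2040_xfers_per_spindle = 182  # best result in this ESRP cut

ratio = fas2040_xfers_per_spindle / drive_iops_limit
print(f"{ratio:.2f}x the nominal drive limit")  # ~1.06x: caching must be helping
```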

The other nice thing about this metric is that it doesn't correlate well with any other ESRP metrics we track, such as aggregate database transfers, database latencies, database backup throughput, etc.  So it seems to measure a completely different dimension of Exchange performance.

The full ESRP report went out to our newsletter subscribers last month and a copy of the report will be up on the dispatches page of our website later this month. However, you can get this information now and subscribe to future newsletters to receive future full reports even earlier, just subscribe by email.

As always, we welcome any suggestions on how to improve our analysis of ESRP or any of our other storage system performance results.  This new chart was a result of one such suggestion.

SNIA Tech Center Grand Opening – Mystery Storage Contest

SNIA Technology Center Grand Opening Event - mingling before the show

Yesterday in Colorado Springs SNIA held a grand opening for their new Technology Center.  They have moved their tech center about a half mile closer to Pikes Peak.

The new center has less data center floor space than the old one, but according to Wayne Adams of EMC, Chairman of the SNIA Board, this is a better fit for what SNIA is doing today.  These days SNIA doesn't do as many plugfests requiring equipment to be co-located on the same data center floor.  Most of SNIA's plugfests today are done over the web, remotely, across continent-wide distances.

SNIA's new tech center is being leased from LSI, which occupies the other half of the building.  If you were familiar with the old tech center, it was leased from HP, which resided in the other half of that building.  This arrangement has worked well for SNIA in the past, providing access to a large number of technical experts who can be called on to help out when needed.

A couple of things I didn’t realize about SNIA:

  • They have been in Colorado Springs since 2001
  • They have only 14 USA employees but over 4000 volunteers
  • They host a worldwide IO trace library to which member companies contribute and which they can access
  • All the storage equipment in their data center is provided for free by vendor/member companies.
  • SNIA certification training is one of the top 10 certifications as judged by an independent agency

Took a tour of their technology display area highlighting SNIA initiatives:

SNIA Green Storage Initiative display station
  • Green Storage Initiative display had a technician working on getting the power meter working properly.  It was one of only two analyzers I saw in their labs.  The Green Storage Initiative is all about the energy consumption of storage.
  • Solid State Storage Initiative display had a presentation on the SSSI activities and white papers which they have produced on this technology.
  • XAM initiative section had a couple of people talking about the importance of XAM to compliance activities and storage archives.
  • FCIA section had a talk on FC and its impact on storage present and futures.

Other initiatives were on display as well but I spent less time studying them.  In the conference room and 2 training rooms, SNIA had presentations on their Certification activity and storage training opportunities.  Howie Goldstein (HGAI Associates) was in one of the training rooms talking about education he provides through SNIA.

SNIA Tech Center Computer Lab 1

The new tech center has two computer labs.  Lab 1 seemed to have just about every vendor's storage hardware.  As you can see from the photo, each storage subsystem was dedicated to SNIA initiative activities.  I didn't see a lot of servers using this storage, but they were probably located in Computer Lab 2.  In the picture one can see EMC, HDS, APC, and at the end, 3PAR storage.  On the other side of the aisle (not shown) were HP, NetApp, Pillar Data, and more HDS storage (and I probably missed one or two more).

I don't recall the SAN switch hardware, but it wouldn't surprise me if it included a representative selection from all the vendors.  There was more switch hardware in Lab 2, where we could easily make out (McData, now) Brocade switch hardware.

SNIA Tech Center Computer Lab 2 switching hw

Computer Lab 2 seemed to have most of the server hardware and more storage.  Both labs looked pretty clean from my perspective, probably due to all the press and grand opening celebration.  They ought to look more lived-in/worked-in over time.  I always like labs to be a bit more chaotic (see my price of quality post for a look at a busy HP EVA lab).

You would think that with all SNIA's focus on plugfests there would be a lot more hardware analyzers floating around the two labs.  But outside of the power meter in the Green Storage Initiative display, the only other analyzer-like equipment I saw was a lone workstation behind some of the storage in Lab 2.  It was way too clean to actually be in use.  There ought to be post-it notes all over it, cables hanging all around it, and manuals and other documentation underneath it (but maybe I am a little old school; docs should all be online nowadays).  Also, I didn't see one whiteboard in either of the labs, another clear sign of early life.

SNIA Tech Center Lab 2 Lone portable workstation

We didn’t get to see much of the office space but it looked like plenty of windows and decent sized offices. Not sure how many people would be assigned to each but they have to put the volunteers and employees someplace.

Mystery Storage Contest – 1

And now for a new contest: see if you can identify the storage shown in the photo below.  Please submit your answer via a comment on this post and be sure to supply a valid email address.

Contest participant(s) will all receive a subscription to my (free) monthly Storage Intelligence email newsletter.  One winner will be chosen at random from all correct entries and will earn a free coupon code for 30% off any Silverton Consulting Briefings purchased through the web (once I figure out how to do this).

Correct entries must supply a valid email and identify the storage vendor and product model depicted in the picture.  Bonus points will be awarded for anyone who can tell the raw capacity of the subsystem in the picture.

SNIA volunteers, SNIA employees, SNIA member company employees and family members of any of these are not allowed to submit answers.  The contest will be closed 90 days after this post is published.  (And would someone from SNIA Technology Center please call me at 720-221-7270 and provide the identification and raw capacity of the storage subsystem depicted below in Computer Lab 2 on the NorthWest Wall.)

SNIA Technology Center - Mystery Storage 1

Remember our first Mystery Storage Contest closes in 90 days.

Also if you would like to submit an entry picture for future mystery storage contests please indicate so in your comment and I will be happy to contact you directly.

15PB a year created by CERN

The Large Hadron Collider/ATLAS at CERN by Image Editor (cc) (from flickr)

That’s what CERN produces from their 6 experiments each year.  How this data is subsequently accessed and replicated around the world is an interesting tale in multi-tier data grids.

When an experiment is run at CERN, the data is captured locally in what’s called Tier 0.  After some initial processing, this data is stored on tape at Tier 0 (CERN) and then replicated to over 10 Tier 1 locations around the world which then become a second permanent repository for all CERN experiment data.  Tier 2 centers can request data from Tier 1 locations and process the data and return results to Tier 1 for permanent storage.  Tier 3 data centers can request data and processing time from Tier 2 centers to analyze the CERN data.

Each experiment has its own set of Tier 1 data centers that store its results.  According to the latest technical description I could find, Tier 0 (at CERN) and most Tier 1 data centers provide a permanent tape repository for experimental data, fronted by a disk cache.  Tier 2 centers can have similar resources but are not expected to be a permanent repository for data.

Each Tier 1 data center has its own hierarchical management system (HMS) or mass storage system (MSS) based on any number of software packages such as HPSS, CASTOR, Enstore, dCache, DPM, etc., most of which are open source products.  But regardless of the HMS/MSS implementation, they all provide a set of generic storage management services based on the Storage Resource Manager (SRM), as defined by a consortium of research centers, and a set of file transfer protocols defined by yet other standards from Globus or gLite.

Each Tier 1 data center manages its own storage element (SE).  Each experiment's storage element has disk storage, optionally backed by tape storage (using one or more of the above packages), provides authentication/security and file transport, and maintains catalogs and local databases.  These catalogs and local databases index the data sets or files available on the grid for each experiment.

Data stored in the grid is considered read-only and can never be modified.  The intent is that users needing to process this data read it from Tier 1 data centers, process it, and create new data which is then stored back in the grid.  New data to be placed in the grid must be registered in the LCG file catalogue and transferred to a storage element to be replicated throughout the grid.

CERN data grid file access

“Files in the Grid can be referred to by different names: Grid Unique IDentifier (GUID), Logical File Name (LFN), Storage URL (SURL) and Transport URL (TURL). While the GUIDs and LFNs identify a file irrespective of its location, the SURLs and TURLs contain information about where a physical replica is located, and how it can be accessed.” (taken from the gLite user guide).

  • GUIDs look like guid:<unique_string>.  Files are given a unique GUID when created, which can never be changed.  The unique-string portion of the GUID is typically a combination of MAC address and timestamp and is unique across the grid.
  • LFNs look like lfn:<unique_string>.  Files can have many different LFNs, all pointing or linking to the same data.  LFN unique strings typically follow Unix-like conventions for file links.
  • SURLs look like srm:<se_hostname>/path and provide a way to access data located at a storage element.  SURLs are immutable and unique to a storage element, and are transformed into TURLs for access.
  • TURLs look like <protocol>://<se_hostname>:<port>/path and are obtained dynamically from a storage element.  TURLs can have any format after the // that uniquely identifies the file to the storage element, but typically they have an se_hostname, port, and file path.

GUIDs and LFNs are used to look up a data set in the global LCG file catalogue.  After file lookup, a set of site-specific replicas is returned (via SURLs), which are used to request file transfer/access from a nearby storage element.  The storage element accepts the file's SURL and assigns a TURL, which can then be used to transfer the data to wherever it's needed.  TURLs can specify any file transfer protocol supported across the grid.
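The lookup chain can be sketched in a few lines; all hostnames, paths, and the port below are invented for illustration, and real SRM interactions are far more involved:

```python
# Toy model of the resolution flow above: LFN -> SURLs (catalogue) -> TURL (SE).
CATALOGUE = {
    "lfn:/grid/atlas/run1234.root": [
        "srm://se1.cern.ch/atlas/run1234.root",
        "srm://se2.in2p3.fr/atlas/run1234.root",
    ],
}

def resolve_turl(lfn, protocol="gsiftp"):
    """Pick the first (nearby) replica SURL and turn it into a TURL."""
    surl = CATALOGUE[lfn][0]
    host_path = surl.split("://", 1)[1]
    host, _, path = host_path.partition("/")
    return f"{protocol}://{host}:2811/{path}"   # port assigned by the SE (assumed)

print(resolve_turl("lfn:/grid/atlas/run1234.root"))
```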

CERN data grid file transfer and access protocols currently supported include:

  • GSIFTP – a grid security interface enabled subset of the GRIDFTP interface as defined by Globus
  • gsidcap – a grid security interface enabled feature of dCache for ftp access.
  • rfio – remote file I/O supported by DPM. There is both a secure and an insecure version of rfio.
  • file access – local file access protocols used to access the file data locally at the storage element.

While all storage elements provide the GSIFTP protocol, the other protocols supported depend on the underlying HMS/MSS system implemented by the storage element for each experiment.  Most experiments use one type of MSS throughout their worldwide storage elements and, as such, offer the same file transfer protocols throughout the world.

If all this sounds confusing, it is.  Imagine 15PB a year of data replicated to over 10 Tier 1 data centers, which can then be securely processed by over 160 Tier 2 data centers around the world.  All this supports literally thousands of scientists who have access to every byte of data created by CERN experiments, along with the scientists that post-process this data.

Just exactly how this data is replicated to the Tier 1 data centers, and how a scientist processes such data, must be the subject of other posts.

Future iBook pricing

Apple's iPad iBooks app

All the news about the iPad and its iBooks app got me thinking.  There's been much discussion of e-book pricing, but no one is looking at what to charge for items other than books.  I see this as something like what happened to albums when iTunes came out: individual songs became available without having to buy the whole album.

As such, I started to consider what iBooks should charge for items outside of books.  Specifically,

  • Poems – there's no reason the iBooks app should not offer poems as well as books, but what's a reasonable price for a poem?  I believe Natalie Goldberg in Writing Down the Bones: Freeing the Writer Within used to charge $0.25 per poem.  So this is a useful lower bound; however, considering inflation (and assuming $0.25 was 1976 pricing), in today's prices this would be closer to $1.66.  With the iBooks app's published commission rate (33% for Apple), future poets would walk away with $1.11 per poem.
  • Haiku – As a short-form poem, I would argue a haiku should cost less than a poem.  So maybe $0.99 per haiku would be a reasonable price.
  • Short stories – As a short-form book, pricing for short stories needs to be somehow proportional to normal e-book pricing.  A typical book has about 10 chapters, and as such, it might be reasonable to consider a short story as equal to a chapter.  So maybe 1/10th the price of an e-book is reasonable.  With the prices being discussed for books, this would be roughly the price we set for poems.  No doubt incurring the wrath of poets forevermore, I am willing to say this undercuts the worth of short stories and would suggest something more on the order of $2.49 for a short story.  (Poets, please forgive my transgression.)
  • Comic books – Comic books seem close to short stories, and with their color graphics would do well on the iPad.  It seems to me these might be priced somewhere in between short stories and poems, perhaps at $1.99 each.
  • Magazine articles – I see no reason magazine articles shouldn't be offered outside the magazine itself, just as short stories outside books.  Once again, the color graphics found in most high-end magazines should do well on the iPad.  I would assume pricing similar to short stories would make sense here.
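The poem arithmetic from the list above, spelled out:

```python
# Inflation-adjusted poem price and the poet's take after Apple's cut.
todays_price = 1.66        # $0.25 in (assumed) 1976 dollars, adjusted per the post
apple_cut = 0.33           # iBooks commission rate cited above

poet_take = todays_price * (1 - apple_cut)
print(f"${poet_take:.2f} per poem")  # ~$1.11, matching the figure in the list
```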

University presses, the prime outlet for short stories today, seem similar to small record labels.  Of course,  the iBooks app could easily offer to sell their production as e-books in addition to selling their stories separately. Similar considerations apply to poetry publishers. Selling poems and short stories outside of book form might provide more exposure for the authors/poets and in the long run, more revenue for them and their publishers.  But record companies will attest that your results may vary.

Regarding magazine articles and comic books, there seems to be a dependence on advertising revenue that may suffer from iBook publishing.  This could be dealt with by incorporating publisher advertisements in iBook displays of an article or comic book.  However, significant advertising revenue comes from ads placed outside of articles, such as in back matter, around the table of contents, in between articles, etc.  This will need to change with the transition to e-articles, and revenues may suffer.

Nonetheless, all these industries can continue to do what they do today.  Record companies still exist, perhaps not doing as well as before iTunes, but they still sell CDs.  So there is life after iTunes/iBooks, but one thing's for certain – it's different.

I am probably missing whole categories of items that could be separated from the book form sold today.  But in my view, anything that could be offered separately probably will be.  Comments?

Intel-Micron new 25nm/8GB MLC NAND chip


Intel-Micron Flash Technologies just issued another increase in NAND density. This one manages to put 8GB on a single chip with MLC (2-bit-per-cell) technology in a 167mm² package, roughly a half inch per side.

You may recall that Intel-Micron Flash Technologies (IMFT) is a joint venture between Intel and Micron to develop NAND technology chips. IMFT chips can be used by any vendor and typically show up in Intel SSDs as well as other vendors' systems. MLC technology is more suitable for consumer applications, but at these densities it's starting to make sense for data center use as well. We have written before about MLC NAND used in enterprise disks by STEC and about Toshiba's MLC SSDs. But in essence, MLC NAND reliability and endurance will ultimately determine its place in the enterprise.

But at these densities, you can just throw more capacity at the problem to mask MLC endurance concerns. For example, with this latest chip one could conceivably build a single-layer 2.5″ configuration with almost 200GB of MLC NAND. If you wanted to configure this as a 128GB SSD, you could use the additional 72GB of NAND to replace failing pages. Doing this could conceivably add more than 50% to the life of an SSD.
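The over-provisioning arithmetic above, spelled out (the 25-die count is my assumption based on 8GB chips):

```python
# Over-provisioning a 2.5" MLC configuration as described above.
raw_gb = 25 * 8       # twenty-five 8GB MLC die in one layer (assumed) -> 200GB
exposed_gb = 128      # capacity actually sold to the user
spare_gb = raw_gb - exposed_gb

print(spare_gb, f"({spare_gb / exposed_gb:.0%} spare for worn-out pages)")
```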

SLC still has better (~10X) endurance, but being able to ship 2X the capacity in the same footprint can help.  Of course, MLC and SLC NAND can be combined in a hybrid device to give some approximation of SLC reliability at MLC cost.

IMFT made no mention of SLC NAND chips at the 25nm technology node, but presumably these will be forthcoming shortly.  If we assume the technology can support a 4GB SLC NAND chip in a 167mm² package, it should be of significant interest to most enterprise SSD vendors.

A couple of things were missing from yesterday's IMFT press release, namely:

  • read/write performance specifications for the NAND chip
  • write endurance specifications for the NAND chip

SSD performance is normally a function of all the technology that surrounds the NAND chip, but it all starts with the chip.  Also, MLC used to be capable of 10,000 write/erase cycles and SLC of 100,000 W/E cycles, but recent technology from Toshiba (presumably 34nm) shows an MLC NAND write/erase endurance of only 1,400 cycles.  This seems to imply that as NAND density increases, write endurance degrades.  How much is subject to much debate, and with the lack of any standardized W/E endurance specification and reporting, it's hard to see how bad it gets.
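A rough way to see what those endurance numbers mean: total lifetime writes scale as capacity times P/E cycles. A sketch using the figures above, ignoring write amplification and wear leveling:

```python
# Lifetime-writes comparison for the endurance figures discussed above.
capacity_gb = 128   # hypothetical SSD capacity

cycles = {"older MLC": 10_000, "SLC": 100_000, "recent 34nm-class MLC": 1_400}
lifetime_tb = {name: capacity_gb * c / 1000 for name, c in cycles.items()}

for name, tb in lifetime_tb.items():
    print(f"{name}: ~{tb:,.0f} TB of total writes")
```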

The bottom line: capacity is great, but we need to know W/E endurance to really see where this new technology fits.  Ultimately, if endurance degrades significantly, such NAND technology will only be suitable for consumer products.  Of course, at ~10X (just guessing) the size of the enterprise market, maybe that's OK.