Atomristors, a new single (atomic) layer memristor

Read an article the other day, “Atomristor: non-volatile resistance switching in atomic sheets of transition metal dichalcogenides” (TMDs), an ACS publication. The research describes an atomic-sheet-level version of a memristor: an atomic sheet of TMD sandwiched between two (gold, silver or graphene) electrodes.

They refer to the device switching non-volatile resistance (NVR) from low to high or vice versa but from our perspective it could just as easily be considered a non-volatile device usable for memory, storage, or electronic circuitry.

Prior to this research, it was believed that such resistance switching could not be accomplished with a single-atom-thick, sub-nanometre (0.7nm) sheet of material.

NVR atomristor technological properties

The researchers discovered that NVR switching can occur at different device temperatures, sheet areas, compliance currents, voltage sweep rates, and layer thicknesses. In all, five degrees of freedom were tested, showing that TMD atomristors have wide applicability and tolerate significant environmental and electronic variability.

Not only was the effect extremely versatile, the researchers also identified multiple materials that could be used for the atomic sheet. TMDs are in fact a class of materials, and the researchers demonstrated the NVR effect in four different TMD materials.

Surprisingly, some TMD materials exhibited the NVR effect with unipolar voltages, some with bipolar voltages, and some with both.
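
To make the bipolar case concrete, here is a toy sketch of bistable resistance switching. It is not the paper's device physics: the thresholds and resistance values are invented, and a unipolar device would instead distinguish set from reset by voltage magnitude and compliance current rather than polarity.

```python
# Toy model of bipolar non-volatile resistance (NVR) switching. Not the
# paper's physics, just a sketch of the bistable behavior it describes.
# Resistance values and thresholds here are invented for illustration.

class ToyAtomristor:
    R_HIGH = 1e9  # ohms, high-resistance state (HRS)
    R_LOW = 1e3   # ohms, low-resistance state (LRS)

    def __init__(self, v_set=1.0, v_reset=-1.0):
        self.v_set = v_set      # positive voltage that switches HRS -> LRS
        self.v_reset = v_reset  # negative voltage that switches LRS -> HRS
        self.resistance = self.R_HIGH

    def apply_voltage(self, v):
        # State changes only when a threshold is crossed; otherwise the
        # last state persists with no power applied (the non-volatility).
        if v >= self.v_set:
            self.resistance = self.R_LOW
        elif v <= self.v_reset:
            self.resistance = self.R_HIGH
        return self.resistance

d = ToyAtomristor()
for v in (0.3, 1.2, 0.0, -1.2, 0.0):     # read, set, read, reset, read
    print(f"V = {v:+.1f} V -> R = {d.apply_voltage(v):.0e} ohms")
```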

The researchers went a long way toward showing that the NVR effect was due to the atomic sheet itself. In one instance, they specifically used non-lithographic methods to fabricate the devices, employing graphene-manufacturing-like processes to grow an atomic sheet on top of gold foil and then depositing another gold layer on top of that.

But they also built atomristor devices with standard fabrication techniques. Using these different fabrication methods, they were able to show the NVR effect with different electrode types, testing gold, silver, and graphene, all of which worked similarly.

The paper talked of using atomristors in software defined radio, as an electronic circuit/crossbar switch, or as a memory/storage device. But they also indicated that atomristors could easily be used in a neuromorphic computer as well, effectively simulating neuron-like computations.

There’s much more information in the ACS article.

How does it compare to flash?

As compared to flash, atomristor NVR devices should be able to provide higher density (more bits per unit area). And due to the lower voltage (~1V) required for (bipolar) NVR setting, reading and resetting, there’s a lower probability of stored state leaking away as devices are scaled down to nm sizes.

And of course it comes in 2D sheets, so it’s just as amenable to 3D stacking as NAND and 3D XPoint are today. That means that as fabs scale 3D NAND up in layers, atomristor NVR devices should be able to follow the same technology roadmap and scale up just as high.

Atomristor computers, storage or switch devices

Going from the “lab” to an IT shop is a multifaceted endeavour that takes a lot of time. There are many steps needed to get to commercialization, and many lab breakthroughs never make it that far because of complexity, economics, and other factors.

For instance, memristors were first proposed in 1971, and HP(E) researchers first discovered a material that could produce the memristor effect in 2008. In March 2012, HRL fabricated the first memristor chip on CMOS. In Dec. 2017, >9 years later, at their Discover Conference, HPE showed off “The Machine”, a prototype of a memristor-based computer, to the public. But we are still waiting to see one on the market for sale.

That being said, memristor technologies didn’t exist before 2008, so how to use these devices in a computer took some time to be understood. The fact that atomristors are “just” an extremely thin version of memristors should help them get to market faster than the original memristor technologies. But how much faster than 9-12 years is anyone’s guess.

~~~~

Comments?

Picture Credit(s): All graphics and pictures are from the article in ACS

A tale of two AFAs: EMC DSSD D5 & Pure Storage FlashBlade

There’s been an ongoing debate in the analyst community about the advantages of software-only innovation vs. hardware-software innovation (see Commodity hardware loses again and Commodity hardware always loses posts). Here is another example where two separate companies have turned to hardware innovation to take storage to the next level.

DSSD D5 and FlashBlade

Within the last couple of weeks, two radically different AFAs were introduced: one by perennial heavyweight EMC with their new DSSD D5 rack-scale flash system, and the other by relative newcomer Pure Storage with their new FlashBlade storage system.

These two arrays seem to be going after opposite ends of the storage market: the 5U DSSD D5 is going after both structured and unstructured data that needs ultra-high speed IO access (<100µsec) times, while the 4U FlashBlade is going after more general purpose unstructured data. And yet the two have many similarities, at least superficially.

3D NAND, how high can it go?

I was at the Flash Memory Summit a couple of weeks ago and a presenter (from Hynix, I think) got up and talked about how 3D NAND was going to be the way forward for all NAND technology. I always thought we were talking about a handful of layers. But on the slide he had what looked to be a skyscraper block with 20-40 layers of NAND.

Currently shipping 3D NAND

It seems all the major NAND fabs are shipping 30+ layer 3D NAND. Samsung last year said they were shipping 32-layer 3D (V-)NAND. Toshiba announced earlier this year that they had 48-layer 3D NAND. Hynix is shipping 36-layer 3D NAND. Micron-Intel is also shipping 32-layer 3D NAND. Am I missing anyone?

Samsung also said that they will be shipping a 32GB, 48-layer V-NAND chip later this year. Apparently, Samsung is also working on 64-layer V-NAND in their labs and are getting good results.  In an article on Samsung’s website they mentioned the possibility of 100 layers of NAND in a 3D stack.

The other NAND fabs are also probably looking at adding layers to their 3D NAND but aren’t talking as much about it.

Earlier this year on a GreyBeards on Storage Podcast we talked with Jim Handy, Director at Objective Analysis, on what was going on in NAND fabrication. Talking with Jim was fascinating, but one thing he said was that with 3D NAND, building a hole deep enough, wide enough, and straight enough was a key challenge. At the time I was thinking a couple of layers deep. Boy was I wrong.

How high/deep can 3D NAND go?

On the podcast, Jim said he thought that 3D NAND would run out of gas around 2023. Given current press releases, it seems NAND fabs are adding ~16 layers a year to their 3D-NAND.

So if 32 to 48 layers is today’s 3D NAND and we can keep adding 16 layers/year through 2023, that’s 8 years × 16 layers, or an additional 128 layers on top of the 32 to 48 layers currently shipping. At that rate we should get to 160- to 176-layer 3D NAND chips. And if 48 layers is 32GB, then maybe we could see ~100GB+ 3D NAND chips.
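
Here is a quick sketch of that arithmetic, with the assumptions (a steady 16 layers/year, capacity scaling linearly with layer count from the 48-layer, 32GB data point) spelled out as code:

```python
# A back-of-the-envelope projection of 3D NAND layer counts and capacity.
# Assumptions: ~16 layers/year through 2023, and capacity scales linearly
# with layer count from the 48-layer, 32GB V-NAND data point. These are
# projections from the text above, not vendor roadmaps.

LAYERS_PER_YEAR = 16
YEARS = 8  # roughly 2015 through 2023

for layers_today in (32, 48):
    layers_2023 = layers_today + LAYERS_PER_YEAR * YEARS
    capacity_gb = 32 * layers_2023 / 48  # scale from 48 layers = 32GB
    print(f"{layers_today} layers today -> ~{layers_2023} layers in 2023, "
          f"~{capacity_gb:.0f}GB/chip if capacity scales with layers")
```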

This of course assumes that there is no loss in capacity per layer as we increase layers, and that the industry can continue to add 16 layers/year to 3D NAND chips.

I suppose there’s one other proviso: that nothing else comes along that is less expensive to fabricate while still providing ever increasing capacities of lightning fast, non-volatile storage (see a recent post on 3D XPoint NVM technology).

Photo Credit(s):

  1. Micron’s press release on 3D NAND, (c) 2015 Micron
  2. Toshiba’s press release as reported by AnandTech, (c) 2015 Toshiba

Flash’s only at 5% of data storage

We have been hearing for years that NAND flash is at price parity with disk. But at this week’s Flash Memory Summit, Darren Thomas, VP Storage BU, Micron, said in his keynote that NAND only stores 5% of the bits in a data center.

Darren’s session was all about how to get flash to become more than 5% of data storage, which he called “crossing the chasm”. I assume the 5% is against yearly data storage shipped.

Flash’s adoption rate

Darren said last year flash climbed from 4% to 5% of data center storage, but he made no mention of whether flash’s adoption was accelerating. According to another of Darren’s charts, flash is expected to ship ~77B Gb of storage in 2015 and should grow to about 240B Gb by 2019.

If the ratio of flash bits shipped to data centers (vs. all flash bits shipped) holds constant, then flash should be ~15% of data storage by 2019. But this assumes data storage doesn’t grow. If we assume a 10% Y/Y CAGR for data storage, then flash would represent ~9% of overall data storage.

Data growth at 10% could be conservative. A 2012 EE Times article said 2010-2015 data growth CAGR would be 32%, and IDC’s 2012 digital universe report said that between 2012 and 2020, data will double every two years, a ~41% CAGR. But both numbers could be talking about the world’s data growth, not just the data center’s.
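
For what it’s worth, here is the back-of-the-envelope math behind those share estimates, using only the keynote’s figures (5% share today, ~77B Gb shipped in 2015, ~240B Gb in 2019). The answer shifts a point or two depending on how many years of data center growth you compound over.

```python
# Rough math behind the flash share estimates above, from the keynote's
# figures only. Note the result is sensitive to the number of years of
# data center growth you compound over (4 vs. 5 years moves it a bit).

flash_2015_gb, flash_2019_gb = 77e9, 240e9   # ~77B Gb -> ~240B Gb shipped
share_2015 = 0.05                            # flash at 5% of data center bits
years = 4                                    # 2015 -> 2019

for dc_cagr, label in ((0.00, "no data growth"),
                       (0.10, "10% Y/Y data growth"),
                       (0.32, "32% Y/Y data growth")):
    dc_growth = (1 + dc_cagr) ** years
    share_2019 = share_2015 * (flash_2019_gb / flash_2015_gb) / dc_growth
    print(f"{label}: flash ~{share_2019:.0%} of data center storage in 2019")
```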

How to cross this chasm?

Geoffrey Moore, author of Crossing the Chasm, came up on stage as Darren discussed what he thought it would take to go beyond early adopters (visionaries) to early majority (pragmatists) and reach wider flash adoption in data center storage. (See Wikipedia article for a summary on Crossing the Chasm.)

As one example of crossing the chasm, Darren talked about the electric light bulb. At introduction it competed against candles, oil lamps, gas lamps, etc. But it was the most expensive lighting system at the time.

But when people realized that electric lights could allow you to do stuff at night and not just go to sleep, adoption took off. At the time, the competitors to the electric bulb did provide lighting, it just wasn’t very good; in fact, most people went to bed at night because the light then available was so poor.

However, the electric bulb, a higher performing lighting solution, opened up the night to other activities.

What needs to change in NAND flash marketing?

From Darren’s perspective, the problem with flash today is that marketing and sales of flash storage are all about speeds, feeds and relative pricing against disk storage. But what’s needed is to discuss the disruptive benefits of flash/NAND storage that are impossible to achieve with disk today.

What are the disruptive benefits of NAND/flash storage, unrealizable with disk today?

  1. Real time analytics and other RT applications;
  2. More responsive mobile and data center applications;
  3. Greener, quieter, and potentially denser data center;
  4. Storage for mobile, IoT and other ruggedized application environments.

Only the first three above apply  to data centers. And none seem as significant  as opening up the night, but maybe I am missing a few.

Also, the Wikipedia article cited above states that a Crossing the Chasm approach works best for disruptive or discontinuous innovations, and that more continuous innovations (ones that don’t cause significant behavioral change) do better with Everett Rogers’ standard diffusion of innovations approach (see the Wikipedia article for more).

So is NAND flash a disruptive or continuous innovation?  Darren seems firmly in the disruptive camp today.

Comments?

Photo Credit(s): 20-nanometer NAND flash chip, IntelFreePress’ photostream

Next generation NVM, 3D XPoint from Intel + Micron

Earlier this week Intel-Micron announced (see webcast here and here) a new, transistor-less NVM with 1000 times the speed of NAND (~10ns (nanosecond) access times vs. ~10µsec for NAND) and 10X the density of DRAM (currently 16Gb/DRAM chip). They call the new technology 3D XPoint™ (cross-point) NVM (non-volatile memory).

In addition to the speed and density advantages, 3D XPoint NVM also doesn’t have the endurance problems associated with today’s NAND. Intel and Micron say that it has 1000 times the endurance of today’s NAND (MLC NAND endurance is ~3000 write (P/E) cycles).

At 10X current DRAM density, it’s roughly equivalent to today’s MLC/TLC NAND capacities per chip. And at 1000 times the speed of NAND, it’s roughly equivalent in performance to DDR4 DRAM. Of course, because it’s non-volatile, it should take much less power than current DRAM technology, as there’s no need for refresh.
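
To make those multipliers concrete, here is the arithmetic using only the round numbers quoted above (a rough sketch, not measured specs for any shipping part):

```python
# Working the webcast's round numbers through to absolute terms.
# These are the quoted multipliers from the announcement, not measured
# specs for any shipping part.

nand_access_s = 10e-6                    # ~10 microsecond NAND access time
xpoint_access_s = nand_access_s / 1000   # "1000 times the speed of NAND"
print(f"3D XPoint access time ~{xpoint_access_s * 1e9:.0f} ns")

mlc_nand_pe_cycles = 3_000               # MLC NAND endurance quoted above
xpoint_pe_cycles = 1000 * mlc_nand_pe_cycles   # "1000 times the endurance"
print(f"3D XPoint endurance ~{xpoint_pe_cycles:,} P/E cycles")

dram_chip_gb = 16                        # current Gb per DRAM chip
xpoint_chip_gb = 10 * dram_chip_gb       # "10X the density of DRAM"
print(f"3D XPoint density ~{xpoint_chip_gb} Gb per chip")
```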

We have talked about the end of NAND before (see The end of NAND is here, maybe). If this is truly more scaleable than NAND, it seems to me that it does signal the end of NAND. It’s just a matter of time before endurance and/or density growth of NAND hits a wall, and then 3D XPoint can do everything NAND can do, but better, faster and more reliably.

3D XPoint technology

The technology comes from a dual-layer design divided into columns, with access connections at the top and bottom of the columns laid out in orthogonal patterns that together form a grid addressing a single bit of memory. This also means that 3D XPoint NVM can be read and written a bit at a time (rather than a “page” at a time as with NAND) and doesn’t have to be initialized to 0 before being written, as NAND does.

The 3D nature of the new NVM comes from the fact that you can build up as many layers of these structures as you want to create more and more NVM cells. The microscopic pillar between the two layers of wiring includes a memory cell and a switch component, which together allow a bit of data to be accessed (via the switch) and stored/read (via the memory cell). In the photo above, the yellow material is the switch and the green material is the memory cell.

A memory cell operates by using a bulk property change of the material, unlike DRAM (which holds charge in capacitors) or NAND (which stores electrons on floating gates). As such it uses all of the material to hold a memory value, which should allow 3D XPoint memory cells to scale downward much better than NAND or DRAM.
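
Here is a toy model of cross-point addressing. It is not Intel-Micron’s actual design, just an illustration of how selecting one row wire and one column wire isolates a single cell, and why that makes bit-at-a-time writes possible with no erase step.

```python
# A toy model of cross-point addressing: a grid of word lines and bit lines
# where each intersection holds one cell plus a selector. Purely
# illustrative; not Intel-Micron's implementation.

class CrossPointLayer:
    def __init__(self, rows, cols):
        # Each intersection of a row wire and a column wire holds one
        # memory cell plus a selector (the "switch" in the description).
        self.cells = [[0] * cols for _ in range(rows)]

    def write_bit(self, row, col, value):
        # Selecting one row and one column activates exactly one selector,
        # so only that cell's bulk state is changed; no page erase needed.
        self.cells[row][col] = value

    def read_bit(self, row, col):
        return self.cells[row][col]

layer = CrossPointLayer(rows=4, cols=4)
layer.write_bit(2, 3, 1)   # flip a single bit in place
print(layer.read_bit(2, 3), layer.read_bit(0, 0))  # -> 1 0
```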

Intel and Micron are calling the new 3D XPoint NVM both storage AND memory. That is, it’s suitable for fast-access, non-volatile data storage and for non-volatile processor memory.

3D XPoint NVM chips in manufacturing today

The first chips with the new technology are being manufactured today at Intel-Micron’s joint manufacturing fab in Idaho. These first chips will supply 128Gb of NVM and use just two layers of 3D XPoint memory.

Intel and Micron will independently produce system products (read SSDs or NVM memory devices) with the new technology during 2016. They mentioned during the webcast that the technology is expected to be attached (as SSDs) to a PCIe bus and use NVMe as an interface to read and write it. Although if it’s used in a memory application, it might be better attached to the processor memory bus.

The expectation is that the 3D XPoint cost/bit will be somewhere in between NAND and DRAM, i.e. more expensive than NAND but less expensive than DRAM. It’s nice to be the only companies in the world with a new, better storage AND memory technology.

~~~~

Over the last 10 years or so, SSDs (solid state devices) all used NAND technologies of one form or another, but after today SSDs can be made from NAND or 3D XPoint technology.

Some expected uses for the new NVM are in gaming applications (currently constrained by storage speed and memory size) and in in-memory databases (which are memory-size constrained). There was mention on the webcast of edge analytics as well.

Welcome to the dawn of a new age of computer storage AND memory.

Photo Credits: (c) 2015 Intel and Micron, from Intel’s 3D XPoint website

Nantero emerges from stealth with CNT-based NRAM

Nantero just came out of stealth this week and bagged $31.5M in a Series E funding round. Apparently, Nantero has been developing a new form of non-volatile RAM (NRAM), based on carbon nanotubes (CNT), which seems to work like an old T-bar switch, only at the nm scale and using CNT for the wiring.

They were founded in 2001, and are finally ready to emerge from stealth. Nantero already has 175+ issued patents, with another 200 patents pending. The NRAM is already in production at 7 CMOS fabs, and they are sampling 4Mb NRAM chips to a number of customers.

NRAM vs. NAND

Performance of the NRAM is on a par with DRAM (~100 times faster than NAND), it can be configured in 3D, and it supports MLC (multi-bit per cell) configurations. NRAM also supports orders of magnitude more accesses (I assume they mean writes) and stores data much longer than NAND.

The only question is capacity: with shipping NAND on the order of 200Gb per chip, 4Mb NRAM is roughly four orders of magnitude behind NAND. Nantero claims that their CNT-NRAM CMOS process can be scaled down to <5nm, which is one or two generations below the current NAND scale factor, and assuming they can pack as many bits in the same area, they should be able to compete well with NAND. They claim that their NRAM technology is capable of terabit capacities (assumed to be at the 5nm node).

The other nice thing is that Nantero says the new NRAM uses less power than DRAM, which means that in addition to attaining higher capacities and DRAM-like access times, it will also reduce power consumption.

It seems a natural for mobile applications. The press release claims it was already tested in space and there are customers looking at the technology for automobiles. The company claims the total addressable market is ~$170B USD. Which probably includes DRAM and NAND together.

CNT in CMOS chips?

Key to Nantero’s technology was incorporating the use of CNT in CMOS processes, so that chips can be manufactured on current fab lines. It’s probably just the start of the use of CNT in electronic chips, but it’s one that could potentially pay for the technology development many times over. CNT has a number of characteristics that would be beneficial to other electronic circuitry beyond NRAM.

How quickly they can ramp the capacity up from 4Mb seems to be a significant factor, which is no doubt why they went out for Series E funding.

So we have another new non-volatile memory technology. On the other hand, these guys seem to be a long way out of the lab, with something that works today and the potential to go all the way down to 5nm.

It should be interesting, as the other NV technologies start to emerge, to see which one generates sufficient market traction to succeed in the long run. Especially as NAND doesn’t seem to be slowing down much.

Comments?

Picture Credits: Wikimedia.com

Why EMC is doing Project Lightning and Thunder

Picture of atmospheric lightning striking ground near a building at night
rayo 3 by El Garza (cc) (from Flickr)

Although technically Projects Lightning and Thunder represent some interesting offshoots of EMC software, hardware and system prowess, I wonder why they would decide to go after this particular market space.

There are plenty of alternative offerings in the PCIe NAND memory card space.  Moreover, the PCIe card caching functionality, while interesting, is not that hard to replicate, and such software capability is not a serious barrier to entry for HP, IBM, NetApp and many, many others.  And the margins cannot be that great.

So why get into this low margin business?

I can see a couple of reasons why EMC might want to do this.

  • Believing in the commoditization of storage performance.  I have had this debate with a number of analysts over the years but there remain many out there that firmly believe that storage performance will become a commodity sooner, rather than later.  By entering the PCIe NAND card IO buffer space, EMC can create a beachhead in this movement that helps them build market awareness, higher manufacturing volumes, and support expertise.  As such, when the inevitable happens and high margins for enterprise storage start to deteriorate, EMC will be able to capitalize on this hard won, operational effectiveness.
  • Moving up the IO stack.  From an application’s IO request to the disk device that actually services it is a long journey, with multiple places to make money.  Currently, EMC has a significant share of everything that happens after the fabric switch, whether it is FC, iSCSI, NFS or CIFS.  What they don’t have is a significant share of the switch infrastructure or anywhere on the other (host) side of that interface stack.  Yes, they have Avamar, Networker, Documentum, and other software that helps manage, secure and protect IO activity, together with other significant investments in RSA and VMware.  But these represent adjacent market spaces rather than primary IO stack endeavors.  Lightning represents a hybrid software/hardware solution that moves EMC up the IO stack to inside the server.  As such, it represents yet another opportunity to profit from all the IO going on in the data center.
  • Making big data more effective.  The fact that Hadoop doesn’t really need or use high end storage has not been lost on most storage vendors.  With Lightning, EMC has a storage enhancement offering that can readily improve Hadoop cluster processing.  Something like Lightning’s caching software could easily be tailored to enhance HDFS file access modes and thus speed up cluster processing.  If Hadoop and big data are to be the next big consumers of storage, then speeding up cluster processing will certainly help, and profiting by doing so only makes sense.
  • Believing that SSDs will transform storage. To many of us the age of disks is waning.  SSDs, in some form or another, will be the underlying technology for the next age of storage.  The densities, performance and energy efficiency of current NAND based SSD technology are commendable but they will only get better over time.  The capabilities brought about by such technology will certainly transform the storage industry as we know it, if they haven’t already.  But where SSD technology actually emerges is still being played out in the market place.  Many believe that when industry transitions like this happen it’s best to be engaged everywhere change is likely to happen, hoping that at least some of them will succeed. Perhaps PCIe SSD cards may not take over all server IO activity but if it does, not being there or being late will certainly hurt a company’s chances to profit from it.

There may be more reasons I missed here but these seem to be the main ones.  Of the above, I think the last one, that SSDs rule the next transition, is most important to EMC.

They have been successful in the past during other industry transitions.  If anything, they have shown similar indications with their acquisitions, buying into transitions they don’t own, witness Data Domain, RSA, and VMware.  So I suspect the view in EMC is that doubling down on SSDs will enable them to ride out the next storm and be in a profitable place for the next change, whatever that might be.

And following Lightning, Project Thunder

Similarly, Project Thunder seems to represent EMC doubling their bet yet again on SSDs.  Just about every month I talk to another storage startup coming out in the market providing another new take on storage using every form of SSD imaginable.

However, Project Thunder as envisioned today is not storage, but rather some form of external shared memory.  I have heard this before, in the IBM mainframe space about 15-20 years ago.  At that time shared external memory was going to handle all mainframe IO processing and the only storage left was going to be bulk archive or migration storage – a big threat to the non-IBM mainframe storage vendors at the time.

One problem then was that the shared DRAM memory of the time was way more expensive than sophisticated disk storage and the price wasn’t coming down fast enough to counteract increased demand.  The other problem was making shared memory work with all the existing mainframe applications was not easy.  IBM at least had control over the OS, HW and most of the larger applications at the time.  Yet they still struggled to make it usable and effective, probably some lesson here for EMC.

Fast forward 20 years and NAND based SSDs are the right hardware technology to make  inexpensive shared memory happen.  In addition, the road map for NAND and other SSD technologies looks poised to continue the capacity increase and price reductions necessary to compete effectively with disk in the long run.

However, the challenges then and now seem to have as much to do with the software that makes shared external memory universally effective as with the hardware technology to implement it.  Providing a new storage tier in Linux, Windows and/or VMware is easier said than done. Most recent successes have usually been offshoots of SCSI (iSCSI, FCoE, etc).  Nevertheless, if it was good for mainframes then, it’s certainly good for Linux, Windows and VMware today.

And that seems to be where Thunder is heading, I think.

Comments?

 


Will Hybrid drives conquer enterprise storage?

Toyota Hybrid Synergy Drive Decal: RAC Future Car Challenge by Dominic's pics (cc) (from Flickr)

I saw where Seagate announced the next generation of their Momentus XT Hybrid (SSD & disk) drive this week.  We haven’t discussed hybrid drives much on this blog, but they have become a viable product family.

I am not planning on describing the new drive specs here as there was an excellent review by Greg Schulz at StorageIOblog.

However, the question some in the storage industry have had is whether hybrid drives can supplant data center storage.  I believe the answer to that is no, and I will tell you why.

Hybrid drive secrets

The secret to Seagate’s Hybrid drive lies in its FAST technology.  It provides a sort of automated disk caching that moves frequently accessed OS or boot data to NAND/SSD for quicker access times.
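
As a rough illustration (this is not Seagate’s actual FAST algorithm, just a sketch of frequency-based promotion), the caching idea looks something like this:

```python
# A sketch of frequency-based block promotion, in the spirit of what a
# hybrid drive's caching logic does. NOT Seagate's FAST algorithm; just an
# illustration that blocks read often enough get copied into NAND.

from collections import Counter

class ToyHybridCache:
    def __init__(self, nand_blocks=1024, promote_threshold=4):
        self.nand = set()                  # LBAs currently held in NAND
        self.read_counts = Counter()       # access frequency per LBA
        self.nand_blocks = nand_blocks
        self.promote_threshold = promote_threshold

    def read(self, lba):
        self.read_counts[lba] += 1
        if lba in self.nand:
            return "NAND hit (fast)"
        # Promote hot, repeatedly read blocks (e.g. boot/OS data) to NAND.
        if (self.read_counts[lba] >= self.promote_threshold
                and len(self.nand) < self.nand_blocks):
            self.nand.add(lba)
        return "disk read (slow)"

cache = ToyHybridCache()
boot_blocks = [10, 11, 12]
for _ in range(5):                         # repeated boots look alike...
    for lba in boot_blocks:
        cache.read(lba)
print(cache.read(10))                      # -> NAND hit (fast)
```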

Storage subsystem caching logic has been around for decades now, ever since the IBM 3880 Mod 11&13 storage control units came out last century.  However, these algorithms have gotten much more sophisticated over time and today can make a significant difference in storage system performance.  This can be easily witnessed by the wide variance in storage system performance on a per-disk-drive basis (e.g., see my post on Latest SPC-2 results – chart of the month).

Enterprise storage use of Hybrid drives?

The problem with using Hybrid drives in enterprise storage is that caching algorithms are based on some predictability of access/reference patterns.  When you have a Hybrid drive directly connected to a server or a PC it can view a significant portion of server IO (at least to the boot/OS volume) but more importantly, that boot/OS data is statically allocated, i.e., doesn’t move around all that much.   This means that one PC session looks pretty much like the next PC session and as such, the hybrid drive can learn an awful lot about the next IO session just by remembering the last one.

However, enterprise storage IO changes significantly from one storage session (day?) to another.  Not only are the end-user generated database transactions moving around the data, but the data itself is much more dynamically allocated, i.e., moves around a lot.

Backend data movement is especially true of automated storage tiering used in subsystems that contain both SSDs and disk drives. But it’s also true in systems that map data placement using log structured file systems.  NetApp’s Write Anywhere File Layout (WAFL) is a prominent user of this approach, but other storage systems do this as well.

In addition, any fixed, permanent mapping of a user data block to a physical disk location is becoming less useful over time as advanced storage features make dynamic or virtualized mapping a necessity.  Just consider snapshots based on copy-on-write technology, all it takes is a write to have a snapshot block be moved to a different location.

Nonetheless, the main problem is that all the smarts about what is happening to data on backend storage primarily lies at the controller level not at the drive level.  This not only applies to data mapping but also end-user/application data access, as cache hits are never even seen by a drive.  As such, Hybrid drives alone don’t make much sense in enterprise storage.

Maybe, if they were intricately tied to the subsystem

I guess one way this could all work better is if the Hybrid drive caching logic were somehow controlled by the storage subsystem.  In this way, the controller could provide hints as to which disk blocks to move into NAND.  Perhaps this is a way to distribute storage tiering activity to the backend devices, without the subsystem having to do any of the heavy lifting, i.e., the hybrid drives would do all the data movement under the guidance of the controller.
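
As a purely hypothetical sketch, no such hint command exists today (as noted below), a controller-to-drive hint interface might look something like this:

```python
# A hypothetical controller-to-drive "hint" interface. No such standard
# command exists (as the next paragraph notes); this only sketches the idea
# that the subsystem, which sees the full access picture, tells the hybrid
# drive which blocks to stage into (or drop from) its NAND.

from dataclasses import dataclass
from enum import Enum

class HintOp(Enum):
    PIN_TO_NAND = 1      # controller expects these blocks to stay hot
    EVICT_FROM_NAND = 2  # controller has remapped/moved this data

@dataclass
class CacheHint:
    op: HintOp
    start_lba: int
    block_count: int

def controller_tiering_pass(hot_extents, cold_extents):
    """Turn the controller's tiering decisions into per-drive hints."""
    hints = [CacheHint(HintOp.PIN_TO_NAND, lba, n) for lba, n in hot_extents]
    hints += [CacheHint(HintOp.EVICT_FROM_NAND, lba, n) for lba, n in cold_extents]
    return hints

for hint in controller_tiering_pass([(0x1000, 256)], [(0x8000, 128)]):
    print(hint)
```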

I don’t think this likely because it would take industry standardization to define any new “hint” commands, and they would be specific to hybrid drives.  Barring standards, it’s an interface between one storage vendor and one drive vendor.  That’s probably OK if you made both storage subsystems and hybrid drives, but there aren’t any vendors left that do both drives and storage controllers.

~~~~

So, given the state of enterprise storage today and its continuing proclivity to move data around across its backend storage, I believe hybrid drives won’t be used in enterprise storage anytime soon.

Comments?