Are neuromorphic chips a dead end?

I read a recent article in IEEE Spectrum about Intel’s Pohoiki Beach neuromorphic system and its Loihi chips, which have scaled up to 8M neurons (Intel’s neuromorphic system hits 8M neurons). In the last month or so I wrote about two startups, one of which seemed (?) to be working on neuromorphic chip development (see my Photonics computing sees the light of day post).


I’ve been writing about neuromorphic chips since 2011, 8 long years (see IBM SyNAPSE chip post from 2011 or search my site for “neuromorphic”), and none have successfully reached the market. The problems with neuromorphic architectures have always been twofold: scaling AND software.

Scaling up neurons

The human brain has ~86B neurons (see Wikipedia human brain article). So, 8 million neuromorphic neurons is great, but it’s about 10,000X too few. And that doesn’t count the connections between neurons. Some human neurons have over 1000 connections to other nerve cells (can’t seem to find this reference anymore?).

Wikimedia commons (481px-Chemical_synapse_schema_cropped)

To get from a single chip with 125K neurons to their 8M neuron system, Intel took 64 chips and put them on a couple of boards. To scale that to 86B or so would take ~690,000 of their neuromorphic chips. Now, no one can say whether some level below 86B neuromorphic neurons couldn’t support a useful AI solution, but the scaling problem still exists.
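For what it’s worth, the chip-count arithmetic is easy to check. Here’s a back-of-the-envelope sketch using the figures cited above (~86B neurons, ~125K neurons per Loihi chip):

```c
#include <stdio.h>

/* Back-of-the-envelope scaling check using the figures cited above. */
int main(void)
{
    double brain_neurons    = 86e9;   /* ~86B neurons in a human brain */
    double neurons_per_chip = 125e3;  /* ~125K neurons per Loihi chip  */

    printf("chips needed: ~%.0f\n", brain_neurons / neurons_per_chip);
    /* prints ~688000, i.e. roughly 690,000 Loihi chips */
    return 0;
}
```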

Then there’s the problem of synapse connections between neuromorphic neurons. The article says that Loihi chips are connected in a hierarchical routing network, which implies to me that there are switches and master switches (and maybe a really big master switch) in their 8M neuromorphic neuron system. Adding another 4 orders of magnitude more neuromorphic neurons to this may be impossible, or may at least require another 4 sets of progressively larger switches to be added to their interconnect network. There’s also a question of how many hops, and the resultant latency, in connecting two neuromorphic neurons together, but that seems to be the least of the problems with neuromorphic architectures.
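To put a rough number on that switch hierarchy, here’s a quick sketch. The fan-out per routing switch is purely my assumption (I haven’t seen Intel publish one), so treat the result as illustrative only:

```c
#include <math.h>
#include <stdio.h>

/* Rough estimate of switch levels in a hierarchical (tree) interconnect.
 * The fan-out figure is an assumption, not an Intel-published number. */
int main(void)
{
    double chips  = 690000.0; /* chips needed for ~86B neurons (see above) */
    double fanout = 64.0;     /* assumed ports per routing switch          */

    double levels = ceil(log(chips) / log(fanout));
    printf("switch levels needed: ~%.0f\n", levels); /* ~4 at fan-out 64 */
    return 0;
}
```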

Missing software abstractions

The first time I heard about neuromorphic chips, I asked what the software looked like. The only thing I heard was that it was complex, not very user friendly, and that they didn’t want to talk about it.

I keep asking about software for neuromorphic chips and still haven’t gotten a decent answer. So, what’s the problem? These days, software is easy to do, relatively inexpensive to produce and can range from spaghetti code to hierarchical masterpieces, so there’s plenty of room to innovate here.

But whenever I talk to engineers about what the software looks like, it almost seems like a software version of an early plugboard unit-record computer (essentially card sorters). Only instead of wires, you have software neuromorphic network connections, and instead of electro-magnetic devices, one has spiking neuromorphic neurons.

The way we left plugboards behind was by building up hardware abstractions such as adders, shifters, multipliers, etc. and moving away from punch cards as a storage medium. Somewhere along this transition, we created programming languages like (macro) assemblers, COBOL, FORTRAN, LISP, etc. It’s the software languages that brought computing out of the labs and into the market.

It’s been at least 8 years now, and yet no one has built a spiking neuromorphic computer language. Why not?

I think the problem is there’s no level of abstraction above a neuron. Where’s the arithmetic logic unit (ALU) or register equivalent in neuromorphic computers? They don’t exist as far as I can see.
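To make the point concrete, here’s a minimal, purely illustrative sketch of the level of abstraction programmers are handed today: one leaky integrate-and-fire neuron with a hand-picked weight, threshold and input spike train (none of these constants come from any particular chip). There’s nothing here resembling an adder, a register or any other composable building block:

```c
#include <stdio.h>

/* A minimal leaky integrate-and-fire neuron, to illustrate the level of
 * abstraction available today: individual neurons, weights and spikes.
 * All constants are illustrative, not tied to any particular chip. */
int main(void)
{
    double v = 0.0;                /* membrane potential           */
    const double leak = 0.9;       /* leak factor per time step    */
    const double w = 0.5;          /* synaptic weight of one input */
    const double threshold = 1.0;

    for (int t = 0; t < 20; t++) {
        int input_spike = (t % 3 == 0);  /* input spike train      */
        v = v * leak + w * input_spike;  /* integrate and leak     */
        if (v >= threshold) {            /* fire and reset         */
            printf("neuron spiked at t=%d\n", t);
            v = 0.0;
        }
    }
    return 0;
}
```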

Until we can come up with some higher levels of abstraction, coding neuromorphic chips is going to be an engineering problem, not a commercial endeavor.

But neuromorphism has advantages

The IEEE article states a couple of advantages for neuromorphic computing: less energy to perform inferencing (and possibly training) and the ability to train on incremental data rather than having to train across whole datasets again.

Yes these are great, but there’s a gaggle of startups (e.g., see New GraphCore GC2 chip…, AI processing at the edge, TPU and HW-SW innovation) going after the energy problem in AI DL using Von Neumann architectures.

And the incremental training issue doesn’t seem any easier when you have ~86B neurons, some with 1000s of connections between them, to adjust correctly. From my perspective, its training advantage seems illusory at best.

Another advantage of neuromorphism is that it simulates the real analog logic of a human brain. Again, that’s great, but a brain takes ~22 years to train (to college level). Because neuromorphic chips are electronic, perhaps training could be done 100 times faster. But there’s still the software issue.

~~~~

I hate to be the bearer of bad news. There’s been some major R&D spend on neuromorphism and it continues today with no abatement.

I just think we’d all be better served figuring out how to program the beast than spending more to develop more chip hardware.

This is hard for me to say, as I have always been a proponent of hardware innovation. It’s just that neuromorphic software tools don’t exist yet. And I’m afraid I don’t see any easy way forward to make progress on this.

Comments?


Need memory, Intel’s Optane DC PM to the rescue

I attended Intel’s DataCentric Innovation Conference Tech Field Day eXclusive (TFDx) last April. There were a couple of items Intel presented at the show that piqued my interest, one of which was DL Boost (see my Intel’s new DL Boost for AI inferencing blog post) and the other was Optane DC PM (data center persistent memory). This post is about Optane DC PM.

As you already know, Optane SSDs have been on the market now for at least a year or so and have not gained much market traction as of yet. I and others attribute this to the high price differential between Optane SSDs and NVMe Flash SSDs but others may say it’s a matter of production yields – probably a little of both.

But Optane, as announced, always had another form factor (if that’s the right term): persistent memory that could dramatically increase the size of server memory to support new memory-intensive applications at a lower price than DRAM.

I was at the Nutanix .NEXT conference last week and saw a 4-socket server from Dell that had 6TB of DRAM in it (and four 44-core CPUs). I didn’t ask the price, but when I mentioned I wanted one for my home office, they said it could easily heat my house. So the other problem with a lot of DRAM is power consumption.

Optane DC PM (data center persistent memory) is intended to solve both the high cost and high power consumption problems of DRAM.

How does it work in a server

The newer Intel motherboards support up to 12 slots of memory per socket. And up to 6 of these can be Optane DC PM (512GB DIMM) or 3TB per socket. Optane DC PM is accessed via L1-L2 caching just like any other memory. Apparently with a dual socket system you can have up to 11 Optane DC PM DIMMs on the motherboard.

L1-L2 cache access times are on the order of picoseconds (10^-12 seconds), DRAM is on the order of nanoseconds (10^-9 seconds) and flash is on the order of 100 microseconds (100 x 10^-6 seconds). So there’s a vast access time gulf between DRAM and Flash that could be exploited with the right technology – enter Optane DC PM.

The only detailed info I could find on Optane DC PM access times was in a research paper (see Basic performance of Intel Optane DC PMM research paper), which said Optane DC PM access times are ~350 nanoseconds, roughly midway between DRAM and Flash. At the show, the development team indicated that Optane DC PM supports about 3GB/sec of bandwidth per module (DIMM).

There are two ways to use Optane DC PM:

  1. Memory mode – in Memory mode, the data in Optane DC PM is thrown away during a power cycle. A block of DRAM is used as a cache, or rather as a virtual memory block, in front of the Optane DC PM, which acts as a paging store. Data is brought into the DRAM cache when accessed using its (virtual) DRAM address and, when no longer used, it gets evicted (destaged) back out to Optane DC PM. When power is cycled, the data in Optane DC PM is cleared out. Optane DC PM supports AES XTS-256 bit encryption and can easily be cleared by throwing away encryption keys during a power cycle.
  2. App Direct mode – in App Direct mode, Optane DC PM is accessed directly using application APIs and its data persists across power cycles. For App Direct mode, Optane DC PM is still AES 256 encrypted, but here the encryption key is maintained across power cycles; it is locked on power up and you need to use a pass phrase to unlock it. In this mode, applications are responsible for flushing (L1-L2) caches to Optane to retain all data written through L1-L2 to the Optane DC PM. There’s a GitHub Persistent Memory Development Kit (PMDK) library that supports the App Direct mode API that applications need to use (a minimal sketch follows below).
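For a feel of what the App Direct API looks like, here’s a minimal sketch using PMDK’s libpmem (one of several PMDK libraries). The mount point, file name and size are illustrative, and this is a sketch rather than production code:

```c
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

/* Minimal App Direct sketch using PMDK's libpmem (build with -lpmem).
 * The path assumes a DAX-mounted filesystem backed by Optane DC PM;
 * both the path and the size here are illustrative. */
int main(void)
{
    size_t mapped_len;
    int is_pmem;

    char *addr = pmem_map_file("/mnt/pmem0/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char *msg = "persists across power cycles";
    if (is_pmem) {
        /* copy, then flush CPU caches so the data reaches the media */
        pmem_memcpy_persist(addr, msg, strlen(msg) + 1);
    } else {
        /* not real persistent memory: fall back to msync semantics */
        memcpy(addr, msg, strlen(msg) + 1);
        pmem_msync(addr, strlen(msg) + 1);
    }

    pmem_unmap(addr, mapped_len);
    return 0;
}
```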

Both modes use DDR-T (transactional DDR4), a new memory bus protocol for Optane DC PM access. In the DDR-T protocol, access to the memory bus is requested by a CPU and is granted by an Optane DC PM DIMM. All Optane DC PM DIMMs on a system can be accessed in parallel.

You can use RDMA to replicate (App direct?) Optane DC PM data from one system to another. In order to support Memory and App Direct mode, Optane DC PM required CPU, BIOS and (application) software changes.

Most of the Optane DC PM support and cryptography logic is implemented in hardware. Optane DC PM has an address indirection table (AIT) to support 3D XPoint wear leveling; it’s maintained in DRAM but flushed to Optane during power loss. Transfers to 3D XPoint media are in 256 byte cache lines, but the memory bus operates in 64 byte cache lines, so there’s a (DRAM) buffer between the media and the memory bus.

Optane also supports a high availability (device failure) mode. In this scenario, if one Optane DC PM DIMM fails, the system can swap a spare Optane DC PM DIMM into that address space and continue to operate. If a 2nd Optane DC PM DIMM fails, then the system fails. I’m not sure what happens to the data on the original Optane DC PM DIMM during a failure. It seems to me data would be lost, but it could depend on its failure mode.

In Memory mode, the expected ratio between DRAM size and Optane DC PM size should be 32GB DRAM/6TB Optane DC PM. At the TFDx event, the Optane DC PM team had some performance charts showing different DRAM cache miss rates. Intel also announced new CPU monitoring statistics to track applications/workloads impacting DRAM/Optane DC PM in Memory mode and to track Optane DC PM health.

Optane DC PM usage modes are established by the BIOS. It’s flexible enough to have the Optane DC PM usage modes be defined on a region by region basis. Not exactly sure what a region is, but it could be an address range spanning MB(?) of Optane DC PM. With both modes in operation on a system, data can be moved from Memory mode Optane to App direct mode Optane or vice versa.

Intel expects the lifetime of an Optane DC PM DIMM to be from 200-370PB of data writes. Optane DC PMs have a 5 year warranty. Given its bandwidth (3GB/sec), 200PB of data writes should last ~2 years, but that’s at 100% duty cycle, writing 3GB of data every second of every day. So, 5 years should be a reasonable guarantee using a more realistic ~40% duty cycle.
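The endurance arithmetic above is simple enough to check; here’s a quick sketch (the 40% duty cycle is the same assumption used in the text):

```c
#include <stdio.h>

/* Sanity check on the endurance arithmetic in the paragraph above. */
int main(void)
{
    double writes_pb = 200.0;  /* rated lifetime writes, in PB     */
    double bw_gb_s   = 3.0;    /* per-DIMM write bandwidth, GB/sec */

    double seconds = writes_pb * 1e6 / bw_gb_s;  /* PB -> GB, then / (GB/s) */
    double years   = seconds / (3600.0 * 24 * 365);

    printf("100%% duty cycle: ~%.1f years\n", years);        /* ~2.1 years */
    printf(" 40%% duty cycle: ~%.1f years\n", years / 0.4);  /* ~5.3 years */
    return 0;
}
```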

What applications use Optane DC PM

The one of interest to most people seems to be SAP HANA. According to the development team, SAP HANA could use App Direct mode for main database storage and use DRAM for its delta column store. Cassandra could also use Optane in App Direct mode in a similar fashion.

Also, a key-value store like Redis could use Optane DC PM to store values and use DRAM to store keys.

But any application could take advantage of the extra memory made available with Optane DC PM DIMMs in Memory mode today. Of course any use of Optane DC PM would require the right levels of Intel Xeon CPUs (Cascade Lake), BIOSes and motherboards.

~~~~

If you’re interested in learning more, TFDx videos of the event are available on the website noted previously. These TFDx bloggers also have posts specifically on Optane DC PM:

The coolest thing since sliced bread – Optane by Matt Leib, (@MBLeib)

Intel’s crossover point: A 3D spork by Stephen Foskett (@SFoskett)

Intel answering SAP HANA’s tough questions by Keith Townsend (@CTOAdvisor)

Comments?

Intel’s new DL Boost for DL AI inferencing

I was at a TechFieldDay Extra with Intel Data Centric Innovation Conference last week in San Francisco. It was a lavish affair with many industry analysts in attendance besides the TFDx crew.

At the event Intel announced a number of new products including the availability of their next generation scalable Xeon processor chips, new Optane DC PM (DIMM) and software, new Ethernet (800) NIC cards, a new FPGA line (10nm) and DL (deep learning inferencing) Boost functionality.


I was most interested in the DL Boost and Optane DC PM solutions. For this post I focus on DL Boost.

DL Boost for DL inferencing on Xeon

Intel’s DL Boost technology provides a new integer 8 bit precision (INT 8) matrix multiply & summation instruction which can be used to speed up DL inferencing operations. As those who have been following along with my AI-DL-machine learning (ML) blog posts (latest being Learning Machine Learning part 3) probably know, deep learning is machine learning that processes data to create a neural network, made up of a number of layers and a number of nodes, with floating point weights used to transform inputs into outputs.

All DL AI projects involve at least two phases: model training and model inferencing (prediction, classification, AI result, etc.). Although both of these activities involve matrix calculations, model training involves a lot more of these compute intensive operations than inferencing. In fact, while training typically is done on GPUs or other special purpose compute hardware (TPU, IPUs, etc.) inferencing can typically be done on standard off the shelf CPUs.

Historically, inferencing used floating point matrix multiplication and summation functionality, taking input from sensors, logs, photos, etc. and performing the model logic to create an output.

Intel believes (with industry analyst agreement) that over time, 50% or more of the DL AI workload is going to involve inferencing. Hence, the focus on this end of the AI workload, at least for now.

For example, speech recognition AI can take a long time to process audio recordings and use reinforcement learning to train a recognition model. But, once trained, you could use that recognition AI model in anything from smart speakers, to speech-to-text dictation machines, to voice response systems, etc. In all of these, the recognition model is passed a voice recording (or voice in real time) and processes it to create a text version of the speech.

But all of this has historically been done in floating point (FP) 32 (bit precision) or FP 16. Google’s TPU is capable of doing this with less precision, but to my knowledge, up to this point, it’s always been floating point.

What is DL Boost

What Intel has done with DL Boost is to create a new X86 instruction which can perform an integer (INT) 8 (bit precision) matrix multiplication and summation in fewer cycles than it took before. Intel believes that if customers were to modify their trained AI neural network models to move from FP 32 (or 16) to INT 8, they could perform inferencing much faster on Xeon Cascade Lake CPUs than they could before, and not have to rely on GPUs for this activity at all.
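To give a flavor of the instruction, here’s a minimal sketch using the AVX512-VNNI intrinsic that Cascade Lake exposes. This is my own illustration, not Intel’s library code, and the data values are made up:

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal DL Boost flavored sketch: the AVX512-VNNI VPDPBUSD instruction
 * multiplies unsigned 8-bit activations by signed 8-bit weights and
 * accumulates into 32-bit sums. Compile with: gcc -mavx512f -mavx512vnni */
int main(void)
{
    uint8_t act[64];   /* quantized activations (made-up values) */
    int8_t  wgt[64];   /* quantized weights (made-up values)     */
    for (int i = 0; i < 64; i++) { act[i] = (uint8_t)i; wgt[i] = (int8_t)((i % 5) - 2); }

    __m512i a   = _mm512_loadu_si512(act);
    __m512i w   = _mm512_loadu_si512(wgt);
    __m512i acc = _mm512_setzero_si512();

    /* 64 INT8 multiply-accumulates in a single instruction */
    acc = _mm512_dpbusd_epi32(acc, a, w);

    printf("dot product = %d\n", _mm512_reduce_add_epi32(acc));
    return 0;
}
```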

Yes, this does require hand optimization of the trained AI neural network. Some of this may be automated, but not all. Intel claims the precision loss, if done properly, is less than a few percent and its impact on AI inferencing correctness is negligible.
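For those wondering where that small precision loss comes from, here’s a minimal sketch of symmetric per-tensor quantization: map FP 32 weights onto INT 8 with a single scale factor and look at the round-trip error. Real toolchains do this per layer or per channel with calibration data; the weights below are made up:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal symmetric quantization sketch: FP32 weights -> INT8 and back.
 * The weight values are illustrative only. */
int main(void)
{
    float w[5] = {0.12f, -0.98f, 0.53f, -0.07f, 0.81f};

    float max_abs = 0.0f;
    for (int i = 0; i < 5; i++)
        if (fabsf(w[i]) > max_abs) max_abs = fabsf(w[i]);

    float scale = max_abs / 127.0f;  /* INT8 range used: -127..127 */
    for (int i = 0; i < 5; i++) {
        int8_t q    = (int8_t)lrintf(w[i] / scale);  /* quantize   */
        float  back = q * scale;                     /* dequantize */
        printf("w=%+.3f  q=%+4d  round-trip error=%+.4f\n",
               w[i], q, back - w[i]);
    }
    return 0;
}
```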

At the moment, for all the DL modeling I have done, I have never looked at the trained model’s weights, leaving this to TensorFlow/Keras to manage for me. But I’m not creating production level DL AI systems (yet). So, I don’t know what it would take to modify my AI models to use INT 8 nor what level of degradation in correctness would ensue. But I also don’t have Cascade Lake Xeon CPUs available.

Some potential problems here:

  1. Manual activity to hand tune the INT 8 neural network is not going to be that popular, except for those organizations where inferencing requires GPUs.
  2. Most production DL AI models undergo some form of personalization for a user or implementation instance, which would require a further FP to INT conversion for each user/implementation.
  3. Most production DL AI models also undergo periodic retraining to fine tune the model with the latest data that has been accumulated. This would also require further FP to INT conversion after each training cycle.

In the end, there’s an advantage for production AI inferencing, for models that don’t require substantial retraining/personalization as they don’t change that often. And there’s a definite cost advantage to using DL Boost INT 8, for those AI inferencing that must use GPUs today to perform in real time or under other performance constraints.

But hand converting neural networks reminds me of writing assembly code for modules that can impact performance. This is normally reserved for only a select few modules or functions that execute a lot. However, DL models are much more monolithic and, by definition, less modular. Identifying which models (or model layers) within a production DL AI solution are performance sensitive, and hand optimizing them to work on CPUs rather than GPUs, seems like a hard task.

It would be better, from my perspective, to create a single FP 16 matrix multiplication instruction. Alternatively, create some software that would automatically convert any DL AI model (or model layer) from FP to INT 8. That way DL Boost optimization would be just another step in the model training process, automatically generated to see A) if it loses too much accuracy and B) whether it’s worthwhile using CPU inferencing.

~~~~

Comments?

GPU growth and the compute changeover

I attended SC17 last month in Denver, and Nvidia had almost as big a presence as Intel. Their VR display was very nice compared to some of the others at the show.

GPU past

GPUs were originally designed to support visualization and the computation to render a specific scene quickly and efficiently. In order to do this, they were designed with 100s (now 1000s) of arithmetically intensive (floating point) compute engines, where each engine could be given an individual pixel or segment of an image and compute all the light rays and visual aspects pertinent to that scene in a very short amount of time. This created a quick and efficient multi-core engine to render textures and map polygons of an image.

Image rendering required highly parallel computations, and as such more compute engines meant faster scene throughput. This led to today’s GPUs that have 1000s of cores. In contrast, standard microprocessor CPUs have 10-60 compute cores today.

GPUs today 

Funny thing, there are lots of other applications for many-core engines. For example, GPUs also have a role to play in the development and mining of cryptocurrencies because of their ability to perform many cryptographic operations a second, all in parallel.

Another significant driver of GPU sales and usage today seems to be AI, especially machine learning. For instance, at SC17, visual image recognition was on display at dozens of booths besides Intel and Nvidia. Such image recognition  AI requires a lot of floating point computation to perform well.

I saw one article that said GPUs can speed up Machine Learning (ML) by a factor of 250 over standard CPUs. There’s a highly entertaining video clip at the bottom of the Nvidia post that shows how parallel compute works as compared to standard CPUs.

GPUs play an important role in speech recognition and image recognition (through ML) as well. So we find that they are being used in self-driving cars, face recognition, and other image processing/speech recognition tasks.

The latest Apple iPhone X has a Neural Engine, which, my best guess, is just another version of a GPU. And the iPhone 8 has a custom GPU.

Tesla is also working on a custom AI engine for its self driving cars.

So, over time, GPUs will have an increasing role to play in the future of AI and cryptocurrency and, as always, image rendering.

 

Photo Credit(s): SC17 logo, SC17 website;

GTX1070(GP104) vs. GTX1060(GP106) by Fritzchens Fritz;

Intel 2nd Generation core microprocessor codenamed Sandy Bridge wafer by Intel Free Press

Intel Cloud Day 2016 news and views

A couple of weeks back I was at Intel Cloud Day 2016 with the rest of the TFD team. We listened to a number of presentations from Intel’s management team, mostly about how the IT world was changing and how they planned to help lead the transition to the new cloud world.

The view from Intel is that any organization with 1200 to 1500 servers has enough scale to do a private cloud deployment that would be more economical than using public cloud services. Intel’s new goal is to facilitate (private) 10,000 clouds, being deployed across the world.

In order to facilitate the next 10,000, Intel is working hard to introduce a number of new technologies and programs that they feel can make it happen. One that was discussed at the show was the new OpenStack scheduler based on Google’s open sourced Kubernetes technology, which provides container management for Google’s own infrastructure and now supports the OpenStack framework.

Another way Intel is helping is by building a new 1000 (500 now) server cloud test lab in San Antonio, TX. Of course the servers will use the latest Xeon chips from Intel (see below for more info on the latest chips). The other enabling technology discussed a lot at the show was software defined infrastructure (SDI), which applies across the data center, networking and storage.

According to Intel, security isn’t the number 1 concern holding back cloud deployments anymore. Nowadays it’s more the lack of skills that’s governing how quickly the enterprise moves to the cloud.

At the event, Intel talked about a couple of verticals that seemed to be ahead of the pack in adopting cloud services, namely, education and healthcare.  They also spent a lot of time talking about the new technologies they were introducing today.

QoM 16-001: Will NVMe GA in enterprise storage over the next 12 months? Yes 0.68 probability

The latest analyst forecast contest Question of the Month (QoM 16-001) asks whether NVMe PCIe-SSDs will GA in enterprise storage over the next 12 months. For more information on our analyst forecast contest, please check out the post.

There are a couple of considerations that would impact NVMe adoption.

Availability of NVMe SSDs?

Intel, Samsung, Seagate and WD-HGST are currently shipping 2.5″ & HH-HL NVMe PCIe SSDs for servers. Hynix, Toshiba, and others had samples at last year’s Flash Memory Summit and promised production early this year. So yes, they are available, from at least 3 sources now, including enterprise class storage vendors, with more coming online over the year.

Some advantages of NVMe SSDs?

Advantages of NVMe (compiled from NVMe organization and other NVMe sources):

  • Lower SSD write and read IO access latencies
  • Higher mixed IOPS performance
  • Widespread OS support (though not necessarily used in storage systems)
  • Lower power consumption
  • X4 PCIe support
  • NVMe over Fabrics (FC & new RDMA) support

Disadvantages of NVMe SSDs?

Disadvantages of NVMe (compiled from NVMe drive reviewers and other sources):

  • Smaller form factors limit (MLC) SSD capacity
  • New cabling (U.2) for 2.5″ SSDs
  • BIOS changes to support boot from NVMe (not much of a problem in storage systems)

Not many enterprise storage vendors use PCIe Flash

Current storage vendors that use PCIe flash (sourced from web searches on PCIe flash for major storage vendors):

  • Using PCIe SSDs as part of or the only storage tier
    • Kaminario K2 all flash array
    • NexGen storage hybrid storage
  • NetApp (PCIe) FlashCache
  • Others (?2) with Volatile cache backed by PCIe SSDs
  • Others (?2) using PCIe SSD as Non-volatile cache

Only a few of these will have new storage hardware out over the next 12 months. I estimated (earlier) about 1/3 of current storage vendors will release new hardware over the next 12 months.

The advantages of NVMe don’t matter as much unless you have a lot of PCIe flash in your system, so the 2 vendors above that use PCIe SSDs as storage are probably most likely to move to NVMe. But the limited size of NVMe drives and the meagre performance speed-up available from NVMe may make adoption less likely. So maybe there’s a 0.3 probability * 1/3 (of vendors with hardware refresh) * 2 (vendors using PCIe flash as storage), or ~0.2.

For the other 5 candidates listed above, the advantages of NVMe aren’t that significant, so if they are refreshing their hardware, there’s maybe a low chance that they will take on NVMe, mainly because it’s going to become the predominant PCIe flash protocol. So maybe that adds another 0.15 probability * 1/3 * 5, or ~0.25. (When I originally formulated the NVMe QoM, I had not anticipated NVMe SSDs backing up volatile cache, but they certainly exist today.)

Other potential candidates for NVMe are the startups. EMC DSSD uses a PCIe fabric for its NAND support, and could already be making use of NVMe. (Although I would not classify DSSD as an enterprise storage vendor.)

But there may be other startups out there using PCIe flash that would consider moving to NVMe. A while back, I estimated there are ~3 startups likely to emerge over the next year. It’s almost a certainty that they would all have some sort of flash storage, but maybe only one of them would make use of PCIe SSDs. And it’s unclear whether they would use NVMe drives as main storage or for caching. So, splitting the difference in probabilities, we will use 0.23 probability * 1, or ~0.23.

So, totaling it all up, my forecast for NVMe adoption in GA enterprise storage hardware over the next 12 months is Yes, with 0.68 probability.
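For the record, here’s the arithmetic behind that 0.68, using the component probabilities estimated above:

```c
#include <stdio.h>

/* Totals up the forecast components estimated in the paragraphs above. */
int main(void)
{
    double pcie_as_storage = 0.30 * (1.0 / 3.0) * 2;  /* ~0.20 */
    double pcie_as_cache   = 0.15 * (1.0 / 3.0) * 5;  /* ~0.25 */
    double startups        = 0.23 * 1;                /* ~0.23 */

    printf("P(NVMe GA in enterprise storage in 12 months) = %.2f\n",
           pcie_as_storage + pcie_as_cache + startups); /* prints 0.68 */
    return 0;
}
```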

The other likely candidates to support NVMe are software defined storage or hyper converged storage. I don’t list these as enterprise storage vendors, but I could be convinced that this was a mistake. If I add in SW defined storage, the probability goes up to the high 0.80s or low 0.90s.

Comments?

 

Next generation NVM, 3D XPoint from Intel + Micron

Earlier this week Intel-Micron announced (see webcast here and here) a new, transistor-less NVM with 1000 times the speed of NAND (~10ns (nanosecond) access times vs. ~10µsec for NAND) and 10X the density of DRAM (currently 16Gb/DRAM chip). They call the new technology 3D XPoint™ (cross-point) NVM (non-volatile memory).

In addition to the speed and density advantages, 3D XPoint NVM also doesn’t have the endurance problems associated with today’s NAND. Intel and Micron say that it has 1000 times the endurance of today’s NAND (MLC NAND endurance is ~3000 write (P/E) cycles).

At 10X current DRAM density, it’s roughly equivalent to today’s MLC/TLC NAND capacities per chip. And at 1000 times the speed of NAND, it’s roughly equivalent in performance to DDR4 DRAM. Of course, because it’s non-volatile, it should take much less power to use than current DRAM technology; there’s no need for power refresh.

We have talked about the end of NAND before (see The end of NAND is here, maybe). If this is truly more scalable than NAND, it seems to me that it does signal the end of NAND. It’s just a matter of time before endurance and/or density growth of NAND hits a wall, and then 3D XPoint can do everything NAND can do, but better, faster and more reliably.

3D XPoint technology

The technology comes from a dual layer design which is divided into columns; at the top and bottom of the columns are access connections laid out in an orthogonal pattern that together form a grid to address a single bit of memory. This also means that 3D XPoint NVM can be read and written a bit at a time (rather than a “page” at a time with NAND) and doesn’t have to be initialized to 0 to be written, like NAND.
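To illustrate what bit/byte-level access buys you over NAND’s page-at-a-time model, here’s a purely conceptual sketch (not real device or driver code; the page size is illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Conceptual contrast only: NAND must read, erase and reprogram a whole
 * page to change one byte; a bit/byte-addressable medium like 3D XPoint
 * can update in place. The 16KB page size is illustrative. */
#define PAGE_SIZE 16384

static uint8_t nand_page[PAGE_SIZE];    /* stands in for a NAND page  */
static uint8_t xpoint_media[PAGE_SIZE]; /* stands in for XPoint media */

static void nand_update_byte(size_t offset, uint8_t value)
{
    uint8_t buf[PAGE_SIZE];
    memcpy(buf, nand_page, PAGE_SIZE);  /* read the whole old page        */
    buf[offset] = value;                /* modify one byte                */
    memcpy(nand_page, buf, PAGE_SIZE);  /* erase + program the whole page */
}

static void xpoint_update_byte(size_t offset, uint8_t value)
{
    xpoint_media[offset] = value;       /* write in place, no erase       */
}

int main(void)
{
    nand_update_byte(42, 0xAB);
    xpoint_update_byte(42, 0xAB);
    printf("%02x %02x\n", nand_page[42], xpoint_media[42]);
    return 0;
}
```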

The 3D nature of the new NVM comes from the fact that you can build up as many layers as you want of these structures to create more and more NVM cells. The microscopic pillar between the two layers of wiring includes a memory cell and a switch component, which allows a bit of data to be accessed (via the switch) and stored/read (via the memory cell). In the photo above, the yellow material is a switch and the green material is a memory cell.

A memory cell operates by using a bulk property change of the material, unlike DRAM (capacitors holding charge) or NAND (floating gates of electrons). As such, it uses all of the material to hold a memory value, which should allow 3D XPoint memory cells to scale downwards much better than NAND or DRAM.

Intel and Micron are calling the new 3D XPoint NVM both storage AND memory. That is, it’s suitable for fast access, non-volatile data storage and for non-volatile processor memory.

3D XPoint NVM chips in manufacturing today

The first chips with the new technology are being manufactured today at Intel-Micron’s joint manufacturing fab in Idaho. They will supply 128Gb of NVM and use just two layers of 3D XPoint memory.

Intel and Micron will independently produce system products (read SSDs or NVM memory devices) with the new technology during 2016. They mentioned during the webcast that the technology is expected to be attached (as SSDs) to a PCIe bus and use NVMe as an interface to read and write it. Although if it’s used in a memory application, it might be better attached to the processor memory bus.

The expectation is that the 3D XPoint cost/bit will be somewhere in between NAND and DRAM, i.e. more expensive than NAND but less expensive than DRAM. It’s nice to be the only companies in the world with a new, better storage AND memory technology.

~~~~

Over the last 10 years or so, SSDs (solid state devices) all used NAND technologies of one form or another, but after today SSDs can be made from NAND or 3D XPoint technology.

Some expected uses for the new NVM is in gaming applications (currently storage speed and memory constrained) and for in-memory databases (which are memory size constrained).  There was mention on the webcast of edge analytics as well.

Welcome to the dawn of a new age of computer storage AND memory.

Photo Credits: (c) 2015 Intel and Micron, from Intel’s 3D XPoint website

Interesting sessions at SNIA DSI Conference 2015

I attended the SNIA Data Storage Innovation (DSI) Conference in Santa Clara, CA last week and ran into a number of old friends and met a few new ones. While attending the conference, there were a few sessions that seemed to bring the conference to life for me.

Microsoft Software Defined Storage Journey

Jose Barreto, Principal Program Manager – Microsoft, spent a little time on what’s currently shipping with Scale-out File Server, Storage Spaces and other storage components of Windows software defined storage solutions. Essentially, what Microsoft is learning from Azure cloud deployments is slowly but surely being implemented in Windows Server software and other solutions.

Microsoft’s vision is that customers can have their own private cloud storage with partner storage systems (SAN & NAS), with Microsoft SDS (Scale-out File Server with Storage Spaces), with hybrid cloud storage (StorSimple with Azure storage) and with public cloud storage (Azure storage).

Jose also mentioned other recent innovations like the Cloud Platform System using Microsoft software, Dell compute, Force 10 networking and JBOD (PowerVault MD3060e) storage in a rack.

Some recent Microsoft SDS innovations include:

  • HDD and SSD storage tiering;
  • Shared volume storage;
  • System Center volume and unified storage management;
  • PowerShell integration;
  • Multi-layer redundancy across nodes, disk enclosures, and disk devices; and
  • Independent scale-out of compute or storage.

Probably a few more I’m missing here but these will suffice.

Then, Jose broke some news on what’s coming next in Windows Server storage offerings:

  • Quality of service (QoS) – Windows Server provides QoS capabilities which allow one to limit IO activity and can be used to specify min and max IOPS or latency at a VM or VHD level. The scale-out storage service will balance the IO activity across the cluster to meet this QoS specification. Apparently the balancing algorithm came from Microsoft Research, but Jose didn’t go into great detail on what it did differently other than being “fairer” in applying QoS constraints.
  • Rolling upgrades – Windows Server now supports a cluster running different versions of software. Now one can take a cluster node down and update its software and re-activate it into the same cluster. Previously, code upgrades had to take a whole cluster down at a time.
  • Synchronous replication – Windows Server now supports synchronous Storage Replica at the volume level. Previously, Storage Replica was limited to async.
  • Higher VM storage resiliency – Windows will now pause a VM rather than terminate it during transient storage interruptions. This allows VMs to sustain operations across transient outages. VMs are in PausedCritical state until the storage comes back and then they are restarted automatically.
  • Shared-nothing Storage Spaces – Windows Storage Spaces can be configured across cluster nodes without shared storage. Previously, Storage Spaces required shared JBOD storage between cluster nodes. This feature removes this configuration constraint and allows JBOD storage to be accessible from only a single node.

Jose did not say what this “vNext” release was going to be called and didn’t provide a specific time frame, other than that it’s coming out shortly.

Archival Disc Technology

Yasumori Hino from Panasonic and Jun Nakano from Sony presented information on a brand new removable media technology for cold storage. Prior to their session there was another one from HDS Federal Corporation on their Blu-ray jukebox, but Yasumori’s and Jun’s session was more noteworthy. The new Archive Disc is the next iteration in optical storage beyond Blu-ray, targeted at long term archive or “cold” storage.

As a prelude to the Archive Disc discussion Yasumori played a CD that was pressed in 1982 (52nd Street, Billy Joel album) on his current generation laptop to show the inherent downward compatibility in optical disc technology.

In 1980, IBM 3380 disk drives were refrigerator sized, multi-$10K devices, and held 2.3GB. As far as I know there aren’t any of these still in operation. And IBM/STK tape was reel to reel and took up a whole rack. There may be a few of these devices still operating these days, but not many. I still have a CD collection (but then I am a GreyBeard 🙂) that I still listen to occasionally.

The new Archive Disc includes:

  • Media that is more resilient to high humidity, high temperature, salt water, EMP and other magnetic disturbances. As proof, a Blu-ray disc was submerged in sea water for 5 weeks and could still be read. Data on Blu-ray and the new Archive Disc is recorded without using electro-magnetics, in a very stable oxide recording material layer. They project that the new Archive Disc has a media life of 50 years at 50C and 1000 years at 25C under high humidity conditions.
  • Dual-sided, triple-layered media which uses land and groove recording to provide 300GB of data storage. Blu-ray also uses a land and groove disc format but only records on the land portion of the disc. Track pitch for Blu-ray is 320nm whereas for the Archive Disc it’s only 225nm.
  • Data transfer speeds of 90MB/sec with two optical heads, one per side. Each head can read/write data at 45MB/sec. They project doubling or quadrupling this data transfer rate by using more pairs of optical heads (a quick fill-time check follows below).
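Just to gauge what that transfer rate means in practice, here’s the simple arithmetic for filling one disc using the figures from the list above:

```c
#include <stdio.h>

/* How long does it take to fill one Archive Disc at the quoted rate? */
int main(void)
{
    double capacity_gb = 300.0;  /* first-gen Archive Disc capacity */
    double rate_mb_s   = 90.0;   /* two heads at 45MB/sec each      */

    double minutes = (capacity_gb * 1000.0 / rate_mb_s) / 60.0;
    printf("time to fill one disc: ~%.0f minutes\n", minutes); /* ~56 min */
    return 0;
}
```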

They also presented a roadmap for a 2nd gen 500GB and a 3rd gen 1TB Archive Disc using higher linear density and better signal processing technology.

Cold storage is starting to get some more interest these days, what with all the power consumption going into today’s data centers and the never ending data tsunami. Archive and Blu-ray optical storage consume no power at rest and only consume power when mounting/dismounting and reading/writing/spinning. Also, with optical discs’ imperviousness to high temp and humidity, optical storage could be stored outside of air conditioned data centers.

The Storage Revolution

The final presentation of interest to me was by Andrea Nelson from Intel. Intel has lately been focusing on helping partners and vendors provide more effective storage offerings. These aren’t storage solutions but rather storage hardware, components and software developed in collaboration with storage vendors and partners that make it easier for them to offer storage solutions using Intel hardware. One example of this collaboration is IBM’s hardware-assisted Real Time Compression available on new V7000 and FlashSystem V9000 storage hardware.

As the world turns to software defined storage, Intel wants those solutions to make use of their hardware. (Although, at the show I heard from another new SDS vendor that was planning to use ARM servers as well as X86.)

Intel has:

  • QuickAssist Acceleration technology – such as hardware assist data compression,
  • Storage Acceleration software libraries – open source erasure coding and other low-level compute intensive functions, and
  • Cache Acceleration software – uses Intel SSDs as a data cache for storage applications.

There wasn’t much of a technical description of these capabilities as in other DSI sessions, but with the industry moving more and more to SDS, Intel’s got a vested interest in seeing it implemented on their hardware.

~~~~

That’s about it. I sat in on quite a number of other sessions but nothing else stuck out as significant or interesting to me as these three sessions.

Comments?