QoM1610: Will NVMe over Fabric GA in enterprise AFA by Oct’2017

NVMe over Fabrics (NVMeoF) was a hot topic at Flash Memory Summit last August. Facebook and others were showing off their JBOFs (see my Facebook moving to JBOF post), but there were plenty of other NVMeoF offerings at the show.

NVMeoF hardware availability

When Brocade announced their Gen6 switches, they made a point of saying that both their Gen5 and Gen6 switches currently support NVMeoF protocols. In addition to Brocade’s support, Qlogic announced NVMeoF support for select HBAs in Dec 2015, and as of July 2016, Emulex supports NVMeoF in their HBAs as well.

From an Ethernet perspective, Qlogic has an NVMe Direct NIC which supports NVMe protocol offload for iSCSI. But even without NVMe Direct, 40GbE & 100GbE Ethernet with iWARP, RoCE v1/v2, or iSER (iSCSI Extensions for RDMA) could readily support NVMeoF. The nice thing about NVMeoF over Ethernet is that the same wire supports not only iSCSI & FCoE, but CIFS/SMB and NFS as well.

InfiniBand and Omni-Path Architecture already support native RDMA, so they should already support NVMeoF.

So hardware/firmware is already available for any enterprise AFA customer that wants NVMeoF for their data center storage.

NVMeoF Software

Intel claims that ~90% of the software driver functionality of NVMe is the same for NVMeoF. The primary differences between the two seem to be the NVMeoF discovery and queueing mechanisms.

There are two fabric methods that can be used to implement NVMeoF data and command transfers: capsule mode, where NVMe commands and data are encapsulated in normal fabric packets, and fabric-dependent mode, where drivers make use of native fabric memory transfer mechanisms (RDMA, …) to transfer commands and data.
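To make the distinction concrete, here's a toy Python sketch of the two transfer styles. The field names and structures are illustrative assumptions on my part, not the actual NVMeoF wire format:

```python
from dataclasses import dataclass

@dataclass
class NvmeCommand:
    opcode: int      # e.g., read or write
    nsid: int        # namespace ID
    slba: int        # starting logical block address
    nlb: int         # number of logical blocks

@dataclass
class Capsule:
    # Capsule mode: the NVMe command (and optionally its data)
    # travels inside an ordinary fabric packet payload.
    command: NvmeCommand
    inline_data: bytes = b""

@dataclass
class RdmaTransfer:
    # Fabric-dependent mode: only the command rides in a capsule;
    # the data moves via the fabric's native memory transfer
    # (e.g., an RDMA read/write against a registered buffer).
    command: NvmeCommand
    remote_addr: int   # peer memory address for the RDMA operation
    rkey: int          # RDMA remote key for that buffer
```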

A (Linux) host driver for NVMeoF is currently available from Seagate. Kernel support for NVMeoF in Linux is also under development and (I think) not far from release in an upcoming kernel. (Mellanox has a tutorial on how to compile a Linux kernel with NVMeoF driver support.)

With Linux support coming, Microsoft Windows and VMware can’t be far behind. However, aside from base NVMe support, I could find nothing online for either platform.

NVMeoF target support is another matter but with NICs/HBAs & switch hardware/firmware and drivers presently available, proprietary storage system target drivers are just a matter of time.

Boot support is an open question. I could find no information on BIOS support for booting off of an NVMeoF AFA. Arguably, boot support may not be needed, as NVMeoF AFAs are probably not a viable target for storing app code or OS software.

From what I could tell, normal fabric multi-pathing support should work fine with NVMeoF. This should allow for HA NVMeoF storage, a critical requirement for enterprise AFA storage systems these days.

NVMeoF advantages/disadvantages

Chelsio and others have shown that NVMeoF adds only ~8μsec of latency over native NVMe SSDs which, if true, would warrant implementation on all NVMe AFAs. Whether it impacts max IOPS depends on the scalability of NVMeoF.

For instance, servers (PCIe bus hardware) typically limit the number of private NVMe SSDs to 255 or less. With NVMeoF, one could potentially have 1000s of shared NVMe SSDs accessible to a single server. At this scale, a single server attached to a scale-out NVMeoF AFA (cluster) could be supplied ~4X the IOPS that the same server could drive using private NVMe storage.
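The ~4X figure seems to fall out of the drive counts alone; a quick (admittedly naive) check, assuming IOPS scale with the number of accessible SSDs:

```python
private_ssd_limit = 255   # typical per-server private NVMe SSD limit
fabric_ssds = 1000        # "1000s" of shared NVMe SSDs over NVMeoF

print(fabric_ssds / private_ssd_limit)  # ~3.9, i.e. the ~4X IOPS claim
```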

Base level NVMe SSD support and protocol stacks are starting to become available from most flash vendors and for operating systems such as Linux, FreeBSD, VMware, Windows, and Solaris. If Intel’s claim of 90% common software between NVMe and NVMeoF drivers is true, it should be a relatively easy development project to provide host NVMeoF drivers.

The need for special Ethernet hardware that supports RDMA may delay some storage vendors from implementing NVMeoF AFAs quickly. The lack of BIOS boot support may be a minor irritant in comparison.

NVMeoF forecast

AFA storage systems, as far as I can tell, are all about selling high IOPS and very-low latency IOs. It would seem that NVMeoF would offer early adopter AFA storage vendors a significant performance advantage over slower paced competition.

In previous QoM/QoW posts we have established that there are about 13 new enterprise storage systems that come out each year. Probably 80% of these will be AFA, given the current market environment.

Of the ~10.4 AFA systems coming out over the next year, ~20% pride themselves on being the lowest latency solutions in the market and thus command high margins. One would think these would be the first to adopt NVMeoF. But most of them use their own proprietary flash modules, with proprietary interfaces, rather than standard (NVMe) SSDs. This will delay their implementations until they can convert their flash storage to NVMe, which may take some time.

On the other hand, most (~70%) of the other AFA systems currently use SAS/SATA SSDs and could boost their IOPS counts and drastically reduce their IO response times by implementing NVMe SSDs and NVMeoF. But converting SAS/SATA backends to NVMe will take time and effort.

But a select few (~10%) of AFA systems already use NVMe SSDs, and these few would seem to have a fast track toward implementing NVMeoF. The fact that NVMeoF is supported over all fabrics and all storage interface protocols makes it even easier.
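Tallying the segments above (all figures from the preceding paragraphs; Python used just as a calculator):

```python
new_systems = 13                   # new enterprise storage systems per year
afa_systems = new_systems * 0.80   # ~10.4 AFAs

proprietary = afa_systems * 0.20   # ~2.1: proprietary flash modules, delayed
sas_sata    = afa_systems * 0.70   # ~7.3: SAS/SATA backends, conversion effort
nvme_ready  = afa_systems * 0.10   # ~1.0: already NVMe, fast track to NVMeoF
```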

Moreover, NVMeoF has been under discussion since the summer of 2015, which tells me that astute AFA vendors have already had 18+ months to develop it. And with NVMeoF host drivers & hardware available since Dec 2015, there has been hardware and software to test and validate against.

I believe that NVMeoF will be GA’d within the next 12 months by at least one enterprise AFA system. So my QoM1610 forecast for NVMeoF is YES, with a 0.83 probability.

Comments?


QoM1608: The coming IOT tsunami or not

Saw an interesting chart the other day in a TechPinions post (Searching for What’s Next) showing sales in millions over time of PCs, tablets and smartphones. From the chart, PC sales peaked in 2010-2012 and tablet sales have flatlined (2016). Not sure what’s projections vs. actuals, but the story on smartphones has yet to run out; they had rapid sales growth between 2008 and 2014.

The other thing to take from this chart is that device adoption is speeding up. It took 20 years to reach peak PC sales but only ~10 years to reach peak smartphone sales.
Continue reading “QoM1608: The coming IOT tsunami or not”

(QoM16-002): Will Intel Omni-Path GA in scale out enterprise storage by February 2016 – NO 0.91 probability

Question of the month (QoM) for February is: Will Intel Omni-Path (Architecture, OPA) GA in scale out enterprise storage by February 2016?

In this forecast, enterprise storage means the major and startup vendors supplying storage to data center customers.

What is OPA?

OPA is Intel’s replacement for InfiniBand and starts out at 100Gbps. It’s intended more for high performance computing (HPC), to be used as an inter-cluster server interconnect or next generation fabric. Intel says it “will maintain consistency and compatibility with existing Intel True Scale Fabric and InfiniBand APIs by working through the open source OpenFabrics Alliance (OFA) software stack on leading Linux* distribution releases”. Seems like Intel is making it as easy as possible for vendors to adopt the technology.
Continue reading “(QoM16-002): Will Intel Omni-Path GA in scale out enterprise storage by February 2016 – NO 0.91 probability”

QoM 16-001: Will NVMe GA in enterprise storage over the next 12 months? Yes 0.68 probability

The latest analyst forecast contest Question of the Month (QoM 16-001) is: will NVMe PCIe SSDs GA in enterprise storage over the next 12 months? For more information on our analyst forecast contest, please check out the post.

There are a couple of considerations that would impact NVMe adoption.

Availability of NVMe SSDs?

Intel, Samsung, Seagate and WD-HGST are currently shipping 2.5″ & HH-HL NVMe PCIe SSDs for servers. Hynix, Toshiba, and others had samples at last year’s Flash Memory Summit and promised production early this year. So yes, they are available from at least 3 sources now, including enterprise-class drive vendors, with more coming online over the year.

Some advantages of NVMe SSDs?

Advantages of NVMe (compiled from NVMe organization and other NVMe sources):

  • Lower SSD write and read IO access latencies
  • Higher mixed IOPS performance
  • Widespread OS support (not necessarily used in storage systems)
  • Lower power consumption
  • x4 PCIe support
  • NVMe over Fabrics (FC & RDMA) support

Disadvantages of NVMe SSDs?

Disadvantages of NVMe (compiled from NVMe drive reviewers and other sources):

  • Smaller form factors limit (MLC) SSD capacity
  • New cabling/connector (U.2) for 2.5″ SSDs
  • BIOS changes needed to support boot from NVMe (not much of a problem in storage systems)

Not many enterprise storage vendors use PCIe Flash

Current storage vendors that use PCIe flash (sourced from web searches on PCIe flash for major storage vendors):

  • Using PCIe SSDs as part or only storage tier
    • Kaminario K2 all-flash array
    • NexGen storage hybrid storage
  • NetApp (PCIe) FlashCache
  • Others (?2) with Volatile cache backed by PCIe SSDs
  • Others (?2) using PCIe SSD as Non-volatile cache

Only a few of these will have new storage hardware out over the next 12 months. I estimated (earlier) about 1/3 of current storage vendors will release new hardware over the next 12 months.

The advantages of NVMe don’t matter as much unless you have a lot of PCIe flash in your system, so the 2 vendors above that use PCIe SSDs as storage are probably the most likely to move to NVMe. But the limited capacity of NVMe drives and the modest storage-level performance speedup from NVMe may make adoption less likely. So maybe there’s a 0.3 probability * 1/3 (of vendors with hardware refresh) * 2 (vendors using PCIe flash as storage), or ~0.2.

For the other 5 candidates listed above, the advantages of NVMe aren’t that significant, so if they are refreshing their hardware, there’s maybe a low chance that they will take on NVMe, mainly because it’s going to become the predominant PCIe flash protocol. So maybe that adds another 0.15 probability * 1/3 * 5, or ~0.25. (When I originally formulated the NVMe QoM I had not anticipated NVMe SSDs backing up volatile cache, but they certainly exist today.)

Other potential candidates for NVMe are all startups. EMC DSSD uses a PCIe fabric for its NAND support and could already be making use of NVMe. (Although I would not classify DSSD as an enterprise storage vendor.)

But there may be other startups out there using PCIe flash that would consider moving to NVMe. A while back, I estimated ~3 startups are likely to emerge over the next year. It’s almost a certainty that they would all have some sort of flash storage, but maybe only one of them would make use of PCIe SSDs. And it’s unclear whether they would use NVMe drives as main storage or for caching. So, splitting the difference in probabilities, we will use 0.23 probability * 1, or ~0.23.

So, totaling up, my forecast for NVMe adoption in GA enterprise storage hardware over the next 12 months is Yes with 0.68 probability.
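For the record, the arithmetic behind the 0.68 (all factors from the paragraphs above):

```python
pcie_as_storage = 0.30 * (1/3) * 2   # ~0.20: PCIe SSDs as a storage tier
pcie_as_cache   = 0.15 * (1/3) * 5   # ~0.25: PCIe flash as (non-)volatile cache
new_startup     = 0.23 * 1           # ~0.23: one emerging startup with PCIe SSDs

print(round(pcie_as_storage + pcie_as_cache + new_startup, 2))  # 0.68
```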

The other likely candidates that would support NVMe are software defined storage or hyperconverged storage vendors. I don’t list these as enterprise storage vendors, but I could be convinced that this was a mistake. If I add in SW defined storage, the probability goes up to the high 0.80s or low 0.90s.

Comments?


(Storage QoM 16-001): Will we see NVM Express (NVMe) drives GA’d in enterprise storage over the next year

First, let me state that QoM stands for Question of the Month. Doing these forecasts can be a lot of work, and rather than focusing my whole blog on weekly forecast questions and answers, I would like to do something else as well. So, from now on, we are doing only one new forecast a month.

So for the first question of 2016, we will forecast whether NVMe SSDs will be GA’d in enterprise storage over the next year.

NVM Express (NVMe) means the new PCIe interface for SSD storage. Wikipedia has a nice description of NVMe. As discussed there, NVMe was designed for higher performance and enhanced parallelism which comes with the PCI Express (PCIe) bus. The current version of the NVMe spec is 1.2a (available here).

GA means generally available for purchase by any customer.

Enterprise storage systems refers to mid-range and enterprise class storage systems from major AND non-major storage vendors, which includes startups.

Over the next year means by 19 January 2017.

Special thanks to Kacey Lai (@mrdedupe), Primary Data, for suggesting this month’s question.

Current and updates to previous forecasts


Update on QoW 15-001 (3DX) forecast:

News out today indicates that 3DX (3D XPoint non-volatile memory) samples may be available soon, but it could take another 12 to 18 months to get it into production. 3DX manufacturing is more challenging than current planar NAND technology and uses about 100 new materials, many of which are currently single sourced. We had already built a potential 6-month production delay into our 3DX forecast; the news above says the delay could be worse than expected. As such, I feel even stronger that there is less of a possibility of 3DX shipping in storage systems by next December. So I would update my forecast for QoW 15-001 to NO with a 0.75 probability at this time.

So current forecasts for QoW 15-001 are:

A) YES with 0.85 probability; and

B) NO with 0.75 probability

Current QoW 15-002 (3D TLC) forecast

We have 3 active participants, current forecasts are:

A) Yes with 0.95 probability;

B) No with 0.53 probability; and

C) Yes with 1.0 probability

Current QoW 15-003 (SMR disk) forecast

We have 1 active participant, current forecast is:

A) Yes with 0.85 probability


(Storage QoW 15-003): SMR disks in GA enterprise storage in 12 months? Yes@.85 probability

Hard Disk by Jeff Kubina (cc) (from Flickr)

(Storage QoW 15-003): Will we see SMR (shingled magnetic recording) disks in GA enterprise storage systems over the next 12 months?

Are there two vendors of SMR?

Yes, both Seagate and HGST have announced and are currently shipping (?) SMR drives; HGST has a 10TB drive and Seagate has had an 8TB drive on the market since last summer.

One other interesting fact is that SMR will be the common format for all future disk head technologies including HAMR, MAMR, & BPMR (see presentation).

What would storage vendors have to do to support SMR drives?

Because of the nature of SMR disks, writes overlap adjacent tracks, so data must be written, at least in part, sequentially (see our original post on Sequential only disks). Another post I did reported on recent work by Garth Gibson at CMU (Shingled Magnetic Recording disks), which showed how multiple bands or zones on an SMR disk could be used, some of which could be written randomly and others sequentially, but all of which could be read randomly. With such an approach, you could have a reasonable file system on an SMR device with a metadata partition (randomly writeable) and a data partition (sequentially writeable).

In order to support SMR devices, changes have been requested to the T10 (SCSI) & T13 (ATA) command protocols; a toy model of the write-cursor mechanics appears after the list. Such changes would include:

  • SMR devices support a new write cursor for each SMR sequential band.
  • SMR devices support sequential writes within SMR sequential bands at the write cursor.
  • SMR band write cursors can be read, statused and reset to 0. SMR sequential band LBA writes only occur at the band cursor and for each LBA written, the SMR device increments the band cursor by one.
  • SMR devices can report their band map layout.
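Here's the toy model promised above: a minimal Python sketch of one SMR sequential band with a write cursor, following the T10/T13 semantics just listed. This is conceptual only, not actual command-set code:

```python
class SmrBand:
    """One SMR sequential band with a write cursor (toy model)."""

    def __init__(self, start_lba: int, length: int):
        self.start_lba = start_lba
        self.length = length
        self.cursor = 0          # write cursor; readable and resettable to 0

    def write(self, lba: int, nblocks: int) -> None:
        # Sequential-band writes are only accepted at the cursor position.
        if lba != self.start_lba + self.cursor:
            raise IOError("rejected: write not at band write cursor")
        if self.cursor + nblocks > self.length:
            raise IOError("rejected: write would overflow the band")
        self.cursor += nblocks   # cursor advances by one per LBA written

    def reset_cursor(self) -> None:
        self.cursor = 0          # e.g., before rewriting the band from the start
```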

The presentation refers to multiple approaches to SMR support or SMR drive modes:

  • Restricted SMR devices – where the device will not accept any random writes; all writes occur at a band cursor and random writes are rejected by the device. But performance would be predictable.
  • Host Aware SMR devices – where the host using the SMR devices is aware of SMR characteristics and actively manages the device, using write cursors and band maps to write the most data to the device. However, the device will accept random writes and will perform them for the host. This will result in sub-optimal and non-predictable drive performance.
  • Drive managed SMR devices – where the SMR device acts like a randomly accessed disk device but maps random writes to sequential writes internally, using virtualization of the drive LBA map, not unlike SSDs do today. These devices would be backward compatible with today’s disk devices, but drive performance would be poor and non-predictable.

It’s unclear which of these drive modes are currently shipping, but I believe Restricted SMR device mode is already available, and drive manufacturers would be working on Host Aware and Drive Managed modes to help adoption.

So, assuming Restricted SMR device mode availability and that prototypes of the T10/T13 changes are available, there are significant but known changes required for enterprise storage systems to support SMR devices.

Nevertheless, a number of hybrid storage systems already implement Log Structured File (LSF) systems on their backends, which mostly write sequentially to backend devices, so moving to SMR restricted device mode would be easier for these systems.

It’s unclear how many storage systems have such a backend, but NetApp uses one for WAFL and just about every hybrid startup has an LSF format for their backend layout. So, being conservative, let’s say 50% of enterprise hybrid storage vendors use LSF.

The other 50% would have more of a problem implementing SMR restricted mode devices, but it’s only a matter of time before all will need to go that way, assuming they still use disks. So we are primarily talking about hybrid storage systems.

All major storage vendors support hybrid storage and about 60% of startups do as well; taken together, maybe about 75% of enterprise storage vendors have hybrid offerings.

Using the analysis from QoW 15-001, about 60% of enterprise storage vendors will probably ship new hardware versions of their systems over the next 12 months. So of the ~13 likely new hardware systems over the next 12 months, 75% have hybrid solutions and 50% of those use LSF, meaning ~4.9 new hardware systems released over the next 12 months will be hybrid with LSF backends already.

What are the advantages of SMR?

SMR devices will have higher storage densities and lower cost. Today’s disk drives run 6-8TB while SMR devices run 8-10TB, so a 25-30% step up in storage capacity is possible with SMR devices.

New drive support has in the past been relatively easy because command sets/formats haven’t changed much over the past 7 years or so, but SMR is different and will take more effort to support. The fact that all new drives will eventually be SMR gives more emphasis to getting on the bandwagon as soon as feasible. So I would give a storage vendor an 80% likelihood of implementing SMR, assuming they have new systems coming out, are already hybrid and are already using LSF.

So taking the ~4.9 LSF/hybrid systems being released * 0.8, ~3.9 systems will introduce SMR devices over the next 12 months.

For non-LSF hybrid systems, the effort seems much harder, so I would give the likelihood of implementing SMR about a 40% chance. Of the ~8.1 remaining systems to be introduced next year, 75% are hybrid, or ~6.1 systems; with a 40% likelihood of implementing SMR, ~2.4 of these non-LSF systems will probably introduce SMR devices.

There’s one other category to consider: startups in stealth. These could have been designing their hybrid storage for SMR from the get-go. In the QoW 15-001 analysis, I assumed another ~1.8 startup vendors would emerge to GA over the next 12 months, and if we assume 75% of these are hybrid, that’s ~1.4 startup vendors that could be using SMR technology in their hybrid storage. So in total, 3.9 + 2.4 + 1.4 ≈ 7.7 systems have a high probability of SMR implementation over the next 12 months in GA enterprise storage products.
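Putting the whole estimate in one place (all figures from the analysis above):

```python
new_systems = 13
hybrid, lsf = 0.75, 0.50

lsf_hybrid = new_systems * hybrid * lsf      # ~4.9 LSF/hybrid systems
lsf_adopt  = lsf_hybrid * 0.80               # ~3.9 of those adopt SMR

non_lsf       = new_systems - lsf_hybrid     # ~8.1 remaining systems
non_lsf_adopt = non_lsf * hybrid * 0.40      # ~2.4 hybrid non-LSF adopters

startup_adopt = 1.8 * 0.75                   # ~1.4 hybrid stealth startups

print(round(lsf_adopt + non_lsf_adopt + startup_adopt, 1))  # ~7.7 systems
```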

Forecast

So my forecast of SMR adoption by enterprise storage is Yes with 0.85 probability (unclear exactly what the probability should be, but it’s highly probable).

~~~~

Comments?

(Storage QoW 15-003): Will we see SMR disks in GA enterprise storage systems over the next 12 months?

SMR refers to very high density (10TB) shingled magnetic recording hard disk devices.

GA means generally available for purchase by any customer.

Enterprise storage systems refers to mid-range and enterprise class storage systems from major AND non-major storage vendors, which includes startups.

Over the next 12 months means by 22 December 2016.

We discussed our Analyst Forecasting and previous Questions of the Week (QoW) on 3D XPoint technology (forecast) and 3D TLC NAND technology (forecast), with present status below.

Present QoW forecasts:

(#Storage-QoW 15-001) – Will 3D XPoint be GA’d in enterprise storage systems within 12 months? 2 active forecasters, current forecasts are:

A) YES with 0.85 probability; and

B) NO with 0.62 probability.

(Storage-QoW 15-002) 3D TLC NAND GA’d in major vendor storage next year? 3 active participants, current forecasts are:

A) Yes with 0.95 probability;

B) No with 0.53 probability; and

C) Yes with 1.0 probability


(Storage-QoW 15-002) 3D TLC NAND GA’d in major vendor storage next year – NO 0.53

Latest forecast question is: Will 3D TLC NAND be GA’d in major storage products in 12 months?

Splitting up the QoW into more answerable questions:

A) Will any vendor be shipping 3D TLC NAND SSDs/PCIe cards over the next 9 months?

Samsung is reportedly already shipping 3D TLC NAND SSDs and PCIe cards as of August 13, 2015, and will be producing 48 layer 256Gb 3D TLC NAND memory soon. It’s unclear what 3D TLC NAND technology will be shipping in the next generation drives due out soon, but they are all spoken of as read-intensive/write-light storage.

One consideration is that major storage vendors typically will not introduce new storage technologies unless they’re available from multiple suppliers. This is not always the case, and certainly not for internally developed storage, but it has been a critical criterion for most major vendors. In the above reference, it was reported that SK Hynix and Toshiba are gearing up for 2016 shipments of 48 layer 3D TLC NAND as well; how long it takes to get these into SSDs/PCIe cards is another question.

A number of startups are rumored to be using 3D TLC, and Kaminario has publicly announced that their systems already use it.

My probability of a second source for 3D TLC storage coming out within the first 9 months of next year is 0.75.

B) What changes will be required for storage vendors to utilize 3D TLC NAND storage?

The important changes will be SSD endurance and IO performance.

NAND endurance is rated in DWPD (drive writes per day). Current Samsung 3D TLC SSDs are reportedly rated anywhere from 1.3 to 3.5 DWPD for a 5 year warranty period, and newer 3D TLC SSDs are rated at 5 DWPD (unknown warranty period). Current enterprise (800GB) MLC drives are reportedly rated at 10-25 DWPD (for 5 years). So if we use 3.5 DWPD for 3D TLC and 17.5 DWPD for MLC, 3D TLC NAND has a ~5X reduction in endurance.

As for performance, Samsung reports 160K random read and 18K random write IOPS for 3D TLC, vs. an HGST 800GB MLC SSD rated at 145K random read and 100K random write IOPS. That’s a reduction of ~5.6X in write performance. Read performance is actually better with 3D TLC NAND.
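The two deltas, worked out from the figures above:

```python
mlc_dwpd, tlc_dwpd = 17.5, 3.5        # midpoint MLC vs. 3D TLC endurance
print(mlc_dwpd / tlc_dwpd)            # 5.0 -> ~5X endurance reduction

mlc_wr, tlc_wr = 100_000, 18_000      # random write IOPS, MLC vs. 3D TLC
print(round(mlc_wr / tlc_wr, 1))      # 5.6 -> ~5.6X write IOPS reduction
```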

In order for major vendors to handle the reduction in 3D TLC endurance, they will need to limit the amount of data written to these devices. Conveniently, dealing with the reduction in 3D TLC write performance also requires limiting the amount of data written to these devices.

Hence, one potential solution is a multi-tier, all flash array which uses standard MLC SSDs/PCIe cards to absorb the heavy write activity; data from this tier that is relatively unused could then be archived (?) over time to a 2nd tier of storage consisting of 3D TLC SSDs/PCIe cards.

This is not that unusual; it’s being done today in hybrid (disk-SSD) systems with automated storage tiering. Only in that case, data is moved to SSD only if it’s accessed frequently. For 3D TLC, the tiering policy should be changed from access frequency to time since last access. Doing so in a hybrid array with disk, MLC SSD and TLC SSD would require the creation of an additional pool of storage and could be accomplished with software changes alone. There are current major-vendor storage systems that already support 3 tiers of storage, and some already support archiving to cloud storage, so these sorts of changes are present in shipping product today.
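As a sketch of the policy change (the threshold, function name, and tier labels are hypothetical, just to illustrate the idea): instead of promoting extents by access frequency, a TLC tier would demote by time since last access.

```python
import time
from typing import Optional

COLD_AFTER = 7 * 24 * 3600   # hypothetical: untouched for a week -> demote

def tier_for(last_access: float, now: Optional[float] = None) -> str:
    """Pick a flash tier by time-since-last-access, not access frequency."""
    now = time.time() if now is None else now
    if now - last_access > COLD_AFTER:
        return "3D TLC"   # cold data: read-intensive, write-light tier
    return "MLC"          # warm data: stays on the write-tolerant tier
```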

So yes, there’s a reduction in endurance, and yes, it has worse write performance, but it’s still much faster than disk, and most major vendors already have software able to handle storage tiers of diverse performance. So accommodating the new 3D TLC storage shouldn’t be much of a problem.

New storage technology like this usually doesn’t require a hardware change to use. So the only thing that needs to change to accommodate the new 3D TLC is software functionality.

So, if a 3D TLC 2nd source is available, there’s a 0.9 probability that some major storage vendor would adopt the technology over the next year.

C) What are the advantages of 3D TLC storage?

Price should be cheaper than MLC storage and density (GB/volume) should be better. So in this case, it’s a reduction in cost/GB and an increase in GB/volume. For these reasons alone it should probably be adopted.

The advantages are good and would certainly give a major vendor an edge in capacity density and in $/GB, or at least get them to parity (barring any functionality differential) with startups adopting the technology.

So given the advantages present in the technology, I would say there should be a 0.7 probability of adoption within the next 12 months.  

Forecast for QoW 15-002 is:

0.75 * 0.90 * 0.70 = 0.47 probability of YES adoption, or 0.53 probability of NO adoption, of 3D TLC NAND in major storage vendor products over the next 12 months.

Update on QoW 15-001 forecast:

I have an update to my forecast for QoW 15-001, which was No with 0.62 probability. This question was on the adoption of 3D XPoint (3DX) technology in any enterprise storage vendor product within a year.

It has been brought to my attention that Intel mentioned the cost of producing 3DX was somewhere between 1/2 and 1/4 the cost of DRAM. Also, recent information has come to light that Intel-Micron will price 3DX between 3D NAND and DRAM. So my analysis of the cost differential for caching technologies was way off (~20X). There would be a real cost advantage in using the technology for volatile and non-volatile cache, though even if the chips cost nothing, it might only be on the order of $3-5K cheaper to use 3DX than battery/superCap-backed DRAM and volatile DRAM caching. So the advantage exists, but it is less than a significant cost saver. This being the case, I would have to adjust my 0.35 probability of adoption in this use up to 0.65. I had failed to incorporate this parameter in my final forecast, so all that analysis was for nothing.

Another potential use is as a non-volatile write buffer for SSDs, and even more important, for 3D TLC NAND (see above). As this is in an SSD, software and hardware integration is commonplace, so there’s a higher probability of adoption there as well. And as there are more SSDs than DRAM caches, the cost differential could be more significant. Then again, it would depend on two technologies being adopted (TLC and 3DX), so it’s less likely than either one alone.

The other news (to me) was that Intel announced they would incorporate proprietary changes to the DIMM bus to support 3DX as one approach. This does not lend credence to widespread adoption, but it probably only applies to server support for the technology, so I would reduce my probability there to 0.55.

Updated forecast for QoW 15-001 is now:

  1. Chip in production stays at 0.85, so there’s still 2.6 potential systems that could adopt the technology directly.
  2. 0.85 probability that chips are in production * 0.55 probability of servers with the technology * 0.65 probability that a storage vendor would adopt the technology to replace caching, or ~0.30 probability of server adoption in storage; with 18 potential vendors, that’s another 5.5 systems potentially adopting the technology.
  3. Add in the two-three startups that will likely emerge, with a similar probability of adoption (0.30), which is another 0.9 systems.

For a total of 2.6 + 5.5 + 0.9 = 9 systems out of ~24, or a 0.38 probability of adoption.
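The updated total, worked out (figures from the list above):

```python
direct      = 2.6                        # systems adopting the chips directly
via_servers = 18 * (0.85 * 0.55 * 0.65)  # ~5.5 systems via server adoption
startups    = 3 * 0.30                   # ~0.9 systems from emerging startups

total = direct + via_servers + startups  # ~9 systems out of ~24
print(round(total / 24, 2))              # ~0.37, i.e. roughly the 0.38 above
```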

So my updated forecast still stands at No with a 0.62 probability.