QoM 16-001: Will NVMe GA in enterprise storage over the next 12 months? Yes with 0.68 probability

The latest analyst forecast contest Question of the Month (QoM 16-001) asks whether NVMe PCIe SSDs will GA in enterprise storage over the next 12 months. For more information on our analyst forecast contest, please check out the post.

There are a couple of considerations that would impact NVMe adoption.

Availability of NVMe SSDs?

Intel, Samsung, Seagate and WD-HGST are currently shipping 2.5″ & HH-HL NVMe PCIe SSDs for servers. Hynix, Toshiba, and others had samples at last year’s Flash Memory Summit and promised production early this year. So yes, they are available from at least four sources now, including enterprise-class offerings, with more coming online over the year.

Some advantages of NVMe SSDs?

Advantages of NVMe (compiled from NVMe organization and other NVMe sources):

  • Lower SSD write and read IO access latencies
  • Higher mixed IOPS performance
  • Widespread OS support (not necessarily used in storage systems)
  • Lower power consumption
  • x4 PCIe lane support
  • NVMe over Fabrics support (RDMA and FC transports)

Disadvantages of NVMe SSDs?

Disadvantages of NVMe (compiled from NVMe drive reviewers and other sources):

  • Smaller form factors limit (MLC) SSD capacities
  • New cabling (U.2) for 2.5″ SSDs
  • BIOS changes to support boot from NVMe (not much of a problem in storage systems)

Not many enterprise storage vendors use PCIe Flash

Current storage vendors that use PCIe flash (sourced from web searches on PCIe flash for major storage vendors):

  • Using PCIe SSDs as part of, or the only, storage tier
    • Kaminario K2 all flash array
    • NexGen Storage hybrid storage
  • NetApp (PCIe) FlashCache
  • Others (2?) with volatile cache backed by PCIe SSDs
  • Others (2?) using PCIe SSDs as non-volatile cache

Only a few of these will have new storage hardware out over the next 12 months. I estimated (earlier) about 1/3 of current storage vendors will release new hardware over the next 12 months.

The advantages of NVMe don’t matter as much unless you have a lot of PCIe flash in your system, so the two vendors above that use PCIe SSDs as storage are probably the most likely to move to NVMe. But the limited capacity of NVMe drives and the meager performance speed-up that storage can realize from NVMe may make adoption less likely. So maybe there’s a 0.3 probability * 1/3 (of vendors with a hardware refresh) * 2 (vendors using PCIe flash as storage), or ~0.2.

For the other 5 candidates listed above, the advantages of NVMe aren’t that significant, so if they are refreshing their hardware, there’s maybe a low chance that they will take on NVMe, mainly because it’s going to become the prominent PCIe flash protocol. So maybe that adds another 0.15 probability * 1/3 * 5, or ~0.25. (When I originally formulated the NVMe QoM I had not anticipated NVMe SSDs backing up volatile cache, but they certainly exist today.)

Other potential candidates for NVMe are all startups. EMC DSSD uses a PCIe fabric for its NAND support and could already be making use of NVMe. (Although I would not classify DSSD as an enterprise storage vendor.)

But there may be other startups out there using PCIe flash that would consider moving to NVMe. A while back, I estimated there are ~3 startups likely to emerge over the next year. It’s almost a certainty that they would all have some sort of flash storage, but maybe only one of them would make use of PCIe SSDs. And it’s unclear whether they would use NVMe drives as main storage or for caching. So, splitting the difference in probabilities, we will use 0.23 probability * 1, or ~0.23.

Totaling these up, my forecast for NVMe adoption in GA enterprise storage hardware over the next 12 months is Yes with 0.68 probability. A rough sketch of the arithmetic follows.
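To make the roll-up explicit, here’s a minimal sketch (in Python) of how the three contributions above combine into the 0.68 forecast. The group adoption probabilities and vendor counts are just the rough estimates from this post, not measured data.

```python
# Rough roll-up of the NVMe QoM forecast, using the estimates from this post.

refresh_rate = 1 / 3  # fraction of current storage vendors expected to refresh HW in 12 months

# (adoption probability given a refresh, number of candidate vendors)
pcie_as_storage = (0.30, 2)  # Kaminario, NexGen: PCIe SSDs used as a storage tier
pcie_as_cache   = (0.15, 5)  # NetApp FlashCache + others using PCIe flash as cache
new_startups    = (0.23, 1)  # ~1 of ~3 expected startups likely to use PCIe SSDs

contributions = [
    pcie_as_storage[0] * refresh_rate * pcie_as_storage[1],  # ~0.20
    pcie_as_cache[0] * refresh_rate * pcie_as_cache[1],      # ~0.25
    new_startups[0] * new_startups[1],                       # ~0.23 (startups ship new HW by definition)
]

forecast = sum(contributions)
print(f"NVMe GA in enterprise storage: Yes with ~{forecast:.2f} probability")  # ~0.68
```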

The other likely candidates to support NVMe are software defined storage or hyper-converged storage. I don’t list these as enterprise storage vendors, but I could be convinced that this was a mistake. If I add in SW defined storage, the probability goes up to the high 0.80s or low 0.90s.

Comments?

 

SCI’s (Storage QoW 15-001) 3D XPoint in next year’s storage, forecast=NO with 0.62 probability

So, as to my forecast for the first question of the week (#Storage-QoW 2015-001): Will 3D XPoint be GA’d in enterprise storage systems within 12 months?

I believe the answer will be Yes with a 0.38 probability or conversely, No with a 0.62 probability.

We need to decompose the question to come up with a reasonable answer.

1. How much of an advantage will 3D XPoint provide storage systems?

The claim is 1000X faster than NAND, 1000X endurance of NAND, & 10X density of DRAM. But, I believe the relative advantage of the new technology depends mostly on its price. So now the question is what would 3D XPoint technology cost ($/GB).

It’s probably going to be way more expensive than NAND in $/GB (@$2.44/64Gb MLC or ~$0.31/GB). But how will it be priced relative to DRAM (@$2.23/4Gb DDR4 or ~$4.46/GB) and (asynch) SRAM (@$7.80/16Mb or ~$3900.00/GB)?

More than likely, it’s going to cost more than DRAM because it’s non-volatile and almost as fast to access. As for how it relates to SRAM, the pricing gulf between DRAM and asynch SRAM is so huge that I think pricing it even at 1/10th of SRAM costs would seriously reduce the market. And I don’t think it’s going to be too close to DRAM, so maybe ~10X the cost of DRAM, or $44.60/GB. [Probably more like a range of prices, with $44.60 at 0.5 probability, $22.30 at 0.25 and $66.90 at 0.1. Unclear how I incorporate such pricing variability into a forecast.]
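As one hedged way to handle that pricing variability, the bracketed guesses could be treated as a rough probability distribution and reduced to an expected price. This is only a sketch of one possible approach (the stated weights don’t sum to 1, so they’re renormalized here), not something from the original analysis.

```python
# Treat the bracketed price guesses as a rough probability distribution and
# compute an expected $/GB. The stated weights (0.5 + 0.25 + 0.1) only sum to
# 0.85, so they are renormalized here -- an assumption on my part.

price_dist = {44.60: 0.50, 22.30: 0.25, 66.90: 0.10}  # $/GB : probability

total_weight = sum(price_dist.values())  # 0.85
expected_price = sum(price * weight for price, weight in price_dist.items()) / total_weight

print(f"Expected 3D XPoint price: ~${expected_price:.2f}/GB")  # ~$40.66/GB
```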

At $44.60/GB, what could 3D XPoint NVM replace in a storage system: 1) non-volatile cache; 2) DRAM cache; 3) flash cache; 4) PCIe flash storage; or 5) SSD storage in storage control units?

Non-volatile caching uses battery-backed DRAM (with or without SSD offload) or SuperCap-backed DRAM with SSD offload. Non-volatile caches can be anywhere from 1/16 to 1/2 of total system cache size. The average enterprise-class storage system has ~412GB of cache, so non-volatile caching could be anywhere from 26 to 206GB; let’s say ~150GB of 3D XPoint, which at ~$45/GB would cost ~$6.8K in chips alone. Add in $1K of circuitry and it’s ~$7.8K.

  • For battery-backed DRAM – 150GB of DRAM would cost ~$670 in chips, plus an SSD (~300GB) at ~$90, and 2 batteries (an 8hr lithium battery costs $32) so $64. Add charging/discharging circuitry, battery FRU enclosures, (probably missing something else) and maybe all the extras come to another $500, or ~$1.3K total. So at $45/GB, the 3D XPoint non-volatile cache would run ~6.0X the cost of battery-backed DRAM.
  • For SuperCap-backed DRAM – similarly, a SuperCap cache would have the same DRAM and SSD costs ($670 & $90 respectively). The costs for SuperCaps in equivalent (Wh) configurations run ~20X the price of batteries, so ~$1.3K. Charging/discharging circuitry and FRU enclosures would be simpler than batteries, maybe 1/2 as much, so add $250 for all the extras, which means a total SuperCap-backed DRAM cost of ~$2.3K, putting 3D XPoint at ~3.4X the cost of SuperCap-backed DRAM. (A rough cost sketch of these alternatives follows this list.)
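Here’s a minimal sketch of the non-volatile cache cost comparison above. All dollar figures are the ballpark estimates from this post, not vendor pricing.

```python
# Ballpark bill-of-materials for a ~150GB non-volatile cache, per the post's estimates.

CACHE_GB = 150

# 3D XPoint option: chips at ~$45/GB plus ~$1K of circuitry
xpoint_cost = CACHE_GB * 45 + 1_000  # ~$7.8K in the post's rounding

# Battery-backed DRAM: DRAM chips + ~300GB offload SSD + 2 batteries + ~$500 of extras
battery_dram = CACHE_GB * 4.46 + 90 + 2 * 32 + 500  # ~$1.3K

# SuperCap-backed DRAM: DRAM chips + offload SSD + SuperCaps (~20X battery cost) + ~$250 of extras
supercap_dram = CACHE_GB * 4.46 + 90 + 20 * (2 * 32) + 250  # ~$2.3K

print(f"3D XPoint NV cache: ${xpoint_cost:,.0f}")
print(f"vs battery-backed DRAM : {xpoint_cost / battery_dram:.1f}X")   # ~6X
print(f"vs SuperCap-backed DRAM: {xpoint_cost / supercap_dram:.1f}X")  # ~3.4X
```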

In these configurations a 3D XPoint non-volatile memory would replace lots of circuitry (battery or SuperCap charging/discharging & other circuitry) and the SSD. So, a 3D XPoint non-volatile cache could drastically simplify hardware logic and also software coding for power outages/failures. Fewer parts and less coding have some intrinsic value beyond pure cost, difficult to quantify but substantive nonetheless.

As for using 3D XPoint to replace volatile DRAM cache, another advantage is that you wouldn’t need a separate non-volatile cache and systems wouldn’t have to copy data between caches. But at $45/GB, the costs would be significant. A 412GB DRAM cache would cost ~$1.8K in DRAM chips and maybe another $1K in circuitry, so ~$2.8K. Doing one in 3D XPoint would run ~$18K in chips and the same $1K in circuitry, so ~$19K. But we eliminate the non-volatile cache. Factoring that in, the all-3D XPoint cache would run ~$19K vs. DRAM volatile plus (SuperCap-backed) non-volatile cache at $2.8K + $2.3K = $5.1K, or ~3.7X higher cost.
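And a similar sketch of the full cache replacement comparison, again using only the post’s rounded estimates.

```python
# Replace the whole 412GB volatile cache (and drop the separate non-volatile
# cache) with 3D XPoint, using the post's rounded estimates.

xpoint_cache = 18_000 + 1_000  # 412GB * $44.60/GB ~= $18K chips, + $1K circuitry
dram_cache   = 1_800 + 1_000   # 412GB * $4.46/GB ~= $1.8K chips, + $1K circuitry
supercap_nv  = 2_300           # SuperCap-backed non-volatile cache (from the sketch above)

conventional = dram_cache + supercap_nv  # ~$5.1K
print(f"All-3D XPoint cache: ~{xpoint_cache / conventional:.1f}X the conventional cost")  # ~3.7X
```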

Again, the parts cost differential is not the whole story. But replacing volatile cache AND non-volatile cache would probably require more coding not less.

As for using 3D XPoint as a replacement for flash cache, I don’t think it’s likely because the cost differential at $45/GB is ~100X flash costs (not counting the PCIe controller and other logic). Ditto for PCIe flash and SSD storage.

Being 10X denser than DRAM is great, but board footprint is not a significant storage system cost factor today.

So at a $45/GB price maybe there’s a 0.35 likelihood that storage systems would adopt the technology.

2. How many vendors are likely to GA new enterprise storage hardware in the next 12 months?

We can use major vendors to help estimate this. I used IBM, EMC, HDS, HP and NetApp as representing the major vendors for this analysis.

IBM (2 of 4)

  • They just released a new DS8880 last fall and the prior version, the DS8870, came out in Oct. 2013, so the DS8K seems to be on a 24 month development cycle. So, it’s very unlikely a new DS8K will be released in the next 12 months.
  • The SVC DH8 engine hardware was introduced in May 2014. The SVC CG8 engine was introduced in May 2011. So SVC hardware seems to be on a 36 month cycle. So, it’s very unlikely a new SVC hardware engine will be released in the next 12 months.
  • FlashSystem 900 hardware was just rolled out in 1Q 2015 and the FlashSystem 840 was introduced in January of 2014. So FlashSystem hardware is on a ~15 month hardware cycle. So, it is very likely that new FlashSystem hardware will be released in the next 12 months.
  • XIV Gen 3 hardware was introduced in July of 2011. It’s unclear when Gen 2 was rolled out, but IBM acquired XIV in Jan of 2008 and released an IBM version in August 2008. So XIV is on a ~36 month cycle. So, it is very likely that a new generation of XIV will be released in the next 12 months.

EMC (3 of 4)

  • VMAX3 was GA’d in 3Q (Sep) 2014. VMAX2 was available Sep 2012, which puts VMAX on a 24 month cycle. So, it’s very likely that a new VMAX will be released in the next 12 months.
  • VNX2 was announced May 2013 and GA’d Sep 2013. VNX1 was announced Jan 2011 and GA’d by May 2011. That puts VNX on a ~28 month cycle, which means we should have already seen a new one, so it’s very likely we will see a new version of VNX in the next 12 months.
  • XtremIO hardware was introduced in Mar 2013 with no significant hardware changes since. With a lack of history to guide us, let’s assume a 24 month cycle. So, it’s very likely we will see a new version of XtremIO hardware in the next 12 months.
  • Isilon S200/X200 was introduced April 2011 and the X400 was released in May 2012. That put Isilon on a 13 month cycle then, but nothing since. So, it’s very likely we will see a new version of Isilon hardware in the next 12 months.

However, EMC is unlikely to update all of their storage hardware in the same 12 months. That being said, XtremIO could use a HW boost as IBM and the startups are pushing AFA technology pretty hard here. Isilon is getting long in the tooth, so that’s another likely changeover. Since VNX is more overdue than VMAX, I’d have to say it’s likely new VNX, XtremIO & Isilon hardware will be seen over the next year.

HDS (1 of 3) 

  • Hitachi VSP G1000 came out in Apr of 2014. HDS VSP came out in Sep of 2010. So HDS VSP is on a 43 month cycle. So it’s very unlikely we will see a new VSP in 12 months. 
  • Hitachi HUS VM came out in Sep 2012. As far as I can tell there were no prior generation systems. But HDS just came out with the G200-G800 series, leaving the HUS VM as the last one not updated, so it’s very likely we will see a new version of HUS VM in the next 12 months.
  • The Hitachi VSP G800, G600, G400, G200 series came out in Nov of 2015. The Hitachi AMS 2500 series came out in April 2012. So the mid-range systems seem to be on a 43 month cycle. So it’s very unlikely we will see a new version of the HDS G200-G800 series in the next 12 months.

HP (1 of 2) 

  • HP 3PAR 20000 was introduced August, 2015 and the previous generation system, 3PAR 10000 was introduced in June, 2012. This puts the 3PAR on a 38 month cycle. So it’s very unlikely we will see a new version of 3PAR in the next 12 months. 
  • MSA 1040 was introduced in Mar 2014. MSA 2040 was introduced in May 2013. This puts the MSA on ~10 month cycle. So it’s very likely we will see a new version of MSA in the next 12 months. 

NetApp (2 of 2)

  • The FAS8080 EX was introduced June 2014. The FAS6200 was introduced in Feb 2013, which puts the high-end FAS systems on a 16 month cycle. So it’s very likely we will see a new version of the high-end FAS in the next 12 months.
  • The NetApp FAS8040-8060 series scale-out systems were introduced in Feb 2014. The FAS3200 series was introduced in Nov of 2012, which puts the mid-range FAS systems on a 15 month cycle. A new mid-range release seems overdue, so it’s very likely we will see a new version of the mid-range FAS in the next 12 months.

Overall the likelihood of new hardware being released by major vendors is 2+3+1+1+2=9/15 or ~0.60 probability of new hardware in the next 12 months.

Applying that 0.60 to non-major storage vendors, which typically only have one storage system GA’d at a time: these include Coho Data, DataCore, DataGravity, Dell, DDN, Fujitsu, Infinidat, NEC, Nexenta, NexGen Storage, Nimble, Pure, Qumulo, Quantum, SolidFire, Tegile, Tintri, Violin Memory, X-IO, and probably a couple more I am missing. So of these ~21 non-major/startup vendors, we are likely to see ~13 new (non-major) hardware systems in the next 12 months.

Some of these non-major systems are based on standard off-the-shelf, Intel server hardware and some vendors (Infinidat, Violin Memory & X-IO) have their own hardware designed systems. Of the 9 major vendor products identified above, six (IBM XIV, EMC VNX, EMC Isilon, EMC XtremIO, HP MSA and NetApp mid-range) use off the shelf, server hardware.

So all told, my best guess is we should see (9+13=) 22 new enterprise storage systems introduced in the next 12 months from major and non-major storage vendors, as sketched below.
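As a quick check on that count, here’s a minimal sketch of the refresh-rate arithmetic. The per-vendor "likely refresh" counts are the judgments from the product-line reviews above.

```python
# Expected number of new enterprise storage systems GA'ing in the next 12 months,
# based on the per-vendor product-line reviews above.

# vendor: (likely hardware refreshes, product lines reviewed)
majors = {"IBM": (2, 4), "EMC": (3, 4), "HDS": (1, 3), "HP": (1, 2), "NetApp": (2, 2)}

likely_major = sum(likely for likely, _ in majors.values())  # 9
lines_reviewed = sum(total for _, total in majors.values())  # 15
refresh_rate = likely_major / lines_reviewed                 # ~0.60

NON_MAJOR_VENDORS = 21  # single-product non-major/startup vendors listed above
non_major = round(NON_MAJOR_VENDORS * refresh_rate)          # ~13

print(f"refresh rate ~{refresh_rate:.2f}, expected new systems ~{likely_major + non_major}")  # ~22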

3. How likely is it that Intel-Micron will come out with GA chip products in the next 6 months?

They claimed they were sampling products to vendors back at Flash Memory Summit in August 2015. So it’s very likely (0.85 probability) that Intel-Micron will produce 3D XPoint chips in the next 6 months.

Some systems (IBM FlashSystem, NetApp high-end, and HUS VM) could make use of raw chips or even a new level of storage connected to a memory bus. But all of them could easily take advantage of a 3D XPoint device packaged as NVMe PCIe-connected storage.

But to be usable in most vendor storage systems being GA’d over the next year, any new chip technology has to be available for use within 6 months at the latest.

4. How likely is it that Intel-Micron will produce servers with 3D XPoint in the next 6 months?

Listening in at Flash Memory Summit, this seems to be their preferred technological approach to market. And as most storage vendors use standard Intel servers, this would seem to be the easiest way to adopt it. If the chips are available, I deem it 0.65 probable that Intel will GA server hardware with 3D XPoint technology in the next 6 months.

I’m not sure any of the major or non-major vendors above could possibly use server hardware introduced later than 6 months out, but Qumulo uses Agile development and releases GA code every 2 weeks, so they could take this on later than most.

But given the chip pricing, lack of significant advantage, and coding update requirements, I deem it 0.33 probability that vendors will adopt the technology even if it’s in a new server that they can use.

Summary

So there’s a 0.85 probability of chips being available within 6 months for the 3 potential major systems identified above, which leaves us with ~2.6 systems using 3D XPoint chip technology directly.

With a 0.65 probability of servers using 3D XPoint coming out in 6 months and a 0.45 probability of new storage systems adopting the technology for caching, that’s a combined 0.29 probability. Applied to the other ~18 new systems coming out, that says ~5.2 systems could potentially adopt the server technology.

For a total of 7.8 systems out of a potential 22 new systems or a 0.35 probability. 

That’s just the known GA non-majors and storage startups; what about the stealth(ier) startups without GA storage, like Primary Data? There are probably 2 or 3 non-GA storage startups. If we assume the same 0.6 of these vendors will have GA hardware next year, that is an additional ~1.8 systems. More than likely these will depend on standard servers, so the 0.65 probability of Intel servers applies. So it’s likely we will see an additional ~1.2 systems here, for a total of ~9.0 new systems that will adopt 3D XPoint tech in the next 12 months.

So it’s 9 systems out of 23.8, or ~0.38 probable. So my forecast is Yes at 0.38 probability. The arithmetic is sketched below.
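Here’s a minimal sketch of the summary arithmetic, taking the probabilities exactly as stated above (including the 0.45 caching-adoption figure used in this summary).

```python
# Roll up the 3D XPoint forecast from the component estimates above.

p_chips_6mo   = 0.85  # chips available to vendors within 6 months
p_servers_6mo = 0.65  # Intel servers with 3D XPoint within 6 months
p_adopt_cache = 0.45  # new storage systems adopting it for caching (as used in this summary)

chip_direct = p_chips_6mo * 3                     # ~2.6 systems that could use chips directly
server_path = p_servers_6mo * p_adopt_cache * 18  # ~5.2 of the other new systems

known_systems = chip_direct + server_path         # ~7.8 of the 22 known new systems
stealth = 3 * 0.6 * p_servers_6mo                 # ~1.2 of ~1.8 expected stealth-startup systems

probability = (known_systems + stealth) / (22 + 1.8)  # ~9.0 / 23.8
print(f"3D XPoint GA in enterprise storage: Yes with ~{probability:.2f} probability")  # ~0.38
```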

Pricing is a key factor here. I assumed a single price but it’s more likely a range of possibilities and factoring in a pricing range would be more accurate but I don’t know how, yet.

~~~~

I could go on for another 1000 words and still be no closer to an estimate. Somebody please check my math.

Comments?

Photo Credit(s): (iTech Androidi) 3D XPoint – Intel’s new Storage chip is 1000 faster than flash memory

Next generation NVM, 3D XPoint from Intel + Micron

Earlier this week Intel-Micron announced (see webcast here and here) a new, transistor-less NVM with 1000 times the speed of NAND (~10ns access times vs. ~10µsec for NAND) and 10X the density of DRAM (currently 16Gb/DRAM chip). They call the new technology 3D XPoint™ (cross-point) NVM (non-volatile memory).

In addition to the speed and density advantages, 3D XPoint NVM also doesn’t have the endurance problems associated with today’s NAND. Intel and Micron say that it has 1000X the endurance of today’s NAND (MLC NAND endurance is ~3000 write (P/E) cycles).

At 10X current DRAM density, it’s roughly equivalent to today’s MLC/TLC NAND capacities per chip. And at 1000 times the speed of NAND, it’s roughly equivalent in performance to DDR4 DRAM. Of course, because it’s non-volatile it should take much less power than current DRAM technology, with no need for refresh power.

We have talked about the end of NAND before (see The end of NAND is here, maybe). If this is truly more scalable than NAND, it seems to me that it does signal the end of NAND. It’s just a matter of time before endurance and/or density growth of NAND hits a wall, and then 3D XPoint can do everything NAND can do but better, faster and more reliably.

3D XPoint technology

The technology comes from a dual layer design which is divided into columns; at the top and bottom of the columns are access connections laid out in an orthogonal pattern that together form a grid to access a single bit of memory. This also means that 3D XPoint NVM can be read and written a bit at a time (rather than a “page” at a time with NAND) and doesn’t have to be erased (initialized) before being written, as NAND does.

The 3D nature of the new NVM comes from the fact that you can build up as many layers as you want of these structures to create more and more NVM cells. The microscopic pillar between the two layers of wiring includes a memory cell and a switch component, which allows a bit of data to be accessed (via the switch) and stored/read (via the memory cell). In the photo above, the yellow material is the switch and the green material is the memory cell.

A memory cell operates via a bulk property change of the material, unlike DRAM (which stores charge in capacitors) or NAND (which stores electrons in floating gates). As such it uses all of the material to hold a memory value, which should allow 3D XPoint memory cells to scale downwards much better than NAND or DRAM.

Intel and Micron are calling the new 3D XPoint NVM both storage AND memory. That is, it’s suitable for fast-access, non-volatile data storage and for non-volatile processor memory.

3D XPoint NVM chips in manufacturing today

The first chips with the new technology are being manufactured today at Intel-Micron’s joint manufacturing fab in Idaho. The first chips will supply 128Gb of NVM and use just two layers of 3D XPoint memory.

Intel and Micron will independently produce system products (read: SSDs or NVM memory devices) with the new technology during 2016. They mentioned during the webcast that the technology is expected to be attached (as SSDs) to a PCIe bus and use NVMe as the interface to read and write it. Although if it’s used in a memory application, it might be better attached to the processor memory bus.

The expectation is that the 3D XPoint cost/bit will be somewhere in between NAND and DRAM, i.e. more expensive than NAND but less expensive than DRAM. It’s nice to be the only companies in the world with a new, better storage AND memory technology.

~~~~

Over the last 10 years or so, SSDs (solid state devices) all used NAND technologies of one form or another, but after today SSDs can be made from NAND or 3D XPoint technology.

Some expected uses for the new NVM are in gaming applications (currently storage-speed and memory constrained) and for in-memory databases (which are memory-size constrained). There was mention on the webcast of edge analytics as well.

Welcome to the dawn of a new age of computer storage AND memory.

Photo Credits: (c) 2015 Intel and Micron, from Intel’s 3D XPoint website

Windows Server 2012 R2 storage changes announced at TechEd

Microsoft TechEd USA is this week and they announced a number of changes to the storage services that come with Windows Server 2012 R2:

  • Azure DRaaS – Microsoft is attempting to democratize DR by supporting a new DR-as-a-Service (DRaaS). They now have an Azure service that operates in conjunction with Windows Server 2012 R2 and provides orchestration and automation for DR site failover and failback to/from remote sites. Windows Server 2012 R2 uses Hyper-V Replica to replicate data across to the other site. Azure DRaaS supports DR plans (scripts) to identify groups of Hyper-V VMs which need to be brought up and their sequencing. VMs within a script group are brought up in parallel but different groups are brought up in sequence. You can have multiple DR plans, just select the one to execute. You must have access to Azure to use this service. Azure DR plans can pause for manual activities and have the ability to invoke PowerShell scripts for more fine-tuned control. There’s also quite a lot of setup that must be done, e.g. configuring Hyper-V hosts, VMs and networking at both primary and secondary locations. Network IP injection is done via mapping primary to secondary site IP addresses. The Azure DRaaS really just provides the orchestration of failover or failback activity. Moreover, it looks like Azure DRaaS is going to be offered by service providers as well as private companies. Currently, Azure’s DRaaS has no support for SAN/NAS replication but they are working with vendors to supply an SRM-like API to provide this.
  • Hyper-V Replica changes – Replica support has been changed from a single fixed asynchronous replication interval (5 minutes) to being able to select one of 3 intervals: 15 seconds; 5 minutes; or 30 minutes.
  • Storage Spaces Automatic Tiering – With SSDs and regular rotating disks in your DAS (or JBOD) configuration, Windows Server 2012 R2 supports automatic storage tiering. At Spaces configuration time one dedicates a certain portion of SSD storage to tiering. There is a scheduled Windows Server 2012 task which is then used to scan the previous period’s file activity and identify which file segments (1MB in size) should be on SSD and which should not. Then, over time, file segments are moved to the appropriate tier and performance should improve. This only applies to file data, and files can be pinned to a particular tier for more fine-grained control.
  • Storage Spaces Write-Back cache – Another alternative is to dedicate a certain portion of the SSDs in a Space to write caching. When enabled, writes to a Space will be cached first on SSD and then destaged out to rotating disk. This should speed up write performance. Both the write-back cache and storage tiering can be enabled for the same Space, but your SSD storage must be partitioned between the two. Something about funneling all write activity to SSDs just doesn’t make sense to me?!
  • Storage Spaces dual parity – Spaces previously supported mirrored storage and single parity but now also offers dual parity for DAS. It’s sort of like RAID6 in protection, but they didn’t mention the word RAID at all. Spaces dual parity does have a write penalty (parity update) and Microsoft suggests using it only for archive or heavy-read IO.
  • SMB3.1 performance improvements of ~50% – SMB has been on a roll lately and R2 is no exception. Microsoft indicated that SMB Direct using a RAM disk as backend storage can sustain up to a million 8KB IOPS. Also, with an all-flash JBOD, using mirrored Spaces for backend storage, SMB3.1 can sustain ~600K IOPS. Presumably these were all read IOPS.
  • SMB3.1 logging improvements – Changes were made to SMB3.1 event logging to try to eliminate the need for detail tracing to support debug. This is an ongoing activity but one which is starting to bear fruit.
  • SMB3.1 CSV performance rebalancing – Now as one adds cluster nodes, Cluster Shared Volume (CSV) control nodes will spread out across new nodes in order to balance CSV IO across the whole cluster.
  • SMB1 stack can be (finally) fully removed – If you are running Windows Server 2012 R2, you no longer need to install the SMB1 stack. It can be completely removed. Of course, if you have some down-level servers or clients you may want to keep SMB1 around a bit longer, but it’s no longer required for Server 2012 R2.
  • Hyper-V Live Migration changes – Live Migration can now take advantage of SMB Direct and its SMB3 support of RDMA/RoCE to radically speed up data center live migration. Also, Live Migration can now optionally compress the data on the current Hyper-V host, send compressed data across the LAN and then decompress it at the target host. So with R2 you have three options to perform VM Live Migration: traditional, SMB Direct or compressed.
  • Hyper-V IO limits – Hyper-V hosts can now limit the amount of IOPS consumed by each VM. This can be hierarchically controlled, providing increased flexibility. For example, one can identify a group of VMs and have an IO limit for the whole group, but each individual VM can also have an IO limit, and the group limit can be smaller than the sum of the individual VM limits.
  • Hyper-V supports VSS backup for Linux VMs – Windows Server 2012 R2 has now added support for non-application consistent VSS backups for Linux VMs.
  • Hyper-V Replica Cascade Replication – In Windows Server 2012, Hyper-V replicas could be copied from one data center to another. But now with R2, those replicas at a secondary site can be copied to a third, cascading the replication from the first to the second and then the third data center, each with their own replication schedule.
  • Hyper-V VHDX file resizing – With Windows Server 2012 R2 VHDX file sizes can now be increased or reduced for both data and boot volumes.
  • Hyper-V backup changes – In previous generations of Windows Server, Hyper-V backups took two distinct snapshots, one instantaneously and the other at quiesce time and then the two were merged together to create a “crash consistent” backup. But with R2, VM backups only take a single snapshot reducing overhead and increasing backup throughput substantially.
  • NVMe support – Windows Server 2012 R2 now ships with a Non-Volatile Memory Express (NVMe) driver for PCIe flash storage. R2’s new NVMe driver has been tuned for low latency and high bandwidth and can be used for non-clustered Storage Spaces to improve write performance (in a Spaces write-back cache?).
  • CSV memory read-cache – Windows Server 2012 R2 can be configured to set aside some host memory for a CSV read cache. This is different from the Spaces write-back cache. CSV caching would operate in conjunction with any other caching done at the host OS or elsewhere.

That’s about it. Some of the MVPs had a preview of R2 up in Redmond, but all of this was to be announced in TechEd, New Orleans, this week.

~~~~

Image: Microsoft TechEd by BetsyWeber