Tag Archives: NVMe over fabric

43: GreyBeards talk Tier 0 again with Yaniv Romem CTO/Founder & Josh Goldenhar VP Products of Excelero

In this episode, we talk with another next gen, Tier 0 storage provider. This time our guests are Yaniv Romem, CTO/Founder, & Josh Goldenhar (@eeschwa), VP Products, from Excelero, another new storage startup out of Israel. Both Howard and I talked with Excelero at SFD12 (videos here) last month in San Jose. I was very impressed with their raw performance and wrote a popular RayOnStorage blog post on their system (see my 4M IO/sec@227µsec 4KB Read… post) based on our discussions during SFD12.

As we have discussed previously, Tier 0, next generation flash arrays provide very high performing storage at very low latencies, with modest to non-existent advanced storage services. They are intended to replace direct-attached server SSD storage with a more shared, scalable storage solution.

In our last podcast (with E8 Storage), we covered a hardware Tier 0 appliance. Excelero takes a different approach: a software-defined Tier 0 solution intended to run on any commodity, off-the-shelf server hardware with high-end networking and (low- to high-end) NVMe SSDs.

Indeed, what impressed me most about their 4M IO/sec was that the target storage system had almost 0% CPU utilization. (Read the post to learn how they did this.) Excelero mentioned that they were able to generate high (11M random 4KB) IO/sec on an Intel Core i7, desktop-class CPU. Their one requirement in a storage server is plenty of PCIe lanes. They don't even need dual-socket storage servers; single-socket CPUs work just fine, as long as the PCIe lanes are there.
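
Why PCIe lanes are the one requirement can be shown with some back-of-envelope math. This is an illustrative sketch, not Excelero's numbers; it assumes PCIe 3.0 (~985 MB/s usable per lane) and x4 lanes per NVMe SSD:

```python
# Back-of-envelope PCIe bandwidth math (illustrative, not Excelero's figures).
# Assumes PCIe 3.0: ~985 MB/s usable per lane, and x4 lanes per NVMe SSD.
PCIE3_MBPS_PER_LANE = 985

def max_ssds(total_lanes: int, lanes_per_ssd: int = 4) -> int:
    """How many x4 NVMe SSDs a given PCIe lane budget can host."""
    return total_lanes // lanes_per_ssd

def aggregate_gbps(total_lanes: int) -> float:
    """Aggregate PCIe bandwidth in GB/s for a given lane count."""
    return total_lanes * PCIE3_MBPS_PER_LANE / 1000

# A typical single-socket server CPU with 40 PCIe lanes:
print(max_ssds(40))        # 10 SSDs
print(aggregate_gbps(40))  # ~39.4 GB/s
```

In other words, a single-socket server with a healthy lane count already has tens of GB/s of raw PCIe headroom, which is why dual sockets aren't needed.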

Excelero software

Their intent is to bring Tier 0 capabilities to all big storage environments. By providing a software-only solution, it could easily be OEMed by cluster file system vendors or HPC system vendors to deliver the IO performance their clients need.

That’s also one of the reasons they went with high-end Ethernet networking rather than just InfiniBand, which would have limited their market mostly to HPC environments. Excelero’s client software uses RoCE/RDMA hardware to perform IO operations against the storage server.

The other thing that little to no target storage server CPU utilization per IO operation gives them is the ability to scale up to 1000s of hosts or storage servers without hitting any storage system bottlenecks. Another concern eliminated by minimal target server CPU utilization is noisy neighbors: there's no target CPU processing to be shared, so the problem can't arise. Yet another advantage is that bandwidth is limited only by storage server PCIe lanes and networking. A final advantage of their approach is that they can support any current and upcoming storage class memory devices that support NVMe (e.g., Intel Optane SSDs).

The storage services they offer include RAID 0, 1 and 10, plus a client-side logical volume manager which supports multi-pathing. Logical volumes can span up to 128 storage servers but can be accessed by almost any number of hosts. And there doesn’t appear to be a specific limit on the number of logical volumes you can have.
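
To make the "logical volume spanning servers" idea concrete, here is a hypothetical sketch of RAID 0 chunk mapping across storage servers. This is the general striping technique, not Excelero's actual implementation, and the 128KB chunk size is my assumption:

```python
# Hypothetical RAID 0 striping sketch -- not Excelero's implementation,
# just the general idea of mapping a volume offset across storage servers.
CHUNK_SIZE = 128 * 1024  # assumed 128KB stripe chunk

def map_offset(vol_offset: int, num_servers: int):
    """Map a logical volume byte offset to (server index, offset on that server)."""
    chunk = vol_offset // CHUNK_SIZE
    within = vol_offset % CHUNK_SIZE
    server = chunk % num_servers        # round-robin chunks across servers
    local_chunk = chunk // num_servers  # chunk index on that server
    return server, local_chunk * CHUNK_SIZE + within

# With 128 storage servers, consecutive chunks land on consecutive servers:
print(map_offset(0, 128))                 # (0, 0)
print(map_offset(CHUNK_SIZE, 128))        # (1, 0)
print(map_offset(128 * CHUNK_SIZE, 128))  # back on server 0, next chunk
```

Because the mapping is pure arithmetic, any client can compute the target server for any offset without consulting the target CPUs, which fits the near-zero target CPU utilization story.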


They support two different protocols across 40GbE/100GbE networks: standard NVMe over Fabrics, or RDDA (Excelero's patented, proprietary Remote Direct Disk Array access). RDDA is what mainly provides the almost non-existent target storage server CPU utilization, but even with standard NVMe over Fabrics they keep target CPU utilization low. One proviso: with NVMe over Fabrics, they add shared volume functionality to support RAID device locking and additional fault tolerance capabilities.

On Excelero’s roadmap are thin provisioning, snapshots, compression and deduplication. However, they did mention that adding advanced storage functionality like this will impede performance. Currently, their distributed volume locking and configuration metadata is not normally accessed during an IO, but once you add thin provisioning, snapshots and data reduction, this metadata must become more sophisticated and will necessitate some amount of access during and after an IO operation.

Excelero’s client software runs in Linux kernel mode, and they don’t currently support VMware or Hyper-V. But they do support KVM as a hypervisor and would be willing to support the others if APIs were published or made available.

They also have an internal OpenStack Cinder driver, but it’s not part of the OpenStack release yet. They’re waiting for snapshots to be available before they push it into the main code base. Ditto for Docker Engine, but this is more of a beta capability today.

Excelero customer experience

One customer (NASA Ames/Moffett Field) deployed a single 2TB NVMe SSD in each of 128 hosts and created a single 256TB logical volume shared and accessed by all 128 hosts.
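
The volume size follows directly from the configuration:

```python
# Capacity math for the NASA Ames configuration described above.
hosts = 128
tb_per_host = 2              # one 2TB NVMe SSD per host
print(hosts * tb_per_host)   # 256 (TB in the shared logical volume)
```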

Another customer configured Excelero behind a clustered file system and was able to generate 30M random IO/sec at 200µsec latencies and, more importantly, 140GB/sec of bandwidth. It turns out high bandwidth is important to many big data applications that have to roll lots of data into their analytics clusters, process it, output results, and then do it all over again. Bandwidth limitations can impact the success of these types of applications.
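
For scale, converting the small-IO number into bandwidth shows why the 140GB/sec figure is reported separately (the 4KiB IO size here is my assumption, not a figure from the podcast):

```python
# Converting small random IOs into bandwidth (4KiB IO size is an assumption).
iops = 30_000_000
io_size_bytes = 4 * 1024
print(iops * io_size_bytes / 1e9)  # ~122.9 GB/s from small IOs alone
```

Larger, sequential transfers presumably account for the separate 140GB/sec bandwidth figure, which is the number the big data ingest workloads care about.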

By being software-only, they can be used in a standalone storage server or as a hyper-converged solution where applications and storage are co-resident on the same server. As noted above, they currently support Linux OSs for their storage and client software, and support any x86 Intel processor, any RDMA-capable NIC, and any NVMe SSD.

Excelero GTM

Excelero is focused on the top 200 customers, which include hyper-scale providers like Facebook, Google, Microsoft and others. But hyper-scale customers have huge software teams and really only a single, or a few, very large/complex applications, for which they can create and optimize Tier 0 storage themselves.

It’s really the customers just below the hyper-scale class that have similar needs for low-latency IO/sec or high IO bandwidth (or both), but have 100s to 1000s of applications and can’t afford to optimize them all for Tier 0 flash. If Excelero solves sharing Tier 0 flash storage in a more general way, say as a block storage device, they solve it for any application. And if the customer insists, they could put a clustered file system or even an object store (who would want this?) on top of this shared Tier 0 flash storage system.

These customers may currently be using NVMe SSDs within their servers as DAS devices. But with Excelero, these resources can be shared across the data center. They think of themselves as a top-of-rack NVMe storage system.

On their website they list a few of their current customers, and they’re pretty large and impressive.

NVMe competition

Aside from E8 Storage, there are few other competitors in Tier 0 storage. One recently announced a move to an NVMe flash storage solution and another killed their shipping solution. We talked about what all this means to them and their market at the end of the podcast. Suffice it to say, they’re not worried.

The podcast runs ~50 minutes. Josh and Yaniv were very knowledgeable about Tier 0 and storage market dynamics, and were a delight to talk with. Listen to the podcast to learn more.


Yaniv Romem CTO and Founder, Excelero

Yaniv Romem has been a technology evangelist at disruptive startups for the better part of 20 years. His passions are in the domains of high performance distributed computing, storage, databases and networking.
Yaniv has been a founder at several startups in these domains, such as Excelero, Xeround and Picatel. He has mostly served in CTO and VP Engineering roles.


Josh Goldenhar, Vice President Products, Excelero

Josh has been responsible for product strategy and vision at leading storage companies for over two decades. His experience puts him in a unique position to understand the needs of our customers.
Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks. Prior to that, his experience and passion were in large-scale systems architecture and administration at companies such as Cisco Systems. He’s been a technology leader in Linux, Unix and other OSs for over 20 years. Josh holds a Bachelor’s degree in Psychology/Cognitive Science from the University of California, San Diego.

35: GreyBeards talk Flash Memory Summit wrap-up with Jim Handy, Objective Analysis

In this episode, we talk with Jim Handy (@thessdguy), memory and flash analyst at Objective Analysis. Jim’s been on our podcast before, and last time we had a great talk on flash trends. As Jim, Howard and Ray were all at Flash Memory Summit (FMS 2016) last week, we thought it appropriate to get together and discuss what we found interesting at the summit.

Flash is undergoing significant change. We started our discussion with which vendor had the highest-density flash device. That’s not easy to answer, given all the vendors at the show. For example, Micron’s shipping a 32GB chip and Samsung announced a 1TB BGA. And as for devices, Seagate announced a monster: a 3.5″ 60TB SSD.

MicroSD cards have 16-17 NAND chips plus a mini-controller. At that level, with a 32GB chip, we could see a ~0.5TB MicroSD card in the near future. There was no discussion of pricing, but Howard’s expectation is that they will be expensive.
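
The ~0.5TB estimate is just stacking arithmetic (16 dies is the low end of the 16-17 range mentioned):

```python
# NAND die-stacking arithmetic behind the ~0.5TB MicroSD estimate.
chips_per_card = 16          # low end of the 16-17 die range in a MicroSD stack
gb_per_chip = 32             # Micron's 32GB chip mentioned above
print(chips_per_card * gb_per_chip)  # 512 GB, i.e. ~0.5TB
```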

NVMe over fabric push

One main topic of conversation at FMS was the emergence of NVMe over fabric. A few storage vendors at FMS were taking advantage of this, including E8 Storage and Mangstor, both showing off NVMe over Ethernet flash storage. But plenty of others were talking NVMe over fabric, and all the major NAND manufacturers couldn’t talk enough about NVMe.

Facebook’s keynote had a couple of surprises. One was their request for WORM (QLC) flash. It appears that Facebook plans on keeping user data forever. Another item of interest was their Open Compute Project Lightning JBOF (just a bunch of flash) device, which uses NVMe over Ethernet (see Ray’s post on Facebook’s move to JBOF). They were also interested in ganging up M.2 SSDs into a single package. And finally, they discussed their need for SCM.

Storage class memory

The other main topic was storage class memory (SCM), and all the vendors talked about it. Sadly, the timeline for Intel-Micron 3D XPoint has them supplying sample chips/devices by the end of next year (YE2017) and releasing SCM devices to market the following year (2018). They did have one (hand-built) SSD at the show with remarkable performance.

On the other hand, there are other SCM’s on the market, including EverSpin (MRAM) and CrossBar (ReRAM). Both of these vendors had products on display but their capacities were on the order of Mbits rather than Gbits.

It turns out they’re both using ~90nm fab technology and need to get their volumes up before they can shrink their processes to hit higher densities. However, now that everyone’s talking about SCM, they are starting to see some product wins. In fact, Mangstor is using EverSpin as a non-volatile write buffer.

Jim explained that 90nm is where DRAM was in 2005, but EverSpin’s and CrossBar’s bit density is better than DRAM’s was at the time. DRAM is now on 15-10nm-class technologies, and DRAM vendors sell 10B chips/year; EverSpin and CrossBar (together?) are doing more like 10M chips/year. The cost to shrink to the latest technology is ~$100M, just to generate the required masks. So for these vendors, volumes have to go up drastically before capacities can increase significantly.

Also, at the show Toshiba mentioned they’re focusing on ReRAM for their SCM.

As Jim recounted, the whole SCM push has been driven by Intel and their need to keep improving the performance of memory and storage, otherwise they felt their processor sales would stall.

3D NAND is here

Just about every NAND manufacturer talked about their 3D NAND chips, ranging from 32 to 64 layers. From Jim’s perspective, 3D NAND was inevitable, as it was the only way to continue scaling density and reducing bit costs for NAND.

Samsung was first to market with 3D NAND, as a way to show technological leadership. But now everyone’s got it, fueling ongoing discussions of bit density and layer counts. What their yields are is another question. But planar NAND’s days are over.

Toshiba’s FlashMatrix

Toshiba’s keynote discussed a new flash storage system called FlashMatrix, but at press time they had yet to share their slides with the FMS team, so information on FlashMatrix was sketchy at best.

However, they had one on the show floor, and it looked like a bunch of M.2 flash across an NVMe (over Ethernet?) mesh backplane with compute engines connected at the edges.

We had a hard time understanding why Toshiba would do this. Our best guess is perhaps they want to provide OEMs an alternative to SanDisk’s Infiniflash.

The podcast runs over 50 minutes and covers the flash technology on display at the show and the history of SCM. I think Howard and Ray could easily spend a day with Jim and not exhaust his knowledge of flash, and we haven’t really touched on DRAM. Listen to the podcast to learn more.

Jim Handy, Memory and Flash analyst at Objective Analysis.


Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.  He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com.