56: GreyBeards talk high performance file storage with Liran Zvibel, CEO & Co-Founder, WekaIO

This month we talk high performance, cluster file systems with Liran Zvibel (@liranzvibel), CEO and Co-Founder of WekaIO, a new software defined, scale-out file system. I first heard of WekaIO when it showed up on SPEC sfs2014 with a new SWBUILD benchmark submission. They had a 60 node EC2-AWS cluster running the benchmark and achieved, at the time, the highest SWBUILD number (500) of any solution.

At the moment, WekaIO is targeting the HPC and Media & Entertainment verticals for their solution, and it is sold on an annual capacity subscription basis.

By the way, a Wekabyte is 2**100 bytes of storage, or ~1 trillion exabytes (an exabyte being 2**60 bytes).
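The arithmetic checks out; a quick sanity check in Python:

```python
# A Wekabyte is 2**100 bytes; an exabyte is 2**60 bytes.
wekabyte = 2**100
exabyte = 2**60

# How many exabytes in a Wekabyte? 2**(100-60) = 2**40.
ratio = wekabyte // exabyte
print(ratio)  # 1099511627776, i.e. ~1.1 trillion
```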

High performance file storage

The challenge with HPC file systems is that they need to handle a large number of files and large amounts of storage, with high throughput access to all this data. Where WekaIO comes into the picture is that they do all that plus support high file IOPS. That is, they can open, read or write a high number of relatively small files at impressive speed, with low latency. Such workloads are becoming more popular with AI-machine learning and life sciences/genomic microscopy image processing.

Most file system developers will tell you they can supply high throughput OR high file IOPS, but doing both is a real challenge. WekaIO is able to do both, while at the same time supporting billions of files per directory and trillions of files in a file system.

WekaIO supports up to 64K cluster nodes and has tested up to 4000 cluster nodes. WekaIO announced an OEM agreement with HPE last year and is starting to build out bigger clusters.

Media & Entertainment file storage requirements are mostly just high throughput with large (media) file sizes. Here WekaIO faces more competition from other cluster file systems, but their ability to support extra-large data repositories with great throughput is another advantage.

WekaIO cluster file system

WekaIO is a software defined storage solution. Whereas many HPC cluster file systems have separate metadata and storage nodes, WekaIO’s cluster nodes are combined metadata and storage nodes. So as one scales capacity (by adding nodes), one not only scales large file throughput (via more IO parallelism) but also scales small file IOPS (via more metadata processing capability). There’s also some secret sauce to their metadata sharding (if that’s the right word) that allows WekaIO to support more metadata activity as the cluster grows.

One secret to WekaIO’s ability to support both high throughput and high file IOPS lies in their performance load balancing across the cluster. Apparently, WekaIO can be configured to constantly monitor all cluster nodes for performance and can balance all file IO activity (data transfers and metadata services) across the cluster, to ensure that no one node is overburdened with IO.

Liran says that performance load balancing was one reason they were so successful with their AWS EC2 SPEC sfs2014 SWBUILD benchmark. One problem with AWS EC2 nodes is a lot of unpredictability in node performance: when running EC2 instances, “noisy neighbors” impact node performance. With WekaIO’s performance load balancing running on AWS EC2 instances, they can just redirect IO activity around slower nodes to faster nodes that can handle the work, in real time.
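WekaIO hasn’t published how its balancing works, but the idea Liran describes, monitoring per-node load and steering new IO toward the least-burdened node, can be sketched like this (a toy illustration; all names are hypothetical, not WekaIO code):

```python
class LoadBalancer:
    """Toy sketch: route each IO to the currently least-loaded node."""

    def __init__(self, node_ids):
        # Track outstanding IO count per node.
        self.load = {n: 0 for n in node_ids}

    def pick_node(self):
        # Choose the node with the fewest outstanding IOs. A slow or
        # noisy-neighbor node accumulates outstanding IOs and so gets
        # avoided automatically.
        return min(self.load, key=self.load.get)

    def submit_io(self):
        node = self.pick_node()
        self.load[node] += 1
        return node

    def complete_io(self, node):
        self.load[node] -= 1

lb = LoadBalancer(["node-a", "node-b", "node-c"])
first = lb.submit_io()   # all nodes idle, any node works
second = lb.submit_io()  # a different, still-idle node
```

A real implementation would weight by measured latency and throughput rather than a simple outstanding-IO count, but the redirect-around-slow-nodes effect is the same.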

WekaIO performance load balancing is a configurable option. The alternative is for WekaIO to “cryptographically” spread the workload across all the nodes in a cluster.

WekaIO uses a host driver for POSIX access to the cluster. WekaIO’s frontend also natively supports (without the host driver) NFSv3, SMB3.1, HDFS and AWS S3 protocols.

WekaIO also offers configurable file system data protection that can span 100s of failure domains (racks), supporting from 4 to 16 data stripes with 2 to 4 parity stripes. Liran said this was erasure-code-like but wouldn’t specifically state what they are doing differently.
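Liran wouldn’t detail the scheme, but classic erasure coding of that "4 data + 2 parity" flavor works roughly like this single-parity XOR sketch (a toy that survives one lost stripe; real codes such as Reed-Solomon tolerate multiple failures, and WekaIO’s scheme is presumably more sophisticated):

```python
def xor_parity(stripes):
    """Compute a parity stripe as the byte-wise XOR of data stripes."""
    parity = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving, parity):
    """Recover one lost stripe by XORing the survivors with the parity."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # 4 data stripes
p = xor_parity(data)                          # 1 parity stripe
lost = data.pop(1)                            # lose one failure domain
assert rebuild(data, p) == lost               # and get it back
```

Each stripe would live in a different failure domain (rack), so losing a rack costs you only one stripe per stripe group, which the parity reconstructs.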

They also support both high performance storage and inactive storage, with automated, policy-managed tiering of inactive data to object storage.

WekaIO creates a global name space across the cluster, which can be sub-divided into anywhere from one to thousands of file systems.

Snapshotting, cloning & moving work

WekaIO also has file system snapshots (read-only) and clones (read-write) using a redirect-on-write methodology. After the first snapshot/clone, subsequent snapshots/clones are only differential copies.
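Redirect-on-write means new writes land in fresh blocks while the snapshot keeps referencing the old ones, so a snapshot only ever "holds" the blocks that have since diverged. A minimal sketch of the idea (hypothetical, not WekaIO’s implementation):

```python
class RoWVolume:
    """Toy redirect-on-write volume: snapshots share unmodified blocks."""

    def __init__(self):
        self.blocks = {}     # logical block address -> data
        self.snapshots = []  # each snapshot is a frozen block map

    def write(self, lba, data):
        # Redirect: the live volume gets a new mapping; any snapshot
        # still references the old map it captured, untouched.
        self.blocks = dict(self.blocks)
        self.blocks[lba] = data

    def snapshot(self):
        # Cheap: just freeze the current mapping as a read-only view.
        snap = self.blocks
        self.snapshots.append(snap)
        return snap

vol = RoWVolume()
vol.write(0, b"v1")
snap = vol.snapshot()
vol.write(0, b"v2")            # live volume diverges after the snapshot
assert snap[0] == b"v1"        # snapshot is unchanged
assert vol.blocks[0] == b"v2"
```

The "differential copies" follow naturally: each snapshot shares every block it has in common with its predecessor and only the changed mappings cost anything.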

Another feature Howard and I thought was interesting was their DR-as-a-Service-like capability. That is, using an onprem WekaIO cluster to clone a file system/directory and tiering that to an S3 storage object. An AWS EC2 WekaIO cluster then imports that S3 storage object, re-constituting the file system/directory in the cloud. Once on AWS, work can occur in the cloud, and the process can be reversed to move any updates back to the onprem cluster.

This way, if you had work needing more compute than is available onprem, you could move the data and workload to AWS, do the work there, and then move the data back onprem again.
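The round trip amounts to a simple export/import pipeline through S3. Here’s a stub-level sketch of the flow (every class and method name here is invented for illustration; WekaIO’s actual tooling will differ):

```python
class S3Bucket:
    """Stand-in for an S3 bucket: just a key/value store."""
    def __init__(self):
        self.objects = {}
    def put(self, key, data):
        self.objects[key] = data
    def get(self, key):
        return self.objects[key]

class Cluster:
    """Stand-in for a WekaIO cluster; method names are hypothetical."""
    def __init__(self):
        self.filesystems = {}
    def export_fs(self, name):
        # "clone + tier": a point-in-time copy, like tiering to an object
        return dict(self.filesystems[name])
    def import_fs(self, name, data):
        # reconstitute the file system from the tiered object
        self.filesystems[name] = data

bucket = S3Bucket()
onprem, cloud = Cluster(), Cluster()
onprem.filesystems["genomics"] = {"sample.bam": b"reads-v1"}

# onprem -> S3 -> cloud: burst the data out to EC2
bucket.put("genomics", onprem.export_fs("genomics"))
cloud.import_fs("genomics", bucket.get("genomics"))

# ... compute-heavy work happens in the cloud ...
cloud.filesystems["genomics"]["results.vcf"] = b"variants"

# cloud -> S3 -> onprem: bring the updates home
bucket.put("genomics", cloud.export_fs("genomics"))
onprem.import_fs("genomics", bucket.get("genomics"))
assert "results.vcf" in onprem.filesystems["genomics"]
```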

WekaIO’s RtOS, network stack, & NVMeoF

WekaIO runs under Linux as a user space application. WekaIO has implemented their own realtime O/S (RtOS) and a high performance network stack, both of which run in user space.

With their own network stack, they have also implemented NVMeoF support for (non-RDMA) Ethernet as well as InfiniBand networks. This is probably another reason they can achieve such low-latency file IO operations.

The podcast runs ~42 minutes. Liran has been around data storage systems for 20 years and as a result was very knowledgeable and interesting to talk with. Liran almost qualifies as a Greybeard, if not for the fact that he was clean shaven ;). Listen to the podcast to learn more.

Liran Zvibel, CEO and Co-Founder, WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design and development for a portfolio of rich social media applications.


Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc.in Mathematics and Computer Science from Tel Aviv University.

55: GreyBeards storage and system yearend review with Ray & Howard

In this episode, the Greybeards discuss the year in systems and storage. This year we kick off the discussion with a long running IT trend which has taken off over the last couple of years. That is, recently the industry has taken to buying pre-built appliances rather than building them from the ground up.

We can see this in all the hyper-converged solutions available today, but it goes even deeper than that. It seems to have started with the trend of organizations getting by with less man/woman power.

This led to a desire to purchase pre-built software applications and now appliances, rather than building from parts. It just takes too long to build, and lead architects have better things to do with their time than checking compatibility lists and testing and verifying that hardware works properly with software. The pre-built appliances are good enough, and doing it yourself doesn’t really provide that much of an advantage over the pre-built solutions.

Next, we see the coming of NVMe over Fabric storage systems as sort of a countertrend to the previous one. Here we see some customers paying well for special purpose hardware with blazing speed that takes time and effort to get working right, but the advantages are significant. Both Howard and I were at the Excelero SFD12 event and it blew us away. Howard also attended the E8 Storage SFD14 event, which was another example along a similar vein.

Finally, the last trend we discussed was the rise of 3D TLC NAND and the failure of 3DX and other storage class memory (SCM) technologies to make a dent in the marketplace. 3D TLC NAND is coming out of just about every fab these days, resulting in huge (but costly) SSDs in the multi-TB range. Combine these with NVMe interfaces and you have msec access to almost a PB of storage without breaking a sweat.

The missing 3DX SCM tsunami some of us predicted is mainly due to the difficulties of bringing new fab technologies to market. We saw some of this in the stumbling over 3D NAND, but the transition to 3DX and other SCM technologies is a much bigger change to new processes and technology. We all believe it will get there someday, but for the moment the industry just needs to wait until the fabs get their yields up.

The podcast runs over 44 minutes. Howard and I could talk for hours on what’s happening in IT today. Listen to the podcast to learn more.

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog, and can be found on twitter @DeepStorageNet.


Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage.com, and can be found on twitter @RayLucchesi.

50: Greybeards wrap up Flash Memory Summit with Jim Handy, Director at Objective Analysis

In this episode we talk with Jim Handy (@thessdguy), Director at Objective Analysis,  a semiconductor market research organization. Jim is an old friend and was on last year to discuss Flash Memory Summit (FMS) 2016. Jim, Howard and I all attended FMS 2017 last week  in Santa Clara and Jim and Howard were presenters at the show.

NVMe & NVMeF to the front

Although, unfortunately, the show floor was closed due to fire, there were plenty of sessions and talks about NVMe and NVMeF (NVMe over fabric). Howard believes NVMe & NVMeF are being adopted much quicker than anyone had expected. It’s already evident inside storage systems like Pure’s new FlashArray//X, Kaminario, and E8 Storage, which is already shipping block storage with NVMe and NVMeF.

Last year PCIe expanders and switches seemed like the wave of the future, but ever since then, NVMe and NVMeF have taken off. Historically, there’s been a reluctance to add capacity shelves to storage systems because of the complexity of (FC and SAS) cable connections. But with NVMeF, RoCE and RDMA, it’s now just a (40GbE or 100GbE) Ethernet connection away, considerably easier and less error prone.

3D NAND take off

Both Samsung and Micron are talking up their 64 layer 3D NAND, with the rest of the industry following. The NAND shortage has led to fewer price reductions, but eventually, when process yields turn up, the shortage will collapse and price reductions should return en masse.

The reason that vertical (3D) NAND is taking over from planar (2D) NAND is that planar NAND can’t be shrunk much more; 15nm is where it is going to stay for a long time to come. So the only way to increase capacity/chip and reduce $/Gb is up.

As with any new process technology, 3D NAND is having yield problems. But whenever the last yield issue is solved, which seems close, we should see pricing drop precipitously and (3D) NAND storage become much more plentiful.

One thing that has made increasing 3D NAND capacity that much easier is string stacking. Jim describes string stacking as creating a unit of, say, 32 layers, which you can fabricate as one piece and then top with an insulating layer. Now you can start again, stacking another 32-layer block on top, and just add another insulating layer.

The problem with more than 32-48 layers is that you have to create (dig) holes connecting all the layers, which have to be (atomically) very straight and coated with special materials. Anyone who has dug a hole knows that the deeper you go, the harder it is to keep the hole walls straight. With current technology, 32 layers seems to be about as far as they can go.

3DX and similar technologies

There’s been quite a lot of talk over the last couple of years about 3D XPoint (3DX) and what it means for the storage and server industry. Intel has released Optane client SSDs, but there are no enterprise class 3DX SSDs as of yet.

The problem is similar to 3D NAND above: current yields suck. There’s a chicken-and-egg problem with any new chip technology. You need volume to get yields up, and you need yields up to generate the volume you need. And volume with good yields generates the profits to re-invest in the cycle for the next technology.

Intel can afford to subsidize (lose money) 3DX technology until they get the yields up, knowing full well that when they do, it will become highly profitable.

The key is to price the new technology somewhere between levels in the storage hierarchy; for 3DX that means between NAND and DRAM. This does mean that 3DX will be more of a tier between memory and SSDs than a replacement for either DRAM or SSDs.

The recent emergence of NVDIMMs has provided the industry a platform (based on NAND and DRAM) where it can create the software and other OS changes needed to support this mid tier as a memory level. So when 3DX comes along as a new memory tier, they will be ready.

NAND shortages, industry globalization & game theory

Jim has an interesting take on how and when the NAND shortage will collapse.

It’s a cyclical problem, seen before in DRAM, and it’s a question of investment. When there’s an oversupply of a chip technology (like NAND), suppliers cut investments, or rather don’t grow investments as fast as they were. Ultimately this leads to a shortage, which then leads to over-investment to catch up with demand. When that investment starts to produce chips, the capacity bottleneck will collapse and prices will come down hard.

Jim believes that as 3D NAND suppliers start driving yields up and $/Gb down, 2D NAND fabs will turn to DRAM or other electronic circuitry, which will lead to a price drop there as well.

Jim mentioned that game theory explains the way the fab industry has globalized over time. As emerging countries build fabs, they must seek partners to provide the technology to produce product. They offer these companies guaranteed supplies of low priced product for years to help get the fabs online. Once this period is over, the fabs never return to home base.

This approach has led to Japan taking over DRAM & other chip production, then Korea, then Taiwan and now China. It will move again. I suppose this is one reason IBM got out of the chip fab business.

The podcast runs ~49 minutes, but Jim is a very knowledgeable chip industry expert and a great friend from multiple events. Howard and I had fun talking with him again. Listen to the podcast to learn more.

Jim Handy, Director at Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.  He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com

43: GreyBeards talk Tier 0 again with Yaniv Romem CTO/Founder & Josh Goldenhar VP Products of Excelero

In this episode, we talk with another next gen, Tier 0 storage provider. This time our guests are Yaniv Romem CTO/Founder  & Josh Goldenhar (@eeschwa) VP Products from Excelero, another new storage startup out of Israel.  Both Howard and I talked with Excelero at SFD12 (videos here) earlier last month in San Jose. I was very impressed with their raw performance and wrote a popular RayOnStorage blog post on their system (see my 4M IO/sec@227µsec 4KB Read… post) from our discussions during SFD12.

As we have discussed previously, Tier 0, next generation flash arrays provide very high performing storage at very low latencies with modest to non-existent advanced storage services. They are intended to replace server direct access SSD storage with a more shared, scalable storage solution.

In our last podcast (with E8 Storage), the guest sold a hardware Tier 0 appliance. As a different alternative, Excelero is a software defined, Tier 0 solution intended to be used on any commodity or off the shelf server hardware with high end networking and (low to high end) NVMe SSDs.

Indeed, what impressed me most about their 4M IO/sec was that the target storage system had almost 0 CPU utilization. (Read the post to learn how they did this.) Excelero mentioned that they were able to generate high (11M random 4KB) IO/sec on an Intel Core i7, desktop-class CPU. Their one need in a storage server is plenty of PCIe lanes. They don’t even need dual socket storage servers; single socket CPUs work just fine as long as the PCIe lanes are there.

Excelero software

Their intent is to bring Tier 0 capabilities out to all big storage environments. By providing a software only solution, it could easily be OEMed by cluster file system vendors or HPC system vendors to generate the amazing IO performance needed by their clients.

That’s also one of the reasons that they went with high end Ethernet networking rather than just Infiniband, which would have limited their market to mostly HPC environments. Excelero’s client software uses RoCE/RDMA hardware to perform IO operations with the storage server.

The other thing that little to no target storage server CPU utilization per IO operation gives them is the ability to scale up to 1000s of hosts or storage servers without hitting any storage system bottlenecks. Another concern eliminated by minimal target server CPU utilization is the noisy neighbor problem: it can’t happen, because there’s no target CPU processing to be shared. Yet another advantage is that bandwidth is only limited by storage server PCIe lanes and networking. A final advantage of their approach is that they can support any of the current and upcoming storage class memory devices supporting NVMe (e.g., Intel Optane SSDs).

The storage services they offer include RAID 0, 1 and 10, and a client side logical volume manager which supports multi-pathing. Logical volumes can span up to 128 storage servers but can be accessed by almost any number of hosts. And there doesn’t appear to be a specific limit on the number of logical volumes you can have.
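A logical volume striped across many storage servers boils down to address math: each logical block maps to a (server, local offset) pair. A RAID-0-style sketch of that mapping (purely illustrative; Excelero’s layout is not public):

```python
def stripe_map(lba, stripe_size, num_servers):
    """Map a logical block address to (server, local offset), RAID-0 style."""
    stripe = lba // stripe_size    # which stripe this block falls in
    server = stripe % num_servers  # stripes round-robin across servers
    # On each server, its stripes stack sequentially.
    local = (stripe // num_servers) * stripe_size + (lba % stripe_size)
    return server, local

# 128 storage servers, 8-block stripes
assert stripe_map(0, 8, 128) == (0, 0)
assert stripe_map(8, 8, 128) == (1, 0)     # next stripe, next server
assert stripe_map(1024, 8, 128) == (0, 8)  # wraps back to server 0
```

Because the mapping is pure arithmetic, the client-side volume manager can compute the target server for any IO without consulting shared metadata, which fits the near-zero target CPU story.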


They support two different protocols across the 40GbE/100GbE networks: standard NVMe over Fabric, or RDDA (Excelero’s patented, proprietary Remote Direct Disk Array access). RDDA is mainly what provides the almost non-existent target storage server CPU utilization, but even with standard NVMe over Fabric they maintain low target CPU utilization. One proviso: with NVMe over Fabric, they do add shared volume functionality to support RAID device locking and additional fault tolerance capabilities.

On Excelero’s roadmap is thin provisioning, snapshots, compression and deduplication. However, they did mention that adding advanced storage functionality like this will impede performance. Currently, their distributed volume locking and configuration metadata is not normally accessed during an IO but when you add thin provisioning, snapshots and data reduction, this metadata needs to become more sophisticated and will necessitate some amount of access during and after an IO operation.

Excelero’s client software runs in Linux kernel mode, and they don’t currently support VMware or Hyper-V. But they do support KVM as a hypervisor and would be willing to support the others if APIs were published or made available.

They also have an internal OpenStack Cinder driver, but it’s not part of the OpenStack release yet. They’re waiting for snapshots to be available before they push this into the main code base. Ditto for Docker Engine, but this is more of a beta capability today.

Excelero customer experience

One customer (NASA Ames at Moffett Field) deployed a single 2TB NVMe SSD in each of 128 hosts and had a single 256TB logical volume shared and accessed by all 128 hosts.

Another customer configured Excelero behind a clustered file system and was able to generate 30M randomized IO/sec at 200µsec latencies but, more important, 140GB/sec of bandwidth. It turns out high bandwidth is important to many big data applications that have to roll lots of data into their analytics clusters, process it, output results, and then do it all over again. Bandwidth limitations can impact the success of these types of applications.

By being software only they can be used in a standalone storage server or as a hyper-converged solution where applications and storage can be co-resident on the same server. As noted above, they currently support Linux O/Ss for their storage and client software and support any X86 Intel processor, any RDMA capable NIC, and any NVMe SSD.

Excelero GTM

Excelero is focused on the top 200 customers, which include hyper-scale providers like Facebook, Google, Microsoft and others. But hyper-scale customers have huge software teams and really only a single (or a few) very large/complex applications, so they can create/optimize Tier 0 storage for themselves.

It’s really the customers just below the hyper-scaler class that have similar needs for low latency, high IO/sec or high IO bandwidth (or both) but have 100s to 1000s of applications and can’t afford to optimize them all for Tier 0 flash. If Excelero solves sharing Tier 0 flash storage in a more general way, say as a block storage device, they solve it for any application. And if the customer insists, they could put a clustered file system or even an object storage system (who would want this?) on top of this shared Tier 0 flash storage.

These customers may currently be using NVMe SSDs within their servers as a DAS device. But with Excelero these resources can be shared across the data center. They think of themselves as a top of rack NVMe storage system.

On their website they have listed a few of their current customers, and they’re pretty large and impressive.

NVMe competition

Aside from E8 Storage, there are few other competitors in Tier 0 storage. One recently announced a move to an NVMe flash storage solution and another killed their shipping solution. We talked about what all this means to them and their market at the end of the podcast. Suffice it to say, they’re not worried.

The podcast runs ~50 minutes. Josh and Yaniv were very knowledgeable about Tier 0, storage market dynamics and were a delight to talk with.   Listen to the podcast to learn more.


Yaniv Romem CTO and Founder, Excelero

Yaniv Romem has been a technology evangelist at disruptive startups for the better part of 20 years. His passions are in the domains of high performance distributed computing, storage, databases and networking.
Yaniv has been a founder at several startups such as Excelero, Xeround and Picatel in these domains. He has served in CTO and VP Engineering roles for the most part.


Josh Goldenhar, Vice President Products, Excelero

Josh has been responsible for product strategy and vision at leading storage companies for over two decades. His experience puts him in a unique position to understand the needs of our customers.
Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks. Previous to that, his experience and passion was in large scale, systems architecture and administration with companies such as Cisco Systems. He’s been a technology leader in Linux, Unix and other OS’s for over 20 years. Josh holds a Bachelor’s degree in Psychology/Cognitive Science from the University of California, San Diego.

35: GreyBeards talk Flash Memory Summit wrap-up with Jim Handy, Objective Analysis

In this episode, we talk with Jim Handy (@thessdguy), Director at Objective Analysis, a semiconductor market research organization. Jim is an old friend and was on last year to discuss Flash Memory Summit (FMS) 2016. As Jim, Howard and Ray were all at Flash Memory Summit (FMS 2016) last week, we thought it appropriate to get together and discuss what we found interesting at the summit.

Flash is undergoing significant change. We started our discussion with which vendor had the highest density flash device. It’s not that easy to answer, given all the vendors at the show. For example, Micron’s shipping a 32GB chip, and Samsung announced a 1TB BGA. And as for devices, Seagate announced a monster 3.5″ 60TB SSD.

MicroSD cards have 16-17 NAND chips plus a mini-controller. At that level, with a 32GB chip, we could have a ~0.5TB MicroSD card in the near future. There was no discussion of pricing, but Howard’s expectation is that they will be expensive.
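The back-of-envelope math behind that ~0.5TB figure, assuming 16 stacked die at 32GB each:

```python
chips_per_card = 16  # MicroSD cards stack 16-17 NAND die
gb_per_chip = 32     # Micron's 32GB chip from the discussion
capacity_gb = chips_per_card * gb_per_chip
print(capacity_gb)   # 512 GB, i.e. ~0.5TB
```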

NVMe over fabric push

One main topic of conversation at FMS was how NVMe over fabric is emerging. There were a few storage vendors at FMS taking advantage of this, including E8 Storage and Mangstor, both showing off NVMe over Ethernet flash storage. But there were plenty of others talking NVMe over fabric and all the major NAND manufacturers couldn’t talk enough about NVMe.

Facebook’s keynote had a couple of surprises. One was their request for WORM (QLC) flash.  It appears that Facebook plans on keeping user data forever. Another item of interest was their Open Compute Project Lightning JBOF (just a bunch of flash) device using NVMe over Ethernet (see Ray’s post on Facebook’s move to JBOF). They were also interested in ganging up M.2 SSDs into a single package. And finally they discussed their need for SCM.

Storage class memory

The other main topic was storage class memory (SCM), and all the vendors talked about it. Sadly, the timeline for Intel-Micron 3D XPoint has them supplying sample chips/devices by the end of next year (YE2017) and releasing SCM devices to market the following year (2018). They did have one (hand built) SSD at the show with remarkable performance.

On the other hand, there are other SCM’s on the market, including EverSpin (MRAM) and CrossBar (ReRAM). Both of these vendors had products on display but their capacities were on the order of Mbits rather than Gbits.

It turns out they’re both using ~90nm fab technology and need to get their volumes up before they can shrink their technologies to hit higher densities. However, now that everyone’s talking about SCM, they are starting to see some product wins.  In fact, Mangstor is using EverSpin as a non-volatile write buffer.

Jim explained that 90nm is where DRAM was in 2005, though EverSpin’s/CrossBar’s bit density is better than DRAM’s was at the time. But DRAM is now on 15-10nm class technologies, and the industry sells 10B DRAM chips/year; EverSpin and CrossBar (together?) are doing more like 10M chips/year. The cost to shrink to the latest technology is ~$100M just to generate the masks required. So for these vendors, volumes have to go up drastically before capacity can increase significantly.

Also, at the show Toshiba mentioned they’re focusing on ReRAM for their SCM.

As Jim recounted, the whole SCM push has been driven by Intel and their need to keep improving the performance of memory and storage, otherwise they felt their processor sales would stall.

3D NAND is here

Just about every NAND manufacturer talked about their 3D NAND chips, ranging from 32 layers to 64 layers. From Jim’s perspective, 3D NAND was inevitable, as it was the only way to continue scaling density and reducing bit costs for NAND.

Samsung was first to market with 3D NAND as a way to show technological leadership. But now everyone’s got it and is promoting their bit density and number of layers. What their yields are is another question. But planar NAND’s days are over.

Toshiba’s FlashMatrix

Toshiba’s keynote discussed a new flash storage system called the FlashMatrix, but at press time they had yet to share their slides with the FMS team, so information on FlashMatrix was sketchy at best.

However, they had one on the floor, and it looked like a bunch of M.2 flash across an NVMe (over Ethernet?) mesh backplane with compute engines connected at the edge.

We had a hard time understanding why Toshiba would do this. Our best guess is perhaps they want to provide OEMs an alternative to SanDisk’s Infiniflash.

The podcast runs over 50 minutes and covers flash technology on display at the show and the history of SCM. I think Howard and Ray could easily spend a day with Jim and not exhaust his knowledge of Flash and we haven’t really touched on DRAM. Listen to the podcast to learn more.

Jim Handy, Memory and Flash analyst at Objective Analysis.


Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.  He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com.