Category Archives: SSD

51: GreyBeards talk hyper convergence with Lee Caswell, VP Product, Storage & Availability BU, VMware

Sponsored by:

VMware

In this episode we talk with Lee Caswell (@LeeCaswell), Vice President of Product, Storage and Availability Business Unit, VMware.  This is the second time Lee’s been on our show, the previous one back in April of last year when he was with his prior employer. Lee’s been at VMware for a little over a year now and has helped lead some significant changes in their HCI offering, vSAN.

VMware vSAN/HCI business

Many customers struggle to modernize their data centers, with funding being the primary issue. This is very similar to what happened in the early 2000s as customers started virtualizing servers and consolidating storage. But today, there’s a new option: server based/software defined storage like VMware’s vSAN, which can be deployed for little expense and grown incrementally as needed. VMware’s vSAN customer base is currently growing at a 150% CAGR, and VMware is adding over 100 new vSAN customers a week.

Many companies say they offer HCI, but few have adopted the software-only business model this entails. The transition from a hardware-software, appliance-based business model to a software-only business model is difficult and means a move from a high revenue-lower margin business to a lower revenue-higher margin business. VMware, from its very beginnings, has built a sustainable software-only business model that extends to vSAN today.

The software business model means that VMware can partner easily with a wide variety of server OEM partners to supply vSAN ReadyNodes that are pre-certified and jointly supported in the field. There are currently 14 server partners for vSAN ReadyNodes. In addition, VMware has co-designed the VxRail HCI Appliance with Dell EMC, which adds integrated life-cycle management as well as Dell EMC data protection software licenses.

As a result, customers can adopt vSAN as a build or a buy option for on-prem use and can also leverage vSAN in the cloud from a variety of cloud providers, including AWS very soon. It’s the software-only business model that sets the stage for this common data management across the hybrid cloud.

VMware vSAN software defined storage (SDS)

The advent of Intel Xeon processors and plentiful, relatively cheap SSD storage has made vSAN an easy storage solution for most virtualized data centers today. SSDs removed any performance concerns that customers had with hybrid HCI configurations. And with Intel’s latest Xeon Scalable processors, there’s more than enough power to handle both application compute and storage compute workloads.

From Lee’s perspective, there’s still a place for traditional SAN storage, but he sees it more for cold storage that is scaled independently from servers or for bare metal/non-virtualized storage environments. But everyone else running virtualized data centers really needs to give vSAN a look.

Storage vendors shifting sales

It used to be that major storage vendor sales teams would lead with hardware appliance storage solutions and then move to HCI when pushed. The problem was that a typical SAN storage sale takes 9 months to complete, followed by 3 years of limited additional sales.

To address this, some vendors have taken the approach where they lead with HCI and only move to legacy storage when it’s a better fit. With VMware vSAN, it’s a quicker sales cycle than legacy storage because HCI costs less up front and there’s no need to buy the final storage configuration with the first purchase. VMware vSAN HCI can grow as customer application needs dictate, generating additional incremental sales over time.

VMware vSAN in AWS

Recently, VMware announced VMware Cloud on AWS. What this means is that you can have vSAN storage operating in the AWS cloud just like you would on-prem. In this case, workloads can migrate from cloud to on-prem and back again with almost no changes. How the data gets from on-prem to cloud is another question.

Also, the pricing model for VMware Cloud on AWS moves to a consumption based model, where you pay for just what you use on a monthly basis. This way VMware Cloud on AWS and vSAN are billed monthly, consistent with other AWS offerings.

VMware vs. Microsoft on cloud

There’s a subtle difference in how Microsoft and VMware are adopting cloud. VMware came from an infrastructure platform and is now implementing their infrastructure on cloud. Microsoft started as a development platform and is taking their cloud development platform/stack and bringing it to on-prem.

It’s really two different philosophies in action. We now see VMware doing more for the development community with vSphere Integrated Containers (VIC), Docker Containers, Kubernetes, and Pivotal Cloud Foundry. Meanwhile, Microsoft is looking to implement the Azure stack for on-prem environments, focusing more on infrastructure. In the end, enterprises will have terrific choices as the software defined data center frees up customers’ dollars and management time.

The podcast runs ~25 minutes. Lee is a very knowledgeable individual and although he doesn’t qualify as a Greybeard (just yet), he has been in and around data center and flash storage environments throughout most of his career. From this diverse history, Lee has developed a very businesslike perspective on data center and storage technologies, and it’s always a pleasure talking with him. Listen to the podcast to learn more.

Lee Caswell, V.P. of Product, Storage & Availability Business Unit, VMware

Lee Caswell leads the VMware storage marketing team driving vSAN products, partnerships, and integrations. Lee joined VMware in 2016 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Prior to VMware, Lee was vice president of Marketing at NetApp and vice president of Solution Marketing at Fusion-IO (now SanDisk). Lee was a founding member of Pivot3, a company widely considered to be the founder of hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at Adaptec, and SEEQ Technology, a pioneer in non-volatile memory. He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

50: Greybeards wrap up Flash Memory Summit with Jim Handy, Director at Objective Analysis

In this episode we talk with Jim Handy (@thessdguy), Director at Objective Analysis, a semiconductor market research organization. Jim is an old friend and was on the show last year to discuss Flash Memory Summit (FMS) 2016. Jim, Howard and I all attended FMS 2017 last week in Santa Clara, and Jim and Howard were presenters at the show.

NVMe & NVMeF to the front

Although, unfortunately, the show floor was closed due to a fire, there were plenty of sessions and talks about NVMe and NVMeF (NVMe over fabric). Howard believes NVMe & NVMeF are being adopted much quicker than anyone had expected. It’s already evident inside storage systems like Pure’s new FlashArray//X, Kaminario and E8 Storage, which are already shipping block storage with NVMe and NVMeF.

Last year, PCIe expanders and switches seemed like the wave of the future, but ever since then, NVMe and NVMeF have taken off. Historically, there’s been a reluctance to add capacity shelves to storage systems because of the complexity of (FC and SAS) cable connections. But with NVMeF, RoCE and RDMA, it’s now just a (40GbE or 100GbE) Ethernet connection away, considerably easier and less error prone.

3D NAND take off

Both Samsung and Micron are talking up their 64 layer 3D NAND, and the rest of the industry is following. The NAND shortage has led to fewer price reductions, but eventually, when process yields turn up, the shortage will collapse and price reductions should return en masse.

The reason that vertical (3D) NAND is taking over from planar (2D) NAND is that planar NAND can’t be shrunk much more, and 15nm is going to be the place it stays at for a long time to come. So the only way to increase capacity/chip and reduce $/Gb is up.

As with any new process technology, 3D NAND is having yield problems. But whenever the last yield issue is solved, which seems close, we should see pricing drop precipitously and (3D) NAND storage become much more plentiful.

One thing that has made increasing 3D NAND capacity that much easier is string stacking. Jim describes string stacking as creating a unit of, say, 32 layers, which you can fabricate as one piece and then cover with an insulating layer. Now you can start again, stacking another 32 layer block on top, and just add another insulating layer.

The problem with more than 32-48 layers is that you have to etch holes connecting all the layers, which have to be (atomically) very straight and coated with special materials. Anyone who has dug a hole knows that the deeper you go, the harder it is to keep the hole walls straight. With current technology, 32 layers seems to be just about as far as they can go.

3DX and similar technologies

There’s been quite a lot of talk the last couple of years about 3D XPoint (3DX) and what it means for the storage and server industry. Intel has released Optane client SSDs, but there are no enterprise class 3DX SSDs as of yet.

The problem is similar to 3D NAND above: current yields suck. There’s a chicken and egg problem with any new chip technology. You need volume to get yields up, and you need yields up to generate the volume you need. And volume with good yields generates the profits to re-invest in the cycle for the next technology.

Intel can afford to subsidize (lose money on) 3DX technology until they get the yields up, knowing full well that when they do, it will become highly profitable.

The key is to price the new technology somewhere between levels in the storage hierarchy; for 3DX, that means between NAND and DRAM. This does mean that 3DX will be more of a tier between memory and SSD than a replacement for either DRAM or SSDs.

The recent emergence of NVDIMMs has provided the industry a platform (based on NAND and DRAM) where it can create the software and other OS changes needed to support this mid tier as a memory level. That way, when 3DX comes along as a new memory tier, the industry will be ready.

NAND shortages, industry globalization & game theory

Jim has an interesting take on how and when the NAND shortage will collapse.

It’s a cyclical problem seen before in DRAM, and it’s a question of investment. When there’s an oversupply of a chip technology (like NAND), suppliers cut investments, or rather don’t grow investments as fast as they were. Ultimately this leads to a shortage, which then leads to over-investment to catch up with demand. When that investment starts to produce chips, the capacity bottleneck will collapse and prices will come down hard.

Jim believes that as 3D NAND suppliers start driving yields up and $/Gb down, 2D NAND fabs will turn to DRAM or other electronic circuitry, which will lead to a price drop there as well.

Jim mentioned game theory as the way the fab industry has globalized over time. As emerging countries build fabs, they must seek partners to provide the technology to produce product. They offer these companies guaranteed supplies of low priced product for years to help get the fabs online. Once this period is over, the fabs never return to home base.

This approach has led to Japan taking over DRAM & other chip production, then Korea, then Taiwan and now China. It will move again. I suppose this is one reason IBM got out of the chip fab business.

The podcast runs ~49 minutes, but Jim is a very knowledgeable chip industry expert and a great friend from multiple events. Howard and I had fun talking with him again. Listen to the podcast to learn more.

Jim Handy, Director at Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.  He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com

43: GreyBeards talk Tier 0 again with Yaniv Romem CTO/Founder & Josh Goldenhar VP Products of Excelero

In this episode, we talk with another next gen, Tier 0 storage provider. This time our guests are Yaniv Romem, CTO/Founder, and Josh Goldenhar (@eeschwa), VP Products, from Excelero, another new storage startup out of Israel. Both Howard and I talked with Excelero at SFD12 (videos here) last month in San Jose. I was very impressed with their raw performance and wrote a popular RayOnStorage blog post on their system (see my 4M IO/sec@227µsec 4KB Read… post) from our discussions during SFD12.

As we have discussed previously, Tier 0, next generation flash arrays provide very high performing storage at very low latencies with modest to non-existent advanced storage services. They are intended to replace server-based, direct access SSD storage with a more shared, scalable storage solution.

In our last podcast (with E8 Storage), the guest sold a hardware Tier 0 appliance. As a different alternative, Excelero is a software defined, Tier 0 solution intended to run on any commodity or off the shelf server hardware with high end networking and (low to high end) NVMe SSDs.

Indeed, what impressed me most with their 4M IO/sec was that the target storage system had almost 0% CPU utilization. (Read the post to learn how they did this.) Excelero mentioned that they were able to generate high (11M random 4KB) IO/sec on an Intel Core i7, desktop-class CPU. Their one need in a storage server is plenty of PCIe lanes. They don’t even need dual socket storage servers; single socket CPUs work just fine as long as the PCIe lanes are there.

Excelero software

Their intent is to bring Tier 0 capabilities out to all big storage environments. By providing a software only solution, it can easily be OEMed by cluster file system vendors or HPC system vendors to generate the amazing IO performance needed by their clients.

That’s also one of the reasons that they went with high end Ethernet networking rather than just Infiniband, which would have limited their market to mostly HPC environments. Excelero’s client software uses RoCE/RDMA hardware to perform IO operations with the storage server.

The other thing that little to no target storage server CPU utilization per IO operation gives them is the ability to scale up to 1000s of hosts or storage servers without reaching any storage system bottlenecks. Another concern eliminated by minimal target server CPU utilization is the noisy neighbor problem: it can’t occur, because there’s no target CPU processing to be shared. Yet another advantage with Excelero is that bandwidth is only limited by storage server PCIe lanes and networking. A final advantage of their approach is that they can support any of the current and upcoming storage class memory devices supporting NVMe (e.g., Intel Optane SSDs).

The storage services they offer include RAID 0, 1 and 10 and a client side logical volume manager which supports multi-pathing. Logical volumes can span up to 128 storage servers, but can be accessed by almost any number of hosts. And there doesn’t appear to be a specific limit on the number of logical volumes you can have.

 

They support two different protocols across the 40GbE/100GbE networks: standard NVMe over Fabric or RDDA (Excelero’s patented, proprietary Remote Direct Disk Array access). RDDA is what mainly provides the almost non-existent target storage server CPU utilization. But even with standard NVMe over Fabric, they maintain low target CPU utilization. One proviso: with NVMe over Fabric, they do add shared volume functionality to support RAID device locking and additional fault tolerance capabilities.

On Excelero’s roadmap is thin provisioning, snapshots, compression and deduplication. However, they did mention that adding advanced storage functionality like this will impede performance. Currently, their distributed volume locking and configuration metadata is not normally accessed during an IO but when you add thin provisioning, snapshots and data reduction, this metadata needs to become more sophisticated and will necessitate some amount of access during and after an IO operation.

Excelero’s client software runs in Linux kernel mode, and they don’t currently support VMware or Hyper-V. But they do support KVM as a hypervisor and would be willing to support the others if APIs were published or made available.

They also have an internal OpenStack Cinder driver, but it’s not part of the main OpenStack release yet. They’re waiting for snapshots to be available before they push this into the main code base. Ditto for a Docker Engine driver, but this is more of a beta capability today.

Excelero customer experience

One customer (NASA Ames/Moffett Field) deployed a single 2TB NVMe SSD in each of 128 hosts and had a single 256TB logical volume shared and accessed by all 128 hosts.
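
As a sanity check on that configuration (a sketch; the host and drive counts are as quoted in the episode):

```python
# Pooling one 2TB NVMe SSD from each of 128 hosts into one shared logical volume.
hosts = 128
tb_per_ssd = 2
logical_volume_tb = hosts * tb_per_ssd
print(logical_volume_tb)  # 256 -- matching the 256TB logical volume described
```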

Another customer configured Excelero behind a clustered file system and was able to generate 30M random IO/sec at 200µsec latencies but, more importantly, 140GB/sec of bandwidth. It turns out high bandwidth is important to many big data applications that have to roll lots of data into their analytics clusters, process it, output results, and then do it all over again. Bandwidth limitations can impact the success of these types of applications.
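
A rough cross-check of those figures (assuming, on my part, 4KB IOs for the IO/sec number, the block size quoted elsewhere in these discussions):

```python
iops = 30_000_000
io_bytes = 4 * 1024             # assumed 4KB IO size
gb_per_sec = iops * io_bytes / 1e9
print(round(gb_per_sec, 1))     # ~122.9 GB/sec from the small-block IOs alone;
                                # larger blocks would account for the 140GB/sec figure
```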

By being software only they can be used in a standalone storage server or as a hyper-converged solution where applications and storage can be co-resident on the same server. As noted above, they currently support Linux O/Ss for their storage and client software and support any X86 Intel processor, any RDMA capable NIC, and any NVMe SSD.

Excelero GTM

Excelero is focused on the top 200 customers, which include the hyper-scale providers like Facebook, Google, Microsoft and others. But hyper-scale customers have huge software teams and really only a single or a few very large/complex applications, so they can create/optimize Tier 0 storage for themselves.

It’s really the customers just below the hyper-scale class that have similar needs for high IO/sec at low latency or high IO bandwidth (or both) but have 100s to 1000s of applications and can’t afford to optimize them all for Tier 0 flash. If Excelero solves sharing Tier 0 flash storage in a more general way, say as a block storage device, they solve it for any application. And if the customer insists, they could put a clustered file system or even an object store (who would want this?) on top of this shared Tier 0 flash storage system.

These customers may currently be using NVMe SSDs within their servers as a DAS device. But with Excelero these resources can be shared across the data center. They think of themselves as a top of rack NVMe storage system.

On their website they have listed a few of their current customers, and they’re pretty large and impressive.

NVMe competition

Aside from E8 Storage, there are few other competitors in Tier 0 storage. One recently announced a move to an NVMe flash storage solution and another killed their shipping solution. We talked about what all this means to them and their market at the end of the podcast. Suffice it to say, they’re not worried.

The podcast runs ~50 minutes. Josh and Yaniv were very knowledgeable about Tier 0, storage market dynamics and were a delight to talk with.   Listen to the podcast to learn more.


Yaniv Romem CTO and Founder, Excelero

Yaniv Romem has been a technology evangelist at disruptive startups for the better part of 20 years. His passions are in the domains of high performance distributed computing, storage, databases and networking.
Yaniv has been a founder at several startups such as Excelero, Xeround and Picatel in these domains. He has served in CTO and VP Engineering roles for the most part.


Josh Goldenhar, Vice President Products, Excelero

Josh has been responsible for product strategy and vision at leading storage companies for over two decades. His experience puts him in a unique position to understand the needs of our customers.
Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks. Previous to that, his experience and passion were in large scale systems architecture and administration with companies such as Cisco Systems. He’s been a technology leader in Linux, Unix and other OSs for over 20 years. Josh holds a Bachelor’s degree in Psychology/Cognitive Science from the University of California, San Diego.

42: GreyBeards talk next gen, tier 0 flash storage with Zivan Ori, CEO & Co-founder E8 Storage.

In this episode, we talk with Zivan Ori (@ZivanOri), CEO and Co-founder of E8 Storage, a new storage startup out of Israel. E8 Storage provides a tier 0, next generation all flash array storage solution for HPC and high end environments that need extremely high IO performance, with high availability and modest data services. We first saw E8 Storage at last year’s Flash Memory Summit (FMS 2016) and have wanted to talk with them since.

Tier 0 storage

The Greybeards discussed new tier 0 solutions in our annual yearend industry review podcast. As we saw it then, tier 0 provides lightning fast (~100s of µsec) read and write IO operations and millions of IO/sec. There are not a lot of applications that need this level of speed and quantity of IOs, but for those that do, Tier 0 storage is their only solution.

In the past, Tier 0 was essentially SSDs sitting on a PCIe bus, isolated to a single server. But today, with the emergence of NVMe protocols and SSDs, 40/50/100GbE NICs and switches, and RDMA protocols, this sort of solution can be shared across racks of servers.

There were a few shared Tier 0 solutions available in the past, but their challenge was that they all used proprietary hardware. With today’s new hardware and protocols, these new Tier 0 systems often perform as well as or much better than the old generation, but with off the shelf hardware.

E8 came to the market (emerged out of stealth and GA’d in September of 2016) after NVMe protocols, SSDs and RDMA were available in commodity hardware, and has taken advantage of all these new capabilities.

E8 Storage system hardware & software

E8 Storage offers a 2U HA appliance with 24 hot-pluggable NVMe SSDs and 8 client or host ports. The hardware appliance has two controllers, two power supplies, and two batteries. The batteries are used to hold up the DRAM write cache until it can be flushed to internal storage during a power failure. They don’t do any DRAM read caching because the performance of the NVMe SSDs is more than fast enough.

The 24 NVMe SSDs are all dual ported for fault tolerance and are hot-pluggable for better serviceability in the field. One E8 Storage system can supply up to 180TB of usable, shared NVMe flash storage.
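
Dividing that out gives a sense of the drive sizes involved (my arithmetic, not E8’s; usable capacity here ignores any RAID overhead, which would raise the raw requirement):

```python
usable_tb = 180
ssd_count = 24
tb_per_ssd = usable_tb / ssd_count
print(tb_per_ssd)  # 7.5, i.e., roughly 7.5TB-class NVMe SSDs before RAID overhead
```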

E8 Storage uses RDMA (RoCE) NICs between client servers and their storage system, which support 40GbE, 50GbE or 100GbE networking.

E8 does not do data reduction (thin provisioning, data deduplication or data compression) on their storage, so usable capacity = effective capacity. Their belief is that these services consume a lot of compute/IO, limiting IO/sec and increasing response times, and as the price of NVMe SSD capacity comes down over time, these services become less useful.

They also have client software that provides a fault tolerant initiator for their E8 storage. This client software supports MPIO and failover across controllers in the event of a controller outage. The client software currently runs on just about any flavor of Linux available today and E8 is working to port this to other OSs based on customer requests.

Storage provisioning and management is through a RESTful API, CLI or web based GUI management portal. Hardware support is supplied by E8 Storage and they offer a 3 year warranty on their system with the ability to extend this to 5 years, if needed.

One problem with today’s standard NVMe over Fabric solutions is that they lack any failover capabilities and really have no support for data protection. By developing their own client software, E8 provides fault tolerance and data protection for Tier 0 storage. They currently support RAID 0 and 5, and RAID 6 is in development.

Performance

Everyone wants native DAS-NVMe SSD storage, but unlike server Tier 0 solutions, E8 Storage’s 180TB of NVMe capacity can be shared across up to 100 servers (one customer currently has 96 servers talking to a single E8 Storage appliance). By moving this capacity out to a shared storage device, it can be made more fault tolerant, more serviceable, and be amortized over more servers. However, the problem with doing this has always been the lack of DAS-like performance.

Talking to Zivan, he revealed that a single E8 Storage system is capable of 5M IO/sec, and at that rate, the system delivers an average response time of 300µsec; at a more reasonable 4M IO/sec, the system can deliver ~120µsec response times. He said they can saturate a 100GbE network by operating at 10M IO/sec. He didn’t say what the response time was at 10M IO/sec, but with network saturation, response times probably rose dramatically.
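
Some back-of-the-envelope math on that saturation claim (the per-port arithmetic is mine, not Zivan’s; it ignores protocol overhead and assumes 4KB IOs):

```python
link_gbps = 100
link_bytes_per_sec = link_gbps * 1e9 / 8       # 12.5 GB/sec on one 100GbE link
io_bytes = 4 * 1024
iops_per_link = link_bytes_per_sec / io_bytes
print(f"{iops_per_link / 1e6:.2f}M 4KB IO/sec per 100GbE link")  # ~3.05M
# So 10M 4KB IO/sec implies 3-4 saturated 100GbE links -- consistent with
# saturating a multi-port 100GbE network rather than a single link.
```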

The other thing that Zivan mentioned was that the system delivered these response times with a very small variance (standard deviation). I believe he mentioned 1.5 to 3% standard deviations, which at 120µsec is 1.8 to 3.6µsec and even at 300µsec is only 4.5 to 9µsec. We have never seen this level of response times, response time variance and IO/sec in a single shared storage system before.
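
Converting a relative standard deviation (coefficient of variation) into an absolute spread is simple arithmetic; a quick sketch using the latency figures from the episode:

```python
def abs_spread_us(mean_us, cv_pct):
    """Absolute standard deviation (µsec) from a mean latency and a CV percentage."""
    return mean_us * cv_pct / 100

print(abs_spread_us(120, 1.5), abs_spread_us(120, 3))  # 1.8 3.6
print(abs_spread_us(300, 1.5), abs_spread_us(300, 3))  # 4.5 9.0
```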

E8 Storage

Zivan and many of his team previously came from IBM XIV storage. As such, they have been involved in developing and supporting enterprise class storage systems for quite a while now. So E8 Storage knows what it takes to create products that can survive in 24X7, high end, highly active and demanding environments.

E8 Storage currently has customers in production in the US. They are seeing primary interest in their system from the HPC, FinServ, and retail industries, but any large customer could have a need for something like this. They sell their storage for $2 to $3/GB.
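
At those prices, a fully configured system works out to (my list-price arithmetic, using decimal TB; actual quotes would vary):

```python
usable_gb = 180 * 1000          # 180TB usable, in decimal GB
low_per_gb, high_per_gb = 2, 3  # $/GB range as quoted
print(usable_gb * low_per_gb, usable_gb * high_per_gb)  # 360000 540000
# i.e., roughly $360K to $540K for a full 180TB appliance
```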

The podcast runs ~42 minutes, and Zivan was easy to talk with and has a good grasp of the storage industry technologies.  Listen to the podcast to learn more.

Zivan Ori CEO & Co-Founder, E8 Storage

Mr. Zivan Ori is the co-founder and CEO of E8 Storage. Before founding E8 Storage, Mr. Ori held the position of IBM XIV R&D Manager, responsible for developing the IBM XIV high-end, grid-scale storage system, and served as Chief Architect at Stratoscale, a provider of hyper-converged infrastructure.

Prior to IBM XIV, Mr. Ori headed Software Development at Envara (acquired by Intel) and served as VP R&D at Onigma (acquired by McAfee).

40: Greybeards storage industry yearend review podcast

In this episode, the Greybeards discuss the year in storage, and naturally we kick off with the consolidation trend in the industry and the big one last year, the DELL-EMC acquisition. How the high margin EMC storage business is going to work in a low margin company like Dell is the subject of much speculation. That, and which of the combined companies’ storage products will make it through the transition, make for interesting discussions. And finally, what exactly Dell’s long term strategy is remains another question.

We next turn to the coming of age of object storage. A couple of years ago, object storage was being introduced to a wider market but few wanted to code to RESTful interfaces. Nowadays, that seems to be less of a concern and the fact that one can have onsite/offsite/cloud based object storage repositories from open source, proprietary solutions and everything in between is making object storage a much more appealing option to enterprise IT.

Finally, we discuss the new Tier 0. What with NVMe SSDs and the emergence of NVMe over Fabric coming out last year, Tier 0 has never looked so promising. You may recall that Tier 0 was hot about 5 years ago, with TMS and Violin and others coming out with lightning fast storage IO. But with DELL-EMC DSSD; startups (E8 Storage, Mangstor, Apeiron Data Systems, and others); NVDIMMs, CrossBar, and Everspin coming out with denser offerings; and other SCM (Micron, HPE, IBM, others?) technologies on the horizon, Tier 0 has become red hot again.

Sorry about the occasional airplane noise and other audio anomalies. The podcast runs  over 47 minutes. Howard and I could talk for hours on what’s happening in the storage industry. Listen to the podcast to learn more.

Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage.com, and can be found on twitter @RayLucchesi.

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog, and can be found on twitter @DeepStorageNet.