0102 GreyBeards talk big memory data with Charles Fan, CEO & Co-founder, MemVerge

It’s been a couple of months since we last talked with a startup, so the GreyBeards thought it was time. We reached out to Charles Fan (@CharlesFan14), CEO and Co-Founder of MemVerge to find out about their big memory solution or, as Charles likes to call it, “software defined (big) memory”. Although neither Matt nor I had ever talked with Charles before, he’s been just about everywhere in the storage industry throughout his career.

If you have been following my RayOnStorage blog you will have seen a post (Need memory, Intel’s Optane DC PM to the rescue) last year on Intel’s new Persistent Memory solutions using 3D XPoint, called Optane DC PM (data center, persistent memory). At the announcement, Intel made available a couple of ways customers could use Optane DC PM (PMem).

Optane DC PM primer

Native Optane DC PM access modes include:

  • A Memory Mode, which has PMem emulating a large volatile memory space and uses DRAM (in a defined DRAM-to-PMem ratio) as a cache in front of the Optane DC PM memory behind it.
  • An Application Direct (AppDirect) Mode, which supports two sub-modes: a storage device mode that uses PMem to emulate a persistent, 4KB block storage device; and a byte-addressable, persistent memory address space mode that uses PMem to emulate a large, non-volatile memory space. AppDirect memory content persists across boots, power failures and other system crashes.

Native PMem modes are selected in the BIOS and are deployed at boot time. Optane DC PM on a server can be split up across any of the three modes. Currently, with Optane DC PM (Gen 1), a single server can have up to 6TB of DC PM, which will go up to 8TB with Optane DC PM Gen 2, coming out later this year.
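For the curious, here’s a minimal sketch of what byte-addressable AppDirect programming can look like, using Intel’s open source PMDK libpmem library. This is one common way to program PMem, not necessarily how any vendor above does it, and the file path is illustrative (it assumes a DAX-mounted PMem filesystem):

```c
/* Minimal AppDirect sketch using PMDK's libpmem (compile with -lpmem).
 * The /mnt/pmem path is an assumption -- any DAX-mounted PMem filesystem
 * will do. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a small file on the PMem mount directly into our address space */
    char *buf = pmem_map_file("/mnt/pmem/example", 4096,
                              PMEM_FILE_CREATE, 0666,
                              &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary CPU loads/stores -- no read()/write() system calls involved */
    strcpy(buf, "this string persists across reboots");

    /* Flush CPU caches so the stores are durable on the PMem media */
    if (is_pmem)
        pmem_persist(buf, mapped_len);
    else
        pmem_msync(buf, mapped_len);   /* fallback when not on real PMem */

    pmem_unmap(buf, mapped_len);
    return 0;
}
```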

MemVerge Memory Machine

MemVerge has written a “software defined memory” service called the Memory Machine, which sits above the Intel Optane DC PM in server(s) and provides application access AND data services for PMem.

Charles likens their Memory Machine to what VMware did for CPU cores, i.e., they provide memory virtualization. This, Charles believes, will bring on the age of Big Memory applications. He feels that PMem, with Memory Machine on top of it, will eliminate the need for high-performance, tier 0 storage. Tier 0 storage is a ~$10B market today, which he sees shifting from networked storage to PMem solutions.

Memory Machine Data Services

One of the data services that the Memory Machine offers is a PMem snapshot service. Thick or thin PMem snapshots can be taken as often as once per minute, with no set limit on their number (although for thick snapshots, PMem space availability may limit how many you can keep). Thin snapshots take little time to accomplish and are very PMem space efficient, while thick snapshots are a PMem-to-PMem copy of the data, which takes longer and consumes double the PMem of the region being snapshot.
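To make the thick vs. thin distinction concrete, here’s a purely illustrative sketch of the two approaches at page granularity. MemVerge hasn’t published its snapshot internals on the podcast, so the structures below are assumptions, not Memory Machine’s implementation:

```c
/* Illustrative thick vs. thin (copy-on-write) snapshots of a memory region.
 * This is a teaching sketch, not MemVerge's actual mechanism. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE   4096
#define NPAGES 256                       /* a 1MiB region for the example */

static char region[NPAGES * PAGE];       /* the memory being snapshot */

/* Thick snapshot: copy every page up front -- slower, doubles memory use */
static char *thick_snapshot(void)
{
    char *copy = malloc(sizeof(region));
    memcpy(copy, region, sizeof(region));
    return copy;
}

/* Thin snapshot: save nothing up front; preserve a page's old contents only
 * the first time that page is written after the snapshot was taken */
struct thin_snapshot {
    char *saved[NPAGES];                 /* old page contents, NULL if untouched */
};

/* All writes go through here so the snapshot can copy-on-write (assumes each
 * write stays within one page, to keep the example short) */
static void thin_write(struct thin_snapshot *s, size_t off,
                       const char *src, size_t len)
{
    size_t page = off / PAGE;
    if (s->saved[page] == NULL) {        /* first write to this page: save it */
        s->saved[page] = malloc(PAGE);
        memcpy(s->saved[page], region + page * PAGE, PAGE);
    }
    memcpy(region + off, src, len);      /* then apply the write */
}

int main(void)
{
    struct thin_snapshot snap = { {0} };
    thin_write(&snap, 0, "hello", 5);    /* only page 0 gets preserved */

    char *full = thick_snapshot();       /* by contrast, copies all 256 pages */
    printf("thin snapshot preserved 1 page; thick snapshot copied %d pages\n",
           NPAGES);
    free(full);
    return 0;
}
```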

One significant use case for PMem snapshots is checkpointing for crash recovery. Charles mentioned that many securities and financial analysis firms use KDB as a streaming database service to monitor/analyze market activity and provide automated trading and other market services. These firms are always trying to gain an advantage through speed and reduced latency and, as a result, have moved their time-sensitive processing to in-memory data structures/databases.

However, because checkpointing for crash recovery takes time, they usually checkpoint their in-memory databases only once a day (after market close) and maintain a log of database transactions on SSD. If there’s a system crash, they reload the last checkpoint and replay all the transaction logs since that checkpoint to bring their in-memory database back to the point of the crash. Given the number of transactions these firms do, this sort of crash recovery can take hours.

With Memory Machine, these customers can take in-memory checkpoints every minute and, in the event of a crash, only have to replay a minute’s worth of transaction logs, which takes almost no time, to get back up.
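Here’s a small, generic sketch of the checkpoint-plus-log recovery pattern being described (not KDB’s or Memory Machine’s code; the file names and record format are made up). The point is that recovery time is dominated by how many log records have to be replayed since the last checkpoint, which is exactly what per-minute checkpoints shrink:

```c
/* Checkpoint + transaction-log crash recovery, in miniature.
 * Illustrative only -- db.ckpt/db.log and the record format are assumptions. */
#include <stdio.h>
#include <string.h>

#define NACCOUNTS 1024

static long balance[NACCOUNTS];           /* the "in-memory database" */

/* One log record: a delta applied to one account since the last checkpoint */
struct logrec { int acct; long delta; };

static void save_checkpoint(const char *path)
{
    FILE *f = fopen(path, "wb");
    fwrite(balance, sizeof(balance[0]), NACCOUNTS, f);
    fclose(f);
}

static void append_log(const char *path, int acct, long delta)
{
    struct logrec r = { acct, delta };
    FILE *f = fopen(path, "ab");
    fwrite(&r, sizeof(r), 1, f);
    fclose(f);
}

/* Recovery step 1: reload the last full checkpoint (one sequential read) */
static void load_checkpoint(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f) { fread(balance, sizeof(balance[0]), NACCOUNTS, f); fclose(f); }
}

/* Recovery step 2: re-apply every transaction logged after the checkpoint.
 * This is the part that takes hours after a once-a-day checkpoint but only
 * seconds after a once-a-minute checkpoint. */
static long replay_log(const char *path)
{
    FILE *f = fopen(path, "rb");
    struct logrec r;
    long n = 0;
    if (!f) return 0;
    while (fread(&r, sizeof(r), 1, f) == 1) {
        balance[r.acct % NACCOUNTS] += r.delta;
        n++;
    }
    fclose(f);
    return n;
}

int main(void)
{
    remove("db.log");                     /* start the demo fresh */

    /* Normal operation: checkpoint, then log each transaction as it happens */
    save_checkpoint("db.ckpt");
    append_log("db.log", 7, +100);
    append_log("db.log", 7, -30);

    /* --- simulated crash: all in-memory state is lost --- */
    memset(balance, 0, sizeof(balance));

    /* Recovery: reload the checkpoint, then replay the log written since it */
    load_checkpoint("db.ckpt");
    long replayed = replay_log("db.log");
    printf("recovered: balance[7]=%ld after replaying %ld log records\n",
           balance[7], replayed);
    return 0;
}
```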

Other environments do similar checkpoint crash recoveries, all of which could also take advantage of PMem snapshots to checkpoint more frequently. Charles mentioned rendering farms on the podcast, but long-running scientific (HPC) simulations and others use checkpoints for crash recovery as well.

Another data (or application) service offered by Memory Machine is application cloning. Most in-memory applications are single threaded, meaning they can only take advantage of a single CPU core (thread). In order to speed up processing, customers must shard (split up) or copy their database and application onto other servers/CPUs/cores to provide more processing power. Memory Machine can use its thick or thin snapshots to clone applications in seconds.

Charles also mentioned that Memory Machine offers PMem dynamic reconfiguration. That is, instead of having to make BIOS changes and reboot server(s) to re-allocate PMem across different applications, Memory Machine is allocated 100% of the PMem at boot time; then, on demand, at any time while it’s operating, operators can use MemVerge’s GUI/CLI to carve the PMem up into any number of application memory spaces. As application demand for in-memory data changes, operators can use Memory Machine to re-allocate PMem to keep up.

Memory Machine also supports PMem clustering, or scaling across servers. With the current 6TB (and soon 8TB) per server PMem limit, some customer applications still run out of memory. Memory Machine can cluster or aggregate PMem across up to 32 servers to support a single, larger PMem address space of 192TB (Gen 1) or 256TB (Gen 2) of DC PM. The Memory Machine uses an RDMA (RoCE Ethernet or InfiniBand) cluster interconnect, which adds ~1 microsecond of overhead to access PMem in another server. This comes with automatic PMem data tiering across DRAM, local (on the server) PMem and remote (across the cluster interconnect) PMem.

Charles mentioned that another data service provided by Memory Machine is (synchronous or asynchronous) replication. One use case for replication is to create a pub-sub service for market data.

Charles believes that in-memory databases and data processing workloads are just starting to become popular. Besides KDB and rendering, other data processing such as AI training/inferencing, Redis applications, and other database systems can take advantage of large, in-memory data structures to speed up their data processing.

MemVerge’s EAP (early access program) opened up recently (5/19/2020). Charles suggested that anyone using large, in-memory data processing take a look at what the Memory Machine can do and contact them to sign up.

The podcast runs ~45 minutes. Charles was very articulate as well as knowledgeable about the technology and its applications. He was great to talk tech with. Matt and I had a fun time talking Optane DC PM and Memory Machine functionality/applications with him. Listen to the podcast to learn more.

Charles Fan, CEO & Co-founder, MemVerge

Charles Fan is co-founder and CEO of MemVerge. Prior to MemVerge, Charles was a SVP/GM at VMware, founding the storage business unit that developed the Virtual SAN product.

Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO.

Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.

78: GreyBeards YE2018 IT industry wrap-up podcast

In this, our yearend industry wrap-up episode, we discuss trends and technology impacting the IT industry in 2018 and what we see ahead for 2019. First up is NVMeoF.

NVMeoF has matured

In prior years, NVMeoF was coming from startups, but last year it was the major vendors, with IBM FlashSystem, Dell EMC PowerMAX and NetApp AFF all releasing new NVMeoF storage systems. Pure Storage was arguably earliest with their NVMeoF JBOF.

Dell EMC, IBM and NetApp were not far behind this curve and no doubt see it as an easy way to reduce response time without having to rip and replace enterprise fabric infrastructure.

In addition, NVMeoF standards have finally started to stabilize. With the gang of startups, standards weren’t as much of an issue as they were more than willing to lead, ahead of the standards. But major storage vendors prefer to follow behind standards committees.

As another example, VMware showed off an NVMeoF JBOF for vSAN. A JBOF like this improves vSAN storage efficiency for small clusters. Howard described how this works: with vSAN having direct access to shared storage, it can reduce data and server protection requirements for storage, especially for the small server clusters that are becoming more popular these days for hosting application clusters.

The other thing about NVMeoF storage is that NVMe SSDs have also become very popular. We are seeing them come out in everyone’s servers and storage systems. Servers (and storage systems) hosting 24 NVMe SSDs are just not that unusual anymore. For the price of a PCIe switch, one can have blazingly fast, direct access to TBs of NVMe SSD storage.

HCI reaches critical mass

HCI has also moved out of the shadows. We recently heard news that HCI is outselling CI. Howard and I attribute this to the advances made in VMware’s vSAN 6.2 and the appliance-ification of HCI. That, and we suppose NVMe SSDs (see above).

HCI makes an awful lot of sense for the application clusters that VMware is touting these days. CI was easy, but an HCI appliance cluster is much simpler to deploy and manage.

For VMware HCI, vSAN Ready Nodes are available from just about any server vendor in existence. With ready nodes, VARs and distributors can offer an HCI appliance in the channel, just like the majors. Yes, it’s not the same as a vendor-supplied appliance and doesn’t have the same level of software or service integration, but it’s enough.

[If you want to learn more, Howard is doing a series of deep dive webinars/classes on HCI as part of his friend Ivan’s ipSpace.net. The 1st 2hr session was recorded 11 December, part 2 goes live 22 January, and the final installment on 5 February. The 1st session is available on demand to subscribers. Sign up here]

Computational storage finally makes sense

Howard and I 1st saw computational storage at FMS18 and we did a podcast with Scott Shadley of NGD Systems. Computational storage is an SSD with spare ARM cores and DRAM that can be used to run any storage-intensive Linux application or Docker container.

Because it’s running in the SSD, it has lightning-fast (even faster than NVMe) access to all the data on the SSD. Indeed, with 10s to 1000s of computational storage SSDs in a rack, each with multiple ARM cores, you can have many 1000s of cores available to perform your data-intensive processing. Almost like GPUs, only for IO access to storage (SPUs?).

We tried this at one vendor in the 90s, executing some database and backup services outboard but it never took off. Then in the last couple of years (Dell) EMC had some VM services that you could run on their midrange systems. But that didn’t seem to take off either.

The computational storage we’ve seen all runs Linux. And with data-intensive applications coming from everywhere these days, and all the spare processing power in SSDs, it might finally make sense.

Futures

Finally, we turned to what we see coming in 2019. Howard was at an Intel Analyst event where they discussed Optane DIMMs. Our last podcast of 2018 was with Brian Bulkowski of Aerospike who discussed what Optane DIMMs will mean for high performance database systems and just about any memory intensive server application. For example, affordable, 6TB memory servers will be coming out shortly. What you can do with 6TB of memory is another question….

Howard Marks, Founder and Chief Scientist, DeepStorage

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog and can be found on twitter @DeepStorageNet.

Raymond Lucchesi, Founder and President, Silverton Consulting

Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage.com, and can be found on twitter @RayLucchesi. Signup for SCI’s free, monthly e-newsletter here.

77: GreyBeards talk high performance databases with Brian Bulkowski, Founder & CTO, Aerospike

In this episode we discuss high performance databases and the storage needed to get there, with Brian Bulkowski, Founder and CTO of Aerospike. Howard met Brian at an Intel Optane event last summer and thought he’d be a good person to talk with. I couldn’t agree more.

Howard and I both thought Aerospike was an in memory database but we were wrong. Aerospike supports in memory, DAS resident and SAN resident distributed databases.

Database performance is all about the storage (or memory)

When Brian first started Aerospike, they discovered that other enterprise database vendors were using fast-path SAS SSDs for backend storage, and so that’s where Aerospike started with its storage.

As NVMe SSDs came out, Brian expected higher performance but wasn’t too impressed with the real performance of NVMe SSDs as compared to SAS SSDs. However, lately the SSD industry has bifurcated into fast, low-capacity (NVMe) SSDs and slow, large-capacity (SAS) SSDs. And over time the Linux kernel (4.4 and above) has sped up its NVMe IO stack. So now he has become more of a proponent of NVMe SSDs for high-performing database storage.

In addition to SAS and NVMe SSDs, Aerospike supports SAN storage. One recent large customer uses SAN shared storage and loves the performance. Moreover, Aerospike also offers an in memory database option for the ultimate in high performance (low capacity) databases.

Write IO performance

One thing that Aerospike is known for is their high performance under mixed R:W workloads. Brian says just about any database can perform well with an 80:20 R:W IO mix, but at 50:50 R:W, most databases fall over.

Aerospike did detailed studies of SSD performance under high write IO and used SSD native APIs to understand what exactly was going on with SAS SSDs. Today, they understand when SSDs go into garbage collection and can quiesce IO activity to them during these slowdowns. Similar APIs are available for NVMe SSDs.
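Conceptually, once you can tell that a drive is busy collecting, the write path can simply steer new writes somewhere else until it’s done. The sketch below is a stand-in for that idea, not Aerospike’s code; device_in_gc() is a hypothetical placeholder for the native/vendor SSD query, which the podcast doesn’t detail:

```c
/* Hand-wavy sketch of steering writes away from an SSD that is garbage
 * collecting. Not Aerospike's code: device_in_gc() stands in for whatever
 * native SSD API reports GC activity. */
#include <stdbool.h>
#include <stdio.h>

#define NDEVICES 4

/* Hypothetical query -- in reality this comes from a native/vendor SSD API */
static bool device_in_gc(int dev)
{
    return dev == 2;                     /* pretend device 2 is collecting */
}

/* Pick a device for the next write, round-robin, skipping any device that is
 * busy collecting so the write never lands behind a long GC stall */
static int pick_write_device(void)
{
    static int next = 0;
    for (int tries = 0; tries < NDEVICES; tries++) {
        int dev = next;
        next = (next + 1) % NDEVICES;
        if (!device_in_gc(dev))
            return dev;
    }
    return -1;                           /* all devices busy: caller must wait */
}

int main(void)
{
    for (int i = 0; i < 8; i++)
        printf("write %d -> device %d\n", i, pick_write_device());
    return 0;
}
```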

Optane memory

The talk eventually turned to Optane DIMMs (3D XPoint memory). With Optane DIMMs, server memory address space will increase from 1TB to 6TB. From Brian’s perspective this is still not enough to host a copy of a typical database, but it would suffice to cache a database index, which is exactly how they are going to use Optane DIMMs.

Optane DIMMs are accessed via PMEM (an Intel open-source memory access API), which can specify caching (L1-L2-L3) characteristics so that the processors’ data and instruction caching tiers don’t get flooded with database information. Aerospike has done this for in-memory databases in the past; it just requires a different API.
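As one example of the kind of cache-behavior control this programming model exposes, PMDK’s libpmem offers non-temporal copies that stream data to PMem without dragging it through the L1/L2/L3 caches. Whether Aerospike uses this exact API is an assumption on our part, and the file path below is illustrative:

```c
/* Streaming an index entry into PMem with a non-temporal copy (PMDK libpmem,
 * compile with -lpmem). The path and entry format are illustrative only. */
#include <libpmem.h>
#include <stdio.h>

int main(void)
{
    size_t len;
    int is_pmem;

    char *index = pmem_map_file("/mnt/pmem/index", 1 << 20, PMEM_FILE_CREATE,
                                0666, &len, &is_pmem);
    if (index == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Non-temporal copy: make the entry durable on PMem without polluting
     * the CPU's data caches with index traffic */
    const char entry[] = "key:0042 -> device 3, block 118734";
    pmem_memcpy(index, entry, sizeof(entry), PMEM_F_MEM_NONTEMPORAL);

    pmem_unmap(index, len);
    return 0;
}
```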

They support data protection for DAS and in-memory databases through mirroring (dual redundancy). And because Aerospike was developed as a distributed database, data can be sharded across multiple servers to support higher, parallelized performance.

With Optane DIMMs being 1000X faster than NVMe SSDs, the performance bottleneck has now moved back to the network. Given the dual redundancy data protection scheme, any data written on one server would also need to be written (across the network) to another server.
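Here’s a toy illustration of sharding plus dual redundancy (not Aerospike’s actual partitioning code): hash the key to pick a primary node, then mirror the record to a second node, and that second copy is the write that has to cross the network:

```c
/* Toy shard-and-mirror placement: the hash picks the primary, the next node
 * gets the mirror copy. Illustrative only, not Aerospike's partitioning. */
#include <stdint.h>
#include <stdio.h>

#define NNODES 4

/* FNV-1a string hash -- any reasonable hash works for the illustration */
static uint32_t hash_key(const char *key)
{
    uint32_t h = 2166136261u;
    for (; *key; key++) { h ^= (uint8_t)*key; h *= 16777619u; }
    return h;
}

static void place(const char *key, int *primary, int *mirror)
{
    uint32_t h = hash_key(key);
    *primary = h % NNODES;               /* shard: which node owns the key */
    *mirror  = (*primary + 1) % NNODES;  /* dual redundancy: one extra copy */
}

int main(void)
{
    const char *keys[] = { "trade:1001", "trade:1002", "quote:0077" };
    for (int i = 0; i < 3; i++) {
        int p, m;
        place(keys[i], &p, &m);
        printf("%-12s -> primary node %d, mirror node %d (network write)\n",
               keys[i], p, m);
    }
    return 0;
}
```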

Data consistency in databases

This brought us around to the subject of database consistency. Brian said Aerospike database consistency for reads is completely parameterized, e.g., one can specify anything from linear (database-wide) consistency down to session-level consistency, with some steps in between. Aerospike is always 100% write consistent, but read consistency can be relaxed for better performance.

Howard and I took a deep breath and said data has to be 100% consistent. Brian disagreed, and in fact, historically relational databases were not fully read consistent. Somehow this felt like a religious discussion and, in the end, we determined that database consistency is just another knob to turn if you want high performance.
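To show what “consistency as a knob” can look like at the API level, here’s a generic sketch (not the Aerospike client API) where the read call simply takes the desired consistency level as a parameter, trading strength for latency:

```c
/* Generic read-consistency knob, for illustration only -- this is not the
 * Aerospike client API. Writes are assumed fully consistent throughout. */
#include <stdio.h>

enum read_consistency {
    READ_LINEARIZABLE,   /* database-wide ordering: strongest, slowest */
    READ_SESSION,        /* a session always sees its own writes */
    READ_RELAXED         /* any replica: fastest, may be slightly stale */
};

/* Stand-in read routine: the level decides how much coordination happens */
static const char *db_read(const char *key, enum read_consistency level)
{
    (void)key;                           /* a real client would look this up */
    switch (level) {
    case READ_LINEARIZABLE: return "coordinated, cluster-wide ordered read";
    case READ_SESSION:      return "read that reflects this session's writes";
    default:                return "read from the nearest replica";
    }
}

int main(void)
{
    printf("%s\n", db_read("trade:1001", READ_LINEARIZABLE));
    printf("%s\n", db_read("trade:1001", READ_RELAXED));
    return 0;
}
```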

Brian mentioned that Aerospike is available in an open source edition which anyone can access and download. He suggested we tell our DBA friends about it, maybe, if we have any…

The podcast runs ~44 minutes. Brian’s been around databases for a long time and, seemingly, most of that time has been spent figuring out the best ways to use storage to gain better performance. He has a great perspective on NVMe vs. SAS SSD performance as well as (real) memory vs. SCM performance, which we all need to understand better as SCM rolls out. Barring possibly the consistency discussion, Brian was also easy to talk with. Listen to our podcast to learn more.

Brian Bulkowski, Founder and CTO, Aerospike

Brian is a Founder and the CTO of Aerospike. With almost 30 years in Silicon Valley, his motivation for starting Aerospike was the confluence of rapidly advancing, lower-cost flash storage technology that wasn’t being fully leveraged by database systems, the scaling limitations of sharded MySQL systems, and the need for a new distributed database.

He was able to see these needs as both a Lead Engineer at Novell and Chief Architect of Cable Solutions at Liberate, where he built a high-performance embedded networking stack and high-scale broadcast server infrastructure.