116: GreyBeards talk VCF on VxBlock 1000 with Martin Hayes, DMTS, Dell Technologies

Sponsored By:

This past week, we had a great talk with Martin Hayes (@hayes_martinf), Distinguished Member Technical Staff at Dell Technologies, about running VMware Cloud Foundation (VCF) on VxBlock 1000 converged infrastructure (CI). It used to be that Cloud Foundation required VMware vSAN primary storage, but that changed a few years ago. When that happened, the Dell Technologies team saw it as a great opportunity to support VCF on VxBlock CI.

This is the first GreyBeards podcast for Martin, but he was extremely knowledgeable about VxBlock and Cloud Foundation technologies. He's been a technical product manager for VxBlock converged infrastructure at Dell Technologies for many years. He's an expert on Cloud Foundation and knows an awful lot more about VMware NSX-T networking than seems reasonable (a good thing). In any case, Martin's expertise covers the whole gamut of VCF services as well as VxBlock 1000 infrastructure. The podcast is a bit longer than our normal sponsored podcast, but there was a lot of information to cover. Listen to the podcast to learn more.

With VCF enabling primary storage on networked storage systems, all the storage vendors in the world gave a mighty cheer. But VMware Cloud Foundation still requires vSAN servers to run its management domain. Late in 2020, the Dell Technologies VxBlock 1000 team released a new software-defined version of its Advanced Management Platform (AMP) that runs on vSAN Ready Nodes. AMP is VxBlock's management platform, but it also runs the management domains for VCF and NSX-T.

For workload domains, VxBlock 1000 offers Cisco UCS M5 rack and blade servers, which can be configured to support just about any workload a data center needs.

Historically, VMware vSphere's DR problems weren't so much storage replication issues as networking problems. But NSX-T and VCF seem to have solved that.

And with vRealize Automation plugins and NSX-T APIs, customers can have zero-touch network provisioning, which enables IaaS or infrastructure as code for their data centers.

VMware vVols are now available with Dell EMC PowerMax storage. So now VxBlock 1000 customers can use vSphere storage policy-based management (SPBM) as well as automated vVol replication for data on PowerMax.

VMware NSX-T implements Application Virtual Networks (AVNs) using a GENEVE overlay network, which makes extensive use of encapsulation. But where there's encapsulation, de-encapsulation must follow to reach outside networks. All this (encapsulation on ingress, de-encapsulation on egress) is done through NSX-T Edge clusters.
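For the curious, here's a minimal sketch of what that encapsulation looks like on the wire. It packs the 8-byte GENEVE base header from RFC 8926; the VNI value is just an example:

```python
import struct

def geneve_header(vni: int, protocol: int = 0x6558, opt_len_words: int = 0) -> bytes:
    """Pack the 8-byte GENEVE base header (RFC 8926).

    Byte 0:   version (2 bits, currently 0) | options length in 4-byte words (6 bits)
    Byte 1:   O (control) and C (critical options) flags | 6 reserved bits
    Bytes 2-3: protocol type of the inner payload (0x6558 = Transparent Ethernet Bridging)
    Bytes 4-7: 24-bit Virtual Network Identifier (VNI) + 8 reserved bits
    """
    ver_optlen = (0 << 6) | (opt_len_words & 0x3F)
    flags = 0x00                       # O and C flags clear for a plain data packet
    vni_field = (vni & 0xFFFFFF) << 8  # VNI occupies the top 24 bits of the last word
    return struct.pack("!BBHI", ver_optlen, flags, protocol, vni_field)

# An NSX-T Edge node strips this header on egress to reach external (VLAN) networks.
hdr = geneve_header(vni=5001)
assert len(hdr) == 8
```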

The net result of all this is that VMware customers have more choice: they can now run VCF on HCI or CI. And with VxBlock 1000 CI, VCF customers can select best-of-breed components for each tier of their 3-tier infrastructure.

Martin Hayes, DMTS, Dell Technologies

Martin Hayes is a Technical Product Manager at Dell Technologies, where he develops and executes data center product strategies that incorporate virtualization, software-defined networking (SDN) and converged systems.

Previously, he served in network advisory and architect roles at Dell EMC, converged systems pioneer VCE and Irish broadband provider eircom.

115: GreyBeards talk database acceleration with Moshe Twitto, CTO & Co-founder, Pliops

We seem to be on a computational tangent this year. So we thought it best to talk with Moshe Twitto, CTO and Co-Founder at Pliops (@pliopsltd). We had first seen them at SFD21 (see videos of their sessions here), and their talk on how they could speed up database IO was pretty impressive. Essentially, they have a database/storage accelerator board that speeds up block store IO to NVMe SSDs and also provides a key-value store IO accelerator.

Moshe was very knowledgeable about the technology and had previously worked in Samsung's SSD group. He knew a lot about what happens under the covers of an SSD and what it takes to speed up IO. It turns out that many in-memory databases use persistent key-value stores to persist data or to operate in non- (or partial-) memory mode. Listen to the podcast to learn more.

The Pliops board plugs into the PCIe bus and accelerates IO to NVMe SSDs connected to that bus, or it can accelerate IO to a JBoF networked behind it. The board uses FPGA(s), NVDIMMs of their own design, and DRAM to accelerate database IO to NVMe SSDs.

Pliops operates in one of two modes: as a key-value store or as a block store. The key-value store builds on block store capabilities, so we start with the latter.

In block mode, Pliops provides inline hardware data compression and encryption. Compression requires support for variable-length blocks on the backend SSDs, so they pack multiple compressed blocks into each physical block. They also use a virtualization service to map host LBAs to physical block addresses (using an internal key-value store). Hardware inline encryption is provided on a per-LUN (or namespace) basis, which could enable each database to have its own key. They have a root-of-trust secret key used to encrypt customer namespace (database) keys.
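To make the LBA virtualization idea concrete, here's a toy sketch (our illustration, not Pliops' actual design) of packing variable-length compressed blocks into fixed-size physical blocks behind a host-LBA map:

```python
import zlib

PHYS_BLOCK = 4096  # fixed physical block size on the backend SSD

class PackedBlockStore:
    """Toy model of inline compression with LBA virtualization.

    Compressed (variable-length) host blocks are packed back-to-back into
    fixed-size physical blocks; an internal map records where each host LBA
    landed: (physical block number, byte offset, compressed length).
    """
    def __init__(self):
        self.phys = bytearray()  # stand-in for the SSD's media
        self.lba_map = {}        # host LBA -> (phys block, offset, length)

    def write(self, lba: int, data: bytes):
        comp = zlib.compress(data)
        start = len(self.phys)           # pack right after the previous block
        self.phys.extend(comp)
        self.lba_map[lba] = (start // PHYS_BLOCK, start % PHYS_BLOCK, len(comp))

    def read(self, lba: int) -> bytes:
        pbn, off, length = self.lba_map[lba]
        start = pbn * PHYS_BLOCK + off
        return zlib.decompress(bytes(self.phys[start:start + length]))

store = PackedBlockStore()
store.write(0, b"A" * 4096)      # a highly compressible 4 KiB host block
assert store.read(0) == b"A" * 4096
```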

They also optimize the physical block layout on the SSD to reduce write amplification (doing more than one NAND write for every host write to the SSD).

Block mode also supports smart caching. This is especially useful for database journaling/logging, which reuses a portion of the LBA address space (blocks) as a revolving journal/log. These blocks are overwritten with new data often, and data written to them need not be destaged to NVMe SSDs as long as it can be held in NVDIMM storage. At some point it gets destaged, but probably only when log activity slows down (if ever) or some timeout occurs.
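Here's a rough sketch of that overwrite-absorbing behavior (our model, not Pliops' implementation): hot journal blocks live in an NVDIMM-backed cache and only quiet ones get destaged:

```python
import time

class JournalCache:
    """Hold frequently rewritten journal/log blocks in (persistent) NVDIMM
    and destage to the SSD only after they go quiet for `idle_secs`.
    A real implementation would also destage under space pressure; this
    sketch only models the overwrite absorption described above."""
    def __init__(self, idle_secs: float = 5.0):
        self.idle_secs = idle_secs
        self.cache = {}     # lba -> (data, last_write_time); stands in for NVDIMM
        self.destaged = {}  # stands in for the NVMe SSD

    def write(self, lba: int, data: bytes):
        # Overwrites land here and never reach the SSD while the log is hot.
        self.cache[lba] = (data, time.monotonic())

    def destage_idle(self):
        now = time.monotonic()
        for lba, (data, ts) in list(self.cache.items()):
            if now - ts >= self.idle_secs:
                self.destaged[lba] = data  # one flash write absorbs many host writes
                del self.cache[lba]
```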

For their key-value storage accelerator, they have implemented an API similar to RocksDB, a persistent key-value store that is used as a physical storage backend for Redis and similar in-memory databases. The challenge with RocksDB, however, is that it has lots of tuning knobs/parameters, so getting it right takes some work. All this can be avoided just by using Pliops.
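To give a flavor of that tuning surface, here's a short example using the third-party python-rocksdb bindings (the option names are real RocksDB knobs; the values shown are arbitrary, and the database path is hypothetical):

```python
import rocksdb  # third-party python-rocksdb bindings

# A taste of the tuning surface Moshe alludes to: each of these knobs
# interacts with write amplification, compaction stalls and memory use,
# and the "right" values depend heavily on the workload.
opts = rocksdb.Options()
opts.create_if_missing = True
opts.write_buffer_size = 64 * 1024 * 1024       # memtable size before flush
opts.max_write_buffer_number = 4                # memtables allowed before stalling
opts.level0_file_num_compaction_trigger = 8     # when L0 compaction kicks in
opts.target_file_size_base = 64 * 1024 * 1024   # target SST file size

db = rocksdb.DB("metrics.db", opts)
db.put(b"user:42:last_login", b"2021-03-01T12:00:00Z")
assert db.get(b"user:42:last_login") is not None
```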

We didn't talk much about how their key-value store works. Moshe says they optimize the key structures and key data so that all database keys can be retained in the board's memory; just by doing that, they get immediate (one IO) access to any data block those keys point to.

He did mention that a database seeing a 10-25% host cache hit rate with their board gets roughly the same performance as that same database would with an 80-90% host cache hit rate without it. Some of this was shown at SFD21 (so check out the videos above for more performance info).

They bring a couple of other advantages to the table. Because they are interposed between the host and the NVMe SSDs, they can use their NVDIMMs and memory to write much wider stripes than the host writes. This lets them reduce SSD read and write amplification (due to less garbage collection) by writing more full NAND pages. All this also reduces physical (data) writes/day, which can significantly improve SSD endurance.
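A toy model of that write shaping (ours, not theirs): accumulate small host writes until a full NAND page's worth is ready, so the SSD only ever sees full-page programs:

```python
NAND_PAGE = 16 * 1024  # illustrative NAND page size

class StripeWriter:
    """Accumulate small host writes in an NVDIMM-backed buffer and emit
    only full NAND pages, so the SSD never sees partial-page writes
    (partial pages are what drive garbage collection and amplification)."""
    def __init__(self):
        self.buffer = bytearray()
        self.pages_written = 0

    def write(self, data: bytes):
        self.buffer.extend(data)
        while len(self.buffer) >= NAND_PAGE:
            self._flush_page(bytes(self.buffer[:NAND_PAGE]))
            del self.buffer[:NAND_PAGE]

    def _flush_page(self, page: bytes):
        self.pages_written += 1  # a real device would DMA the page to flash here

w = StripeWriter()
for _ in range(64):
    w.write(b"x" * 4096)   # 64 x 4 KiB host writes ...
print(w.pages_written)     # ... become 16 full 16 KiB page programs, no partials
```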

Somewhere in all that smart caching and data compression, they also manage to decrease response times. And it turns out that databases that don't use RocksDB or depend on key-value stores can easily take advantage of all their block store functionality to improve IO performance.

They mostly market their product to hyperscalers and super-scalers. Moshe's definition of a super-scaler is any organization that operates at public cloud scale but is not a public cloud (e.g., big social media companies).

Moshe Twitto, CTO & Co-founder, Pliops

Moshe is an expert in advanced data management and coding algorithms. Prior to co-founding Pliops, Moshe served as CTO of Samsung’s SSD Controller Development Center in Israel.

Moshe holds MSEE and BSEE degrees from the Technion, Summa Cum Laude, and served in the Unit 8200 intelligence division of the Israel Defense Forces.

114: GreyBeards talk computational storage with Tong Zhang, Co-Founder & Chief Scientist, ScaleFlux

Seeing as how one topic in last year's FMS2020 wrap-up with Jim Handy was the rise of computational storage, and it's been a long time since we discussed it (see GreyBeards talk with Scott Shadley at NGD Systems), we thought it was time to check in on the technology. So we reached out to Dr. Tong Zhang, Chief Scientist and Co-founder, ScaleFlux, to see what's going on. ScaleFlux is seeing rising adoption of their product among hyperscalers as well as large enterprises. Their computational storage product is a programmable, FPGA-based 4TB or 8TB SSD.

Tong was very knowledgeable about current industry trends (Moore's Law slowing, among others) that have created an opening for computational storage and other outboard compute. He is also well versed in how some of the world's biggest customers are using the technology to work faster and cheaper in their data centers. Listen to the podcast to learn more.

At the start, Tong mentioned Alibaba's use of ScaleFlux's transparent, line-speed, outboard encryption/decryption and compression/decompression. And, depending on the data, they can see compression ratios far exceeding 2:1. As such, customers not only benefit from a cheaper $/GB but can also see better NAND endurance and higher performance.

Hosts can do compression and encryption themselves, but doing so takes a lot of CPU cycles. It turns out that compression is more compute intensive than encryption. Tong said that most modern cores can encrypt/decrypt at about 1GB/sec but, depending on the compression algorithm, can only compress at 40 to 100MB/sec. Either way, doing this on the host consumes a lot of CPU instruction cycles. ScaleFlux can compress and decompress at PCIe bus speeds.
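Some quick back-of-envelope math on those numbers (the PCIe drive throughput figure is our assumption, not Tong's):

```python
# Rough numbers from the discussion: a modern core encrypts at ~1 GB/s but
# compresses at only ~40-100 MB/s. Assume an NVMe drive that can stream
# ~3 GB/s over PCIe (an assumption, not a figure from the podcast).
pcie_gbps = 3.0
encrypt_gbps_per_core = 1.0
compress_gbps_per_core = 0.07  # midpoint of the 40-100 MB/s range

cores_to_encrypt = pcie_gbps / encrypt_gbps_per_core
cores_to_compress = pcie_gbps / compress_gbps_per_core
print(f"cores to keep one drive busy: encrypt ~{cores_to_encrypt:.0f}, "
      f"compress ~{cores_to_compress:.0f}")
# -> ~3 cores for encryption but ~43 cores for compression: that's the
#    host CPU burden that drive-level offload removes.
```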

Most storage controllers that offer compression/decompression must have some sort of LBA (logical block address) virtualization, because while the host may be writing 512 or 4096 byte blocks, what's actually written to the NAND is more like 231 or 1999 bytes. Packing these odd, variable-length blocks into NAND blocks can become a problem. But most SSDs already have a flash translation layer (FTL), where LBA addresses are mapped, over time, to different physical NAND page/block addresses. ScaleFlux has combined LBA virtualization and the FTL into the same process, and by doing so they reduce IO overhead and perform better.
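Here's a sketch of the difference, as we understand it: two translation tables collapse into one lookup per IO:

```python
class TwoLayerMapping:
    """Conventional split: a compression layer maps host LBA -> virtual
    address, then the SSD's FTL maps virtual address -> NAND location.
    Two lookups (and two tables to maintain) per IO."""
    def __init__(self):
        self.lba_to_virt = {}
        self.virt_to_nand = {}

    def lookup(self, lba):
        virt = self.lba_to_virt[lba]
        return self.virt_to_nand[virt]

class MergedMapping:
    """ScaleFlux-style merge (our reading of the discussion): one table
    goes straight from host LBA to NAND location, halving the
    translation work and metadata updates per IO."""
    def __init__(self):
        self.lba_to_nand = {}

    def lookup(self, lba):
        return self.lba_to_nand[lba]

m = MergedMapping()
m.lba_to_nand[0x1000] = (7, 512, 1999)  # LBA -> (NAND block, offset, comp. length)
print(m.lookup(0x1000))
```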

ScaleFlux's drive is an NVMe SSD, which already supports great native response times. But when you are transferring half or less of the (compressed) data from the host onto NAND, you can reduce latencies even more.

Although their current-generation product is based on TLC NAND, they are working on a next generation that will support QLC. And the benefit of writing and reading less data should also help QLC endurance and performance.

Although ScaleFlux is seeing great adoption with just outboard transparent compression and encryption, there is more that could be done. For example:

  • Filtering queries at the drive rather than at the host. If customers can send a search key/phrase or other filtering request directly to the drive, the drive can pass over all its data and send back just the data that matches that filter request (see the sketch after this list).
  • Transcoding and other data format changes. Although transcoding makes a lot of sense to do outboard, Tong also mentioned format changes. We asked him to clarify, and he said to consider a row-based database that needs to be accessed in columnar format. If the drive could change the format from one to the other, it opens up more analytics tool sets.
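Here's a conceptual sketch of that first item, query filtering at the drive. The results are identical either way, but the drive-side version moves only matching records across the bus:

```python
def host_side_scan(drive_blocks, predicate):
    """Today: ship every block over PCIe, then filter on the host CPU."""
    return [rec for blk in drive_blocks for rec in blk if predicate(rec)]

def drive_side_scan(drive_blocks, predicate):
    """With the filter pushed down to the drive's FPGA (or future ARM
    cores), only matching records cross the bus to the host."""
    for blk in drive_blocks:  # this loop would run *on the drive*
        yield from (rec for rec in blk if predicate(rec))

blocks = [[("alice", 31), ("bob", 45)], [("carol", 29), ("dave", 52)]]
over_40 = lambda rec: rec[1] > 40
assert host_side_scan(blocks, over_40) == list(drive_side_scan(blocks, over_40))
```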

At the moment, ScaleFlux engineering teams are the ones that program the FPGA to perform outboard functionality. But in a future release, they plan to add ARM cores in an SoC, which can handle more general-purpose outboard functionality as code.

Because of the added complexity of compression, encryption and other outboard logic, we asked Tong what power-loss protection was available at the drive level. Tong assured us that once data has been received by their device, it is maintained across a power failure, with capacitors supplying the power and logic to offload it.

Tong also mentioned that Intel, AWS and the NVMe standards committee are looking at adding computational storage support to the NVMe standard, so applications and host software can invoke, and maybe modify, outboard functionality on the fly. Sort of like loading containers of functionality to run on an SSD, on the fly.

Dr. Tong Zhang, Chief Scientist and Co-founder, ScaleFlux

Dr. Tong Zhang is a well-established researcher with significant contributions to data storage systems and VLSI signal processing. Dr. Zhang is responsible for developing key techniques and algorithms for ScaleFlux’s Computational Storage products and exploring their use in mainstream application domains.

He is currently a Professor at Rensselaer Polytechnic Institute (RPI). His current and past research spans databases, filesystems, solid-state and magnetic data storage devices and systems, digital signal processing and communication, error correction coding, VLSI architectures, and computer architecture.

He has published over 150 technical papers at prestigious USENIX/IEEE/ACM conferences and journals, with a citation h-index of 36, and has served as general and technical program chair for several premier conferences. Among his many research accomplishments, he made pioneering contributions to establishing flash memory signal processing and enabling practical implementation of low-density parity-check (LDPC) codecs. He has received two best paper awards and has over 20 issued/pending US patent applications.

He holds BS/MS degrees in EE from Xi'an Jiaotong University, China, and a PhD in ECE from the University of Minnesota.

113: GreyBeards talk storage for next gen. workloads with Liran Zvibel, Co-Founder & CEO WekaIO

Sponsored By:

I've known Liran Zvibel, Co-founder and CEO of WekaIO, for many years now, and this is the second time he's been on our show (see: Episode 56: GreyBeards talk high performance file storage...). In those days, WekaIO was just coming out and hitting the world with this extremely high-performing, scale-out unstructured data solution. Well, since then, they've just gotten better.

Keith and I had a great time talking with Liran again. Liran has deep knowledge about unstructured data and how enterprises use it these days. WekaIO's story over the last two years has gone beyond great performance to real-world, hybrid cloud offerings, as well as going after cloud-native apps' (read Kubernetes [K8S]) persistent storage. Listen to the podcast to learn more.

We started with a history lesson on WekaIO. Back in those days (and this persists today, I might add), many IO workloads required companies to purchase different solutions for different work. For example, they needed DAS or SAN for performance, NAS for ease of access, and object for scale. WekaIO came out with an answer to all these problems in a single, scalable storage system. That is, it performed IO as fast as DAS or SAN block, had all the ease of access of NAS, and could scale as much as object storage.

However, the real culprit holding the world back was "NFS". NFS was designed (back in the 1990s) for the networking speeds then available (10-100Mbps), and it performed just fine at those speeds. But when 10-100GbE came out in the 2000s, NFS's metadata overhead was too chatty to support wire speeds. Thus, any storage that depended on NFS protocols couldn't supply (small) files fast enough for modern applications.
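Some illustrative arithmetic (all numbers below are our assumptions, not Liran's) showing why per-file protocol round trips, not wire speed, cap small-file throughput:

```python
# Assume each small-file read costs several protocol round trips
# (LOOKUP, GETATTR, ACCESS, READ) before any data moves.
round_trips_per_file = 4
network_rtt_s = 200e-6    # 200 us RTT on a fast LAN (assumption)
file_size = 32 * 1024     # a 32 KiB "small" file
wire_bytes_per_s = 12.5e9 # 100 GbE line rate = 12.5 GB/s

per_file_s = round_trips_per_file * network_rtt_s + file_size / wire_bytes_per_s
effective_mb_s = file_size / per_file_s / 1e6
print(f"effective throughput: {effective_mb_s:.0f} MB/s on a 12,500 MB/s wire")
# -> roughly 41 MB/s: the round trips, not the wire, are the bottleneck,
#    which is why low-chatter protocols (POSIX client, GPUDirect) matter.
```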

This is why WekaIO has moved to support not only NFS and SMB but also POSIX and NVIDIA® GPUDirect® Storage interfaces. By offering POSIX, WekaIO is able to plug into standard Linux and Windows server systems and provide excellent small-file performance. Of course, the applications that demand small-file performance today are mostly data analytics and AI/ML/DL workloads.

Consequently, NVIDIA came out with its GPUDirect Storage protocol to get small-file (data) into GPUs faster. With GPUDirect, storage systems can RDMA data directly from storage to GPU memory and vice versa, with no OS intervention (other than to set up the transfer). If you happen to have a small-file, high-performing storage system attached to your fabric that supports GPUDirect, like WekaIO, you can significantly speed up your AI/ML/DL workloads.
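As a sketch of what that looks like from application code, here's a GPUDirect Storage read using the RAPIDS kvikio bindings over cuFile (API names as we recall them from kvikio's docs; the file path is hypothetical):

```python
import cupy
import kvikio

# Read a shard straight into GPU memory via GPUDirect Storage (cuFile):
# the DMA goes storage -> GPU with no host bounce buffer in between.
buf = cupy.empty(1 << 20, dtype=cupy.uint8)        # 1 MiB destination on the GPU
f = kvikio.CuFile("/mnt/weka/shard-0001.bin", "r") # hypothetical WekaIO mount/path
n = f.read(buf)                                    # returns bytes transferred
f.close()
print(f"read {n} bytes directly into GPU memory")
```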

Next we started talking K8S storage. WekaIO uses their POSIX interface in their CSI plugin to support K8S container persistent storage. Again, supplying high performance for small files seems tailor-made for the K8S container applications that exist today and will exist for the foreseeable future.

Enter the cloud. Among other things, WekaIO is an AWS primary storage vendor. It also offers snap-to-cloud. And with both of these in tandem, it's just become a lot easier to move and access your unstructured data in the cloud. Liran mentioned that WekaIO primary storage in AWS operates across AZs. This means it can be configured to deliver better availability than EBS.
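A quick illustration of why spanning AZs matters (the single-AZ availability number is our assumption, not an AWS SLA):

```python
# If a single AZ is up with probability p, a service spread across n AZs
# that survives as long as any one AZ is alive has availability
# 1 - (1 - p)**n. Illustrative numbers only.
p = 0.999  # assumed single-AZ availability
for n in (1, 2, 3):
    print(f"{n} AZ(s): {1 - (1 - p) ** n:.9f}")
# -> 0.999000000, 0.999999000, 0.999999999: each added AZ multiplies the nines.
```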

Large biopharma companies are using WekaIO in AWS to store and process field data and research data, so that this work can be done around the world. Some companies have run out of compute in a single AZ (unbelievable, I know, but it's COVID). By offering multi-AZ unstructured data access with WekaIO, these companies can spread their compute across AZs and regions and still access their data. And when their products are ready for government certification, having all this data in the cloud provides an easy way to give the government access to that same data.

Liran Zvibel, Co-founder and CEO WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long-term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design, and development for a portfolio of rich social media applications.

Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc. in Mathematics and Computer Science from Tel Aviv University.

112: GreyBeards annual year end wrap-up with Keith & Matt

It's the end of the year, so it's time for our regular year-end wrap-up discussion with the GreyBeards. 2020 has been an interesting year, to say the least. It started out just fine; then COVID19 showed up and threw a wrench in everyone's plans; and as the year closes, we were just starting to see some semblance of a new normal when one of the largest security breaches in years showed up. Whew, almost glad that's over, and on to 2021.

As always, the GreyBeards had a great discussion on these and other topics to highlight the year just past. The talk was wide-ranging and hard to characterize, but I did my best below. Listen to the podcast to learn more.

COVID19's impact on the enterprise

It will probably take some time before we learn the true, long term impacts of COVID19 on IT but one major change has to be the massive Work From Home (WFH) transition that took place overnight.

While WFH can be more productive for some, the lack of face2face interaction can be challenging for others. The fact that many of the GreyBeards have been working from home for decades now left us a bit oblivious to how jarring this transition can be for newcomers.

There are definitely some psychological changes that need to occur to be productive at WFH. Organization skills become even more important. Structured interactions (read: conference calls, Zoom/WebEx and other forms of communication) become much more important. And then there's security.

Turns out VMware and others have been touting VDI solutions for the past decade or so to better support remote work while providing corporate levels of security for it. While occasionally this doesn't work quite as well as expected, it's certainly much, much better than having end users access corporate data without any security around that data, or worse yet, on "bring your own device". All these VDI solutions had a field day when WFH happened.

Many workers found they could be more productive at WFH, due to fewer distractions, no commute time and more flexible hours. What happens to all these current WFHers when COVID19 is vanquished is anyone's guess.

We thought there might be less need for large office campuses/buildings. But there's something to be said for the collaboration and random interactions of face2face meetings, which can only occur in an office setting with workers present at the same time. Some organizations will take to this new way of working while others will try to dial WFH back to nonexistent. Where your organization fits on this spectrum, and why, will be telling across a number of dimensions.

The rise of ARM

There's been slow but steady improvement in ARM processors over the last few decades, and nowadays ARM is starting to make a place for itself in the enterprise. ARM has always been the go-to microprocessor for low-power solutions (like smartphones), but these days ARM processors are being deployed in the cloud and even the enterprise. They can be used as server processors, but even outside servers, ARM cores are showing up in hardware accelerators as the brains behind SmartNICs, DPUs, SPUs, etc.

Keith mentioned AWS's 2nd-generation Graviton 64-bit ARM processor EC2 instances. And yes, there are significant cost (& power) savings to be had using AWS Graviton ARM instances. So the cloud is starting to adopt them. And somewhere over the past couple of years, I heard that VMware was porting ESX to run on ARM cores.

But apparently, it's not as simple as dropping an ARM multi-core processor into a server, recompiling your code and away you go. Applications need a certain amount of optimization to run effectively on ARM processors. And the speedup between non-optimized and optimized versions of an application running on ARM cores is significant.

As for SmartNICs and DPUs, these are data networking hardware accelerators that provide the real-time processing needed to keep up with higher-speed networking, 100GbE and beyond. These DPUs perform deep packet inspection, data compression, encryption and other services, all at wire speed. Yes, you could devote one or more X86 cores to do this, but it's much cheaper (and more effective) to do it outside the CPU. Moreover, performing this activity at the server's network entry point means that much of this data doesn't have to be transferred back and forth through server memory. So it not only saves CPU core cycles but also memory capacity and memory & PCIe bus bandwidth. We published a recent podcast with Kevin Deierling of NVIDIA Networking discussing DPUs, if you want to learn more.

Pat mentioned at (virtual) VMworld their plans to port ESX to the DPU. Keith followed up on this and asked some other execs at VMware about it, and they said VMware will more likely support DPUs as just another hardware accelerator in the cluster. In either case, CPU cycles should be freed up, which should help VMware use X86 cores more efficiently. And perhaps this will help them engage in more CPU-constrained environments such as telecom.

Then there's computational storage. We have been watching this technology for a couple of years now, and it's seeing some success in public cloud deployments, where it seems mostly to be used to provide outboard data compression. It's unclear whether these systems depend on ARM processing or not, but my bet is that they do. To learn more about computational storage, check out these podcasts: our FMS2020 wrap-up with Jim Handy and our talk with Scott Shadley on NGD's computational storage.

System security

At year end, we are learning of a massive security breach throughout US government IT facilities, all based on what is believed to be a Russian hack of a software package embedded in a popular network management tool, SolarWinds. They are calling this a software supply chain hack. Although we are mainly hearing about government agencies being hacked, SolarWinds is pervasive in the enterprise as well.

There have been many hardware supply chain hacks in the past, where a board supplier used chips or logic that weren’t properly vetted. Over time, hardware suppliers have started to scrutinize their supply chains better and have reduced this risk.

And the US government has been lobbying the industry to use a security chip with a backdoor, or to supply backdoors to smartphone encryption. Luckily, so far, none of these have been implemented by industry.

What this hack has shown us is that supply chain risk is not limited to the hardware sphere. Software supply chain risk can't be ignored anymore.

This means that any software application supplier will need to secure their supply chain or bring it all in house, which may mean that costs for these packages will go up. It's possible that using a pure open source supply chain may reduce this risk as well. At least that's the promise of open source.

We said 2020 was an interesting year and it’s going out with a bang.

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience on both the engineering and presales/product marketing sides. His blog is at Virtually Tied to My Desktop and he's on LinkedIn.

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.