119: GreyBeards talk distributed cloud file systems with Glen Shok, VP Alliances, Panzura

This month we turn to distributed (cloud) file systems as we talk with Glen Shok (@gshok), VP of Alliances for Panzura. Panzura pairs a backend (cloud or on-prem, S3-compatible) object store with a ring of software (VM) or hardware (appliance) gateways that cache local files and manage and maintain metadata, creating a global NFS and SMB file system with near-local access times.

Glen is an industry veteran (without the grey beard) with the knowledge to back that up. He’s been in the industry so long that we could probably have spent an hour just talking about where people are that we both know. Listen to the podcast to learn more.

The interesting part about Panzura is their gateway ring. It not only manages local file caching and metadata maintenance/access, but also provides an out-of-(data path)-band file (byte range) lock coordination service, cache coherency (via delta block changes) and other services. All the metadata (and data) is backed up on backend object storage, but it’s the direct access to metadata, the out-of-band control path and the caching service that supply the near-local access times for data.
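
To make the lock coordination idea concrete, here’s a minimal sketch (purely conceptual; this is not Panzura’s actual protocol) of a byte-range lock service of the kind a gateway ring could run out of the data path:

```python
# Conceptual sketch only: a byte-range lock coordinator that lets gateways
# cache and modify different regions of the same global file concurrently.
from dataclasses import dataclass

@dataclass
class ByteRangeLock:
    path: str        # global file path
    offset: int      # first byte covered by the lock
    length: int      # number of bytes covered
    owner: str       # gateway (or client) holding the lock

class LockCoordinator:
    def __init__(self):
        self._locks: list[ByteRangeLock] = []

    def acquire(self, path: str, offset: int, length: int, owner: str) -> bool:
        for lk in self._locks:
            overlaps = (lk.path == path and
                        offset < lk.offset + lk.length and
                        lk.offset < offset + length)
            if overlaps and lk.owner != owner:
                return False            # conflicting range held elsewhere
        self._locks.append(ByteRangeLock(path, offset, length, owner))
        return True

    def release(self, path: str, owner: str) -> None:
        self._locks = [lk for lk in self._locks
                       if not (lk.path == path and lk.owner == owner)]
```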

Panzura supports any public (AWS, Azure, GCP & IBM) cloud object storage for backend data storage as well as a few on-prem solutions (I think Glen mentioned IBM COS & Cloudian, and their website mentions Wasabi, Scality and NetApp StorageGRID). Glen said they are on each of the public clouds’ marketplaces and, with virtual gateways, it’s very easy to spin up and try.

Their system provides global (local, at the gateway) dedupe to reduce backend storage footprint and (both out-of-band and from backend storage) delta block changes for local cache updates. So if an old version of a file happens to be present in a local gateway cache, the gateway only needs to retrieve the changed data from the object storage backend (or from another gateway). All this local caching, dedupe and changed block tracking helps to reduce cloud egress charges.
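
Here’s a toy sketch of the changed-block idea: fingerprint fixed-size blocks, compare against a manifest, and fetch only the blocks that differ. The block size and hashing scheme are illustrative assumptions, not Panzura’s internals:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # illustrative block size, not Panzura's actual value

def block_hashes(data: bytes) -> list[str]:
    """Split a file into fixed-size blocks and fingerprint each one."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(cached: bytes, new_manifest: list[str]) -> list[int]:
    """Return the indexes of blocks whose fingerprints differ from the
    cached copy -- only these need to be fetched from the object store
    (or a peer gateway), which is what keeps egress traffic low."""
    local = block_hashes(cached)
    return [i for i, h in enumerate(new_manifest)
            if i >= len(local) or local[i] != h]
```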

Data written to backend storage is immutable and versioned. So customers can retrieve any version of any file that was ever destaged to their backend. Glen said they write huge objects, presumably to help reduce storage footprint, IO overhead and API calls.
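
For a sense of what immutable, versioned objects buy you, here’s a short boto3 sketch against a generic S3-compatible, versioning-enabled bucket (the endpoint, bucket and key names are hypothetical, and Panzura’s actual object layout will differ):

```python
import boto3

# Assumes an S3-compatible, versioning-enabled bucket; names are hypothetical.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# List every stored version of one object, newest first.
versions = s3.list_object_versions(Bucket="filer-backend",
                                    Prefix="projects/design.dwg")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["LastModified"], v["Size"])

# Fetch a specific historical version by its VersionId (here, the oldest).
old = s3.get_object(Bucket="filer-backend",
                    Key="projects/design.dwg",
                    VersionId=versions["Versions"][-1]["VersionId"])
data = old["Body"].read()
```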

Glen claimed that with 3-way replication within a cloud region and 1-way replication outside the cloud region, customers no longer have to back up data. I respectfully disagreed. He believes that over time customers will come to realize their use of backups for restores becomes so rare that they can reduce backup frequency, if not eliminate it altogether. Some follow-on discussion ensued, but in the end we seemed to agree to disagree on this topic.

Panzura also supports cross-cloud mirroring, so one could have their data mirrored from one cloud to another. One of these clouds is used as the primary, and only when a majority of the gateways in the ring agree that the primary is DOWN and the secondary is UP will they all automatically cut over to using the secondary storage cloud. While failover is automated, failback requires operator intervention.
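
A minimal sketch of the quorum idea (again conceptual, not Panzura’s algorithm): cut over only when a strict majority of gateways agree the primary is down and the secondary is up:

```python
def should_fail_over(votes: dict[str, tuple[bool, bool]]) -> bool:
    """votes maps each gateway name to (primary_down, secondary_up) as that
    gateway sees it. Cut over only when a strict majority agrees on BOTH
    conditions."""
    quorum = len(votes) // 2 + 1
    primary_down = sum(1 for down, _ in votes.values() if down)
    secondary_up = sum(1 for _, up in votes.values() if up)
    return primary_down >= quorum and secondary_up >= quorum

# Example: 3 of 4 gateways see the primary down and the secondary healthy.
print(should_fail_over({
    "gw-nyc": (True, True),
    "gw-lon": (True, True),
    "gw-sgp": (True, True),
    "gw-syd": (False, True),
}))  # True -> automated failover; failback remains a manual step
```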

Panzura charges for managed data capacity. Cloud or on-prem object storage is in addition to this and is charged for separately by the object storage provider.

As far as what size file systems they support, Glen mentioned that they use ZFS internally, so any size imaginable. But he did concede that at some point metadata management becomes a problem, and that they often suggest splitting a 20PB file system into two 10PB file systems (gateway rings) to deal with this issue.

As for other solutions offered by Panzura, they have a K8s container block storage offering for persistent volumes that scales in capacity/performance using K8s services/resources.

Glen Shok, VP Alliances, Panzura

Glen Shok has been in the data center and storage industry for over 20 years.

He started his career at Cisco in the late 90s, then moved to a few startups that were acquired by Brocade and Oracle. Glen has held positions in sales, sales leadership, product management and marketing, and the Office of the CTO at Zones, prior to coming to Panzura.

He can’t decide what he likes to do, but at Panzura, he’s the VP of Strategic Alliances.

118: GreyBeards talks cloud-native object storage with Greg DiFraia, Scality and Stephen Bacon, HPE

Sponsored By:

Keith and I have talked with Stephen Bacon, Senior Director, Big Data Category, HPE, before (not on our podcast) but not with Greg DiFraia, GM Americas, Scality. Both were very knowledgeable about how containerization is changing IT and the role of object storage in this transition. Scality’s ARTESCA takes this changed world view to its logical conclusion, with a new, lightweight, cloud-native object storage system for Kubernetes (K8s) that is optimized to store and manage data from edge to core to cloud.

This is a significant joint Scality-HPE solution that has been a long time coming. As evidence of the level of the partnership between the two, ARTESCA will be exclusive to HPE for the next six months. Listen to the podcast to learn more.

We started our discussion on where the new IT world is going. It’s more of an application-centric view that spans multiple distinct infrastructure environments. New applications that live in this new IT world consume lots of data, and more often than not that data resides on object storage. And just as developers are creating K8s container apps so they can scale easily, any object storage those apps access needs similar scalability.

This means that edge solutions like smart cars, smart drones, smart sensors, etc. are doing some serious work. Some smart cars are producing a TB of data a day, all of which needs to be analyzed to adjust safe driving algorithms or to re-train AI/ML/DL neural networks.

Moreover, edge apps today are increasingly deploying embedded AI/ML/DL inferencing. That means the days of frozen/archived data are going away. With embedded AI, there’s an ongoing need to re-train on new (and old) data which requires all data to be readily (and speedily) accessible.

Scality has always been strong in high-performance, multi-PB environments that need rock-solid reliability and availability. And over time, they expanded their solution to access cloud data storage resources as well. But ARTESCA goes after a new market entirely, with a lightweight, edge-to-core-to-cloud deployable object store that can run anywhere, start small and grow as large as needed.

ARTESCA, Scality’s new cloud-native object storage solution, comes as a full-stack, distributed collection of container-based micro-services that runs on K8s. This includes not only object storage services but also a data management control plane.

The management control plane allows for the ingestion and use of any S3-compatible storage to hold ARTESCA data. But it also includes workflow actions that supply automated data movement between locations, data synchronization between sites and other services used to deploy, coordinate, and manage an edge, core and cloud solution as a single object storage environment.
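
As a rough illustration of that kind of workflow, here’s a small Python/boto3 sketch that copies new objects from an edge site to a core site over the S3 API. Endpoints and bucket names are made up, and ARTESCA’s workflow engine does this sort of movement as a managed policy rather than a script:

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative only: a one-shot replication pass from an edge site to a core
# site over the S3 API. Endpoints and bucket names are hypothetical.
edge = boto3.client("s3", endpoint_url="https://edge.example.com")
core = boto3.client("s3", endpoint_url="https://core.example.com")

def sync_bucket(bucket: str) -> None:
    paginator = edge.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            try:
                # Skip objects the core site already holds at the same size.
                head = core.head_object(Bucket=bucket, Key=key)
                if head["ContentLength"] == obj["Size"]:
                    continue
            except ClientError:
                pass  # not present at the core yet
            body = edge.get_object(Bucket=bucket, Key=key)["Body"].read()
            core.put_object(Bucket=bucket, Key=key, Body=body)

sync_bucket("sensor-data")
```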

ARTESCA runs on a number of different HPE hardware platforms, from scale-out 1U servers optimized for density and good compute-storage performance for general-purpose workloads, to cluster-in-a-box 2U 4-server solutions that can provide huge amounts of compute-storage horsepower in a small form factor.

Stephen Bacon, Senior Director, Big Data Category, HPE

Stephen leads the Big Data Category, a rapidly growing multi-hundred-million-dollar business within HPE Storage comprised of the Apollo 4000 family of intelligent data storage servers, data analytics solutions, and the Complete Program of software partner-based solutions, including those with Cohesity, Commvault, Qumulo, Scality, and Veeam.

His responsibilities span Product Management, Engineering Program Management, Integration Engineering, Partner Go-To-Market, and Partner Operations.

Stephen has held a variety of worldwide, Asia Pacific and Japan region, and New Zealand country roles spanning software, servers, storage, and partnerships in his more than 20 year IT industry career.

Greg DiFraia, GM Americas, Scality

Greg has been working in the Enterprise IT Solutions Market for over 20 years and brings a unique blend of technical and business leadership to the team at Scality.

Before joining Scality, Greg was VP of Strategic Alliances at Turbonomic, a hybrid cloud workload automation developer. There, he had a front-row seat to see the emerging challenges of multi-cloud; experience that has huge value in today’s multi-cloud world.

Having spent the 13 years prior to that with EMC/Dell EMC as Global Sales leader for Object Storage, Director of Sales Strategy for Mid Market business, and, most recently, CTO, Elastic Cloud Storage (ECS), Greg led the global sales strategy for the ECS software-defined storage platform.

117: GreyBeards talk HPC file systems with Frank Herold, CEO of ThinkParQ, makers of BeeGFS

We return to our storage thread with a discussion of HPC file systems with Frank Herold (@BeeGFS), CEO of ThinkParQ GmbH, the makers of BeeGFS. I’ve seen BeeGFS start to show up in some IO500 top storage benchmark results, and as more and more data keeps coming online every day, we thought it time to start finding out how our friends in the HPC world handle their data deluge.

Frank’s a former rocket scientist who’s been in and around the storage industry for years, and was very knowledgeable about BeeGFS’s software-defined, parallel file system. He seemed to have a great grasp of the IO requirements in HPC, Life Sciences and other HPC-like applications. Listen to the podcast to learn more.

Turns out that ThinkParQ is a spinoff of the research institute in Germany that originally developed the BeeGFS parallel file system. There are apparently two versions of their product: one which is publicly available (downloadable from their website) and another with commercial support. It’s not quite 100% open source, but it’s got a lot of open source in it and their Git repository is available.

BeeGFS was primarily focused on HPC workloads, but as this type of work has become more mainstream, they have moved beyond HPC and now have significant installations in Life Sciences, Oil & Gas and many other big data environments.

It runs on x86/AMD, OpenPOWER, and ARM CPUs. BeeGFS comes as a number of services, one of which is a storage service that uses a ZFS or XFS file system on the backend. It also uses (POSIX-compliant) host client software to access the system. There are also metadata and monitoring services. Most of the time these services run on separate servers, but BeeGFS also supports a “converged mode”, where all these services run on a single server. And you can have multiple converged-mode servers in a cluster.

BeeGFS is a parallel file system. This means it intrinsically supports multiple metadata services/servers and multiple storage servers, which allows it to scale storage bandwidth and performance considerably beyond single-appliance systems. Data is automatically distributed across all the storage servers in the configuration, unless you specify that data reside on specific (say, all-flash) storage servers. Similarly, metadata is automatically distributed across all metadata servers in the system.
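
A toy example of the striping idea, round-robining a file’s chunks across storage targets (chunk size and placement here are purely illustrative; BeeGFS’s real placement is configurable and more sophisticated):

```python
CHUNK_SIZE = 512 * 1024  # illustrative chunk size

def place_chunks(file_size: int, targets: list[str]) -> list[tuple[int, str]]:
    """Return (chunk_index, storage_target) pairs for a file of file_size
    bytes, striped round-robin so reads and writes fan out in parallel."""
    n_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    return [(i, targets[i % len(targets)]) for i in range(n_chunks)]

layout = place_chunks(3 * 1024 * 1024, ["stor01", "stor02", "stor03", "stor04"])
for idx, target in layout:
    print(f"chunk {idx} -> {target}")
```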

They don’t support any specific RAID protection other than mirroring, and that’s really to speed up read throughput. Rather, they depend on the underlying XFS/ZFS file system to provide drive failure protection (RAID5/6).

One of BeeGFS’s selling points is that it has few tuning parameters that a customer needs to fiddle with. Frank said it runs quite well right out of the box.

BeeGFS offers a single name space that spans the cluster (of metadata servers/storage servers). But customers can elect to split this name space across a subset of these metadata and storage servers, and by doing so they create multiple BeeGFS clusters.

There’s no inherent support for NFS or SMB, but customers can configure NFS or Samba servers that use BeeGFS as backend storage. Also, there’s no data reduction built into BeeGFS and no automatic data tiering across the backend storage (file systems).

But as noted above, customers can direct which backend storage is used to hold their data. And they do offer a CLI data movement primitive, which customers can use in conjunction with other software to implement storage tiering, or they can do it themselves.

Metadata performance is extremely important for small files and for large, multi-billion-object file systems. BeeGFS uses extensive metadata caching to provide faster access to this information.
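
As a rough illustration of why that caching matters, here’s a trivial sketch where repeated lookups of the same path are answered from memory and only misses hit the metadata service (the lookup helper is hypothetical, not a BeeGFS API):

```python
from functools import lru_cache

def fetch_from_metadata_server(path: str) -> dict:
    # Placeholder (hypothetical helper) for a real metadata lookup,
    # which would cost one network round trip per call.
    return {"path": path, "size": 0, "mode": 0o644}

@lru_cache(maxsize=1_000_000)
def cached_stat(path: str) -> dict:
    # stat()-heavy workloads hit the metadata service once per unique path
    # and then answer repeats from memory.
    return fetch_from_metadata_server(path)
```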

Speaking of small file performance, we had a decent discussion on the tradeoffs involved between small and large file performance. And although BeeGFS has decent small file performance, it’s not a be-all for every small-file-intensive application. According to Frank, not every small file workload is optimal for BeeGFS.

They offer BeeOND, which is BeeGFS On Demand. This is an integration with the Slurm workload scheduler (an HPC job scheduler) that allows customers to spin up a scratch BeeGFS parallel file system across compute servers with local storage.

Slurm’s BeeOND integration brings all BeeGFS services up and deploys them on the compute nodes you specify. At this point you have a fully installed BeeGFS (scratch) parallel file system. Customers may use this scratch file system to support any compute/data-intensive workload they need to run. When no longer needed, Slurm can be directed to automatically dismantle the BeeGFS file system.
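
Here’s a rough sketch of that lifecycle. The beeond start/stop flags shown (node file, per-node storage directory, client mount point) are as commonly documented but should be checked against your installed version, and in practice Slurm’s prolog/epilog integration drives this rather than a hand-rolled script:

```python
import subprocess

# Sketch only: bring up a scratch BeeGFS across the job's nodes, run the
# workload, then tear the file system down again.
def start_scratch_fs(nodefile: str) -> None:
    subprocess.run(["beeond", "start",
                    "-n", nodefile,          # compute nodes to build the FS on
                    "-d", "/local/beeond",   # per-node storage directory
                    "-c", "/mnt/beeond"],    # mountpoint for the scratch FS
                   check=True)

def stop_scratch_fs(nodefile: str) -> None:
    # Additional flags control whether data and logs are cleaned up.
    subprocess.run(["beeond", "stop", "-n", nodefile], check=True)

if __name__ == "__main__":
    start_scratch_fs("/tmp/job_nodes.txt")
    try:
        subprocess.run(["bash", "run_workload.sh"], check=True)  # the actual job
    finally:
        stop_scratch_fs("/tmp/job_nodes.txt")
```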

We talked about BeeGFS partners. They have a number of regional partners that provide installation and onsite support and a number of technical partners, such as NetApp, Dell, HPE and INSPUR, that supply BeeGFS configured servers and systems for deployment/installation.

Frank Herold, CEO, ThinkParQ

Frank Herold is the CEO of ThinkParQ GmbH – the company behind BeeGFS. He actively leads the company and the product strategy of BeeGFS as a global player for parallel high-performance file systems.

Prior to joining ThinkParQ, he held various senior management positions within ADIC and Quantum Corporation, responsible for market segments within the academic and scientific research, oil and gas, broadcast and video surveillance sectors, focusing on large scale, high-performance and enterprise accounts within EMEA. 

Frank has over 25 years of experience in the IT industry and holds a master’s degree in engineering (Dipl. -Ing.) in rocket science.

116: GreyBeards talk VCF on VxBlock 1000 with Martin Hayes, DMTS, Dell Technologies

Sponsored By:

This past week, we had a great talk with Martin Hayes (@hayes_martinf), Distinguished Member Technical Staff at Dell Technologies, about running VMware Cloud Foundation (VCF) on VxBlock 1000 converged infrastructure (CI). It used to be that Cloud Foundation required VMware vSAN primary storage, but that changed a few years ago. When that happened, the Dell Technologies team saw it as a great opportunity to support VCF on VxBlock CI.

This is the first GreyBeards podcast for Martin, but he was extremely knowledgeable about VxBlock and Cloud Foundation technologies. He’s been a technical product manager on the VxBlock converged infrastructure at Dell Technologies for many years. He’s an expert on Cloud Foundation and he knows an awful lot more about VMware NSX-T networking than seems reasonable (good thing). In any case, Martin’s expertise covers the whole gamut of VCF services as well as VxBlock 1000 infrastructure. The podcast is a bit longer than our normal sponsored podcast but there was a lot of information to cover. Listen to the podcast to learn more.

With VCF enabling primary storage on networked storage systems, all the storage vendors in the world gave a mighty cheer. But VMware Cloud Foundation still requires vSAN servers to run its management domain. Late in 2020, VxBlock 1000 from Dell Technologies released a new software-defined version of its Advanced Management Platform (AMP) that runs on vSAN Ready Nodes. AMP is VxBlock’s management platform, but it also runs the management domains for VCF and NSX-T.

For workload domains, VxBlock 1000 offers Cisco UCS M5 rack and blade servers, that can be configured to support just about any workload needed by a data center.

Historically, VMware vSphere problems with DR weren’t as much storage replication issues as networking problems. But NSX-T and VCF seem to have solved that problem.

And with vRealize Automation plugins and NSX-T APIs, customers can have zero-touch network provisioning, which enables IaaS, or infrastructure as code, for their data center.

VMware vVols are now available with Dell EMC PowerMax storage. So now VxBlock 1000 customers can use vSphere storage policy-based management (SPBM) as well as automated vVol replication for data on PowerMax.

VMware NSX-T implements Application Virtual Networks (AVNs) using a GENEVE overlay network, which makes extensive use of encapsulation. But where there’s encapsulation, de-encapsulation must follow to access outside networks. All this (encapsulation on ingress, de-encapsulation on egress) is done through NSX-T Edge clusters.
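
To show what that encapsulation looks like on the wire, here’s a small sketch that packs a minimal 8-byte GENEVE base header (RFC 8926). It only illustrates the overlay format the NSX-T fabric rides on; NSX-T adds its own metadata options on top:

```python
import struct

def geneve_header(vni: int, protocol: int = 0x6558) -> bytes:
    """Build a minimal GENEVE base header with no options: version 0, the
    inner protocol type (0x6558 = Transparent Ethernet Bridging) and a
    24-bit Virtual Network Identifier."""
    ver_optlen = 0                       # version 0, option length 0
    flags = 0                            # O and C bits clear
    vni_field = (vni & 0xFFFFFF) << 8    # VNI in the top 24 bits, reserved byte 0
    return struct.pack("!BBHI", ver_optlen, flags, protocol, vni_field)

# The encapsulated frame then rides inside an outer UDP datagram (destination
# port 6081) between transport nodes; the Edge cluster strips this header
# again on the way out to physical networks.
print(geneve_header(0x1234).hex())
```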

The net result of all this is that VMware customers have more choice, i.e., now they can run VCF on HCI or CI. And with VxBlock 1000 CI, VCF customers can select best-of-breed components for each level of their 3-tier infrastructure.

Martin Hayes, DMTS, Dell Technologies

Martin Hayes is a Technical Product Manager at Dell Technologies, where he develops and executes data center product strategies that incorporate virtualization, software-defined networking (SDN) and converged systems.

Previously, he served in network advisory and architect roles at Dell EMC, converged systems pioneer VCE and Irish broadband provider eircom.

115: GreyBeards talk database acceleration with Moshe Twitto, CTO & Co-founder, Pliops

We seem to be on a computational tangent this year. So we thought it best to talk with Moshe Twitto, CTO and Co-Founder at Pliops (@pliopsltd). We had first seen them at SFD21 (see videos of their sessions here) and their talk on how they could speed up database IO was pretty impressive. Essentially, they have a database/storage accelerator board that speeds up block store IO to NVMe SSDs and also provides a key-value store IO accelerator.

Moshe was very knowledgeable about the technology and had previously worked at Samsung in their SSD group. He knew a lot about what happens underneath the covers of an SSD and what it takes to speed up IO. It turns out that many in-memory databases use persistent key-value stores to persist data or to operate in non- (or partial-) memory mode. Listen to the podcast to learn more.

The Pliops board plugs into the PCIe bus and accelerates IO to NVMe SSDs connected to the bus, or can act to accelerate IO to a JBoF that’s networked behind it. Their board uses FPGA(s), NVDIMMs of their own design and DRAM to accelerate database IO using NVMe SSDs.

Pliops operates in one of two modes, as a Key-Value store or as a Block store. Their Key-Value store takes advantage of block store capabilities, so we start there.

In block mode, Pliops provides inline hardware data compression and encryption. Compression requires support for variable-length blocks on the backend SSDs, so to better support this they pack multiple compressed blocks into physical blocks. They also use a virtualization service to map host LBAs to physical block addresses (using an internal key-value store). Hardware inline encryption is also provided on a LUN (or namespace) basis, which could enable each database to have its own key. They have a root-of-trust secret key used to encrypt customer namespace (database) keys.
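
Here’s a toy Python sketch of the packing idea: compress each logical block, pack as many as fit into a physical block, and track an LBA-to-location map. Nothing here reflects Pliops’ actual on-media format or block sizes:

```python
import zlib

PHYS_BLOCK = 4096  # illustrative physical block size

def pack_blocks(logical_blocks: dict[int, bytes]):
    """Compress logical blocks, pack them into physical blocks, and record
    an LBA -> (physical block, offset, length) map. Assumes each compressed
    block fits within one physical block."""
    mapping = {}            # lba -> (phys_index, offset, comp_len)
    phys = [bytearray()]
    for lba, data in logical_blocks.items():
        comp = zlib.compress(data)
        if len(phys[-1]) + len(comp) > PHYS_BLOCK:
            phys.append(bytearray())    # start a new physical block
        mapping[lba] = (len(phys) - 1, len(phys[-1]), len(comp))
        phys[-1] += comp
    return mapping, [bytes(p) for p in phys]

def read_block(lba, mapping, phys):
    idx, off, length = mapping[lba]
    return zlib.decompress(phys[idx][off:off + length])
```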

They also optimize physical block layout on the SSD to reduce write amplification (doing more than one write to the NAND for every host write to the SSD).

Block mode also supports smart caching. This is especially useful for database journaling/logging, which reuses a portion of LBA address space (blocks) as a revolving journal/log. These blocks are overwritten with new data often, and data written to them need not be destaged to NVMe SSDs as long as it can be maintained in NVDIMM storage. At some point it gets destaged, but probably only when log activity slows down (if ever) or some timeout occurs.
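
A conceptual sketch of that behavior: a write-back cache that keeps frequently rewritten LBAs (like a revolving journal region) in fast persistent memory and destages a block only after it has gone quiet. The timeout is made up, not a Pliops parameter:

```python
import time

DESTAGE_AFTER_SECS = 30.0   # illustrative idle timeout

class JournalAwareCache:
    def __init__(self, ssd_write):
        self._ssd_write = ssd_write   # callback that persists a block to SSD
        self._dirty = {}              # lba -> (data, last_write_time)

    def write(self, lba: int, data: bytes) -> None:
        # Overwrites of a hot journal block simply replace the cached copy,
        # so the SSD never sees the intermediate versions.
        self._dirty[lba] = (data, time.monotonic())

    def destage_idle(self) -> None:
        now = time.monotonic()
        for lba, (data, ts) in list(self._dirty.items()):
            if now - ts > DESTAGE_AFTER_SECS:
                self._ssd_write(lba, data)
                del self._dirty[lba]
```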

For their key-value storage accelerator, they have implemented an API that’s similar to RocksDB, a persistent key-value store which is used as a physical storage backend for Redis and similar in-memory databases. However, the challenge with RocksDB is that there are lots of tuning knobs/parameters, so getting it right takes some work. But all this can be avoided just by using Pliops.

We didn’t talk too much about how their key-value store works. Moshe says they optimize the key structures and key data so that all database keys can be retained in their board’s memory and, just by doing that, they can have immediate (1 IO) access to any data block pointed to by those keys.
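
The payoff of keeping every key in memory is easy to model: if the index maps a key straight to an (offset, length) in a value log, any read costs exactly one media IO. A toy sketch, purely illustrative and nothing like Pliops’ actual design:

```python
class SingleIOKVStore:
    def __init__(self, path: str):
        self._log = open(path, "a+b")    # append-only value log on "media"
        self._index = {}                 # key -> (offset, length), all in RAM

    def put(self, key: bytes, value: bytes) -> None:
        self._log.seek(0, 2)             # find the end of the value log
        offset = self._log.tell()
        self._log.write(value)
        self._index[key] = (offset, len(value))

    def get(self, key: bytes) -> bytes:
        offset, length = self._index[key]   # memory lookup, no IO
        self._log.seek(offset)
        return self._log.read(length)       # exactly one read from media
```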

He did mention that they provide roughly the same performance for a database getting 10-25% host cache hit rates using their board as that same database would see with an 80-90% host cache hit rate without their board. Some of this was shown at SFD21 (so check out the videos above for more performance info).

They bring a couple of other advantages to the table. As they are interposed between the host and the NVMe SSDs, they can take advantage of their NVDIMMs and memory to write much wider stripes than the host writes. This allows them to reduce SSD read and write amplification (due to less garbage collection) by writing more full NAND pages. All this also reduces the physical writes per day needed for a given host workload, which can significantly improve SSD endurance.
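
A simple sketch of the coalescing idea: buffer small host writes and flush them as wide, stripe-aligned writes so the device programs full NAND pages. The sizes are invented for illustration; Pliops sizes its stripes to the media:

```python
STRIPE_BYTES = 1 * 1024 * 1024   # illustrative stripe size

class WriteCoalescer:
    def __init__(self, flush):
        self._flush = flush          # callback taking one stripe-sized buffer
        self._buf = bytearray()

    def write(self, data: bytes) -> None:
        self._buf += data
        while len(self._buf) >= STRIPE_BYTES:
            # Emit full, aligned stripes so the SSD sees fewer, wider writes.
            self._flush(bytes(self._buf[:STRIPE_BYTES]))
            del self._buf[:STRIPE_BYTES]

    def drain(self) -> None:
        if self._buf:                # partial stripe, e.g. at shutdown
            self._flush(bytes(self._buf))
            self._buf.clear()
```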

Somewhere in all that smart caching and data compression, they are also able to decrease response times. It turns out that databases that don’t use RocksDB or depend on key-value stores can easily take advantage of all their block store functionality to improve IO performance.

They mostly market their product to hyperscalers and super-scalers. His definition of a super-scaler was any organization that operates at public cloud levels but is not a public cloud (e.g., big social media companies).

Moshe Twitto, CTO & Co-founder, Pliops

Moshe is an expert in advanced data management and coding algorithms. Prior to co-founding Pliops, Moshe served as CTO of Samsung’s SSD Controller Development Center in Israel.

Moshe holds MSEE and BSEE degrees, Summa Cum Laude, from the Technion, and served in the Unit 8200 Intelligence Division of the Israel Defense Forces.