132: GreyBeards talk fast embedded k-v stores with Speedb’s Co-Founder & CEO Adi Gelvan

We’ve been talking a lot about K8s of late, so we thought it was time to get back down to earth and spend some time with Adi Gelvan (@speedb_io), Co-founder and CEO of Speedb, an embedded key-value store and drop-in replacement for RocksDB that significantly improves on its IO performance for large metadata databases.

At Adi’s last job they were searching for a key-value store or database to manage the substantial metadata they needed. After looking at RocksDB, they found it had a number of performance problems, especially as the amount of metadata grew. Speedb was specifically designed to address the problems they found. Listen to the podcast to learn more.

RocksDB is a key-value store engine used to manage metadata in just about every open source project that has metadata to manage. It’s Facebook’s open source fork of Google’s LevelDB database.

The main issue with RocksDB is that when you have a lot of metadata (key-value pairs), its performance suffers from highly variable latency and write stalls.

Most RocksDB users are aware of these problems and turn to sharding the database to address them (essentially shrinking the amount of metadata under management within a single node/instance).

Historically, key-value stores used B+-trees to store data. B+-trees are great for reading but bad for writing. Namely, the B+-tree usually has to be rebalanced when entries are added and potentially when they are updated. This can cause a cascade of read-write IO throughout the tree, delaying the original IO.

Log Structured Merge trees (LSM-trees) were created to reduce these write problems while at the same time providing B+-tree speed for reading. Essentially, an LSM-tree is an in-memory sequence of (sometimes sorted) key-value pairs that can be written (destaged) to multiple sorted string table (SST) files on some backing store. A hierarchical index is maintained in memory to identify which SSTs hold which key-value data.
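
To make the LSM write path concrete, here’s a toy Python sketch (not RocksDB’s actual code) of a memtable that destages to sorted SST files once it fills up; the class name, file names and threshold are invented for illustration.

```python
import json

class ToyLSMTree:
    """Illustrative only: an in-memory buffer that flushes to sorted SST files."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}             # in-memory key-value buffer
        self.memtable_limit = memtable_limit
        self.sst_count = 0             # number of SST files written so far

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # Destage the memtable as a sorted string table (SST) file on disk.
        sst_name = f"sst_{self.sst_count:04d}.json"
        with open(sst_name, "w") as f:
            json.dump(dict(sorted(self.memtable.items())), f)
        self.sst_count += 1
        self.memtable.clear()

tree = ToyLSMTree()
for i in range(10):
    tree.put(f"key{i}", f"value{i}")   # writes land in memory, then spill to SSTs
```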

RocksDB uses LSM-tree in-memory data structures to buffer writes. When memory becomes full, the LSM-tree can be destaged to the backing store as one or more SST files. However, SSTs, when first written, aren’t necessarily in sorted key order, and they may contain key-value entries that duplicate what’s already in other SSTs.

So earlier versions of SSTs need to be read back in, compacted (duplicate key-value entries deleted), sorted and written back out. The earliest version of the SSTs is considered Level 0 (L0), the next (first-level compacted and sorted) is considered L1, and this process can go on, generating L2 to Ln SSTs. We would call this garbage collection; the metadata world calls it compaction.
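
And here’s a similarly toy-level sketch of compaction: merging two SSTs into the next level, keeping only the newest value for any duplicated key (illustrative only, not how RocksDB actually implements leveled compaction).

```python
def compact(newer_sst: dict, older_sst: dict) -> dict:
    """Toy compaction: merge two SSTs, keep the newer value for duplicate keys,
    and return a single sorted table (the next level's SST)."""
    merged = dict(older_sst)   # start from the older entries
    merged.update(newer_sst)   # newer entries override duplicates
    return dict(sorted(merged.items()))

l0_a = {"k1": "v1-new", "k3": "v3"}
l0_b = {"k1": "v1-old", "k2": "v2"}
l1 = compact(l0_a, l0_b)       # {'k1': 'v1-new', 'k2': 'v2', 'k3': 'v3'}
print(l1)
```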

But each time an SST is written out, that’s another read of all its key-value pairs AND another write to storage. In the SSD world, we would call these repeated writes write amplification. It turns out that RocksDB can have up to a 30X write amplification for a key-value entry. This means that instead of being written just once or twice, it’s written (and reread) up to 30 times. This IO takes away bandwidth and processing power from normal metadata read and write activity, which impacts IO performance.
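
As a back-of-the-envelope illustration of what that amplification means (the GB figure below is made up; the 30X and 5X multipliers are the ones discussed in this post):

```python
logical_gb = 1.0            # data the application actually wrote (made-up figure)
rocksdb_write_amp = 30      # worst-case multiplier cited for RocksDB
speedb_write_amp = 5        # multiplier cited for Speedb later in this post

print(f"RocksDB physical writes: {logical_gb * rocksdb_write_amp:.0f} GB")
print(f"Speedb physical writes:  {logical_gb * speedb_write_amp:.0f} GB")
```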

As GreyBeards know, storage (and flash) garbage collection can lead to unpredictable latencies and system busy times. Intense garbage collection (for SSDs) can seemingly hold off or stall all other IO for some amount of time during this activity. This is the main reason why RocksDB has highly variable latencies and write stalls.

Garbage collection is not an issue when you have a limited number of metadata entries (key-value pairs), but as you get more entries, ongoing garbage collection can become a serious impediment to performing IO. When we say “large metadata stores” we are talking about 30GB of metadata, with probably billions of key-value pair entries.

There appear to be two dimensions to (RocksDB) LSM-tree/SST file performance. One is the number of levels allowed and the other is the size of the SST files.

Speedb determined that two dimensions weren’t sufficient to solve RocksDB performance problems. And sharding the database seemed to be putting the burden on the customer to fix the issue. So Speedb restructured their LSM-trees and SSTs to create 3 or more dimensions to tune for database performance.

With Speedb’s restructured LSM-tree and SST files, they reduce write amplification for large metadata databases from 30X to 5X. That alone could easily increase system performance by a factor of 6.

Adi mentioned that for one cloud based customer, they were able to double performance with 1/4 the (cloud instance) server hardware, essentially providing an ~8X improvement in performance over RocksDB.

Adi also mentioned that they are targeting system developers with large metadata stores. Luckily Speedb is a fully RocksDB compatible replacement. This means developers should only take ~30 minutes to convert a system to use Speedb.
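
For a feel of why the conversion is claimed to be so quick: Speedb preserves the RocksDB API, so application code written against RocksDB shouldn’t need to change, only the library underneath it. The snippet below is ordinary RocksDB-style code using the third-party python-rocksdb bindings (the database path and keys are made up); the claim is that code like this keeps working as-is once Speedb is swapped in under the covers.

```python
import rocksdb  # third-party python-rocksdb bindings (assumed installed)

opts = rocksdb.Options(create_if_missing=True)
db = rocksdb.DB("metadata.db", opts)          # hypothetical database path

db.put(b"volume:0001:size", b"107374182400")  # store a metadata entry
print(db.get(b"volume:0001:size"))            # read it back
```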

We also asked about pricing. Adi said there are two current pricing models: 1) OEMs pay a revenue share to use Speedb and 2) non-OEMs can license the product on a per-node, per-month basis. Given Speedb’s node efficiency over RocksDB, fewer nodes should be required to support the same performance for any given metadata store.

Adi also mentioned they are in the process of releasing an open source version of Speedb that incorporates some of the enterprise product. This way developers can try Speedb for free to see how it works. It won’t be the complete product, but it’s better than native RocksDB.

Adi Gelvan, Co-Founder and CEO Speedb

Adi Gelvan is co-founder and CEO of Speedb, a data management startup that provides a drop-in replacement for the RocksDB embedded storage engine.

A former IT infrastructure manager, Adi has over two decades of experience in management, commercialization and executive sales positions. He specializes in leading global software technology companies like Infinidat and SQream to outstanding growth.

Adi holds a double academic degree in mathematics & computer science.

130: GreyBeards talk high-speed database access using Apache Arrow Flight, with James Duong and David Li

We had heard for a while now about Apache Arrow and Arrow Flight being a high-performing database with access speeds to match, and finally got a chance to hear what it was all about with James Duong, Co-Founder of Bit Quill Technologies/Senior Staff Developer at Dremio, and David Li (@lidavidm), Apache PMC member and software developer at Voltron Data.

First, Apache Arrow is an open source, in-memory database (GitHub repo) for columnar data that enables lightning-fast access and processing of data. Apache Arrow Flight is a set of interfaces, protocols, and services that parallelizes access to load and unload Arrow data over the network, from storage to memory and back, very fast. Listen to the podcast to learn more.

Columnar databases are all the rage these days and have more or less taken over from row-oriented databases. With a row-based database, data is stored (and accessed) row by row. In a columnar database, data is stored in columns, i.e., all data for one column is stored in sequence and then the next column is stored in sequence. Columnar databases can be queried/processed faster than row databases (depending on whether you are accessing multiple columns per row or not). And columnar data should compress better, as all the data in a single column is of the same type.

Also, the fact that columns are located contiguously in memory means that if you process a column at a time, CPU data caches should work better. This is because they can grab a whole vector (a column’s worth of data) with one request.
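
Here’s a small example of Arrow’s columnar layout using the pyarrow Python bindings (assuming pyarrow is installed; the table contents are made up). Each column is stored as a contiguous, typed array, which is what makes column-at-a-time processing cache friendly:

```python
import pyarrow as pa
import pyarrow.compute as pc

# Each column becomes a contiguous, typed Arrow array in memory.
table = pa.table({
    "symbol": ["AAPL", "MSFT", "AAPL", "GOOG"],
    "price":  [191.2, 415.5, 190.8, 172.6],
    "volume": [1200, 800, 950, 400],
})

# Column-at-a-time processing: aggregate one contiguous column (a cache-friendly vector).
print(pc.sum(table["volume"]))
```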

Arrow data is processed and accessed in record batches. These are 2D segments which represent all the columns in a sequence/set of rows. Record batches are the unit of parallelism in Arrow and Arrow Flight. So an Arrow client operating on one CPU thread/core/chip or server could be processing one record batch while another could process a different record batch.
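
And here’s a minimal illustration of record batches with pyarrow (the data and batch size are arbitrary): a table is split into batches, each of which can be handed to a different thread, core or server:

```python
import pyarrow as pa

table = pa.table({
    "id": list(range(1_000)),
    "value": [i * 0.5 for i in range(1_000)],
})

# Split the table into record batches: 2D slices of all columns over a range of rows.
batches = table.to_batches(max_chunksize=256)
print(len(batches), batches[0].num_rows)  # e.g. 4 batches of up to 256 rows each

# Each batch could be handed to a different thread, core or server for processing.
```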

Arrow Flight (GitHub RPC format doc repo) is an RPC framework that includes APIs, protocols, standards (for on-storage, on-wire and in-memory formats) and libraries used to transfer Arrow data and metadata (record batches) across the network. A typical system has both Flight clients and Flight services.

Arrow Flight currently uses Google’s gRPC for data transfers. gRPC is an open source remote procedure call (RPC) framework that supports services within a data center, across data centers and out to the edge. Although Arrow Flight is currently implemented on top of gRPC, other network protocols will be supported in the future.

What makes Arrow Flight so fast is its ability to support parallel transfers. That is, customers can configure Arrow (Flight) clients across clusters of servers and Arrow (Flight) services residing on one or more other servers. Any client can request metadata and record batches from any endpoint (Flight service) in the data center. And yes, Arrow data can be supplied from multiple endpoints by being mirrored/replicated. All data transfers can operate in parallel across all Flight clients and services, with no known bottleneck other than the network.
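
To make the client/service split concrete, here’s a minimal sketch of an Arrow Flight service and client using pyarrow.flight (the port, table contents and ticket value are invented; a real deployment would run many clients against many services in parallel):

```python
import threading
import time

import pyarrow as pa
import pyarrow.flight as flight

class DemoFlightServer(flight.FlightServerBase):
    """Minimal Flight service that streams one in-memory Arrow table."""

    def __init__(self, location="grpc://0.0.0.0:8815"):
        super().__init__(location)
        self._table = pa.table({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

    def do_get(self, context, ticket):
        # Stream the table back to the client as a sequence of record batches.
        return flight.RecordBatchStream(self._table)

server = DemoFlightServer()
threading.Thread(target=server.serve, daemon=True).start()
time.sleep(1)  # give the server a moment to start (illustration only)

# Flight client: pull the record batches over gRPC and reassemble a table.
client = flight.connect("grpc://localhost:8815")
table = client.do_get(flight.Ticket(b"demo")).read_all()
print(table.num_rows)
```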

A single stream of Arrow Flight data was able to deliver 20GB/sec. The fact that you can have any (?) number of Arrow Flight data streams in operation at the same time makes that a very interesting number.

Also, Arrow data can be stored on or sourced from typical data lakes such as Azure Data Lake, AWS S3, Google Cloud storage, etc.

Another advantage of Arrow Flight is the ability to use the same format on the wire and in storage. Normally JDBC (and ODBC) have separate on-storage and on-wire formats, which require a format conversion (serialization) to move data from storage/memory to the wire and another conversion (deserialization) to move data from the on-wire format back to the in-storage/memory format. Arrow Flight does away with serialization and deserialization of data altogether and uses the same format on the wire and in storage.

Arrow Flight SQL allows Arrow processing of SQL database data. My understanding is that customers using non-Arrow databases such as Oracle, SQL Server, Postgres, etc. can use Arrow Flight SQL to provide Arrow in-memory database processing/query execution for their data.

Arrow and Arrow Flight are primarily used to process data analytics workloads, but Arrow also has a new execution engine, the Arrow Gandiva project, that enables vectorized processing of Arrow data. This is a special execution engine for Arrow that supports x86 cores with AVX instructions, (NVIDIA) GPUs, and FPGAs.

There’s also an open source package, Fletcher, used to create Arrow and Arrow Flight processing HDLs so that customers can add Arrow data processing and Arrow Flight data transfer functionality to custom-built FPGAs.

One challenge with open source software is support for problems/bugs that crop up. An active developer community helps, but enterprise customers require professional, on-call 7×24 (5×12?) support for all their critical (and most non-critical) software. Voltron Data (David’s company) provides paid support for Arrow Flight and Arrow data services.

The other major problem with open source software has been usage complexity. The Arrow Flight team is very responsive in clarifying documentation and is trying to make it easier to use. But at the moment Arrow Flight is mostly a set of APIs, libraries and connectors that end users can use to stand up Arrow (Flight) clients and servers to transfer Arrow data between them.

James Duong, Co-Founder Bit Quill Technologies & Sr. Staff Developer at Dremio

An Apache Arrow contributor, cofounder at Bit Quill Technologies, and contributor to Dremio Corporation projects, James Duong has worked with databases for over 15 years, from backend query engines to drivers and protocols. He’s worked with a variety of relational, big data, and cloud databases including Dremio, SQL Server, Redshift, and Hive.

Previously at Simba Technologies, James architected and built connectors for sources, as well as designing the Simba Engine SDK for developing connectivity solutions for any data source.

Bit Quill Technologies, the company James helped co-found, builds back end software in the data and cloud space. Bit Quill has built a name for itself for producing high-quality software, taking a collaborative approach to design and development, and fostering a love for good tech and happy people.

Balancing his passion for the data ecosystem with a young family, James occasionally steps away from it all to go hiking.

David Li, Apache Arrow PMC and software engineer at Voltron Data

David is a PMC member for Apache Arrow and a software engineer at Voltron Data (formerly known as Ursa Computing). Prior to that, he worked on data services and Apache Arrow at Two Sigma.

David holds an M.Eng. in Computer Science from Cornell University.

123: GreyBeards talk data analytics with Sean Owen, Apache Spark committer/PMC member & Databricks lead data scientist

The GreyBeards move up the stack this month with a talk on big data and data analytics with Sean Owen (@sean_r_owen), Data Science lead at Databricks and Apache Spark committer and PMC member. The focus of the talk was on Apache Spark.

Spark is an Apache Software Foundation open-source data analytics project and has been up and running since 2010. Sean is a long time data scientist and was extremely knowledgeable about data analytics, data science and the role that Spark has played in the analytics ecosystem. Listen to the podcast to learn more.

Spark is not an infrastructure solution as much as an application framework. It seems to be a data analytics solution specifically designed to address Hadoop’s shortcomings. At this point, it has replaced Hadoop and become the go-to solution for data analytics across the world. Essentially, Spark takes data analytics tasks/queries and runs them very quickly against massive data sets.

Spark takes analytical tasks or queries and splits them up into stages that are run across a cluster of servers. Spark can use many different cluster managers (see below) to schedule stages across worker nodes attempting to parallelize as many as possible.

Spark has replaced Hadoop mainly because it’s faster and has a better, easier-to-use API. Spark was written in Scala, which runs on the JVM, but its API supports SQL, Java, R (R on Spark) and Python (PySpark). The latter two have become the de facto standard languages for data science and AI, respectively.

Storage for Spark data can reside on HDFS, Apache HBase, Apache Solr, Apache Kudu and (cloud) object storage. HDFS was the original storage protocol for Hadoop. HBase is the Apache Hadoop database. Apache Solr was designed to support high speed, distributed, indexed search. Apache Kudu is a high speed distributed database solution. Spark, where necessary, can also use local disk storage for interim result storage.

Spark supports three data models: RDDs (resilient distributed datasets); DataFrames (column headers and rows of data, like distributed CSVs); and Datasets (distributed typed and untyped data). Spark DataFrame data can be quite large; it’s nothing unusual to have a 100M-row DataFrame. Spark Datasets are a typed version of DataFrames, only usable in the Java API, as Python and R have no static data typing capabilities.
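
A quick PySpark sketch of the DataFrame model (assuming pyspark is installed; the data, of course, is made up):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

# A DataFrame: named columns over distributed rows (like a distributed CSV).
df = spark.createDataFrame(
    [("web", 120), ("web", 80), ("batch", 300)],
    ["workload", "duration_ms"],
)

# Spark splits this query into stages and schedules them on worker nodes.
df.groupBy("workload").avg("duration_ms").show()

# The lower-level RDD behind the DataFrame is still accessible if needed.
print(df.rdd.getNumPartitions())
```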

One thing that helped speed up Spark processing over Hadoop is its native support for in-memory data. With Hadoop, intermediate data had to be stored on disk. Spark supports the option to keep intermediate data in memory, speeding up subsequent processing of that data. Spark data can be pinned or cached in memory using API calls. And the availability of bigger servers, with Intel Optane or just lots more DRAM, has made this option even more viable.
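
A minimal sketch of pinning/caching data in memory via the PySpark API (the size and storage level are chosen arbitrarily for illustration):

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()
df = spark.range(100_000_000)             # a large, lazily evaluated DataFrame

df.persist(StorageLevel.MEMORY_AND_DISK)  # keep it in memory, spill to disk if short
df.count()                                # first action materializes the cache
df.count()                                # later actions reuse the in-memory data
```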

Another thing that Spark is known for is its support of multiple cluster managers. Spark currently supports Apache Mesos, Kubernetes, Apache Hadoop YARN, and Spark’s own standalone cluster manager. In any of these, Spark has a main driver program that takes in analytics requests, breaks them into stages and schedules worker nodes to execute them.

Most data analytics work is executed in batch mode, offline, with incoming data stored on disk/flash someplace (see storage options above). But Spark can also run in a real-time streaming mode, processing data streams. Indeed, Spark can be combined with Apache Kafka to process Kafka topic streams.
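
Here’s a hedged sketch of Spark Structured Streaming reading a Kafka topic; it assumes the Spark Kafka connector package is available, and the broker address and topic name are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Read a Kafka topic as an unbounded, streaming DataFrame.
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
)

# Continuously count records per key and write running totals to the console.
query = (
    stream.groupBy("key").count()
    .writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```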

I asked about high availability (HA) characteristics, specifically for data. Sean mentioned that data HA is more of a storage consideration. But Spark does support HA for analytics jobs/tasks as a whole. As stages are essentially stateless tasks, analytics HA can be done by monitoring stage execution to completion and, if needed, re-scheduling failed stages to run on other worker nodes.

Regarding Spark usability, it has a CLI and APIs but no GUI. Spark has a number of parameters (I counted over 20 for the driver program alone) that can be used to optimize its execution. So it’s maybe not the easiest solution to configure and optimize by hand, but that’s where other software systems, such as Databricks (see link above), come in. Databricks supplies a managed Spark solution for customers that don’t want/need to deal with all the configuration complexity of Spark.
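
For a flavor of that configuration surface, here’s a sketch of setting a few of the many Spark parameters when building a session (the values are illustrative, not recommendations):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-job")
    .config("spark.driver.memory", "4g")            # driver-side memory
    .config("spark.executor.memory", "8g")          # per-executor memory on workers
    .config("spark.sql.shuffle.partitions", "400")  # parallelism of shuffle stages
    .getOrCreate()
)
```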

Sean Owen, Lead Data Scientist, Databricks and Apache Spark PMC member

Sean is a principal solutions architect focusing on machine learning and data science at Databricks. He is an Apache Spark committer and PMC member, and co-author of Advanced Analytics with Spark.

Previously, Sean was director of Data Science at Cloudera and an engineer at Google.

120: GreyBeards talk CEPH storage with Phil Straw, Co-Founder & CEO, SoftIron

GreyBeards talk universal CEPH storage solutions with Phil Straw (@SoftIronCEO), CEO of SoftIron. Phil’s been around IT and electronics technology for a long time and has gone from scuba diving electronics, to DARPA/DOD research, to networking, and is now doing storage. He’s also a co-founder of the company and its former CTO. SoftIron makes hardware storage appliances for CEPH, an open source, software defined storage system.

CEPH storage includes file (CEPHFS, POSIX), object (S3) and block (RBD, RADOS block device, Kernel/librbd) services and has been out since 2006. CEPH storage also offers redundancy, mirroring, encryption, thin provisioning, snapshots, and a host of other storage options. CEPH is available as an open source solution, downloadable at CEPH.io, but it’s also offered as a licensed option from RedHat, SUSE and others. For SoftIron, it’s bundled into their HyperDrive storage appliances. Listen to the podcast to learn more.
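
As a taste of CEPH’s programmability, here’s a minimal sketch of storing and reading an object through the librados Python bindings; it assumes python-rados is installed, a cluster is reachable via /etc/ceph/ceph.conf, and a pool named "mypool" already exists:

```python
import rados

# Connect to the cluster using the standard config file and keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Open an IO context on an existing pool and store/retrieve an object.
ioctx = cluster.open_ioctx("mypool")         # hypothetical pool name
ioctx.write_full("greeting", b"hello ceph")  # write a whole object
print(ioctx.read("greeting"))                # read it back

ioctx.close()
cluster.shutdown()
```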

SoftIron uses the open source version of CEPH and incorporates this into their own, HyperDrive storage appliances, purpose built to support CEPH storage.

There are two challenges to using open source solutions:

  • Support is generally non-existent. Yes, the open source community behind the (CEPH) project supplies bug fixes and can possibly answer some questions, but this is not considered enterprise support, where customers require 7x24x365 support for a product.
  • Usability is typically abysmal. Yes, open source systems can do anything that anyone could possibly want (if not, code it yourself), but trying to figure out how to use any of that often requires a PhD or two.

SoftIron has taken both of these on in its commercial CEPH product offering.

Take support: SoftIron offers enterprise-level support that customers can contract for on their own, even if they don’t use SoftIron hardware. Phil said they would often get kudos for their expert support of CEPH and have often been asked to offer this as a standalone CEPH service. Needless to say, their support of SoftIron appliances is also excellent.

As for ease of operations, SoftIron makes the HyperDrive Storage Manager appliance, which offers a standalone GUI that takes the PhD out of managing CEPH. Anything one can do with the CEPH CLI can be done with SoftIron’s Storage Manager. It’s also a very popular offering with SoftIron customers. Similar to SoftIron’s CEPH support above, customers are requesting that the Storage Manager be offered as a standalone solution for CEPH users as well.

HyperDrive hardware appliances are storage media boxes that offer extremely low-power storage for CEPH. Their appliances range from high density (120TB/1U) to high performance NVMe SSDs (26TB/1U) to just about everything in between. On their website, I count 8 different storage appliance offerings with various spinning disk, hybrid (disk-SSD), SATA and NVMe SSDs (SSD only) systems.

SoftIron designs, develops and manufactures all its own appliance hardware. Manufacturing is entirely in the US, and design and development take place in the US and Europe only. This provides a secure provenance for HyperDrive appliances that other storage companies can only dream about. Defense, intelligence and other security-conscious organizations/industries are increasingly concerned about where electronic systems come from and want assurances that there are no security compromises inside them. SoftIron puts this concern to rest.

Yes, they use CPUs, DRAM and other standardized chips, as well as storage media manufactured by others, but SoftIron has gone out of its way to source all of these other parts and media from secure, trusted suppliers.

All other major storage companies use storage servers, shelves and media sourced from manufacturers just about anywhere in the world.

Moreover, such off-the-shelf hardware usually comes with added components that increase cost and complexity, such as graphics memory/interfaces, cables, over-configured power supplies, etc., that aren’t required for storage. Phil mentioned that each HyperDrive appliance has been reduced to just what’s required to support CEPH storage.

Each appliance has a 6Tbps network that connects all the components, which means no cabling in the box. Also, each storage appliance has CPUs matched to its performance requirements: ARM cores for low performance appliances, AMD EPYC CPUs for high performance appliances. All HyperDrive appliances support wire-speed IO, i.e., if a box is configured to support 1GbE or 100GbE, it transfers data at that speed across all ports connected to it.

Because of their minimalist hardware design approach, HyperDrive appliances run much cooler and use less power than other storage appliances. They consume only 100W per appliance, or 200W for high performance storage, where most other storage systems come in at around 1500W or more.

In fact, SoftIron HyperDrive boxes run so cool that they don’t need fans for the CPUs; they just redirect air flow from the storage media over the CPUs. And running cooler improves the reliability of disk and SSD drives. Phil said they are seeing 2X better drive reliability in the field than these drives normally see.

They also offer a HyperDrive Storage Router that provides an NFS/SMB/iSCSI gateway to CEPH. With their Storage Router, customers using VMware, Hyper-V and other systems that depend on NFS/SMB/iSCSI for storage can just plug and play with SoftIron CEPH storage. With the Storage Router, the only storage interface HyperDrive appliances can’t support is FC.

Although we didn’t discuss this on the podcast, in addition to HyperDrive CEPH storage appliances, SoftIron also provides HyperCast, transcoding hardware designed for real time transcoding of one or more video streams and HyperSwitch networking hardware, which supplies a secure provenance, SONiC (Software for Open Networking in [the Azure] Cloud) SDN switch for 1GbE up to 100GbE networks.

Standing up PB of (CEPH) storage should always be this easy.

Phil Straw, Co-founder & CEO SoftIron

The technical visionary co-founder behind SoftIron, Phil Straw initially served as the company’s CTO before stepping into the role of CEO.

Previously Phil served as CEO of Heliox Technologies, co-founder and CTO of dotFX, VP of Engineering at Securify and worked in both technical and product roles at both Cisco and 3Com.

Phil holds a degree in Computer Science from UMIST.

117: GreyBeards talk HPC file systems with Frank Herold, CEO of ThinkParQ, makers of BeeGFS

We return to our storage thread with a discussion of HPC file systems with Frank Herold (@BeeGFS), CEO of ThinkParQ GmbH, the makers of BeeGFS. I’ve seen BeeGFS start to show up in some IO500 top storage benchmark results, and as more and more data keeps coming online every day, we thought it time to start finding out how our friends in the HPC world handle their data deluge.

Frank’s a former rocket scientist who’s been in and around the storage industry for years, and he was very knowledgeable about BeeGFS’s software defined, parallel file system. He seemed to have a great grasp of the IO requirements in HPC, Life Sciences and other HPC-like applications. Listen to the podcast to learn more.

Turns out that ThinkParQ is a spinoff of the research institute in Germany that originally developed the BeeGFS parallel file system. There are apparently two versions of their product: one which is publicly available (downloadable from their website) and another with commercial support. It’s not quite 100% open source, but it’s got a lot of open source in it and their Git repository is available.

BeeGFS was primarily focused on HPC workloads but as this type of work has become more mainstream, they have moved beyond HPC and now have significant installations in Life Sciences, Oil&Gas and many other big data environments.

It runs on x86/AMD, OpenPOWER, and ARM CPUs. BeeGFS comes as a number of services, one of which is a storage service that uses ZFS or XFS file systems as its backend. It also uses (POSIX-compliant) host client software to access the system. There are also metadata and monitoring services. Most of the time these services run on separate servers, but BeeGFS also supports a “converged mode”, where all these services run on a single server. And you can have multiple converged mode servers in a cluster.

BeeGFS is a parallel file system. This means that it intrinsically supports multiple metadata services/servers and multiple storage servers which allow it to scale up storage bandwidth and performance considerably beyond single appliance systems. Data is automatically distributed across all the storage servers in the configuration, unless you specify that data reside on specific, say all flash storage servers. Similarly, metadata is automatically distributed across all metadata servers in the system.
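
As a toy illustration of the idea (not BeeGFS’s actual placement logic), here’s a sketch of striping a file’s chunks round-robin across storage targets:

```python
def stripe_file(file_size: int, chunk_size: int, storage_targets: list[str]) -> dict:
    """Toy round-robin striping: map each chunk of a file to a storage target."""
    layout = {}
    num_chunks = -(-file_size // chunk_size)  # ceiling division
    for chunk in range(num_chunks):
        layout[chunk] = storage_targets[chunk % len(storage_targets)]
    return layout

# A 10 MiB file striped in 1 MiB chunks across four storage servers.
print(stripe_file(10 * 1024**2, 1024**2, ["stor01", "stor02", "stor03", "stor04"]))
```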

They don’t support any specific RAID protection other than mirroring, and that’s really there to speed up read throughput. Rather, they depend on the underlying XFS/ZFS file system to provide drive failure protection (RAID5/6).

One of BeeGFS’s selling points is that it has few tuning parameters that a customer needs to fiddle with. Frank said it runs quite well right out of the box.

BeeGFS offers a single name space that spans the cluster (of metadata servers/storage servers). But customers can elect to split this name space across a subset of these metadata and storage servers, and by doing so they create multiple BeeGFS clusters.

There’s no inherent support for NFS or SMB, but customers can configure NFS or Samba servers that use BeeGFS as backend storage. Also, there’s no data reduction built into BeeGFS and no automatic data tiering across the backend storage (file systems).

But as noted above, customers can direct which backend storage to use to hold their data. And they do offer a CLI data movement primitive and customers can use this in conjunction with other software to implement storage tiering or do it themselves.

Metadata performance is extremely important for small files and for large, multi-billion-object file systems. BeeGFS uses extensive metadata caching to provide faster access to this information.

Speaking of small file performance, we had a decent discussion of the tradeoffs involved between small and large file performance. And although BeeGFS has decent small file performance, it’s not a be-all, end-all for every small-file-intensive application. According to Frank, not every small file workload is optimal for BeeGFS.

They offer BeeOND, which is BeeGFS On Demand. This is an integration with the Slurm workload scheduler (an HPC work scheduler) that allows customers to spin up a scratch BeeGFS parallel file system across compute servers with storage.

Slurm’s BeeOND integration brings all BeeGFS services up and deploys them on compute nodes you specify. At this point you have a fully installed BeeGFS (scratch) parallel file system. Customers may use this scratch file system to support any compute/data-intensive workload they need to run. When no longer needed, Slurm can be directed to automatically dismantle the BeeGFS file system.

We talked about BeeGFS partners. They have a number of regional partners that provide installation and onsite support and a number of technical partners, such as NetApp, Dell, HPE and INSPUR, that supply BeeGFS configured servers and systems for deployment/installation.

Frank Herold, CEO ThinkparQ

Frank Herold is the CEO of ThinkParQ GmbH – the company behind BeeGFS. He actively leads the company and the product strategy of BeeGFS as a global player for parallel high-performance file systems.

Prior to joining ThinkParQ, he held various senior management positions within ADIC and Quantum Corporation, responsible for market segments within the academic and scientific research, oil and gas, broadcast and video surveillance sectors, focusing on large scale, high-performance and enterprise accounts within EMEA. 

Frank has over 25 years of experience in the IT industry and holds a master’s degree in engineering (Dipl. -Ing.) in rocket science.