120: GreyBeards talk CEPH storage with Phil Straw, Co-Founder & CEO, SoftIron

GreyBeards talk universal CEPH storage solutions with Phil Straw (@SoftIronCEO), CEO of SoftIron. Phil’s been around IT and electronics technology for a long time, going from scuba diving electronics, to DARPA/DOD research, to networking, and now storage. He’s also a co-founder of the company and its former CTO. SoftIron makes hardware storage appliances for CEPH, an open source, software defined storage system.

CEPH storage includes file (CEPHFS, POSIX), object (S3) and block (RBD, RADOS block device, kernel/librbd) services and has been out since 2006. CEPH storage also offers redundancy, mirroring, encryption, thin provisioning, snapshots, and a host of other storage options. CEPH is available as an open source solution, downloadable at CEPH.io, but it’s also offered as a licensed option from Red Hat, SUSE and others. For SoftIron, it’s bundled into their HyperDrive storage appliances. Listen to the podcast to learn more.
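
To give a flavor of what those interfaces look like in practice, here’s a minimal sketch using CEPH’s upstream Python bindings (python3-rados and python3-rbd). The pool, object and image names are hypothetical and the conffile path will vary by cluster; this illustrates the open source APIs, not SoftIron’s tooling.

```python
# Minimal sketch of CEPH's object (RADOS) and block (RBD) interfaces via the
# upstream Python bindings (python3-rados / python3-rbd). Pool, object and
# image names are hypothetical; adjust conffile/keyring for your cluster.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('mypool')            # pool must already exist
    try:
        # Object (RADOS) access: write a small object and read it back
        ioctx.write_full('greybeards-demo', b'hello ceph')
        print(ioctx.read('greybeards-demo'))

        # Block (RBD) access: create a 1GiB image, then write 4KiB at offset 0
        rbd.RBD().create(ioctx, 'demo-image', 1 * 1024**3)
        with rbd.Image(ioctx, 'demo-image') as image:
            image.write(b'\x00' * 4096, 0)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```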

SoftIron uses the open source version of CEPH and incorporates it into their own HyperDrive storage appliances, purpose-built to support CEPH storage.

There are two challenges to using open source solutions:

  • Support is generally non-existent. Yes, the open source community behind the (CEPH) project supplies bug fixes and can sometimes answer questions, but this is not enterprise support, where customers require 7x24x365 support for a product.
  • Usability is typically abysmal. Yes, open source systems can do anything that anyone could possibly want (if not, code it yourself), but figuring out how to use any of that often requires a PhD or two.

SoftIron has taken on both of these challenges in its commercial CEPH product offering.

Take support: SoftIron offers enterprise-level support that customers can contract for on their own, even if they don’t use SoftIron hardware. Phil said they often get kudos for their expert support of CEPH and have been asked to offer it as a standalone CEPH service. Needless to say, their support of SoftIron appliances is also excellent.

As for ease of operations, SoftIron makes the HyperDrive Storage Manager appliance, which offers a standalone GUI that takes the PhD out of managing CEPH. Anything one can do with the CEPH CLI can be done with SoftIron’s Storage Manager. It’s also a very popular offering with SoftIron customers. Similar to SoftIron’s CEPH support above, customers are requesting that the Storage Manager be offered as a standalone solution for CEPH users as well.

HyperDrive hardware appliances are storage media boxes that offer extremely low-power storage for CEPH. Their appliances range from high density (120TB/1U) to high performance NVMe SSDs (26TB/1U) and just about everything in between. On their website, I count 8 different storage appliance offerings with various spinning disk, hybrid (disk-SSD), and SSD-only (SATA and NVMe) configurations.

SoftIron designs, develops and manufactures all their own appliance hardware. Manufacturing is entirely in the US, and design and development take place in the US and Europe only. This provides a secure provenance for HyperDrive appliances that other storage companies can only dream about. Defense, intelligence and other security conscious organizations/industries are increasingly concerned about where electronic systems come from and want assurances that there are no security compromises inside them. SoftIron puts this concern to rest.

Yes, they use CPUs, DRAM and other standardized chips, as well as storage media manufactured by others, but SoftIron has gone out of its way to source all of these parts and media from secure, trusted suppliers.

All other major storage companies use storage servers, shelves and media sourced from manufacturers anywhere in the world.

Moreover, such off-the-shelf hardware usually comes with added components that increase cost and complexity, such as graphics memory/interfaces, cables, over-configured power supplies, etc., that aren’t required for storage. Phil mentioned that each HyperDrive appliance has been reduced to just what’s required to support CEPH storage.

Each appliance has a 6Tbps internal network that connects all the components, which means no cabling in the box. Also, each storage appliance has CPUs matched to its performance requirements: for low performance appliances, ARM cores; for high performance appliances, AMD EPYC CPUs. All HyperDrive appliances support wire speed IO, i.e., if a box is configured to support 1GbE or 100GbE, it transfers data at that speed across all ports connected to it.

Because of their minimalist hardware design approach, HyperDrive appliances run much cooler and use less power than other storage appliances. They consume only 100W, or 200W for high performance storage, per appliance, where most other storage systems come in at around 1500W or more.

In fact, SoftIron HyperDrive boxes run so cool that they don’t need fans for CPUs; they just redirect airflow from the storage media over the CPUs. And running cooler improves the reliability of disk and SSD drives. Phil said they are seeing 2X better drive reliability in the field than those drives normally see.

They also offer a HyperDrive Storage Router that provides an NFS/SMB/iSCSI gateway to CEPH. With their Storage Router, customers using VMware, Hyper-V and other systems that depend on NFS/SMB/iSCSI for storage can just plug and play with SoftIron CEPH storage. With the Storage Router, the only storage interface HyperDrive appliances can’t support is FC.

Although we didn’t discuss this on the podcast, in addition to HyperDrive CEPH storage appliances, SoftIron also provides HyperCast, transcoding hardware designed for real time transcoding of one or more video streams, and HyperSwitch networking hardware, which supplies a secure-provenance SONiC (Software for Open Networking in [the Azure] Cloud) SDN switch for 1GbE up to 100GbE networks.

Standing up PB of (CEPH) storage should always be this easy.

Phil Straw, Co-founder & CEO SoftIron

The technical visionary co-founder behind SoftIron, Phil Straw initially served as the company’s CTO before stepping into the role of CEO.

Previously Phil served as CEO of Heliox Technologies, co-founder and CTO of dotFX, VP of Engineering at Securify, and worked in technical and product roles at both Cisco and 3Com.

Phil holds a degree in Computer Science from UMIST.

115: GreyBeards talk database acceleration with Moshe Twitto, CTO & Co-founder, Pliops

We seem to be on a computational tangent this year. So we thought it best to talk with Moshe Twitto, CTO and Co-Founder at Pliops (@pliopsltd). We had first seen them at SFD21 (see videos of their sessions here) and their talk on how they could speed up database IO was pretty impressive. Essentially, they have a database/storage accelerator board used to increase block store IO activity to NVMe SSDs, but it also provides a key-value store IO accelerator.

Moshe was very knowledgeable about the technology and had previously worked at Samsung in their SSD group. He knew a lot about what happens underneath the covers of an SSD and what it takes to speed up IO. It turns out that many in-memory databases use persistent key-value stores to persist data or to operate in non- (or partial-) memory mode. Listen to the podcast to learn more.

The Pliops board plugs into the PCIe bus and accelerates IO to NVMe SSDs connected to the bus, or can act to accelerate IO to a JBoF that’s networked behind it. Their board uses FPGA(s), NVDIMMs of their own design and DRAM to accelerate database IO using NVMe SSDs.

Pliops operates in one of two modes: as a Key-Value store or as a Block store. Their Key-Value store takes advantage of block store capabilities, so we start there.

In block mode, Pliops provides inline hardware data compression and encryption. Compression requires support for variable length blocks on the backend SSDs; to better support this, they pack multiple compressed blocks into physical blocks. They also use a virtualization service to map host LBAs to physical block addresses (using an internal key-value store). Hardware inline encryption is also provided on a LUN (or namespace) basis, which could enable each database to have its own key. They have a root-of-trust secret key used to encrypt customer namespace (database) keys.
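
Pliops didn’t go into implementation detail on the podcast, but the bookkeeping behind packing variable-length compressed data beneath a fixed-size block interface can be sketched conceptually. The Python below is my own illustrative model (class and field names are invented, not Pliops’): compress each host block, append it to a sequentially written log, and record where it landed in an LBA map.

```python
# Conceptual sketch (not Pliops' implementation): pack variable-length
# compressed host blocks into physical media written sequentially, and track
# them in a key-value style map of LBA -> (byte offset, compressed length).
import zlib

LOGICAL_BLOCK = 4096

class PackedBlockStore:
    def __init__(self):
        self.log = bytearray()      # stand-in for NAND written sequentially
        self.lba_map = {}           # LBA -> (offset in log, compressed length)

    def write(self, lba: int, data: bytes) -> None:
        assert len(data) == LOGICAL_BLOCK, "host writes fixed-size logical blocks"
        compressed = zlib.compress(data)
        offset = len(self.log)
        self.log += compressed      # append-only: packs many LBAs per physical block
        self.lba_map[lba] = (offset, len(compressed))

    def read(self, lba: int) -> bytes:
        offset, length = self.lba_map[lba]
        return zlib.decompress(bytes(self.log[offset:offset + length]))

store = PackedBlockStore()
store.write(0, b'A' * LOGICAL_BLOCK)            # a highly compressible block
print(len(store.log), store.read(0) == b'A' * LOGICAL_BLOCK)
```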

They also optimize the physical block layout on the SSD to reduce write amplification (doing more than one write to the NAND for every host write to the SSD).

Block mode also supports smart caching. This is especially useful for database journaling/logging, which reuses a portion of LBA address space (blocks) as a revolving journal/log. These blocks are overwritten with new data often, and data written to them need not be destaged to NVMe SSDs as long as it can be maintained in NVDIMM storage. At some point it gets destaged, but probably only when log activity slows down (if ever) or some timeout occurs.

For their key-value storage accelerator, they have implemented an API that’s similar to RocksDB, a persistent key-value store which is used as a physical storage backend for Redis and similar in-memory databases. However, the challenge with RocksDB is that there are lots of tuning knobs/parameters, so getting it right takes some work. All of this can be avoided just by using Pliops.
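
For context on what that API style looks like, here’s a tiny example against a RocksDB-style store, assuming the third-party python-rocksdb binding. The options shown are just a small, illustrative subset of the tuning knobs Moshe was referring to; none of this reflects Pliops’ actual interface.

```python
# Illustrative only: what talking to a RocksDB-style key-value store looks like
# (assuming the third-party python-rocksdb binding). Pliops exposes a similar
# put/get/iterate style of KV API, but offloads the work to its board.
import rocksdb

opts = rocksdb.Options()
opts.create_if_missing = True
# A tiny sample of the many tuning knobs: memtable size, open-file limits,
# compression, compaction style, block cache sizing, and so on.
opts.write_buffer_size = 64 * 1024 * 1024
opts.max_open_files = 300000

db = rocksdb.DB("example.db", opts)
db.put(b"user:1001", b'{"name": "alice"}')      # persist a key-value pair
print(db.get(b"user:1001"))                     # -> b'{"name": "alice"}'

it = db.iterkeys()                              # ordered iteration over keys
it.seek(b"user:")                               # range scans start from a prefix
print(list(it))
```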

We didn’t talk too much about how their key-value store works. Moshe says they optimize the key structures and key data so that all database keys can be retained in their board’s memory and just by doing that, they can have immediate (1 IO) access to any data block pointed to by those keys.

He did mention that they provide roughly the same performance for a database getting 10-25% host cache hit rates using their board as that same database would see with an 80-90% host cache hit rate without their board. Some of this was shown at SFD21 (so check out the videos above for more performance info).

A couple of other advantages they bring to the table: as they are interposed between the host and the NVMe SSDs, they can take advantage of their NVDIMMs and memory to write much wider stripes than the host writes. This allows them to reduce SSD read and write amplification (due to less garbage collection) by writing more full NAND pages. All this also reduces physical host (data) writes/day, which can significantly improve SSD endurance.

Somewhere in all that smart caching and data compression, they are also able to decrease response times. It turns out that databases that don’t use RocksDB or depend on key-value stores can easily take advantage of all their block store functionality to improve IO performance.

They mostly market their product to hyperscalers and super-scalers. His definition of a super-scaler was any organization that operates at public cloud levels but is not a public cloud (e.g., big social media companies).

Moshe Twitto, CTO & Co-founder Pliops

Moshe is an expert in advanced data management and coding algorithms. Prior to co-founding Pliops, Moshe served as CTO of Samsung’s SSD Controller Development Center in Israel.

Moshe holds MSEE, BSEE degrees from Technion University, Summa Cum Laude and served in the Unit 8200 Intelligence Division of the Israel Defense Corps.

114: GreyBeards talk computational storage with Tong Zhang, Co-Founder & Chief Scientist, ScaleFlux

Seeing as how one topic on last year’s FMS2020 wrap-up with Jim Handy was the rise of computational storage, and it’s been a long time (see GreyBeards talk with Scott Shadley at NGD Systems) since we discussed this, we thought it time to check in on the technology. So we reached out to Dr. Tong Zhang, Chief Scientist and Co-founder, ScaleFlux to see what’s going on. ScaleFlux is seeing rising adoption of their product among hyperscalers as well as large enterprises. Their computational storage product is a programmable, FPGA-based SSD available in 4TB and 8TB capacities.

Tong was very knowledgeable on current industry trends (Moore’s law slowing & others) that have created an opening for computational storage and other outboard compute. He is also well versed in how some of the world’s biggest customers are using the technology to work faster and cheaper in their data centers. Listen to the podcast to learn more.

At the start Tong mentioned Alibaba’s use of ScaleFlux’s transparent, line speed, outboard encryption/decryption and compression/decompression. And, depending on the data, they can see compression ratios far exceeding 2:1. As such, customers not only benefit from a cheaper $/GB but can also see better NAND endurance and higher performance.

Hosts can do compression and encryption but doing so takes a lot of CPU cycles. It turns out that compression is more compute intensive than encryption. Tong said that most modern cores can encrypt/decrypt at 1GB/sec but, depending on the compression algorithm, can only compress at 40 to 100MB/sec. But in any case doing so on the host consumes a lot of CPU instruction cycles. With ScaleFlux, they can compress and decompress at PCIe bus speeds.

Most storage controllers that offer compression/decompression must have some sort of LBA (logical block address) virtualization, because while the host may be writing 512 or 4096 byte blocks, what’s actually written to the NAND is more like 231 or 1999 bytes. So packing these odd, variable-length blocks into NAND blocks can become a problem. But most SSDs already have a flash translation layer (FTL) where LBA addresses are mapped, over time, to different physical NAND page/block addresses. ScaleFlux has combined support for LBA virtualization and the FTL into the same process and, by doing so, reduces IO overhead and performs better.
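
To make the “combined into the same process” point concrete, here’s a purely conceptual sketch (my own, not ScaleFlux’s design): with separate layers, a host read walks a compression map and then the FTL; with a combined table, one lookup resolves a host LBA straight to a physical NAND location plus compressed length.

```python
# Conceptual sketch (not ScaleFlux's design): collapsing the compression map
# and the FTL into a single table turns two lookups per host read into one.

# Two-level: host LBA -> intermediate logical address -> physical NAND location
compression_map = {0: (100, 1999)}        # host LBA -> (logical addr, compressed bytes)
ftl             = {100: ("die0", 7, 3)}   # logical addr -> (die, block, page)

def read_two_level(lba):
    logical, length = compression_map[lba]
    return ftl[logical], length           # two table walks per read

# Combined: host LBA -> physical NAND location plus compressed length, directly
combined_map = {0: ("die0", 7, 3, 1999)}  # host LBA -> (die, block, page, compressed bytes)

def read_combined(lba):
    die, blk, page, length = combined_map[lba]
    return (die, blk, page), length       # one table walk per read

print(read_two_level(0), read_combined(0))
```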

ScaleFlux’s drive is an NVMe SSD, which already supports great native response times, but when you are transferring half or less of the (compressed) data from the host onto NAND, you can reduce latencies even more.

Although their current generation product is based on TLC NAND, they are working on the next generation, which will support QLC. And the benefits of writing and reading less data should also help QLC endurance and performance.

Although ScaleFlux is seeing great adoption with just outboard transparent compression and encryption, there is more that could be done. For example:

  • Filtering queries at the drive rather than at the host. If customers can send a search key/phrase or other filtering request directly to the drive, the drive can pass over all its data and send back just the data that matches that filter request.
  • Transcoding and other data format changes. Although transcoding makes a lot of sense to do outboard, Tong also mentioned format changes. We asked him to clarify and he said to consider a row-based database that needs to be accessed in columnar format. If the drive could change the format from one to the other, it opens up more analytics tool sets (see the conceptual sketch after this list).
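
To make those two ideas concrete, here’s a conceptual sketch in Python. The data, field names and functions are invented for illustration only; ScaleFlux hasn’t published an API like this.

```python
# Conceptual sketch of the two ideas above (names and data are invented, not a
# ScaleFlux API): the drive scans records it already holds and ships back only
# matches, or reshapes row-oriented records into columnar form for analytics.

rows = [  # imagine these records live on the drive, not in host memory
    {"id": 1, "region": "emea", "sales": 120},
    {"id": 2, "region": "apac", "sales": 340},
    {"id": 3, "region": "emea", "sales": 90},
]

def filter_on_drive(records, field, value):
    """Filter pushdown: only matching records cross the bus back to the host."""
    return [r for r in records if r[field] == value]

def rows_to_columns(records):
    """Format change: row records become per-field columns for analytics tools."""
    return {field: [r[field] for r in records] for field in records[0]}

print(filter_on_drive(rows, "region", "emea"))   # 2 records returned instead of 3
print(rows_to_columns(rows))                     # {'id': [1, 2, 3], 'region': [...], ...}
```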

At the moment, ScaleFlux engineering teams are the ones that program the FPGA to perform outboard functionality. But in a future release, they plan to add ARM cores in an SoC, which can handle more general purpose outboard functionality as code.

Because of the added complexity of compression, encryption and other outboard logic, we asked Tong what power loss protection was available at the drive level. Tong assured us that once data has been received by their device, it is maintained across a power failure, with capacitors and other logic to offload it.

Tong also mentioned that Intel, AWS and the NVMe standards committee are looking at adding some computational storage support into the NVMe standard, so applications and host software can invoke and maybe modify outboard functionality on the fly. Sort of like loading containers of functionality to run on an SSD.

Dr. Tong Zhang, Chief Scientist and Co-founder, ScaleFlux

Dr. Tong Zhang is a well-established researcher with significant contributions to data storage systems and VLSI signal processing. Dr. Zhang is responsible for developing key techniques and algorithms for ScaleFlux’s Computational Storage products and exploring their use in mainstream application domains.

He is currently a Professor at Rensselaer Polytechnic Institute (RPI). His current and past research spans databases, filesystems, solid-state and magnetic data storage devices and systems, digital signal processing and communication, error correction coding, VLSI architectures, and computer architecture.

He has published over 150 technical papers at prestigious USENIX/IEEE/ACM conferences and journals, with a citation h-index of 36, and has served as general and technical program chair for several premier conferences. Among his many research accomplishments, he made pioneering contributions to establishing flash memory signal processing and enabling practical implementation of low-density parity-check (LDPC) codecs. He received two best paper awards and has over 20 issued/pending US patent applications.

He holds BS/MS degrees in EE from Xi’an Jiaotong University, China, and a PhD degree in ECE from the University of Minnesota.

113: GreyBeards talk storage for next gen. workloads with Liran Zvibel, Co-Founder & CEO WekaIO

Sponsored By:

I’ve known Liran Zvibel, Co-founder and CEO of WekaIO, for many years now and it’s the second time he’s been on our show (see: Episode 56: GreyBeards talk high performance file storage...). In those days, WekaIO was just coming out and hitting the world with their extremely high-performing, scale-out unstructured data solution. Well since then, they’ve just gotten better.

Keith and I had a great time talking with Liran again. Liran has deep knowledge about unstructured data and how enterprises use it these days. WekaIO’s story over the last two years has gone beyond great performance to real world, hybrid cloud offerings, as well as going after cloud native apps’ (read Kubernetes [K8S]) persistent storage. Listen to the podcast to learn more.

We started with a history lesson on WekaIO. Back in those days (and it persists today, I might add), there were many IO workloads that required companies to purchase different solutions for different work. For example, they needed DAS or SAN for performance, NAS for ease of access and object for scale. WekaIO came out with an answer to all these problems in a single, scalable storage system. That is, they performed IO as fast as DAS or SAN block, had all the ease of access of NAS, and could scale as much as object.

However, the real culprit holding the world back was NFS. At the outset, NFS was designed (back in the 1990s) around the then current networking speeds (10-100Mbps), and it performed just fine at those speeds. But when 10-100GbE came out in the 2000s, NFS’s metadata overhead was too chatty to support wire speeds. Thus, any storage that depended on NFS protocols couldn’t supply (small) files fast enough for modern applications.

This is why WekaIO has moved to not only support NFS and SMB but also POSIX and NVIDIA® GPUDirect® Storage interfaces. By offering POSIX, WekaIO is able to plug into standard Linux and Windows server systems and provide excellent small file performance. Of course applications that demand small file performance today are mostly data analytics and AI/ML/DL workloads.

Consequently, NVIDIA came out with their GPUDirect Storage protocol to address getting small file (data) into GPUs faster. With GPUDirect, storage systems can RDMA data directly from storage to GPU memory and vice versa, with no OS intervention (other than to set up the transfer). If you happen to have a small file, high performing storage system attached to your fabric that supports GPUDirect, like WekaIO, you can significantly speed up your AI/ML/DL workloads.

Next we started talking K8S storage. WekaIO uses their POSIX interface in their CSI plugin to support K8S container persistent storage. Again, supplying high performance for small files seems to be tailor made for the K8S container applications that exist today and will for the foreseeable future.

Enter the cloud. Among other things, WekaIO is an AWS primary storage vendor. It also offers snap to cloud. And with both of these in tandem, it’s just become a lot easier to move and access your unstructured data in the cloud. Liran mentioned that WekaIO primary storage in AWS operates across AZs. This means it can be configured to support better availability than EBS.

Large BioPharma companies are using WekaIO in AWS to store and process field data and research data, so that this work can be done around the world. Some companies have run out of compute in a single AZ (unbelievable, I know, but it’s COVID). By offering multi-AZ unstructured data access with WekaIO, these companies can spread their compute across AZs and regions and still access their data. And when their products are ready for gov’t certification, having all this data in the cloud provides an easy way for the gov’t to access that same data.

Liran Zvibel, Co-founder and CEO WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design, and development for a portfolio of rich social media applications.

Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for the XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc. in Mathematics and Computer Science from Tel Aviv University.

110: GreyBeards talk FMS2020 wrap up with Jim Handy, General Director of Objective Analysis

This month it’s back to storage and our annual wrap-up on the Flash Memory Summit Conference with Jim Handy, General Director of Objective Analysis. Jim’s been on our show 5 times before and is a well known expert on NAND and SSDs (as well as DRAM and memory systems). Jim also blogs at TheSSDGuy.com and TheMemoryGuy.com, just in case you want to learn more.

FMS went virtual this year and had many interesting topics including how computational storage is making headway in the cloud, 3D QLC is hitting the enterprise with PLC on the way, and for a first at FMS, a talk on DNA storage (for more information on this, see our podcast with CatalogDNA). Jim’s always interesting to talk with to help us understand where the NAND-SSD industry is headed. Listen to the podcast to learn more.

Jim mentioned that the major NAND vendors are all increasing the number of layers in their 3D NAND, and it continues to scale well. Most vendors are currently shipping ~100-layer NAND, with Micron doing more than that. And vendor roadmaps are looking at the possibility of 200 layers or more. Jim doesn’t think anyone knows how high it can go.

Another advantage of 3D NAND is that it can be used to make bigger bit cells, which have better endurance. From Jim’s perspective, more electrons per cell means a better, more resilient bit cell.

Many vendors in the nascent persistent memory industry were hoping that NAND would stop scaling at some point and they would be able to pick up the slack. But NAND manufacturers found 3D, and scaling hasn’t stopped at all. This has relegated most persistent memory vendors to a small niche market, with the exception of Intel (and Micron).

Jim said that Intel is losing money on Optane every year, ~$5B so far. But Intel knows that chip profitability is tied to economies of scale; volumes matter. With enough volume, Optane will become cheap enough to manufacture that they will make buckets of money from it.

Interestingly, Jim said that DRAM scaling is slowing down. That means there may be an even bigger market for something close to DRAM access speeds, but with increased density and lower cost. Optane seems to fit that description very well.

Jim also mentioned that computational storage is starting to see some traction with public cloud vendors. Computational storage adds generic compute power inside an SSD, which can be used to perform storage intensive functions out at the SSD rather than transferring data into the CPU for processing. This makes sense where a lot of data would need to be transferred back and forth to an SSD and where computational cycles are just as cheap out on the SSD as in the server. For example, for data compression, search, and video transcoding, computational storage can make a lot of sense. (See our podcast with NGD Systems for more information.)

In contrast, Open-Channel SSDs are dumb SSDs, i.e., SSDs without a flash translation layer or the other smarts needed to make NAND work as persistent storage in the enterprise. There’s a small group of system providers that want to perform all this functionality at a global scale (or across multiple SSDs) rather than at the local, SSD drive level.

Another topic that hit its stride this year at FMS2020 was Zoned Namespaces (ZNS). ZNS partitions an SSD into separately addressable segments, to allow higher performing sequential (write) access within those zones. As SSD capacity has increased, IO activity has sky-rocketed and this has led to an “IO blender” effect. Within an IO blender, it’s impossible to understand which IO is following a sequential pattern and which is not. ZNS is intended to solve that problem.

With ZNS SSDs, IOs doing sequential access can have their own partition, and that way the SSD can recognize its sequential IO and act accordingly. It turns out that sequential writes to NAND can perform much, much faster than random writes.
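
A zone behaves roughly like an append-only region with a write pointer: writes must land at the pointer (i.e., sequentially), and the zone is reset as a whole before reuse. The toy model below is just to illustrate that behavior; it is not a real NVMe/ZNS driver.

```python
# Minimal conceptual model of a ZNS zone (not a real NVMe driver): writes must
# land at the zone's write pointer, i.e. sequentially, and the zone is reset
# as a whole before it can be rewritten -- which is exactly what NAND prefers.

class Zone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0           # next block that may be written
        self.data = [None] * size_blocks

    def append(self, block):
        """Sequential-only write: always lands at the current write pointer."""
        if self.write_pointer >= self.size:
            raise IOError("zone full -- must reset before reuse")
        self.data[self.write_pointer] = block
        self.write_pointer += 1

    def reset(self):
        """Whole-zone reset, analogous to erasing the backing NAND blocks."""
        self.write_pointer = 0
        self.data = [None] * self.size

zone = Zone(size_blocks=4)
for b in (b"log-entry-1", b"log-entry-2"):
    zone.append(b)                       # a sequential stream gets its own zone
zone.reset()                             # reclaim the whole zone in one operation
```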

ZNS was invented for SMR (shingled magnetic recording) disks, because these overwrite portions of other tracks (like roof shingles, tracks on SMR disks overlap). We had heard about ZNS at FMS2019 but had thought it was just a better way to share access to a single SSD by carving it up into logical (mini-)volumes. Jim said that’s also a benefit, but the major advantage is being able to understand sequential IO and write to the SSD more effectively.

We talked some on the economics of NAND flash, disk and tape as storage media. Jim and I see this continuing a trend that’s been going on for years, where NAND storage costs ~10X more per GB than disk, and disk storage costs ~10X more per GB than tape. All three technologies continue their relentless pursuit of increasing capacity, but it’s almost like train tracks, all three $/GB curves following one another into the future.

On the other hand, high RPM disk seems to have died, replaced by SSDs. Disk manufacturers have seen unit declines, but the number of GB they are shipping continues to increase. Contrary to what a number of AFA system providers say, disk is not dead and is unlikely to die anytime soon.

Finally, we discussed DNA storage and its coming entry into the storage market. It’s all a question of the price of the drive and media technology, the size of the mechanism (drive?), and read and write access times. At the moment all of these are coming down but are not yet competitive with tape. But given DNA technology trends, there doesn’t appear to be any physical barrier that will stop it from becoming yet another storage technology in the enterprise, most likely at a 10X $/GB cost advantage over tape…

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.