41: Greybeards talk time shifting storage with Jacob Cherian, VP Product Management and Strategy, Reduxio

In this episode, we talk with Jacob Cherian (@JacCherian), VP of Product Management and Product Strategy at Reduxio. They have produced a unique product that merges some characteristics of CDP storage with the best of today's hybrid and deduplicating storage into a new primary storage system. We first saw Reduxio at VMworld a couple of years back, and this is the first chance we've had to talk with them.

Backdating data

Many of us have had the need to go back to previous versions of files, volumes and storage. But few systems provide an easy way to do this. Reduxio is the first storage system that makes this effortless to do.

Reduxio’s storage system splits apart an IO write operation into data and meta-data. The IO meta-data includes the volume/LUN id, offset into the volume, and data length. The data is chunked, compressed, hashed, and then sent to NVRAM cache. The IO meta-data and a system-wide time stamp, together with the data chunk hash(es), are sent to a separate key-value (K-V) meta-data store.
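To make the write-path split concrete, here is a rough sketch in Python. The chunk size, hash choice and store layouts are our own illustrative assumptions, not Reduxio's actual implementation:

```python
import hashlib
import time
import zlib

CHUNK_SIZE = 8 * 1024  # illustrative fixed chunk size

def write_io(data_store, meta_store, volume_id, offset, data):
    """Split one write into deduped data chunks plus a time-stamped meta-data record."""
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in data_store:          # dedupe: store each unique chunk once
            data_store[digest] = zlib.compress(chunk)
        hashes.append(digest)
    # Meta-data: volume/LUN id, offset, length, chunk hashes, system-wide time stamp
    key = (volume_id, offset, int(time.time()))
    meta_store[key] = {"length": len(data), "hashes": hashes}
    return key
```

The key point is that the data chunks and the time-stamped meta-data live in separate stores, which is what makes the time shifting described below cheap.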

What Reduxio supplies is an easy way to go back, for any data volume, to any second in its past. Yes, there are limits as to how far back one can go with a data volume: like saving every second for the last 8 hours, every hour for the last week, every week for the last month, every month for the last year, etc., all of which can be established at volume configuration time. But all this does is tell Reduxio when to discard old data.

With all this in place, re-establishing a volume to some instant in its past is simply a query to the meta-data K-V store with the appropriate time stamp. The query returns the hashes and other IO meta-data for all the data chunks, in sequence, for the volume as it existed at that point in time. With that information the system can easily fabricate the volume at that moment in its past.
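A time-travel query against such a meta-data store might look something like the following sketch. The store layout here is our illustrative assumption; Reduxio's actual query machinery is surely more sophisticated:

```python
import zlib

def volume_at(meta_store, data_store, volume_id, as_of):
    """Rebuild a volume image as it existed at time `as_of` (one-second granularity).
    meta_store keys: (volume_id, offset, timestamp); values hold ordered chunk hashes."""
    latest = {}  # offset -> (timestamp, meta-data record)
    for (vol, offset, ts), record in meta_store.items():
        if vol == volume_id and ts <= as_of:
            if offset not in latest or ts > latest[offset][0]:
                latest[offset] = (ts, record)
    # Fetch each chunk by hash, decompress, and stitch the volume back together
    return {off: b"".join(zlib.decompress(data_store[h]) for h in rec["hashes"])
            for off, (_, rec) in latest.items()}
```

Note that nothing is copied when the volume is "rolled back"; the reconstruction is just a different view over chunks that are already there.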

By keeping the data and the meta-data tag, time stamp and hash(es) information separate, Reduxio can reconstruct the data at any time (to one second granularity) in the past where data is still available to the system.

Performance

In the past, this sort of time shifting storage functionality was limited to a separate CDP backup appliance. What Reduxio has done is integrate all this functionality into a deduplicating-compressed, auto-tiering primary storage system. So every IO is chunked, deduplicated and compressed, with the meta-data, time stamps and hashes split off from the data chunks. There is no IO performance penalty for doing any of this; it's all a part of the normal IO path of the Reduxio primary storage system.

However, there is some garbage collection activity that needs to go on in order to deal with data that’s no longer needed. Reduxio does this mostly in real time, as the data actually expires.

Deduplication, compression and all the other characteristics of the storage system that enable its time shifting capabilities cannot be turned off.

Auto storage tiering

Reduxio optimized their auto-tiering beyond what is normally done in other hybrid storage systems. Data is chunked and moved to cache and ultimately destaged to flash. Hot vs. cold data is analyzed in real time, not sometime later as with other hybrid storage systems. Also, when data is deemed cold and needs to be moved to disk, Reduxio takes another step: it analyzes its meta-data K-V store and other information to see what other data was referenced during the same time as this data. This way it can attempt to demote a "group" of data chunks that will likely all be referenced together. That way, when one chunk of this "group" of data is referenced, the rest can be promoted to flash/cache at the same time.
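A toy version of the "referenced together" grouping idea might look like this; the time window and access-log format are our own illustrative assumptions:

```python
WINDOW = 5  # seconds: accesses this close together count as "referenced together"

def build_groups(access_log):
    """Split an access log of (timestamp, chunk_id) pairs into groups of chunks
    referenced within WINDOW seconds of one another, so a whole group can be
    demoted to disk, and later promoted back to flash/cache, together."""
    groups, current, last_ts = [], [], None
    for ts, chunk in sorted(access_log):
        if last_ts is not None and ts - last_ts > WINDOW:
            groups.append(current)   # gap in accesses: close out the current group
            current = []
        current.append(chunk)
        last_ts = ts
    if current:
        groups.append(current)
    return groups
```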

Their auto-tiering group algorithm is self-refining: every time they demote or promote data, they record any data that is referenced together, so the next time they demote data chunks the group definitions can be further refined.

Reduxio storage system

Reduxio provides a hybrid (disk-SSD) iSCSI primary storage system that holds 40TB of storage today, and with an average compression-dedupe ratio (over their 2PB of field data) of >4:1, 40TB should equate to over 160TB of usable data storage. Some of that usable storage would be for current volume data and some would be used for historical data.

There was a Slack discussion the other week on what to do about ransomware. It seems to me that Reduxio, with its time traveling storage, could be used as an effective protection against ransomware.

The podcast runs ~41 minutes. Although snapshots have been around for a long time (one of the Greybeards worked on a snapshotting storage system back in the early 90s), Reduxio has taken the idea to new heights. Listen to the podcast to learn more.

Jacob Cherian, VP Product Management and Product Strategy, Reduxio

Jacob is responsible for Reduxio’s product vision and strategy. Jacob has overall ownership for defining Reduxio’s product portfolio and roadmap.

Prior to joining Reduxio, Jacob spent 14 years at Dell in the Enterprise Storage Group leading product development and architectural initiatives for host storage, NAS, SAN, RAID and other data center infrastructure. As a member of Dell’s storage architecture council he was responsible for developing Dell’s strategy for unstructured data management, and drove its implementation through organic development efforts and technology acquisitions such as Ocarina Networks and Exanet. In his last role as a Dell expatriate in Israel he oversaw Dell’s FluidFS development.

Jacob started his career in Dell as a development engineer for various SAN, NAS and host-side solutions, then served as the Architect and Technologist for Dell’s MD series of external storage arrays.

Jacob was named a Dell Inventor of the Year in 2005, and holds 30 patents and has 20 patents pending in the areas of storage and networking. He holds a Bachelor of Science (B.S.) in Electrical Engineering from the Cochin University of Science and Technology, a Master of Science (M.S.) in Computer Science from Oklahoma State University, and a Master of Business Administration (MBA) from the Kellogg School of Management, Northwestern University.

40: Greybeards storage industry yearend review podcast

In this episode, the Greybeards discuss the year in storage, and naturally we kick off with the consolidation trend in the industry and the big one last year, the DELL-EMC acquisition. How the high margin EMC storage business is going to work in a low margin company like Dell is the subject of much speculation. That, and which of the combined companies' storage products will make it through the transition, make for interesting discussions. And finally, what exactly Dell's long term strategy is remains another open question.

We next turn to the coming of age of object storage. A couple of years ago, object storage was being introduced to a wider market but few wanted to code to RESTful interfaces. Nowadays, that seems to be less of a concern and the fact that one can have onsite/offsite/cloud based object storage repositories from open source, proprietary solutions and everything in between is making object storage a much more appealing option to enterprise IT.

Finally, we discuss the new Tier 0. What with NVMe SSDs and the emergence of NVMe over Fabric coming out last year, Tier 0 has never looked so promising. You may recall that Tier 0 was hot about 5 years ago, with TMS, Violin and others coming out with lightning fast storage IO. But with DELL-EMC DSSD; startups (E8 Storage, Mangstor, Apeiron Data Systems, and others); NVDIMMs, CrossBar, and Everspin coming out with denser offerings; and other SCM (Micron, HPE, IBM, others?) technologies on the horizon, Tier 0 has become red hot again.

Sorry about the occasional airplane noise and other audio anomalies. The podcast runs over 47 minutes. Howard and I could talk for hours on what's happening in the storage industry. Listen to the podcast to learn more.

Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage.com, and can be found on twitter @RayLucchesi.

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog and can be found on twitter @DeepStorageNet.


39: Greybeards talk deep storage/archive with Matt Starr, CTO Spectra Logic

In this episode, we talk with Matt Starr (@StarrFiles), CTO of Spectra Logic, the deep storage experts. Matt has been around a long time and Ray's shared many a meal with Matt as we're both in NW Denver. Howard has a minor quibble with Spectra Logic over the use of his company's name (DeepStorage) in their product line, but he's also known Matt for a while now.

The Pearl

Matt and Spectra Logic have a number of customers with multi-PB to over an EB of data repository problems, and how to take care of these ever expanding storage stashes is an ongoing concern. One of the solutions Spectra Logic offers is Black Pearl Deep Storage, which provides an object storage, RESTful interface front end to a storage tiering/archive backend that uses flash, (spin-down) disk, (LTFS) tape (libraries) and the (AWS) cloud as backend storage.

Major portions of the Black Pearl are open sourced and available on GitHub. I see several (DS3-)SDKs for Java, Python, C, and others. Open sourcing the product provides an easy way for client customization. In fact, one customer was using CEPH and modified their CEPH backup client to send a copy of data off to the Pearl.

We talk a bit about the Black Pearl's data integrity. It uses a checksum, computed over the object at creation time, which is then verified any time the object is retrieved, copied, moved or migrated, and can be validated periodically (scrubbed), even when the object has not been touched.
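The checksum-at-creation, verify-on-access, periodic-scrub pattern is easy to sketch in Python. This illustrates the general technique, not Black Pearl's actual code:

```python
import hashlib

def store_object(repo, name, data):
    """Record the object's checksum once, at creation time."""
    repo[name] = {"data": data, "checksum": hashlib.sha256(data).hexdigest()}

def verify_object(repo, name):
    """Recompute and compare; run on every retrieve/copy/move/migrate."""
    obj = repo[name]
    return hashlib.sha256(obj["data"]).hexdigest() == obj["checksum"]

def scrub(repo):
    """Periodic integrity pass over objects that haven't been touched;
    returns the names of any objects that failed verification."""
    return [name for name in repo if not verify_object(repo, name)]
```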

Super Computing’s interesting (storage) problems

Matt just returned from SC16 (the Super Computing Conference 2016) in Salt Lake City last month. At the conference there were plenty of multi-PB customers looking for better storage alternatives.

One customer Matt mentioned was the Square Kilometer Array, the world's largest radio telescope, which will be transmitting 700TB/hour, over 1EB per year. All that data has to land somewhere, and for this quantity (>1EB) of data, tape becomes a necessary choice.

Matt likened Spectra's archive solutions to warehouses vs. factories. For the factory floor, you need responsive (AFA or hybrid) primary storage, but for the warehouse, you just want cheap, bulk storage (capacity).

The podcast runs long, over 51 minutes, and reveals a different world from the GreyBeards' everyday enterprise environments: specifically, customers that have extra large data repositories and how they manage to survive under the data deluge. Matt's an articulate spokesperson for Spectra Logic and their archive solutions, and we could have talked about >1EB data repositories for hours. Listen to the podcast to learn more.

Matt Starr, CTO, Spectra Logic

Matt Starr’s tenure with Spectra Logic spans 24 years and includes experience in service, hardware design, software development, operating systems, electronic design and management. As CTO, he is responsible for helping define the company’s product vision, and serves as the executive representative for the voice of the market. He leads Spectra’s efforts in high-performance computing, private cloud and other vertical markets.

Matt served as the lead engineering architect for the design and production of Spectra’s TSeries tape library family. Spectra Logic has secured more than 50 patents under Matt’s direction, establishing the company as the innovative technology leader in the data storage industry. He holds a BS in electrical engineering from the University of Colorado at Colorado Springs.

38: GreyBeards talk with Rob Peglar, Senior VP and CTO, Symbolic IO

In this episode, we talk with Rob Peglar (@PeglarR), Senior VP and CTO of Symbolic IO, a computationally defined storage vendor. Rob has been around almost as long as the GreyBeards (~40 years) and most recently was with Micron and prior to that, EMC Isilon. Rob is also on the board of SNIA.

Symbolic IO emerged from stealth earlier this year and intends to be shipping products by late this year/early next. Rob joined Symbolic IO in July of 2016.

What’s computational storage?

It's all about symbolic representation of bits. Symbolic IO has come up with a way to encode bit streams into unique symbols that offer significant savings in memory space, beyond standard data compression techniques.

All that would be just fine if it was at the end of a storage interface, and we would probably just call it a new form of data reduction. But Symbolic IO also incorporates persistent memory (NVDIMMs, in the future 3D XPoint, ReRAM, others) and provides this symbolic data inside a server, directly through its processor data cache, in (decoded) raw data form.

Symbolic IO provides a translation layer between persistent memory and processor cache that decodes the symbolic representation of the data in persistent memory for data reads on the way into data cache and encodes the symbolic representation of the raw data for data writes on the way out of cache to persistent memory.
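Symbolic IO's encoding is proprietary, but a crude dictionary-coder analogue conveys the general shape of such a translation layer: raw chunks are mapped to short symbols on the write path and expanded back to raw data on the read path. Everything below is our own illustration, not Symbolic IO's actual scheme:

```python
def build_symbol_table(chunks):
    """Assign a short integer symbol to each distinct raw chunk seen so far."""
    table = {}
    for chunk in chunks:
        table.setdefault(chunk, len(table))  # new chunk gets the next symbol id
    return table

def encode(table, chunk):
    """Write path: raw data out of processor cache -> symbol into persistent memory."""
    return table[chunk]

def decode(table, symbol):
    """Read path: symbol out of persistent memory -> raw data into processor cache."""
    inverse = {sym: chunk for chunk, sym in table.items()}
    return inverse[symbol]
```

The memory saving comes from storing a small symbol wherever a repeated raw chunk would otherwise sit; the real system does this in a few clock cycles in the cache path, per Rob.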

Rob says that the mathematics are there to show that Symbolic IO’s data reduction is significant and that the decode/encode functionality can be done in a matter of a few clock cycles per cache (line) access on modern (Intel) processors.

The system continually monitors the data it sees to determine what the optimum encoding should be and can change its symbolic table to provide more memory savings for new data written to persistent memory.

All this reminds the GreyBeards of Huffman encoding algorithms for data compression (which one of us helped deploy on a previous [unnamed] storage product). Huffman encoding transformed ASCII (8-bit) characters into variable length bit streams.
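For readers unfamiliar with Huffman encoding, here's a minimal Python implementation of the idea: frequent symbols get short codes, rare symbols get long ones, and no code is a prefix of another:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Map each symbol to a variable-length bit string; frequent symbols get
    shorter codes, and no code is a prefix of another."""
    counts = Counter(text)
    if len(counts) == 1:                       # degenerate single-symbol input
        return {next(iter(counts)): "0"}
    # Heap entries: [frequency, unique tiebreaker, symbol or subtree]
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:                       # merge the two rarest nodes
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], tick, (lo[2], hi[2])])
        tick += 1
    codes = {}
    def walk(node, prefix):                    # left edge = "0", right edge = "1"
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

For "aaaabbc", "a" ends up with a 1-bit code while "b" and "c" get 2-bit codes, which is exactly the transform from fixed 8-bit ASCII to variable length bit streams mentioned above.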

Symbolic IO will offer 3 products:

  • IRIS™ Compute, which provides a persistent memory storage, accessed using something like the Linux pmem library and includes Symbolic StoreModules™ (persistent memory hardware);
  • IRIS Vault, which is an appliance with its own (IRIS) infused Linux (Symbolic’s SymCE™) OS plus Symbolic IO StoreModules, that can run any Linux application without change accessing the persistent memory and offers full data security, next generation snapshot-/clone-like capabilities with BLINK™ full storage backups, and offers enhanced physical security with the removable, IRIS Advanced EYE ASIC; and
  • IRIS Store, which extends the IRIS Vault and IRIS Compute above with more tiers of storage, using Symbolic IO StoreModules as Tier1, PCIe (flash) storage as Tier 2 and external SSD storage as Tier 3 storage.

For more information on Symbolic IO's three products, we would encourage you to read their website (linked above).

The podcast runs long, over 47 minutes, and was wide ranging, discussing some of the history of processor/memory/information technologies. It was very easy to talk with Rob and both Howard and I have known Rob for years, across multiple vendors & organizations.  Listen to the podcast to learn more.

Rob Peglar, Senior VP and CTO, Symbolic IO

Rob Peglar is the Senior Vice President and Chief Technology Officer of Symbolic IO. Rob is a seasoned technology executive with 39 years of data storage, network and compute-related experience, is a published author and is active on many industry boards, providing insight and guidance. He brings a vast knowledge of strategy and industry trends to Symbolic IO. Rob is also on the Board of Directors for the Storage Networking Industry Association (SNIA) and an advisor for the Flash Memory Summit. His role at Symbolic IO will include working with the management team to help drive the future product portfolio, executive-level forecasting and customer/partner interaction from early-stage negotiations through implementation and deployment.

Prior to joining Symbolic IO, Rob was the Vice President, Advanced Storage at Micron Technology, where he led next-generation technology and architecture enablement efforts of Micron’s Storage Business Unit, driving storage solution development with strategic customers and partners. Previously he was the CTO, Americas for EMC where he led the entire CTO functions for the Americas. He has also held senior level positions at Xiotech Corporation, StorageTek and ETA Systems.

Rob’s extensive experience in data management, analytics, high-performance computing, non-volatile memory, distributed cluster architectures, filesystems, I/O performance optimization, cloud storage and replication and archiving, networking, virtualization makes him a sought after industry expert and board member. He was named an EMC Elect in 2014, 2015 and 2016. He was one of 25 senior executives worldwide selected for the CRN ‘Storage Superstars’ Award in 2010.

37: GreyBeards discuss blockchains with Donna Dillenberger, IBM Fellow

In this episode, we talk with Donna Dillenberger (@DonnaExplorer), IBM Fellow on IBM’s work with blockchain technology. Ray was at IBM Edge Conference last month where Donna and others presented on what BlockChain technology could do for financial services and asset provenance. Ray wrote a post on Blockchains at IBM after the conference.

Blockchain is the technology behind Bitcoins, the crypto-currency, but the technology has the potential to revolutionize a lot of other activities.

What does blockchain have to do with storage? Probably not that much, but as it’s an up and coming technology with great prospects, the GreyBeards thought it worthwhile to find out more.

Blockchain explained

Blockchain is essentially a software protocol to establish trust where there is none. At another level, it is a programmatic way to maintain a shared ledger of information, without compromise.

The funny thing about ledgers and record keeping in general is that they are everywhere. From the first record of written language, to double entry accounting, to today's tracking of financial transactions, ledgers do it all.

Blockchain is just an updated, software protocol version of good ledger keeping.

What’s so special about blockchain ledgers is that they can be maintained correctly and consistently even with entities/persons/servers that are trying to cheat the system.

Donna called this the Byzantine Generals’ Problem.

Byzantine generals are tricky

There's a group of Byzantine armies surrounding a castle; some want to attack while others want to retreat, and they would all like to coordinate their actions. But some Byzantine generals are traitors and will selectively tell some generals to attack while telling others to retreat, in an attempt to disrupt any coordinated action.

Generalizing the problem: when there are a number of independent entities, how does one reach consensus such that no one entity can cheat the system? Computer science calls an algorithm that solves this a Byzantine Fault Tolerance (BFT) algorithm.

Algorithmic consensus in blockchain

With the Bitcoin blockchain (Donna calls this blockchain V1.0), consensus is achieved by "proof of work": the solution to a computational problem that is difficult to produce but easy to verify.
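A toy proof-of-work in Python shows the asymmetry: finding a nonce takes brute force, while checking one takes a single hash. Bitcoin's real scheme uses double SHA-256 over block headers and a numeric difficulty target, so treat this as a sketch of the idea only:

```python
import hashlib

def proof_of_work(block_data, difficulty=2):
    """Brute-force a nonce so the block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify_pow(block_data, nonce, difficulty=2):
    """Anyone can check the work with a single hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Raising `difficulty` by one multiplies the expected search effort by 16 (one more hex digit) while verification cost stays constant.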

But proof of work is not the only way to achieve algorithmic consensus for blockchains. Hyperledger, an open source blockchain project, has a pluggable form of consensus. So, different Hyperledger blockchains can support different forms of consensus.

Currently, Hyperledger supports a BFT algorithm, which says that 2/3rds + 1 of the nodes must agree on a hash (digitally signed current transaction data and historical info) value to reach consensus.
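That 2/3rds + 1 quorum rule is easy to sketch. The vote layout below is our own illustration, not Hyperledger's actual API:

```python
from collections import Counter

def bft_consensus(votes, total_nodes):
    """Return the agreed hash if at least 2/3rds + 1 of the nodes voted for it,
    else None. votes: mapping of node id -> the hash that node computed."""
    quorum = (2 * total_nodes) // 3 + 1
    value, count = Counter(votes.values()).most_common(1)[0]
    return value if count >= quorum else None
```

The 2/3 + 1 threshold is what lets the network tolerate up to roughly a third of its nodes being faulty or malicious while still reaching a single answer.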

It turns out that Hyperledger blockchains use a key-value store, RocksDB, to record transaction history and other metadata.

Other current blockchains

At IBM Edge, Donna discussed an IBM supply chain blockchain where suppliers and consumers record sending, receipt and other movement of parts around IBM’s world wide supply chain. It uses a Hyperledger blockchain.

The Everledger blockchain is being used to supply diamond provenance/pedigree validation. Each diamond is encoded with a digital barcode as it's mined, and as the diamond is processed, cut and sent to wholesalers/retailers, each of those transactions is maintained in the blockchain. One can easily validate the origin, clarity, color, carat and cut of a diamond by examining its transaction history on the blockchain.

IBM Blockchain activities

IBM wrote the Hyperledger code from scratch to run on z/Linux, but their financial services customers wanted it open sourced. So, IBM donated it to the Linux Foundation and sponsored the Hyperledger project. It's currently the fastest growing Linux Foundation open source project. You can run Hyperledger apps on any Linux system.

IBM z/Linux has some unique security characteristics useful for financial services and other critical organizations/industries. For instance: secure application signing/verification, data at rest/in-flight encryption with secured keys and crypto code, and a secure cloud where the hardware runs.

Together these software, hardware and data centers have a FIPS 140-2 level 4 certification.

IBM also offers professional services to help customers create and host their own Hyperledger apps. Moreover, IBM is sponsoring Hyperledger hackathons to add features and is sponsoring other Hyperledger community events.

The podcast runs long, over 50 minutes, and introduces blockchain technology, where it can be used, and what IBM is doing with it. Howard and I could have talked with Donna for hours on the topic but we had to stop sometime. Listen to the podcast to learn more.

Donna Dillenberger, IBM Fellow


Donna Dillenberger is an IBM Fellow at IBM's Watson Research Center. She has redesigned many enterprise applications for greater scalability and availability. She has worked on analytic models for the financial, insurance, retail and healthcare industries.

In 2005, she became IBM’s Chief Technology Officer of IT Optimization.   In 2006, she became an Adjunct Professor at Columbia University’s Graduate School of Engineering. She is a Master Inventor and is currently working on cognitive analytics and blockchain.