57: GreyBeards talk midrange storage with Pierluca Chiodelli, VP of Prod. Mgmt. & Cust. Ops., Dell EMC Midrange Storage

Sponsored by:

Dell EMC Midrange Storage

In this episode we talk with Pierluca Chiodelli (@chiodp), Vice President of Product Management and Customer Operations at Dell EMC Midrange Storage. Howard talked with Pierluca at SFD14 and I talked with him at SFD13. He started at EMC as a customer engineer and has worked his way up to VP since then.

This is the second time (Dell) EMC has been on our show (see our EMCWorld2015 summary podcast with Chad Sakac) but this is the first sponsored podcast from Dell EMC. Pierluca seems to have been with (Dell) EMC forever.

You may recall that Dell EMC has two product families in their midrange storage portfolio. Pierluca provides a number of reasons why both continue to be invested in, enhanced and sold on the market today.

Dell EMC Unity and SC product lines

Dell EMC Unity storage is the outgrowth of the unified block and file storage first released in the EMC VNXe series storage systems. Unity continues that tradition of providing both file and block storage in a dense, 2U system configuration, with dual controllers, high availability, and both all-flash (AFA) and hybrid storage models. The other characteristic of Unity storage is its tight integration with VMware virtualization environments.

Dell EMC SC series storage continues the long tradition of Dell Compellent storage systems, which support block storage and which invented data progression technology. Data progression is storage tiering on steroids, with support for multiple tiers of rotating disk (even within the same drive), flash, and now cloud storage. The SC series is also considered a set-it-and-forget-it storage system that takes care of itself without the need for operator/admin tuning or extensive monitoring.

Dell EMC is bringing both of these storage systems together under CloudIQ, its cloud-based storage analytics engine, and plans to have both systems supported under the Unisphere management engine.

Unity storage can also tier files to the cloud and copy LUN snapshots to the public cloud using Cloud Tiering Appliance software. With the UnityVSA software defined storage appliance and VMware vSphere running in AWS, that file and snapshot data can then be accessed in the cloud. SC series storage will have similar capabilities available soon.

At the end of the podcast, Pierluca talks about Dell EMC’s recently introduced Customer Loyalty Programs, which include: Never-Worry Data Migrations, Built-in Virtustream Storage Cloud, 4:1 Storage Efficiency Guarantee, All-inclusive Software pricing, 3-year Satisfaction Guarantee, Hardware Investment Protection, and Predictable Support Pricing.

The podcast runs ~27 minutes. Pierluca is a very knowledgeable individual and although he has a beard, it’s not grey (yet). He’s been with EMC storage forever and has a long, extensive history in midrange storage, especially with Dell EMC’s storage product families. It’s been a pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Pierluca Chiodelli, V.P. of Product Management & Customer Operations, Dell EMC Midrange Storage

Pierluca Chiodelli is currently the Vice President of Product Management for Dell EMC’s suite of midrange solutions, including Unity, VNX, and VNXe from heritage EMC storage and Compellent, EqualLogic, and Windows Storage Server from heritage Dell Storage.

Pierluca’s organization is comprised of four teams: Product Strategy, Performance & Competitive Engineering, Solutions, and Core & Strategic Account engineering. The teams are responsible for ensuring Dell EMC’s mid-range solutions enable end users and service providers to transform their operations and deliver information technology as a service.

Pierluca has been with EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP Corporation.

Pierluca holds a degree in Chemical Engineering and a second in Information Technology.

 

53: GreyBeards talk MAMR and future disk with Lenny Sharp, Sr. Dir. Product Management, WDC

This month we talk new disk technology with Lenny Sharp, Senior Director of Product Management, responsible for enterprise disk with Western Digital Corp. (WDC). WDC recently announced their future disk offerings will be based on a new disk recording technology, called MAMR or microwave assisted magnetic recording.

Over the last decade or so the disk industry has been investing in HAMR or heat assisted magnetic recording as the next recording innovation. So, MAMR is a significant departure but appears well worth it.

WDC is arguably the leading supplier of HDD and one of the leading SSD suppliers to the industry today. Any departure from industry technology roadmaps for WDC is big news.

WDC is banking on MAMR technology to continue to offer capacity disk (for big data) at roughly one-tenth the price of flash storage for the foreseeable future. If they and the rest of the disk industry can deliver on that promise, there should be a substantial market for capacity disk for the next decade or so.

What’s MAMR?

HAMR uses lasers to heat the media spot being recorded. This boost in energy reduces the magnetic threshold of the grains inside the media, allowing them to be written or change state. Once that energy is removed, the data state on the media persists and can be read multiple times without error.

MAMR uses microwaves to add similar energy to the spot being written on disk media. MAMR doesn’t actually heat up the spot with microwaves, but it does add electromagnetic energy to the spot being written, which has the same effect of reducing the threshold for writing the media. I wrote a recent blog post about MAMR technology describing it in more detail.

HAMR heats the media spot to somewhere between 400°C and 700°C, which potentially reduces disk reliability. MAMR, because it doesn’t heat the disk any more than normal operations do, should not impact disk reliability.

Also, MAMR can use pretty much the same disk substrate used in enterprise disks today and can be fabricated on much the same manufacturing lines used for today’s PMR (perpendicular magnetic recording) heads.

Disk densities

MAMR should allow the industry to get to ~4.5Tb/sqin. Current PMR technology will probably max out at 1.0 to 1.3Tb/sqin. PMR density growth has flatlined recently (6-7% per year), but MAMR should put the disk industry back on a 15%/year density growth curve. The new MAMR disks will be sampling to enterprise customers in 2018 and in production by 2019.

As for how far MAMR will take disk, WDC said we can expect a 40TB disk device (using multiple platters) by 2025, and Lenny said perhaps double that eventually.
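To give a feel for what those growth rates mean, here is a quick back-of-the-envelope sketch in Python that compounds areal density forward at the PMR and MAMR rates mentioned above. The starting density and the year span are illustrative assumptions on our part, not WDC figures.

```python
# Back-of-the-envelope areal density projection using the growth rates
# discussed above. The starting density and year span are illustrative
# assumptions, not WDC's numbers.

def project_density(start_tb_per_sqin: float, annual_growth: float, years: int) -> float:
    """Compound an areal density (Tb/sq. in.) forward by `years` at `annual_growth`."""
    return start_tb_per_sqin * (1 + annual_growth) ** years

start = 1.1          # assume PMR is near its ~1.0-1.3 Tb/sq. in. ceiling today
years = 2025 - 2019  # from first MAMR production to the projected 40TB drive

print(f"PMR at ~6.5%/yr: {project_density(start, 0.065, years):.2f} Tb/sq.in.")
print(f"MAMR at 15%/yr:  {project_density(start, 0.15, years):.2f} Tb/sq.in.")
```

Even over just six years the 15%/year curve pulls well ahead of the flatlined PMR curve, which is what makes the 40TB-class drive projection plausible.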

We ended our discussion with Lenny on WDC’s and other disk vendors’ moves beyond the device level. Over time, IT’s use of disks has changed, and the disk vendors seem to believe the best way to address this transition is to look beyond disk/SSD devices toward manufacturing storage shelves and potentially even systems!? We’ll need to wait and see the dust settle on these moves.

The podcast runs ~45 minutes. Lenny was very knowledgeable about current and future disk technology and seems to have been around the disk industry forever.  He’s got an insider’s view of disk technology, IT’s use of disk and storage market dynamics. Both  Howard and I enjoyed our time with him.   Listen to the podcast to learn more.

Lenny Sharp, Sr. Dir. Product Management, WDC

Lenny Sharp serves as Western Digital’s Sr. Director of Enterprise HDD product line management and planning. He has over 30 years of experience in high technology and storage. Sharp joined HGST in 2009, initially responsible for enterprise SSD.
He has also managed client HDD and spent four years in Japan, working closely with the development team and APAC customers.
Previously, he was responsible for managing systems, software, storage and semiconductors for companies including Dell, Philips, Western Digital and Maxtor (since acquired by Seagate).

52: GreyBeards talk software defined storage with Kiran Sreenivasamurthy, VP Product Management, Maxta

This month we talk with an old friend from Storage Field Day 7 (videos), Kiran Sreenivasamurthy, VP of Product Management for Maxta. Maxta has a software defined storage solution which currently works on VMware vSphere, Red Hat Virtualization and KVM to supply shared, scale out storage and HCI solutions for enterprises across the world.

Maxta is similar to VMware’s vSAN software defined storage, and its licenses can be transferred from one server to another as you upgrade your data center over time. As software defined storage, Maxta runs on any standard Intel x86 hardware. Indeed, Maxta has one customer running two Super Micro servers and one Cisco server in the same cluster.

Maxta advantages

One item that makes Maxta unique is that all of its storage properties are assignable at VM granularity. That is, replication, deduplication, compression and even block size can all be enabled/set at the VMDK/VM level. This could be useful for environments supporting diverse applications, such as having a 64K block size for Microsoft Exchange and a 4K block size for web servers.
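To make the idea concrete, here is a minimal, purely illustrative Python sketch of what per-VM storage policies like these might look like conceptually. The names, values and helper function are hypothetical; they are not Maxta’s actual API or configuration syntax.

```python
# Purely illustrative sketch of VM-granular storage policies, as discussed
# above. These names and values are hypothetical, not Maxta's actual
# management interface.

vm_policies = {
    "exchange-server-01": {
        "block_size_kb": 64,      # larger blocks for Exchange workloads
        "replication_copies": 3,  # 3-way replication for critical data
        "deduplication": True,
        "compression": True,
    },
    "web-frontend-07": {
        "block_size_kb": 4,       # small blocks for web server IO
        "replication_copies": 2,
        "deduplication": False,
        "compression": True,
    },
}

DEFAULT_POLICY = {"block_size_kb": 4, "replication_copies": 2,
                  "deduplication": True, "compression": True}

def policy_for(vm_name: str) -> dict:
    """Look up the storage policy applied to a VM's VMDKs (hypothetical helper)."""
    return vm_policies.get(vm_name, DEFAULT_POLICY)
```

The point is simply that settings which are usually cluster-wide in other systems can be dialed per VM or per VMDK here.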

Another advantage is their multi-hypervisor support. Maxta’s support for RH Virtualization, VMware and KVM offers the unique ability to migrate storage, and even powered-off VMs, from one hypervisor to another. Maxta’s file system is the same for both VMware and KVM clusters.

Maxta clusters

Their software must be licensed on all servers in a vSphere or KVM cluster with access to Maxta storage. The minimum Maxta cluster size is 3 nodes for 2-way replication and 5 nodes for 3-way replication. Most Maxta systems run on 8 to 12 server node clusters, but Maxta has customer deployments with 20 to 24 nodes.

Maxta supports SSD-only as well as SSD-disk hybrid storage. And the SSDs can be NVMe as well as SATA. In hybrid configurations, Maxta SSDs are used as read and write-back caches for disk storage.

Maxta supports compute-only nodes, compute-storage nodes and witness-only nodes (a node with one storage device). In addition, besides heterogeneous server support, Maxta clusters can have nodes with different storage capacities. Maxta will optimize VM data placement to balance IO activity across heterogeneous nodes.

Maxta provides a vCenter plugin so VMware admins can manage and monitor their storage inside the vSphere environment. Maxta also offers Cloud Connect MX, a cloud-based system that allows management of all your Maxta clusters throughout an enterprise, wherever they reside.

Even HCI, through partners

For customers wanting an HCI solution, Maxta partners can supply pre-tested, HCI appliances or can configure Maxta software with servers at customer data centers. Maxta has done well OEMing their solution, and one significant success has been their OEM deal with Lenovo in China and East Asia, where they sell HCI appliances with Maxta software.

Maxta has also found success with managed service providers (that want to deploy the software on their own hardware) and in SME & ROBO environments. Maxta also seems to be doing very well in Latin America, as well as in the previously mentioned China.

The podcast runs ~42 minutes. Kiran is a knowledgeable individual and has worked with some of the leading storage companies of the last two decades. Listen to the podcast to learn more.

Kiran Sreenivasamurthy, VP Product Management, Maxta

Kiran Sreenivasamurthy is the Vice President of Product Management for Maxta Inc. He has developed and managed storage hardware and software products for more than 20 years with leading storage companies and startups including HP 3PAR, NetApp and Mendocino Software.

Kiran manages all aspects of Maxta’s hyperconvergence product portfolio from inception through revenue.

41: Greybeards talk time shifting storage with Jacob Cherian, VP Product Management and Strategy, Reduxio

In this episode, we talk with Jacob Cherian (@JacCherian), VP of Product Management and Product Strategy at Reduxio. They have produced a unique product that merges some characteristics of CDP storage with the best of today’s hybrid and deduplicating storage into a new primary storage system. We first saw Reduxio at VMworld a couple of years back and this is the first chance we have had to talk with them.

Backdating data

Many of us have had the need to go back to previous versions of files, volumes and storage. But few systems provide an easy way to do this. Reduxio is the first storage system that makes doing so effortless.

Reduxio’s storage system splits apart an IO write operation into data and meta-data. The IO meta-data includes the volume/LUN id, offset into the volume, and data length. The data is chunked, compressed, hashed, and then sent to NVRAM cache. The IO meta-data and a system-wide time stamp, together with the data chunk hash(es), are sent to a separate key-value (K-V) meta-data store.

What Reduxio supplies is an easy way to go back, for any data volume, to any second in its past. Yes, there are limits to how far back one can go with a data volume: for example, keeping every second for the last 8 hours, every hour for the last week, every week for the last month, every month for the last year, etc., all of which can be established at volume configuration time. But all this does is tell Reduxio when to discard old data.

With all this in place, re-establishing a volume to some instant in its past is simply a query to the meta-data K-V store with the appropriate time stamp. The K-V store returns all the hashes and other IO meta-data, in sequence, for all the data chunks in the volume at that point in its past. With that information the system can easily fabricate the volume as it was at that moment.

By keeping the data and the meta-data tag, time stamp and hash(es) information separate, Reduxio can reconstruct the data at any time (to one second granularity) in the past where data is still available to the system.
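Here is a minimal Python sketch of the idea described above: writes are split into content-addressed chunks plus time-stamped metadata records, and a volume can then be rebuilt as of any past second by replaying metadata up to that time. The structures and names are ours for illustration, not Reduxio’s implementation, and retention/garbage collection is omitted.

```python
# Minimal sketch of time-stamped write metadata plus content-addressed chunks,
# as described above. Illustrative only -- not Reduxio's actual design.

import hashlib
import time
import zlib

chunk_store = {}   # hash -> compressed chunk data (deduplicated by content)
metadata_log = []  # append-only list of (timestamp, volume, offset, length, hash)

def write(volume: str, offset: int, data: bytes) -> None:
    """Record a write: store the chunk once, log (when, where, what) separately."""
    digest = hashlib.sha256(data).hexdigest()
    if digest not in chunk_store:                 # dedupe: keep each unique chunk once
        chunk_store[digest] = zlib.compress(data)
    metadata_log.append((time.time(), volume, offset, len(data), digest))

def volume_as_of(volume: str, timestamp: float) -> dict:
    """Rebuild a volume's offset -> data map as it looked at `timestamp`."""
    image = {}
    for ts, vol, offset, length, digest in metadata_log:
        if vol == volume and ts <= timestamp:     # later writes to the same offset win
            image[offset] = zlib.decompress(chunk_store[digest])
    return image
```

A real system would index the metadata store by volume and time rather than scanning a log, but the principle is the same: the point-in-time image is just a metadata query plus chunk lookups.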

Performance

In the past, this sort of time shifting storage functionality was limited to a separate CDP backup appliance. What Reduxio has done is integrate all this functionality into a deduplicating, compressing, auto-tiering primary storage system. So every IO is chunked, deduplicated and compressed, with meta-data, time stamps and hashes split off from the data chunks. There is no IO performance penalty for doing any of this; it’s all part of the normal IO path of the Reduxio primary storage system.

However, there is some garbage collection activity that needs to go on in order to deal with data that’s no longer needed. Reduxio does this mostly in real time, as the data actually expires.

Deduplication, compression and all the other characteristics of the storage system that enable its time shifting capabilities cannot be turned off.

Auto storage tiering

Reduxio optimized their auto-tiering beyond what is normally done in other hybrid storage systems. Data is chunked, moved to cache and ultimately destaged to flash. Hot vs. cold data is analyzed in real time, not sometime later as in other hybrid storage systems. Also, when data is deemed cold and needs to be moved to disk, Reduxio takes another step and analyzes its meta-data K-V store and other information to see what other data was referenced at the same time as this data. This way it can attempt to demote a “group” of data chunks that will likely all be referenced together, so that when one chunk of the group is referenced later, the rest can be promoted to flash/cache at the same time.

Their auto-tiering group algorithm is continuously refined: every time they demote data, and every time they promote data to a faster tier, they record which data is referenced together, so the next time they demote data chunks the group definitions can be further refined.
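The sketch below shows the general co-access idea in Python: record which chunks are referenced close together in time, so they can later be demoted to disk and promoted back to flash as a unit. This is a generic heuristic of our own for illustration, with an assumed time window and threshold, not Reduxio’s actual algorithm.

```python
# Illustrative co-access grouping heuristic, as discussed above.
# Window size and threshold are assumptions; this is not Reduxio's algorithm.

from collections import defaultdict

WINDOW_SECONDS = 5.0
co_access = defaultdict(lambda: defaultdict(int))  # chunk -> {other chunk: count}
recent = []                                        # (timestamp, chunk_id) of recent references

def record_reference(chunk_id: str, timestamp: float) -> None:
    """Count co-references between this chunk and others seen within the window."""
    recent[:] = [(ts, c) for ts, c in recent if timestamp - ts <= WINDOW_SECONDS]
    for _, other in recent:
        co_access[chunk_id][other] += 1
        co_access[other][chunk_id] += 1
    recent.append((timestamp, chunk_id))

def group_for(chunk_id: str, min_count: int = 3) -> list:
    """Chunks frequently referenced alongside `chunk_id`; candidates to move together."""
    return [c for c, n in co_access[chunk_id].items() if n >= min_count]
```

When a cold chunk is demoted, its group members would be demoted with it; when any one of them is read again, the whole group becomes a candidate for promotion.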

Reduxio storage system

Reduxio provides a hybrid (disk-SSD) iSCSI primary storage system that holds 40TB of storage today, and with an average compression-dedupe ratio (over their 2PB of field data) of >4:1, that 40TB should equate to over 160TB of usable data storage. Some of that usable storage holds current volume data and some is used for historical data.

There was a Slack discussion the other week on what to do about ransomware. It seems to me that Reduxio, with its time traveling storage, could be used as effective protection against ransomware.

The podcast runs ~41 minutes. Although snapshots have been around for a long time (one of the GreyBeards worked on a snapshotting storage system back in the early 90s), Reduxio has taken the idea to new heights. Listen to the podcast to learn more.

Jacob Cherian, VP Product Management and Product Strategy, Reduxio

Jacob is responsible for Reduxio’s product vision and strategy. Jacob has overall ownership for defining Reduxio’s product portfolio and roadmap.

Prior to joining Reduxio, Jacob spent 14 years at Dell in the Enterprise Storage Group leading product development and architectural initiatives for host storage, NAS, SAN, RAID and other data center infrastructure. As a member of Dell’s storage architecture council he was responsible for developing Dell’s strategy for unstructured data management, and drove its implementation through organic development efforts and technology acquisitions such as Ocarina Networks and Exanet. In his last role as a Dell expatriate in Israel he oversaw Dell’s FluidFS development.

Jacob started his career in Dell as a development engineer for various SAN, NAS and host-side solutions, then served as the Architect and Technologist for Dell’s MD series of external storage arrays.

Jacob was named a Dell Inventor of the Year in 2005, and holds 30 patents with 20 more pending in the areas of storage and networking. He holds a Bachelor of Science (B.S.) in Electrical Engineering from the Cochin University of Science and Technology, a Master of Science (M.S.) in Computer Science from Oklahoma State University, and a Master of Business Administration (MBA) from the Kellogg School of Management, Northwestern University.

32: GreyBeards deconstruct storage with Brian Biles and Hugo Patterson, CEO and CTO, Datrium

In this, our 32nd episode, we talk with Brian Biles (@BrianBiles), CEO & Co-founder, and Hugo Patterson, CTO & Co-founder, of Datrium, a new storage startup. We like to call it storage deconstructed, a new view of what storage could be based on today’s and future storage technologies. If I had to describe it succinctly, I would say it’s a hybrid between software defined storage, server side flash and external disk storage. We have discussed server side flash before, but this takes it to a whole other level.

Their product, the DVX, consists of Hyperdriver host software and a NetShelf external disk storage unit. The DVX was designed from the ground up with host/server side flash or non-volatile memory as a given, and everything else was built around that. I hesitate to say this, but the DVX NetShelf backend storage is pretty unintelligent, just dual-controller disk storage with a multi-task coordinator. In contrast, the DVX Hyperdriver host software used to access their storage system is pretty smart and is installed as a VIB in vSphere. Customers can assign up to 8TB of host-based, server side flash/non-volatile memory to the storage system per server. The Datrium DVX does the rest.

The Hyperdriver leverages host flash, DRAM and compute cores to act as a caching layer for read and write IO and as a data management engine. Write data is written through straight from the server side flash to the NetShelf storage system, which has non-volatile DRAM (NVRAM) caching. Once write data is in the NetShelf cache, it’s in two places: one on the host’s server side flash and the other in storage NVRAM. Reads are easier to handle, being cached from NetShelf storage in the server side flash. There’s no unique data residing in the hosts.

The Hyperdriver looks like an NFS mount to vSphere, and the DVX uses a proprietary protocol to talk with the backend DVX NetShelf. Datrium supports up to 32 hosts, and you can define the amount of flash, DRAM and host compute allocated to DVX Hyperdriver activity.

But the other interesting part about DVX is that much of the storage management functionality and storage control logic is partitioned between the host Hyperdriver and the NetShelf, with both participating to do what they do best.

For example, disk rebuilds are done in combination with the host Hyperdriver. A DVX RAID rebuild brings data from the backend into host cache, computes the rebuilt data and writes the reconstructed data back out to the NetShelf backend. This way rebuild performance can scale up with the number of hosts active in a cluster.

DVX data are compressed and deduplicated at the host before being sent to the NetShelf. The NetShelf backend also performs global deduplication across data from all hosts. Hashing computations and data compression activities are all done on the host and passed on to the NetShelf. Brian and Hugo were formerly with EMC Data Domain, and know all about data deduplication.
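As a rough illustration of what host-side deduplication and compression look like in general, here is a simplified Python sketch: chunk the data, hash it, and ship only the chunks the backend does not already hold, compressed. This shows the generic technique described above, not Datrium’s actual Hyperdriver/NetShelf protocol; the chunk size and the hash-tracking set are assumptions.

```python
# Simplified sketch of host-side dedupe + compression before shipping data to
# backend storage -- the general technique, not Datrium's protocol.

import hashlib
import zlib

CHUNK_SIZE = 4096
backend_hashes = set()   # stand-in for hashes the backend already holds

def prepare_writes(data: bytes):
    """Chunk, hash and compress on the host; ship only chunks the backend lacks."""
    to_send = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in backend_hashes:
            continue                      # duplicate: a reference suffices, no data sent
        backend_hashes.add(digest)
        to_send.append((digest, zlib.compress(chunk)))
    return to_send
```

Doing this work on the host spreads the hashing and compression load across the servers in the cluster and keeps the backend simple, which matches the division of labor described above.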

At the moment the DVX is missing some storage functionality, but they have an extensive roadmap, with engineering resources to match, and are plugging away at all of it. On the other hand, very few disk storage systems offer deduped/compressed data storage and warm server side caches during vMotion. They also support QoS functionality to limit the amount of host resources consumed by the DVX Hyperdriver software.

The podcast runs ~41 minutes and the episode covers a lot of ground about how the new DVX product came about, how they separated storage functionality between host and backend, and other aspects of DVX storage. Listen to the podcast to learn more.

Brian Biles, Datrium CEO & Co-founder

Prior to Datrium, Brian was Founder and VP of Product Mgmt. at EMC Backup Recovery Systems Division. Prior to that he was Founder, VP of Product Mgmt. and Business Development for Data Domain (acquired by EMC in 2009).

Hugo Patterson, Datrium CTO & Co-founder

Prior to Datrium, Hugo was an EMC Fellow serving as CTO of the EMC Backup Recovery Systems Division, and the Chief Architect and CTO of Data Domain (acquired by EMC in 2009), where he built the first deduplication storage system. Prior to that he was the engineering lead at NetApp, developing SnapVault, the first snap-and-replicate disk-based backup product. Hugo has a Ph.D. from Carnegie Mellon.