57: GreyBeards talk midrange storage with Pierluca Chiodelli, VP of Prod. Mgmt. & Cust. Ops., Dell EMC Midrange Storage

Sponsored by:

Dell EMC Midrange Storage

In this episode we talk with Pierluca Chiodelli (@chiodp), Vice President of Product Management & Customer Operations at Dell EMC Midrange Storage. Howard talked with Pierluca at SFD14 and I talked with him at SFD13. He started at EMC as a customer engineer and has worked his way up to VP since then.

This is the second time (Dell) EMC has been on our show (see our EMCWorld2015 summary podcast with Chad Sakac) but this is the first sponsored podcast from Dell EMC. Pierluca seems to have been with (Dell) EMC forever.

You may recall that Dell EMC has two product families in their midrange storage portfolio. Pierluca provides a number of reasons why both continue to be invested in, enhanced and sold on the market today.

Dell EMC Unity and SC product lines

Dell EMC Unity storage is the outgrowth of the unified block and file storage first released in the EMC VNXe series storage systems. Unity continues that tradition of providing both file and block storage in a dense, 2U rack configuration, with dual controllers, high availability, and both AFA and hybrid storage models. The other characteristic of Unity storage is its tight integration with VMware virtualization environments.

Dell EMC SC series storage continues the long tradition of Dell Compellent storage systems, which support block storage and which invented data progression technology. Data progression is storage tiering on steroids, with support for multiple tiers of rotating disk (even within the same drive), flash, and now cloud storage. The SC series is also considered a set-it-and-forget-it storage system that just takes care of itself, without the need for operator/admin tuning or extensive monitoring.

Dell EMC is bringing both of these storage systems together in CloudIQ, its cloud-based storage analytics engine, and plans to have both systems supported under the Unisphere management engine.

Also, Unity storage can tier files to the cloud and copy LUN snapshots to the public cloud using their Cloud Tiering Appliance software. With their UnityVSA Software Defined Storage appliance and VMware vSphere running in AWS, the file and snapshot data can then be accessed in the cloud. SC series storage will have similar capabilities available soon.

At the end of the podcast, Pierluca talks about Dell EMC’s recently introduced Customer Loyalty Programs, which include: Never Worry Data Migrations, Built-in Virtustream Storage Cloud, 4:1 Storage Efficiency Guarantee, All-inclusive Software pricing, 3-year Satisfaction Guarantee, Hardware Investment Protection, and Predictable Support Pricing.

The podcast runs ~27 minutes. Pierluca is a very knowledgeable individual and although he has a beard, it’s not grey (yet). He’s been with EMC storage forever and has a long, extensive history in midrange storage, especially with Dell EMC’s storage product families. It’s been a pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Pierluca Chiodelli, V.P. of Product Management & Customer Operations, Dell EMC Midrange Storage

Pierluca Chiodelli is currently the Vice President of Product Management for Dell EMC’s suite of Mid-Range solutions including, Unity, VNX, and VNXe from heritage EMC storage and Compellent, EqualLogic, and Windows Storage Server from heritage Dell Storage.

Pierluca’s organization is comprised of four teams: Product Strategy, Performance & Competitive Engineering, Solutions, and Core & Strategic Account engineering. The teams are responsible for ensuring Dell EMC’s mid-range solutions enable end users and service providers to transform their operations and deliver information technology as a service.

Pierluca has been with EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP Corporation.

Pierluca holds a degree in Chemical Engineering and a second in Information Technology.


56: GreyBeards talk high performance file storage with Liran Zvibel, CEO & Co-Founder, WekaIO

This month we talk high performance cluster file systems with Liran Zvibel (@liranzvibel), CEO and Co-Founder of WekaIO, a new software defined, scale-out file system. I first heard of WekaIO when it showed up on SPEC sfs2014 with a new SWBUILD benchmark submission. They had a 60 node AWS EC2 cluster running the benchmark and achieved, at the time, the highest SWBUILD number (500) of any solution.

At the moment, WekaIO is targeting the HPC and Media & Entertainment verticals for their solution, which is sold on an annual capacity subscription basis.

By the way, a Wekabyte is 2**100 bytes of storage, or ~1 trillion exabytes (an exabyte being 2**60 bytes).
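
For the arithmetic-inclined, here is a quick sanity check of that conversion (a short Python sketch; the unit names are just for illustration):

```python
# Quick check: how many exabytes (2**60 bytes) fit in a "Wekabyte" (2**100 bytes)?
wekabyte = 2 ** 100          # bytes
exabyte = 2 ** 60            # bytes

ratio = wekabyte // exabyte  # 2**40
print(f"1 Wekabyte = {ratio:,} exabytes (~1 trillion)")
# 1 Wekabyte = 1,099,511,627,776 exabytes (~1 trillion)
```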

High performance file storage

The challenge with HPC file systems is that they need to handle a large number of files and large amounts of storage, with high throughput access to all that data. Where WekaIO comes into the picture is that they do all that plus support high file IOPS. That is, they can open, read or write a high number of relatively small files at impressive speed, with low latency. Such workloads are becoming more popular with AI/machine learning and life sciences/genomic microscopy image processing.

Most file system developers will tell you that they can supply high throughput OR high file IOPS, but doing both is a real challenge. WekaIO is able to do both, while at the same time supporting billions of files per directory and trillions of files in a file system.

WekaIO has support for up to 64K cluster nodes and has tested up to 4000 cluster nodes. WekaIO announced an OEM agreement with HPE last year and is starting to build out bigger clusters.

Media & Entertainment file storage requirements are mostly just high throughput with large (media) file sizes. Here WekaIO has more competition from other cluster file systems, but their ability to support extra-large data repositories with great throughput is another advantage.

WekaIO cluster file system

WekaIO is a software defined storage solution. Whereas many HPC cluster file systems have separate metadata and storage nodes, WekaIO’s cluster nodes combine metadata and storage services. So as one scales capacity (by adding nodes), one not only scales large file throughput (via more IO parallelism) but also scales small file IOPS (via more metadata processing capability). There’s also some secret sauce to their metadata sharding (if that’s the right word) that allows WekaIO to support more metadata activity as the cluster grows.
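
WekaIO hasn’t published the details of its metadata distribution, but the general idea of sharding metadata across all cluster nodes can be pictured with simple hashing. This is a hypothetical illustration, not WekaIO’s actual algorithm; the node list and bucket count are made up:

```python
import hashlib

# Hypothetical cluster membership and shard count -- purely illustrative.
NODES = [f"node-{i}" for i in range(16)]
BUCKETS = 4096

def metadata_owner(path: str) -> str:
    """Pick which cluster node owns the metadata for a given file path."""
    digest = hashlib.sha256(path.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % BUCKETS
    return NODES[bucket % len(NODES)]   # adding nodes spreads buckets over more owners

print(metadata_owner("/projects/build/src/main.c"))
```

The point of the sketch is simply that every node added to the cluster takes on a share of the metadata work, which is why small file IOPS scale along with capacity.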

One secret to WekaIO’s ability to support both high throughput and high file IOPS lies in their performance load balancing across the cluster. Apparently, WekaIO can be configured to constantly monitor all cluster nodes for performance and can balance all file IO activity (data transfers and metadata services) across the cluster, to ensure that no one node is overburdened with IO.

Liran says that performance load balancing was one reason they were so successful with their AWS EC2 SPEC sfs2014 SWBUILD benchmark. One problem with AWS EC2 nodes is a lot of unpredictability in node performance: when running EC2 instances, “noisy neighbors” impact node performance. With WekaIO’s performance load balancing running on AWS EC2 instances, they can just redirect IO activity around slower nodes to faster nodes that can handle the work, in real time.

WekaIO performance load balancing is a configurable option. The alternative is for WekaIO to “cryptographically” spread the workload across all the nodes in a cluster.
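
Here’s one way to picture that kind of load-aware redirection; a minimal sketch assuming each node reports a simple utilization metric. This is my illustration of the general pattern, not WekaIO’s implementation:

```python
import random

def pick_node(node_load: dict[str, float], busy_threshold: float = 0.8) -> str:
    """Route an IO to a node that isn't overburdened.

    node_load maps node name -> current utilization (0.0-1.0), refreshed by the
    cluster's performance monitoring. Busy nodes are skipped; among the rest,
    the least loaded node wins, with random tie-breaking so work spreads out.
    """
    healthy = [n for n, load in node_load.items() if load < busy_threshold]
    candidates = healthy or list(node_load)          # fall back if everything is busy
    return min(candidates, key=lambda n: (node_load[n], random.random()))

cluster = {"n1": 0.95, "n2": 0.40, "n3": 0.42}       # n1 suffers a "noisy neighbor"
print(pick_node(cluster))                            # IO is redirected to n2; n1 is avoided
```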

WekaIO uses a host driver for POSIX access to the cluster. WekaIO’s frontend also natively supports (without the host driver) the NFSv3, SMB 3.1, HDFS and AWS S3 protocols.

WekaIO also offers configurable file system data protection that can span 100s of failure domains (racks), supporting from 4 to 16 data stripes with 2 to 4 parity stripes. Liran said this was erasure-code-like but wouldn’t specifically state what they are doing differently.
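
To get a feel for what those stripe widths mean for capacity overhead, here is some straightforward arithmetic (my back-of-the-envelope, not a statement about WekaIO’s internal layout):

```python
def protection_overhead(data_stripes: int, parity_stripes: int) -> float:
    """Fraction of raw capacity consumed by protection for a D+P stripe layout."""
    return parity_stripes / (data_stripes + parity_stripes)

for d, p in [(4, 2), (8, 2), (16, 4)]:
    print(f"{d}+{p}: {protection_overhead(d, p):.0%} of raw capacity is parity")
# 4+2: 33% of raw capacity is parity
# 8+2: 20% of raw capacity is parity
# 16+4: 20% of raw capacity is parity
```

Wider stripes lower the overhead, at the cost of touching more failure domains per IO.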

They also support both high performance storage and inactive storage, with automated, policy-managed tiering of inactive data to object storage.

WekaIO creates a global name space across the cluster, which can be sub-divided into one to thousands  of file systems.

Snapshotting, cloning & moving work

WekaIO also has file system snapshots (read-only) and clones (read-write), using a redirect-on-write methodology. After the first snapshot/clone, subsequent snapshots/clones are only differential copies.
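
Redirect-on-write is a general technique, so here is a toy illustration of why later snapshots are only differential: new writes go to fresh extents while the snapshot keeps pointing at the old ones. This is the concept only, not WekaIO code:

```python
class RedirectOnWriteVolume:
    """Toy volume: a block map points at immutable extents; writes redirect to new extents."""

    def __init__(self):
        self.block_map = {}             # logical block -> extent data
        self.snapshots = []             # each snapshot is a frozen copy of the map

    def write(self, block: int, data: bytes):
        self.block_map[block] = data    # redirect: old extent untouched, map repointed

    def snapshot(self) -> dict:
        snap = dict(self.block_map)     # freeze current pointers (no data is copied)
        self.snapshots.append(snap)
        return snap

vol = RedirectOnWriteVolume()
vol.write(0, b"v1")
snap = vol.snapshot()
vol.write(0, b"v2")                     # the snapshot still sees b"v1"
print(snap[0], vol.block_map[0])        # b'v1' b'v2'
```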

Another feature Howard and I thought was interesting was their DR-as-a-Service-like capability. That is, using an on-prem WekaIO cluster to clone a file system/directory and tiering that clone to an S3 storage object, then using an AWS EC2 WekaIO cluster to import the object(s) and reconstitute that file system/directory in the cloud. Once on AWS, work can occur in the cloud and the process can be reversed to move any updates back to the on-prem cluster.

This way, if you had work needing more compute than is available on prem, you could move the data and workload to AWS, do the work there and then move the data back on prem again.
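
As a rough outline of that round trip, here is a hypothetical sketch in Python. The Cluster class and its method names are placeholders naming the steps Liran described; they are not WekaIO’s actual API:

```python
class Cluster:
    """Stand-in for a WekaIO cluster; methods just name the steps in the workflow."""
    def __init__(self, name: str):
        self.name = name

    def clone_filesystem(self, fs: str) -> str:
        return f"{fs}-clone"                       # read-write, differential clone

    def tier_to_s3(self, fs: str, bucket: str) -> str:
        return f"s3://{bucket}/{self.name}/{fs}"   # push clone data + metadata to S3

    def import_from_s3(self, obj: str) -> str:
        return obj.rsplit("/", 1)[-1]              # reconstitute the file system locally

def burst_to_cloud(onprem: Cluster, cloud: Cluster, fs: str, bucket: str) -> str:
    clone = onprem.clone_filesystem(fs)            # 1. clone the working set on prem
    obj = onprem.tier_to_s3(clone, bucket)         # 2. tier the clone to an S3 object
    return cloud.import_from_s3(obj)               # 3. import it on the EC2 cluster

onprem, cloud = Cluster("onprem"), Cluster("ec2")
print(burst_to_cloud(onprem, cloud, "genomics-run", "burst-bucket"))
# ...run the compute job in AWS, then reverse the same steps to bring updates home.
```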

WekaIO’s RtOS, network stack, & NVMeoF

WekaIO runs under Linux as a user space application. WekaIO has implemented its own realtime O/S (RtOS) and high performance network stack, both of which run in user space.

With their own network stack, they have also implemented NVMeoF support for (non-RDMA) Ethernet as well as InfiniBand networks. This is probably another reason they can deliver such low latency file IO operations.

The podcast runs ~42 minutes. Liran has been around data storage systems for 20 years and as a result was very knowledgeable and interesting to talk with. Liran almost qualifies as a Greybeard, if not for the fact that he was clean shaven :/. Listen to the podcast to learn more.

Liran Zvibel, CEO and Co-Founder, WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design and development for a portfolio of rich social media applications.


Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for the XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc. in Mathematics and Computer Science from Tel Aviv University.

53: GreyBeards talk MAMR and future disk with Lenny Sharp, Sr. Dir. Product Management, WDC

This month we talk new disk technology with Lenny Sharp, Senior Director of Product Management, responsible for enterprise disk with Western Digital Corp. (WDC). WDC recently announced their future disk offerings will be based on a new disk recording technology, called MAMR or microwave assisted magnetic recording.

Over the last decade or so the disk industry has been investing in HAMR or heat assisted magnetic recording as the next recording innovation. So, MAMR is a significant departure but appears well worth it.

WDC is arguably the leading supplier of HDD and one of the leading SSD suppliers to the industry today. Any departure from industry technology roadmaps for WDC is big news.

WDC is banking on MAMR technology to continue to offer capacity disk (for big data) at prices that are 10X below the price of flash storage for the foreseeable future. If they and the rest of the disk industry can deliver on that promise then there should be a substantial market for capacity disk for the next decade or so.

What’s  MAMR?

HAMR uses lasers to heat up the media spot being recorded. This boost in energy reduces the magnetic threshold of the grains inside the media and allows them to be written, or change state. Once that energy is removed, the data state on the media persists and can be read multiple times without error.

MAMR uses microwaves to add similar energy to the spot being written on the disk media. MAMR doesn’t actually heat up the spot with microwaves, but it does add electro-magnetic energy to the spot being written, which has the same effect of reducing the threshold for writing the media. I wrote a recent blog post about MAMR technology describing it in more detail.

HAMR heats the media spot to 400°C to 700°C, which potentially reduces disk reliability. MAMR, because it doesn’t heat the disk any more than normal operations do, should not impact disk reliability.

Also, MAMR can use pretty much the same disk substrate used in enterprise disks today, and it can be fabricated using much the same manufacturing lines used for PMR (perpendicular magnetic recording) heads today.

Disk densities

MAMR should allow the industry to get to ~4.5Tb/sqin. Current PMR technology will probably max out at 1.0 to 1.3Tb/sqin. PMR density growth has flatlined (6-7% per year) recently, but MAMR should put the disk industry back on a 15% density growth/year trajectory. The new MAMR disks will be sampling for enterprise customers in 2018 and in production by 2019.
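
To put those growth rates in perspective, here is some simple compound-growth arithmetic (my own back-of-the-envelope, not WDC’s roadmap math):

```python
import math

def years_to_reach(start_density: float, target_density: float, annual_growth: float) -> float:
    """Years of compound areal-density growth to go from start to target (Tb/sqin)."""
    return math.log(target_density / start_density) / math.log(1 + annual_growth)

print(f"At 15%/yr: {years_to_reach(1.3, 4.5, 0.15):.1f} years from 1.3 to 4.5 Tb/sqin")
print(f"At  7%/yr: {years_to_reach(1.3, 4.5, 0.07):.1f} years at the flatlined PMR rate")
# Roughly ~9 years vs ~18 years -- which is why the growth rate matters so much.
```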

As for how far MAMR will take disk, WDC said we can expect a 40TB disk device (using multiple platters) by 2025 and Lenny said perhaps double that eventually.

We ended our discussion with Lenny on WDC’s and other disk vendors’ moves beyond the device level. Over time, IT’s use of disks has changed, and the disk vendors seem to believe the best way to address this transition is to look beyond disk/SSD devices and toward manufacturing storage shelves and potentially even systems!? We’ll need to wait and see the dust settle on these moves.

The podcast runs ~45 minutes. Lenny was very knowledgeable about current and future disk technology and seems to have been around the disk industry forever.  He’s got an insider’s view of disk technology, IT’s use of disk and storage market dynamics. Both  Howard and I enjoyed our time with him.   Listen to the podcast to learn more.

Lenny Sharp, Sr. Dir. Product Management, WDC

Lenny Sharp serves as Western Digital’s Sr. Director of Enterprise HDD product line management and planning. He has over 30 years of experience in high technology and storage. Sharp joined HGST in 2009, initially responsible for enterprise SSD.
He has also managed client HDD and spent four years in Japan, working closely with the development team and APAC customers.
Previously, he was responsible for managing systems, software, storage and semiconductors for companies including Dell, Philips, Western Digital and Maxtor (since acquired by Seagate).

48: Greybeards talk object storage with Enrico Signoretti, Head of Product Strategy, OpenIO

In this episode we talk with Enrico Signoretti, Head of Product Strategy for OpenIO, a software defined, object storage startup out of Europe. Enrico is an old friend, having been a member of many Storage Field Day (SFD) events that both Howard and I attended, and we wanted to hear what he is up to nowadays.

OpenIO open source SDS

It turns out that OpenIO is an open source object storage project that’s been around since 2008 and was recently (2015) re-launched as a new storage startup. The open source, community version is still available and OpenIO has download links to try it out. There’s even one for a Raspberry Pi (Raspbian 8, I believe) on their website.

As everyone should recall, object storage is meant for multi-PB data storage environments. Objects are assigned an ID and are stored in containers or buckets. Object storage has a flat hierarchy, unlike file systems which have a multi-tiered hierarchy.

Currently, OpenIO is in a number of customer sites running 15-20PB storage environments. OpenIO supports AWS S3 compatible protocol and OpenStack Swift object storage API.

OpenIO is based on open source, but customer service and usability are built into the product they license to end customers on a usable capacity basis. The minimum license is for 100TB and licenses can go into the multi-PB range. There doesn’t appear to be any charge for enhancements, additional features or additional cluster nodes.

The original code was developed for a big email service provider and supported a massive user community. So it was originally developed for small objects, with fast access and many cluster nodes. Nowadays, it can also support very large objects as well.

OpenIO functionality

Each disk device in the OpenIO cluster is a dedicated service. By setting it up this way, load balancing across the cluster can be done at the disk level. Load balancing in OpenIO is also a dynamic operation. That is, every time an object is created, every node’s current capacity is used to determine the node with the least used capacity, which is then allocated to hold that object. This way there’s no static allocation of object IDs to nodes.
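
Conceptually, that dynamic placement looks something like the sketch below. This is an illustration of the idea, not OpenIO’s code; the disk-service names and stats are invented:

```python
def place_object(disk_services: dict[str, dict]) -> str:
    """Pick the disk service with the most free capacity for a new object.

    disk_services maps a service id (one per disk in the cluster) to its current
    stats, refreshed at placement time -- so there's no static mapping of object
    IDs to nodes or disks.
    """
    return max(disk_services, key=lambda d: disk_services[d]["free_bytes"])

services = {
    "node1/sda": {"free_bytes": 2_000_000_000_000},
    "node1/sdb": {"free_bytes": 3_500_000_000_000},
    "node2/sda": {"free_bytes": 1_200_000_000_000},
}
print(place_object(services))   # node1/sdb -- the least used disk gets the object
```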

Data protection in OpenIO supports erasure coding as well as mirroring (replication). This can be set by policy and can vary depending on object size. For example, if an object is, say, under 100MB it can be replicated 3 times, but if it’s over 100MB it uses erasure coding.
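
That size-based policy could be expressed as simply as the following sketch. The threshold comes from the example above; the erasure-coding widths and field names are hypothetical, not OpenIO’s configuration syntax:

```python
def protection_policy(object_size_bytes: int) -> dict:
    """Choose replication for small objects, erasure coding for large ones."""
    if object_size_bytes < 100 * 1024**2:            # under 100MB
        return {"scheme": "replication", "copies": 3}
    return {"scheme": "erasure_coding", "data": 6, "parity": 3}   # example widths only

print(protection_policy(10 * 1024**2))    # {'scheme': 'replication', 'copies': 3}
print(protection_policy(500 * 1024**2))   # {'scheme': 'erasure_coding', 'data': 6, 'parity': 3}
```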

OpenIO supports hybrid tiering today. This means that an object can move from OpenIO residency to public cloud (AWS S3 or BackBlaze B2) residency over time if the customer wishes. In a future release they will support replication to public cloud as well as tiering.  Many larger customers don’t use tiering because of the expense. Enrico says S3 is cheap as long as you don’t access the data.

OpenIO provides compression of objects. Many object storage customers already compress and encrypt their data, so they may not use this. But for those customers who don’t, compression can often double the amount of effective storage.

Metadata is just another service in the OpenIO cluster. This means it can be assigned to a few nodes or to all nodes, on a configuration basis. OpenIO keeps their metadata on SSDs, which are replicated for data protection, rather than in memory. This allows OpenIO to have a lightweight footprint. They call their solution “serverless”, but what I take from that is that it doesn’t use a lot of server resources to run.

OpenIO offers a number of adjunct services besides pure object storage such as video transcoding or streaming that can be invoked automatically on objects.

They also offer stretched clusters where an OpenIO cluster exists across multiple locations. Objects can have dispersal-like erasure coding for multi-site environments so that if one site goes down you still have access to the data. But Enrico said you have to have a minimum of 3 sites for this.

Enrico mentioned one media & entertainment customer that stored only one version of a video in object storage; when the video was requested in another format, it was automatically transcoded in realtime. The newly transcoded version was kept in a CDN for future availability, until it aged out.

There seems to be a lot of policy and procedural flexibility available with OpenIO but that may just be an artifact of running in Linux.

They currently support Red Hat, Ubuntu and CentOS. They also have a Docker container in beta test for persistent objects, which is expected to ship later this year.

OpenIO hardware requirements

OpenIO has minimal hardware requirements for cluster nodes. The only thing I saw on their website was the need for at least 2GB of RAM on each node, and metadata services seem to require SSDs on multiple nodes.

As discussed above, OpenIO has a uniquely lightweight footprint (which is why it can run on a Raspberry Pi) and only seems to need about 500MB of DRAM and 1 core to run effectively.

OpenIO supports heterogeneous nodes. That is, nodes can have different numbers and types of disks/SSDs, different processor and memory configurations, and different OSs. We talked about the possibility of having a node or some disks go down and operating without them for a month, at the end of which admins could go through and fix or replace them as needed. Enrico also mentioned it was very easy to add and decommission nodes.

OpenIO supports a nano-node, which is just an (ARM) CPU, RAM and a disk drive, sort of like Seagate Kinetic and other vendors’ Open Ethernet drive solutions. These drives have a lightweight processor with a small amount of memory running Linux, accessing an attached disk drive.

Also, OpenIO nodes can offer different services. Some cluster nodes can offer metadata and object storage services and others only object storage services. This seems configurable on a server basis. There’s probably some minimum number of metadata and object services required in a cluster. Enrico mentioned three nodes as a minimum cluster.

The podcast runs ~42 minutes but Enrico is a very knowledgeable, industry expert and a great friend from multiple SFD/TFD events. Howard and I had fun talking with him again. Listen to the podcast to learn more.

Enrico Signoretti, Head of Product Strategy at OpenIO.

In his role as head of product strategy, Enrico is responsible for the planning, design and execution of OpenIO’s product strategy. With the support of his team, he develops product roadmaps from the planning stages through development to ensure their market fit.

Enrico promotes OpenIO products and represents the company and its products at several industry events, conferences and association meetings across different geographies. He actively participates in the company’s sales effort with key accounts, as well as by exploring opportunities for developing new partnerships and innovative channel activities.

Prior to joining OpenIO, Enrico worked as an independent IT analyst, blogger and advisor for six years, serving clients among primary storage vendors, startups and end users in Europe and the US.

Enrico is constantly keeping an eye on how the market evolves and continuously looking for new ideas and innovative solutions.

Enrico is also a great sailor and an unsuccessful fisherman.

47: Greybeards talk Storage as a Service with Lazarus Vekiarides, CTO & Co-Founder ClearSky Data

Sponsored By:

In this episode, we talk with ClearSky Data’s Lazarus Vekiarides, CTO and Co-founder, who we have talked with before (see our podcast from October 2015). ClearSky Data provides a storage-as-a-service offering that uses an on-premises appliance plus point of presence (PoP) storage in the local metro area to hold customer data, and offloads this data to cloud storage. In addition to the on-premises storage-as-a-service, they offer access to customer data from an in-cloud virtual appliance. ClearSky provides the whole storage service, including gigabit metro Ethernet connections from the customer to the PoP, for a simple capacity-based charge every month.

How does it work

Their Edge (on-premises) appliance supports 24 SSDs and can scale up to 4 appliances. Soon a single appliance will be able to hold up to 32TB of data. It’s intended to hold a data center’s entire working set for one week of activity. So essentially it’s a big caching appliance for the local data center.

For ClearSky Data the sole source of truth for customer data lies in the PoP. The PoP is connected to metro-wide fibre that is available in a number of large metropolitan areas. Laz says they have measured sub-500 µsec round trip response times between their PoP equipment and the Edge appliance. The PoP provides the backing store for the Edge appliance: data written to the Edge appliance(s) is written through to PoP storage. This data and its metadata (<1% of LUN size) is flushed to cloud storage, which holds the data indefinitely.
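
The caching relationship between the Edge appliance and the PoP can be pictured with a toy write-through cache. This is my sketch of the general pattern, not ClearSky’s implementation; the class and method names are made up:

```python
class Pop:
    """Stand-in for the metro PoP backing store (the source of truth)."""
    def __init__(self):
        self.store = {}
    def write(self, lba: int, data: bytes):
        self.store[lba] = data
    def read(self, lba: int) -> bytes:
        return self.store[lba]

class EdgeCache:
    """Toy write-through cache: every write lands locally *and* at the PoP."""
    def __init__(self, pop: Pop):
        self.pop = pop
        self.cache = {}                 # the hot working set held on the Edge appliance

    def write(self, lba: int, data: bytes):
        self.cache[lba] = data          # serve subsequent reads locally
        self.pop.write(lba, data)       # written through -- the PoP always has the data

    def read(self, lba: int) -> bytes:
        if lba in self.cache:           # working-set hit: no metro round trip
            return self.cache[lba]
        data = self.pop.read(lba)       # miss: fetch from the PoP (sub-ms away)
        self.cache[lba] = data
        return data

edge = EdgeCache(Pop())
edge.write(0, b"block-0")
print(edge.read(0))                     # b'block-0'
```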

DR through the PoP

If customers have multiple data centers within the same metro area (100km), then they can have a single “logical” array that accesses the same data, say a cluster file system spanning the two data centers. The PoP takes care of copying the metadata to the secondary Edge device and invalidates any data sitting in the secondary device which is no longer valid. In this way customers can have a Recovery Point Objective (RPO) of 0 seconds. That is, any data written to the primary data center is automatically available to the secondary data center, as long as the PoP survives.

But even if you wanted to fail over to a different metro area, the PoP data is offloaded to the cloud continuously, so while you wouldn’t attain an RPO of 0 seconds, it could be awfully short, on the order of a couple of seconds.

Recent enhancements

ClearSky Data has recently enhanced their storage-as-a-service to provide policy management over snapshots. That is, you can establish policies as to how often to take LUN snapshots and how long to retain them in the cloud.
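
A snapshot policy like that typically boils down to a schedule plus a retention period, something of this general shape. The field names and values here are purely illustrative, not ClearSky’s actual policy format:

```python
# Hypothetical per-LUN snapshot policies: how often to snap, how long to keep in the cloud.
snapshot_policies = [
    {"lun": "prod-db-01",  "frequency": "every 4 hours", "cloud_retention": "90 days"},
    {"lun": "dev-scratch", "frequency": "daily",         "cloud_retention": "7 days"},
]
```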

Also, ClearSky Data has added VMware functionality via plugins that allow their storage to know which VMs are writing data or are being backed up to their appliance. This is included in the metadata written for a LUN, which is offloaded to the cloud. Someday soon, when you can have vSphere running on bare metal in a public cloud service, you will be able to run the Cloud Edge (the cloud software version of their Edge appliance), restore the data from your data center directly to the cloud, and have an iSCSI LUN available to EC2 instances running VMware, providing complete cloud DR for a data center.

We talked a bit about our favorite topic, NVMe storage. Laz sees a potential for it to help their Edge appliances, but at the moment the fault-tolerance/high availability isn’t there, and as they are primary storage for data centers, HA is a critical capability.

Pricing and availability

Their product is priced as a service in $0.nn/GB/month, and if you do a 36 month cost analysis, they feel they would come out cheaper than hybrid storage. They currently have PoPs in Boston, New York, Northern Virginia, Dallas, and California. Laz says they believe there are 15 major metropolitan areas across the USA they have targeted for service. What, nothing in Europe or Asia? We would imagine this is merely a question of the number of customers, amount of data and metro infrastructure.

The podcast runs ~24 minutes. Laz has been in the storage industry across a number of companies and has been with a few startups as well. Laz is very knowledgeable about storage, cloud, and metro networking, a good friend and is always a pleasure to talk with.  Listen to the podcast to learn more.

Lazarus Vekiarides, CTO & Co-Founder ClearSky Data

For over 20 years Laz Vekiarides has served in key technical and leadership roles delivering breakthrough technologies to market. Most recently, he served as the Executive Director of Software Engineering for Dell’s EqualLogic Storage Engineering group, where he led the development of numerous storage innovations and established the EqualLogic product line as a leader in host OS and hypervisor integration.

Laz joined Dell from EqualLogic, which was acquired in early 2008, where he was a member of the core leadership team – playing a key role in the company’s early success as a Senior Engineering Manager and Architect for the PS Series SAN arrays and host tools. Prior to EqualLogic, Laz held senior engineering and management positions at several companies including 3COM and Banyan Systems.

An occasional blogger, Laz frequently speaks at industry conferences, particularly in the areas of virtualization and data storage. He holds several storage technology patents, as well as a BSEE from Northeastern University, and an MSCS from the Worcester Polytechnic Institute.