SCI SPECsfs2008 NFS throughput per node – Chart of the month

[Chart SCISFS150928-001: NFS throughput operations per second per node]
As SPECsfs2014 still only has (SPECsfs-sourced) reference benchmark results, we have been showing some of our seldom-seen SPECsfs2008 charts in our quarterly SPECsfs performance reviews. The above chart was sent out in last month's Storage Intelligence Newsletter and shows NFS throughput operations per second per node.

In the chart, we only include NFS SPECsfs2008 benchmark results for configurations with more than two nodes, and we divided the maximum NFS throughput operations per second achieved by the node count to compute NFS ops/sec/node.
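As a trivial illustration of how the per-node numbers are derived, here's a minimal sketch; the totals and node counts below are made-up round numbers, not published SPECsfs2008 submissions.

```python
# Illustrative NFS ops/sec/node calculation (hypothetical figures, not
# actual SPECsfs2008 submissions).

results = [
    # (system, maximum NFS throughput ops/sec, node count)
    ("example-8-node-system", 1_200_000, 8),
    ("example-24-node-system", 3_000_000, 24),
    ("example-2-node-system", 250_000, 2),   # excluded: not more than 2 nodes
]

for system, max_ops, nodes in results:
    if nodes > 2:  # the chart only includes configurations with more than 2 nodes
        print(f"{system}: {max_ops / nodes:,.0f} NFS ops/sec/node")
```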

The HDS VSP G1000 with 8 4100 file modules (nodes) and the HDS HUS (VM) with 4 4100 file modules (nodes) came in at #1 and #2 respectively for ops/sec/node, each attaining ~152K NFS throughput operations/sec per node. The #3 competitor was the Huawei OceanStor N8500 Cluster NAS with 24 nodes, which achieved ~128K NFS throughput operations/sec per node. In 4th and 5th place were the EMC VNX VG8/VNX5700 with 5 X-Blades and the Dell Compellent FS8600 with 4 appliances, each of which reached ~124K NFS throughput operations/sec per node. It falls off significantly from there, with two groups at ~83K and ~65K NFS ops/sec per node.

Although not shown above, it's interesting that there are many well-known scale-out NAS solutions in the SPECsfs2008 results with over 50 nodes that do much worse than the top 10 above, at <10K NFS throughput ops/sec/node. Fortunately, most scale-out NAS nodes cost quite a bit less than the systems above.

But for my money, one can be well served with a more sophisticated, enterprise-class NAS system that can do >10X the NFS throughput operations per second per node of a scale-out system. That is, if you don't have to deploy 10PB or more of NAS storage.

More information on SPECsfs2008/SPECsfs2014 performance results as well as our NFS and CIFS/SMB ChampionsCharts™ for file storage systems can be found in our just updated NAS Buying Guide available for purchase on our web site.

Comments?

~~~~

The complete SPECsfs2008 performance report went out in SCI’s September newsletter.  A copy of the report will be posted on our dispatches page sometime this quarter (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free monthly newsletters by just using the signup form above right.

As always, we welcome any suggestions or comments on how to improve our SPECsfs  performance reports or any of our other storage performance analyses.

 

When 64 nodes are not enough

Why would VMware, with years of ESX development behind them, want to develop a whole new virtualization system for Docker and other container frameworks? Especially since they already have compatible Docker support in their current product line.

The main reason I can think of is that a 64-node cluster may be limiting for some container services, and the likelihood of VMware ESX/vSphere supporting 1000s of nodes in a single cluster seems pretty low. So given that more and more cloud services are being deployed across 1000s of nodes using container frameworks, VMware had to do something or say goodbye to a potentially lucrative use case for virtualization.

Yes, over time VMware may indeed extend vSphere clusters to 128 or even 256 nodes, but by then the world will have moved beyond VMware for these services, and where will VMware be then? Left behind.

Photon to the rescue

With the new Photon system VMware has an answer for anyone that needs 1,000 to 10,000 server cluster environments. Now these customers can easily deploy their services on the VMware Photon Platform, which was developed off of ESX but doesn't have ESX's cluster limitations.

Thus, the need for Photon is now. Customers can easily deploy container frameworks that span 1000s of nodes. Of course it won't be as easy to manage as a 64-node vSphere cluster, but it will be easily automated, easier to deploy and easier to scale when necessary, especially beyond 64 nodes.

The claim is that the new Photon will be able to support multiple container frameworks without modification.

So what's stopping you from taking on the Amazons, Googles, and Apples of the world's data centers?

  • Maybe storage, but then there's ScaleIO and the other software defined storage solutions that support local DAS clusters of almost incredible size.
  • Maybe networking. I am not sure just where NSX is in the scheme of things; maybe it's capable of handling 1000s of nodes and maybe not, but networking could be a clear limitation on how many nodes can be deployed in this sort of environment.

Where does this leave vSphere? Probably a continuation of its current trajectory: making VMware clusters easier and more efficient to run, and over time extending any current limitations. So for the moment there are two development streams based off of ESX, each being enhanced for its own market.

How much of ESX survived is an open question, but it's likely that Photon will never see the familiar VMware services and operations that are readily available to vSphere clusters.

Comments?

Photo Credit(s): A first look into Dockerfile system

VMware VSAN 6.0 all-flash & hybrid IO performance at SFD7

We visited with VMware's VSAN team during the last Storage Field Day (SFD7, session available here). The presentation was wide-ranging, but the last two segments dealt with recent changes to VSAN and, at the end, provided some performance results for both a hybrid VSAN and an all-flash VSAN.

Some new features in VSAN 6.0 include:

  • More scalability, up to 64 hosts in a cluster and up to 200 VMs per host
  • New higher performance snapshots & clones
  • Rack awareness for better availability
  • Hardware based checksum for T10 DIF (data integrity feature)
  • Support for blade servers with JBODs
  • All-flash configurations
  • Higher IO performance

Even in the all-flash configuration there are two tiers of storage: a write cache tier and a capacity tier of SSDs. These are configured with two different classes of SSDs (high-endurance/low-capacity and low-endurance/high-capacity).

At the end of the session Christos Karamanolis (@Xtosk), Principal Architect for VSAN, showed us some performance charts on VSAN 6.0 hybrid and all-flash configurations.

Hybrid VSAN performance

On the chart we see two plots showing IOmeter performance as VSAN scales across multiple nodes (hosts): on the left a 100% read workload and on the right a 70% read:30% write workload.

The hybrid VSAN configuration has four 10Krpm disks and one 400GB SSD on each host and ranges from 8 to 64 hosts. The bars on the chart show IOmeter IOPS and the line shows the average response time (or IO latency) for each VSAN host configuration. I am not a big fan of IOmeter, as it's an overly simplified workload, but that's what VMware used.

The results show that in the 100% read case the hybrid, 64-host VSAN 6.0 cluster was able to sustain ~3.8M IOPS, or over 60K IOPS per host. For the mixed 70:30 R:W workload VSAN 6.0 was able to sustain ~992K IOPS, or ~15.5K IOPS per host.

We see a pretty drastic IOPS degradation (~1/4 of the 100% read performance) in the plot on the right, when write activity is added to the mix. But with VSAN's mirrored data protection each VM write represents at least two VSAN backend writes, and at a 70:30 IOmeter R:W mix the ~992K IOPS would be ~694K read and ~298K write frontend IOs; with mirroring, the writes represent ~595K writes to the backend storage.

Then of course there's destage activity (data written to SSD needs to be read off the SSD and written to HDD), which also multiplies internal IO operations for every external write IOP. Let's say all that activity multiplies each external write by 6 (3 for each mirror copy: 1 to the write cache SSD, 1 to read it back and 1 to write to HDD). Multiplying that by the ~298K external write IOPS adds up to a total of ~1.8M write-derived IOPS plus ~0.7M read-derived IOPS, or ~2.5M IOPS in all, which is still far away from the ~3.8M IOPS for 100% read activity. I am probably missing another IO or two in the write path (maybe virtual-to-physical mapping structures need to be updated) or have failed to account for more inter-cluster IO activity in support of the writes.
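Here's that back-of-the-envelope arithmetic as a short sketch, using the approximate figures above; the 6X multiplier per external write (three internal IOs per mirror copy) is the assumption being tested.

```python
# Back-of-the-envelope check of the 70:30 R:W backend IO estimate above.
# All figures are approximations taken from the discussion; the 6x external
# write multiplier (3 internal IOs per mirror copy) is the assumption in play.

frontend_iops = 992_000          # ~992K IOPS at 70:30 R:W on 64 hosts
read_ratio, write_ratio = 0.7, 0.3

frontend_reads = frontend_iops * read_ratio    # ~694K external read IOPS
frontend_writes = frontend_iops * write_ratio  # ~298K external write IOPS

mirror_copies = 2                                  # VSAN mirrored data protection
backend_writes = frontend_writes * mirror_copies   # ~595K backend writes

# Assumed destage cost per mirror copy: 1 write to cache SSD, 1 read back,
# 1 write to HDD  ->  3 internal IOs per copy, 6 per external write.
write_multiplier = mirror_copies * 3
write_derived_iops = frontend_writes * write_multiplier   # ~1.8M
read_derived_iops = frontend_reads                        # ~0.7M

total_backend_iops = write_derived_iops + read_derived_iops
print(f"~{total_backend_iops / 1e6:.1f}M backend IOPS vs ~3.8M at 100% read")
```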

In addition, we see the IO latency was roughly flat across the 100% read workload at ~2.25msec. It got slightly worse over the 70:30 R:W workload, ranging from ~2.5msec at 4 hosts to a little over 3.0msec with 64 hosts. I'm not sure why this got worse, but as hosts are scaled up it could induce more inter-cluster overhead.

[Chart: hybrid VSAN performance with one vs. two disk groups per host]

In the chart to the right, we can see similar performance data for systems with one or two disk groups. The message here is that with two disk groups on a host (2X the disk and SSD resources per host) one can potentially double the performance of the system, to 116K IOPS/host on 100% read and 31K IOPS/host on a 70:30 R:W workload.

All-flash VSAN performance

[Chart: 8-host all-flash VSAN performance, single vs. dual disk groups]

Here we can see performance data for an 8-host, all-flash VSAN configuration. In this case the chart on the left was a single "disk" group and the chart on the right was a dual disk group, all-flash configuration on each of the 8 hosts. The hosts were configured with one 400GB and three 800GB SSDs per disk group.

The various bars on the charts represent different VM working set sizes: 100, 200, 400 & 600GB for the single disk group chart and 100, 200, 400, 800 & 1200GB for the dual disk group configuration. For the dual disk group, the 1200GB working set size is much bigger than the cache tier on each host.

The chart text is a bit confusing: the title of each plot says 70% read but the text under the two plots says 100% read. I must assume these were 70:30 R:W workloads. If we just look at the 8 hosts using a 400GB VM working set size, the all-flash VSAN 6.0 single disk group cluster was able to achieve ~37.5K IOPS/host, and with two disk groups the all-flash VSAN 6.0 was able to achieve ~68.75K IOPS/host at the same working set size. Both roughly double the corresponding hybrid performance.

Response times degrade for both the single and dual disk groups as the working set sizes increase. It's pretty hard to see on the two charts, but latency seems to range from 1.8msec to 2.2msec for the single disk group and 1.8msec to 2.5msec for the dual disk group. The two charts are somewhat misleading because they didn't use exactly the same working set sizes for the two performance runs, but just taking the 100|200|400GB working set sizes, for the single disk group it looks like the latency went from ~1.8msec to ~2.0msec and for the dual disk group from ~1.8msec to ~2.3msec. Why the higher degradation for the dual disk group is anyone's guess.

The other thing that doesn't make much sense is that as you increase the working set size the number of IOPS goes down, and it goes down worse for the dual disk group than the single. Once again taking just the 100|200|400GB working set sizes, this ranges from ~350K IOPS to ~300K IOPS (~15% drop) for the single disk group and ~700K IOPS to ~550K IOPS (~22% drop) for the dual disk group.

Increasing working set sizes should cause additional backend IO, as cache effectiveness should be proportionately less as the working set grows, which I think goes a long way toward explaining the degradation in IOPS as working set size increases. But I would have thought the degradation would have been proportionally similar for both the single and dual disk groups. The fact that the dual disk group dropped about 7 percentage points more seems to indicate more overhead associated with dual disk groups than single disk groups or, perhaps, that they were running up against some host controller limit (a single controller supporting both disk groups).

~~~~

At the time (3 months ago) this was the first world-wide look at all-flash VSAN 6.0 performance. The charts are a bit more visible in the video than in my photos (?) and if you want to just see and hear Christos’s performance discussion check out ~11 minutes into the final video segment.

For more information you can also read these other SFD7 blogger posts on VMware’s session:

 

Storage systems on Agile

[Image: Scrum Framework]

Was talking with Qumulo's CEO Peter Godman earlier this week for another GreyBeards On Storage podcast (not available yet). One thing he said which was hard for me to comprehend was that they were putting out a new storage software release every 2 weeks.

Their customers are updating their storage system software every 2 weeks.

In my past life as a storage systems development director, we would normally have to wait months if not quarters before customers updated their systems to the latest release. As a result, we strived to put out an update at most, once a quarter with a major release every year to 18 months or so.

To me, releasing code to the field every two weeks sounds impossible, or at best very risky. Then I look at my iPhone. I get updates from Twitter, Facebook, LinkedIn and others every other week. And Software-as-a-Service (SaaS) solutions often update their systems frequently, if not every other week. Should storage software be any different?

It turns out Peter and his development team at Qumulo have adopted SaaS engineering methodology, which I believe uses Agile development.

Agile development

As I understand it, Agile development has a few key tenets (see the Wikipedia article for more information):

  • Individuals and interaction – leading to co-located teams, with heavy use of pair programming, and developer generated automated testing, rather than dispersed teams with developers and QA separate but (occasionally) equal.
  • Working software – using working software as a means of validating requirements, showing off changes and modifying code rather than developing reams of documentation.
  • (Continuous) Customer collaboration – using direct engagement with customers over time to understand changes (using working software) rather than one time contracts for specifications of functionality
  • Responding to change – changing direction in real time using immediate customer feedback rather than waiting months or a year or more to change development direction to address customer concerns.

In addition to all the above, Agile development typically uses Scrum for product planning. An Agile Scrum (see picture above & the Wikipedia article) is a weekly (maybe daily) planning meeting, held standing up, to discuss what changes go into the code next.

This is all fine for application development, which involves a few dozen person-years of effort, but storage software development typically takes multiple person-centuries of development & QA effort. In my past life, our storage system regression testing typically took 24 hours or more, and proper QA validation took six months or more of elapsed time with ~5 person-years of effort, not to mention beta testing the system at a few, carefully selected customer sites for 6 weeks or more. How can you compress all this into a few weeks?

Software development on Agile

With Agile, you probably aren’t beta testing a new release for 6 weeks anywhere, anymore. While you may beta test a new storage system for a period of time you can’t afford the time to do this on subsequent release updates anymore.

Next, there is no separate QA. It's just a developer/engineer and their partner. Together they own the code change and its corresponding test suite. When one adds functionality to the system, it's up to the team to add new tests to validate it. Test automation helps streamline the process.

Finally, there's continuous integration into the release code in flight. It used to be that a developer would package up a change, validate it themselves (any way they wanted), regression test it integrated with the current build, and then, if it was deemed important enough, it would be incorporated into the next (daily) build of the software. If it wasn't important, it could wait on the shelf (degenerating over time due to lack of integration) until it came up for inclusion. In contrast, I believe Agile software builds happen hourly or even more often (in real time perhaps), changes are integrated as soon as they pass automated testing, and they are never put on the shelf. Larger changes may still be delayed until a critical mass is available, but if it's properly designed even a major change can be implemented incrementally. Once in the build, automated testing ensures that any new change doesn't impact working functionality.
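As a purely illustrative sketch of developer-owned testing in this model (hypothetical names and code, not Qumulo's), a feature change ships together with the automated tests that validate it, and the build only integrates the change once those tests pass:

```python
# Hypothetical developer-owned change: a small feature plus its own tests.
# A CI system (e.g., running `pytest` on every build) would only integrate
# this change once the tests pass.

def quota_exceeded(used_bytes: int, quota_bytes: int) -> bool:
    """New feature: report whether a directory quota has been exceeded."""
    return used_bytes > quota_bytes

# Tests added by the same developer pair, in the same change.
def test_quota_not_exceeded():
    assert not quota_exceeded(used_bytes=10, quota_bytes=100)

def test_quota_exceeded_past_limit():
    assert quota_exceeded(used_bytes=101, quota_bytes=100)

def test_quota_boundary_is_not_exceeded():
    assert not quota_exceeded(used_bytes=100, quota_bytes=100)
```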

Due to the length of our update cycle, we often had 2 or more releases being validated at any one time. It's unclear to me whether Agile allows for multiple releases in flight, as that just adds to the complexity and any change may have to be tailored for each release it goes into.

Storage on Agile

Vendors are probably not doing this with hardware that’s also undergoing significant change. Trying to do both would seem suicidal.

Hardware modifications often introduce timing alterations that can expose code bugs that had never been seen before. Hardware changes also take a longer time to instantiate (build into electronics). This can be worked around by using hardware simulators but timing is often not the same as the real hardware and it can take 10X to 100X more real-time to execute simple operations. Nonetheless, new hardware typically takes weeks to months to debug and this can be especially hard if the software is changing as well.

Similar to hardware concerns, OS or host storage protocol changes (say from NFSv3 to NFSv4) would take a lot more testing/debugging to get right.

So it helps if the hardware doesn't change, the OS doesn't change and the host IO protocol doesn't change when you're using Agile to develop storage software.

The other thing that we ran into is that over time, regression testing just kept growing and took longer and longer to complete. We made a point of adding regression tests to validate any data-loss fix we ever had in the field. Some of these required manual intervention (such as hardware bugs that needed to be manually injected). This is less of a problem with a new storage system and limited field experience, but over time fixes accumulate and, from a customer perspective, the tests validating them are hard to get rid of.

Hardware on Agile

Although a lot of hardware these days is implemented as ASICs, it can also be implemented via Field Programmable Gate Arrays (FPGAs). Some FPGAs can be configured at runtime (see Wikipedia article on FPGAs), that is in the field, almost on demand.

FPGA programming is done using a hardware description language, an electronic logic coding scheme; it looks very much like software development of hardware logic. Why can't this be incrementally implemented, continuously integrated, automatically validated and released to the field every two weeks?

As discussed above, the major concern is new hardware introducing timing changes that expose hard-to-find (software and hardware) bugs.

And incremental development of original hardware seems akin to changing a building's foundation while you're adding more stories. One needs a critical mass of hardware to get to a base level of functionality for running storage services. This is less of a problem when one is adding or modifying functionality for currently running hardware.

~~~~

I suppose Qumulo's use of Agile shouldn't be much of a surprise. They're a startup, with limited resources, that needs to play catch-up with a lot of functionality to implement. It's risky from my perspective, but you have to take calculated risks if you're going to win the storage game.

You have to give Qumulo credit for developing their storage using Agile and being gutsy enough to take it directly to the field. Let’s hope it continues to work for them.

Photo Credit(s): “Scrum Framework” by Source (WP:NFCC#4). Licensed under Fair use via Wikipedia

DDN unchains Wolfcreek, unleashes IME and updates WOS

It's not every day that we get a vendor claiming 2.5X the top SPC-1 IOPS result (currently held by the Hitachi VSP G1000 all-flash array at ~2M IOPS), as DataDirect Networks (DDN) has claimed for an all-flash version of their new Wolfcreek hyper converged appliance. DDN says their new 4U appliance is capable of 60GB/sec of throughput and over 5M IOPS. (See their press release for more information.) It's unclear whether these are SPC-1 IOPS or not, but I haven't seen any SPC-1 report on it yet.

In addition to the new Wolfcreek appliance, DDN announced their new Infinite Memory Engine™ (IME) flash caching software and WOS® 360 V2.0, an enhanced version of their object storage.

DDN, if you haven't heard of them, has done well in Web 2.0 environments and is a leading supplier to high performance computing (HPC) sites. They have an object storage system (WOS), all-flash block storage (SFA12KXi), hybrid (disk-SSD) block storage (SFA7700X™ & SFA12KX™), a Lustre file appliance (EXAScaler), an IBM GPFS™ NAS appliance (GRIDScaler), a media server appliance (MEDIAScaler™) and software defined storage (Storage Fusion Accelerator [SFX™] flash caching software).

Wolfcreek hyper converged appliance

The converged solution comes in a 4U appliance using dual Intel Haswell microprocessors (with up to 18 cores each) and includes a PCIe fabric which supports 48 NVMe flash cards or 72 SFF SSDs. With the NVMe cards or SSDs, Wolfcreek will be using their new IME software to accelerate IO activity.

Wolfcreek IME software supports either a burst-mode IO caching cluster or a storage cluster of nodes. I assume burst mode is a storage caching layer for backend file system storage. As a storage cluster, I assume this would include some of their scale-out file system software on the nodes. The Wolfcreek cluster interconnect is 40Gb InfiniBand or 10/40Gb Ethernet, and it will also support Intel's Omni-Path. The Wolfcreek appliance is compatible with HPC Lustre and IBM GPFS scale-out file systems.

The Wolfcreek appliance could be a great platform for OpenStack and Hadoop environments, but it also supports virtual machine hypervisors from VMware, Citrix and Microsoft. DDN says the Wolfcreek appliance can scale up to support 100K VMs. I've been told that IME will not be targeted at hypervisors in the first release.

Recall that with a hyper converged appliance, some portion of the system resources (memory and CPU cores) must be devoted to server and VM application activities and the remainder to storage activity. How this is divided up and whether this split is dynamic (changes over time) or static (fixed over time) in the Wolfcreek appliance is not indicated.

The hyper converged field is getting crowded of late what with VMware EVO:RAIL, Nutanix, ScaleComputing, Simplivity and others coming out with solutions. But there aren’t many that support all-flash storage and it seems unusual that hyper converged customers would have need for that much IO performance. But I could be wrong, especially for HPC customers.

There’s much more to hyper convergence than just having storage and compute in the same node. The software that links it all together, manages, monitors and deploys these combined hypervisor, storage and server systems is almost as important as any of the  hardware. There wasn’t much talk about the software that DDN is putting together for Wolfcreek but it’s still early yet. With their roots in HPC, it’s likely that any DDN hyper converged solution will target this market first and broaden out from there.

Infinite Memory Engine (IME)

IME is an outgrowth of DDN's SFX software and seems to act as a caching layer for parallel file system IO. It makes use of NVMe or SSDs for its IO caching and, according to DDN, can offer up to 1000X IO acceleration to storage or 100X file system acceleration.

It does this primarily by providing an application-aware IO caching layer and supplying more effective IO to the file system layer, using PCIe NVMe or SSD flash storage for hardware IO acceleration. According to the information provided by DDN, IME can provide 50GB/sec of bandwidth to a host compute cluster while only doing 4GB/sec of throughput to a backend file system, presumably by better caching of file IO.
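Taking those figures at face value, a quick back-of-the-envelope check of the implied IO reduction:

```python
# Quick check of the bandwidth reduction implied by DDN's IME figures above.
host_bandwidth_gbs = 50     # GB/sec delivered to the host compute cluster
backend_bandwidth_gbs = 4   # GB/sec actually reaching the backend file system

reduction = host_bandwidth_gbs / backend_bandwidth_gbs        # 12.5x
absorbed = 1 - backend_bandwidth_gbs / host_bandwidth_gbs     # ~0.92
print(f"~{reduction:.1f}x reduction; the cache layer absorbs ~{absorbed:.0%} of the IO")
```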

WOS 360 V2.0

The new WOS 360 V2.0 object storage system features include

  • Higher density storage package with 98 8TB SATA drives (768TB raw capacity in 4U), supporting 8B objects each and over 100B objects in a cluster.
  • Native Swift API support for OpenStack environments, which includes gateway or embedded deployments, up to 5000 concurrent users and 5B objects per namespace.
  • Global ObjectAssure data encoding with lower storage overhead (1.5x, a 20% reduction from their previous encoding option) for highly durable and available object storage, using a two-level hierarchical erasure code.
  • Enhanced network security with SSL, which provides end-to-end SSL network data transport between clients and WOS and between WOS storage nodes.
  • Simplified cluster installation, deployment and maintenance: a WOS cluster can now be deployed in minutes, with a simple point-and-click GUI for installation and cluster deployment plus automated, non-disruptive software upgrades.
  • Performance improvements for better video streaming, content distribution and large file transfers, with improved QoS for latency-sensitive applications.

~~~~

There's probably more going on with DDN than covered here, but this hits the highlights. I wish there were more on their Wolfcreek appliance and its various configurations and performance benchmarks, but there's not.

Comments?

 Photo Credits: wolf-63503+1920 by _Liquid

 

Springpath SDS springs forth

Springpath presented at SFD7 and has a new software defined storage (SDS) offering that attempts to provide the richness of enterprise storage in an SDS solution running on commodity hardware. I would encourage you to watch the SFD7 video stream if you want to learn more about them.

HALO software

Their core storage architecture is called HALO, which stands for Hardware Agnostic Log-structured Object store. We have discussed log-structured file systems before: they are essentially sequential files that can be randomly read but are only written sequentially. Springpath HALO was written from scratch, operates in user space and, unlike many SDS solutions, has no dependencies on Linux file systems.
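To illustrate the general log-structured idea (a generic sketch, not Springpath's implementation): writes always append to the tail of the log, while an index maps each object to the offset of its latest version so reads can be random.

```python
# Generic log-structured object store sketch (illustrative only, not HALO code).
# Writes are strictly appended to the log; an in-memory index maps each object
# key to the offset of its most recent version, so reads can be random.

class LogStructuredStore:
    def __init__(self):
        self.log = bytearray()   # the append-only log (one "sequential file")
        self.index = {}          # key -> (offset, length) of the latest version

    def put(self, key: str, value: bytes) -> None:
        offset = len(self.log)
        self.log += value                        # sequential write: append only
        self.index[key] = (offset, len(value))   # supersedes any older version

    def get(self, key: str) -> bytes:
        offset, length = self.index[key]         # random read via the index
        return bytes(self.log[offset:offset + length])

store = LogStructuredStore()
store.put("obj1", b"first version")
store.put("obj1", b"second version")   # old data stays in the log; index moves on
print(store.get("obj1"))               # b'second version'
```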

HALO supports both data deduplication and compression to reduce storage footprint. The other unusual feature  is that they support both blade servers and standalone (rack) servers as storage/compute nodes.

Tiers of storage

Each storage node can optionally have SSDs as a persistent cache, holding write data and metadata log. Storage nodes can also hold disk drives used as a persistent final tier of storage. For blade servers, with limited drive slots, one can configure blades as part of a caching tier by using SSDs or PCIe Flash.

All data is written to the (replicated) caching tier before the host is signaled the operation is complete. Write data is destaged from the caching tier to capacity tier over time, as the caching tier fills up. Data reduction (compression/deduplication) is done at destage.

The caching tier also holds frequently read data as a read cache, and it has a non-persistent segment in server RAM as well.

Write data is distributed across caching nodes via a hashing mechanism which allocates portions of an address space across nodes. But during cache destage, the data can be independently spread and replicated across any capacity node, based on the free space available on each node. This is made possible by their file system metadata.
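A minimal sketch of hash-based address-space placement of this general sort (illustrative only, not Springpath's actual scheme):

```python
# Illustrative hash-based placement of write data across caching nodes
# (a generic sketch of the idea, not Springpath's actual algorithm).
import hashlib

CACHING_NODES = ["cache-node-0", "cache-node-1", "cache-node-2"]  # hypothetical
REPLICAS = 2  # write data is replicated in the caching tier

def placement(address: int, nodes=CACHING_NODES, replicas=REPLICAS):
    """Map a logical address to `replicas` caching nodes via hashing."""
    digest = hashlib.sha256(address.to_bytes(8, "little")).digest()
    start = int.from_bytes(digest[:8], "little") % len(nodes)
    # pick `replicas` distinct nodes starting at the hashed position
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

print(placement(0x1000))   # e.g. ['cache-node-1', 'cache-node-2']
print(placement(0x2000))   # different addresses hash to different node sets
```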

The capacity tier is split into data and metadata partitions. Metadata is also present in the caching tier. Data is deduplicated and compressed at destage, but when read back into cache it is decompressed only. Both capacity tier and caching tier nodes can have different capacities.

HALO has some specific optimizations for flash writing which includes always writing a full SSD/NAND page and using TRIM commands to free up flash pages that are no longer being used.

HALO SDS packaging under different Hypervisors

In Linux & OpenStack environments they run the whole storage stack in Docker containers primarily for image management/deployment, including rolling upgrade management.

In VMware and Hyper-V, Springpath runs as a VM and uses direct-path IO to access the storage. For VMware, Springpath looks like an NFSv3 datastore with VAAI and VVOL support; in Hyper-V, Springpath's SDS is an SMB storage device.

For KVM it's NFS storage, and for OpenStack one can use NFS or their Cinder plugin for volume support.

The nice thing about Springpath is that you can build a cluster of storage nodes that consists of VMware, Hyper-V and bare metal Linux nodes, and it supports all of them. (Does this mean it's multi-protocol, supporting SMB for Hyper-V and NFSv3 for VMware?)

HALO internals

Springpath supports (mostly) file, block (via the Cinder driver) and object access protocols. The backend caching and capacity tiers all use a log-structured file structure internally to stripe data across all the capacity and caching nodes. Data compression works very well with log-structured file systems.

All customer data is supported internally as objects. HALO has a write-log which is spread across their caching tier and a capacity-log which is spread across the capacity tier.

Data is automatically re-balanced across nodes when new nodes are added or old nodes deleted from the cluster.

Data is protected via replication. The system uses a minimum of 3 SSD nodes and 3 drive (capacity) nodes but these can reside on the same servers to be fully operational. However, the replication factor can be configured to be less than 3 if you’re willing to live with the potential loss of data.

Their system supports both snapshots (2**64 per object) and storage clones for test/dev and backup requirements.

Springpath seems to have quite a lot of functionality for an SDS, although native FC & iSCSI support is lacking. For a file-based SDS for hypervisors, it seems to have a lot of the bases covered.

Comments?

Other SFD7 blogger posts on Springpath:

Picture credit(s): Architectural layout (from SpringpathInc.com) 

Free & frictionless and sometimes open sourced

I was at EMCWorld2015 (see my posts on the day 1 news and day 2&3 news) and IBMEdge2015 this past month and there was a lot of news on software defined storage. And it turns out I was at an HP Storage Deep Dive the previous month and they also spoke on the topic.

One key aspect of software defined storage is how customers can consume the product. I’m not talking about licensing but rather about product trial-ability. One approach championed by HP, EMC, IBM and others is to offer their software defined storage in a new way.

Free & frictionless?

Howard Marks (@DeepStorageNet, DeepStorage.net) and I had Chad Sakac (@sakacc, VirtualGeek) on a recent GreyBeards on Storage podcast to discuss the news coming out of EMCWorld2015, and he used the term free & frictionless to describe a new approach to offering emerging-technology, software-only storage solutions.

  • Frictionless refers to not having to encounter a sales person and not having to provide a lot of information to gain access to a software download. Frictionless is a matter of degree: at one extreme all you have is a direct link to a software download and it fires up without any registration requirements whatsoever; and at the other end, you have to fill out a couple of pages about your company and your plans for the product.
  • Free refers to the ability to use the product for free in limited situations (e.g., test & development) but requires a full paid for license and support contracts when used outside these limitations.

For example:

  • Microsoft Windows Storage Server 2012 is available for a free 180-day evaluation and can be downloaded directly. I was able to download it without having to supply any information whatsoever. Unfortunately, I don't have any Windows Server hardware floating around that I could use to see if there were any further registration requirements for it.
  • HP StoreVirtual VSA and StoreOnce VSA are both available for a 60-day, free trial offer, downloadable from the StoreVirtual VSA and StoreOnce VSA websites. StoreVirtual VSA is also available with a free, 1TB/3-year license. You have to register for this last option, and all three options require an HP Passport account to download the software. I didn't have an HP Passport account, so I don't know what else was required.
  • VMware Virtual SAN is available for a 60-day, free trial offer (with no capacity or other use restrictions). You will need a 3-server vSphere cluster, so you also get vSphere and vCenter Server software for free at the download website. You will need a VMware account in order to download the software; beyond that, it's unclear to me what's required.
  • EMC ScaleIO will be available for free, when used for test and development, by the end of this month. There is no limit on the time you can use the product, no limit on the amount of storage that can be defined and no limit on the number of servers it's deployed on. Although the website for EMC's ScaleIO download was up, there was no download link active on the page yet, so I can't say what's required to access the download.
  • IBM Spectrum Accelerate (the software-only version of XIV) is going to be available for a 90-day, free trial offer. As far as I know you can do what you like with it for 90 days. I couldn't find any links on their website for the download, but it was just announced last week at IBMEdge2015.

I couldn't find any information on a Hitachi or a NetApp software defined storage free trial offer, but I could have missed them in my searches.

There are plenty of other software defined storage solutions out there, including Maxta, Nexenta, Springpath, and probably a dozen others, many of which provide free trial offers. Not to mention software defined object/file systems such as Ceph, Gluster, Lustre, etc.

… And sometimes Open Source

One other item of interest out of EMCWorld2015 this month was that ViPR Controller is being open sourced as Project CoprHD (on GitHub). Its source code is scheduled to be loaded around June.

EMC, IBM, HDS, NetApp, VMware and others have all been very active in open source in the past, in areas such as storage support in Linux, OpenStack and other projects. But outside of Pivotal (an EMC Federation company), most of them have not open sourced a real product.

I believe it was Paul Maritz, CEO Pivotal who said on stage, that one reason to open source a project is to help to create an eco-system around it.

EMC open sourced ViPR Controller primarily to add even more development resources to enhance the solution. The other consideration was that customers adopting ViPR Controller in their data centers were concerned about vendor lock-in. Open sourcing ViPR Controller addresses both of these issues.

My understanding is that Project CoprHD will be under the Mozilla Public License (MPL 2.0) as a standalone project. Customers can now add any storage system support they want, and anyone that's afraid of lock-in can download the software and modify it themselves. MPL 2.0 is a (file-level) copyleft style of license, which essentially means anyone can modify the source code, but changes to MPL-licensed files must be made available under the MPL as well.

My understanding is that ViPR Controller will still be available as a commercial product.

~~~~

From my perspective it all seems to make a lot of sense. Customers creating new applications that could use software defined storage want access to the product for free to try it out to see what it can and can’t do.

EMC's taken a lead in offering theirs for free in test and dev situations; we'll see if the others go along with them.

Comments?


EMCWorld2015 Day 2&3 news

Some additional news from EMCWorld2015 this week:

EMC announced directed availability for DSSD, their rack-scale shared flash storage solution using a PCIe3 (switched) fabric with 36 dual-ported flash modules, which hold 512 NAND chips for 144TB of NAND flash storage. On the stage floor they had a demonstration pitting a 40-node Hadoop cluster with DAS against a 15-node Hadoop cluster using the DSSD, both running Hive and working on the same query. By the time the 40-node/DAS solution got to about 2% of query completion, the 15-node/DSSD-based cluster had finished the query without breaking a sweat. They then ran an even more complex query and it took no time at all.

They also simulated a copy of a 4TB file (~32K-128K IOs) from memory to memory and it took literally seconds; then they copied it to SSD, which took considerably longer (I didn't catch how long, but much longer than memory); and then they showed the same file copy to DSSD, and it only took seconds, looking just a smidgen slower than the memory-to-memory copy.

They said the PCIe fabric (no indication what the driver was) provided so much parallelism to the dual-ported flash storage that the system was almost able to complete the 4TB copy at memory-to-memory speeds. It was all pretty impressive, albeit a simulation of the real thing.

EMC indicated that they designed the flash modules themselves and expect to double the capacity of the DSSD to 288TB shortly. They showed the controller board, which had a mezzanine board over part of it; together they carried 12 major chips, which I assume had something to do with the PCIe fabric. They said there were two controllers in the system for high availability and that the 144TB DSSD was deployed in 5U of space.

I can see how this would play well for real-time analytics, high-frequency trading and HPC environments, but there's more to shared storage than just speed. Cost wasn't mentioned and neither was the software driver, but given the ease with which it worked on the Hive query, I can only assume that at some level it must look something like a DAS device but with memory access times… NVMe anyone?

Project CoprHD was announced, which open sources EMC's ViPR Controller software. Many ViPR customers were asking for EMC to open source ViPR Controller; apparently they're listening. Hopefully this will enable some participation from non-EMC storage vendors to allow their storage to be brought under the management of ViPR Controller. I believe the intent is to have an EMC hardened/supported version of Project CoprHD or ViPR Controller coexist with the open source project version, which anyone can download and modify for themselves.

A non-production, downloadable version of ScaleIO was also announced. The test-dev version is a free download with unlimited capacity and full functionality, available for an unlimited time but only for non-production use. Another of the demos onstage this morning was Chad configuring storage across a ScaleIO cluster and using its QoS services to limit the impact of a specific workload. There was talk that ScaleIO was available previously as a free download, but it took a bunch of effort to find and download. They have removed all these prior hindrances and soon, if not today, it will be freely available to anyone. ScaleIO runs on VMware and other hypervisors (maybe bare metal as well). So if you wanted to get your feet wet with software defined storage, this sounds like the perfect opportunity.

ECS is being added to EMC's Data Lake foundation. I'm not exactly sure what all the components of the data lake solution are, but previously the only Data Lake storage was Isilon-based. This week EMC added Elastic Cloud Storage to the picture. Recall that Elastic Cloud Storage comes in either a software-only or hardware appliance deployment and provides object storage.

I missed Project Liberty before, but it's a virtual VNX appliance, a software-only version. I assume this is intended for ROBO deployments or very low-end business environments. Presumably it runs on VMware and has some sort of storage limitations. It seems more and more EMC products are coming out in virtual appliance versions.

Project Falcon was also announced, which is a virtual Data Domain appliance, a software-only solution targeted at ROBO environments and other small enterprises. The intent is to have an on-ramp for Data Domain backup storage. I assume it runs under VMware.

Project Caspian – rolling out CloudScaling orchestration/automation for OpenStack deployments. On the big stage today, Chad and Jeremy demonstrated Project Caspian on a VCE VxRACK, deploying racks of servers under OpenStack control. They were able, within a couple of clicks, to define and deploy OpenStack on bare metal hardware and deploy applications to the OpenStack servers. They had a monitoring screen which showed the OpenStack server activity (transactions) in real time, showed an overcommit of the rack, and showed how easy it was to add a new rack with more servers. All this seemed to take but a few clicks. The intent is not to create another OpenStack distribution but to provide an orchestration/automation/monitoring layer of software on top of OpenStack to “industrialize OpenStack” for enterprise users. Looked pretty impressive to me.

I would have to say the DSSD box was the most impressive. It would have been interesting to get an up-close look at the box with some more specifications, but they didn't have one on the Expo floor.