A new storage benchmark – STAC-M3

A week or so ago I was reading a DDN press release that said they had just published a STAC-M3 benchmark result. I had never heard of STAC or STAC-M3, so I thought I would look them up.

STAC stands for Securities Technology Analysis Center, an organization dedicated to testing system solutions for the financial services industry.

What does STAC-M3 do

It turns out that STAC-M3 simulates processing a time-series (ticker tape, or tick) log of security transactions, identifying the maximum bid, weighted bid and various other statistics for a subset (1%) of securities over various time periods (year, quarter, week, and day) in the dataset. They call this high-speed analytics on time-series data, a frequent use case for systems in the securities sector.
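To make that workload a little more concrete, below is a minimal sketch of the kind of time-series query STAC-M3 exercises: finding the highest bid for a small subset of symbols over a time window. This is purely illustrative; the record layout, function name and data are hypothetical and have nothing to do with STAC's actual query definitions or any kdb/eXtremeDB implementation.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical tick records: (timestamp, symbol, bid_price, bid_size)
ticks = [
    (datetime(2014, 3, 3, 9, 30, 0), "AAA", 101.25, 300),
    (datetime(2014, 3, 3, 9, 30, 1), "BBB", 55.10, 500),
    (datetime(2014, 3, 3, 9, 30, 2), "AAA", 101.40, 200),
    # ... billions more ticks in a real STAC-M3 dataset
]

def year_high_bid(ticks, symbols, start, end):
    """Scan a tick log and return the highest bid per symbol in [start, end).

    Mimics the flavor of a YRHIBID-style query: a read-heavy scan over a
    long time range for a small subset (e.g. 1%) of all securities.
    """
    high = defaultdict(lambda: float("-inf"))
    for ts, sym, bid, _size in ticks:
        if sym in symbols and start <= ts < end:
            if bid > high[sym]:
                high[sym] = bid
    return dict(high)

print(year_high_bid(ticks, {"AAA"}, datetime(2014, 1, 1), datetime(2015, 1, 1)))
```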

There are two versions of the STAC-M3 benchmark: Antuco and Kanaga. The Antuco version uses a statically sized dataset, while Kanaga uses more scalable (varying numbers of client threads) queries over larger datasets. For example, the Antuco version uses 1 or 10 client threads for its test measurements, whereas the Kanaga version scales client threads, in some cases from 1 to 100 threads, and uses more tick data in 3 different sized datasets.

Good, bad and ugly of STAC-M3

Access to STAC-M3 reports requires a login, but it’s free. Some details are only available after you request them, which can be cumbersome.

One nice thing about the STAC-M3 benchmark information is that it provides a decent summary of the amount of computational time involved in all the queries it performs. From a storage perspective, one could use this to analyze just the queries with minimal or light computation, which come closer to a pure storage workload than the computationally heavy queries do.

Another nice thing about STAC-M3 is that in some cases it provides detailed statistical information about the distribution of its metrics, including mean, median, min, max and standard deviation. Unfortunately, the current version of STAC-M3 does not provide these statistics for the computationally light measures that are of primary interest to me as a storage analyst. It would be very nice to see this level of statistical reporting adopted by SPC, SPECsfs or Microsoft ESRP for their benchmark metrics.

STAC-M3 also provides a measure of storage efficiency, or how much storage it took to store the database. This is computed as the reference size of the dataset divided by the amount of storage it took to store the dataset. Although this could be interesting, most of the benchmark reports I examined had similar storage efficiency numbers, either 171% or 172%.
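As a quick worked example of that storage efficiency calculation (the numbers below are made up for illustration, not taken from any STAC-M3 report):

```python
# Hypothetical figures, just to illustrate the storage efficiency formula:
# efficiency = reference dataset size / physical storage consumed
reference_dataset_tb = 3.4   # size of the benchmark's reference dataset
physical_storage_tb = 2.0    # capacity actually used to hold it

storage_efficiency = reference_dataset_tb / physical_storage_tb
print(f"Storage efficiency: {storage_efficiency:.0%}")   # -> 170%
```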

The STAC-M3 benchmark is a full stack test. That is, it measures the time from the point a query is issued to the point the query response is completed. Storage is just one part of this activity; computing the various statistics is another, and the database used to hold the stock tick data is yet another aspect of the test environment. But what is being measured is query elapsed time. SPECsfs2014 has also recently changed over to being a full stack test, so this approach is not that unusual anymore.

The other problem from a storage perspective (but not a STAC perspective) is that there is minimal write activity during any of the benchmark-specific testing. There’s just one query that generates a lot of storage write activity; all the rest are heavy read IO only.

Finally, there’s not a lot of description of the actual storage and server configuration available in the basic report. But this might be further detailed in the Configuration Disclosure report, which you have to request permission to see.

STAC-M3 storage submissions

As it’s a full stack benchmark, we don’t find a lot of storage array submissions. Typical submissions include a database running on some servers with SSD-DAS or, occasionally, a backend storage array. In the case of DDN, it was Kx Systems’ kdb 3.2 database, with Intel Enterprise Edition for Lustre running on 8 Intel-based DDN EXAScaler servers, talking to a DDN SFA12KX-40 storage array. In contrast, a previous submission used an eXtremeDB Financial Edition 6.0 database running on Lucera Compute™ (16 Core SSD V2, SmartOS) nodes.

Looking back over the last couple of years of submissions (STAC-M3 goes back to 2010), for storage arrays, aside from the latest DDN SFA12KX-40, we find an IBM FlashSystem 840, a NetApp EF550, an IBM FlashSystem 810, a couple of other DDN SFA12K-40 storage solutions, and a Violin V3210 & V3220 submission. Most of the storage array submissions were all-flash arrays, but the DDN SFA12KX-40 is a hybrid (flash-disk) appliance.

Some metrics from recent STAC-M3 storage array runs

[Chart: STAC-M3 YRHIBID Maximum MBPS]

In the above chart we show the maximum MBPS achieved in the year-long high bid (YRHIBID) extraction test case. DDN won this one handily, with over 10GB/sec for its extraction result.

[Chart: STAC-M3 YRHIBID Mean Response Time]

However, in the next chart we show the Mean Response (query elapsed) Time (in msec.) for the year-long high bid (YRHIBID) data extraction test. In this case the IBM FlashSystem 810 did much better than the DDN or even the more recent IBM FlashSystem 840.

Unexpectedly, the top MBPS storage didn’t achieve the best mean response time for the YRHIBID query. I would have thought mean response time and maximum MBPS would show the same rankings. It’s not clear to me why this is; the metric is mean response time, not minimum or maximum, although presumably the maximum response time would show the same rankings. Another issue with YRHIBID is that it doesn’t report standard deviation, median or minimum response time results. Having these other metrics might have shed more light on this anomaly, but for now this is all we have.

If anyone knows of other storage (or full stack) level benchmarks for other verticals, please let me know and I would be glad to dig into them to see if they provide some interesting views on storage performance.

Comments?

 Photo Credit(s): Stock market quotes in newspaper by Andreas Poike

Max MBPS and Mean RT Charts (c) 2015 Silverton Consulting, Inc., All Rights Reserved

VMware VSAN 6.0 all-flash & hybrid IO performance at SFD7

We visited with VMware’s VSAN team during the latest Storage Field Day (SFD7, session available here). The presentation was wide-ranging, but the last two segments dealt with recent changes to VSAN and, at the end, provided some performance results for both a hybrid VSAN and an all-flash VSAN.

Some new features in VSAN 6.0 include:

  • More scalability, up to 64 hosts in a cluster and up to 200 VMs per host
  • New higher performance snapshots & clones
  • Rack awareness for better availability
  • Hardware based checksum for T10 DIF (data integrity feature)
  • Support for blade servers with JBODs
  • All-flash configurations
  • Higher IO performance

Even in the all-flash configuration there are two tiers of storage: a write cache tier and a capacity tier of SSDs. These are configured with two different classes of SSDs (high-endurance/low-capacity for the cache and low-endurance/high-capacity for the capacity tier).

At the end of the session Christos Karamanolis (@Xtosk), Principal Architect for VSAN, showed us some performance charts on VSAN 6.0 hybrid and all-flash configurations.

Hybrid VSAN performance

On the chart we see two plots showing IOmeter performance as VSAN scales across multiple nodes (hosts): on the left we have a 100% read workload and on the right a 70% read:30% write workload.

The hybrid VSAN configuration has four 10Krpm disks and one 400GB SSD on each host and ranges from 8 to 64 hosts. The bars on the chart show IOmeter IOPS and the line shows the average response time (or IO latency) for each VSAN host configuration. I am not a big fan of IOmeter, as it’s an overly simplified workload generator, but that’s what VMware used.

The results show that in the 100% read case the hybrid, 64-host VSAN 6.0 cluster was able to sustain ~3.8M IOPS, or over 60K IOPS per host. For the mixed 70:30 R:W workload, VSAN 6.0 was able to sustain ~992K IOPS, or ~15.5K IOPS per host.

We see a pretty drastic IOPS degradation (to ~1/4 of the 100% read performance) in the plot on the right, when write activity is added to the mix. But with VSAN’s mirrored data protection, each VM write represents at least two VSAN backend writes. At a 70:30 IOmeter R:W mix, the ~992K frontend IOPS breaks down into ~694K read IOPS and ~298K write IOPS, and with mirroring those writes represent ~595K writes to backend storage.

Then of course, there’s destage activity (data written to the SSD cache needs to be read off SSD and written to HDD), which further multiplies internal IO operations for every external write IOP. Let’s say all that activity multiplies each external write by 6 (3 for each mirror copy: 1 to the write cache SSD, 1 to read it back and 1 to write to HDD). Multiplying that by the ~298K external write IOPS gives ~1.8M write-derived IOPS plus ~0.7M read IOPS, or ~2.5M IOPS in all, which is still far short of the ~3.8M IOPS for 100% read activity. I am probably missing another IO or two in the write path (maybe virtual-to-physical mapping structures need to be updated) or have failed to account for inter-cluster IO activity in support of the writes.
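Here’s the back-of-the-envelope arithmetic from the paragraph above as a small script, so the assumptions are explicit. The 2x mirroring and the 6x internal multiplier per external write are my guesses about VSAN internals, not published VMware figures.

```python
# Back-of-the-envelope internal IO estimate for the 70:30 R:W hybrid VSAN run.
# The multipliers below are assumptions, not published VMware internals.
total_frontend_iops = 992_000
read_ratio, write_ratio = 0.70, 0.30

frontend_reads = total_frontend_iops * read_ratio      # ~694K
frontend_writes = total_frontend_iops * write_ratio    # ~298K

mirror_copies = 2   # each VM write lands on two hosts
ios_per_copy = 3    # write to SSD cache, read it back, write to HDD (destage)

backend_write_ios = frontend_writes * mirror_copies * ios_per_copy  # ~1.8M
backend_read_ios = frontend_reads                                   # ~0.7M

print(f"Estimated internal IOPS: {backend_write_ios + backend_read_ios:,.0f}")
# ~2.5M, still well short of the ~3.8M IOPS measured at 100% read
```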

In addition, we see that IO latency was roughly flat across the 100% read workload at ~2.25msec and got slightly worse over the 70:30 R:W workload, ranging from ~2.5msec at 4 hosts to a little over 3.0msec with 64 hosts. I’m not sure why this got worse, but as hosts are scaled up there could be more inter-cluster overhead.


In the chart to the right, we can see similar performance data for systems with one or two disk groups. The message here is that with two disk groups on a host (2X the disk and SSD resources per host) one can potentially double the performance of the system, to 116K IOPS/host on 100% read and 31K IOPS/host on a 70:30 R:W workload.

All-flash VSAN performance


Here we can see performance data for an 8-host, all-flash VSAN configuration. In this case the chart on the left was a single “disk” group and the chart on the right was a dual disk group, all-flash configuration on each of the 8 hosts. The hosts were configured with one 400GB and three 800GB SSDs per disk group.

The various bars on the charts represent different VM working set sizes, 100, 200, 400 & 600GB for the single disk group chart and 100, 200, 400, 800 & 1200GB for dual disk group configurations. For the dual disk group, the 1200GB working set size is much bigger than a cache tier on each host.

The chart text is a bit confusing: the title of each plot says 70% read, but the text under the two plots says 100% read. I must assume these were 70:30 R:W workloads. If we just look at the 8 hosts with a 400GB VM working set size, the all-flash VSAN 6.0 single disk group cluster was able to achieve ~37.5K IOPS/host, and with two disk groups the all-flash VSAN 6.0 was able to achieve ~68.75K IOPS/host, both roughly doubling the comparable hybrid performance.

Response times degrade for both the single and dual disk groups as the working set size increases. It’s pretty hard to see on the two charts, but latency seems to range from 1.8msec to 2.2msec for the single disk group and 1.8msec to 2.5msec for the dual disk group. The two charts are somewhat misleading because they didn’t use the exact same working set sizes for the two performance runs, but just taking the 100|200|400GB working set sizes, for the single disk group it looks like the latency went from ~1.8msec to ~2.0msec and for the dual disk group from ~1.8msec to ~2.3msec. Why the degradation is higher for the dual disk group is anyone’s guess.

The other thing that doesn’t make much sense is that as you increase the working set size, the number of IOPS goes down, and it goes down more for the dual disk group than for the single. Once again taking just the 100|200|400GB working set sizes, this ranges from ~350K IOPS down to ~300K IOPS (~15% drop) for the single disk group and from ~700K IOPS down to ~550K IOPS (~22% drop) for the dual disk group.
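Just to show where those percentages come from (the IOPS figures are my rough readings off the charts, not published numbers):

```python
# Percentage IOPS drop going from a 100GB to a 400GB working set (chart estimates).
single_dg = (350_000, 300_000)   # ~IOPS at 100GB and 400GB, single disk group
dual_dg = (700_000, 550_000)     # ~IOPS at 100GB and 400GB, dual disk group

for name, (small_ws, large_ws) in [("single", single_dg), ("dual", dual_dg)]:
    drop = (small_ws - large_ws) / small_ws
    print(f"{name} disk group IOPS drop: {drop:.0%}")
# single ~14%, dual ~21% -- roughly the ~15% and ~22% cited above,
# a ~7 percentage point difference between the two configurations
```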

Increasing the working set size should cause additional backend IO, as cache effectiveness should be proportionately lower as the working set grows, which I think goes a long way toward explaining the degradation in IOPS. But I would have thought the degradation would have been proportionally similar for both the single and dual disk groups. The fact that the dual disk group did about 7 percentage points worse seems to indicate more overhead associated with dual disk groups than single disk groups, or perhaps they were running up against some host controller limit (a single controller supporting both disk groups).

~~~~

At the time (3 months ago) this was the first worldwide look at all-flash VSAN 6.0 performance. The charts are a bit more visible in the video than in my photos, and if you want to just see and hear Christos’s performance discussion, check out ~11 minutes into the final video segment.

For more information you can also read these other SFD7 blogger posts on VMware’s session:

 

Springpath SDS springs forth

Springpath presented at SFD7 and has a new Software Defined Storage (SDS) solution that attempts to provide the richness of enterprise storage while running on commodity hardware. I would encourage you to watch the SFD7 video stream if you want to learn more about them.

HALO software

Their core storage architecture is called HALO, which stands for Hardware Agnostic Log-structured Object store. We have discussed log-structured file systems before; essentially, they are sequentially written structures that can still be read randomly. Springpath’s HALO was written from scratch, operates in user space and, unlike many SDS solutions, has no dependencies on Linux file systems.
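For readers unfamiliar with the log-structured idea, here’s a toy sketch of the general pattern: writes always append to the tail of a log while an index allows random reads. This illustrates the concept only; it is not Springpath’s HALO code, and the class and method names are made up.

```python
class TinyLogStore:
    """Toy log-structured object store: append-only writes, random reads.

    Purely illustrative of the general pattern HALO is described as using;
    it has nothing to do with Springpath's actual implementation.
    """
    def __init__(self):
        self.log = bytearray()   # sequentially written log
        self.index = {}          # object id -> (offset, length)

    def put(self, obj_id, data: bytes):
        offset = len(self.log)   # new data always goes at the tail
        self.log.extend(data)
        self.index[obj_id] = (offset, len(data))  # overwrite = new version

    def get(self, obj_id) -> bytes:
        offset, length = self.index[obj_id]       # random read via the index
        return bytes(self.log[offset:offset + length])

store = TinyLogStore()
store.put("vm-disk-block-42", b"hello")
store.put("vm-disk-block-42", b"hello, world")   # old copy becomes garbage
print(store.get("vm-disk-block-42"))
```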

HALO supports both data deduplication and compression to reduce storage footprint. The other unusual feature  is that they support both blade servers and standalone (rack) servers as storage/compute nodes.

Tiers of storage

Each storage node can optionally have SSDs as a persistent cache, holding write data and metadata log. Storage nodes can also hold disk drives used as a persistent final tier of storage. For blade servers, with limited drive slots, one can configure blades as part of a caching tier by using SSDs or PCIe Flash.

All data is written to the (replicated) caching tier before the host is signaled the operation is complete. Write data is destaged from the caching tier to capacity tier over time, as the caching tier fills up. Data reduction (compression/deduplication) is done at destage.

The caching tier also holds frequently read data, and it includes a non-persistent segment in server RAM.

Write data is distributed across caching nodes via a hashing mechanism that allocates portions of an address space across nodes. But during cache destage, the data can be independently spread and replicated across any capacity node, based on the free space available on each node. This is made possible by their file system metadata information.
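A minimal sketch of that kind of hash-based placement is below. Again, this is a generic illustration under my own assumptions (node names, range size, hash choice), not HALO’s actual algorithm.

```python
import hashlib

caching_nodes = ["cache-node-0", "cache-node-1", "cache-node-2"]  # hypothetical cluster

def caching_node_for(address: int, range_size: int = 1 << 20) -> str:
    """Map a write address to a caching node by hashing its address range.

    Generic illustration of hash-based placement, not Springpath's scheme.
    """
    address_range = address // range_size   # carve the address space into ranges
    digest = hashlib.sha1(str(address_range).encode()).digest()
    return caching_nodes[int.from_bytes(digest[:4], "big") % len(caching_nodes)]

print(caching_node_for(0x12345678))
```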

The capacity tier is split up into data and metadata partitions. Metadata is also present in the caching tier. Data is deduplicated and compressed at destage, but when read back into cache it is only decompressed. Both capacity tier and caching tier nodes can have different capacities.

HALO has some specific optimizations for flash writing, which include always writing a full SSD/NAND page and using TRIM commands to free up flash pages that are no longer being used.

HALO SDS packaging under different Hypervisors

In Linux & OpenStack environments they run the whole storage stack in Docker containers primarily for image management/deployment, including rolling upgrade management.

In VMware and Hyper-V environments, Springpath runs as a VM and uses direct path IO to access the storage. For VMware, Springpath looks like an NFSv3 datastore with VAAI and VVOL support. In Hyper-V, Springpath’s SDS is an SMB storage device.

For KVM it’s NFS storage; for OpenStack one can use NFS or their Cinder plugin for volume support.

The nice thing about Springpath is that you can build a single cluster of storage nodes consisting of VMware, Hyper-V and bare-metal Linux nodes that supports all of them. (Does this mean it’s multi-protocol, supporting SMB for Hyper-V and NFSv3 for VMware?)

HALO internals

Springpath supports (mostly) file, block (via the Cinder driver) and object access protocols. The backend caching and capacity tiers both use a log-structured layout internally to stripe data across the capacity and caching nodes. Data compression works very well with log-structured file systems.

All customer data is supported internally as objects. HALO has a write-log which is spread across their caching tier and a capacity-log which is spread across the capacity tier.

Data is automatically re-balanced across nodes when new nodes are added or old nodes deleted from the cluster.

Data is protected via replication. The system needs a minimum of 3 SSD (caching) nodes and 3 drive (capacity) nodes to be fully operational, but these can reside on the same servers. However, the replication factor can be configured to less than 3 if you’re willing to live with a greater potential for data loss.

Their system supports both snapshots (up to 2^64 per object) and storage clones for test/dev and backup requirements.

Springpath seems to have quite a lot of functionality for an SDS, although native FC & iSCSI support is lacking. For a file-based SDS for hypervisors, it seems to have a lot of the bases covered.

Comments?

Other SFD7 blogger posts on Springpath:

Picture credit(s): Architectural layout (from SpringpathInc.com) 

The wizardry of StorMagic

We talked with Hans O’Sullivan, CEO, and Chris Farey, CTO, of StorMagic during Storage Field Day 6 (SFD6, view videos of their session) a couple of weeks back, and they presented some interesting technology, at least to me.

Their SvSAN software-defined storage solution has been around since 2009. It was originally intended to provide shared storage for SMB environments, but in 2011 it was refocused on remote office/branch office (ROBO) deployments for larger customers.

What makes SvSAN such an appealing solution is that it’s software-only and can use as few as 2 servers to provide a highly available, shared block storage cluster, all of which can be managed from one central site. SvSAN installs as a virtual storage appliance that runs as a virtual machine under a hypervisor, and you can assign it to manage as much or as little of the direct-attached or SAN-attached storage available to the server as you like.

SvSAN customers

As of last count they had 30K licenses, in 64 countries, across 6 continents, were managing over 57PB of data, and had one (large retail) customer with over 2000 sites managed from one central location. They had pictures of one customer in their presentation which, judging by the color scheme, made it obvious who it was, but they couldn’t actually say.

One customer with 1000s of sites had prior storage that was causing 100s of store outages a year, each of which took an average of 6 hours to recover and cost them about $6K. The cost could be much larger, and the outage much longer, if there was data loss. They obviously needed a much more reliable storage system and wanted to reduce their cost of maintenance. Turning to SvSAN saved them lots of money and time and eliminated their maintenance downtime.

Their largest vertical is retail, but StorMagic does well in most ROBO environments, which have limited IT staff and limited data requirements. Other verticals they mentioned included defense (they specifically mentioned the German Army, which has a parachute-deployable, all-SSD SvSAN storage/data center), manufacturing (with small remote factories), government (with numerous sites around the world), financial services (banks with many remote offices), restaurant and hotel chains, large energy companies, wind farms, etc. Hans mentioned one large wind farm operator that said their “field” data centers were so remote it took 6 days to get someone out to them to solve a problem, yet they needed 600GB of shared storage to manage the complex.

SvSAN architecture

SvSAN uses synchronous mirroring between pairs of servers so that the data is constantly available on both servers of a pair. Presumably the storage available to the SvSAN VSAs running in the two servers has to be similar in capacity and performance.

An SvSAN cluster can grow by adding pairs of servers or by adding storage to an already present SvSAN cluster. One can have as many pairs of servers in an SvSAN local cluster as you want (probably some maximum here but I can’t recall what they said). The cluster interconnect is 1GbE or 10GbE. Most (~90%) of SvSAN implementations are under 2TB of data but their largest single clustered configuration is 200TB.

SvSAN supplies iSCSI storage services and runs inside a Linux virtual machine, and it can support both bare metal and virtualized server environments.

All the storage within a server that is assigned to SvSAN is pooled together and carved out as iSCSI virtual disks. SvSAN can make use of RAID controllers with JBODs, DAS or even SAN storage; anything that is accessible to the virtual machine can be configured as part of SvSAN’s storage pool.

Servers that are accessing the shared iSCSI storage may access either of the servers in a synchronously mirrored pair. As it’s a synchronous mirror, any write to one of the servers is automatically mirrored to the other side before an acknowledgement is sent back to the host. Synchronous mirroring depends on multi-pathing software at the host.

As in any solution that supports active-active read-write access, there is a need for a Quorum service hosted somewhere in the environment, hopefully at a location distinct from where a problem could potentially occur, though it doesn’t have to be. In StorMagic’s case this can reside on any physical server, even in the same environment. The Quorum service is there to decide which of the two copies is “more” current when there is some sort of split-brain scenario, that is, when the two servers in a synchronized pair lose communication with one another. At that point the Quorum service declares one server dead and the other active, and from then on all IO activity must be done through the active SvSAN server. The Quorum service can run on Linux or Windows, either remotely or locally. Any configuration changes need to be communicated to the Quorum service.
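Here’s a highly simplified sketch of the arbitration role a quorum service plays in a two-node mirror. It’s a generic illustration of the split-brain decision, not StorMagic’s actual protocol, and the function and return strings are invented for the example.

```python
def resolve_split_brain(server_a_reachable: bool, server_b_reachable: bool) -> str:
    """Toy quorum decision for a two-node synchronous mirror.

    Generic illustration of the arbitration role a quorum service plays;
    not StorMagic's actual algorithm.
    """
    if server_a_reachable and server_b_reachable:
        return "both active (normal mirrored operation)"
    if server_a_reachable:
        return "server A active, server B fenced (stops serving IO)"
    if server_b_reachable:
        return "server B active, server A fenced (stops serving IO)"
    return "no quorum: suspend IO to avoid diverging copies"

# The quorum service, ideally running at a third location, applies this logic
# when the mirrored pair loses its inter-server link:
print(resolve_split_brain(server_a_reachable=True, server_b_reachable=False))
```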

They have a bare metal recovery solution. Specifically, when one server fails, customers can ship out another server with a matching configuration to be installed at the remote site. When the new server comes up, it auto-configures its storage and networking by using the currently active server in the environment and starts a resynchronization process with that server. All of which means the site can be brought back to a high availability mode with almost no IT support, other than what it takes to power the server and connect some networking ports. This was made for ROBO!

Code upgrades can be done by taking one server of a pair down, loading the new code and resyncing its data. Once the resync completes, you can do the same with the other server.

They support a fast-resync service for when one server of a pair goes down for any reason. At that point the active server starts tracking any changes in a journal, and when the other server comes back up it just resends the changes that occurred while its partner was down.
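Below is a minimal sketch of that journal-based fast-resync idea: while its partner is down, the surviving node records which blocks changed, and on reconnect it ships only those blocks. This is a generic illustration of the technique, not StorMagic’s implementation; the class and method names are hypothetical.

```python
class MirrorNode:
    """Toy model of journal-based fast resynchronization for a mirrored pair.

    Illustrates the general technique only; not StorMagic's code.
    """
    def __init__(self):
        self.blocks = {}    # block number -> data
        self.dirty = set()  # blocks changed while the partner was down

    def write(self, block_no, data, partner_up: bool):
        self.blocks[block_no] = data
        if not partner_up:
            self.dirty.add(block_no)    # journal the change for later resync

    def fast_resync(self, partner: "MirrorNode"):
        for block_no in self.dirty:     # resend only what changed
            partner.blocks[block_no] = self.blocks[block_no]
        self.dirty.clear()

active, recovered = MirrorNode(), MirrorNode()
active.write(7, b"new data", partner_up=False)  # partner offline: change is journaled
active.fast_resync(recovered)                   # partner back: ship only block 7
```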

SvSAN has support for SSDs and just released an SSD write-back caching feature to help improve disk write speeds. They also support an all-SSD configuration for harsh environments.

StorMagic also offers an option for non-mirrored disk but I can’t imagine why anyone would use it.

They can dynamically move one mirrored iSCSI volume from one pair of servers to another, without disrupting application activity.

The minimum hardware configuration requires a single-core server, but SvSAN can use as many cores as you can give it. StorMagic commented that a single core maxes out at 50-60K IOPS, but you can always just add more cores to the solution.

The SvSAN cluster can be managed in VMware vCenter or Microsoft System Center (MSSC) and it maintains statistics which help monitor the storage clusters in the remote office environments.

They also have a scripted recipe to help bring up multiple duplicate remote sites where local staff only need to plug in minimal networking and some storage information and they are ready to go.

SvSAN pricing and other information

Their product lists at $2K for 2 servers and 2TB of data storage, and they have standard license options for 4, 8, and 16TB across a server pair, after which it’s unlimited amounts of storage for the same price of $10K. This doesn’t include hardware or the physical data storage; it’s just for the SvSAN software and management.

They offer a free 60 day evaluation license on their website (see link above).

There was a lot of Twitter traffic and onsite discussion as to how this compared to HP’s StoreVirtual VSA solution. The contention was that StoreVirtual required more nodes, but there was no one from HP there to dispute this.

I didn’t hear much about snapshots, thin provisioning, remote replication, deduplication or encryption. But for ROBO environments that are typically under 2TB, most of these features are probably overkill, especially when there’s no permanent onsite IT staff to support the local storage environment.

~~~~

I had talked with StorMagic previously at one or more of the storage/IT conferences we have attended in the past and had relegated them to SMB storage solutions. But after talking with them at SFD6, their solution became much clearer. All of the sophisticated functionality they have developed, together with their software-only approach, seems to be a very appealing solution for these ROBO environments.

 

 

 

VMworld 2014 projects Marvin, Mystic, and more

[This post was updated after being published to delete NDA material – sorry, RL] I attended VMworld 2014 in San Francisco this past week. Lots of news, mostly about vSphere 6 beta functionality and how the new AirWatch acquisition will be rolled into VMware’s End-User Computing framework.

vSphere 6.0 beta

Virtual Volumes (VVOLs) is in beta and extends VMware’s software-defined storage model to external NAS and SAN storage.  VVOLs transforms SAN/NAS  storage into VM-centric devices by making the virtual disk a native representation of the VM at the array level, and enables app-centric, policy-based automation of SAN and NAS based storage services, somewhat similar to the capabilities used in a more limited fashion by Virtual SAN today.

Storage system features have proliferated and differentiated over time, and being able to specify and register all of these functional nuances with VMware’s storage policy based management (SPBM) service is a significant undertaking in and of itself. I guess we will have to wait until it comes out of beta to see more. NetApp had a functioning VVOL storage implementation on the show floor.

Virtual SAN 1.0/5.5 currently has 300+ customers with 30+ ready storage nodes from all major vendors. There are reference architecture documents and system bundles available.

Current enhancements outside of vSphere 6 beta

vRealize Suite extends automation and monitoring support for a broad mix of VMware and non VMware infrastructure and services including OpenStack, Amazon Web Services, Azure, Hyper-V, KVM, NSX, VSAN and vCloud Air (formerly vCloud Hybrid Services), as well as vSphere.

New VMware functionality being released:

  • vCenter Site Recovery Manager (SRM) 5.8 – provides self-service DR through vCloud Automation Center (vRealize Automation) integration, with up to 5000 protected VMs per vCenter and up to 2000 concurrent VM recoveries. The SRM UI will move to vSphere’s Web Client.
  • vSphere Data Protection Advanced 5.8 – provides configurable parallel backups (up to 64 streams) to reduce backup duration/shorten backup windows, access and restore backups from anywhere, and provides support for Microsoft Exchange DAGs, and SQL Clusters, as well as Linux LVMs and EXT4 file systems.

VMware NSX 6.1 (in beta) has 150+ customers and provides micro-segmentation security, which essentially supports fine-grained firewall definitions almost at the VM level.

vCloud Hybrid Cloud Services is being rebranded as vCloud Air, and is currently available globally through data centers in the US, UK, and Japan. vCloud Air is part of the vCloud Air Network, an ecosystem of over 3,800 service providers with presence in 100+ countries that are based on common VMware technology.  VMware also announced a number of new partnerships to support development of mobile applications on vCloud Air.  Some additional functionality for vCloud Air that was announced at VMworld includes:

  • vCloud Air Virtual Private Cloud On Demand beta program supports instant, on demand consumption model for vCloud services based on a pay as you go model.
  • VMware vCloud Air Object Storage based on EMC ViPR is in beta and will be coming out shortly.
  • DevOps/continuous integration as a service, vRealize Air automation as a service, and DB as a service (MySQL/SQL server) will also be coming out soon

End-User Computing: VMware is integrating AirWatch‘s (another acquisition) enterprise mobility management solutions for mobile device management/mobile security/content collaboration (Secure Content Locker) with their current Horizon suite for virtual desktop/laptop support. VMware End User Computing now supports desktop/laptop virtualization, mobile device management and security, and content security and file collaboration. Also VMware’s recent CloudVolumes acquisition supports a light weight desktop/laptop app deployment solution for Horizon environments. AirWatch already has a similar solution for mobile.

OpenStack, Containers and other collaborations

VMware is starting to expand their footprint into other arenas, with new support, collaboration and joint ventures.

A new VMware OpenStack Distribution is in beta now, to be available shortly, which supports VMware as underlying infrastructure for OpenStack applications that use OpenStack APIs. VMware has become a contributor to OpenStack open source. There are other OpenStack distributions that support VMware infrastructure available from HP, Canonical, Mirantis and one other company I neglected to write down.

VMware has started a joint initiative with Docker and Pivotal to broaden support for Linux containers. Containers are a lightweight way of packaging applications that strips out the OS, hypervisor, frameworks, etc. and allows an application to run on mobile devices, desktops, servers and anything else that runs a Linux OS (for Docker, a Linux 3.8 kernel level or better). Rumor has it that Google launches over 15M Docker containers a day.

VMware container support expands from Pivotal Warden containers to now also include Docker containers. VMware is also working with Google and others on the Kubernetes project, which supports container POD management (logical groups of containers). In addition, Project Fargo, VMware’s own lightweight packaging solution for VMs, is in development. Now customers can run VMs, Docker containers, or Pivotal (Warden) containers on the same VMware infrastructure.

AT&T and VMware have a joint initiative to bring enterprise-grade network security, speed and reliability to vCloud Air customers, which essentially allows customers to use AT&T VPNs with vCloud Air. There’s more to this, but that’s all I noted.

VMware EVO, the next evolution in hyper-convergence, has emerged.

  • EVO RAIL (formerly known as project Marvin) is an appliance package from VMware hardware partners that runs the vSphere Suite, Virtual SAN and vCenter Log Insight. The hardware supports 4 compute/storage nodes in a 2U-tall rack-mounted appliance, and 4 of these appliances can be connected together into a cluster. Each compute/storage node supports ~100 VMs or ~150 virtual desktops. VMware states that the goal is to have an EVO RAIL implementation take at most 15 minutes from power-on to running VMs. Current hardware partners include Dell, EMC (formerly named project Mystic), Inspur (China), Net One (Japan), and SuperMicro.
  • EVO RACK is a data center level hardware appliance with vCloud Suite installed and includes Virtual SAN and NSX. The goal is for EVO RACK hardware to support a 2hr window from power on to a private cloud environment/datacenter deployed and running VMs. VMware expects a range of hardware partners to support EVO RACK but none were named. They did specifically mention that EVO RACK is intended to support hardware from the Open Compute Project (OCP). VMware is providing contributions to OCP to facilitate EVO RACK deployment.

~~~~

Sorry about the stream of consciousness approach to this. We got a deep dive on what’s in vSphere 6 but it was all under NDA. So this just represents what was discussed openly in keynotes and other public sessions.

Comments?

 

Google cloud offers SSD storage

I read an article the other day, Google Cloud tests out fast, high I/O SSD drives. I suppose it was only a matter of time before cloud services included SSDs in their I/O mix.

Yet, it doesn’t seem to me to be as simple as adding SSDs to the storage catalog. Enterprise storage vendors have had SSDs arguably since January of 2008 (see my EMC introduced SSDs to DMX dispatch). And although there is certainly a class of applications that can take advantage of SSDs’ low latency/high IOPS, the vast majority of applications don’t seem to require these services.

Storage systems use of SSDs today

That’s why most enterprise storage system vendors support some form of automated storage tiering or flash caching of normal I/O for their high-end storage systems, together with offering plain old SSDs as data storage. In these more sophisticated solutions, customers have the option to assign application data to SSD-only, hybrid SSD-disk, or disk-only storage. In this way the customer gets to decide whether they want some sort of mix, or just pure SSD or disk IO, to satisfy their application IO requirements.

Storage startups have emerged that take on both the hybrid SSD-disk and all-flash models and add quality of service to the picture. An example of an all-flash array that supplies QoS is SolidFire (learn more about SolidFire in our GreyBeardsOnStorage podcast with Dave Wright). An example that does the same sort of thing for hybrid storage is Fusion-io ioControl (formerly NexGen) storage.

Storage system QoS

In the case of SolidFire, one can limit volumes or volume groups with an IOPS max, a throughput max, and a burst max. The burst is sort of a credit that accrues over time if the application doesn’t ask for its maximum IOPS/throughput, which it can then consume above its maximums, up to the burst max, for a limited timeframe.
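Here’s a rough sketch of how such a burst-credit scheme could work. This is my interpretation of the description above (a token-bucket-like accrual of unused headroom), not SolidFire’s actual algorithm, and all names and numbers are made up.

```python
class BurstCreditLimiter:
    """Toy IOPS limiter with burst credits, in the spirit of the description above.

    My interpretation of the general scheme, not SolidFire's algorithm.
    """
    def __init__(self, max_iops, burst_max, credit_cap):
        self.max_iops = max_iops      # steady-state IOPS ceiling
        self.burst_max = burst_max    # absolute ceiling when spending credits
        self.credit_cap = credit_cap  # how much unused headroom can accrue
        self.credits = 0

    def allowed_iops(self, requested_iops):
        """Return the IOPS granted this interval, updating burst credits."""
        if requested_iops <= self.max_iops:
            # Under the max: bank the unused headroom as burst credit.
            self.credits = min(self.credit_cap,
                               self.credits + (self.max_iops - requested_iops))
            return requested_iops
        # Over the max: spend credits, but never exceed the burst ceiling.
        grant = min(requested_iops, self.burst_max, self.max_iops + self.credits)
        self.credits -= grant - self.max_iops
        return grant

limiter = BurstCreditLimiter(max_iops=1000, burst_max=2000, credit_cap=5000)
print(limiter.allowed_iops(400))    # quiet period: banks 600 credits
print(limiter.allowed_iops(1800))   # burst: granted 1600 (1000 + 600 credits)
```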

QoS capabilities are slowly making their way into enterprise storage systems as well, but it will take some time for the instrumentation and capabilities to be put in place. One can see limited QoS in IBM DS8000 priority IO, NetApp Storage QoS, EMC Unisphere QoS Manager for VNX & SMC QoS for VMAX, and HDS SVOS QoS via partitioning. Most of these capabilities control access to, or partition, cache, backend and frontend resources for host volumes. As such, they are not nearly as sophisticated or as easy to use as what SolidFire and other startups are offering, but they are getting there.

Cloud SSD pricing

Back to the cloud offering. According to the GigaOm article, Google SSD volumes can sustain up to 15K IOPS and they are charging a premium price for this storage ($0.325/GB-month). Apparently Amazon AWS offers high IO EC2 storage as well, with a maximum of 4K IOPS, but charges a premium both for the storage ($0.125/GB-month) and on an IOPS basis ($0.10/IOPS-month). GigaOm had a pricing comparison for 500GB and 2000 IOPS indicating that the Google SSD storage would cost $163/month and the AWS provisioned SSD storage would cost $263 ($62.50 for storage and $200 for the 2000 IOPS).
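The arithmetic behind that comparison, using the per-unit prices quoted above for a 500GB volume sustaining 2000 IOPS over a month:

```python
# Monthly cost comparison for a 500GB volume doing 2000 IOPS,
# using the per-unit prices quoted in the GigaOm article above.
gb, iops = 500, 2000

google_ssd = gb * 0.325        # $0.325/GB-month, no separate IOPS charge
aws_storage = gb * 0.125       # $0.125/GB-month
aws_iops = iops * 0.10         # $0.10/provisioned IOPS-month
aws_total = aws_storage + aws_iops

print(f"Google SSD: ${google_ssd:.2f}/month")                  # ~$163
print(f"AWS provisioned SSD: ${aws_total:.2f}/month "
      f"(${aws_storage:.2f} storage + ${aws_iops:.2f} IOPS)")  # ~$263
```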

The fact that you can drive the Google SSD to its limits without incurring any extra cost seems a serious advantage to me and would be very appealing to most enterprise customers.

But where’s latency

It seems to me that after some IOPS level is attained, most mission-critical applications are more interested in low-latency IO (for more on why low latency matters, see my IO throughput vs. low latency post…). Many storage systems are capable of maximums in the 100,000s of IOPS, but most shops never run them that hard. With proper use of SSDs, most enterprise storage is now clocking sub-msec low-latency IO.

However, I have yet to see any cloud storage pricing, or QoS for that matter, that is based on latency guarantees. I think this is a serious omission.

In any event, SSDs in the cloud are a good thing; now they just need to offer flash caching, automated storage tiering and sophisticated QoS. I realize this is partially re-inventing enterprise storage in the cloud, but isn’t that what everyone actually wants, at cloud storage pricing of course?

Comments?

Latest SPC-2 performance results – chart of the month

[Chart: Spider chart of the top 10 SPC-2 MB/second results, broken out by workload (LFP, LDQ and VOD)]

In the figure above you can see one of the charts from our latest performance dispatch on SPC-1 and SPC-2 benchmark results. The chart shows SPC-2 throughput results sorted in aggregate MB/sec order, with all three workloads broken out for more information.

Just last quarter I was saying it didn’t appear as if any all-flash system could do well on SPC-2’s throughput-intensive workloads. Well, I was wrong (again): with an aggregate MBPS™ of ~33.5GB/sec, Kaminario’s all-flash K2 took the SPC-2 MBPS results to a whole different level, almost doubling the nearest competitor in this category (the Oracle ZFS ZS3-4).

Ok, Howard Marks (deepstorage.net), my GreyBeardsOnStorage podcast co-host and long-time friend, had warned me that SSDs had the throughput to be winners at SPC-2, but that they would probably cost too much to be viable. I didn’t believe him at the time; how wrong could I be?

As for cost, both Howard and I misjudged this one. The K2 came in at just under $1M USD, whereas the #2 Oracle system was under $400K. But there were five other top-10 SPC-2 MBPS systems over $1M, so the K2 all-flash system’s price was about average for the top 10.

Ok, if cost and throughput aren’t the problem, why haven’t we seen more all-flash SPC-2 benchmark submissions? I tend to think that most flash systems are optimized for OLTP-like update activity and not sequential throughput. The K2 is obviously one exception. But I think we need to go a little deeper into the numbers to understand just what it was doing so well.

The details

The reported LFP (large file processing) MBPS metric is the average across 1MB and 256KB data transfer sizes of streaming activity at 100% write, 100% read and 50:50 read:write. In K2’s detailed SPC-2 report, one can see that for the 100% write workload the K2 averaged ~26GB/sec, for the 100% read workload ~38GB/sec, and for the 50:50 read:write workload ~32GB/sec.

On the other hand, the LDQ (large database query) workload appears to be entirely sequential, read-only, but the report shows that it is made up of two workloads, one using 1MB data transfers and the other 64KB data transfers, with various numbers of streams fired up to generate stress. The surprising item in K2’s LDQ run is that it did much better on the 64KB data streams than the 1MB data streams, an average of 41GB/sec vs. 32GB/sec. This probably says something about an internal data transfer bottleneck at large transfer sizes someplace in the architecture.

The VOD workload also appears to be sequential, read-only. The report doesn’t indicate a data transfer size, but given K2’s actual results, averaging ~31GB/sec, it would seem to be on the order of 1MB.

So what we can tell is that K2’s SSD write throughput is worse than its read throughput (~1/3rd worse) and that relatively smaller sequential reads are better than relatively larger sequential reads (~1/4 better). But I must add that even at the relatively “slower” write throughput, the K2 would still have beaten the next best disk-only storage system by ~10GB/sec.
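As a sanity check on the headline number: if the aggregate MBPS is the average of the three workload composites (my assumption about how the aggregate is computed), the per-workload figures above line up reasonably well with the ~33.5GB/sec result.

```python
# Rough reconstruction of the aggregate from the per-workload figures above,
# assuming the SPC-2 aggregate is the average of the three workload composites.
lfp = (26 + 38 + 32) / 3   # 100% write, 100% read, 50:50 r:w streams (GB/sec)
ldq = (32 + 41) / 2        # 1MB and 64KB transfer-size streams (GB/sec)
vod = 31                   # video-on-demand style streaming reads (GB/sec)

aggregate = (lfp + ldq + vod) / 3
print(f"Estimated aggregate: {aggregate:.1f} GB/sec")  # ~33.2, close to the ~33.5 reported
```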

Where’s the other all-flash SPC-2 benchmarks?

Prior to K2 there was only one other all-flash system (TMS RamSan-630) submission for SPC-2. I suspect that writing 26 GB/sec. to an all-flash system would be hazardous to its health and maybe other all-flash storage system vendors don’t want to encourage this type of activity.

Just for the record, the K2 SPC-2 result has been submitted for “review” (as of 18Mar2014) and may be modified before being finally “accepted”. However, the review process typically doesn’t impact performance results as much as other report items. So, officially, we will need to await final acceptance before we can truly believe these numbers.

Comments?

~~~~

The complete SPC  performance report went out in SCI’s February 2014 newsletter.  But a copy of the report will be posted on our dispatches page sometime next quarter (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free newsletters by just using the signup form above right.

Even more performance information and OLTP, Email and Throughput ChampionCharts for Enterprise, Mid-range and SMB class storage systems are also available in SCI’s SAN Buying Guide, available for purchase from our website.

As always, we welcome any suggestions or comments on how to improve our SPC  performance reports or any of our other storage performance analyses.

Two dimensional magnetic recording (TDMR)

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

I attended a Rocky Mountain IEEE Magnetics Society meeting a couple of weeks ago where Jonathan Coker, HGST’s Chief Architect and an IEEE Magnetics Society Distinguished Lecturer, was discussing HGST’s research into TDMR heads.

It seems that disk track density is getting so high, and track pitch so small, that the magnetic read heads have become wider than the actual data track. Because of this, read heads are starting to pick up more inter-track noise, and it’s getting more difficult to obtain a decent signal-to-noise ratio (SNR) off a high-density disk platter with a single read head.

TDMR counteracts this extraneous noise by using multiple read heads per data track, which helps to create a better signal-to-noise ratio during read-back.

What are TDMR heads?

TDMR heads are any configuration of multiple read heads used in reading a single data track. There seemed to be two popular configurations of HGST’s TDMR heads:

  • In-series, where one head is directly behind another head. This provides double the signal for the same (relative) amount of random (electronic) noise.
  • In-parallel (side by side), where three heads were configured in-parallel across the data track and the two inter-track bands. That is, one head was configured directly over the data track with portions spanning the inter-track gap to each side, one head was half way across the data track and the next higher track, and a third head was placed half way across the data track and the next lower track.

At first, the in-series configuration seemed to make the most sense to me. You could conceivably average the two signals coming off the heads and be able to filter out the random noise.  However, the “random noise” seemed to be mostly coming from the inter-track zone and this wasn’t as much random electronics noise as random magnetic noise, coming off of the disk platter, between the data tracks.

In-parallel wins the SNR race

So much of the discussion was on the in-parallel configuration. The researcher had a number of simulated magnetic recordings which were then read by simulated in-parallel, tripartite read heads. The idea here was that the signal read by each of the side-band heads, which mostly contained inter-track noise, could be used as a noise estimate to filter the middle head’s data track reading. In this way they could effectively increase the SNR across the three signals, and thus get a better data signal from the data track.
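Here’s a grossly simplified, conceptual sketch of what that tripartite filtering amounts to: the side heads mostly see inter-track noise, so a scaled version of their readings can be subtracted from the center head’s signal. This is purely illustrative; it is not HGST’s actual signal processing and the signal model is invented.

```python
# Grossly simplified illustration of the in-parallel (tripartite) TDMR idea:
# the side heads mostly pick up inter-track noise, so their readings can be
# used to cancel noise out of the center head's signal. Not HGST's actual DSP.
import random

random.seed(1)
data_signal = [1.0, -1.0, 1.0, 1.0, -1.0]                  # the bits we want to recover
track_noise = [random.gauss(0, 0.4) for _ in data_signal]  # inter-track magnetic noise

center_head = [s + n for s, n in zip(data_signal, track_noise)]       # data + noise
side_heads = [n * 0.9 + random.gauss(0, 0.05) for n in track_noise]   # mostly noise

# Subtract the side heads' noise estimate from the center head's reading.
filtered = [c - s for c, s in zip(center_head, side_heads)]
print([round(x, 2) for x in filtered])   # much closer to +/-1 than center_head alone
```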

Originally, TDMR was going to be the technology needed to get the disk industry to 100Tb/sqin. But what they are finding at HGST and elsewhere is that even today, at “only” ~5Tb/sqin (HGST helium drives), there seems to be an increasing need to help reduce the noise picked up by read heads.

Disk density increase has been slowing lately but is still on a march to double density every 2 years or so. As such, a 1TB platter today will be a 2TB platter in 2 years and a 4TB platter in 4 years, etc. TDMR heads may be just the thing that gets the industry to that 4TB platter (20Tb/sqin) in 4 years.

The only problem is what’s going to get them to 100Tb/sqin now?

Comments?