Disk rulz, at least for now

Last week WDC announced their next generation technology for hard drives, MAMR or Microwave Assisted Magnetic Recording. This is in contrast to HAMR, Heat (laser) Assisted Magnetic Recording. Both techniques add energy so that data can be written as smaller bits on a track.

Disk density drivers

Current hard drive technology uses PMR or Perpendicular Magnetic Recording with or without SMR (Shingled Magnetic Recording) and TDMR (Two Dimensional Magnetic Recording), both of which we have discussed before in prior posts.

The problem with PMR-SMR-TDMR is that the maximum achievable disk density is starting to flatten out as it approaches the "writeability limit" of the head-media combination.

That is, even with TDMR, SMR and PMR heads, the highest density that can be achieved is ~1.1Tb/sq.in. The writeability limit for the current PMR head-media technology is ~1.4Tb/sq.in. As a result, most disk capacity increases over the past few years have been accomplished by adding platters and heads to hard drives.

MAMR and HAMR both seem able to get disk drives to >4.0Tb/sq.in. densities by adding energy to the magnetic recording process, which allows the drive to record more data in the same (grain) area.

There are two factors which drive disk drive density (Tb/sq.in.): Bits per inch (BPI) and Tracks per inch (TPI). Both SMR and TDMR were techniques to add more TPI.

I believe MAMR and HAMR increase BPI beyond what's available today by writing data on smaller magnetic grain sizes (pitch in the chart) and thus packing more bits into the same area. At grain sizes of 7nm or below, PMR becomes unstable, but HAMR and MAMR can record on grain sizes of 4.5nm, which would equate to >4.5Tb/sq.in.
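To make the BPI x TPI relationship concrete, here's a minimal Python sketch of the areal density arithmetic. The BPI and TPI figures below are illustrative assumptions of mine, not WDC's numbers; they're just chosen to land near the densities quoted above.

    # Areal density (Tb/sq.in.) is simply bits-per-inch times tracks-per-inch.
    # The BPI/TPI values below are illustrative assumptions, not WDC figures.

    def areal_density_tb_per_sqin(bpi, tpi):
        """Return areal density in terabits per square inch."""
        return (bpi * tpi) / 1e12

    # Roughly today's PMR/SMR/TDMR class of drive (~1.1 Tb/sq.in.)
    print(areal_density_tb_per_sqin(2.0e6, 550e3))   # ~1.10 Tb/sq.in.

    # A hypothetical MAMR/HAMR drive with smaller grains: double the BPI
    # and add ~80% more TPI to get past 4 Tb/sq.in.
    print(areal_density_tb_per_sqin(4.0e6, 1.0e6))   # ~4.00 Tb/sq.in.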

HAMR hurdles

It turns out that HAMR, as it uses heat to add energy, heats the media to much higher temperatures than what's normal for a disk drive, something like 400C-700C. The normal operating temperature for disk drives is ~50C. HAMR heat levels will play havoc with drive reliability. The view from WDC is that HAMR has 100X worse reliability than MAMR.

In order to generate that much heat, HAMR needs a laser to expose the area to be written. Of course the laser has to be in the head to be effective. Having to add a laser and optics will increase the cost of the head, increase the steps to manufacture the head, and require new suppliers/sourcing organizations to supply the componentry.

HAMR also requires a different media substrate. It's unclear why, but HAMR seems to require a glass substrate, with the magnetic media (many layers) deposited on top of the glass. This requires a new media manufacturing line and probably new suppliers, and getting glass media to meet disk drive specifications (flatness-bumpiness, rotational integrity, vibrational integrity) will take time.

There are probably a half dozen more issues with having laser light inside a hard disk drive, but suffice it to say that HAMR was going to be a very difficult transition to perform right while continuing to provide today's drive reliability levels.

MAMR merits

MAMR uses microwaves to add energy to the spot being recorded. The microwaves are generated by a Spin Torque Oscillator (STO), which is a solid state device compatible with CMOS fabrication techniques. This means that the MAMR head assembly (PMR & STO) can be fabricated on current head lines and within current head mechanisms.

MAMR doesn’t add heat to the recording area, it uses microwaves to add energy. As such, there’s no temperature change in MAMR recording which means the reliability of MAMR disk drives should be about the same as todays disk drives.

MAMR uses today's aluminum substrates. So, current media manufacturing lines and suppliers can be used, and media specifications shouldn't have to change much (?) to support MAMR.

MAMR has just about the same max recording density as HAMR, so there’s no other benefit to going to HAMR, if MAMR works as expected.

WDC’s technology timeline

WDC says they will have sample MAMR drives out next year and production drives out in 2019. They also predict an enterprise 40TB MAMR drive by 2025. They have high confidence in this schedule because of MAMR's compatibility with current drive media and head manufacturing processes.

WDC discussed their IP position on HAMR and MAMR. They have 400+ issued HAMR patents with another 100+ pending and 75 issued MAMR patents with 46 more pending. Quantity doesn’t necessarily equate to quality, but their current IP position on both MAMR and HAMR looks solid.

WDC believes that by 2020, ~90% of enterprise data will be stored on hard drives. However, this is predicated on maintaining a continuing 10X cost differential between disk drives and (QLC 3D) flash.

What comes after MAMR is the subject of much speculation. I've written on one alternative which uses molecular magnets at liquid nitrogen temperatures, which I called CAMR (cold assisted magnetic recording), but it's way too early to tell.

And we have yet to hear from the other big disk drive leader, Seagate. It will be interesting to hear whether they follow WDC’s lead to MAMR, stick with HAMR, or go off in a different direction.

Comments?

 

Photo Credit(s): WDC presentation

Research reveals ~liquid nitrogen temperature molecular magnets with 100X denser storage


Must be on a materials science binge these days. I read another article this week in Phys.org on “Major leap towards data storage at the molecular level” reporting on a Nature article “Molecular magnetic hysteresis at 60K“, where researchers from the University of Manchester, led by Dr David Mills and Dr Nicholas Chilton from the School of Chemistry, have come up with a new material that provides magnetic behavior at the molecular level at almost liquid nitrogen temperatures.

Previously, molecular magnets only operated at 4 to 14K (degrees Kelvin), based on research done over the last 25 years or so, but this new research shows similar effects operating at ~60K, close to liquid nitrogen temperatures. Nitrogen freezes at 63K and boils at ~77K, so it is liquid between those temperatures.

What new material

The new material, “hexa-tert-butyldysprosocenium complex—[Dy(Cpttt)2][B(C6F5)4], with Cpttt = {C5H2tBu3-1,2,4} and tBu = C(CH3)3“, dysprosocenium for short was designed (?) by the researchers at Manchester and was shown to exhibit magnetism at the molecular level at 60K.

The storage effect is hysteresis, which is a material's ability to remember the last (magnetic/electrical/?) field it was exposed to; magnetic field strength is measured in oersteds.

The researchers claim the new material provides magnetic hysteresis at a sweep level of 22 oersteds. Not sure what “sweep level of 22 oersteds” means but I assume a molecule of the material is magnetized with a field strength of 22 oersteds and retains this magnetic field over time.

Reports of disk’s death have been greatly exaggerated

While there seems to be no end in sight for the densities of flash storage these days with 3D NAND (see my 3D NAND, how high can it go post or listen to our GBoS FMS2017 wrap-up with Jim Handy podcast), the disk industry lives on.

Disk industry researchers have been investigating HAMR ([laser] heat assisted magnetic recording, see my Disk density hits new record … post) for some time now to increase disk storage density. But to my knowledge, HAMR has not shown up in any generally available disk device on the market yet. HAMR was supposed to provide the next big increase in disk storage densities.

Maybe they should be looking at CAMMR, or cold assisted magnetic molecular recording (heard it here, 1st).

According to Dr Chilton, using the new material at 60K in a disk device would increase capacity by 100X. Western Digital just announced a 20TB MyBook Duo disk system for desktop storage and backup. With this new material, at 100X current densities, we could have a 2PB MyBook Duo storage system on your desktop.

That should keep my ever increasing video-photo-music library in fine shape and everything else backed up for a little while longer.

Comments?

Photo Credit(s): Molecular magnetic hysteresis at 60K, Nature article

 

Microsoft ESRP database access latencies – chart of the month

The above chart was included in last month's SCI e-Newsletter and depicts recent Microsoft Exchange (2013) Solution Reviewed Program (ESRP) results for database access latencies. Storage systems new to this 5000-mailbox-and-over category include Oracle, Pure and Tegile. As all of these systems are all-flash arrays, we are starting to see significant reductions in database access latencies; the #4 system (Nimble Storage) was a hybrid (disk and flash) array.

As you recall, ESRP reports on three database access latencies: read database, write database and log write. All three are shown above but we sort and rank this list based on database read activity alone.

It's hard to see above, but reading the ESRP reports, one finds that the top 3 systems had 1.04, 1.06 and 1.07 millisecond average database read latencies. So the separation between the top 3 is less than 40 microseconds.

The top 3 database write access latencies were 1.75, 1.62 and 3.07 milliseconds, respectively. So if we were ranking the above on write response times Pure would have come in #1.

The top 3 log write access latencies were 0.67, 0.41 and 0.82 milliseconds and once again if we were ranking based on log response times Pure would be #1.

It’s unclear whether Exchange customers would want to deploy AFAs for their database and log files but these three ESRP reports and Nimble’s show that there should be no problem with the performance of AFAs in these environments.

What about data reduction?

It's unclear to me how much of a role data reduction technologies played in the AFA and hybrid solutions' ESRP performance. Data reduction advantages would most likely show up in database IOPS counts more so than response times, but if present, they may still reduce access latencies, as there would potentially be less data to transfer to/from the backend of the storage system and into/out of the storage system cache.

ESRP reports do not officially report on a vendor’s data reduction effectiveness, so we are left with whatever the vendor decides to say.

In that respect, Pure FlashArray//m20 indicated in their ESRP report that their “data reduction is significantly higher” than what they see normally (4:1) because Jetstress (ESRP benchmark program) generates lots of duplicated data.

I couldn't find anything similar from Tegile (T3800) or Nimble Storage indicating how well their data reduction technologies worked in Jetstress compared to normal. They did reference their compression effectiveness via database size, but I find that number less useful: historically it mostly reflected the amount of over-provisioning used by disk-only systems, and for AFAs and hybrid storage it's unclear how much is data reduction effectiveness vs. over-provisioning.

For example, Pure, Tegile and Nimble also reported a “database capacity utilization” of 4.2%, 60% and 74.8%, respectively. And Nimble did report that over their entire customer base, Exchange data has on average, a 61.2% capacity savings.

So you tell me: what was the effective data reduction for Pure's, Tegile's and Nimble's respective Jetstress runs? From my perspective, Pure's report of 4.2% looks about right; it says the actual database data fit in 4.2% of the SSD storage, for a ~23.8:1 reduction effectiveness on Jetstress/ESRP data. Tegile's and Nimble's numbers are harder to believe, as they would imply only a 1.7:1 and 1.3:1 reduction effectiveness for Jetstress/ESRP data.
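Here's the simple arithmetic behind those implied ratios, as a small sketch; the utilization figures are the ones quoted from the ESRP reports above, and the reciprocal interpretation is mine.

    # Effective reduction ratio implied by "database capacity utilization":
    # if the database only fills X of the provisioned storage, the implied
    # reduction is 1/X. Utilization figures are from the ESRP reports above.
    for vendor, utilization in [("Pure", 0.042), ("Tegile", 0.60), ("Nimble", 0.748)]:
        print(f"{vendor}: ~{1 / utilization:.1f}:1 implied reduction")
    # Pure: ~23.8:1, Tegile: ~1.7:1, Nimble: ~1.3:1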

Oracle FS1-2 doesn’t seem to have any data reduction capabilities and reported a 100% storage capacity used by Exchange database.

So that's it: Jetstress uses "significantly reducible" data for some AFA systems. But in the field, the advantage of data reduction techniques is much smaller.

I think it’s time that ESRP stopped using significantly reducible data in their Jetstress program and tried to more closely mimic real world data.

Want more?

The October 2016 and our other ESRP reports have much more information on Microsoft Exchange performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is also included in SCI's SAN-NAS Buying Guide, also available from our website. And if you're interested in file system performance, please consider purchasing our NAS Buying Guide, also available on our website.

~~~~

The complete ESRP performance report went out in SCI’s October 2016 Storage Intelligence e-newsletter.  A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free SCI Storage Intelligence e-newsletters, by just using the signup form in the sidebar or you can subscribe here.

As always, we welcome any suggestions or comments on how to improve our ESRP  performance reports or any of our other storage performance analyses.

 

Hedvig storage system, Docker support & data protection that spans data centers

We talked with Hedvig (@HedvigInc) at Storage Field Day 10 (SFD10) a month or so ago and had a detailed deep dive into their technology. (Check out the videos of their sessions here.)

Hedvig implements a software defined storage solution that runs on X86 or ARM processors and depends on a storage proxy operating in a hypervisor host (as a VM) and storage service nodes. Their proxy and the storage services can execute as separate VMs on the same host in a hyper-converged fashion or on different nodes as a separate storage cluster with hosts doing IO to the storage cluster.

Hedvig’s management team comes from hyper-scale environments (Amazon Dynamo/Facebook Cassandra) so they have lots of experience implementing distributed software defined storage at (hyper-)scale.

AWS vs. Azure security setup for Linux

Strange Clouds by michaelroper (cc) (from Flickr)

I have been doing some testing with both Azure and Amazon Web Services (AWS) these last few weeks and have observed a significant difference in the security setups for both of these cloud services, at least when it comes to Linux compute instances and cloud storage.

First, let me state at the outset that all of my security setups for both AWS and Azure were done using the AWS console or the Azure (classic) portal. I believe anything that can be done with the portal/console for both AWS and Azure can also be done in the CLI or the REST interface. I only used the portal/console for these services, so I can't speak to the ease of using AWS's or Azure's CLI or REST services.

For AWS

EC2 instance security is pretty easy to setup and use, at least for Linux users:

  • When you set up a (Linux) EC2 instance, you are asked to create a key pair and download its private key (.pem) file, to be used for SSH/SFTP/SCP connections. You just need to copy this file to your desktop/laptop/? client system. When you invoke SSH/SFTP/SCP, you use the “-i” (identity file) option and specify the path to the .pem file. The server is already authorized for this identity. If you lose it, AWS offers to create another one for you as an option when connecting to the machine.
  • When you configure the AWS instance, one (optional) step is to configure its security settings. And one option for this is to allow connections only from ‘my IP address’, how nice. You don’t even have to know your IP address, AWS just figures it out for itself and configures the rule (a minimal boto3 sketch of the same rule follows this list).
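For what it's worth, the same "my IP only" rule can also be expressed programmatically. Here's a minimal boto3 sketch; the security group ID and IP address are placeholders I made up, not real values.

    import boto3

    # Minimal sketch: restrict SSH (port 22) on an EC2 security group to a
    # single source IP, the same effect as choosing "My IP" in the console.
    # The group ID and address below are placeholders, not real values.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",          # hypothetical security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # your IP; /32 = one host
        }],
    )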

That's about it. It's unclear to me how well this secures your EC2 instance, but it seems pretty secure to me. As I understand it, a cyber criminal would need to know and spoof your IP address to connect to or remotely control the EC2 instance. And if they wanted to use SSH/SFTP/SCP, they would also have to have access to the identity file. I don't believe I ever set up a password for the EC2 instance.

As for EBS storage, there's no specific security associated with EBS volumes. Their security is associated with the EC2 instance they're attached to: a volume is either assigned/attached to an EC2 instance and secured there, or it's unassigned/unattached. For unattached volumes, you may be able to snapshot them (to an S3 bucket within your administrative control) or delete them, but for either of these you have to be an admin for the EC2 domain.

As for S3 bucket security, I didn't see any S3 security setup that mimicked the EC2 instance steps outlined above. But in order to use AWS automated billing report services for S3, you have to allow the service to have write access to your S3 buckets. You do this by supplying a (JSON) security policy and applying it to all S3 buckets you wish to report on (or maybe it's to store reports in). AWS provides a link to the security policy page which just so happens to have the policy text you will need to do this. All I did was copy this text and paste it into a window that opened when I said I wanted to apply a security policy to the bucket.
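As an illustration of what applying such a policy looks like programmatically, here's a hedged boto3 sketch. The bucket name, statement and service principal below are placeholders of my own; in practice you would paste in the exact policy text that AWS's billing page supplies.

    import json
    import boto3

    # Sketch of applying a bucket policy that lets an AWS service write report
    # objects into an S3 bucket. The bucket name and service principal are
    # placeholders; use the policy text AWS's billing page actually gives you.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowBillingReportWrites",
            "Effect": "Allow",
            "Principal": {"Service": "example-billing.amazonaws.com"},  # placeholder principal
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-billing-bucket/*",             # placeholder bucket
        }],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket="my-billing-bucket", Policy=json.dumps(policy))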

I did find that S3 bucket security made me allow public access (I think, I can't really remember exactly) to the S3 bucket in order to list and download objects from the bucket over the Internet. I didn't like this, but it was pretty easy to turn on, so I left it on. But this PM I tried to find it again (to disable it) and couldn't seem to locate where it was.

From my perspective, all the AWS security setup for EC2 instances, storage, and S3 was straightforward to set up and use; it seemed pretty secure and allowed me to get running with only minimal delay.

For Azure

First, I didn’t find the more modern, new Azure portal that useful but then I am a Mac user, and it’s probably more suitable for Windows Server admins. The classic portal was as close to the AWS console as I could find and once I discovered it, I never went back.

Setting up a Linux compute instance under Azure was pretty easy, but I would say the choices are a bit overwhelming and trying to find which Linux distro to use was a bit of a challenge. I settled on SUSE Enterprise, but may have made a mistake (EXT4 support was limited to RO – sigh). But configuring SUSE Enterprise Linux without any security was even easier than AWS.

However, Azure compute instance security was not nearly as straightforward as in AWS. In fact, I could find nothing similar to securing your compute instance to “My IP” address like I did in AWS. So, from my perspective my Azure instances are not as secure.

I wanted to be able to SSH/SFTP/SCP into my Linux compute instances on Azure just like I did on AWS. But there was no easy setup of an identity file (.pem) like AWS supported. So I spent some time researching how to create a cert file on the Mac (it didn't seem able to create a .pem file), then more time researching how to create a cert file on my Linux machine. This works, but you have to install OpenSSL and then issue the proper "create certificate" command with the proper parameters. The cert file creation process asks you a lot of questions: one for a pass phrase, and then for a network (I think) phrase. Of course, it asks for name, company, and other identification information, and at the end of all this you have created a set of cert files on your Linux machine.

But there's a counterpart to the .pem file that needs to be on the server to authorize access. This counterpart (the public key) needs to be placed in a special file (~/.ssh/authorized_keys) and, I believe, needs to be signed by the client needing to be authorized. But I didn't know whether the .cert, .csr, .key or .pem file needed to be placed there, and I had no idea how to "sign it". After spending about a day and a half on all this, I decided to abandon the use of an identity file and just use a password. I believe this provides less security than an identity file.

As for BLOB storage, it was pretty easy to configure a PageBlob for use by my compute instances. Its security seemed to be tied to the compute instance it was attached to.

As for my PageBlob containers, there's a button on the classic portal to manage access keys to these. But it said that once the keys are regenerated, you will need to update all VMs that access these storage containers with the new keys. Not knowing how to do that, I abandoned all security for my container storage on Azure.

So, all in all, I found Azure a much more manual security setup for Linux systems than AWS, and in the end I decided to not even have the same level of security for my Linux SSH/SFTP/SCP services that I did on AWS. As for container security, I'm not sure if there are any controls on the containers at this point, but I will do some more research to find out.

In all fairness, this was me trying to set up a Linux machine on Azure, which appears more tailored for Windows Server environments. Had I been in an Active Directory group, I am sure much of this would have been easier. And had I been configuring Windows compute instances instead of Linux, all of this would also have been much easier, I believe.

~~~~

All in all, I had fun using AWS and Azure services these last few weeks, and I will be doing more over the next couple of months. I will let you know what other significant differences I find between AWS and Azure. So stay tuned.

Comments?

(Storage QoW 15-003): SMR disks in GA enterprise storage in 12 months? Yes@.85 probability

Hard Disk by Jeff Kubina (cc) (from Flickr)

(Storage QoW 15-003): Will we see SMR (shingled magnetic recording) disks in GA enterprise storage systems over the next 12 months?

Are there two vendors of SMR?

Yes, both Seagate and HGST have announced and are currently shipping (?) SMR drives: HGST has a 10TB drive and Seagate has had an 8TB drive on the market since last summer.

One other interesting fact is that SMR will be the common format for all future disk head technologies including HAMR, MAMR, & BPMR (see presentation).

What would storage vendors have to do to support SMR drives?

Because of the nature of SMR disks, writes overlap other tracks, so they must be written, at least in part, sequentially (see our original post on Sequential only disks). Another post I did reported on recent work by Garth Gibson at CMU (Shingled Magnetic Recording disks) which showed how multiple bands or zones on an SMR disk could be used, some of which could be written randomly and others only sequentially, but all of which could be read randomly. With such an approach you could have a reasonable file system on an SMR device, with a metadata partition (randomly writeable) and a data partition (sequentially writeable).

In order to support SMR devices, changes have been requested for the T10 SCSI  & T13 ATA command protocols. Such changes would include:

  • SMR devices support a new write cursor for each SMR sequential band.
  • SMR devices support sequential writes within SMR sequential bands at the write cursor.
  • SMR band write cursors can be read, statused and reset to 0. SMR sequential band LBA writes only occur at the band cursor and for each LBA written, the SMR device increments the band cursor by one.
  • SMR devices can report their band map layout.

The presentation refers to multiple approaches to SMR support, or SMR drive modes (a small sketch of band write-cursor behavior follows this list):

  • Restricted SMR devices – where the device will not accept any random writes; all writes must occur at a band cursor and random writes are rejected by the device. But performance would be predictable.
  • Host Aware SMR devices – where the host using the SMR devices is aware of SMR characteristics and actively manages the device using write cursors and band maps to write the most data to the device. However, the device will accept random writes and will perform them for the host. This will result in sub-optimal and non-predictable drive performance.
  • Drive managed SMR devices – where the SMR device acts like a randomly accessed disk device but maps random writes to sequential writes internally using virtualization of the drive's LBA map, not unlike SSDs do today. These devices would be backward compatible with today's disk devices, but drive performance would be poor and non-predictable.
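To make the band/write-cursor idea concrete, here's a small, hypothetical Python model of one SMR sequential band: it accepts writes at the cursor, advances the cursor as LBAs are written, lets the cursor be read and reset, and, in host-aware mode, tolerates non-cursor writes at a performance penalty while restricted mode rejects them. This is my own toy sketch of the behavior described above, not any vendor's interface.

    # Toy model of one SMR sequential band with a write cursor, per the T10/T13
    # behaviors described above. Purely illustrative, not a real device interface.

    class SMRBand:
        def __init__(self, start_lba, length, host_aware=True):
            self.start_lba = start_lba
            self.length = length
            self.cursor = 0               # next writable offset within the band
            self.host_aware = host_aware  # host-aware accepts (slow) random writes

        def write(self, lba, blocks):
            offset = lba - self.start_lba
            if not (0 <= offset < self.length):
                raise ValueError("LBA outside this band")
            if offset != self.cursor:
                if not self.host_aware:   # restricted mode: reject random writes
                    raise IOError("random write rejected at non-cursor LBA")
                # host-aware mode: the drive handles it, but performance suffers
            self.cursor = max(self.cursor, offset + blocks)  # advance write cursor
            return blocks

        def reset_cursor(self):
            self.cursor = 0               # like resetting a band's write pointer

    band = SMRBand(start_lba=0, length=1_000_000)
    band.write(0, 256)        # sequential write at the cursor
    print(band.cursor)        # 256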

It's unclear which of these drive modes are currently shipping, but I believe Restricted SMR devices are already available and drive manufacturers are working on Host Aware and Drive Managed modes to help adoption.

So, assuming Restricted SMR device mode availability and prototypes of the T10/T13 changes, there are significant but known changes required for enterprise storage systems to support SMR devices.

Nevertheless, a number of hybrid storage systems already implement Log Structured File (LSF) systems on their backends, which mostly write sequentially to backend devices, so moving to SMR Restricted mode devices would be easier for these systems.

It's unclear how many storage systems have such a backend, but NetApp uses one for WAFL and just about every other hybrid startup has an LSF format for their backend layout. So, being conservative, let's say 50% of enterprise hybrid storage vendors use LSF.

The other 50% would have more of a problem implementing SMR Restricted mode devices, but it's only a matter of time before all will need to go that way, assuming they still use disks. So, we are primarily talking about hybrid storage systems.

All major storage vendors support hybrid storage and about 60% of startups support hybrid storage, so putting these together, maybe about 75% of enterprise storage vendors have hybrid offerings.

Using the analysis from QoW 15-001, about 60% of enterprise storage vendors will probably ship new hardware versions of their systems over the next 12 months. So, of the 13 likely new hardware systems over the next 12 months, 75% have hybrid solutions and 50% of those have LSF, or ~4.9 new hardware systems released over the next 12 months that are hybrid and already have LSF backends.

What are the advantages of SMR?

SMR devices will have higher storage densities and lower cost. Today disk drives are running 6-8TB and the SMR devices run 8-10TB so a 25-30% step up in storage capacity is possible with SMR devices.

New drive support has in the past been relatively easy, because command sets/formats haven't changed much over the past 7 years or so, but SMR is different and will take more effort to support. The fact that all new drives will be SMR over time gives more emphasis to getting on the bandwagon as soon as feasible. So, I would give a storage vendor an 80% likelihood of implementing SMR, assuming they have new systems coming out, are already hybrid and are already using LSF.

So, taking the ~4.9 LSF/hybrid systems being released times 0.8, ~3.9 systems will likely introduce SMR devices over the next 12 months.

For non-LSF hybrid systems, the effort seems much harder, so I would give the likelihood of implementing SMR about a 40% chance. So of the ~8.1 systems left that will be introduced next year, 75% are hybrid (~6.1 systems), and with a 40% likelihood of implementing SMR, ~2.4 of these non-LSF systems will probably introduce SMR devices.

There's one other category we need to consider, and that would be startups in stealth. These could have been designing their hybrid storage for SMR from the get go. In the QoW 15-001 analysis I assumed another ~1.8 startup vendors would emerge to GA over the next 12 months. And if we assume that 75% of these are hybrid, then there are ~1.4 startup vendors (1.8 × 0.75) that could be using SMR technology in their hybrid storage. That gives 3.9 + 2.4 + 1.4 = ~7.7 systems with a high probability of SMR implementation over the next 12 months in GA enterprise storage products.
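Gathering that back-of-the-envelope arithmetic in one place, here's a small sketch; all the percentages are the estimates from the discussion above, not measured data.

    # Back-of-the-envelope SMR adoption estimate, using the assumptions above.
    new_systems  = 13        # likely new hardware systems in 12 months (QoW 15-001)
    hybrid_share = 0.75      # fraction of vendors with hybrid systems
    lsf_share    = 0.50      # fraction of hybrids with LSF backends

    lsf_hybrid = new_systems * hybrid_share * lsf_share       # ~4.9 systems
    non_lsf    = (new_systems - lsf_hybrid) * hybrid_share    # ~6.1 hybrid, non-LSF

    smr_from_lsf     = lsf_hybrid * 0.80    # 80% likelihood -> ~3.9
    smr_from_non_lsf = non_lsf * 0.40       # 40% likelihood -> ~2.4
    smr_from_stealth = 1.8 * 0.75           # new startups, 75% hybrid -> ~1.4

    print(round(smr_from_lsf + smr_from_non_lsf + smr_from_stealth, 1))  # ~7.7 systems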

Forecast

So my forecast of SMR adoption by enterprise storage is Yes for .85 probability (unclear what the probability should be, but it’s highly probable).

~~~~

Comments?

A new storage benchmark – STAC-M3

A week or so ago I was reading a DDN press release that said they had just published a STAC-M3 benchmark. I had never heard of STAC or their STAC-M3, so I thought I would look them up.

STAC stands for the Securities Technology Analysis Center, an organization dedicated to testing system solutions for the financial services industries.

What does STAC-M3 do

It turns out that STAC-M3 simulates processing a time (ticker tape or tick) log of security transactions and identifies the maximum and weighted bid, along with various other statistics, for a number (1%) of securities over various time periods (year, quarter, week, and day) in the dataset. They call it high-speed analytics on time-series data. This is a frequent use case for systems in the securities sector.

There are two versions of the STAC-M3 benchmark: Antuco and Kanaga. The Antuco version uses a statically sized dataset and the Kanaga version uses more scalable (varying number of client threads) queries over larger datasets. For example, the Antuco version uses 1 or 10 client threads for their test measurements whereas the Kanaga version scales client threads, in some cases, from 1 to 100 threads and uses more tick data in 3 different sized datasets.

Good, bad and ugly of STAC-M3

Access to STAC-M3 reports requires a login, but it's free. Some details are only available after you request them, which can be cumbersome.

One nice thing about the STAC-M3 benchmark information is that it provides a decent summary of the amount of computational time involved in all the queries it performs. From a storage perspective, one could use this to focus on the queries with minimal or light computation, which come closer to a pure storage workload than the computationally heavy queries.

Another nice thing about STAC-M3 is that, in some cases, it provides detailed statistical information about the distribution of metrics, including mean, median, min, max and standard deviation. Unfortunately, the current version of STAC-M3 does not provide these statistics for the computationally light measures that are of primary interest to me as a storage analyst. It would be very nice to see some of their statistical reporting adopted by SPC, SPECsfs or Microsoft ESRP for their benchmark metrics.

STAC-M3 also provides a measure of storage efficiency, or how much storage it took to store the database. This is computed as the reference size of the dataset divided by the amount of storage it took to store the dataset. Although this could be interesting, most of the benchmark reports I examined had similar storage efficiency numbers, 171% or 172%.
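As a quick sketch of what that metric implies (my reading of the definition above), a 172% efficiency means the dataset took roughly 58% of its reference size on the storage system; the reference size below is a hypothetical number just to show the arithmetic.

    # Storage efficiency = reference dataset size / storage actually used.
    # So an efficiency of 172% implies the dataset took ~58% of its reference size.
    reference_size_gb = 1000                    # hypothetical reference dataset size
    efficiency        = 1.72                    # typical reported value (171-172%)
    storage_used_gb   = reference_size_gb / efficiency
    print(round(storage_used_gb))               # ~581 GB, i.e. ~58% of reference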

The STAC-M3 benchmark is a full stack test. That is, it measures the time from the point a query is issued to the point the query response is completed. Storage is just one part of this activity; computing the various statistics is another part, and the database used to hold the stock tick data is another aspect of their test environment. But what is being measured is the query elapsed time. SPECsfs2014 has also recently changed over to be a full stack test, so it's not that unusual anymore.

The other problem from a storage perspective (but not a STAC perspective) is that there is minimal write activity during any of the benchmark-specific testing. There's just one query that generates a lot of storage write activity; all the rest are heavy read IO only.

Finally, there’s not a lot of description of the actual storage and server configuration available in the basic report. But this might be further detailed in the Configuration Disclosure report which you have to request permission to see.

STAC-M3 storage submissions

As it's a stack benchmark, we don't find a lot of storage array submissions. Typical submissions include a database running on some servers with SSD-DAS or, occasionally, a backend storage array. In the case of DDN, it was Kx Systems' kdb 3.2 database, with Intel Enterprise Edition for Lustre running on 8 Intel-based DDN EXAscaler servers, talking to a DDN SFA12KX-40 storage array. In contrast, a previous submission used an eXtremeDB Financial Edition 6.0 database running on Lucera Compute™ (16 Core SSD V2, Smart OS) nodes.

Looking back over the last couple of years of submissions (STAC-M3 goes back to 2010), for storage arrays, aside from the latest DDN SFA12KX-40, we find an IBM FlashSystem 840, NetApp EF550, IBM FlashSystem 810, a couple of other DDN SFA12K-40 storage solutions, and a Violin V3210 & V3220 submission. Most of the storage array submissions were all-flash arrays, but the DDN SFA12KX-40 is a hybrid (flash-disk) appliance.

Some metrics from recent STAC-M3 storage array runs

(Chart: STAC-M3 YRHIBID maximum MBPS by storage system)

In the above chart, we show the Maximum MBPS achieved in the year long high bid (YRHIBID) extraction testcase. DDN won this one handily with over 10GB/sec for its extraction result.

(Chart: STAC-M3 YRHIBID mean response time by storage system)

However, in the next chart, we show the Mean Response (query elapsed) Time (in msec.) for the Year long High Bid data extraction test (YRHIBID). In this case the IBM FlashSystem 810 did much better than the DDN or even the more recent IBM FlashSystem 840.

Unexpectedly, the top MBPS storage didn't achieve the best mean response time for the YRHIBID query. I would have thought the mean response time and the maximum MBPS would show the same rankings. It's not clear to me why this is, as it's the mean response time, not the minimum or maximum (although the maximum response time would show the same rankings). An issue with YRHIBID is that it doesn't report standard deviation, median or minimum response time results. Having these other metrics might have shed more light on this anomaly, but for now this is all we have.

If anyone knows of other storage (or stack) level benchmarks for other verticals, please let me know and I would be glad to dig into them to see if they provide some interesting views on storage performance.

Comments?

 Photo Credit(s): Stock market quotes in newspaper by Andreas Poike

Max MBPS and Mean RT Charts (c) 2015 Silverton Consulting, Inc., All Rights Reserved

VMware VSAN 6.0 all-flash & hybrid IO performance at SFD7

We visited with VMware’s VSAN team during last Storage Field Day (SFD7, session available here). The presentation was wide ranging but the last two segments dealt with recent changes to VSAN and at the end provided some performance results for both a hybrid VSAN and an all-Flash VSAN.

Some new features in VSAN 6.0 include:

  • More scalability, up to 64 hosts in a cluster and up to 200 VMs per host
  • New higher performance snapshots & clones
  • Rack awareness for better availability
  • Hardware based checksum for T10 DIF (data integrity feature)
  • Support for blade servers with JBODs
  • All-flash configurations
  • Higher IO performance

Even in the all-flash configuration there are two tiers of storage: a write cache tier and a capacity tier, both made of SSDs. These use two different classes of SSDs (high-endurance/low-capacity and low-endurance/high-capacity).

At the end of the session Christos Karamanolis (@Xtosk), Principal Architect for VSAN showed us some performance charts on VSAN 6.0 hybrid and all-flash configurations.

Hybrid VSAN performance

On the chart we see two plots showing IOmeter performance as VSAN scales across multiple nodes (hosts): on the left we have a 100% read workload and on the right a 70% read:30% write workload.

The hybrid VSAN configuration has four 10Krpm disks and one 400GB SSD on each host and ranges from 8 to 64 hosts. The bars on the chart show IOmeter IOPS and the line shows the average response time (or IO latency) for each VSAN host configuration. I am not a big fan of IOmeter, as it's overly simplified, but that's what VMware used.

The results show that in the 100% read case, the hybrid 64-host VSAN 6.0 cluster was able to sustain ~3.8M IOPS or over 60K IOPS per host. For the mixed 70:30 R:W workload, VSAN 6.0 was able to sustain ~992K IOPS or ~15.5K IOPS per host.

We see a pretty drastic IOPS degradation (~1/4 the 100% read performance) in the plot on the right, when write activity was added to the mix. But with VSAN's mirrored data protection, each VM write represents at least two VSAN backend writes, and at a 70:30 IOmeter R:W mix this would be ~694K read and ~298K write frontend IOPS, with mirroring turning those writes into ~595K writes to the backend storage.

Then of course, there's destage activity (data written to SSDs needs to be read off SSD and written to HDD), which also multiplies internal IO operations for every external write IOP. Let's say all that activity multiplies each external write by 6 (3 for each mirror: 1 to the write cache SSD, 1 to read it back and 1 to write to HDD). Multiplying that by the ~298K external write IOPS adds up to ~1.8M write-derived IOPS plus ~0.7M read-derived IOPS, or a total of ~2.5M IOPS, but this is still far away from the ~3.8M IOPS for 100% read activity. I am probably missing another IO or two in the write path (maybe virtual-to-physical mapping structures need to be updated) or have failed to account for more inter-cluster IO activity in support of the writes.
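Here's that backend IO estimate as a small sketch, using the same assumptions as above (2-way mirroring and ~3 internal IOs per mirror copy for a cached write); it's my arithmetic, not VMware's accounting.

    # Rough backend IO estimate for the 70:30 R:W IOmeter result, using the
    # assumptions in the text: 2-way mirroring, plus ~3 internal IOs per mirror
    # copy for a cached write (SSD write, SSD read-back, HDD write on destage).
    frontend_iops = 992_000
    read_iops     = frontend_iops * 0.70       # ~694K frontend reads
    write_iops    = frontend_iops * 0.30       # ~298K frontend writes

    mirror_copies  = 2
    ios_per_copy   = 3                          # write cache, read back, destage to HDD
    backend_writes = write_iops * mirror_copies * ios_per_copy   # ~1.8M
    backend_total  = read_iops + backend_writes                  # ~2.5M

    print(round(backend_writes / 1e6, 1), round(backend_total / 1e6, 1))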

In addition, we see the IO latency was roughly flat across the 100% read workload at ~2.25msec. and got slightly worse over the 70:30 R:W workload, ranging from ~2.5msec. at 4 hosts to a little over 3.0msec. with 64 hosts. Not sure why this got worse, but as hosts are scaled up it could induce more inter-cluster overhead.

(Chart: hybrid VSAN per-host IOPS with one vs. two disk groups)

In the chart to the right, we can see similar performance data for systems with one or two disk-groups. The message here is that with two disk groups on a host (2X the disk and SSD resources per host) one can potentially double the performance of the systems, to 116K IOPS/host on 100% read and 31K IOPS/host on a 70:30 R:W workload.

All-flash VSAN performance

(Chart: all-flash VSAN performance, single and dual disk group configurations)

Here we can see performance data for an 8-host, all-flash VSAN configuration. In this case the chart on the left is a single disk group and the chart on the right is a dual disk group all-flash configuration on each of the 8 hosts. The hosts were configured with one 400GB and three 800GB SSDs per disk group.

The various bars on the charts represent different VM working set sizes, 100, 200, 400 & 600GB for the single disk group chart and 100, 200, 400, 800 & 1200GB for dual disk group configurations. For the dual disk group, the 1200GB working set size is much bigger than a cache tier on each host.

The chart text is a bit confusing: the title of each plot says 70% read but the text under the two plots says 100% read. I must assume these were 70:30 R:W workloads. If we just look at the 8 hosts with a 400GB VM working set size, the all-flash VSAN 6.0 single disk group cluster was able to achieve ~37.5K IOPS/host and, with two disk groups, ~68.75K IOPS/host. Both roughly double the hybrid performance.

Response times degrade for both the single and dual disk groups as we increase the working set sizes. It’s pretty hard to see on the two charts but it seems to range from 1.8msec to 2.2msec for the single disk group and 1.8msec to 2.5 msec for the dual disk group. The two charts are somewhat misleading because they didn’t use the exact same working group sizes for the two performance runs but just taking the 100|200|400GB working set sizes, for the single disk group it looks like the latency went from ~1.8msec. to ~2.0msec and for the dual disk group from ~1.8msec to ~2.3msec. Why the higher degradation for the dual disk group is anyone’s guess.

The other thing that doesn’t make much sense is that as you increase the working set size the number of IOPS goes down, worse for the dual disk group than the single. Once again taking just the 100|200|400GB working group sizes this ranges from ~350K IOPS to ~300K IOPS (~15% drop) for the single disk group and ~700K IOPS to ~550K IOPS (~22% drop) for the dual disk group.

Increasing working set sizes should cause additional backend IO, as cache effectiveness should be proportionately less as the working set grows, which I think goes a long way toward explaining the degradation in IOPS with larger working sets. But I would have thought the degradation would have been proportionally similar for both the single and dual disk groups. The fact that the dual disk group did ~7% worse seems to indicate more overhead associated with dual disk groups than single disk groups, or perhaps they were running up against some host controller limit (a single controller supporting both disk groups).

~~~~

At the time (3 months ago) this was the first world-wide look at all-flash VSAN 6.0 performance. The charts are a bit more visible in the video than in my photos (?) and if you want to just see and hear Christos’s performance discussion check out ~11 minutes into the final video segment.

For more information you can also read these other SFD7 blogger posts on VMware’s session: