We were at Nimble Storage (videos of their sessions) for Storage Field Day 10 (SFD10) last week and they presented some interesting IO statistics from data analysis across their 7,500-customer install base using InfoSight.
As I understand it, the data are from all customers that have maintenance and are currently connected to InfoSight, their SaaS service for Nimble Storage arrays. The data represent all IO over the course of a single month across the customer base. Nimble wrote a white paper summarizing their high-level analysis, called Busting the myth of storage block size.
In the chart above there are two interpretations representing the same data. The incorrect one (on the right in the picture above) shows an IO distribution averaged across all arrays. The correct one (on the left in the picture above) shows an IO distribution aggregated across the entire base, regardless of array. It's a subtle difference, but some arrays do a lot more IO than others, and if you want to understand the true IO distribution, you need to aggregate IO counts across the whole base.
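To make that distinction concrete, here's a minimal sketch with made-up numbers (not Nimble's data) showing how averaging each array's own percentages differs from pooling raw IO counts across the base:

```python
# Minimal sketch with hypothetical per-array IO counts by block size band,
# contrasting "average of per-array percentages" with "pooled across the base".
from collections import Counter

arrays = [
    {"4-8KiB": 900, "64-128KiB": 100},   # busy array, mostly small IOs
    {"4-8KiB": 10,  "64-128KiB": 90},    # quiet array, mostly large IOs
]

# "Incorrect" view: average of each array's own percentage distribution
per_array = [{k: v / sum(a.values()) for k, v in a.items()} for a in arrays]
avg_of_pcts = {k: sum(p[k] for p in per_array) / len(per_array)
               for k in per_array[0]}

# "Correct" view: pool raw IO counts first, then compute percentages
pooled = Counter()
for a in arrays:
    pooled.update(a)
base_total = sum(pooled.values())
pooled_pcts = {k: v / base_total for k, v in pooled.items()}

print("average of per-array %:", avg_of_pcts)    # 4-8KiB = 0.50
print("pooled across the base:", pooled_pcts)    # 4-8KiB ≈ 0.83
```

In this toy example the busy array dominates the pooled view, so the base-wide distribution looks nothing like the average of the per-array distributions.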
It seems Nimble was most interested in showing that the true IO block size distribution is multi-modal, that is, not just 4-8KiB but 4-8KiB AND 64-128KiB. But it's more correct to say that there is a wide distribution across the entire spectrum of block sizes above 4KiB.
Surprise!
What I found more interesting was that they showed a much higher proportion of Write activity than I would have guessed.
In the chart above there are three bars for each block size band: the band's share of Read IOs, of Write IOs, and of all IOs combined. If we just focus on the [4,8{KiB}) block size band:
- Reads in the [4,8{KiB}) band account for ~23% of all read IO activity across their whole base;
- Writes in the [4,8{KiB}) band account for ~43% of all write IO activity across their whole base; and
- Combined, the [4,8{KiB}) band accounts for ~32% of all IO activity across their whole base.
The other block size bands have varying amounts of read vs. write activity, with the majority being reads. But when you look at the chart in total, it seems the Read:Write IO mix is much closer to 50:50 than to 80:20 or even 67:33. This is not typical for the industry as I understand it, or at least wasn't the case historically. What's changed?
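As a rough sanity check on that 50:50 impression, here's a small sketch (assuming, as above, that the three bars per band are the band's share of all reads, of all writes, and of all IOs) that backs out the overall read fraction from either band's numbers:

```python
# If band_all = r * band_reads + (1 - r) * band_writes, solve for the
# overall read fraction r using one band's shares off the chart.
def overall_read_fraction(band_reads, band_writes, band_all):
    return (band_all - band_writes) / (band_reads - band_writes)

# Approximate shares read off the chart above
print(overall_read_fraction(0.23, 0.43, 0.32))   # [4,8KiB)  -> ~0.55
print(overall_read_fraction(0.22, 0.13, 0.18))   # [8,16KiB) -> ~0.56
```

Both bands imply reads are roughly 55% of all IOs across the base, which lines up with a near 50:50 read:write mix.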
Are flash storage systems changing the IO mix?
No one can deny that flash storage and AFAs have increased IOPS (IO activity per array) and reduced IO latency or response time. Should this lead to a change in IO mix? A couple of thoughts come to mind:
- Application mix is changing as we get more and faster IO with flash. Today's more modern applications are less IO-optimized and thus issue a higher percentage of write IO than previously observed.
- IO mix is changing because server-side caching (in SSD/flash and host memory) is taking more of the read hits away from the storage system, transforming what a storage array sees into proportionally higher write IO activity.
- Flash and AFAs have removed IO bottlenecks that were holding back application write IO activity, and now, with much faster reads AND writes, a truer IO mix is emerging.
It's likely a mix of all of the above, and no doubt I've missed one or two other crucial aspects of what's happening here. But the fact is that the Read:Write mix is closer to even than we had believed before the advent of flash storage and AFAs.
What about throughput?
If you are more interested in data transfer or throughput than in IOs, the picture looks closer to what we have seen historically, I believe. First, realize that each block size bracket is 2X the previous one. So even though there are more IOs being done in the [4,8{KiB}) bracket than in the [8,16{KiB}) bracket, more data is transferred per IO in the [8,16{KiB}) bracket. If one just focuses on the [8,16{KiB}) block size band:
- Reads in the [8,16{KiB}) band account for ~22% of all read IO activity across their whole base;
- Writes in the [8,16{KiB}) band account for ~13% of all write IO activity across their whole base; and
- Combined, the [8,16{KiB}) band accounts for ~18% of all IO activity across their whole base.
But this block size range is 2X the [4,8{KiB}) block size. To compare throughput on an equivalent basis, halve the [4,8{KiB}) band's IO share: it comes to ~16% (32%/2) vs. ~18% for the [8,16{KiB}) band. And within the [8,16{KiB}) band, the read share is ~1.7X the write share (22% vs. 13%), which almost counterbalances the write-heavy mix seen in the [4,8{KiB}) band.
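To make that equivalence concrete, here's a minimal sketch that weights each band's IO share by an assumed midpoint transfer size (6KiB and 12KiB are my assumption, not Nimble's figures) to compare the two bands' relative data throughput:

```python
# Weight each band's IO share by an assumed representative transfer size
# (band midpoints; an assumption, not Nimble's data) to compare throughput.
bands = {
    "[4,8KiB)":  {"io_share": 0.32, "avg_kib": 6},
    "[8,16KiB)": {"io_share": 0.18, "avg_kib": 12},
}

weighted = {name: b["io_share"] * b["avg_kib"] for name, b in bands.items()}
total = sum(weighted.values())
for name, w in weighted.items():
    print(f"{name}: ~{w / total:.0%} of the throughput done by these two bands")
# [4,8KiB)  -> ~47%
# [8,16KiB) -> ~53%
```

So, under those assumptions, the [8,16{KiB}) band moves slightly more data even though it handles fewer IOs.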
The remaining, larger block size bands have more reads than writes. So, I believe more bandwidth is consumed by reads than by writes; it looks like maybe a 60:40 read:write split in data throughput. Again, not what I would have said if you had asked me the week before SFD10, but closer to what I would have believed.
Other Nimble analysis
I had an interesting debate over Twitter about interpreting another of Nimble's charts, this one on block size IO distribution by self-identified application (I believe this was VDI) across their customer base. But I'll leave that to another post.
~~~~~
Comments?
Photo Credit(s): From the SFD10 Nimble Storage session and Nimble's Busting the myth of storage block size white paper
Disclosure: Nimble Storage gave us a gift of a grey baseball uniform with our names on it and their logo on the front.