Chart of the month: SPC-1 LRT performance results

The above chart shows the top 12 LRT(tm) (least response time) results for the Storage Performance Council's SPC-1 benchmark. The vertical axis is the LRT in milliseconds (msec.) for the top benchmark runs. As can be seen, the two subsystems from TMS (RamSan400 and RamSan320) dominate this category with LRTs significantly less than 2.5msec. The IBM DS8300 and its turbo cousin come in next, followed by a slew of others.

The 1msec. barrier

Aside from the blistering LRTs of the TMS systems, one significant item in the chart above is that the two IBM DS8300 systems crack the <1msec. barrier using rotating media. I didn't think I would ever see the day; of course, this happened 3 or more years ago. Still, it's kind of interesting that there haven't been more vendors with subsystems that can achieve this.

LRT is probably most useful for high cache hit workloads. For these workloads the data comes directly out of cache and the only thing between a server and its data is subsystem IO overhead, measured here as LRT.

Encryption cheap and fast?

The other interesting tidbit from the chart is that the DS5300 with full drive encryption (FDE), using drives which I believe come from Seagate, cracks into the top 12 at 1.8msec, exactly equivalent to the IBM DS5300 without FDE. Now FDE from Seagate is a hardware drive encryption capability and might not be measurable at a subsystem level. Nonetheless, it shows that having data security need not reduce performance.

What is not shown in the above chart is that adding FDE to the base subsystem only costs an additional US$10K (the base DS5300 listed at US$722K and the FDE version at US$732K). That seems like a small price to pay for data security which, in this case, is simply: turn it on, generate keys, and forget it.

FDE is a hard drive feature where the drive itself encrypts all data written to and decrypts all data read from the drive, and it requires a subsystem-supplied drive key at power on/reset. In this way the data is never in plaintext on the drive itself. If the drive were taken out of the subsystem and attached to a drive tester, all one would see is ciphertext. Similar capabilities have been available in enterprise and SMB tape drives in the past, but to my knowledge the IBM DS5300 FDE result is the first disk storage benchmark with drive encryption.
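
To make the mechanics a bit more concrete, here is a toy Python model of a self-encrypting drive (using the cryptography package). The per-sector AES-CTR scheme and the key handling are my own illustrative assumptions, not how Seagate's FDE drives or the DS5300 actually implement it; the point is only that plaintext crosses the interface while nothing but ciphertext ever lands on the media.

```python
# Conceptual sketch of full drive encryption (FDE): data is encrypted as it is
# written and decrypted as it is read, keyed by a drive key supplied at power-on.
# The sector layout, key handling and cipher mode here are illustrative only.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class ToyFdeDrive:
    SECTOR_SIZE = 512

    def __init__(self, drive_key: bytes):
        self.key = drive_key          # supplied by the subsystem at power on/reset
        self.media = {}               # sector number -> ciphertext "on the platter"

    def _cipher(self, sector_no: int) -> Cipher:
        # Derive a per-sector nonce from the sector number (illustrative scheme)
        nonce = sector_no.to_bytes(16, "big")
        return Cipher(algorithms.AES(self.key), modes.CTR(nonce))

    def write_sector(self, sector_no: int, plaintext: bytes) -> None:
        enc = self._cipher(sector_no).encryptor()
        self.media[sector_no] = enc.update(plaintext) + enc.finalize()

    def read_sector(self, sector_no: int) -> bytes:
        dec = self._cipher(sector_no).decryptor()
        return dec.update(self.media[sector_no]) + dec.finalize()

drive = ToyFdeDrive(drive_key=os.urandom(32))
drive.write_sector(0, b"customer record".ljust(512, b"\0"))
print(drive.read_sector(0)[:15])      # b'customer record' -- plaintext via the drive
print(drive.media[0][:15])            # ciphertext -- all a drive tester would ever see
```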

I believe the key manager for the DS5300 FDE is integrated within the subsystem. Most shops would need a separate, standalone key manager for more extensive data security, and I believe the DS5300 can also interface with a standalone (IBM) key manager. In any event, it's still an easy and simple step towards increased data security for a data center.

The full report on the latest SPC results will be up on my website later this week but if you want to get this information earlier and receive your own copy of our newsletter – email me at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

Why is SSD performance a mystery?

SSDs! by gimpbully (cc) (from flickr)

SSD and/or SSS (solid state storage) performance is a mystery to most end-users. The technology is inherently asymmetrical, i.e., it reads much faster than it writes. I have written on some of these topics before (STEC’s new MLC drive, Toshiba’s MLC flash, Tape V Disk V SSD V RAM) but the issue is much more complex when you put these devices behind storage subsystems or in client servers.

Some items that need to be considered when measuring SSD/SSS performance include:

  • Is this a new or used SSD?
  • What R:W ratio will we use?
  • What blocksize should be used?
  • Do we use sequential or random I/O?
  • What block inter-reference interval should be used?

This list is necessarily incomplete but it’s representative of the sort of things that should be considered to measure SSD/SSS performance.

New device or pre-conditioned

Hard drives show little performance difference whether new or pre-owned, defect skips notwithstanding. In contrast, SSDs/SSSs can perform very differently when they are new versus when they have been used for a short period, depending on their internal architecture. A new SSD can write without erasure throughout its entire memory address space, but sooner or later wear leveling must kick in to equalize the use of the device's NAND memory blocks. Wear leveling causes both reads and rewrites of data during its processing. Such activity takes bandwidth and controller processing away from normal IO. If you have a new device it may take days or weeks of activity (depending on how fast you write) to attain the device's steady state, where each write causes some sort of wear leveling activity.
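
A toy simulation illustrates the fresh versus steady-state effect. The device model below (a fixed pool of never-written blocks, plus one read and one rewrite per reclaimed block) is my own simplifying assumption, not any vendor's actual flash translation layer or wear leveling policy:

```python
# Minimal sketch of why a "fresh" SSD behaves differently from a preconditioned one.
import random

def simulate_writes(total_blocks=1000, host_writes=5000):
    free_blocks = list(range(total_blocks))
    live = {}                  # logical block -> physical block
    background_ops = 0         # reads + rewrites done for wear leveling / reclaim

    for i in range(host_writes):
        logical = random.randrange(total_blocks)
        if free_blocks:
            # Fresh device: the write goes straight to a never-used block
            live[logical] = free_blocks.pop()
        else:
            # Steady state: every host write forces a read plus rewrite of some
            # victim block before the new data can be placed
            background_ops += 2
            live[logical] = random.randrange(total_blocks)
        if i in (total_blocks // 2, host_writes - 1):
            phase = "fresh" if free_blocks else "steady-state"
            print(f"after {i+1} writes ({phase}): {background_ops} background ops")

simulate_writes()
```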

R:W Ratio

Historically, hard drives have had slightly slower write seeks than reads, due to the need to be more accurately positioned to write data than to read it. As such, it might take 0.5msec longer to write than to read 4K bytes. But for SSDs the problem is much more acute, e.g., read times can be in microseconds while write times can approach milliseconds for some SSDs/SSSs. This is due to the nature of NAND flash: a block must be erased before it can be programmed (written), and the programming process takes a lot longer than a read.

So the question for measuring SSD performance is what read to write (R:W) ratio to use. Historically an R:W of 2:1 was used to simulate enterprise environments, but most devices are starting to see more like 1:1 for enterprise applications due to the caching and buffering provided by controllers and host memory. I can't speak as well for desktop environments, but it wouldn't surprise me to see 2:1 used to simulate desktop workloads as well.

SSDs operate a lot faster with a 1000:1 workload than with a 1:1 workload. Most SSD data sheets tout a significant read I/O rate, but only for 100% read workloads. This is like a subsystem vendor quoting a 100% read cache hit rate (which some do); it is unusual in the real world of storage.
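
A quick back-of-the-envelope calculation shows why the R:W ratio matters so much. The 25 microsecond read and 250 microsecond write service times below are illustrative assumptions, not any particular device's data sheet numbers:

```python
# Average service time is the read/write-weighted mean of two very different latencies.
def avg_service_time_us(read_frac: float, read_us: float = 25.0, write_us: float = 250.0) -> float:
    return read_frac * read_us + (1.0 - read_frac) * write_us

for ratio, read_frac in [("100:0", 1.0), ("2:1", 2/3), ("1:1", 0.5)]:
    t = avg_service_time_us(read_frac)
    print(f"R:W {ratio:>5}: avg {t:6.1f} us -> ~{1e6 / t:,.0f} IOPS per device")
```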

Blocksize to use

Hard drives are not insensitive to blocksize, as blocks can potentially span tracks, which requires track-to-track seeks to read or write them. However, SSDs can also have some adverse interactions with varying blocksizes. This is dependent on the internal SSD architecture and is due to over-optimizing write performance.

With an SSD, you erase a block of NAND and write a page or sector of NAND at a time. As writes take much longer than reads, many SSD vendors add parallelism to improve write throughput. Parallelism writes, or programs, multiple sectors at the same time. Thus, if your blocksize is an integral multiple of the multi-sector size, write performance is great; if not, performance can suffer.
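
Here is a small sketch of that alignment effect. The 32KB internal stripe size is an assumption for illustration only, not a real device parameter:

```python
# If the device programs a fixed multi-page stripe per internal write, any host
# blocksize that is not a multiple of that stripe wastes part of every program op.
import math

def program_efficiency(host_block_bytes: int, stripe_bytes: int = 32 * 1024) -> float:
    stripes_programmed = math.ceil(host_block_bytes / stripe_bytes)
    return host_block_bytes / (stripes_programmed * stripe_bytes)

for blk in (4096, 32 * 1024, 48 * 1024, 64 * 1024):
    eff = program_efficiency(blk)
    print(f"host blocksize {blk // 1024:3d} KB -> {eff:6.1%} of programmed bytes are host data")
```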

In all honesty, similar issues exist with hard drive sector sizes. If your blocksize is an integral multiple of the drive sector size then performance is great; if not, too bad. In contrast to SSDs, drive sector size is often configurable at the device level.

Sequential vs. random IO

Hard drives perform sequential IO much better than random IO. For SSDs this is not much of a problem, as once wear leveling kicks in, it’s all random to the NAND flash. So when comparing hard drives to SSDs the level of sequentiality is a critical parameter to control.

Cache hit rate

The block inter-reference interval simply measures how often the same block is re-referenced. This is important for caching devices and systems because it ultimately determines the cache hit rate (reading data directly from cache instead of the device storage). Hard drives have onboard caches of 8 to 32MB today. SSD drives also have a DRAM cache for data buffering and other uses. SSDs typically publicize their cache size, so in order to ensure 0 cache hits one needs a block inter-reference interval close to the device's capacity. Not a problem today with 146GB devices, but as they move to 300GB and larger it becomes more of a problem to completely characterize device performance.
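
To see how the block inter-reference interval drives cache hit rate, here is a small LRU cache simulation. The 32MB cache (8192 4KB blocks) and the cyclic access pattern are illustrative assumptions:

```python
# Once the inter-reference interval exceeds the cache's capacity in blocks, the hit
# rate collapses and you are finally measuring the media, not the cache.
from collections import OrderedDict

def hit_rate(interval_blocks: int, cache_blocks: int = 8192, references: int = 100_000) -> float:
    cache, hits = OrderedDict(), 0
    for i in range(references):
        block = i % interval_blocks          # re-reference each block every `interval_blocks` I/Os
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)    # evict the least recently used block
    return hits / references

for interval in (1024, 8192, 16384):
    print(f"inter-reference interval {interval:6d} blocks -> hit rate {hit_rate(interval):.1%}")
```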

The future

So how do we get a handle on SSD performance? SNIA and others are working on a specification on how to measure SSD performance that will one day become a standard. When the standard is available we will have benchmarks and service groups that can run these benchmarks to validate SSD vendor performance claims. Until then – caveat emptor.

Of course most end users would claim that device performance is not as important as (sub)system performance which is another matter entirely…

What if there were no backup?

Data Center by Mathieu Ramage (Flickr)
If backup didn’t exist and you had to start over to protect your data how would you do it today?

I think five things are important to protect data in today's data center:

  • Any data ever created in the data center or on-the-road needs to be protected,
  • Data restores must be under end-user control,
  • Data needs to be copied/replicated/mirrored offsite to support disaster recovery,
  • Multiple data copies should exist only to satisfy some data protection policy – one copy is mandatory, two copies (not co-located) would be required to support higher availability, and
  • Data protection activities should not interfere with or interrupt ongoing data center operations

All this can and is being done with backup and other systems today, but most of these products and features grew out of earlier phases of computing. With today's technology, many of these capabilities may no longer be necessary if one could just rethink data protection from the ground up.

Data Versioning

I think some form of data/file/block versioning could easily support the requirement of restoring any data ever created. Versioning systems have existed in the past and could certainly be re-constituted today with some sort of standards. The cost of storing all that data might be a concern, but storage costs continue to decrease and, if the multiple copies retained for data protection can be eliminated, it might just be a wash. Versioning could just as easily be provided for the laptop, and once new versions of data are created, old versions could be moved off the laptop to the data center for safekeeping and to free up space.
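
As a sketch of what such versioning might look like, here is a minimal content-addressed version store. The naming and layout are hypothetical, not a reference to any shipping product, but note that identical versions are stored only once:

```python
# Every save becomes an immutable, content-addressed version, so any data ever
# created can be restored; duplicate content is stored once.
import hashlib, time

class VersionStore:
    def __init__(self):
        self.objects = {}    # content hash -> data (identical versions stored once)
        self.history = {}    # path -> list of (timestamp, content hash)

    def save(self, path: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.objects.setdefault(digest, data)          # dedupe identical content
        self.history.setdefault(path, []).append((time.time(), digest))
        return digest

    def versions(self, path: str):
        return self.history.get(path, [])

    def restore(self, path: str, version_index: int = -1) -> bytes:
        _, digest = self.history[path][version_index]
        return self.objects[digest]

store = VersionStore()
store.save("/budget.xls", b"q1 numbers")
store.save("/budget.xls", b"q1 numbers, revised")
print(len(store.versions("/budget.xls")))       # 2 versions retained
print(store.restore("/budget.xls", 0))          # restore the original version
```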

End-user visibility

End-user restoration requires some facility to explore the end-user's data protection file-name and block space. Once this is available, identifying which version needs to be restored and where to restore it should be straightforward. All backup applications provide a backup directory, and a few even allow end-user access to perform data restores. While all this works well with files, having an end-user do this for block storage would require more sophistication. Nonetheless, both file and block restores seem entirely feasible once data versioning is in place.

Ubiquitous replication

The requirement to have data copies offsite is certainly feasible today. Replication can be done in hardware or software, synchronously, semi-synchronously, and/or asynchronously. Replication today can solve this problem, but replicating to separate data centers costs too much. Enter the storage cloud. With the storage cloud we could pay just for the data bandwidth and storage to support our data protection needs and no more. Old data versions could be replicated as new versions are created. Protecting data written to a new version is more problematic, but some sort of write splitter (a la CDP) could be used to create a replica of this data as well.
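
A rough sketch of such a write splitter might look like the following; the interfaces are hypothetical and only meant to show the shape of the idea (apply the write locally, queue a copy for the remote or cloud target):

```python
# CDP-style write splitter: each write lands locally and a copy is queued for
# asynchronous replication to a remote or cloud target.
from queue import Queue

class WriteSplitter:
    def __init__(self, local_volume: dict, replica_queue: Queue):
        self.local = local_volume
        self.replica_queue = replica_queue

    def write(self, block_no: int, data: bytes) -> None:
        self.local[block_no] = data                       # primary write path
        self.replica_queue.put((block_no, data))          # split copy for the remote site

def drain_to_remote(replica_queue: Queue, remote_volume: dict) -> None:
    # Asynchronous replication worker (drained synchronously here for brevity)
    while not replica_queue.empty():
        block_no, data = replica_queue.get()
        remote_volume[block_no] = data

local, remote, q = {}, {}, Queue()
splitter = WriteSplitter(local, q)
splitter.write(7, b"new version of block 7")
drain_to_remote(q, remote)
print(remote[7] == local[7])      # True: the remote copy tracks new writes
```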

Policy driven

Having a policy-driven data protection system that only stores a minimal number of copies of data seems difficult to support. Yet, this seems to be what incremental-only backup software and archive products support today. For other backup software, using a deduplicating VTL gets you something very similar. Adding some policy sophistication to coordinate multiple data protection copies across multiple (potentially cloud) nodes and deduplicate all the unnecessary copies seems entirely feasible.
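
A policy could be as simple as a declaration of how many copies must exist and where, which the protection system then reconciles against what already exists. The field names and the two example policies below are assumptions for illustration:

```python
# The policy states how many copies must exist and where; the protection system
# creates or deduplicates copies to match.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    name: str
    min_copies: int            # total copies required; 1 is mandatory
    offsite_copies: int        # non co-located copies for higher availability / DR
    retain_versions_days: int

def copies_to_create(policy: ProtectionPolicy, existing_local: int, existing_offsite: int) -> dict:
    return {
        "local": max(policy.min_copies - policy.offsite_copies - existing_local, 0),
        "offsite": max(policy.offsite_copies - existing_offsite, 0),
    }

standard = ProtectionPolicy("standard", min_copies=1, offsite_copies=0, retain_versions_days=90)
critical = ProtectionPolicy("critical", min_copies=2, offsite_copies=1, retain_versions_days=3650)
print(copies_to_create(standard, existing_local=0, existing_offsite=0))  # {'local': 1, 'offsite': 0}
print(copies_to_create(critical, existing_local=1, existing_offsite=0))  # {'local': 0, 'offsite': 1}
```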

Operationally transparent

Not interrupting ongoing operations also seems tough to crack. Yet, many storage vendors provide snapshot technologies that copy block and/or file data without interrupting operations. However, coordinating vendor snapshot technologies from some central data protection manager is an essential piece of integration that continues to be lacking.

Can pieceparts solve the problem?

Yes, most of these features are purchasable as separate product offerings (except data versioning) but what’s missing is any one product that pulls all of this together and offers one integrated solution to data protection as I have described it.

The problem, of course, is that such functionality probably best belongs as part of the O/S or a hypervisor but they long ago relinquished any responsibility for data protection. Aside from the anti-trust and non-competitive nature of such a future data protection O/S offering, I only see isolated steps and no coordinated attack on today’s overall data protection problem.

Backup software vendors do a great job with what they have under their control, but they can't do it all; ditto for VTL providers, CDP vendors, replication products, etc. Piecemeal solutions can only take us so far down this path, but it's all we have today and, I fear, for the foreseeable future.

Dream time over for now, gotta backup some data…

IO Virtualization comes out

Snakes in a plane by richardmasoner (from flickr (cc))
Prior to last week's VMworld, I had never heard of IO virtualization products before – storage virtualization yes, but never IO virtualization. Then at the show I met with two vendors of IO virtualization products, Aprius and Virtensys.

IO virtualization takes the HBAs/CNAs/NICs that would normally be plugged into each server and consolidates them into a top-of-rack box that shares these IO cards among the servers. The top-of-rack box is connected to each server by extending the server's PCI-express bus.

Each individual server believes it has a local HBA/CNA/NIC card and acts accordingly. The top-of-rack box handles the mapping of each server to a portion of the HBA/CNA/NIC cards being shared. This all reminds me of server virtualization, which uses software to share the server's processor, memory and IO resources across multiple applications, but with one significant difference.
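
Conceptually, the top-of-rack box just has to maintain a mapping table along the lines of the toy model below; the card names and share percentages are purely illustrative assumptions, not either vendor's actual implementation:

```python
# Each physical server sees a "virtual" HBA/CNA/NIC; the appliance maps it onto a
# share of a real shared card.
shared_cards = {"cna0": 10_000, "cna1": 10_000}   # shared CNA ports, Mb/s each

server_map = {
    "server1": {"card": "cna0", "share": 0.25},   # server1 sees a private CNA,
    "server2": {"card": "cna0", "share": 0.25},   # actually 25% of cna0's bandwidth
    "server3": {"card": "cna1", "share": 0.50},
}

for server, assignment in server_map.items():
    bandwidth = shared_cards[assignment["card"]] * assignment["share"]
    print(f"{server}: virtual adapter on {assignment['card']}, up to {bandwidth:,.0f} Mb/s")
```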

How IO virtualization works

Aprius depends on the new SRIOV (Single Root I/O Virtualization [requires login]) standards. I am no PCI-express expert, but what this seems to do is allow an HBA/CNA/NIC PCI-express card to be a shared resource among a number of virtual servers executing within a physical server. What Aprius has done is sort of a "P2V in reverse": it allows a number of physical servers to share the same PCI-express HBA/CNA/NIC card in the top-of-rack solution.

Virtensys says its solution does not depend on SRIOV standards to provide IO virtualization. As such, it's not clear what's different, but the top-of-rack solution could conceivably share the hardware via software magic.

From an FC and HBA perspective there seem to be a number of questions as to how all this works.

  • Does the top-of-box solution need to be powered and booted up first?
  • How is FC zoning and LUN masking supported in a shared environment?

Similar networking questions should arise especially when one considers iSCSI boot capabilities.

Economics of IO virtualization

But the real question is one of economics. My lab owner friends tell me that a CNA costs about $800/port these days. Now when you consider that one could have 4-8 servers sharing each of these ports with IO virtualization, the economics become clearer. With a typical configuration of 6 servers:

  • For a non-IO virtualized solution, each server would have 2 CNA ports at a minimum, so this would cost you $1600/server or $9600 total.
  • For an IO virtualized solution, each server requires a PCI-extender, costing about $50/server or $300 total, plus at least one CNA (for the top-of-rack box) costing $1600, plus the cost of the top-of-rack box itself.

If the IO virtualization box costs less than $7.7K it would be economical. But IO virtualization providers also claim another savings, i.e., fewer switch ports need to be purchased because there are fewer physical network links. It's unclear to me what a 10GbE port with FCoE support costs these days, but my guess is about 2X what a CNA port costs, or another $1600/port; for the 6-server, dual-ported configuration that's ~$19.2K. Thus, the top-of-rack solution could cost almost $27K and still be more economical. When using IO virtualization to reduce HBAs and NICs as well, the top-of-rack solution could be even more economical.
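
For the record, here is that break-even arithmetic spelled out, using the figures above ($800/CNA port, $50 extenders, and my guess of 2X for switch port cost):

```python
# Break-even price for the IO virtualization appliance in the 6-server example.
servers = 6
cna_port_cost = 800          # per port (lab-owner figure quoted above)
ports_per_server = 2
extender_cost = 50           # PCIe extender per server

baseline_cna = servers * ports_per_server * cna_port_cost                # $9,600
iov_fixed = servers * extender_cost + ports_per_server * cna_port_cost   # extenders + one dual-ported CNA = $1,900
print("break-even appliance price (CNA savings only):", baseline_cna - iov_fixed)   # $7,700

switch_port_cost = 2 * cna_port_cost                                     # guess: 2X a CNA port
switch_savings = servers * ports_per_server * switch_port_cost           # $19,200 in avoided switch ports
print("break-even including switch ports:", baseline_cna - iov_fixed + switch_savings)  # $26,900
```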

Although the economics may be in favor of IO virtualization at the moment, time is running out. CNA, HBA and NIC ports are coming down in price as vendors ramp up production. These same factors will reduce switch port costs as well. Thus, the savings gained from sharing CNAs, HBAs and NICs across multiple servers will diminish over time. Also, the move to FCoE will eliminate HBAs and NICs and replace them with just CNAs, so there are even fewer ports to amortize.

Moreover, PCI-express extender cards will probably never achieve volumes similar to HBAs, NICs, or CNAs, so extender card pricing should remain flat. In contrast, any top-of-rack solution will share in the overall technology trends reducing server pricing, so the relative advantages of IO virtualization over top-of-rack switches should be a wash.

The critical question for the IO virtualization vendors is can they support a high enough fan-in (physical server to top-of-rack) to justify the additional costs in both capital and operational expense for their solution. And will they be able to keep ahead of the pricing trends of their competition (top-of-rack switch ports and server CNA ports).

On one side, as CNAs, HBAs, and NICs become faster and more powerful, no single application can consume all the throughput being made available. But on the other hand, virtualized servers are now running more applications on each physical server and, as such, amortizing port hardware over more and more applications.

Does IO virtualization make sense today, with HBAs at 8GFC and NICs and CNAs at 10GbE? Would it make sense in the future with converged networks? It all depends on port costs. As port costs go down, these products will eventually be squeezed.

The significant difference between server and IO virtualization is the fact that IO virtualization doesn't reduce hardware footprint: one top-of-rack IO virtualization appliance replaces a top-of-rack switch, and the server PCI-express slots used by CNAs/HBAs/NICs are now used by PCI-extender cards. In contrast, server virtualization reduced hardware footprint and costs from the start. The fact that IO virtualization doesn't reduce hardware footprint may doom this product.

VMworld and long distance Vmotion

Moving a VM from one data center to another

In all the blog posts/tweets about VMworld this week I didn't see much about long distance Vmotion. At Cisco's booth there was a presentation on how they partnered with VMware to perform Vmotion across 200 (simulated) miles.

I can't recall when I first heard about this capability, but many of us have heard about it before. However, what was new was that Cisco wasn't the only one talking about it. I met with a company called NetEx whose product HyperIP was being used to perform long distance Vmotion across over 2000 miles, and they had at least three sites actually running their systems doing this. Now I am sure you won't find NetEx on VMware's long HCL list, but what they have managed to do is impressive.

As I understand it, they have an optimized appliance (also available as a virtual [VM] appliance) that terminates the TCP session (used by Vmotion) at the primary site and then transfers the data payload using their own UDP protocol over to the target appliance, which re-constitutes (?) the TCP session and sends it back up the stack as if everything were local. According to NetEx CEO Craig Gust, their product typically offers a data payload of around ~90%, compared to standard TCP/IP at around 30%, which automatically gives them a 3X advantage (although he claimed a 6X speed or distance advantage, I can't seem to follow the logic).
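
The 3X figure falls straight out of the payload efficiencies; here is a quick sketch (the 800Mb/s link speed is just an illustrative number):

```python
# If ~90% of the link carries payload versus ~30% for the standard TCP path, the
# same pipe moves ~3x the data per unit time.
link_mbps = 800                      # illustrative link speed
standard_payload = 0.30
hyperip_payload = 0.90

print("standard effective throughput:", link_mbps * standard_payload, "Mb/s")   # 240 Mb/s
print("HyperIP effective throughput: ", link_mbps * hyperip_payload, "Mb/s")    # 720 Mb/s
print("advantage:", hyperip_payload / standard_payload, "x")                    # 3.0x
```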

How all this works with vCenter, DRS and HA I can only fathom, but my guess is that this long distance Vmotion appears to VMware as a local Vmotion. This way DRS and/or HA can control it all. How the networking is set up to support this is beyond me.

Nevertheless, all of this proves that it's not just one high-end networking company coming away with a proof of concept anymore; at least two companies exist, one of which has customers doing it today.

The Storage problem

In any event, accessing the storage at the remote site is another problem. It's one thing to transfer server memory and state information over 10-1000 miles; it's quite another to transfer TBs of data storage over the same distance. The Cisco team suggested some alternatives to handle the storage side of long distance Vmotion:

  • Let the storage stay in the original location. This would be supported by having the VM in the remote site access the storage across a network
  • Move the storage via long distance Storage Vmotion. The problem with this is that transferring TBs of data (even at 90% data payload on an 800Mb/s link) would take hours (see the sketch after this list). And 800Mb/s networking isn't cheap.
  • Replicate the storage via active-passive replication. Here the storage subsystem(s) concurrently replicate the data from the primary site to the secondary site
  • Replicate the storage via active-active replication where both the primary and secondary site replicate data to one another and any write to either location is replicated to the other
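
To put some rough numbers on the Storage Vmotion option above, here is a quick transfer time calculation; the 1TB and 10TB capacities are illustrative:

```python
# Even at 90% payload efficiency on an 800Mb/s link, a single TB takes hours to move.
def transfer_hours(capacity_tb: float, link_mbps: float = 800, payload_eff: float = 0.9) -> float:
    bits = capacity_tb * 1e12 * 8                  # decimal TB -> bits
    effective_bps = link_mbps * 1e6 * payload_eff  # usable bits per second
    return bits / effective_bps / 3600

for tb in (1, 10):
    print(f"{tb:2d} TB over 800 Mb/s at 90% payload: ~{transfer_hours(tb):.1f} hours")
```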

Now I have to admit, active-active replication, where the same LUN or file system is being replicated in both directions and updated at both locations simultaneously, seems to me like unobtainium, but I can be convinced otherwise. Nevertheless, the other approaches exist today and effectively deal with the issue, albeit with commensurate increases in expense.

The Networking problem

So now that we have the storage problem solved, what about the networking problem? When a VM is Vmotioned to another ESX server it retains its IP addressing so as to retain all its current network connections. Cisco has some techniques here where they can extend the VLAN (or subnet) from the primary site to the secondary site and leave the VM with the same network IP address as at the primary site. Cisco has a couple of different ways to extend the VLAN, optimized for HA, load balancing, scalability or protocol isolation and broadcast avoidance (all of which is described further in their white paper on the subject). Cisco did mention that their VLAN extension technology currently would not support sites more than 500 miles apart.

Presumably NetEx’s product solves all this by leaving the IP addresses/TCP port at the primary site and just transferring the data to the secondary site. In any event multiple solutions to the networking problem exist as well.

Now that long distance Vmotion can be accomplished, is it a DR tool, a mobility tool, a load balancing tool, or all of the above? That will need to wait for another post.

What’s happening with MRAM?

16Mb MRAM chips from Everspin

At the recent Flash Memory Summit there were a few announcements that show continued development of MRAM technology which can substitute for NAND or DRAM, has unlimited write cycles and is magnetism based. My interest in MRAM stems from its potential use as a substitute storage technology for today’s SSDs that use SLC and MLC NAND flash memory with much more limited write cycles.

MRAM has the potential to replace NAND SSD technology because of its write speed (current prototypes write at 400MHz, or a few nanoseconds) with the potential to go up to 1GHz. At 400MHz, MRAM is already much, much faster than today's NAND. And with no write limits, MRAM technology should be very appealing to most SSD vendors.

The problem with MRAM

The only problem is that current MRAM chips use 150nm chip design technology, whereas today's NAND ICs use 32nm chip design technology. All this means that current MRAM chips hold about 1/1000th the memory capacity of today's NAND chips (16Mb MRAM from Everspin vs. 16Gb NAND from multiple vendors). MRAM has to get onto the same (chip) design node as NAND to make a significant play for storage intensive applications.

It's encouraging that somebody at least is starting to manufacture MRAM chips rather than this technology remaining just lab prototypes. From my perspective, it can only get better from here…

SSD vs Drive energy use

Hard Disk by Jeff Kubina

Recently, the Storage Performance Council (SPC) introduced a new benchmark series, the SPC-1C/E, which provides detailed energy usage for storage subsystems. So far there have been only two published submissions in this category, but we look forward to seeing more in the future. The two submissions are for an IBM SSD and a Seagate Savvio (10Krpm) SAS attached storage subsystem.

My only issue with the SPC-1C/E reports is that they focus on a value of nominal energy consumption rather than reporting peak and idle energy usage. I understand that this is probably closer to what an actual data center would see as energy cost but it buries some intrinsic energy use profile differences.

SSD vs Drive power profile differences

The deltas for reported energy consumption for the two current SPC-1C/E submissions show a ~9.6% difference in peak versus nominal energy use for rotating media storage. Similar results for the SSD storage show a difference of ~1.7%. Taking these results for peak versus idle periods shows a difference for rotating media of ~28.5% and for SSD of ~2.8%.
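
For what it's worth, here is how such deltas get computed; the wattage figures below are hypothetical placeholders normalized to a 100W peak (the real numbers are in the SPC-1C/E full disclosure reports), chosen only to reproduce deltas of the same magnitude:

```python
# Peak-vs-nominal and peak-vs-idle deltas from a (hypothetical) power profile.
def deltas(peak_w, nominal_w, idle_w):
    peak_vs_nominal = (peak_w - nominal_w) / peak_w
    peak_vs_idle = (peak_w - idle_w) / peak_w
    return peak_vs_nominal, peak_vs_idle

for name, profile in {"rotating media": (100.0, 90.4, 71.5), "SSD": (100.0, 98.3, 97.2)}.items():
    pn, pi = deltas(*profile)
    print(f"{name:>14}: peak vs nominal {pn:.1%}, peak vs idle {pi:.1%}")
```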

So, the upside for SSDs is that you can drive them as hard as you want and it will cost you only a little bit more energy. The downside is that leaving them idle costs almost as much as driving them at peak IO rates.

Rotating media storage seems to have a much more responsive power profile. Drive them hard and it will consume more power, leave them idle and it consumes less power.

Data center view of storage power

Now these differences might not seem significant, but given the amount of storage in most shops they could represent significant cost differentials. Although SSD storage consumes less power, its energy use profile is significantly flatter than rotating media's, and it will always consume roughly that level of power (when powered on). On the other hand, rotating media consumes more power on average, but its power profile is more slanted than SSDs' and at peak workload it consumes much more power than when idle.

Usually, it's unwise to generalize from two results. However, everything I know says that these differences in their respective power profiles should persist across other storage subsystem results. As more results are submitted it should be easy to verify whether I am right.

Why virtualize now?

HP servers at School of Electrical Engineering, University of Belgrade, by lilit
I suppose it's obvious to most analysts why server virtualization is such a hot topic these days. Most IT shops purchase servers today that are way overpowered and could easily execute multiple applications. Today's overpowered servers are wasted running single applications and would easily run multiple applications if only an operating system could run them together without interference.

Enter virtualization: with virtualization, hypervisors can run multiple applications concurrently, and sometimes simultaneously, on the same hardware server without compromising application execution integrity. Multiple virtual machine applications execute on a single server under a hypervisor that isolates the applications from one another. Thus, they all execute together on the same hardware without impacting each other.

But why doesn’t the O/S do this?

Most computer purists would ask why not just run the multiple applications under the same operating system. But the operating systems that run servers nowadays weren't designed to run multiple applications together and, as such, also weren't designed to isolate them properly.

Virtualization hypervisors have had a clean slate to execute and isolate multiple applications. Thus, virtualization is taking over the data center floor. As new servers come in, old servers are retired and the applications that used to run on them are consolidated on fewer and fewer physical servers.

Why now?

Current hardware trends dictate that each new generation of server has more processing power and oftentimes, more processing elements than previous generations. Today’s applications are getting more sophisticated but even with added sophistication, they do not come close to taking advantage of all the processing power now available. Hence, virtualization wins.

What seems to be happening nowadays is that while data centers started out consolidating tier 3 applications through virtualization, now they are starting to consolidate tier 2 applications and tier 1 apps are not far down this path. But, tier 2 and 1 applications require more dedicated services, more processing power, more deterministic execution times and thus, require more sophisticated virtualization hypervisors.

As such, VMware and others are responding by providing more hypervisor sophistication, e.g., more ways to dedicate and split up the processing, networking and storage available to the physical server for virtual machine or application dedicated use. Thus they are preparing themselves for a point in the not too distant future when tier 1 applications run with all the comforts of a dedicated server environment but actually execute alongside other VMs in a single physical server.

VMware vSphere

We can see the start of this trend with the latest offering from VMware, vSphere. This product now supports more processing hardware, more networking options and stronger storage support. vSphere can also dedicate more processing elements to virtual machines. Such new features make it easier to support tier 2 applications today and tier 1 applications sometime in the future.