SMB2.2 (CIFS) screams over InfiniBand

Microsoft MVP Summit 2010 by David McCarter (cc) (From Flickr)

I missed the MVP Summit last month in Redmond, but I heard there was more discussion of Server Message Block v2.2 (SMB2.2, previously known as CIFS) coming in Windows Server(R) 8.

The big news is that SMB2.2 now supports RDMA and can use InfiniBand (announced at the SNIA Developer Conference last fall). It also supports RDMA over Ethernet via RoCE (see my Intel buys QLogic's InfiniBand post) and iWARP.

SMB2.2 over InfiniBand performance

As reported last fall at the SNIA Developer Conference, SMB2.2 using RDMA over InfiniBand reached over 3.7GB/sec and 160K IOPS with no server configuration changes, using two QDR cards (the IOPS come from an SQLIO run using 8KB IOs, not SPECsfs2008). The pre-beta SMB2.2 code was running on commodity server hardware over 32Gbps InfiniBand links. I couldn't find any performance numbers for RoCE or iWARP, but I suspect that running over 10GbE these would be considerably slower than InfiniBand.
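To put those numbers in context, here's a quick back-of-the-envelope check (my own arithmetic, not from the SNIA presentation), ignoring PCIe and protocol overheads: a QDR 4x link signals at 40Gbps, but 8b/10b encoding leaves 32Gbps, or 4GB/sec, of usable data rate per link.

```python
# Back-of-the-envelope check on the 3.7GB/sec figure (my arithmetic, not SNIA's).
signal_rate_gbps = 40                            # QDR 4x signaling rate
data_rate_gbps = signal_rate_gbps * 8 / 10       # 8b/10b encoding -> 32Gbps usable
gbytes_per_sec_per_link = data_rate_gbps / 8     # -> 4GB/sec per link

cards = 2                                        # two QDR cards in the demo
ceiling = cards * gbytes_per_sec_per_link        # ~8GB/sec theoretical ceiling
measured = 3.7                                   # GB/sec reported at SDC

print(f"per-link data rate: {gbytes_per_sec_per_link:.0f} GB/sec")
print(f"two-card ceiling:   {ceiling:.0f} GB/sec")
print(f"measured fraction:  {measured / ceiling:.0%}")  # PCIe, protocol overhead, etc. eat the rest
```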

Hints are that performance gets even better with the released versions of the code coming out in Windows Server 8.

SMB2.2 gets even faster than NFS

We have noted in the past that SMB (CIFS) on average, shows better throughput (IOPS) performance than NFS in SPECsfs2008 results (for example, see our latest Chart-of-the-Month post on SPECsfs results). However, those results were all at best SMB2 or even SMB1 results, and commonly using Ethernet links.

NFS already supports InfiniBand but I am unsure whether it makes use of RDMA. Nevertheless, the significant speed up shown here for SMB2.2 will potentially take SPECsfs2008 SMB2.2 performance up to a whole new level.

Why InfiniBand?

As you may recall, InfiniBand is primarily deployed as a server-to-server interconnect and has been used extensively in the past for high performance computing environments. Nowadays, however, we find storage clusters, such as EMC Isilon, HP X9000 (Ibrix), IBM XIV and others, using InfiniBand for their inter-node communications. The use of InfiniBand in these storage clusters is probably due primarily to its superior latency over Ethernet.

But InfiniBand has another advantage: fast data throughput. When using RDMA, it can transfer data faster than almost any other networking protocol alive today. SMB2.2 takes advantage of this throughput boost by using RDMA only for large blocks of data and avoiding it for smaller blocks. I'm not sure what the cutoff is, but this would certainly help large SQL database queries, disk copies, and any other large file data transfer operations.
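A hypothetical sketch of that kind of size-based dispatch is below; the 64KB cutoff and the transport callables are made up for illustration, since the actual SMB2.2 threshold isn't public here.

```python
# Hypothetical illustration of a size-based transport choice: small I/Os take
# the normal send/receive path, large I/Os are moved with RDMA so the adapter
# can place data directly in the peer's memory. The 64KB cutoff is my
# assumption, not Microsoft's published value; the transports are stubs.
RDMA_CUTOFF = 64 * 1024

def send_payload(payload: bytes, send_inline, rdma_write) -> None:
    if len(payload) < RDMA_CUTOFF:
        # Small transfer: memory registration and key exchange would cost
        # more than they save, so just copy through the normal send path.
        send_inline(payload)
    else:
        # Large transfer: register the buffer once and let the adapter move
        # the bytes without further CPU copies.
        rdma_write(payload)

# Example with print stubs standing in for the real transports:
send_payload(b"x" * 4096, lambda p: print("inline", len(p)),
             lambda p: print("rdma", len(p)))       # takes the send path
send_payload(b"x" * (1 << 20), lambda p: print("inline", len(p)),
             lambda p: print("rdma", len(p)))       # takes the RDMA path
```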

Of course, with 56Gbps FDR InfiniBand available today and faster transfer rates coming (see the IBTA roadmap), there appears to be every reason to believe that this throughput advantage will continue for the foreseeable future. Its latency advantage is likely to be retained as well.

Now that Intel is pushing it, Mellanox continues to push it, and storage clusters are using it more frequently, we may start to see more storage protocols supporting InfiniBand.

We thought that FC only had Ethernet to worry about. But with SMB2.2 moving to InfiniBand and NFS already supporting it, can a fully functional FCoIB be far behind?

 

New file system capacity tool – Microsoft’s FSCT

Filing System by BinaryApe (cc) (from Flickr)

Jose Barreto blogged about a recent report Microsoft did on File Server Capacity Tool (FSCT) results (blog here, report here).  As you may know, FSCT is a free tool from Microsoft, released in September 2009, that verifies an SMB (CIFS) and/or SMB2 storage server configuration.

FSCT can be used by anyone to verify that an SMB/SMB2 file server configuration can adequately support a particular number of users doing typical Microsoft Office/Windows Explorer work with home folders.

Jetstress for SMB file systems?

FSCT reminds me a little of Microsoft’s Jetstress tool used in the Exchange Solution Review Program (ESRP) which I have discussed extensively in prior blog posts (search my blog) and other reports (search my website).  Essentially, FSCT has a simulated “home folder” workload which can be dialed up or down by the number of users selected.  As such, it can be used to validate any NAS system which supports SMB/SMB2 or CIFS protocol.

Both Jetstress and FSCT are capacity verification tools.  However, I look at all such tools as a way of measuring system performance for a solution environment and FSCT is no exception.

Microsoft FSCT results

In Jose’s post on the report he discusses performance for five different storage server configurations running anywhere from 4500 to 23,000 active home directory users, employing white box servers running Windows (Storage) Server 2008 and 2008 R2 with various server hardware and SAS disk configurations.

Network throughput ranged from 114 to 650MB/sec. These are certainly respectable numbers, though somewhat orthogonal to the NFS and CIFS throughput operations/second reported by SPECsfs2008.  It's unclear whether FSCT reports activity in operations/second.

Microsoft's FSCT reports did not specifically state what the throughput was other than at the scenario level.  I assume the network throughput Jose reported was extracted concurrently with the test run from something akin to Perfmon.  FSCT itself seems to report performance only as the number of home folder scenarios sustainable per second and the number of users.  Perhaps there is an easy way to convert user scenarios to network throughput?
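If one knew the average bytes moved per home folder scenario (say, from a Perfmon counter or a network trace taken during a run), the conversion would be trivial; here's a sketch with placeholder numbers, not measured FSCT values.

```python
# FSCT reports home folder scenarios/sec rather than MB/sec. Given an average
# bytes-moved-per-scenario figure (a placeholder here, it would have to come
# from tracing a single scenario), the conversion is simple arithmetic.
def scenarios_to_mb_per_sec(scenarios_per_sec: float,
                            bytes_per_scenario: float) -> float:
    return scenarios_per_sec * bytes_per_scenario / 1_000_000

# e.g., if one scenario moved ~2MB on average, 300 scenarios/sec would imply
# roughly 600MB/sec on the wire
print(scenarios_to_mb_per_sec(300, 2_000_000))   # -> 600.0
```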

While the results for the file server runs look interesting, I always want more. For whatever reason, I have lately become enamored with ESRP's log playback results (see my latest ESRP blog post), and it's not clear whether FSCT reports anything similar.  Something like file server simulated backup performance would suffice from my perspective.

—-

Despite that, another performance tool is always of interest and I am sure my readers will want to take a look as well.  The current FSCT tester can be downloaded here.

Not sure whether Microsoft will be posting vendor results for FSCT similar to what they do for Jetstress via ESRP, but that would be a great next step.  Getting the vendors onboard is another problem entirely.  SPECsfs2008 took almost a year to get the first 12 (NFS) submissions and today, almost 9 months later, there are still only ~40 NFS and ~20 CIFS submissions.

Comments?

IBM Scale out NAS (SONAS) v1.1.1

IBM SONAS from IBM's Flickr stream (c) IBM

We have discussed other scale out NAS products on the market, such as Symantec's FileStore, IBRIX reborn as HP networked storage, and why SO/CFS, why now (scale out/cluster file systems), in previous posts but haven't talked about IBM's high-end Scale Out NAS (SONAS) product before. There was an announcement yesterday of a new SONAS version, so it seemed an appropriate time to cover it.

As you may know, SONAS packages up IBM's well-known General Parallel File System (GPFS) and surrounds it with pre-packaged hardware and clustering software that supports a high availability cluster of nodes serving native CIFS and NFS clients.

One can see SONAS is not much to look at from the outside but internally it comes with three different server components:

  • Interface nodes – which provide native CIFS, NFS and now with v1.1.1 HTTP interface protocols to the file store.
  • Storage nodes – which supply backend storage device services.
  • Management nodes – which provide for administration of the SONAS storage system.

The standard SONAS system starts with a fully integrated hardware package within one rack with 2 management nodes, 2 to 6 interface nodes, and 2 storage pods (one storage pod consists of 2 storage nodes and 60 to 240 attached disk drives).  The starter system can then be expanded with either a single interface rack with up to 30 interface nodes and/or multiple storage racks with 2 storage pods in each rack.

With v1.1.1, a new hardware option has been provided, specifically the new IBM SONAS gateway for IBM’s XIV storage.  With this new capability SONAS storage nodes can now be connected to an IBM XIV storage subsystem using 8GFC interfaces through a SAN switch.

Some of the other new functionality released in SONAS v1.1.1 includes:

  • New policy engine – used for internal storage tiering and for external/hierarchical storage through IBM's Tivoli Storage Manager (TSM) product. Recall that SONAS supports both SAS and SATA disk drives and now one can use policy management to migrate files between internal storage tiers.  Also, with the new TSM interface, data can now be migrated out of SONAS onto tape or any of the over 600 other storage devices supported by TSM's Hierarchical Storage Management (HSM) product.
  • Asynch replication – used for disaster recovery/business continuance.  SONAS uses standard Linux-based rsync capabilities to replicate file systems from one SONAS cluster to another.  SONAS replication copies only the changed portions of files within the file systems being replicated and uses SSH data transfer to encrypt data-in-flight between the two SONAS systems (a rough sketch of the general approach follows this list).
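For illustration only, here's a minimal sketch, in Python driving plain rsync over ssh, of that kind of incremental, encrypted file system replication; the paths and host name are placeholders, and this is not IBM's actual SONAS implementation.

```python
# Illustration only: incremental, encrypted replication in the style described
# above, driven with standard rsync over ssh. rsync's delta algorithm sends
# only the changed portions of files, and ssh encrypts data in flight.
import subprocess

def replicate(source_dir: str, remote_host: str, remote_dir: str) -> None:
    subprocess.run(
        [
            "rsync",
            "-a",           # preserve ownership, permissions and times
            "--delete",     # mirror deletions on the target file system
            "--partial",    # resume interrupted transfers of large files
            "-e", "ssh",    # encrypt data in flight
            f"{source_dir}/",
            f"{remote_host}:{remote_dir}/",
        ],
        check=True,
    )

replicate("/sonas/fs1", "dr-cluster", "/sonas/fs1")   # placeholder names
```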

There were some other minor enhancements in this announcement, namely higher capacity SAS drive support (now 600GB), NIS authentication support, increased cache per interface node (now up to 128GB), and the already mentioned new HTTP support.

In addition, IBM stated that a single interface node can pump out 900MB/sec (out of cache) and 6 interface nodes can sustain over 5GB/sec (presumably also from cache).  SONAS can currently scale up to 30 interface nodes, but this doesn't appear to be an architectural limitation, rather just what has been validated by IBM so far.
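A quick check on how linear that scaling is (my arithmetic, not IBM's):

```python
# Simple scaling check on IBM's quoted out-of-cache numbers.
single_node_mb_s = 900                           # MB/sec per interface node
nodes = 6
linear_gb_s = single_node_mb_s * nodes / 1000    # 5.4GB/sec if perfectly linear
quoted_gb_s = 5.0                                # "over 5GB/sec"
print(f"{quoted_gb_s / linear_gb_s:.0%} of linear scaling or better")   # ~93%
```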

Can't wait to see this product show up in SPECsfs2008 performance benchmarks to see how it compares to other SO and non-SO file system products.

SPECsfs2008 CIFS ORT performance – chart of the month

(c) 2010 Silverton Consulting Inc., All Rights Reserved

The above chart on SPECsfs(R) 2008 results was discussed in our latest performance dispatch that went out to SCI’s newsletter subscribers last month.  We have described Apple’s great CIFS ORT performance in previous newsletters but here I would like to talk about NetApp’s CIFS ORT results.

NetApp had three new CIFS submissions published this past quarter, all using the FAS3140 system but with varying drive counts/types and Flash Cache installed.  Recall that Flash Cache used to be known as PAM-II and is an onboard system card which holds 256GB of NAND memory used as an extension of system cache.  This differs substantially from using NAND in an SSD as a separate tier of storage as many other vendors currently do.  The newly benchmarked NetApp systems included:

  • FAS3140 (FCAL disks with Flash Cache) – used 56-15Krpm FC disk drives with 512GB of Flash Cache (2-cards)
  • FAS3140 (SATA disks with Flash Cache) – used 96-7.2Krpm SATA disk drives with 512GB of Flash Cache
  • FAS3140 (FCAL disks) – used 242-15Krpm FC disk drives and had no Flash Cache whatsoever

If I had to guess, the point of this exercise was to show that one can offset the need for many fast hard disk drives either by using Flash Cache with significantly fewer (~1/4 as many) fast disk drives, or by using Flash Cache with somewhat more SATA drives.  In another chart from our newsletter one could see that all three systems delivered very similar CIFS throughput results (CIFS Ops/Sec.), but in CIFS ORT (see above) the differences between the 3 systems are much more pronounced.

Why does Flash help CIFS ORT?

As one can see, the best CIFS ORT performance of the three came from the FAS3140 with FCAL disks and Flash Cache, which managed a response time of ~1.25 msec.  The next best performer was the FAS3140 with SATA disks and Flash Cache, with a CIFS ORT of just under ~1.48 msec.  The relatively worst performer of the bunch was the FAS3140 with only FCAL disks, which came in at ~1.84 msec CIFS ORT.  So why the different ORT performance?

Mostly the better performance is due to the increased cache available in the Flash Cache systems.  If one looks at the SPECsfs2008 workload, less than 30% of it is read and write data activity; the rest is what one might call meta-data requests (query path info @21.5%, query file info @12.9%, create = close @9.7%, etc.).  While read data may not be very cache friendly, most of the meta-data activity and all of the write activity are cache friendly.  Meta-data activity is more cache friendly primarily because the requests are relatively small, and any write data goes to cache before being destaged to disk.  As such, this cache friendly workload generates, on average, better response times when one has larger amounts of cache.
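A toy model makes the mechanism concrete; the hit ratios and service times below are made-up illustrations, not NetApp's measurements.

```python
# Toy model of why a larger, effective cache lowers ORT: overall response time
# is roughly the hit-ratio-weighted average of cache and disk service times.
# All numbers here are illustrative assumptions.
def blended_ort(hit_ratio: float, cache_ms: float = 0.2, disk_ms: float = 6.0) -> float:
    return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms

for hit in (0.75, 0.85, 0.95):
    print(f"hit ratio {hit:.0%}: ~{blended_ort(hit):.2f} msec")
# Raising the hit ratio from 75% to 95% cuts the blended response time by
# more than a factor of three in this toy example.
```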

For proof one need look no further than the relative ORT performance of the FAS3140 with SATA and Flash Cache vs. the FAS3140 with just FCAL disks.  The Flash Cache/SATA drive system delivered ~25% better ORT than the FCAL-only system, even with significantly slower and far fewer disk drives.

The full SPECsfs 2008 performance report will go up on SCI’s website later this month in our dispatches directory.  However, if you are interested in receiving this report now and future copies when published, just subscribe by email to our free newsletter and we will email the report to you now.

More cloud storage gateways come out

Strange Clouds by michaelroper (cc) (from Flickr)

Multiple cloud storage gateways have either been announced or are coming out in the next quarter or so. We have talked before about Nasuni's file cloud storage gateway appliance, but now that more are out one can get a better appreciation of the cloud gateway space.

StorSimple

Last week I was talking with StorSimple, which just introduced a cloud storage gateway that provides an iSCSI block protocol interface to cloud storage with onsite data caching.  Their appliance offers a cloud storage cache residing on disk and/or optional flash storage (SSDs), providing iSCSI storage speeds for highly active working set data residing in the cache and cloud storage speeds for non-working set data.

Data is deduplicated to minimize storage space requirements.  In addition, data sent to the cloud is compressed and encrypted. Both deduplication and compression can reduce WAN bandwidth requirements considerably.  Their appliance also offers snapshots and "cloud clones".  Cloud clones are complete offsite (cloud) copies of a LUN which can then be kept in sync with the gateway LUNs by copying daily change logs and applying them.
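The sketch below is a conceptual illustration of that kind of data reduction pipeline (chunk, fingerprint for dedupe, compress, then upload), not StorSimple's code; the chunk size, the in-memory store, and the omitted encryption step are all stand-ins.

```python
# Conceptual sketch of chunk-level dedupe plus compression before upload.
import hashlib
import zlib

CHUNK = 64 * 1024      # assumed fixed chunk size
store = {}             # fingerprint -> compressed chunk (stands in for cache/cloud)

def write(data: bytes) -> list:
    """Return the chunk fingerprints that reference this data after dedupe."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:                    # dedupe: only new chunks get stored/sent
            store[fp] = zlib.compress(chunk)   # compress (and, in practice, encrypt) first
        refs.append(fp)
    return refs

refs = write(b"the same block of data" * 50000)
print(len(refs), "chunk references,", len(store), "unique chunks kept")
```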

StorSimple works with Microsoft's Azure, AT&T, EMC Atmos, Iron Mountain and Amazon's S3 cloud storage services.   A single appliance can support multiple cloud storage providers, segregated on a LUN basis, although how cross-LUN deduplication works across multiple cloud storage providers was not discussed.

Their product can be purchased as a hardware appliance with a few hundred GB of NAND flash storage and up to 150TB of SATA storage.  It can also be purchased as a virtual appliance at lower cost, but with much lower performance.

Cirtas

In addition to StorSimple, I have talked with Cirtas, which has yet to completely emerge from stealth. What's apparent from their website is that the Cirtas appliance provides "storage protocols" to server systems and can store data directly on storage subsystems or on cloud storage.

"Storage protocols" could mean any block storage protocol, which could be FC and/or iSCSI, but alternatively it might mean file protocols; I can't be certain.  Having access to independent, standalone storage arrays may mean that clients can use their own storage as a 'cloud data cache'.  It's unclear how Cirtas talks to its onsite backend storage, but presumably this is FC and/or iSCSI as well.  And somehow some of this data is stored out on the cloud.

So from our perspective it looks somewhat similar to StorSimple, with the exception that Cirtas uses external storage subsystems for its cloud data cache vs. StorSimple's internal storage.  Few other details were publicly available as this post went out.

Panzura

Although I have not talked directly with Panzura, they seem to offer a unique form of cloud storage gateway, one that is specific to certain applications.  For example, the Panzura SharePoint appliance actually "runs" part of the SharePoint application (according to their website) and, as such, can better ascertain which data should be local versus stored in the cloud.  It seems to have access both to cloud storage and to local, independent storage appliances.

In addition to the SharePoint appliance, they offer a "backup/DR" target that apparently supports NDMP, VTL, iSCSI, and NFS/CIFS protocols to store (backup) data in the cloud. In this version they show no local storage behind their appliance, from which I assume that backup data is only stored in the cloud.

Finally, they offer a "file sharing" appliance used to share files across multiple sites, where files reside both locally and in the cloud.  It appears that cloud copies of shared files are locked/WORM-like, but I can't be certain.  Having not talked to Panzura, I find much of their product unclear.

In summary

We now have both a file access and at least one iSCSI block protocol cloud storage gateway currently available and publicly announced, i.e., Nasuni and StorSimple.  Cirtas, which is in the process of coming out, will support "storage protocol" access to cloud storage, and Panzura offers it all (SharePoint direct, iSCSI, CIFS, NFS, VTL & NDMP cloud storage access protocols).  There are other gateways focused only on backup data, but I reserve the term cloud storage gateway for those that provide some sort of general purpose storage or file protocol access.

However, since last week's discussion of eventual consistency, I am becoming a bit more concerned about cloud storage gateways and their capabilities.  This deserves some serious discussion at the cloud storage provider level but, most assuredly, at the gateway level.  We need some sort of generic statement that guarantees immediate consistency for data at the gateway level even though most cloud storage providers only support "eventual consistency".  Barring that, using cloud storage for anything that is updated frequently would be unwise.

If anyone knows of another cloud storage gateway, I would appreciate a heads up.  In any case, the technology is still young, and I would say this isn't the last gateway to come out, but it feels like these provide coverage for just about any file or block protocol one might use to access cloud storage.

SPECsfs2008 CIFS vs. NFS results – chart of the month

SPECsfs(R) 2008 CIFS vs. NFS 2010Mar17

We return now to our ongoing quest to understand the difference between CIFS and NFS performance in the typical data center.  As you may recall from past posts and our newsletters on this subject, we had been convinced that CIFS had almost 2X the throughput of NFS in SPECsfs2008 benchmarks.  Well, as you can see from this updated chart, this is no longer true.

Thanks to EMC for proving me wrong (again).  Their latest NFS and CIFS results utilized an NS-G8 Celerra gateway server in front of a V-Max backend using SSDs and FC disks. The NS-G8 was the first enterprise class storage subsystem to release both CIFS and NFS SPECsfs2008 benchmarks.

As you can see from the lower left quadrant, all of the relatively SMB-class (small and medium business) systems (under 25K NFS throughput ops/sec) showed a consistent pattern of CIFS throughput being ~2X NFS throughput.  But when we added the Celerra/V-Max combination to the analysis, it brought the regression line down considerably; the equation is now:

CIFS throughput = 0.9952 × NFS throughput + 10565, with an R² of 0.96,

which means that CIFS and NFS throughput are now roughly the same.
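For anyone who wants to reproduce this kind of fit themselves, the sketch below shows the mechanics; the (NFS, CIFS) throughput pairs in it are placeholders, not the actual SPECsfs2008 submissions behind the chart.

```python
# Mechanics of the linear regression quoted above, on placeholder data.
import numpy as np

nfs  = np.array([ 5000, 10000, 15000, 20000, 25000, 110000], dtype=float)  # ops/sec
cifs = np.array([12000, 20000, 26000, 31000, 36000, 120000], dtype=float)  # ops/sec

slope, intercept = np.polyfit(nfs, cifs, 1)          # least-squares line fit
r_squared = np.corrcoef(nfs, cifs)[0, 1] ** 2        # goodness of fit
print(f"CIFS throughput = {slope:.4f} x NFS throughput + {intercept:.0f}, "
      f"R**2 = {r_squared:.2f}")
```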

When I first reported the relative advantage of CIFS over NFS throughput in my newsletter, I was told that you cannot compare the two results, mainly because NFS is "state-less" and CIFS is "state-full", along with a number of other reasons (documented in the earlier post and in the newsletter).  Nonetheless, I felt it was worthwhile to show the comparison because, at the end of the day, whether some file happens to be serviced by NFS or CIFS may not matter to the application/user, but it should matter significantly to the storage administrator/IT staff.  By showing the relative performance of each we were hoping to help IT personnel decide between using CIFS or NFS storage.

Given the most recent results, it seems that the difference in throughput is not that substantial, regardless of the protocols' respective differences.  Of course, more data will help. There is a wide gulf between the highest SMB-class submission and the EMC enterprise class storage that should be filled out.  As the Celerra/V-Max is the only enterprise NAS to submit both CIFS and NFS benchmarks, there could still be many surprises in store. As always, I would encourage storage vendors to submit both NFS and CIFS benchmarks for the same system so that we can see how this pattern evolves over time.

The full SPECsfs2008 report should have gone out to our newsletter subscribers last month, but I made a mistake with the link.  The full report will be delivered with this month's newsletter along with a new performance report on the Exchange Solution Review Program and storage announcement summaries.  In addition, a copy of the SPECsfs report will be up on the dispatches page of our website later next month. However, you can get this information now and subscribe to future newsletters to receive future full reports even earlier; just email us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

As always, we welcome any suggestions on how to improve our analysis of SPECsfs or any of our other storage system performance results.

Latest SPECsfs2008 CIFS performance – chart of the month

Above we reproduce a chart from our latest StorInt(tm) Dispatch newsletter on SPECsfs(R) 2008 benchmark results.  This chart shows the top 10 CIFS throughput benchmark results as of the end of last year.  As can be seen in the chart, Apple's Xserve running Snow Leopard took top performance with over 40K CIFS throughput operations per second.  My problem with this chart is that there are no enterprise class systems represented in the top 10 or, for that matter (not shown above), in any CIFS result.

Now some would say it's still early in the life of the 2008 benchmark, but it has been out for 18 months now and still does not have a single enterprise class system submission.  Possibly CIFS is not considered an enterprise class protocol, but I can't believe that given the proliferation of Windows.  So what's the problem?

I have to believe it’s part tradition, part not wanting to look bad, and part just lack of awareness on the part of CIFS users.

  • Traditionally, NFS benchmarks were supplied by SPECsfs and CIFS benchmarks were supplied elsewhere, i.e., NetBench. However, there never was a central repository for NetBench results, so comparing system performance was cumbersome at best.  I believe that's one reason for SPECsfs's CIFS benchmark: seeing the lack of a central repository for a popular protocol, SPECsfs created their own CIFS benchmark.
  • Performance on system benchmarks is always a mixed bag.  No one wants to look bad, and any top performing result is temporary until the next vendor comes along.  So most vendors won't release a benchmark result unless it shows well for them.  It's not clear whether Apple's 40K CIFS ops is a hard number to beat, but it's been up there for quite a while now, and that has to tell us something.
  • CIFS users seem to be aware of and understand NetBench but don't have similar awareness of the SPECsfs CIFS benchmark yet.  So, given today's economic climate, any vendor wanting to impress CIFS customers would probably choose to ignore SPECsfs and spend their $s on NetBench.  The fact that comparing results was nigh impossible could be considered an advantage for many vendors.

So SPECsfs CIFS just keeps soldiering on.  One way to change this dynamic is to raise awareness: as more IT staff, consultants and vendors discuss SPECsfs CIFS results, its visibility will increase.  I realize some of my analysis of CIFS and NFS performance results doesn't always agree with the SPECsfs party line, but we all agree that this benchmark needs wider adoption.  Anything that can be done to facilitate that deserves my (and their) support.

So for all my storage admin, CIO and other NAS-purchase-influencer friends out there, you need to start asking about SPECsfs CIFS benchmark results.  All my peers out there in the consultant community, get on the bandwagon.  As for my friends in the vendor community, SPECsfs CIFS benchmark results should be part of any new product introduction.  Whether you want to release results is, and always will be, a marketing question, but you all should be willing to spend the time and effort to see how well new systems perform on this and other benchmarks.

Now if I could just get somebody to define an iSCSI benchmark, …

Our full report on the latest SPECsfs 2008 results including both NFS and CIFS performance, will be up on our website later this month.  However, you can get this information now and subscribe to future newsletters to receive the full report even earlier, just email us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

Ibrix reborn as HP X9000 Network Storage

HP X9000 appliances pictures from HP(c) presentation

On Wednesday 4 November, HP announced a new network storage system based on the Ibrix Fusion file system called the X9000. Three versions were announced:

  • X9300 gateway appliance which can be attached to SAN storage (HP EVA, MSA, P4000, or 3rd party SAN storage) and provides scale out file system services
  • X9320 performance storage appliance which includes a fixed server gateway and storage configuration in one appliance targeted at high performance application environments
  • X9720 extreme storage appliance using blade servers for file servers and separate storage in one appliance but can be scaled up (with additional servers and storage) as well as out (by adding more X9720 appliances) to target more differentiated application environments

The new X9000 appliances support a global name space of up to 16PB, reached by adding additional X9000 network storage appliances to a cluster. The X9000 supports a distributed metadata architecture which allows the system to scale performance by adding more storage appliances.

X9000 Network Storage appliances

With the X9300 gateway appliance, storage can be increased by adding more SAN arrays. Presumably, multiple gateways can be configured to share the same SAN storage, creating a highly available file server node. The gateway can be configured with GigE, 10GbE, and/or QDR (40Gb/s) InfiniBand interfaces for added throughput.

The extreme appliance (X9720) comes with 82TB in the starting configuration, and storage can be increased in 82TB raw capacity block increments (each capacity block is 7U and half a rack wide: two 35-drive enclosures plus one 12-drive tray) up to a maximum of 656TB in a two rack (42U) configuration. Capacity blocks are connected to the file servers via 3Gb/s SAS, and the X9720 includes a SAS switch as well as two ProCurve 10GbE Ethernet switches. Also, file system performance can be scaled independently by adding performance blocks, essentially C-class HP blade servers. The starter configuration includes 3 performance blocks (blades), but up to 8 can be added to one X9720 appliance.
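A quick check on those figures (my arithmetic and my reading of the configuration, assuming 1TB drives, which the 82TB-per-block number implies):

```python
# Capacity block arithmetic for the X9720, per my reading of the announcement.
drives_per_block = 35 * 2 + 12        # two 35-drive enclosures + one 12-drive tray = 82 drives
tb_per_block = drives_per_block * 1   # assumed 1TB drives -> 82TB raw per capacity block
blocks_at_max = 656 // tb_per_block   # 656TB maximum -> 8 capacity blocks across two racks
print(drives_per_block, tb_per_block, blocks_at_max)   # 82 82 8
```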

For the X9320 scale out appliance, performance and capacity are fixed in a 12U rack mountable appliance that includes 2 X9300 gateways and 21.7TB of SAS or 48TB of SATA raw storage per appliance. The X9320 comes with either GigE or 10GbE attachments for added performance. The 10GbE version supports up to 700MB/s of raw potential throughput per gateway (node).

X9000 capabilities

All these systems have separate, distinct internal-like storage devoted to the O/S, file server software and presumably metadata services. In the X9300 and X9320, this internal storage is packaged in the X9300 gateway server itself. In the X9720, presumably this internal storage is configured via storage blades in the blade server cabinet, which would need to be added with each performance block.

All X9000 storage is now based on the Fusion file system technology acquired by HP from Ibrix, an acquisition which closed this summer. Ibrix's Fusion file system provided a software-only implementation of a file system with distributed (or segmented) metadata services, which allowed the product to scale out performance and/or capacity independently by adding the appropriate hardware.

HP's X9000 supports both NFS and CIFS interfaces. Moreover, advanced storage features such as continuous remote file replication, snapshots, high availability (with two or more gateways/performance blocks), and automated policy-driven data tiering also come with the X9000 Network Storage system. In addition, file data is automatically re-distributed across all nodes in an X9000 appliance to balance storage performance across nodes. Every X9000 Network Storage system requires a separate management server to manage the X9000 Network Storage nodes, but one server can support the whole 16PB name space.

I like the X9720 and look forward to seeing some performance benchmarks on what it can do. In the past Ibrix never released a SPECsfs(tm) benchmark, presumably because they were a software-only solution. But now that HP has instantiated it on top-end hardware, there seems to be no excuse not to provide benchmark comparisons.

Full disclosure: I have a current contract with another group within HP StorageWorks, not associated with HP X9000 storage.