New SPECsfs2008 CIFS/SMB vs. NFS (again) – chart of the month

SPECsfs2008 benchmark results for CIFS/SMB vs. NFS protocol performance
SCISFS140326-001 (c) 2014 Silverton Consulting, All Rights Reserved

The above chart represents another in a long line of charts on the relative performance of the CIFS[/SMB] versus NFS file interface protocols. The information on the chart is taken from vendor submissions that used the exact same hardware configuration for both their NFS and CIFS/SMB SPECsfs2008 benchmark submissions.

There are generally two charts I show in our CIFS/SMB vs. NFS analysis: the one above and another that shows an ops/sec-per-spindle analysis for all NFS and CIFS/SMB submissions.  Both have historically indicated that CIFS/SMB had an advantage. The one above shows the total number of NFS or CIFS/SMB operations per second on the two separate axes and provides a linear regression across the data. It shows that, on average, the CIFS/SMB protocol provides about 37% more (~36.9%) operations per second than the NFS protocol does with the same hardware configuration.
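For readers who want to reproduce this sort of analysis, here is a minimal sketch of how such a paired-hardware regression could be computed. The data points are invented for illustration only; they are not actual SPECsfs2008 results.

```python
# Minimal sketch of the paired-hardware regression: CIFS/SMB ops/sec vs.
# NFS ops/sec for submissions run on identical hardware. The numbers below
# are invented for illustration only -- they are NOT actual SPECsfs2008 results.
import numpy as np

nfs_ops = np.array([10_000, 15_000, 20_000, 45_000, 80_000], dtype=float)
cifs_ops = np.array([14_000, 20_500, 27_500, 61_000, 110_000], dtype=float)

# Least-squares fit: cifs = slope * nfs + intercept
slope, intercept = np.polyfit(nfs_ops, cifs_ops, 1)

# Coefficient of determination (R**2) for the fit
pred = slope * nfs_ops + intercept
r_squared = 1 - np.sum((cifs_ops - pred) ** 2) / np.sum((cifs_ops - cifs_ops.mean()) ** 2)

print(f"slope = {slope:.3f}, intercept = {intercept:.0f}, R^2 = {r_squared:.3f}")
# A slope of ~1.37 with a small intercept corresponds to the ~37% CIFS/SMB
# advantage described above.
```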

However, there are a few caveats about this and my other CIFS/SMB vs. NFS comparison charts:

  • The SPECsfs2008 organization has informed me (and posted on their website) that CIFS[/SMB] and NFS are not comparable. CIFS/SMB is a stateful protocol, NFS is stateless, and the corresponding commands act accordingly. My response to them and my readers is that they both provide file access to a comparable set of file data (we assume; see my previous post on What’s wrong with SPECsfs2008) and, in many cases today, can provide access to the exact same file using both protocols on the same storage system.
  • The SPECsfs2008 CIFS/SMB benchmark does slightly more read and slightly less write data activity than the corresponding NFS workload. Specifically, the CIFS/SMB workload issues 20.5% READ_ANDX and 8.6% WRITE_ANDX commands, versus 18% READ and 9% WRITE commands in the NFS workload.
  • There are fewer CIFS/SMB benchmark submissions than NFS submissions, and even fewer with the exact same hardware (only 13). So the statistics comparing the two in this way must be considered preliminary, even though the above linear regression is very good (R**2 at ~0.98).
  • Many of the submissions on the above chart are for smaller systems. In fact, 5 of the 13 submissions were for storage systems that delivered less than 20K NFS ops/sec, which may be skewing the results; most of these can be seen bunched up near the origin of the graph above.

And all of this would be wonderfully consistent if not for a recent benchmark submission by NetApp on their FAS8020 storage subsystem.  For once, NetApp submitted the exact same hardware for both an NFS and a CIFS/SMB submission and, lo and behold, it performed better on NFS (110.3K NFS ops/sec) than on CIFS/SMB (105.1K CIFS ops/sec), or just under ~5% better on NFS.

Luckily for the chart above, this was a rare event and most of the others that submitted both did better on CIFS/SMB. But I have been proven wrong before and will no doubt be proven wrong again. So I plan to update this chart whenever we get more submissions for both CIFS/SMB and NFS on the exact same hardware, so we can see a truer picture over time.

For those with an eagle eye, NetApp’s FAS8020 submission is the point below the regression line in the first grid box above the origin, which indicates it did better on NFS than on CIFS/SMB.

Comments?

~~~~

The complete SPECsfs2008  performance report went out in SCI’s March 2014 newsletter.  But a copy of the report will be posted on our dispatches page sometime next quarter (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free newsletters by just using the signup form above right.

Even more performance information on NFS and CIFS/SMB protocols, including our ChampionCharts™ for file storage can be found in  SCI’s recently (March 2014) updated NAS Buying Guide, on sale from our website.

As always, we welcome any suggestions or comments on how to improve our SPECsfs2008 performance reports or any of our other storage performance analyses.

What’s wrong with SPECsfs2008?

I have been analyzing SPECsfs results for almost 7 years now and I feel it’s time to discuss some of the problems with SPECsfs2008 that should be fixed in the next SPECsfs20xx, whenever that comes out.

CIFS/SMB

First and foremost, for CIFS, SMB 1 is no longer pertinent to today’s data center. The world of Microsoft has mostly moved on to SMB 2 and is currently migrating to SMB 3.  There were plenty of performance fixes in last year’s SMB 3.0 release that would be useful to test against current storage systems. But I would even be somewhat happy with SMB 2 if that’s all I can hope for.

My friends at Microsoft would consider me remiss if I didn’t mention that since SMB 2 they no longer call it CIFS and have moved to SMB. SPECsfs should follow this trend. I have tried to use CIFS/SMB in my blog posts/dispatches as a step in this direction mainly because SPEC continues to use CIFS and Microsoft wants me to use SMB.

In my continuing quest to better compare different protocol performance, I believe it would be useful to ensure that the same file size distributions are used for both the CIFS and NFS benchmarks. Although the current Users Guide discusses some file size information for NFS, it is silent when it comes to CIFS. I have been assuming they are the same for lack of information, but it would be worth having this confirmed in the documentation.

Finally, for CIFS it would be very useful if its workload could more closely approximate the amount of data transfer done for NFS.  This is a nit, but when I compare CIFS to NFS storage system results there is a slight advantage to NFS because NFS’s workload definition doesn’t do as much reading as CIFS’s. In contrast, CIFS has slightly less file data write activity than the NFS benchmark workload. Having them be exactly the same would help in any (unsanctioned) comparisons.

NFSv3

As for NFSv3: although NFSv4 has been out for well over a decade now, it has taken a long time to be widely adopted. However, these days there seems to be more client and storage support coming online every day, and maybe this would be a good time to move on to NFSv4.

The current NFS workloads, while great for normal file server activities, have not kept pace with much of how NFS is used today, especially in virtualized environments. As far as I can tell, under VMware NFS datastores don’t do a lot of meta-data operations and do an awful lot more data transfer than normal file servers do. Similar concerns apply to NFS used for Oracle or other databases. It’s unclear how one could incorporate a more data-intensive workload mix into the standard SPECsfs NFS benchmark, but it’s worthy of some thought. Perhaps we could create a SPECvms20xx benchmark that would test these types of more data-intensive workloads.

For both NFSv3 and CIFS benchmarks

Both the NFSv3 and CIFS benchmarks typically report [throughput] ops/sec. These are a mix of all the meta-data activities and the data transfer activities.  However, I think many storage customers and users would like a finer-grained view of system performance.

I have often been asked just how many files a storage system can actually support. This depends, of course, on the workload and file size distributions, but SPECsfs already defines these. As a storage performance expert, I would also like to know how much data transfer a storage system can support, in MB/sec read and written.  I believe both of these metrics could be extracted from the current benchmark programs with a little additional effort. There are probably another half dozen metrics that would be useful; maybe we could sit down and have an open discussion of what these might be.
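As a rough illustration of how such a bandwidth metric could be backed out of a throughput result, here is a sketch. The read/write fractions loosely follow the NFS op-mix figures quoted earlier in these posts; the average transfer size is purely an assumption, not something SPECsfs2008 publishes per submission.

```python
# Back-of-the-envelope sketch: estimating read/write bandwidth from a
# SPECsfs-style throughput number. The op-mix fractions roughly follow the
# NFS figures quoted in these posts (18% READ, 9% WRITE); the average
# transfer size is an ASSUMPTION, not a value published by SPECsfs2008.
def estimated_bandwidth_mb_s(total_ops_per_sec: float,
                             read_fraction: float = 0.18,
                             write_fraction: float = 0.09,
                             avg_xfer_kb: float = 16.0):
    read_mb = total_ops_per_sec * read_fraction * avg_xfer_kb / 1024.0
    write_mb = total_ops_per_sec * write_fraction * avg_xfer_kb / 1024.0
    return read_mb, write_mb

read_mb, write_mb = estimated_bandwidth_mb_s(100_000)  # e.g., a 100K ops/sec result
print(f"~{read_mb:.0f} MB/s read, ~{write_mb:.0f} MB/s written (under the stated assumptions)")
```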

Also, the world has changed significantly over the last 6 years and SSDs and flash have become much more prevalent. Some of your standard configuration tables could be better laid out to ensure that readers understand just how much DRAM and flash, and how many SSDs and disk drives, are in a configuration.

Beyond file NAS

Going beyond SPECsfs, there is a whole new class of storage, namely object storage, where there are no benchmarks available. I would think that now that Amazon S3 and OpenStack Swift are well defined and available, a new set of SPECobj20xx benchmarks would be warranted. I believe that with the adoption of software defined data centers, object storage may become the storage of choice over the next decade or so. If that’s the case, then having a benchmark to measure object storage performance would help its adoption, much like the original SPECsfs did for NFS.

Then there’s the whole realm of server SAN or (hyper-)converged storage, which uses DAS inside a cluster of compute servers to support block and file services. I’m not sure exactly where this belongs, but NFS is typically the first protocol of choice for these systems, and having some sort of benchmark configuration that supports converged storage would help adoption of this new type of storage as well.

I think that’s about it for now, but there’s probably a whole bunch more that I am missing here.

Comments?

Latest SPECsfs2008 results NFS vs. CIFS – chart-of-the-month

SCISFS121227-010(001) (c) 2013 Silverton Consulting, Inc. All Rights Reserved

We return to our perennial quest to understand file storage system performance and our views on NFS vs. CIFS performance.  As you may recall, SPECsfs2008 believes that there is no way to compare the two protocols because

  • CIFS/SMB is “stateful” and NFS is “stateless”
  • The two protocols are issuing different requests.

Nonetheless, I feel it’s important to go beyond these concerns and see if there is any way to assess the relative performance of the two protocols.  But first a couple of caveats on the above chart:

  • There are 25 CIFS/SMB submissions, most of them for SMB (small and midsized business) class systems, vs. 64 NFS submissions, which are all over the map
  • There are about 12 systems that have submitted the exact same configurations for both CIFS/SMB and NFS SPECsfs2008 benchmarks.
  • This chart does not include any SSD or FlashCache systems, just disk drive only file storage.

All that being said, let us now see what the plot has to tell us. First, the regression lines are linear regressions computed by Excel.  The regression coefficient (R**2) for CIFS/SMB is much better, at 0.98 vs. 0.80 for NFS. But this just means that there is a better correlation between CIFS/SMB throughput operations per second and the number of disk drives in the benchmark submission than is seen for NFS.

Second, the equation and slope for the two lines are a clear indicator that CIFS/SMB provides more throughput operations per second per disk than NFS. What this tells me is that, given the same hardware and all things being equal, the CIFS/SMB protocol should perform better than the NFS protocol for file storage access.
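To make the per-spindle comparison concrete, here is a small sketch of the kind of regression behind the chart, fitting ops/sec against drive count separately for each protocol (with the intercept forced to zero, one common choice). The data points are invented for illustration; they are not the actual submissions.

```python
# Sketch of the per-protocol "ops/sec vs. spindle count" regression behind the
# chart. The data points are invented for illustration -- NOT actual
# SPECsfs2008 submissions.
import numpy as np

def ops_per_spindle(drive_counts, ops_per_sec):
    """Least-squares slope B of ops/sec = B * drives, intercept forced to 0."""
    d = np.asarray(drive_counts, dtype=float)
    o = np.asarray(ops_per_sec, dtype=float)
    return float(np.sum(d * o) / np.sum(d * d))

cifs_b = ops_per_spindle([24, 48, 96, 192], [11_000, 22_500, 44_000, 86_000])
nfs_b = ops_per_spindle([24, 48, 96, 192], [7_000, 14_500, 28_500, 56_000])

print(f"CIFS ~{cifs_b:.0f} ops/sec per drive vs. NFS ~{nfs_b:.0f} ops/sec per drive")
```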

Just for the record, the CIFS/SMB version used by SPECsfs2008 is SMB 1 (i.e., CIFS proper) and the NFS version is NFSv3.  SMB3 was just released last year by Microsoft and there aren’t that many vendors (other than Windows Server 2012) that support it in the field yet, and SPECsfs2008 has yet to adopt it (or SMB2) as well.  NFSv4 has been out since 2000, but SPECsfs2008 and most vendors never adopted it.  NFSv4.1 came out in 2010 and has still seen little adoption.

So these results are based on older, but still widely deployed, versions of both protocols available in the market today.

So, given all that, if I had an option I would run CIFS/SMB protocol for my file storage.

Comments?

More information on SPECsfs2008 performance results as well as our NFS and CIFS/SMB ChampionsCharts™ for file storage systems can be found in our NAS Buying Guide available for purchase on our web site.

~~~~

The complete SPECsfs2008 performance report went out in SCI’s December newsletter.  But a copy of the report will be posted on our dispatches page sometime this month (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free newsletters by just using the signup form above right.

As always, we welcome any suggestions or comments on how to improve our SPECsfs2008  performance reports or any of our other storage performance analyses.

 

Latest SPECsfs2008 benchmarks analysis, CIFS vs NFS corrected – chart of the month

(SCISFS110929-001) (c) 2011 Silverton Consulting, All Rights Reserved

We made a mistake in our last post discussing CIFS vs. NFS results using SPECsfs2008 benchmarks by including some storage systems that had SSDs in this analysis. All of our other per spindle/disk drive analyses exclude SSDs and NAND cache because they skew per drive results so much.  We have corrected this in the above chart which includes all the SPECsfs2008 results, up to the end of last month.

However, even with the corrections the results stand pretty much the way they were. CIFS is doing more throughput per disk drive spindle than NFS for all benchmark results not using SSDs or Flash Cache.

Dropping the SSD results changed the linear regression equations. Specifically, the R**2 for CIFS and NFS dropped from 0.99 to 0.98 and from 0.92 to 0.82, respectively, and the B coefficient (ops/sec per drive) dropped from 463 to 405 and from 296 to 258, respectively.

I would be remiss if I didn’t discuss a few caveats with this analysis.

  • Now there are even fewer results in both the CIFS and NFS groups, down to 15 for CIFS and 38 for NFS.   For any sort of correlation comparison, more results would provide better statistical significance.
  • In the NFS data, we include some NAS systems which have lots of DRAM cache (~0.5TB).  We should probably exclude these as well, which might drop the NFS line down some more (at least lower the B value).
  • There are not a lot of enterprise level CIFS systems in current SPECsfs results, with or without SSD or NAND caching.  Most CIFS benchmarks are from midrange or lower-end filers.  It’s unclear why these would do much better on a per spindle basis than a wider sample of NFS systems, but they obviously do.

All that aside, it seems crystal clear here, that CIFS provides more throughput per spindle.

In contrast, we have shown in past posts that the limited number of systems submitting benchmarks with both CIFS and NFS typically show roughly equivalent throughput results for the two protocols. (See my previous post on this aspect of the CIFS vs. NFS discussion.)

Also, in our last post we discussed some of the criticism leveled against this analysis and provided our view refuting these issues. Mostly the concerns are due to the major differences between CIFS’s stateful protocol and NFS’s stateless protocol.

But from my perspective it’s all about the data.  How quickly can I read a file, how fast can I create a file.  Given similar storage systems, with similar SW, cache and hard disk drives, it’s now clear to me that CIFS provides faster access to data than NFS does, at least on a per spindle basis.

Nevertheless, more data may invalidate these results, so stay tuned.

—-

Why this is so should probably be the subject of another post, but it may have a lot to do with the fact that NFS is stateless….

Comments?

CIFS vs. NFS the saga continues, recent SPECsfs2008 results- chart of the month


SCISFS110628-004A (c) 2011 Silverton Consulting, Inc., All Rights Reserved

When last we discussed this topic, the tides had turned and the then current SPECsfs 2008 results had shown that any advantage that CIFS had over NFS was an illusion.

Well, there has been more activity for both CIFS and NFS protocols since our last discussion, and it showed, once again, that CIFS was faster than NFS. But rather than going down that same path again, I decided to try something different.

As a result, we published the above chart, which places all NFS and CIFS disk-only submissions in stark contrast.

This chart was originally an attempt to refute many analysts’ contention that storage benchmarks are more of a contest over who can throw the most disks at the problem than an objective measure of the performance of one product or another.

But a curious thought occurred to me as I was looking at these charts for CIFS and NFS last month: what if I plotted both results on the same chart?  Wouldn’t such a chart provide some additional rationale to our discussion of CIFS vs. NFS?

Sure enough, it did.

From my perspective this chart proves that CIFS is faster than NFS.  But, maybe a couple of points might clarify my analysis:

  1. I have tried to eliminate any use of SSDs or NAND caching from this chart as they just confound the issue.  Also, all disk-based, NFS and CIFS benchmarks are represented on the above charts, not just those that have submitted both CIFS and NFS results on the same hardware.
  2. There is an industry-wide view that CIFS and NFS are impossible to compare because one is state-full (CIFS) and the other state-less (NFS).  I happen to think this is wrong.  Most users just want to know which is faster and/or better.  It would be easier to analyze this if SPECsfs2008 reported data transfer rates rather than operations/second, but it doesn’t.
  3. As such, one potential problem with comparing the two on the above chart is that the percentage of “real” data transfers represented by “operations per second” may be different.  Ok, this would need to be normalized if there were a large difference between CIFS and NFS.  But when examining the SPECsfs2008 user’s guide spec., one sees that NFS read and write data ops are 28.0% of all operations and CIFS read and write data ops are 29.1% of all operations.  As they aren’t that different, the above chart should correlate well to the number of data operations done by each protocol. If anything, normalization would show an even larger advantage for CIFS, not less (see the quick check after this list).
  4. Another potential concern one needs to consider is the difference in the average data transfer size between the protocols.  The user’s guide doesn’t discriminate between transfer sizes for NFS and CIFS, so we assume they are the same for the two protocols. Given that assumption, the above chart provides a reasonable correlation to the protocols’ relative data transfer rates.
  5. The one real concern with this chart is the limited number of CIFS disk benchmarks.  At this time there are about 20 CIFS disk benchmarks vs. 40 NFS disk benchmarks. So the data is pretty slim for CIFS; nonetheless, 20 is almost enough to make this statistically significant.  So with more data the advantage may change slightly, but I don’t think it will ever shift back to NFS.
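Here is the quick check referenced in point 3 above: weight each protocol's per-drive throughput by its data-op fraction from the user's guide and see whether the CIFS advantage survives. The per-drive slopes are the regression values discussed just below the list; the rest is simple arithmetic.

```python
# Quick check of the normalization argument in point 3: weight each protocol's
# per-drive throughput by its data-op fraction (28.0% NFS, 29.1% CIFS, per the
# user's guide figures quoted above) and compare. Per-drive slopes are the
# regression values discussed below the list.
CIFS_OPS_PER_DRIVE = 463.0
NFS_OPS_PER_DRIVE = 296.5

CIFS_DATA_OP_FRACTION = 0.291
NFS_DATA_OP_FRACTION = 0.280

cifs_data_ops = CIFS_OPS_PER_DRIVE * CIFS_DATA_OP_FRACTION   # ~134.7 data ops/sec/drive
nfs_data_ops = NFS_OPS_PER_DRIVE * NFS_DATA_OP_FRACTION      # ~83.0 data ops/sec/drive

print(f"CIFS ~{cifs_data_ops:.1f} vs. NFS ~{nfs_data_ops:.1f} data ops/sec per drive")
print(f"ratio ~{cifs_data_ops / nfs_data_ops:.2f}x (vs. ~1.56x unnormalized)")
```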

Ok, now that I have all the provisos dealt with, what’s the chart really telling me?

One has to look at the linear regression equations to understand this, but CIFS does ~463.0 operations/second per disk drive and NFS does ~296.5 operations/second per disk drive.  What this says is that, all things being equal, i.e., the same hardware and disk drive count, CIFS does about 1.6X (463.0/296.5) the operations per second of NFS and, correspondingly, CIFS provides ~1.6X the data per second that NFS does.

Comments?

—–

The full SPECsfs 2008 report went out to our newsletter subscribers last month. The above chart has been modified somewhat from a plot in our published report, but the data is the same (forced the linear equations to have an intercept of 0 to eliminate the constant, displayed the R**2 for CIFS, and fixed the vertical axis title).

A copy of the full SPECsfs report will be up on the dispatches page of our website later next month. However, you can get this information now and subscribe to future newsletters to receive these reports even earlier by just emailing us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter or using the signup form above and to the right.

As always, we welcome any suggestions on how to improve our analysis of SPECsfs or any of our other storage system performance discussions.

 

 

SPECsfs2008 CIFS ORT performance – chart of the month

(c) 2010 Silverton Consulting Inc., All Rights Reserved

The above chart on SPECsfs(R) 2008 results was discussed in our latest performance dispatch that went out to SCI’s newsletter subscribers last month.  We have described Apple’s great CIFS ORT performance in previous newsletters but here I would like to talk about NetApp’s CIFS ORT results.

NetApp had three new CIFS submissions published this past quarter, all using the FAS3140 system but with varying drive counts/types and Flash Cache installed.  Recall that Flash Cache used to be known as PAM-II and is an onboard system card which holds 256GB of NAND memory used as an extension of system cache.  This differs substantially from using NAND in a SSD as a separate tier of storage as many other vendors currently do.  The newly benchmarked NetApp systems included:

  • FAS3140 (FCAL disks with Flash Cache) – used 56-15Krpm FC disk drives with 512GB of Flash Cache (2-cards)
  • FAS3140 (SATA disks with Flash Cache) – used 96-7.2Krpm SATA disk drives with 512GB of Flash Cache
  • FAS3140 (FCAL disks) – used 242-15Krpm FC disk drives and had no Flash Cache whatsoever

If I had to guess, the point of this exercise was to show that one can offset fast-performing hard disk drives by using Flash Cache and significantly fewer (~1/4 as many) fast disk drives, or by using Flash Cache and somewhat more SATA drives.  In another chart from our newsletter one could see that all three systems produced very similar CIFS throughput results (CIFS ops/sec), but in CIFS ORT (see above), the differences between the 3 systems are much more pronounced.

Why does Flash help CIFS ORT?

As one can see, the best CIFS ORT performance of the three came from the FAS3140 with FCAL disks and Flash Cache, which managed a response time of ~1.25 msec.  The next best performer was the FAS3140 with SATA disks and Flash Cache, with a CIFS ORT of just under 1.48 msec.  The worst performer of the bunch was the FAS3140 with only FCAL disks, which came in at ~1.84 msec CIFS ORT.  So why the different ORT performance?

Mostly the better performance is due to the increased cache available in the Flash Cache systems.  If one were to look at the SPECsfs 2008 workload, one would find that less than 30% of it is read and write data activity and the rest is what one might call meta-data requests (query path info @21.5%, query file info @12.9%, create/close @9.7%, etc.).  While read data may not be very cache friendly, most of the meta-data and all of the write activity are cache friendly.  Meta-data activity is more cache friendly primarily because it’s relatively small in size, and any write data goes to cache before being destaged to disk.  As such, this more cache-friendly workload generates, on average, better response times when one has larger amounts of cache.
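To see why a larger cache helps ORT so much, consider a toy weighted-response-time model. The op-mix split loosely follows the figures above (roughly 20% data reads, the rest meta-data and writes); the hit rates and service times are invented purely for illustration and are not measured values.

```python
# Toy model: ORT as a weighted average over the op mix. Data reads mostly go to
# disk, while meta-data and writes can be serviced from cache/NVRAM. The split
# loosely follows the figures above; hit rates and service times are INVENTED
# for illustration, not measurements.
def toy_ort(read_frac, cache_hit_rate, disk_ms=6.0, cache_ms=0.5):
    meta_frac = 1.0 - read_frac
    # meta-data/write ops hit cache with probability cache_hit_rate,
    # otherwise they go to disk like a data read
    meta_ms = cache_hit_rate * cache_ms + (1.0 - cache_hit_rate) * disk_ms
    return read_frac * disk_ms + meta_frac * meta_ms

base = toy_ort(read_frac=0.21, cache_hit_rate=0.70)    # modest DRAM-only cache
flash = toy_ort(read_frac=0.21, cache_hit_rate=0.95)   # much larger cache (e.g., Flash Cache)
print(f"ORT with modest cache ~{base:.2f} ms, with a much larger cache ~{flash:.2f} ms")
```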

For proof one need look no further than the relative ORT performance of the FAS3140 with SATA and Flash Cache vs. the FAS3140 with just FCAL disks.  The Flash Cache/SATA drive system had ~25% better ORT results than the FCAL-only system, even with significantly slower and far fewer disk drives.

The full SPECsfs 2008 performance report will go up on SCI’s website later this month in our dispatches directory.  However, if you are interested in receiving this report now and future copies when published, just subscribe by email to our free newsletter and we will email the report to you now.

SPECsfs2008 CIFS vs. NFS results – chart of the month

SPECsfs(R) 2008 CIFS vs. NFS 2010Mar17

We return now to our ongoing quest to understand the difference between CIFS and NFS performance in the typical data center.  As you may recall from past posts and our newsletters on this subject, we had been convinced that CIFS had almost 2X the throughput of NFS in SPECsfs 2008 benchmarks.  Well, as you can see from this updated chart, this is no longer true.

Thanks to EMC for proving me wrong (again).  Their latest NFS and CIFS results utilized an NS-G8 Celerra gateway server in front of a V-Max backend using SSDs and FC disks. The NS-G8 was the first enterprise class storage subsystem to release both a CIFS and an NFS SPECsfs 2008 benchmark.

As you can see from the lower left quadrant, all of the relatively small, SMB-class systems (under 25K NFS throughput ops/sec) showed a consistent pattern of CIFS throughput being ~2X NFS throughput.  But when we added the Celerra/V-Max combination to the analysis, it brought the regression line down considerably, and now the equation is:

CIFS throughput = 0.9952 X NFS throughput + 10565, with an R**2 of 0.96.

What this means is that CIFS and NFS throughput are now roughly the same, at least at enterprise scale, where the ~10.5K constant is a small fraction of total throughput.
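Plugging a few NFS throughput levels into the regression above makes this concrete: the ~10.5K constant dominates at SMB-class throughput but fades at enterprise scale.

```python
# Evaluate the regression quoted above (CIFS = 0.9952 * NFS + 10565) at a few
# NFS throughput levels to see where "roughly the same" holds.
def predicted_cifs(nfs_ops: float) -> float:
    return 0.9952 * nfs_ops + 10565

for nfs in (10_000, 25_000, 100_000, 250_000):
    cifs = predicted_cifs(nfs)
    print(f"NFS {nfs:>7,} ops/sec -> predicted CIFS {cifs:>9,.0f} ops/sec ({cifs / nfs:.2f}x)")
# 10K -> ~20.5K (2.05x); 25K -> ~35.4K (1.42x); 100K -> ~110.1K (1.10x);
# 250K -> ~259.4K (1.04x)
```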

When I first reported the relative advantage of CIFS over NFS throughput in my newsletter, I was told that you cannot compare the two results, mainly because NFS is “stateless” and CIFS is “stateful”, among a number of other reasons (documented in the earlier post and in the newsletter).  Nonetheless, I felt it was worthwhile to show the comparison because, at the end of the day, while which protocol happens to service a file may not matter to the application/user, it should matter significantly to the storage administrator/IT staff.  By showing the relative performance of each we were hoping to help IT personnel decide between CIFS and NFS storage.

Given the most recent results, it seems that the difference in throughput is not that substantial, regardless of the protocols’ respective differences.  Of course more data will help. There seems to be a wide gulf between the highest SMB-class submission and the EMC enterprise class storage that should be filled in.  As the Celerra/V-Max is the only enterprise NAS to submit both CIFS and NFS benchmarks, there could still be many surprises in store. As always, I would encourage storage vendors to submit both NFS and CIFS benchmarks for the same system so that we can see how this pattern evolves over time.

The full SPECsfs 2008 report should have gone out to our newsletter subscribers last month, but I made a mistake with the link.  The full report will be delivered with this month’s newsletter along with a new performance report on the Exchange Solution Review Program and storage announcement summaries.  In addition, a copy of the SPECsfs report will be up on the dispatches page of our website later next month. However, you can get this information now and subscribe to future newsletters to receive future full reports even earlier, just email us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

As always, we welcome any suggestions on how to improve our analysis of SPECsfs or any of our other storage system performance results.

Latest SPECsfs2008 CIFS performance – chart of the month

Above we reproduce a chart from our latest newsletter, the StorInt™ Dispatch, on SPECsfs(R) 2008 benchmark results.  This chart shows the top 10 CIFS throughput benchmark results as of the end of last year.  As observed in the chart, Apple’s Xserve running Snow Leopard took top performance with over 40K CIFS throughput operations per second.  My problem with this chart is that there are no enterprise class systems represented in the top 10, or for that matter (not shown above) in any CIFS result.

Now some would say it’s still early in the life of the 2008 benchmark, but it has been out for 18 months now and still does not have a single enterprise class system submission.  Possibly CIFS is not considered an enterprise class protocol, but I can’t believe that given the proliferation of Windows.  So what’s the problem?

I have to believe it’s part tradition, part not wanting to look bad, and part just lack of awareness on the part of CIFS users.

  • Traditionally, NFS benchmarks were supplied by SPECsfs and CIFS benchmarks were supplied elsewhere, i.e., by NetBench. However, there never was a central repository for NetBench results, so comparing system performance was cumbersome at best.  I believe that’s one reason for SPECsfs’s CIFS benchmark: seeing the lack of a central repository for a popular protocol, SPECsfs created their own CIFS benchmark.
  • Performance on system benchmarks is always a mixed bag.  No one wants to look bad and any top-performing result is temporary until the next vendor comes along.  So most vendors won’t release a benchmark result unless it shows well for them.  Not clear if Apple’s 40K CIFS ops is a hard number to beat, but it’s been up there for quite a while now, and that has to tell us something.
  • CIFS users seem to be aware of and understand NetBench but don’t have similar awareness of the SPECsfs CIFS benchmark yet.  So, given today’s economic climate, any vendor wanting to impress CIFS customers would probably choose to ignore SPECsfs and spend their $s on NetBench.  The fact that comparing NetBench results was nigh impossible could be considered an advantage for many vendors.

So the SPECsfs CIFS benchmark just keeps going on.  One way to change this dynamic is to raise awareness.  As more IT staff/consultants/vendors discuss SPECsfs CIFS results, its awareness will increase.  I realize some of my analysis on CIFS and NFS performance results doesn’t always agree with the SPECsfs party line, but we all agree that this benchmark needs wider adoption.  Anything that can be done to facilitate that deserves my (and their) support.

So for all my storage admin, CIO and other NAS-purchase-influencer friends out there, you need to start asking about SPECsfs CIFS benchmark results.  All my peers out there in the consultant community, get on the bandwagon.  As for my friends in the vendor community, SPECsfs CIFS benchmark results should be part of any new product introduction.  Whether you want to release the results is, and always will be, a marketing question, but you all should be willing to spend the time and effort to see how well new systems perform on this and other benchmarks.

Now if I could just get somebody to define an iSCSI benchmark, …

Our full report on the latest SPECsfs 2008 results including both NFS and CIFS performance, will be up on our website later this month.  However, you can get this information now and subscribe to future newsletters to receive the full report even earlier, just email us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.