Has latency become the key metric? SPC-1 LRT results – chart of the month

I was at EMCworld a couple of months back and they were showing off a preview of the next version of VNX storage, which was trying to achieve a million IOPS with under a millisecond of latency.  Then I attended NetApp's analyst summit, where the discussion at their flash seminar was how latency was changing the landscape of data storage and how flash latencies were going to enable totally new applications.

One executive at NetApp mentioned that IOPS was never the real problem. As an example, he cited one large oil & gas firm whose peak workload was just 35K IOPS.

Also, there was some discussion at NetApp of trying to come up with a way of segmenting customer applications by latency requirements.  Aside from high frequency trading applications, online payment processing and a few other high-performance database activities, there wasn’t a lot that could easily be identified/quantified today.

IO latencies have been coming down for years now. Sophisticated disk-only storage systems have been lowering latencies for a decade or more.  But since the introduction of SSDs it's been a whole new ballgame.  For proof, all one has to do is examine the top 10 SPC-1 LRT (least response time, measured with workloads at 10% of peak activity) results.

Top 10 SPC-1 LRT results, SSD system response times

 

In looking over the top 10 SPC-1 LRT benchmarks (see Figure above) one can see a general pattern.  These systems mostly use SSD or flash storage, except for the TMS-400, TMS 320 (IBM FlashSystems) and Kaminario's K2-D, which primarily use DRAM storage with backup storage behind it.

Hybrid disk-flash systems seem to start with an LRT of around 0.9 msec (not on the chart above).  These can be found with DotHill, NetApp, and IBM.

Similarly, you almost have to get to as "slow" as 0.93 msec before you can find any disk-only storage systems. But most disk-only storage comes in at a latency of 1 msec or more. Between 1 and 2 msec LRT we see storage from EMC, HDS, HP, Fujitsu, IBM, NetApp and others.

There was a time when the storage world was convinced that to get really good response times you had to have a purpose-built storage system like TMS or Kaminario, or stripped-down functionality like IBM's Power 595.  But it seems that the general-purpose HDS HUS, IBM Storwize, and even Huawei OceanStore are all capable of providing excellent latencies with all-SSD storage behind them. And all seem to perform at least in the same ballpark as the purpose-built TMS RAMSAN-620 SSD storage system.  These general-purpose storage systems have just about every advanced feature imaginable, with the exception of mainframe attach.

It seems nowadays that there is a trifurcation of latency results going on, based on underlying storage:

  • DRAM-only systems from ~0.4 msec down to ~0.1 msec,
  • SSD/flash-only storage from ~0.7 msec down to ~0.2 msec, and
  • Disk-only storage at 0.93 msec and above.
</gr-replace>
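The trifurcation above can be expressed as a simple classifier. Note that the real ranges overlap (a fast flash array can beat a slow DRAM array), so the cutoffs below are arbitrary midpoints chosen purely for illustration:

```python
def latency_tier(lrt_msec):
    """Classify a storage system by its SPC-1 LRT in milliseconds.
    Cutoffs are illustrative only; the actual tiers overlap."""
    if lrt_msec < 0.45:
        return "DRAM-only class"
    elif lrt_msec < 0.93:
        return "SSD/flash-only class"
    else:
        return "disk-only class"

# Illustrative LRT values, not actual benchmark results
for name, lrt in [("DRAM array", 0.2), ("all-flash array", 0.7),
                  ("disk array", 1.2)]:
    print(f"{name}: {latency_tier(lrt)}")
```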

The hybrid storage systems are attempting to mix the economics of disk with the speed of flash, and seem to be contending with all of these single-technology storage solutions.

It’s a new IO latency world today.  SSD-only storage systems are now available from every major storage vendor and many of them are showing pretty impressive latencies.  Now, with fully functional storage latency below 0.5 msec, what’s the next hurdle for IT?

Comments?

Image: EAB 2006 by TMWolf

 


EMCworld 2013 Day 3

Rich Napolitano, President of the Unified Storage Division, got up and gave some technology demonstrations of what they had working in their labs, bringing some of his long-time engineers up on stage to run them.

  • First up was a dual-controller system with dual processors per controller using 8-core chips (32 cores in all), running against an all-SSD backend. The configuration was up for only a short time, but it looked like 96 SSDs, i.e., an all-flash VNX array.  They used Iometer with random 8KB IO to drive almost 975K IOPS at sub-msec response time, and they hit 1M IOPS at just slightly above 1 msec response time. You could see the processor utilization of the 32 cores going up as the workload reached higher levels.  I couldn't see precisely, but all the cores were running at ~70-80% busy at the 1M IOPS level, and it seemed like system performance was entering the knee of the curve.
  • Next up was the new VNX data app store demonstration, similar to the iPhone and Android app stores. EMC has identified a select set of apps that can run directly on VNX hardware. The current demonstration had two versions of anti-virus, RecoverPoint Virtual Appliance (vRPA), (v?)VPLEX, CloudAccess and MySQL server.  The engineers showed how AV software could be installed and running on the VNX, and how vRPA could be installed to provide onboard replication services.
  • Then they demonstrated a VNX virtual appliance (vVNX?) running on a white box server, which I think was running ESX.  In this case, vVNX was running with onboard DAS storage but had all the advanced functionality of VNX.
  • Finally, they showed a vVNX running in a cloud services environment. I'm not sure if this was VMware vCloud or some other compute cloud, but Rich stated that they will support many clouds.  With vVNX running in the cloud, accessing storage behind the compute engine, it's unclear what the performance would be and how one would access the storage (file or iSCSI no doubt), but it did open up new possibilities as to where one could run VNX services.
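The IOPS and latency figures from the first demo can be sanity-checked with Little's Law (outstanding IOs = throughput × response time). The numbers below are the ones quoted above; the derived concurrency and per-SSD load are just back-of-envelope estimates:

```python
# Little's Law: concurrency L = arrival rate (IO/s) x residence time (s)
def outstanding_ios(iops, resp_time_sec):
    return iops * resp_time_sec

iops = 975_000    # just under 1M IOPS, as shown in the demo
resp = 0.001      # sub-msec response time, rounded up to 1 msec
ssds = 96         # apparent backend SSD count

print(outstanding_ios(iops, resp))  # ~975 IOs in flight at that load
print(iops / ssds)                  # ~10.2K IOPS per SSD
```

The per-SSD figure suggests the backend drives were nowhere near saturated; consistent with the observation that the controllers' cores, at ~70-80% busy, were the emerging bottleneck.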

It’s readily apparent that the next iteration of VNX software seems focused on taking advantage of multi-core processing (called MCx) to boost storage system performance, providing a virtualized environment within the VNX engine to run specialized data services and supplying a new vVNX functionality which can be deployed just about anywhere you would want.

That’s all for the public sessions, spent much of the rest of the day in NDA sessions.

I had a good time at EMCworld 2013, seeing old friends again and meeting new ones and thank EMC for inviting me.  For information on previous days at EMCworld 2013 please see my Day 1 and Day 2 posts.

EMCworld 2013 day 1

Lines for coffee at the cafe were pretty long this morning, and I missed my opportunity to have breakfast while doing some work. But I eventually made my way to the press room and got some food and coffee.

Spent the morning in Analyst sessions mostly under NDA but it seems safe to say that EMC sees plenty of opportunity ahead.

The first session, a Q&A with BRS executives and customers, was enlightening, but the main message from the customers was that data protection is hard, legacy systems often can't adjust quickly enough, and sometimes a completely new architecture is warranted. The executives were upbeat about current BRS business and where they were headed in the future.

The rest of the morning was with Jeremy Burton, EVP of Product, Operations and Marketing, and John Roese, the new SVP and CTO of EMC (6 months on the job). Jeremy talked about an IDC insight that there's a new world emerging: so-called 3rd platform applications, based on mobile and consumer-grade technology with literally billions of users and millions of apps, built on mobile-cloud-bigdata-social infrastructure, which complements the 2nd platform built on LAN/WAN, client-server frameworks.

For an example of this environment Jeremy mentioned that AT&T provisions 12PB of storage a month.

What’s needed for this new platform is a new type of storage, built for the 3rd platform but taking advantage of current enterprise storage characteristics.  This is ViPR (more on that later).

John comes by way of Huawei, Nortel and a myriad of others and offers broad insight into the way forward for EMC. It looks like a bright future ahead if they can do half of what John has outlined.

John talked about the intersections between the carrier market (or services), enterprise IT and the consumer market.  There is convergence between these regions, and at each of these intersections new technology is going to answer many of the problems that exist. For instance, in the carrier space:

  • The amount of information they gather is frightening; they know everything about you. Pivotal will be the key here because it's good at: 1) correlating information across different information sources (most carriers have a whole bunch of disparate information stores); and 2) treating Big Data not just as a non-realtime problem but providing realtime analytics as well.
  • Capital costs are going down, but $/bit is going down even faster.  VMware and the software-defined data center are the right way to drive down costs.  Today servers are ~50% virtualized, but networking is not virtualized at all.
  • Customers are dissatisfied with service providers (carriers).  Again, Pivotal is key here. One carrier customer was focused on customer churn and tried to figure out how to minimize it. They used GemFire's high-speed infrastructure to watch all transactions on the cell tower infrastructure and pick out dropped calls, send them to Greenplum to correlate with customer attributes (good or bad), and within 100 msec supply an interaction with the customer to apologize and offer some services to make it better.
  • The Internet is the new wild west: use at your own risk. Websites can be spoofed, the email you respond to could be from anyone; it's chaos for security. RSA can become the trusted internet provider by looking at the internet holistically, combining information from many customers, then aggregating and sharing these interactions to determine the trust of every transaction. Trust is becoming a new big data problem.
  • Hybrid and public cloud is their biggest opportunity, but they don't know how to attack it yet. VMware and SDDC will evolve to provide orchestrated movement from private to public cloud and from closed to open.
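The dropped-call churn example above can be sketched as a toy pipeline. The real system used GemFire for the event stream and Greenplum for correlation; everything here (event fields, the "high value" threshold, the outreach action) is invented for illustration:

```python
# Toy sketch of the churn-reduction flow described above:
# filter dropped calls from tower events, correlate with
# customer attributes, emit an outreach action.

def dropped_calls(events):
    """Keep only events representing dropped calls."""
    return [e for e in events if e["status"] == "dropped"]

def outreach(events, customers):
    """For each dropped call from a high-value customer,
    produce an apology/offer action (field names invented)."""
    actions = []
    for e in dropped_calls(events):
        cust = customers.get(e["caller"])
        if cust and cust["value"] == "high":
            actions.append(f"apologize+offer:{e['caller']}")
    return actions

events = [
    {"caller": "A", "status": "dropped"},
    {"caller": "B", "status": "ok"},
    {"caller": "C", "status": "dropped"},
]
customers = {"A": {"value": "high"}, "C": {"value": "low"}}
print(outreach(events, customers))  # ['apologize+offer:A']
```

The hard part in practice is not the logic but doing the join and responding within the 100 msec budget, which is presumably where the in-memory data grid comes in.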

The thinking seems pretty straightforward given what they are trying to accomplish and the framework he applied to EMC’s strategy going forward made a lot of sense.

Brian Gallagher did a keynote on enterprise storage new functions and features, covering VMAX, VPLEX, RecoverPoint, and XtremIO/SF/SW. He mentioned the RecoverPoint virtual appliance and gave a sort of statement of direction on being able to move application functionality directly onto VMAX, which he kind of demoed with VPLEX running on VMAX.

He also talked about FAST's speed of reaction versus the competition, and mentioned that FAST can share storage tiering information across up to 4 different VMAX arrays. He showed a comparison of the VMAX 10K against another prime competitor that looked downright embarrassing, and talked about VMAX Cloud Edition.

After that came 1-on-1 meetings, all under strict NDA. But then came the big keynote with Jeremy again and David Goulden, President and COO, on ViPR. They have implemented software-defined storage (SDS).  Last week I did a post on SDS trying to lay out some of the problems and promises of SDS (please see The promise of SDS post).

But what I missed was the data path transformation that ViPR can do to provide object and HDFS access to traditional and commodity storage systems.  ViPR starts out primarily in the control layer, providing automated provisioning and self-management across heterogeneous storage pools. With ViPR one can define virtual storage arrays and then configure virtual storage pools across those arrays, regardless of the physical infrastructure underneath them.
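The virtual-pool abstraction described above can be modeled in a few lines. To be clear, ViPR's actual object model isn't public detail here; the class and field names below are invented to show the idea of a pool aggregating heterogeneous physical arrays:

```python
# Toy model of a virtual storage pool spanning heterogeneous
# physical arrays (all names and fields are illustrative).
class VirtualPool:
    def __init__(self, name, arrays):
        self.name = name
        self.arrays = arrays  # physical arrays backing this pool

    def capacity_tb(self):
        """Total usable capacity aggregated across backing arrays."""
        return sum(a["capacity_tb"] for a in self.arrays)

pool = VirtualPool("gold", [
    {"vendor": "enterprise array", "capacity_tb": 100},
    {"vendor": "commodity shelf", "capacity_tb": 400},
])
print(pool.capacity_tb())  # 500
```

The point of the abstraction is that provisioning requests target the pool ("give me 10TB of gold"), and the control layer decides which physical array actually serves it.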

More on ViPR in a separate post, but suffice it to say EMC has been working on this for a while now. How it's positioned with VPLEX and the other storage virtualization capabilities in VMAX and other products is another matter. But it seems they are carving out a space for ViPR between and above the current storage solutions.

End of day one is in the Expo and then cocktail parties… stay tuned for day 2.

 

Enterprise file synch

Strange Clouds by michaelroper (cc) (from Flickr)

Last fall at SNW in San Jose there were a few vendors touting enterprise file synchronization services each having a slightly different version of the requirements.   The one that comes most readily to mind was Egnyte which supported file synchronization across a hybrid cloud (public cloud and network storage) which we discussed in our Fall SNWUSA wrap up post last year.

The problem with BYOD

With bring your own device (BYOD), corporate end users are quickly abandoning any pretense of IT control and turning to consumer-class file synchronization services to help synch files across the desktop, laptop and mobile devices they haul around.   But the problem with solutions such as Dropbox, Box, Oxygen Cloud and others is that they are really outside of IT's control.

Which is why there’s a real need today for enterprise-class file synchronization solutions that exhibit the ease of use and setup of consumer file synch systems but offer IT security, compliance and control over the data being moved into the cloud and across corporate and end user devices.

EMC Syncplicity and EMC on premises storage

Last week EMC announced an enterprise version of their recently acquired Syncplicity software that supports on-premises Isilon or Atmos storage, EMC’s own cloud storage offering.

In previous versions of Syncplicity storage was based in the cloud and used Amazon Web Services (AWS) for cloud orchestration and AWS S3 for cloud storage. With the latest release, EMC adds on premises storage to host user file synchronization services that can span mobile devices, laptops and end user desktops.

New Syncplicity users must download desktop client software to support file synchronization, or mobile apps for mobile device synchronization.  After that it's a simple matter of identifying which, if any, directories and/or files are to be synchronized with the cloud and/or shared with others.
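Syncplicity's client internals aren't public, but the core job of any such client — figuring out which files changed since the last synch — can be sketched with content hashing. Everything below (function names, the snapshot format) is a hypothetical illustration, not Syncplicity's actual design:

```python
import hashlib
import os

def snapshot(root):
    """Map each file under root to a SHA-256 digest of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            state[os.path.relpath(path, root)] = digest
    return state

def changed_files(old, new):
    """Files to upload: present in the new snapshot but absent
    from, or different in, the old one."""
    return [p for p, digest in new.items() if old.get(p) != digest]
```

A real client would also track deletions, use modification times to avoid rehashing everything, and chunk large files, but the snapshot-and-diff shape is the same.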

However, with the Business (read: enterprise) edition one also gets the Security and Compliance console, which supports access control to define the users and devices that can synchronize or share data, enforcement of data retention policies, remote wipe of corporate data, and native support for single sign-on services. In addition, one gets centralized user and group management services to grant, change or revoke user and group access to data.  One also obtains enterprise security with AES-256 data-at-rest encryption, separate data centers for key management and data storage, quadruple replication of data for high disaster fault tolerance, and SAS 70 Type II compliant data centers.

If the client wants to use on-premises storage, they would also need to deploy a virtual appliance somewhere in the data center to act as the gateway for file synchronization service requests. The file synch server would presumably also need access to the on-premises storage, and it's unclear whether the virtual appliance is in-band or out-of-band (see the discussion of Egnyte's solution options below).

Egnyte’s solution

Egnyte comes as a software-only solution, building a file server in the cloud for end user storage. It also includes an Egnyte app for mobile hardware and the ever-present web file browser.  Desktop file access is provided via mapped drives which access the Egnyte cloud file server gateway running as a virtual appliance.

One major difference between Syncplicity and Egnyte is that Egnyte offers a combination of both cloud and on-premises storage (though you cannot have just on-premises storage), whereas Syncplicity offers only one or the other: file synchronization data can be in the cloud or on local on-premises storage, but not in both locations.

The other major difference is that Egnyte operates with just about anybody's NAS storage, such as EMC, IBM, and HDS, for the on-premises file storage.  It operates as an in-band, software appliance solution that traps file activity going to your on-premises storage. In this case, one would need to start using a new location or directory for data to be synchronized or shared.

But for NetApp storage only (today), they utilize ONTAP APIs to offer an out-of-band file synchronization solution.  This means you can keep NetApp data where it resides and just enable synchronization/shareability services for the NetApp file data in its current directory locations.

Egnyte promises enterprise-class data security with AD, LDAP and/or SSO user authentication, AES-256 data encryption and their own secure data centers.  There is no mention of separate key security in their literature.

As for cloud backend storage, Egnyte has its own public cloud and supports other cloud storage providers such as AWS S3, Microsoft Azure, NetApp StorageGRID and HP Public Cloud.

There’s more to Egnyte’s solution than just file synchronization and sharing, but that’s the subject of today’s post. Perhaps we can cover the rest at more length in a future post if there’s interest.

File synchronization, cloud storage’s killer app?

The nice thing about these capabilities is that now IT staff can re-gain control over what is and isn’t synched and shared across multiple devices.  Up until now all this was happening outside the data center and external to IT control.

From Egnyte’s perspective, they are seeing more and more enterprises wanting data both on premises, for performance and compliance, and in cloud storage, for ubiquitous access.  They feel it’s both a sharability demand between an enterprise’s far-flung team members and potentially client/customer personnel, as well as a need to access, edit and propagate siloed corporate information using the new mobile devices everyone has these days.

In any event, enterprise file synchronization and sharing is emerging as one of the killer apps for cloud storage.  Up to this point cloud gateways made sense for SME backup or disaster recovery solutions but, IMO, didn't really take off beyond that space.  But if you can package a robust and secure file sharing and synchronization solution around cloud storage, then you just might have something that enterprise customers are clamoring for.

~~~~

Comments?

NFS ChampionsChart™ – chart-of-the-month

SCISFS120926-001, Q4-2012 NFS ChampionsChart(tm) (c) 2012 Silverton Consulting, Inc., All Rights Reserved

We had no new performance data to report on in our September StorInt™ newsletter, so we decided to publish our NAS Buying Guide ChampionsCharts™.   The chart above is our Q4-2012 NFS ChampionsChart, which shows the top-performing NFS storage systems from the published SPECsfs2008 benchmark results available at the time.

We split up all of our NAS and SAN ChampionsCharts into four quadrants: Champions, Sprinters, Marathoners and Slowpokes.  We feel that storage Champions represent the best overall performers; Sprinters have great response time but lack the transaction throughput of storage Champions; Marathoners have good transaction throughput but are deficient in responsiveness; and Slowpokes need to go back to the drawing board because they suffer both poor transaction throughput and poor responsiveness.
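The quadrant logic above amounts to two yes/no tests, one per axis. A minimal sketch (where what counts as "good" throughput or responsiveness is a judgment call, as discussed below):

```python
def champions_quadrant(throughput_ok, response_ok):
    """Map a system into the four ChampionsChart quadrants
    based on whether each metric clears its (subjective) bar."""
    if throughput_ok and response_ok:
        return "Champion"
    if response_ok:
        return "Sprinter"     # great response time, weaker throughput
    if throughput_ok:
        return "Marathoner"   # good throughput, weaker responsiveness
    return "Slowpoke"

print(champions_quadrant(True, False))   # Marathoner
print(champions_quadrant(False, True))   # Sprinter
```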

You may notice that there are two categories of systems identified in the NFS Champions quadrant above.  These represent the more "normal" NAS systems (numbered 1-7), such as integrated systems and NAS gateways with SAN storage behind them, vs. the more caching-oriented NAS systems (denoted with letters A-E), which have standalone NAS systems behind them.

In our Dispatch we discuss the top 3 NAS Champions in the integrated and gateway category which include:

  1. NetApp FAS6080 – although a couple of generations back, the FAS6080 did remarkably well for its complement of hardware.
  2. Huawei Symantec Oceanspace N8500 Clustered NAS – this product did especially well in response time for its assortment of hardware; its NFS throughput was not that great but still respectable.
  3. EMC Celerra VG8, 2 DM and VMAX hardware – similar to number one above, a relatively modest amount of cache and disk, but it seemed to perform relatively well.

One negative to all our ChampionsCharts is that they depend on audited, published performance data, which typically lags behind recent product introductions.  As evidence of this, the FAS6080 and Celerra VG8 listed above are at least a generation or two behind currently selling systems.  I am not as familiar with the Huawei system, but it may very well be a generation or two behind current products as well.

As for our rankings, these are purely subjective, but our feeling is that transaction performance comes first, with responsiveness a close second. For example, in the above ranking, Huawei's system had the best overall responsiveness but relatively poorer transaction performance than any of the other Champions.  Nonetheless, as the best in responsiveness, we felt it deserved the number two spot in our Champions list.

The full Champions quadrants for the NFS and CIFS/SMB ChampionsCharts are detailed in our NAS Buying Guide available for purchase on our website (please see NAS buying guide page).  The dispatch that went out with our September newsletter also detailed the top 3 CIFS/SMB Champions.

~~~~

The complete SPECsfs2008 performance report with both NFS and CIFS/SMB ChampionsCharts went out in SCI’s September newsletter.  But a copy of the report will be posted on our dispatches page sometime this month (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free newsletters by just using the signup form above right.

As always, we welcome any suggestions or comments on how to improve our SPECsfs2008 performance analysis or any of our other storage performance analyses.


EMC World 2012 Day 2a – VMware & VMAX


VMware’s vision

Paul Maritz, CEO of VMware, came up and gave his vision of the new transformations impacting the IT world today. It all starts with infrastructure transformation and VMware's build-out of the cloud infrastructure suite (stack).  Paul described the backend transformation provided by VMware as follows:

  • vSphere – providing virtualization, pooling and scheduling of resources across multiple physical boundaries,
  • vShield – providing software defined services across net, storage and server resources in the infrastructure,
  • vCloud Director – providing administration, self service and multi-tenancy for physical and virtual resources,
  • vCenter Operations Manager – providing automated monitoring and management of physical and virtual resources,
  • vFabric Data and Application Director(s) – providing app-aware and data (aware?) service provisioning.

 

Paul went on to discuss the frontend transformation, primarily through VMware View 5.1 and VMware's Horizon Suite, covering any display out there. He finished up talking about application transformation, keying on the Spring framework, the GemFire RAM database, and the CloudFoundry.org/.com open source cloud APIs.


VMAX enhancements

Brian Gallagher did a keynote on the changes to the VMAX product line, with the new 10K, 20K and 40K storage systems supporting ~1PB, ~2PB and over 4PB of capacity, respectively.

The new systems also support both 2.5″ and 3.5″ drives and will now support eMLC SSDs. Brian talked about the many millions of run hours they now have on FAST VP in enterprises around the world.

He also introduced the VMAX SP, a new storage service offering where EMC owns the equipment and sells storage QoS to the customer, with special SLAs associated with the storage.  Brian sees this as a step toward increasing IT agility, allowing quick-turnaround deployment of enterprise storage without the high acquisition cost and complexity.

Brian also talked about Federated Storage Tiering where VMAX can now incorporate other vendor storage as a storage tier with VMAX advanced functionality in front of it.

More on VMAX new enhanced hardware and software in our free monthly newsletter (sign up above right).

… more to come.

EMC World 2012 part 1 – VNX/VNXe

Plenty of announcements this morning from EMC World 2012. I'll try to group them into different posts.  Today's post covers the Unified Storage Division VNX/VNXe announcements:

  • The new VNXe3150, which fills out the lower end of the VNXe line and replaces the VNXe3100 (though the two will coexist for a while). The new storage system supports 2.5″ drives, has quad-core processing, now supports SSDs as a static storage tier (no FAST yet), has a 100-drive capacity, supports 3.5″ 3TB drives, and has dual 10GbE-port frontend interface cards.  The new system provides a 50% performance and capacity increase in the same rack space.
  • New VNX software now supports 256 read-writeable snapshots per LUN; previous limits were 8 (I think). EMC has also improved storage pooling for both VNX and VNXe, which now allows multiple types of RAID groups per pool (previously they all had to be the same RAID level) and rebalancing across RAID groups for better performance, plus new larger RAID 5 & 6 groups (why?).   They now offer better storage analytics with VMware vCenter, providing impressive integration with FAST VP and FAST Cache, supplying performance and capacity information, and allowing faster diagnosis and resolution of storage issues under VMware.

Stay tuned, more to come I’m sure

 

EMC buys XtremIO

Wow, $430M for a $25M startup that's been around since 2009 and hasn't generated any revenue yet.  It probably compares well against Facebook's recent $1B acquisition of Instagram, but still, it seems a bit much.

It certainly signals a significant ongoing interest in flash storage in whatever form that takes. Currently EMC offers PCIe flash storage (VFCache), SSD options in VMAX and VNX, and has plans for a shared flash cache array (project: Thunder).  An all-flash storage array makes a lot of sense if you believe this represents an architecture that can grab market share in storage.

I have talked with XtremIO in the past, but they were pretty stealthy then (and still are as far as I can tell). There were not many details about their product architecture, specs on performance, interfaces or anything substantive. The only thing they told me then was that they were in the flash array storage business.

In a presentation to SNIA's BOD last summer I said that the storage industry is in revolution.  When a system of 20 or so devices can generate ~250K or more IOs/second with a single controller, simple interfaces, and solid state drives, we are not in Kansas anymore.

Can a million IOPS storage system be far behind?
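The rough arithmetic behind that question: if ~20 devices deliver ~250K IOPS, each device contributes roughly 12.5K IOPS, so a million-IOPS system would need only ~80 such devices — well within a few enclosures. (These are back-of-envelope figures from the numbers above, ignoring controller and interconnect limits.)

```python
# Back-of-envelope IOPS scaling from the figures quoted above
per_device = 250_000 / 20            # IOPS contributed per SSD
devices_for_1m = 1_000_000 / per_device  # devices needed for 1M IOPS

print(per_device, devices_for_1m)  # 12500.0 80.0
```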

It seems to me that delivering enterprise storage performance has gotten much easier over the last few years.  That doesn't mean enterprise storage reliability, availability or features have, but just getting to that level of performance used to take thousands of disk drives and racks of equipment.  Today, you can almost do it in a 2U enclosure, and that's without breaking a sweat.

And that seems to be the point: with a gaggle of startups all vying after SSD storage in one form or another, the market is starting to take notice.  Maybe EMC felt that it was a good time to enter the market with its own branded product; they seem to already have all the other bases covered.

Their website mentions that XtremIO was a load-balanced, deduplicated, clustered storage system with enterprise-class services (which could mean anything). Nonetheless, a deduplicating, clustered SSD storage system built out of commodity servers could describe at least 3 other SSD startups I have recently talked with and a bunch I haven't talked with in a while.

Why EMC decided that XtremIO was the one to buy is somewhat of a mystery.  There was some mention of an advanced data protection scheme for the flash storage, but no real details.

Nonetheless, enterprise SSD storage with a relatively low valuation and the potential to disrupt enterprise storage might be something to invest in.  Certainly EMC felt so.

~~~~

Comments? Anyone know anything more about XtremIO?