The smartphone research was done by NEC. They took an Android phone and modified the OS to use an external memory card for all App data needs.
Then they ran a number of Apps through their paces with various external memory cards. It turned out that, depending on the memory card in use, the mobile phone's email and Twitter Apps launched 2-3X faster. Also, the native web App was tested with over 50 page loads and showed, at best, a 3X faster page load time.
All the tests were done over a cable that simulated network connections faster than today's wireless capabilities, to eliminate networking as the performance bottleneck. In the end, faster networking didn't have as much bearing on App performance as memory card speed.
(NAND) memory card performance
The problem, it turns out, is data writes. The non-volatile memory used in most external memory cards is NAND flash, which, as we all know, has a much slower write time than read time, almost 1000X slower (see my post on Why SSD performance is such a mystery). Most likely the memory cards are pretty "dumb", so many of the performance-boosting techniques used in enterprise-class SSDs (e.g., DRAM write buffering) are not available.
Data caching helps
The researchers did another experiment with the phone, using a more sophisticated form of data caching and a modified Facebook App. Presumably, this new data caching minimized the data-write penalty by caching writes in DRAM first and only destaging data to NAND flash when absolutely necessary. With the more sophisticated caching in place, they were able to speed up the modified Facebook App by 4X.
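To make the write-buffering idea concrete, here is a minimal sketch of a write-back cache along the lines described above. It is purely illustrative, assuming a simple LRU destage policy; it is not NEC's implementation, and the `flash` object with its `read`/`write` methods is a hypothetical stand-in for the slow NAND media.

```python
# Minimal write-back cache sketch: writes land in DRAM and are only destaged
# to (slow) NAND when the DRAM buffer overflows or on an explicit flush.
# Illustrative only -- not NEC's or any vendor's actual code.
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, flash, capacity=1024):
        self.flash = flash          # hypothetical backing store with slow writes
        self.capacity = capacity    # max number of blocks buffered in DRAM
        self.dram = OrderedDict()   # block_id -> data, kept in LRU order

    def write(self, block_id, data):
        # Absorb the write in DRAM; no NAND write on the fast path.
        self.dram[block_id] = data
        self.dram.move_to_end(block_id)
        if len(self.dram) > self.capacity:
            # Destage the least-recently-used block to NAND.
            victim_id, victim_data = self.dram.popitem(last=False)
            self.flash.write(victim_id, victim_data)

    def read(self, block_id):
        # Serve from DRAM if present, otherwise fall back to NAND.
        if block_id in self.dram:
            self.dram.move_to_end(block_id)
            return self.dram[block_id]
        return self.flash.read(block_id)

    def flush(self):
        # Destage everything, e.g., before power-down.
        for block_id, data in self.dram.items():
            self.flash.write(block_id, data)
        self.dram.clear()
```

The fast path never touches NAND, which is exactly where the launch-time penalty comes from when writes go straight to the memory card.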
It seems that storage sophistication matters even in smartphones. I think I am going to need to have someone port the caching portions of Data ONTAP® or Enginuity™ to run on my iPhone.
Although technically Projects Lightning and Thunder represent some interesting offshoots of EMC software, hardware and system prowess, I wonder why they decided to go after this particular market space.
There are plenty of alternative offerings in the PCIe NAND memory card space. Moreover, the PCIe card caching functionality, while interesting, is not that hard to replicate, and such software capability is not a serious barrier to entry for HP, IBM, NetApp and many, many others. And the margins cannot be that great.
So why get into this low margin business?
I can see a couple of reasons why EMC might want to do this.
Believing in the commoditization of storage performance. I have had this debate with a number of analysts over the years, but there remain many out there who firmly believe that storage performance will become a commodity sooner rather than later. By entering the PCIe NAND card IO buffer space, EMC can create a beachhead in this movement that helps them build market awareness, higher manufacturing volumes, and support expertise. As such, when the inevitable happens and high margins for enterprise storage start to deteriorate, EMC will be able to capitalize on this hard-won operational effectiveness.
Moving up the IO stack. From an application's IO request to the disk device that actually services it is a long journey with multiple places to make money. Currently, EMC has a significant share of everything that happens after the fabric switch, whether it is FC, iSCSI, NFS or CIFS. What they don't have is a significant share in the switch infrastructure or anywhere on the other (host) side of that interface stack. Yes, they have Avamar, Networker, Documentum, and other software that help manage, secure and protect IO activity, together with other significant investments in RSA and VMware. But these represent adjacent market spaces rather than primary IO stack endeavors. Lightning represents a hybrid software/hardware solution that moves EMC up the IO stack to inside the server. As such, it represents yet another opportunity to profit from all the IO going on in the data center.
Making big data more effective. The fact that Hadoop doesn't really need or use high-end storage has not been lost on most storage vendors. With Lightning, EMC has a storage enhancement offering that can readily improve Hadoop cluster processing. Something like Lightning's caching software could easily be tailored to enhance HDFS file access and thus speed up cluster processing. If Hadoop and big data are to be the next big consumer of storage, then speeding up cluster processing will certainly help, and profiting by doing so only makes sense.
Believing that SSDs will transform storage. To many of us the age of disks is waning. SSDs, in some form or another, will be the underlying technology for the next age of storage. The densities, performance and energy efficiency of current NAND-based SSD technology are commendable, and they will only get better over time. The capabilities brought about by such technology will certainly transform the storage industry as we know it, if they haven't already. But where SSD technology actually emerges is still being played out in the marketplace. Many believe that when industry transitions like this happen, it's best to be engaged everywhere change is likely to happen, hoping that at least some of those bets will succeed. PCIe SSD cards may not take over all server IO activity, but if they do, not being there or being late will certainly hurt a company's chances to profit from it.
There may be more reasons I missed here, but these seem to be the main ones. Of the above, I think the last one, that SSDs rule the next transition, is most important to EMC.
They have been successful during other industry transitions in the past. If anything, their acquisitions show a similar pattern of buying into transitions they don't already own; witness Data Domain, RSA, and VMware. So I suspect the view inside EMC is that doubling down on SSDs will enable them to ride out the next storm and be in a profitable place for the next change, whatever that might be.
And following Lightning, Project Thunder
Similarly, Project Thunder seems to represent EMC doubling their bet yet again on SSDs. Just about every month I talk to another storage startup coming out in the market with another new take on storage using every form of SSD imaginable.
However, Project Thunder as envisioned today is not storage, but rather some form of external shared memory. I have heard this before, in the IBM mainframe space about 15-20 years ago. At that time shared external memory was going to handle all mainframe IO processing and the only storage left was going to be bulk archive or migration storage – a big threat to the non-IBM mainframe storage vendors at the time.
One problem then was that the shared DRAM memory of the time was way more expensive than sophisticated disk storage, and the price wasn't coming down fast enough to counteract increased demand. The other problem was that making shared memory work with all the existing mainframe applications was not easy. IBM at least had control over the OS, hardware and most of the larger applications at the time. Yet they still struggled to make it usable and effective; there's probably a lesson here for EMC.
Fast forward 20 years and NAND based SSDs are the right hardware technology to make inexpensive shared memory happen. In addition, the road map for NAND and other SSD technologies looks poised to continue the capacity increase and price reductions necessary to compete effectively with disk in the long run.
However, the challenges then and now seem to have as much to do with the software that makes shared external memory universally effective as with the hardware technology to implement it. Providing a new storage tier in Linux, Windows and/or VMware is easier said than done. Most recent successes have usually been offshoots of SCSI (iSCSI, FCoE, etc.). Nevertheless, if it was good for mainframes then, it's certainly good for Linux, Windows and VMware today.
And that seems to be where Thunder is heading, I think.
(SCISFS111221-001) (c) 2011 Silverton Consulting, All Rights Reserved
[We are still catching up on our charts for the past quarter but this one brings us up to date through last month]
There's just something about a million SPECsfs2008(r) NFS throughput operations per second that kind of excites me (weird, I know). Yes, it takes 44 nodes of Avere FXT 3500 with over 6TB of DRAM cache, 140 nodes of EMC Isilon S200 with almost 7TB of DRAM cache and 25TB of SSDs, or at least 16 nodes of NetApp FAS6240 in Data ONTAP 8.1 cluster mode with 8TB of FlashCache to get to that level.
Nevertheless, a million NFS throughput operations is something worth celebrating. It’s not often one achieves a 2X improvement in performance over a previous record. Something significant has changed here.
The age of scale-out
We have reached a point where scaling systems out can provide linear performance improvements, at least up to a point. For example, the EMC Isilon and NetApp FAS6240 had a close-to-linear speed-up in performance as they added nodes, indicating (to me at least) there may be more there if they just throw more storage nodes at the problem. Although maybe they saw some drop-off and didn't wish to show the world, or the costs became prohibitive and they had to stop someplace. On the other hand, Avere only benchmarked a 44-node system with their current hardware (FXT 3500); they must have figured winning the crown was enough.
However, I would like to point out that throwing just any hardware at these systems doesn't necessarily increase performance. Previously (see my CIFS vs NFS corrected post), we had shown the linear regression of NFS throughput against spindle count, and although the regression coefficient was good (~R**2 of 0.82), it wasn't perfect. And of course we eliminated any SSDs from that prior analysis. (We probably should consider eliminating any system with more than a TB of DRAM as well – but that was before the 44-node Avere result was out.)
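For anyone who wants to reproduce that sort of analysis, a least-squares fit of NFS throughput against spindle count takes only a few lines. The data points below are made-up placeholders for illustration; the real inputs would come from the published SPECsfs2008 submissions.

```python
# Least-squares fit of NFS throughput (ops/sec) against disk spindle count.
# The data points are invented placeholders; substitute published SPECsfs2008
# results to reproduce the analysis.
import numpy as np

spindles   = np.array([100, 224, 448, 672, 1152, 3360], dtype=float)
throughput = np.array([30e3, 60e3, 120e3, 190e3, 260e3, 1.1e6], dtype=float)

slope, intercept = np.polyfit(spindles, throughput, 1)
predicted = slope * spindles + intercept

# Coefficient of determination (R**2) for the linear fit.
ss_res = ((throughput - predicted) ** 2).sum()
ss_tot = ((throughput - throughput.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot

print(f"ops/sec per spindle: {slope:.1f}, R**2: {r_squared:.2f}")
```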
Speaking of disk drives, the FAS6240 nodes each had 72 450GB 15Krpm disks, the Isilon nodes each had 24 300GB 10Krpm disks, and each Avere node had 15 600GB 7.2Krpm SAS disks. However, the Avere system also had 4 Solaris ZFS file storage systems behind it, each of which had another 22 3TB (7.2Krpm, I think) disks. Given all that, the 16-node NetApp system, the 140-node Isilon and the 44-node Avere systems had a total of 1152, 3360 and 748 disk drives respectively. Of course, this doesn't count the system disks for the Isilon and Avere systems, nor any of the SSDs or FlashCache in the various configurations.
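For reference, those totals follow directly from the per-node drive counts:

```python
# Data disk totals per configuration (system disks, SSDs and FlashCache excluded).
netapp = 16 * 72              # 16 FAS6240 nodes x 72 drives            = 1152
isilon = 140 * 24             # 140 Isilon S200 nodes x 24 drives       = 3360
avere  = 44 * 15 + 4 * 22     # 44 FXT 3500 nodes x 15 drives
                              #   + 4 ZFS backend systems x 22 drives   =  748
print(netapp, isilon, avere)  # -> 1152 3360 748
```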
I would say that with this round of SPECsfs2008 benchmarks, scale-out NAS systems have arrived. It's too bad that neither NetApp nor Avere released comparable CIFS benchmark results, which would have helped in my perennial discussion on CIFS vs. NFS.
But there’s always next time.
~~~~
The full SPECsfs2008 performance report went out to our newsletter subscribers last December. A copy of the full report will be up on the dispatches page of our site sometime later this month (if all goes well). However, you can see our full SPECsfs2008 performance analysis now and subscribe to our free monthly newsletter to receive future reports directly by just sending us an email or using the signup form above right.
For a more extensive discussion of file and NAS storage performance covering top 30 SPECsfs2008 results and NAS storage system features and functionality, please consider purchasing our NAS Buying Guide available from SCI’s website.
As always, we welcome any suggestions on how to improve our analysis of SPECsfs2008 results or any of our other storage system performance discussions.
Merry Christmas! Buon Natale! Frohe Weihnachten! by Jakob Montrasio (cc) (from Flickr)
Happy Holidays.
I ranked my blog posts using a ratio of hits to post age and have identified the top 10 most popular posts for 2011 (so far):
vSphere 5 storage enhancements – We discuss some of the more interesting storage-oriented vSphere 5 announcements, which included a new DAS storage appliance, a host-based (software) replication service, storage DRS and other capabilities.
Intel's 320 SSD 8MB problem – We discuss a recent bug (since fixed) which left the Intel 320 SSD drive with only 8MB of storage; we presumed the bug was in the wear-leveling/block-mapping logic of the drive controller.
How has IBM research changed – We examine some of the changes at IBM Research that have occurred over the past 50 years or so, which have led to much more productive research results.
HDS buys BlueArc – We consider the implications of the recent acquisition of BlueArc storage systems by their major OEM partner, Hitachi Data Systems.
Will Hybrid drives conquer enterprise storage – We discuss the unlikely possibility that Hybrid drives (NAND/Flash cache and disk drive in the same device) will be used as backend storage for enterprise storage systems.
SNIA CDMI plugfest for cloud storage and cloud data services – We were invited to sit in on a recent SNIA Cloud Data Management Initiative (CDMI) plugfest and talk to some of the participants about where CDMI is heading and what it means for cloud storage and data services.
Is FC dead?! – What with the introduction of 40GbE FCoE just around the corner, 10GbE cards coming down in price and Brocade’s poor YoY quarterly storage revenue results, we discuss the potential implications on FC infrastructure and its future in the data center.
~~~~
I would have to say #3, 5, and 9 were the most fun for me to do. Not sure why, but #10 probably generated the most twitter traffic. Why the others were so popular is hard for me to understand.
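For the curious, the ranking metric was nothing fancier than hits divided by post age. A quick sketch of the math (the hit counts and dates below are invented for illustration):

```python
# Rank posts by hits divided by post age in days.
# Hit counts and publication dates here are made-up placeholders.
from datetime import date

posts = {
    "vSphere 5 storage enhancements": (15000, date(2011, 7, 15)),
    "Is FC dead?!":                   (9000,  date(2011, 10, 3)),
}

today = date(2011, 12, 20)
score = {title: hits / (today - posted).days
         for title, (hits, posted) in posts.items()}
for title in sorted(score, key=score.get, reverse=True):
    print(f"{score[title]:8.1f}  {title}")
```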
I have been wondering for some time now how it is that a company known for its cutting-edge research but lack of product breakthroughs has transformed itself into an innovation machine.
There has been a sea change in research at IBM that is behind the recent productization of its technology.
Talking over the past couple of days with various IBMers at STG's Smarter Computing Forum, I have formulated a preliminary hypothesis.
At first I heard that there was a change in the way research is reviewed for product potential. Nowadays, it almost takes a business case for research projects to be approved and funded. And the business case for any project needs to contain a plan for how it will eventually reach profitability.
In the past it was often said that IBM invented a lot of technology but productized only a little of it. Much of their technology would emerge in other people's products, and IBM would not receive anything for their efforts (other than some belated recognition for their research contribution).
Nowadays, it's more likely that research not productized by IBM is at least licensed from them after they have patented the crucial technologies that underpin the advance. But it's just as likely that, if it has something to do with IT, the project will end up as a product.
One executive at STG sees three phases to IBM research spanning the last 50 years or so.
Phase I The ivory tower:
IBM research during the Ivory Tower Era looked a lot like research universities but without the tenure of true professorships. Much of the research of this era was in materials and pure mathematics.
I suppose one example of this period was Mandelbrot and fractals. The work probably had a lot of applications, but few of them ended up in IBM products; mostly it advanced the theory and practice of pure mathematics/systems science.
Such research had little to do with the problems of IT or IBM's customers. The fact that it created pretty pictures and a way of seeing nature in a different light was an advance for mankind, but it didn't have much, if any, impact on IBM's bottom line.
Phase II Joint project teams
In IBM Research's Phase II, the decision process on which research to move forward included people not just from IBM Research but also from the product divisions. At least now there could be a discussion across IBM's various divisions on how the technology could enhance customer outcomes. I am certain profitability wasn't often discussed, but at least it was no longer purposefully ignored.
I suppose over time these discussions became more grounded in fact and business cases rather than just belief in the value of research for research's sake. Technology roadmaps and projects were now evaluated on how well they could impact customer outcomes and how such technology enabled new products and solutions to come to market.
Phase III Researchers and product people intermingle
The final step in IBM transformation of research involved the human element. People started moving around.
Researchers were assigned to the field and to product groups, and product people were brought into the research organization. By doing this, ideas could cross-fertilize, applications could be envisioned, and the last finishing touches needed by new technology could be envisioned, funded and implemented. This probably led to the most productive transition of researchers into product developers.
On the flip side, when researchers returned from their multi-year product/field assignments they brought a newfound appreciation of the problems encountered in the real world. That, combined with their in-depth understanding of where technology could go, helped show the path that could take research projects into new, more fruitful (at least to IBM customers) arenas. This movement of people provided the final piece in grounding research in areas that could solve customer problems.
In the end, many research projects at IBM may fail, but if they succeed they have the potential to change IT as we know it.
I heard today that there are 700 to 800 projects in IBM Research right now. If any of them have the potential we see in the products shown today, like Watson in healthcare and neuromorphic chips, exciting times are ahead.
Toyota Hybrid Synergy Drive Decal: RAC Future Car Challenge by Dominic's pics (cc) (from Flickr)
I saw that Seagate announced the next generation of their Momentus XT Hybrid (SSD & disk) drive this week. We haven't discussed Hybrid drives much on this blog, but they have become a viable product family.
I am not planning on describing the new drive specs here as there was an excellent review by Greg Schulz at StorageIOblog.
However, the question some in the storage industry have had is whether Hybrid drives can supplant data center storage. I believe the answer is no, and I will tell you why.
Hybrid drive secrets
The secret to Seagate's Hybrid drive lies in its FAST technology. It provides a sort of automated disk caching that moves frequently accessed OS or boot data to NAND/SSD, providing quicker access times.
Caching logic has been around in storage subsystems for decades now, ever since the IBM 3880 Mod 11 & 13 storage control units came out last century. However, these algorithms have gotten much more sophisticated over time and today can make a significant difference in storage system performance. This can easily be seen in the wide variance in storage system performance on a per-disk-drive basis (e.g., see my post on Latest SPC-2 results – chart of the month).
Enterprise storage use of Hybrid drives?
The problem with using Hybrid drives in enterprise storage is that caching algorithms depend on some predictability in access/reference patterns. When a Hybrid drive is directly connected to a server or a PC, it can see a significant portion of the server's IO (at least to the boot/OS volume), but more importantly, that boot/OS data is statically allocated, i.e., it doesn't move around all that much. This means that one PC session looks pretty much like the next, and as such, the hybrid drive can learn an awful lot about the next IO session just by remembering the last one.
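A toy model of that kind of drive-level learning might look like the sketch below: count accesses per LBA and pin the hottest blocks in NAND between sessions. This is purely illustrative, not Seagate's actual FAST implementation, and it also shows why the approach breaks down when something above the drive keeps remapping the data.

```python
# Toy model of hybrid-drive caching: track per-LBA read frequency and promote
# the most frequently accessed blocks into the NAND cache between sessions.
# Illustrative only -- not Seagate's actual FAST caching logic.
from collections import Counter

class HybridDriveModel:
    def __init__(self, nand_blocks=1000):
        self.nand_blocks = nand_blocks   # NAND cache size, in blocks
        self.access_counts = Counter()   # per-LBA read frequency
        self.nand_resident = set()       # LBAs currently cached in NAND

    def read(self, lba):
        self.access_counts[lba] += 1
        return "NAND hit" if lba in self.nand_resident else "disk read"

    def end_of_session_promote(self):
        # Between sessions, promote the hottest LBAs into NAND. If a storage
        # controller has relocated the data since, these counts point at the
        # wrong blocks -- which is the enterprise storage problem described
        # in the next paragraphs.
        hottest = self.access_counts.most_common(self.nand_blocks)
        self.nand_resident = {lba for lba, _ in hottest}
```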
However, enterprise storage IO changes significantly from one storage session (day?) to another. Not only are the end-user generated database transactions moving around the data, but the data itself is much more dynamically allocated, i.e., moves around a lot.
Backend data movement is especially true for the automated storage tiering used in subsystems that contain both SSDs and disk drives. But it's also true in systems that map data placement using log-structured file systems; NetApp's Write Anywhere File Layout (WAFL) is a prominent example of this approach, but other storage systems do this as well.
In addition, any fixed, permanent mapping of a user data block to a physical disk location is becoming less useful over time as advanced storage features make dynamic or virtualized mapping a necessity. Just consider snapshots based on copy-on-write technology: all it takes is a write for a snapshot block to be moved to a different location.
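To make that concrete, here is a minimal, generic copy-on-write sketch (not any particular vendor's implementation) showing how a single host write relocates a snapshot's copy of a block:

```python
# Generic copy-on-write snapshot sketch: when a block shared with a snapshot
# is overwritten, the old contents are first copied to a new physical
# location for the snapshot -- so the block's physical address changes even
# though the host only issued a plain write.
class CowVolume:
    def __init__(self):
        self.physical = {}     # physical block number -> data
        self.active_map = {}   # logical block -> physical block (live volume)
        self.snapshots = []    # list of frozen logical->physical maps
        self.next_pbn = 0

    def _allocate(self, data):
        pbn = self.next_pbn
        self.physical[pbn] = data
        self.next_pbn += 1
        return pbn

    def take_snapshot(self):
        # A snapshot is just a frozen copy of the current block map.
        self.snapshots.append(dict(self.active_map))

    def write(self, lbn, data):
        old_pbn = self.active_map.get(lbn)
        if old_pbn is None:
            self.active_map[lbn] = self._allocate(data)
            return
        sharers = [s for s in self.snapshots if s.get(lbn) == old_pbn]
        if sharers:
            # Copy the old contents to a new location so the snapshots keep
            # them -- the snapshot block just moved.
            new_pbn = self._allocate(self.physical[old_pbn])
            for snap in sharers:
                snap[lbn] = new_pbn
        self.physical[old_pbn] = data   # then overwrite the original in place
```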
Nonetheless, the main problem is that all the smarts about what is happening to data on backend storage lie primarily at the controller level, not at the drive level. This applies not only to data mapping but also to end-user/application data access, as cache hits are never even seen by the drive. As such, Hybrid drives alone don't make much sense in enterprise storage.
Maybe, if they were intricately tied to the subsystem
I guess one way this could all work better is if the Hybrid drive caching logic were somehow controlled by the storage subsystem. In this way, the controller could provide hints as to which disk blocks to move into NAND. Perhaps this is a way to distribute storage tiering activity to the backend devices, without the subsystem having to do any of the heavy lifting, i.e., the hybrid drives would do all the data movement under the guidance of the controller.
I don't think this is likely because it would take industry standardization to define any new "hint" commands, and they would be specific to Hybrid drives. Barring standards, it's an interface between one storage vendor and one drive vendor. That would probably be OK if you made both the storage subsystem and the hybrid drives, but there aren't any vendors left that do both drives and storage controllers.
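Purely as a thought experiment, such a hint interface might need only a handful of commands. Everything below is invented for illustration; no such standard command set exists today.

```python
# Hypothetical controller-to-drive "hint" interface. Every name and field
# here is invented for illustration -- there is no such standard today.
from dataclasses import dataclass
from enum import Enum

class HintOp(Enum):
    PROMOTE_TO_NAND = 1   # controller expects these blocks to be hot soon
    DEMOTE_TO_DISK = 2    # controller no longer cares about these blocks
    PIN = 3               # keep resident in NAND until explicitly unpinned

@dataclass
class CacheHint:
    op: HintOp
    start_lba: int
    block_count: int
    priority: int = 0     # higher = more important to the controller

def send_hint(drive, hint: CacheHint):
    # In a real system this would ride over SAS/SATA as a vendor-specific
    # command; here it is just a method call on a drive model object.
    drive.handle_hint(hint)
```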
~~~~
So, given the state of enterprise storage today and its continuing proclivity to move data around across its backend storage, I believe Hybrid drives won't be used in enterprise storage anytime soon.
HDS CEO Jack Domme shares the company’s vision and strategy with Influencer Summit attendees #HDSday by HDScorp
Attended #HDSday yesterday in San Jose and listened to what seemed like the majority of the executive team. The festivities were MCed by Asim Zaheer, VP Corp and Product Marketing, a long-time friend and employee who came to HDS with the acquisition of Archivas five or so years ago. Some highlights of the day's sessions are included below.
The first presenter was Jack Domme, HDS CEO, and his message was that there is a new, more aggressive HDS, focused on executing and growing the business.
Jack said there will be almost half a ZB of data by 2015, and ~80% of that will be unstructured data. HDS firmly believes that much of this growing body of data today lives in silos, locked into application environments, and can't become true information until it is liberated from those boxes. Getting information out of unstructured data is one of the key problems facing the IT industry.
To that end, Jack talked about the three clouds appearing on the horizon:
infrastructure cloud – cloud as we know and love it today, where infrastructure services can be paid for on a per-use basis, and where data and applications move seamlessly across various infrastructure boundaries.
content cloud – this is somewhat new, but here we take on the governance, analytics and management of the millions to billions of pieces of content, using the infrastructure cloud as a basic service.
information cloud – the end game, where any and all data streams can be analyzed in real time to provide information and insight to the business.
Jack mentioned the example of Japan's earthquake earlier this year: the country automatically stopped all operating trains to prevent further injury and accidents until the extent of track damage could be assessed. Now this was a specialized example in a narrow vertical, but the idea is that the information cloud does that sort of real-time analysis of data streaming in all the time.
For much of the rest of the day the executive team filled out the details that surrounded Jack’s talk.
For example, Randy DeMont, Executive VP & GM Global Sales, Services and Support, talked about the new, more focused sales team. One that has moved to concentrate on better opportunities and expanded to take on new verticals/new emerging markets.
Then Brian Householder, SVP WW Marketing and Business Development, got up and talked about some of the key drivers of their growth:
The current economic climate has everyone doing more with less. Hitachi's VSP and storage virtualization are uniquely positioned to obtain more value out of current assets; it's not a rip-and-replace strategy. With VSP, one layers better management on top of current infrastructure, which helps get more done with the same equipment.
Focus on the channel and on verticals is starting to pay off. More than 50% of HDS revenues now come from indirect channels. Also, healthcare and life sciences are starting to emerge as crucial verticals for HDS.
Scalability of their storage solutions is significant. It used to be that a PB was a good-sized data center, but these days we are starting to talk about multiple PBs and even much more. I think Jack mentioned earlier that in the next couple of years HDS will see their first 1EB customer.
Mike Gustafson, SVP & GM NAS (former CEO of BlueArc), got up and talked about the long and significant partnership between the two companies regarding their HNAS product. He mentioned that ~30% of BlueArc's revenue came from HDS. He also talked about some of the verticals in which BlueArc had done well, such as eDiscovery and Media and Entertainment. Now these verticals will become new focus areas for HDS storage as well.
John Mansfield, SVP Global Solutions Strategy and Development, came up and talked about the successes they have had in the product arena. Apparently they have over 2000 VSPs installed (a product announced just a year ago), and over 50% of the new systems are going in with virtualization. When asked later what has led to the acceleration in virtualization adoption, the consensus view was that server virtualization and, in general, doing more with less (storage efficiency) were driving increased use of this capability.
Hicham Abdessamad, SVP Global Services, got up and talked about what has been happening on the services end of the business. Apparently there has been a serious shift in HDS services revenue from break-fix over to professional services (PS). Such service offerings now include taking over customer data center infrastructure and leasing it back to the customer for a monthly fee. Hicham reiterated that ~68% of all IT initiatives fail, while 44% of those that succeed are completed over time and/or over budget. HDS is providing professional services to help turn this around. His main problem is finding experienced personnel to help deliver these services.
After this there was a Q&A panel with John Mansfield's team: Roberto Bassilio, VP Storage Platforms and Product Management; Sean Moser, VP Software Products; and Scan Putegnat, VP File and Content Services, CME. There were a number of questions, one of which was on the floods in Thailand and their impact on HDS's business.
Apparently, the flood problems are causing supply disruptions in the consumer end of the drive market and are not having serious repercussions for their enterprise customers. But they did mention that they were nudging customers to purchase the right form factor (LFF?) disk drives while the supply problems work themselves out.
Also, there was some indication that HDS would be going after more SSD and/or NAND flash capabilities similar to other major vendors in their space. But there was no clarification of when or exactly what they would be doing.
After lunch the GMs of all the Geographic regions around the globe got up and talked about how they were doing in their particular arena.
Jeff Henry, SVP & GM Americas, talked about their success in the F500 and some of the emerging markets in Latin America. In fact, they have been so successful in Brazil that they had to split the country into two regions.
Niels Svenningsen, SVP & GM EMEA, talked about the emerging markets in his area of the globe, primarily Eastern Europe, Russia and Africa. He mentioned that many believe Africa will be the next area to take off, like Asia did in the last couple of decades of the last century. Apparently there are a billion people in Africa today.
Kevin Eggleston, SVP & GM APAC, talked about the high rate of server and storage virtualization and the explosive growth and heavy adoption of cloud pay-as-you-go services. His major growth areas were India and China.
The rest of the afternoon was NDA presentations on future roadmap items.
—-
All in all a good overview of HDS’s business over the past couple of quarters and their vision for tomorrow. It was a long day and there was probably more than I could absorb in the time we had together.
The NexGen n5 Storage System (c) 2011 NexGen Storage, All Rights Reserved
NexGen comes out of stealth
NexGen Storage, a local storage company, came out of stealth today, and their product is now generally available. Their storage system has been in beta since April 2011 and is in use by a number of customers today.
Their product uses DRAM caching, PCIe NAND flash, and nearline SAS drives to provide guaranteed QoS for LUN I/O. The system can provision IOP rate, bandwidth and (possibly) latency over a set of configured LUNs. Such provisioning can change via time-based policy management to support time-based tiering. Also, one can prioritize how important QoS is for a LUN, so that it can be guaranteed or can be sacrificed to support performance for other storage system LUNs.
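A hypothetical policy definition along those lines might look like the sketch below. The class and field names are invented for illustration; NexGen's actual management interface may look nothing like this.

```python
# Hypothetical per-LUN, time-based QoS policy, along the lines described
# above. All names here are invented; this is not NexGen's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosTarget:
    iops: int                      # provisioned IOP rate
    bandwidth_mb_s: int            # provisioned bandwidth
    latency_ms: Optional[float]    # optional latency target
    guaranteed: bool               # True = never sacrificed for other LUNs

@dataclass
class TimeWindowPolicy:
    start_hour: int                # 24h clock, when this target takes effect
    end_hour: int
    target: QosTarget

# Example: a database LUN gets guaranteed performance during business hours
# and relaxes overnight so other LUNs (backups, batch jobs) get the headroom.
lun42_policy = [
    TimeWindowPolicy(8, 18, QosTarget(iops=20000, bandwidth_mb_s=400,
                                      latency_ms=5.0, guaranteed=True)),
    TimeWindowPolicy(18, 8, QosTarget(iops=5000, bandwidth_mb_s=100,
                                      latency_ms=None, guaranteed=False)),
]
```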
The NexGen storage system is a multi-tiered hybrid storage system that supports 10GbE iSCSI and uses an MLC NAND PCIe card to boost performance for SAS nearline drives. NexGen also supports data deduplication, which is done during off-peak times to reduce data footprint.
DRAM replacing Disk!?
In a report by Ars Technica, a research group out of Stanford is attempting to gang together server DRAM to create a networked storage system. There have been a number of attempts to use DRAM as a storage system in the past, but the Stanford group is going after it in a different way, by aggregating DRAM across a gaggle of servers. They are using standard disks or SSDs for backup purposes, because DRAM is, of course, a volatile storage device, but the intent is to keep all data in memory to speed up performance.
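Conceptually (and very loosely; this is a toy sketch, not the Stanford code), pooling server DRAM into one storage service amounts to a sharded in-memory store with a disk or SSD log behind it for durability:

```python
# Toy sketch of pooling DRAM across servers: keys hash to a node, data lives
# in that node's memory, and each write is also appended to a local disk/SSD
# log so it survives a power loss. Not the Stanford implementation.
import hashlib

class MemoryNode:
    def __init__(self, log_path):
        self.store = {}                    # all data held in DRAM
        self.log = open(log_path, "ab")    # disk/SSD backup for recovery

    def put(self, key, value: bytes):
        self.store[key] = value
        self.log.write(key.encode() + b"\x00" + value + b"\n")
        self.log.flush()                   # durability costs a backend write

    def get(self, key):
        return self.store.get(key)         # reads served entirely from memory

class DramCluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def _node_for(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key).put(key, value)

    def get(self, key):
        return self._node_for(key).get(key)
```

In practice the hard part is exactly what the post goes on to describe: the networking has to be low-latency enough that the DRAM speed actually shows through.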
I was at SNW USA a couple of weeks ago talking to a Taiwanese company that was offering a DRAM storage accelerator device which also used DRAM as a storage service. Of course, Texas Memory Systems and others have had DRAM based storage for a while now. The cost for such devices was always pretty high but the performance was commensurate.
In contrast, the Stanford group is trying to use commodity hardware (servers) with copious amounts of DRAM to create a storage system. The article seems to imply that the system could take advantage of unused DRAM sitting around your server farm. But I find that hard to believe. Most virtualized server environments today are running lean on memory, and there shouldn't be a lot of excess DRAM capacity hanging around.
The other Achilles' heel of the Stanford DRAM storage is that it is highly dependent on low-latency networking. Although InfiniBand probably qualifies as low latency, it's not low-latency enough to support this system's IO workloads. As such, they believe they need even lower-latency networking than InfiniBand to make it work well.
Speaking of PCIe NAND flash, OCZ just announced speedier storage, upping random read IO rates to 245K IOPS from the 230K IOPS offered in their previous PCIe NAND storage. It's unclear what they did to boost this, but it's entirely possible they have optimized their NAND controller to support more random reads.
OCZ's been busy. Now that the enterprise is moving to adopt MLC and eMLC SSD storage, it seems time to introduce TLC (3-bits/cell) SSDs. With TLC, the price should come down a bit more (see the chart in the article), but the endurance should also suffer significantly. I suppose with the capacities available with TLC and enough over-provisioning, OCZ can make a storage device that would be reliable enough for certain applications at a more reasonable cost.
I never thought I would see MLC in enterprise storage, so I suppose at some point even TLC makes sense, but I would be even more hesitant to jump on this bandwagon for a while yet.
Early last week SolidFire, another local SSD startup, obtained $25M in additional funding. SolidFire, an all-SSD storage system company, is still technically in beta but expects general availability near the end of the year. We haven't talked about them before on RayOnStorage, but they are focusing on cloud service providers with an all-SSD solution which includes deduplication. I promise to talk about them some more when they reach GA.
Finally, in the high-end consumer space, LaCie just released a new SSD which attaches to servers/desktops using the new Apple-Intel Thunderbolt IO interface. Given the expense (~$900 for a 128GB SSD), it seems a bit much, but if you absolutely have to have the performance, this may be the only way to go.
—-
Well, that's about all I could find on SSD and DRAM storage announcements. However, I am sure I missed a couple, so if you know of one I should have mentioned, please comment.