What does 100TB of flash need with a new ASIC? And why would you implement a real-time analytics data engine using an object storage interface on flash?
It seems the new company purchased by EMC called DSSD is up to its eyebrows in ASIC design to implement a lightning-fast object store to deal with the needs of real-time analytics. Earlier today there was a slide on the overheads: the standard 25 µsec of OS overhead for the standard POSIX file stack, plus the typical 300 µsec of SSD overhead to perform an I/O. But as we learned a couple of weeks ago at SFD5 with Diablo Technologies' MCS and SanDisk's new UltraDIMM, they have reduced the SSD overhead to 5 µsec by using memory channels, so now the OS overhead is 5x the overhead of the storage itself.
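A quick back-of-the-envelope on those numbers makes the point. This is just a sketch using the 25/300/5 µsec figures quoted above; the function and constant names are mine:

```python
# Per-I/O latency figures from the slides, in microseconds.
OS_OVERHEAD = 25    # POSIX file stack overhead per I/O
SSD_LATENCY = 300   # typical SSD device latency
MCS_LATENCY = 5     # SSD over the memory channel (Diablo MCS / UltraDIMM)

def breakdown(device_us, os_us=OS_OVERHEAD):
    """Return total per-I/O latency and the OS's share of it."""
    total = os_us + device_us
    return total, os_us / total

ssd_total, ssd_os_share = breakdown(SSD_LATENCY)
mcs_total, mcs_os_share = breakdown(MCS_LATENCY)

print(f"SSD: {ssd_total} usec total, OS is {ssd_os_share:.0%} of the I/O")
print(f"MCS: {mcs_total} usec total, OS is {mcs_os_share:.0%} of the I/O")
print(f"OS overhead is {OS_OVERHEAD / MCS_LATENCY:.0f}x the device latency")
```

With a conventional SSD the OS stack is under 10% of each I/O; once the device drops to 5 µsec, the OS stack dominates, which is exactly why the software path becomes the thing to redesign.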
So what's one to do? In the case of Diablo's MCS, software converts DAS I/O to memory-channel I/O, and an ASIC converts the memory-channel I/O back to SATA disk I/O.
Not sure what DSSD does, but if I were to design a new ASIC for the memory channel, I would want something that speaks a memory interface and scales out to 100TBs of flash. At the software layer, maybe we could present object storage interfaces to the applications.
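To make that last idea concrete, an object storage interface means applications address data by key rather than through the POSIX file stack. Here's a toy sketch of what such an application-facing API could look like; the class and method names are entirely hypothetical and nothing here is DSSD's actual interface:

```python
# Toy in-memory stand-in for a flat key -> blob store over flash.
# (Hypothetical sketch; a real device would map keys to flash locations
# and bypass the POSIX file stack entirely.)
class FlashObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        """Store a blob under a flat key; no directories, no file handles."""
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        """Fetch a blob by key."""
        return self._objects[key]

store = FlashObjectStore()
store.put("events/2014-05-06", b"\x01\x02\x03")
print(store.get("events/2014-05-06"))  # b'\x01\x02\x03'
```

The appeal for real-time analytics is that a flat put/get interface carries none of the per-I/O bookkeeping (open, seek, metadata) that makes up that 25 µsec OS overhead.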
Going to learn more throughout the day…
[Learned on day 2 that it's more likely shared PCIe SSD storage. OK, it's not 5 µsec latency storage, but it's still faster than networked storage.]