DataDirect Networks WOS cloud storage

DataDirect Networks (DDN) announced a new private cloud storage offering this week. The new Web Object Scaler (WOS) is a storage appliance that can be clustered across multiple sites and offers a single global file name space spanning all of them. The WOS cloud also supports policy-based file replication and distribution across sites for redundancy and/or load balancing purposes.

DDN’s press release said a WOS cloud can service up to 1 million random file reads per second. They did not indicate the number of nodes required to sustain this level of performance and they didn’t identify the protocol that was used to do this. The press release implied low-latency file access but didn’t define what they meant here. 1M file reads/sec doesn’t necessarily mean they are all read quickly. Also, there appears to be more work for a file write than a file read and there is no statement on file ingest rate provided.

There are many systems out there touting a global name space. However, not many claim their global name space spans multiple sites. I suppose cloud storage would need to support such a facility to keep file names straight across sites. Nonetheless, such name space services would imply more overhead during file creation/deletion to keep everything consistent, plus metadata duplication/replication/redundancy to support it.
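To make the overhead concrete, here is a toy sketch of what cross-site name space bookkeeping implies: every create or delete must touch each site's metadata store. This is purely illustrative — the class, site names, and replication scheme are my own assumptions, not anything DDN has described.

```python
class GlobalNamespace:
    """Toy sketch of a multi-site global name space. Every file create or
    delete is applied to each site's metadata store, illustrating the extra
    bookkeeping (metadata replication) such a name space implies.
    Hypothetical design -- not DDN's actual implementation."""

    def __init__(self, sites):
        self.sites = {s: {} for s in sites}  # site -> {file name: object id}

    def create(self, name, object_id):
        # Creation must check and update every site to keep names straight.
        if any(name in md for md in self.sites.values()):
            raise FileExistsError(name)
        for md in self.sites.values():  # one metadata write per site
            md[name] = object_id

    def delete(self, name):
        # Deletion likewise touches every site's metadata.
        for md in self.sites.values():
            md.pop(name, None)

ns = GlobalNamespace(["New York", "San Francisco", "Arizona"])
ns.create("/videos/a.mp4", "obj-42")
# every site now resolves "/videos/a.mp4" to the same object id
```

Even this toy version shows why creates/deletes cost more than local-only namespaces: each operation fans out to every site.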

Many questions on how this all works together with NFS or CIFS but it’s entirely possible that WOS doesn’t support either file access protocol and just depends on HTTP GET and POST, or similar web services, to access files. Moreover, assuming WOS supports NFS or CIFS protocols, I often wonder why these sorts of announcements aren’t paired with a SPECsfs(r) 2008 benchmark report which could validate any performance claim at least at the NFS or CIFS protocol levels.
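If WOS really does rely on web-services-style access rather than NFS/CIFS, a client interaction might look something like the following sketch. To be clear, DDN hasn’t published the WOS API in the press release, so the host name, the /objects/&lt;id&gt; path scheme, and the verbs here are all my assumptions:

```python
from urllib.request import Request

# Hypothetical REST-style layout -- DDN has not published the WOS API,
# so the host name and /objects/<id> path scheme are illustrative only.
def get_object_request(host: str, object_id: str) -> Request:
    """Build (but do not send) an HTTP GET request for an object."""
    return Request(f"http://{host}/objects/{object_id}", method="GET")

def put_object_request(host: str, object_id: str, data: bytes) -> Request:
    """Build an HTTP PUT request to store an object's bytes."""
    return Request(f"http://{host}/objects/{object_id}", data=data, method="PUT")

req = get_object_request("wos.example.com", "0x1a2b3c")
# req.full_url -> "http://wos.example.com/objects/0x1a2b3c"
```

A scheme like this would sidestep NFS/CIFS entirely, which is exactly why a SPECsfs comparison might not apply.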

I talked to one media person a couple of weeks ago who said cloud storage is getting boring. There are a lot of projects (e.g., Atmos from EMC) out there targeting future cloud storage; I hope for their sake boring doesn’t mean no market exists for cloud storage.

2 Replies to “DataDirect Networks WOS cloud storage”

  1. First I’d like to say thank you for writing about WOS. You’ve asked several good questions about WOS in your post and here are the answers.

    1. “a WOS cloud can service up to 1 million random file reads per second. They did not indicate the number of nodes required to sustain this level of performance.” This performance level is achieved using 100 of our high-density WOS6000 nodes. An important fact is that these are true sustainable random file reads (not IOPS – see below) from disk. We’re not cheating by quoting cached or peak results. I just read that all of Amazon S3 now contains 50 billion files and handles 80,000 file requests per second, so that should give you an idea of the level of performance & scalability we can achieve with WOS.

    2. “they didn’t identify the protocol that was used to do this.” The network protocol is TCP/IP. The protocol used by applications to communicate with the WOS cloud is based on an API that we publish which provides very high performance direct I/O.

    3. “The press release implied low-latency file access but didn’t define what they meant here.” WOS maintains the locations of files within the cloud in memory distributed amongst the nodes. By doing so, WOS avoids the issue NAS systems have when storing large numbers of files in which they consume disk IOPS to read metadata before reading the actual file. This technology enables WOS to use only a single disk seek to retrieve each file, whereas a typical storage system requires multiple seeks for an equivalent file read operation. By eliminating disk seeks (each of which is high latency compared to a CPU or memory operation) we drastically cut the latency to retrieve a file. Additionally, WOS automatically delivers files from the least latent network path between the storage nodes and the requesting server. For example, if a file has replicas in New York and San Francisco and the requesting server is in Arizona, WOS will read the replica in San Francisco, which is closer to the server in Arizona.

    4. “1M file reads/sec doesn’t necessarily mean they are all read quickly.” In this case it does – WOS can do 1M all with very low latency and all directly from disk.

    5. “Also, there appears to be more work for a file write than a file read and there is no statement on file ingest rate provided.” That was an insightful observation. Indeed a file write requires two disk operations as we write both the file and journal information. Thus ingest (file writes per second) performance is not as high as read performance, but is still VERY high. Our target customers are generally in ingest-light (relative to the read workload), read-heavy environments, such as web serving. Even so, our write performance is still extremely high and superior to high performance NAS systems, so all is good.

    6. “Many questions on how this all works together with NFS or CIFS but it’s entirely possible that WOS doesn’t support either file access protocol.” True. Today WOS is accessed through an API. Utilizing NFS or CIFS would preclude many of the high performance and advanced features WOS delivers.

    7. “I often wonder why these sorts of announcements aren’t paired with a SPECsfs(r) 2008 benchmark report which could validate any performance claim at least at the NFS or CIFS protocol levels.” We can’t directly compare WOS to SPECsfs benchmarks, in part because these benchmarks report IOPS, not file operations. And there’s no mechanism in the test to translate IOPS into file operations. Did it take 5 or 3 or 12 IOPS to retrieve a file? No way to know. What I can say from our own internal testing is that a WOS system delivers many times the file operations per second of an enterprise NAS system containing an equivalent number/type of disk drives. Customers care about file operations – it’s something tangible to them. IOPS is abstract and your mileage will vary based on your file system choice and how the system is engineered and tuned. That’s why we quote our performance numbers in file reads per second and file writes per second. It’s more real and something you can bank on.

    Josh Goldstein
    VP Product Marketing
    DataDirect Networks
    (408) 625-7425
    jgoldstein@ddn.com
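The least-latency replica selection described in answer 3 above (New York vs. San Francisco replicas for a requester in Arizona) amounts to taking a minimum over measured path latencies. A minimal sketch, with hypothetical latency numbers:

```python
# Sketch of least-latency replica selection per answer 3 above.
# Latencies are hypothetical measured round-trip times in milliseconds.
def pick_replica(replica_latency_ms: dict) -> str:
    """Return the replica site with the lowest measured latency."""
    return min(replica_latency_ms, key=replica_latency_ms.get)

# A requester in Arizona measures San Francisco as closer than New York:
pick_replica({"New York": 70.0, "San Francisco": 25.0})  # -> "San Francisco"
```

How WOS actually measures "least latent" (round-trip probes, static topology, or something else) isn’t specified, so the measurement side here is left abstract.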

Comments are closed.