EMC 2013Nov14 Releases XtremIO for general availability


EMC® has recently released XtremIO, their all-flash storage system, for general availability. XtremIO has been in directed availability, and during this time EMC has booked 1.5PB of all-flash storage, supporting 400K virtual desktops and 150K virtual servers. XtremIO GA is shipping version 2.2 of the product.

EMC XtremIO functionality

XtremIO is a scale-out, all-flash, deduplicating, block storage system which comes as a cluster of nodes. Customer data is broken up into 4KB chunks, fingerprinted, deduplicated and then the fingerprint is used to determine which XtremIO controller to send the data to. The fingerprinting process essentially randomizes customer data so that it is spread evenly across the whole backend.
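As a minimal sketch of what this content-addressed routing looks like (the hash function and the modulo routing rule below are my assumptions for illustration, not XtremIO's published internals):

```python
import hashlib

CHUNK_SIZE = 4096     # XtremIO works on 4KB chunks
NUM_CONTROLLERS = 8   # e.g., a four X-Brick cluster

def fingerprint(chunk: bytes) -> bytes:
    """Content fingerprint of one 4KB chunk (SHA-256 assumed here)."""
    return hashlib.sha256(chunk).digest()

def owning_controller(chunk: bytes) -> int:
    """Route a chunk by its fingerprint. A good hash is uniformly
    distributed, so chunks land evenly across controllers no matter how
    the host arranged its data -- the 'randomizing' effect noted above."""
    fp = fingerprint(chunk)
    return int.from_bytes(fp[:8], "big") % NUM_CONTROLLERS
```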

The system supports snapshots and all LUNs are inherently thinly provisioned. XtremIO uses a two-stage metadata lookup table: the first stage maps customer block address to fingerprint value; and the second stage maps fingerprint value to physical location on the backend storage. Metadata resides in memory but is journaled to other controllers and hardened to SSD.
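In outline, the two-stage lookup behaves something like the sketch below. The dict representation and helper names are hypothetical; in the real system this metadata is journaled across controllers and hardened to SSD, and the overwrite/reference-count-decrement path is omitted here for brevity:

```python
import hashlib

def fingerprint(chunk: bytes) -> bytes:
    return hashlib.sha256(chunk).digest()   # hash choice assumed

# Stage 1: customer block address -> fingerprint. Absent keys are simply
# unwritten blocks, which is why LUNs are inherently thinly provisioned.
lba_to_fp: dict[int, bytes] = {}

# Stage 2: fingerprint -> (physical location, reference count), shared
# across all LUNs -- this shared table is what makes deduplication work.
fp_to_loc: dict[bytes, tuple[int, int]] = {}

_backend: list[bytes] = []                  # stand-in for SSD storage

def write(lba: int, chunk: bytes) -> None:
    fp = fingerprint(chunk)
    if fp in fp_to_loc:                     # duplicate: count a reference
        loc, refs = fp_to_loc[fp]
        fp_to_loc[fp] = (loc, refs + 1)
    else:                                   # new data: store it once
        _backend.append(chunk)
        fp_to_loc[fp] = (len(_backend) - 1, 1)
    lba_to_fp[lba] = fp

def read(lba: int) -> bytes:
    fp = lba_to_fp[lba]                     # stage 1 lookup
    loc, _refs = fp_to_loc[fp]              # stage 2 lookup
    return _backend[loc]
```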

RAID-like protection is provided by XtremIO Data Protection (XDP), which supports partial stripe writes and doesn't use a log-structured file system. XDP is optimized for overwrites and random writes. All this means that XtremIO doesn't need garbage collection.
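EMC hasn't published XDP's exact math here, but the standard trick that makes in-place partial-stripe updates cheap is the XOR parity-update identity, sketched below under that assumption: new parity can be computed from the old parity plus only the changed block, with no full-stripe read and no relocation of data (and hence nothing for a garbage collector to clean up later).

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(old_parity: bytes, old_block: bytes, new_block: bytes) -> bytes:
    """P_new = P_old XOR D_old XOR D_new: a small, fixed amount of work
    per overwrite, independent of stripe width."""
    return xor(xor(old_parity, old_block), new_block)

# Worked check on a three-data-block stripe:
d = [b"\x01" * 4, b"\x02" * 4, b"\x03" * 4]
parity = xor(xor(d[0], d[1]), d[2])
d_new = b"\x07" * 4
parity = update_parity(parity, d[1], d_new)   # overwrite d[1] in place
d[1] = d_new
assert parity == xor(xor(d[0], d[1]), d[2])   # parity still consistent
```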

Mirroring services are provided via EMC RecoverPoint, and stretched data center or data-at-a-distance support is available with VPLEX. XtremIO is fully VAAI compliant, with vCenter plug-ins to integrate the system into VMware operations. XtremIO is also supported under EMC PowerPath for fault-tolerant IO access.

EMC XtremIO hardware

XtremIO’s scale-out storage array comes in X-Bricks. Each X-Brick holds two controllers, and each controller has two 8Gbps FC ports and two 10GigE iSCSI ports, for a total of four of each per X-Brick. Each X-Brick supports 10TB of flash storage, with a 20TB version coming early next year. You can have up to 4 X-Bricks in a cluster today, for a total of eight N-way active controllers, and you can start with a minimum of a single X-Brick. Cluster nodes are all connected via a redundant InfiniBand fabric for reduced latency and RDMA support.

Each X-Brick comes with two 1U controllers and a single 2U SSD storage bay. The X-Brick controllers have dual-socket CPUs with 16 processor cores per controller. As discussed earlier, system block-mapping and SSD-mapping metadata all reside in memory, and each controller has 256GB of DRAM, or 512GB per X-Brick. All SSD storage in the cluster is shared across all controllers in the cluster, and all X-Brick controllers can also access all memory in the cluster over the InfiniBand cluster interconnect.
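The cluster-level totals multiply out as follows, a simple back-of-the-envelope calculation from the figures in this announcement:

```python
X_BRICKS = 4              # maximum cluster size at GA
TB_PER_BRICK = 10         # 20TB X-Bricks due early next year
CTRLS_PER_BRICK = 2
DRAM_PER_CTRL_GB = 256
FC_PORTS_PER_CTRL = 2     # plus two 10GigE iSCSI ports per controller

print(f"flash capacity: {X_BRICKS * TB_PER_BRICK} TB")                        # 40 TB
print(f"controllers:    {X_BRICKS * CTRLS_PER_BRICK}")                        # 8
print(f"metadata DRAM:  {X_BRICKS * CTRLS_PER_BRICK * DRAM_PER_CTRL_GB} GB")  # 2048 GB
print(f"FC ports:       {X_BRICKS * CTRLS_PER_BRICK * FC_PORTS_PER_CTRL}")    # 16
```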

XtremIO performance

Although published benchmarks are not available, EMC showed some internal charts of benchmark runs that were pretty impressive. All performance metrics were reported using unique data, with no duplicate data (which would have improved performance). Performance was measured on well broken-in systems, and all measurements were made with arrays filled to 80% of capacity. Finally, more write-intensive workloads were used than are normally seen in all-flash performance measurements.

EMC had a couple of charts, one showing a leading competitor, a single X-Brick, and a dual X-Brick under a 4KB-block, random, 50:50 read:write workload. The single X-Brick knee of the performance curve was ~180K IOPS at ~1.25msec response time; the dual X-Brick knee was ~310K IOPS at the same response time. On the same chart, the unnamed competitor's knee was at ~50K IOPS with a response time of ~2.25msec. It wasn't even a close comparison: at 50K IOPS the competitor's response time went vertical, from less than 0.5msec to ~2.5msec, after which IOPS continued to increase to ~175K with a gradual rise in response time to ~3.4msec.

Another comparison used an 8KB, random, write-only workload (probably the worst case for most all-flash storage systems). The single X-Brick performance curve looked similar to the one for the 4KB mixed read:write workload (numbers weren't shown). On this chart, the same or another leading competitor's performance repeatedly degraded and recovered as it went in and out of garbage collection. In contrast, with no need for garbage collection, XtremIO made a point of saying they provide predictable performance.
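As a toy illustration of why garbage collection shows up as that saw-tooth pattern under sustained random writes (every parameter below is invented for illustration, not measured from any real array):

```python
import random

def simulate(gc_every=None, n=10_000):
    """Per-IO write latency in msec. A device that must periodically stop
    to reclaim flash space shows bursts of high latency; one that updates
    in place stays steady."""
    random.seed(1)
    latencies = []
    for i in range(n):
        lat = random.gauss(0.5, 0.05)           # steady-state write latency
        if gc_every and i % gc_every < 100:     # 100-IO reclaim episode
            lat += random.gauss(3.0, 0.5)       # GC stalls foreground IO
        latencies.append(lat)
    return latencies

with_gc = simulate(gc_every=2_000)
without_gc = simulate()
print(max(with_gc), max(without_gc))   # worst-case latency diverges sharply
```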

Significance

XtremIO has been a long time coming. The performance numbers look pretty good, and the fact that they come from random, write-intensive workloads looks even better. Of course, with clustering and scale-out one can dial performance up even higher. Data management functionality is not on a par with VNX or VMAX, but the point of all-flash arrays is performance, and from everything I can see it delivers on that score.

The all-flash storage market is becoming more crowded every month. XtremIO stands out because of its scale-out design, deduplication, and more predictable high-IOPS/low-latency performance. Of course, EMC now stands behind it, which is yet another significant plus. This long-awaited debut should prove interesting to the rest of the market.

[This storage announcement summary was originally sent out to our newsletter subscribers in November of 2013. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter after they are sent to our subscribers, so if you are interested in current storage announcements and/or storage performance results, please consider signing up for our newsletter.]

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company based in the USA, offering products and services to the data storage community.
