Category Archives: Scale-out storage

45: Greybeards talk desktop cloud backup/storage & disk reliability with Andy Klein, Director Marketing, Backblaze

In this episode, we talk with Andy Klein, Director of Marketing for Backblaze, which backs up desktop and laptop computers to the cloud and also offers cloud storage.

Backblaze has a unique consumer data protection solution where customers pay a flat fee to back up their desktops and then may pay a separate fee for a large recovery. On their website, they have a counter indicating they have restored almost 22.5B files. Desktop/computer backup costs $50/year. To restore files, if it’s under 500GB you can download a ZIP file at no charge, but if it’s larger, you can have a USB flash stick or hard drive shipped via FedEx, though it will cost you.

They also offer a cloud storage service called B2 (not AWS S3 compatible), which costs $0.005/GB/month (~$5/TB/month). Backblaze just celebrated their tenth anniversary last April.

Early on, Backblaze figured out that the only way they were going to succeed was to use consumer-class disk drives, engineer their own hardware, and write their own software to manage it all.

Backblaze openness

Backblaze has always been a surprisingly open company. Their Storage Pod hardware (now in its 6th generation) has been open sourced from the start; the current pod holds 60 drives for 480TB of raw capacity.

A few years back, when a natural disaster in SE Asia severely impacted disk drive manufacturing, their cost per GB for disk drives almost doubled overnight. Considering they were buying about 50PB of drives during that period, it was going to cost them ~$1M extra. But you could still purchase drives, in limited quantities, at select discount outlets. So, they convinced all their friends and family to go out and buy consumer drives for them (see their drive farming posts for more info).

Howard said that Gen 1 of their Storage Pod hardware used rubber bands to surround and hold the disk drives and, as a result, looked like junk. The rubber bands were there to dampen rotational vibration, because the drives were inserted vertically. At the time, most if not all of the storage industry used horizontally inserted drives. Nowadays just about every vendor has a high-density, vertically inserted drive tray, but we believe Backblaze was the first to use this approach in volume.

Hard drive reliability at Backblaze

These days Backblaze has over 300PB of storage and they have been monitoring their disk drive SMART (error) logs since the start. Sometime during 2013 they decided to keep the log data rather than recycling the space. Since they had the data and were calculating drive reliability anyway, they thought the industry and consumers would appreciate seeing their reliability info. In December of 2014 Backblaze published their hard drive reliability report using Annualized Failure Rates (AFR) calculated from the many thousands of disk drives they ran every day. They had not released Q2 2017 hard drive stats yet, but their Q1 2017 hard drive stats post has been out for about 3 months.

Most drive vendors report disk reliability using Mean Time Between Failures (MTBF), the average operating time expected between failures across a large drive population. AFR is an alternative reliability metric: the percentage of drives expected to fail in one year’s time. The two are roughly equivalent (for MTBF in hours, AFR ≈ 8766/MTBF, assuming a constant failure rate), but AFR is more useful because it tells users what percentage of their drives they can expect to fail over the next twelve months.
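
As a worked example (my arithmetic, not a figure from the podcast), here’s the conversion for a hypothetical drive with a vendor-rated 1M-hour MTBF; the exponential form is the exact relationship under a constant failure rate assumption, and the linear form is the usual approximation:

```latex
% Hypothetical drive rated at 1,000,000 hours MTBF
\mathrm{AFR} \approx \frac{8766}{\mathrm{MTBF}} = \frac{8766}{1{,}000{,}000} \approx 0.88\%
% Exact form, assuming a constant (exponential) failure rate:
\mathrm{AFR} = 1 - e^{-8766/\mathrm{MTBF}} \approx 0.87\%
```

In other words, vendor spec sheets quoting seven-figure MTBFs are implicitly promising sub-1% AFRs, which is exactly the kind of claim Backblaze’s published field data lets you check.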

Drive costs matter, but performance matters more

It seemed to the Greybeards that SMR (shingled magnetic recording, read my RoS post for more info) disks would be a great fit for Backblaze’s application. But Andy said their engineering team looked at SMR disks and found that the 2nd write (the overwrite of a zone) had terrible performance. Since Backblaze often has customers who delete files or drop the service, they reuse existing space all the time, and SMR disks would hurt performance too much.

We also talked a bit about their current data protection scheme. The new scheme is a Reed-Solomon (RS) solution with data written to 17 Storage Pods and parity written to 3 Storage Pods across a 20 Storage Pod group called a Vault. This way they can handle 3 Storage Pod failures across a Vault without losing customer data.
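
A quick bit of arithmetic (mine, not from the podcast) shows why a 17+3 layout is attractive compared to simple replication:

```latex
% 17 data + 3 parity Storage Pods per 20-pod Vault
\text{storage efficiency} = \frac{17}{20} = 85\%, \qquad
\text{capacity overhead} = \frac{20}{17} \approx 1.18\times
% versus 4x overhead for 4-way replication, the simplest scheme that
% also survives any 3 simultaneous Storage Pod failures.
```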

Besides disk reliability and performance, Backblaze is also interested in finding the best $/GB for the drives they purchase. Andy said that nowadays consumer disk pricing (at Backblaze’s volumes) generally falls between ~$0.04/GB and ~$0.025/GB, with newer generation disks starting out at the higher price and falling to the lower price as the manufacturing lines mature. Currently, Backblaze is buying 8TB disk drives.
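
To put those numbers in context, here’s a back-of-the-envelope sketch using only the figures mentioned above (my arithmetic, not Backblaze’s actual purchasing data):

```python
# Rough drive-cost arithmetic from the figures in this post: 8TB drives,
# ~$0.025-$0.04/GB, 60 drives per Storage Pod. Illustrative only.

def pod_drive_cost(drive_tb: float, price_per_gb: float, drives_per_pod: int = 60) -> float:
    """Approximate cost (USD) of the drives needed to fill one Storage Pod."""
    return drive_tb * 1000 * price_per_gb * drives_per_pod

for price in (0.04, 0.025):  # new-generation vs. mature-line pricing
    print(f"8TB drives @ ${price}/GB: ~${pod_drive_cost(8, price):,.0f} per 480TB pod")

# 8TB drives @ $0.04/GB:  ~$19,200 per 480TB pod
# 8TB drives @ $0.025/GB: ~$12,000 per 480TB pod
```

So waiting for a drive generation’s manufacturing lines to mature shaves over a third off the drive bill for every pod deployed.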

The podcast runs ~45 minutes.  Andy was great to talk with and was extremely knowledgeable about disk drives, reliability statistics and “big” storage environments.  Listen to the podcast to learn more.

Andy Klein, Director of Marketing at Backblaze

Mr. Klein has 25 years of experience in cloud storage, computer security, and network security.

Prior to Backblaze he worked at Symantec, Checkpoint, PGP, and PeopleSoft, as well as startups throughout Silicon Valley.

He has presented at the Federal Trade Commission, RSA, the Commonwealth Club, Interop, and other computer security and cloud storage events.


43: GreyBeards talk Tier 0 again with Yaniv Romem CTO/Founder & Josh Goldenhar VP Products of Excelero

In this episode, we talk with another next gen, Tier 0 storage provider. This time our guests are Yaniv Romem, CTO/Founder, and Josh Goldenhar (@eeschwa), VP Products, of Excelero, another new storage startup out of Israel. Both Howard and I talked with Excelero at SFD12 (videos here) last month in San Jose. I was very impressed with their raw performance and wrote a popular RayOnStorage blog post on their system (see my 4M IO/sec@227µsec 4KB Read… post) based on our discussions during SFD12.

As we have discussed previously, Tier 0, next generation flash arrays provide very high performing storage at very low latencies with modest to non-existent advanced storage services. They are intended to replace direct-attached server SSD storage with a more shared, scalable storage solution.

In our last podcast (with E8 Storage), the solution was a hardware Tier 0 appliance. Excelero offers a different alternative: a software-defined Tier 0 solution intended to run on commodity, off-the-shelf server hardware with high-end networking and (low- to high-end) NVMe SSDs.

Indeed, what impressed me most about their 4M IO/sec result was that the target storage system had almost 0% CPU utilization. (Read the post to learn how they did this.) Excelero mentioned that they were able to generate high (11M random 4KB) IO/sec on an Intel Core i7, desktop-class CPU. Their one requirement in a storage server is plenty of PCIe lanes. They don’t even need dual-socket storage servers; single-socket CPUs work just fine as long as the PCIe lanes are there.
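
To see why PCIe lanes, rather than CPU sockets, become the constraint, here’s a rough lane-budget sketch; the device counts and lane widths are my illustrative assumptions, not an Excelero reference configuration:

```python
# Hypothetical single-socket NVMe target: how many PCIe lanes does it need?
# Device counts and lane widths below are illustrative assumptions.

NVME_SSD_LANES = 4   # a typical NVMe SSD attaches at PCIe x4
RDMA_NIC_LANES = 16  # a typical 100GbE RDMA NIC attaches at PCIe x16

def lanes_needed(num_ssds: int, num_nics: int) -> int:
    """Total PCIe lanes consumed by the SSDs and NICs in one storage server."""
    return num_ssds * NVME_SSD_LANES + num_nics * RDMA_NIC_LANES

print(lanes_needed(24, 2))  # 24 SSDs + 2 NICs -> 128 lanes, far more than
                            # the ~40-48 lanes a single mainstream Xeon
                            # socket exposes without PCIe switches
```

So the practical limit on a storage server is how many lanes (or PCIe switch ports) you can physically wire up, not how many CPU cores are available to push IOs.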

Excelero software

Their intent is to bring Tier 0 capabilities out to all big storage environments. By providing a software-only solution, they can easily be OEMed by cluster file system or HPC system vendors to deliver the amazing IO performance their clients need.

That’s also one of the reasons they went with high-end Ethernet networking rather than just InfiniBand, which would have limited their market mostly to HPC environments. Excelero’s client software uses RoCE/RDMA hardware to perform IO operations with the storage server.

The other thing that little to no target storage server CPU utilization per IO operation gives them is the ability to scale up to 1000s of hosts or storage servers without hitting any storage system bottlenecks. Another concern eliminated by minimal target server CPU utilization is the noisy neighbor problem: there’s no target CPU processing to be shared. Yet another advantage is that bandwidth is limited only by storage server PCIe lanes and networking. A final advantage of their approach is that they can support any of the current and upcoming storage class memory devices that support NVMe (e.g., Intel Optane SSDs).

The storage services they offer include RAID 0, 1 and 10 and a client-side logical volume manager that supports multi-pathing. Logical volumes can span up to 128 storage servers but can be accessed by almost any number of hosts. And there doesn’t appear to be a specific limit on the number of logical volumes you can have.


They support two different protocols across their 40GbE/100GbE networks: standard NVMe over Fabrics or RDDA (Excelero’s patented, proprietary Remote Direct Disk Array access). RDDA is what mainly provides the almost non-existent target storage server CPU utilization, but even with standard NVMe over Fabrics they maintain low target CPU utilization. One proviso: with NVMe over Fabrics, they do add shared volume functionality to support RAID device locking and additional fault tolerance capabilities.

On Excelero’s roadmap are thin provisioning, snapshots, compression and deduplication. However, they did mention that adding advanced storage functionality like this will impede performance. Currently, their distributed volume locking and configuration metadata is not normally accessed during an IO, but when you add thin provisioning, snapshots and data reduction, this metadata needs to become more sophisticated and will necessitate some amount of access during and after an IO operation.

Excelero’s client software runs as a Linux kernel-mode client, and they don’t currently support VMware or Hyper-V. But they do support KVM as a hypervisor and would be willing to support the others if APIs were published or made available.

They also have an internal OpenStack Cinder driver, but it’s not part of the OpenStack release yet; they’re waiting for snapshot support to be available before they push it into the main code base. Ditto for Docker Engine, though that is more of a beta capability today.

Excelero customer experience

One customer (NASA Ames/Moffett Field) deployed a single 2TB NVMe SSD in each of 128 hosts and had a single 256TB logical volume shared and accessed by all 128 hosts.

Another customer configured Excelero behind a clustered file system and was able to generate 30M randomized IO/sec at 200µsec latencies and, more importantly, 140GB/sec of bandwidth. It turns out high bandwidth is important to many big data applications that have to roll lots of data into their analytics clusters, process it, output results, and then do it all over again. Bandwidth limitations can impact the success of these types of applications.

By being software only, they can be used in a standalone storage server or as a hyper-converged solution where applications and storage are co-resident on the same server. As noted above, they currently support Linux OSs for their storage and client software, any x86 Intel processor, any RDMA-capable NIC, and any NVMe SSD.

Excelero GTM

Excelero is focused on the top 200 customers, which includes hyper-scale providers like Facebook, Google, Microsoft and others. But hyper-scale customers have huge software teams and really only a single or a few very large/complex applications, so they can create and optimize Tier 0 storage for themselves.

It’s really the customers just below the hyper-scaler class that have similar needs for low-latency IO/sec or high IO bandwidth (or both) but have 100s to 1000s of applications and can’t afford to optimize them all for Tier 0 flash. If Excelero solves sharing Tier 0 flash storage in a more general way, say as a block storage device, they solve it for any application. And if the customer insists, they could put a clustered file system or even an object store (who would want this?) on top of this shared Tier 0 flash storage system.

These customers may currently be using NVMe SSDs within their servers as a DAS device. But with Excelero these resources can be shared across the data center. They think of themselves as a top of rack NVMe storage system.

On their website they have listed a few of their current customers, and they’re pretty large and impressive.

NVMe competition

Aside from E8 Storage, there are few other competitors in Tier 0 storage. One recently announced a move to an NVMe flash storage solution and another killed their shipping solution. We talked about what all this means to them and their market at the end of the podcast. Suffice it to say, they’re not worried.

The podcast runs ~50 minutes. Josh and Yaniv were very knowledgeable about Tier 0, storage market dynamics and were a delight to talk with.   Listen to the podcast to learn more.


Yaniv Romem CTO and Founder, Excelero

Yaniv Romem has been a technology evangelist at disruptive startups for the better part of 20 years. His passions are in the domains of high performance distributed computing, storage, databases and networking.
Yaniv has been a founder at several startups such as Excelero, Xeround and Picatel in these domains. He has served in CTO and VP Engineering roles for the most part.


Josh Goldenhar, Vice President Products, Excelero

Josh has been responsible for product strategy and vision at leading storage companies for over two decades. His experience puts him in a unique position to understand the needs of our customers.
Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks. Previous to that, his experience and passion was in large scale, systems architecture and administration with companies such as Cisco Systems. He’s been a technology leader in Linux, Unix and other OS’s for over 20 years. Josh holds a Bachelor’s degree in Psychology/Cognitive Science from the University of California, San Diego.

33: GreyBeards talk HPC storage with Frederic Van Haren, founder HighFens & former Sr. Director of HPC at Nuance

In episode 33 we talk with Frederic Van Haren (@fvha), founder of HighFens, Inc. (@HighFens), a new HPC consultancy, and former Senior Director of HPC at Nuance Communications. Howard and I got a chance to talk with Frederic at a recent HPE storage deep dive event; I met up with him again during SFD10, where he was talking on behalf of Kaminario, and he was also at the HPE Discover conference last week.

Nuance is the backend speech recognition engine for a number of popular service offerings. Nuance looks very similar to a lot of other hyper-scale customers and ultimately, we feel, may be the way of the future for all IT over the coming decades. Nuance’s data storage journey during Frederic’s tenure with the company holds many lessons for all of us in the storage industry.

Nuance currently has ~6PB usable (~16PB raw) of speech wave files as well as uncountable text and other files, all inside IBM Spectrum Scale (GPFS). They have both lots of big files and lots of small files. These days, Spectrum Scale is processing 2-3M files/second. They have doubled capacity each of the last 9 years and today handle a billion new files a month. GPFS stripes data across storage and provides data protection, migration, snapshotting and storage tiering across a diverse mix of storage. At the end of the podcast we discussed some open source alternatives to Spectrum Scale, but at the time Nuance started down this path, GPFS was found to be the only thing that could do the job. This proved to be a great solution, as they have completely swapped out the underlying storage at least 3 times and all their users were none the wiser.
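
Nine straight years of doubling is worth pausing on. Working backwards from the figures above (my arithmetic, not a number quoted on the podcast):

```latex
% ~6PB usable today, after 9 consecutive yearly doublings
\frac{6\,\mathrm{PB}}{2^{9}} = \frac{6000\,\mathrm{TB}}{512} \approx 12\,\mathrm{TB}
```

In other words, an environment that started around a dozen usable terabytes grew roughly 500-fold under the same file system, which is why the ability to swap the underlying storage without disturbing users mattered so much.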

The first storage that Frederic talked about was Coraid (no longer in business) and their ATA-over-Ethernet storage solution. This used SuperMicro chassis with 24 SATA drives per shelf, and they bought 40 shelves. Over time this grew to 1000s of SATA drives; it was easily scalable but hard to manage, as it was pretty dumb storage. In fact, they had to deploy video cameras, focused on the drive shelves, to detect when drives failed!

Over time, Nuance came to the realization that they had to do something more manageable and brought in HPE MSA storage to replace the Coraid storage. The MSA was a great solution for them: it held 96 SAS drives, supported both faster “SCRATCH” storage using 300GB/15K RPM SAS drives and slower “STATIC” storage using 760GB/7.2K RPM SATA drives, and was much more manageable than the Coraid solution.

Although the MSA storage worked great, after a while Nuance’s sprawling FC environment, which was doubling yearly, caused them to rethink their storage once again. This led them to swap out all their HPE MSA storage for HPE 3PAR to consolidate their FC network and storage footprint.

For metadata, Nuance uses a 76-node Hadoop cluster for sophisticated search queries, as doing an ls on the GPFS file system would take days. Their file metadata is essentially a textual, row-by-row database, and they use queries over the Hadoop cluster to determine things like which files contain American English, spoken by females, with an 8KHz recording. Not sure when, but eventually Nuance deployed HPE Vertica SQL over Hadoop for their metadata engine and dropped the average query time from 12 minutes to 73 seconds(!).
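
To make the kind of query concrete, here’s a minimal sketch of what such a metadata lookup might look like against Vertica from Python. The table and column names are invented for illustration; Nuance’s actual schema wasn’t discussed on the podcast, and the snippet assumes the standard vertica-python client library:

```python
# Hypothetical metadata query of the kind described above; schema and
# connection details are invented for illustration.
import vertica_python

QUERY = """
    SELECT file_path
    FROM speech_file_metadata              -- hypothetical table
    WHERE language = 'en-US'               -- American English
      AND speaker_gender = 'female'
      AND sample_rate_hz = 8000;           -- 8KHz recordings
"""

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "analyst", "password": "...", "database": "metadata"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    matching_files = [row[0] for row in cur.fetchall()]
```

Running selections like this against a columnar SQL engine, rather than as batch Hadoop jobs, is what dropped the average query from 12 minutes to 73 seconds.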

Nuance, because of their extreme growth and their more open environment for storage innovation, had become a favorite for storage startups and major vendors to do Proofs of Concept (PoCs) on new storage offerings. One PoC Nuance did was for Kaminario storage. There is a standard metric that says a CPU core requires so many IOPS, so as CPU core counts increase, you need to supply more IOPS. They went with Kaminario for their test-dev environment and more performance-intensive storage. Nuance appreciates Kaminario’s reliability, high availability and highly predictable performance. (See the SFD10 video feed for Frederic’s session.)

We talked a bit about how speech recognition’s hidden Markov model statistical approach was heavily dependent on CPU cores. Traditionally, if you wanted to do a recognition task, you assigned it to one core and waited until it was done, a serial process limited by the number of CPU cores available. This turned out to be quite a problem, as you had to scale CPU cores if you wanted to do more concurrent speech recognition activities. Then came GPUs, and you could do speech recognition work on a GPU core. With the new GPU cards, instead of a server having ~16 CPU cores, you could have a server with multiple graphics cards providing ~3000 GPU cores each. This scaled a lot more easily. Machine learning and deep neural nets have the potential to parallelize this work, so that it will scale even better.

In the end, HPC trials, tribulations and ways of doing business are starting to become mainstream. I was recently talking to one vendor who said most HPC groups start out in isolation to support one application, but over time they either subsume corporate IT, get absorbed into corporate IT, or continue as a standalone group (while waiting for one of the other two to happen).

The podcast runs ~41 minutes and  covers a lot of ground about one HPC organization’s evolution of their storage environment over time, what was driving some of that evolution and the tools they chose to master it.  Listen to the podcast to learn more.

Frederic Van Haren, founder, HighFens, Inc.

Frederic Van Haren is the Chief Technology Officer @HighFens and is known for his insights into the HPC and storage industry. He has over 20 years of experience in high tech, providing technical leadership and strategic direction in the telecom and speech markets. Frederic spent the last decade at Nuance Communications building large HPC environments from the ground up. He is frequently invited to speak at events to provide his insights on the HPC and storage markets. He has played leading roles as president of a variety of technology user groups promoting the use of innovative technology. As an engineer, he enjoys working with the engineering teams from technology vendors, providing feedback on new and upcoming products.

Frederic lives in Massachusetts, USA, but grew up in the northern part of Belgium, where he received his Master’s in Electrical Engineering, Electronics and Automation.

GreyBeards talk with Lee Caswell and Dave Wright of NetApp

In our 30th episode, we talk with Dave Wright (@JungleDave), SolidFire founder and VP & GM of SolidFire at NetApp, and Lee Caswell (@LeeCaswell), VP Products, Solutions & Services Marketing at NetApp. Dave’s been on before as CEO of SolidFire back in May of 2014, but this is the first time for Lee. Dave’s also been a prominent guest at Storage Field Day, most recently at SFD9 with Dave Hitz from NetApp. Unclear how Lee managed to avoid TFD/SFD duty, but it’s only a matter of time.

SolidFire was recently acquired by NetApp in their largest acquisition ever, signaling a new direction for them (the acquisition closed 2 Feb. 2016). Since we had spent a prior podcast on another recent storage acquisition, we thought it only appropriate to talk with these two as well. We started the discussion with Dave and how it feels to be under the NetApp umbrella.

Another topic that came up was how flash gets used in the cloud. Old school thinking had it that flash was just for high IO performance, but nowadays next gen application development has a range of IO requirements which all need consistent data performance. Flash with scale-out and QoS can handle this wide range of requirements across cloud applications. Lee mentioned how flash adoption is changing from application-specific to more general purpose storage, which is removing the “IO bottleneck”.

Google had published a study saying that for the next decade there will not be a flash-disk price crossover, but the differences are small enough that you almost have to be a hyper-scale customer to see significant economic advantages.

We discussed why not a lot of AFAs do well on throughput-intensive benchmarks. Dave mentioned that throughput was one of disk’s better performing modes and that, in the past, 3Gbps-6Gbps storage interfaces hid a lot of flash performance. But benchmarks of synthesized, pure workloads aren’t real world; workloads in real data centers are much messier.

IO density (IOPS/GB) came up as another discussion topic.  At low IO density, disk may still make sense but as IO density increases, all flash makes much more sense.
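
To make that concrete (my illustrative figures, not numbers cited in the episode):

```latex
% A 7.2K RPM 8TB disk delivering ~100 random IOPS
\text{disk: } \frac{100\ \mathrm{IOPS}}{8000\ \mathrm{GB}} \approx 0.0125\ \mathrm{IOPS/GB}
% A 1TB SSD delivering ~100,000 random IOPS
\text{SSD: } \frac{100{,}000\ \mathrm{IOPS}}{1000\ \mathrm{GB}} = 100\ \mathrm{IOPS/GB}
```

A workload that needs even ~1 IOPS/GB forces you to either massively under-fill disks or move to flash, which is where the economics tip regardless of raw $/GB.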

Google also mentioned the importance of tail-end IO latency (IO latency at the 99.9th percentile). Poor tail IO latency has been an ongoing problem holding back the adoption of hybrid storage. All flash has some advantages here, but not all AFAs are immune to tail-end latency problems.
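
Here’s a tiny sketch (synthetic numbers, mine) of why averages hide this problem and why the 99.9th percentile is the figure to watch:

```python
# Synthetic latency distribution: the average looks fine, the tail does not.
import numpy as np

rng = np.random.default_rng(0)
# 100,000 IOs: 99.8% complete in ~0.5ms, 0.2% hit a slow path (~20ms),
# roughly what a hybrid array's cache misses to disk might look like.
fast = rng.normal(0.5, 0.05, 99_800)
slow = rng.normal(20.0, 2.0, 200)
latencies_ms = np.concatenate([fast, slow])

print(f"mean  : {latencies_ms.mean():.2f} ms")
print(f"p99.9 : {np.percentile(latencies_ms, 99.9):.2f} ms")
# mean stays ~0.54ms while p99.9 lands around 20ms -- the tail latency
# the Google study calls out.
```

For an application issuing many dependent IOs per user request, that 20ms tail, not the 0.5ms average, is what sets the user-visible response time.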

The podcast runs just over 39 minutes and the episode covers a lot of ground about their products, flash technology advantages, and market dynamics. Listen to the podcast to learn more.

Dave Wright, SolidFire Founder, Vice President, and GM

Dave Wright left Stanford in 1998 to help start GameSpy Industries, a leader in online video game media, technology, and software. While at GameSpy, Dave led the team that created a backend infrastructure powering thousands of games and millions of gamers. GameSpy merged with IGN Entertainment in 2004 to create one of the largest Internet gaming & entertainment media companies. Dave served as Chief Architect for IGN and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005.

In 2007 Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008 and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009 Dave left Rackspace to start SolidFire.

Lee Caswell, Vice President Product, Solutions, and Services Marketing

Lee Caswell is vice president of Product, Solutions and Services Marketing at NetApp, where he leads a team that speeds the customer adoption of new products, partnerships, and integrations. Lee joined NetApp in 2014 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Lee was previously vice president of Marketing at Fusion-IO (now SanDisk). Prior to Fusion-IO Lee was a founding member of Pivot3, a company considered to be an early innovator in hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at VMware, Adaptec, and SEEQ Technology (now LSI Logic). He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

Disclaimer: NetApp and SolidFire have been clients of DeepStorageNet and NetApp is a current client of Silverton Consulting.

GreyBeards talk with Pivot3 and NexGen Storage about their recent acquisition announcement

In our 29th episode, we talk with John Spiers (@lefthandsan), Co-founder & CEO of NexGen Storage, and Ron Nash (@hronaldnash), Chairman & CEO of Pivot3, a hyper-converged infrastructure provider. We have talked with John before (see last June’s podcast episode) about NexGen Storage technology. Recently, Pivot3 announced they were going to acquire NexGen Storage, and Howard and I wanted to talk with them about what brought the two companies together.

We have discussed hyper-converged solutions before (see the ScaleComputing and Gridstore podcasts), dating all the way back to the first GreyBeardsOnStorage podcast with Nutanix, but this is the first time we have talked with Pivot3 and Ron Nash. As discussed in those podcasts, hyper-converged infrastructure (HCI) brings together compute, storage and sometimes networking under one overarching infrastructure framework and delivers all this as a single solution that customers can then tailor to their own needs. In a typical HCI solution, storage is software defined, compute is under the control of a hypervisor, and the solution can include software defined networking.

Sometime last fall both John and Ron were considering additional funding opportunities with their VC’s, when one of them, Brian Smith of S3 Ventures, suggested they look at combining their two operations into one company.

John was looking to expand his sales and marketing team to take NexGen Storage to the next level, while Ron was looking for some additional differentiation in storage technology that could take Pivot3’s solution beyond where it is today. It seemed to Mr. Smith that each of them had just what the other one was looking for.

As GreyBeardsOnStorage listeners should recall, NexGen Storage is known for their hybrid storage solution with fine-grained QoS capabilities. Although NexGen Storage is delivered as an appliance, their main IP is in storage software, so implementing a software defined storage solution under HCI was certainly an option.

Pivot3 has been around since 2002 and has sales teams around the world along with an extensive marketing team. Pivot3 used Xen and now mostly uses VMware for their hypervisor environments, typically running on whitebox servers with storage bridge bay boxes running software defined storage. Pivot3 had already implemented scalable erasure coding, which is something NexGen Storage was also looking at.

Pivot3 and the rest of the HCI solutions market space seem split in two. That is, there is a good market at the low end, where small companies, remote offices, small workgroups, etc. are looking for an easy to deploy, full IT stack solution. And at the high end, large web properties and other IT behemoths also need an easy to deploy, readily automated solution that can scale to whatever size they require.

Both Pivot3 and NexGen Storage work well in VDI deployments, but NexGen was mostly deployed into currently running VDI environments, whereas Pivot3 primarily went into brand new deployments that could take advantage of HCI solutions.

In the podcast we discuss some of these large organizations, such as Google, Facebook, E*Trade and others, and what they are looking for in an IT infrastructure. We also discuss some of the technology trends that are impacting both HCI and storage infrastructure. It turns out NexGen’s extensive QoS capabilities are what can make HCI deployments work even better than they do today.

In the past couple of days, the technology teams of the two companies have been hot and heavy, examining possible synergies and discussing how to reconcile their respective roadmaps. John and Ron were sitting in the back during these discussions throwing out ideas which the technical teams ran with as far as they could.

The podcast runs just over 41 minutes and the episode covers a lot of ground about both products’ market spaces, technology, and business dynamics and, especially, how they see the two solutions complementing each other. Apparently the acquisition is on a fast path to close soon. Listen to the podcast to learn more.

Ron Nash, Chairman and CEO, Pivot3

Ron brings senior leadership and experience as the chairman and CEO of Pivot3. He has held numerous leadership roles at both start-up and enterprise information technology companies, including ExoLink (acquired by Alliance Data Systems), Advanced Telemarketing (now Aegis Global), Rubicon (acquired by Cerner), Perot Systems (now Dell Services) and EDS (now HP Enterprise Services). More recently, he served as a partner at InterWest Partners, investing in successful breakthrough technology companies like Pivot3 and Lombardi Software (acquired by IBM).


John Spiers, Founder and CEO, NexGen Storage

John is a serial entrepreneur based in Boulder, CO. John has been pioneering breakthrough data storage innovations for over 30 years. He co-founded venture-backed LeftHand Networks, a market leader in virtualized, scale-out data storage, and served as LeftHand’s Chief Technology Officer. In 2010 John co-founded NexGen Storage. John supports local entrepreneurs, serving on the boards of local technology startups and as an advisor for the Blackstone Entrepreneurs Network. John is a graduate from Colorado State University with a degree in Engineering.