Category Archives: Backup

45: Greybeards talk desktop cloud backup/storage & disk reliability with Andy Klein, Director of Marketing, Backblaze

In this episode, we talk with Andy Klein, Director of Marketing for Backblaze, which backs up desktop and laptop computers to the cloud and also offers cloud storage.

Backblaze has a unique consumer data protection solution where customers pay a flat fee to back up their desktops and may then pay a separate fee for a large recovery. On their website, a counter indicates they have restored almost 22.5B files. Desktop/computer backup costs $50/year. For restores under 500GB, you can download a ZIP file at no charge; for anything larger, you can have a USB flash stick or hard drive shipped via FedEx, but that will cost you.

They also offer a cloud storage service called B2 (not AWS S3 compatible) which costs $5/TB/year. Backblaze just celebrated their tenth anniversary last April.

Early on, Backblaze figured out the only way they were going to succeed was to use consumer-class disk drives, engineer their own hardware, and write their own software to manage it all.

Backblaze openness

Backblaze has always been a surprisingly open company. Their Storage Pod hardware (6th generation now) has been open sourced from the start and holds 60 drives for 480TB raw capacity.

A couple of years back, when a natural disaster in SE Asia severely impacted disk drive manufacturing, their cost per GB for disk drives almost doubled overnight. Considering they were buying about 50PB of drives during that period, it was going to cost them ~$1M extra. But you could still purchase drives, in limited quantities, at select discount outlets. So they convinced all their friends and family to go out and buy consumer drives for them (see their drive farming posts for more info).

Howard said that Gen 1 of their Storage Pod hardware used rubber bands to surround and hold the disk drives, and as a result, it looked like junk. The rubber bands were there to dampen drive rotational vibration, because the drives were inserted vertically. At the time, most if not all of the storage industry used horizontally inserted drives. Nowadays just about every vendor has a high-density, vertically inserted drive tray, but we believe Backblaze was the first to use this approach in volume.

Hard drive reliability at Backblaze

These days Backblaze has over 300PB of storage, and they have been monitoring their disk drives' SMART (error) logs since the start. Sometime during 2013 they decided to keep the log data rather than recycling the space. Since they had the data and were calculating drive reliability anyway, they thought the industry and consumers would appreciate seeing their reliability info. In December of 2014, Backblaze published their hard drive reliability report using Annualized Failure Rates (AFR) calculated from the many thousands of disk drives they run every day. They had not released Q2 2017 hard drive stats yet, but their Q1 2017 hard drive stats post has been out now for about 3 months.

Most drive vendors report disk reliability using Mean Time Between Failure (MTBF), the average operating time expected between drive failures across a population. AFR is an alternative reliability metric: the percentage of drives expected to fail in one year's time. The two are roughly equivalent (for MTBF in hours, AFR ≈ 8766/MTBF × 100%), but AFR is more useful because it tells users directly what percent of their drives they can expect to fail over the next twelve months.
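
The conversion between the two metrics is simple enough to sketch in code. This is a generic illustration, not Backblaze's methodology: the exponential-lifetime model here is an assumption, and for large MTBF it reduces to the simple 8766/MTBF approximation above.

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def mtbf_to_afr(mtbf_hours: float) -> float:
    """Annualized Failure Rate (%) implied by an MTBF rating in hours,
    assuming exponentially distributed drive lifetimes."""
    return (1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)) * 100

def afr_to_mtbf(afr_percent: float) -> float:
    """Inverse conversion: MTBF (hours) implied by an AFR percentage."""
    return -HOURS_PER_YEAR / math.log(1 - afr_percent / 100)

# A drive rated at 1,000,000 hours MTBF fails at roughly 0.87%/year;
# conversely, real-world AFRs of 1-2% imply MTBFs well under 1M hours.
print(round(mtbf_to_afr(1_000_000), 2))
```

Note how an impressive-sounding million-hour MTBF still implies nearly 1 in 100 drives failing each year, which is why fleet-scale AFR reporting is so informative.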

Drive costs matter, but performance matters more

It seemed to the Greybeards that SMR (shingled magnetic recording; read my RoS post for more info) disks would be a great fit for Backblaze's application. But Andy said their engineering team looked at SMR disks and found that the 2nd write (the overwrite of a zone) had terrible performance. Because Backblaze often has customers who delete files or drop the service, they reuse existing space all the time, and SMR disks would hurt performance too much.

We also talked a bit about their current data protection scheme. The new scheme is a Reed-Solomon (RS) solution, with data written to 17 Storage Pods and parity written to 3 Storage Pods across a 20-Storage-Pod group called a Vault. This way they can handle 3 Storage Pod failures within a Vault without losing customer data.
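
To see why 3 parity Pods buys so much protection, here's a toy binomial model of a 20-Pod Vault. The per-Pod failure probability is made up, and real durability math would account for repair and rebuild times, so treat this as an illustration of the 17+3 arithmetic rather than Backblaze's actual numbers.

```python
from math import comb

def vault_annual_loss_prob(pod_fail_prob: float, n: int = 20, parity: int = 3) -> float:
    """Probability that more than `parity` of `n` Storage Pods fail,
    i.e. a 17+3 Reed-Solomon Vault can no longer reconstruct data.
    Toy model: independent failures, no repair during the period."""
    return sum(
        comb(n, k) * pod_fail_prob**k * (1 - pod_fail_prob)**(n - k)
        for k in range(parity + 1, n + 1)
    )

# With a made-up 2% annual Pod failure probability and no repairs,
# the chance of 4+ simultaneous Pod failures is well under 0.1%.
print(vault_annual_loss_prob(0.02))
```

With repairs happening in hours or days rather than never, the real exposure window (and thus the loss probability) shrinks by orders of magnitude.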

Besides disk reliability and performance, Backblaze is also interested in finding the best $/GB for the drives they purchase. Andy said that nowadays consumer disk pricing (at Backblaze's volumes) generally falls between ~$0.04/GB and ~$0.025/GB, with newer-generation disks starting out at the higher price and falling to the lower one as the manufacturing lines mature. Currently, Backblaze is buying 8TB disk drives.

The podcast runs ~45 minutes.  Andy was great to talk with and was extremely knowledgeable about disk drives, reliability statistics and “big” storage environments.  Listen to the podcast to learn more.

Andy Klein, Director of Marketing, Backblaze

Mr. Klein has 25 years of experience in cloud storage, computer security, and network security.

Prior to Backblaze he worked at Symantec, Checkpoint, PGP, and PeopleSoft, as well as startups throughout Silicon Valley.

He has presented at the Federal Trade Commission, RSA, the Commonwealth Club, Interop, and other computer security and cloud storage events.

 

44: Greybeards talk 3rd platform backup with Tarun Thakur, CEO Datos IO

Sponsored By:

In this episode, we talk with a new vendor that's created a new method to back up database information. Our guest for this podcast is Tarun Thakur, CEO of Datos IO. Datos IO was started in 2014 with the express purpose of providing a better way to back up and recover databases in the cloud. They started with NoSQL, cloud-based databases such as MongoDB and Cassandra.

The problem with backing up NoSQL databases, and any SQL database for that matter, is that they are big files that always have some changes in them. So, for most typical backup systems, databases are always flagged as changed files that need to be backed up, and each incremental backs up the whole database file, even if only a single row has changed. All this results in a tremendous waste of storage.

Deduplication can help, but there are problems deduplicating databases. Many databases store their data compressed, and deduplication based on fixed-length blocks doesn't work well for variable-length, compressed data (see my RayOnStorage Poor deduplication … post).

Also, variable-length deduplication algorithms usually rely on known start-of-record triggers to determine where a chunk of data begins. Some databases do not use these start-of-row or start-of-table indicators, which throws off variable-length deduplication algorithms.
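
For context, variable-length dedup systems typically find chunk boundaries with a hash computed over the content itself (content-defined chunking), so that boundaries resynchronize after an insert. The sketch below is a toy: it uses a crude multiplicative hash as a stand-in for a real rolling hash such as a Rabin fingerprint, and illustrates the technique rather than any vendor's algorithm.

```python
def chunk_boundaries(data: bytes, mask: int = 0xFF, min_len: int = 16):
    """Toy content-defined chunking: cut a chunk whenever the low bits of a
    content-derived hash match a target pattern. With mask 0xFF, average
    chunks run a few hundred bytes; real systems use larger targets and a
    proper rolling hash (e.g. Rabin fingerprints)."""
    cuts = []
    h, last = 0, 0
    for i, b in enumerate(data):
        h = ((h * 31) + b) & 0xFFFFFFFF  # crude stand-in for a rolling hash
        if i - last >= min_len and (h & mask) == mask:
            cuts.append(i + 1)
            h, last = 0, i + 1  # restart the hash at each boundary
    if last < len(data):
        cuts.append(len(data))
    return cuts
```

Because boundaries depend on local content, compressed or marker-free data that defeats fixed-block dedup can still chunk consistently, which is exactly the property databases tend to break when their on-disk format reshuffles bytes on every write.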

So, with traditional backup systems, most databases don't deduplicate very well and are backed up all the time, resulting in lots of wasted storage space.

How’s Datos IO different?

Datos IO identifies and backs up only changed data, not changed (database) files. Their Datos IO RecoverX product extracts rows from a database, identifies whether the data has changed, and then backs up just the changed data.

As more customers create applications for the cloud, backups become a critical component of cloud operations. Most cloud based applications are developed from the start, using noSQL databases.

Traditional backup packages don’t work well with NoSQL, cloud databases, if at all. And data center customers are reluctant to move their expensive, enterprise backup packages to the cloud, even if they could operate effectively there.

Datos IO saw backing up NoSQL MongoDB and Cassandra databases in the cloud as a major new opportunity, if it could be done properly.

How does Datos IO back up changed data?

Essentially, RecoverX takes a point-in-time snapshot of the database and then reads each table, row by row, comparing a hash of each row's data with the row data it previously backed up; if the row has changed, its new data is added to the current backup. This provides a semantic deduplication of database data.

Furthermore, because RecoverX looks at the data rather than files, compressed data works just as well as uncompressed. Datos IO uses standardized database APIs to extract the row data, which keeps them compatible with each release of the database software.
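
A minimal sketch of this row-level comparison, assuming rows can be enumerated with a stable primary key. This is an illustration of the idea, not Datos IO's implementation.

```python
import hashlib

def incremental_backup(rows, prev_hashes):
    """Semantic dedup sketch: back up only rows whose content hash changed.
    `rows` is an iterable of (primary_key, row_bytes); `prev_hashes` maps
    primary_key -> hex digest recorded by the previous backup pass."""
    changed, new_hashes = [], {}
    for key, payload in rows:
        digest = hashlib.sha256(payload).hexdigest()
        new_hashes[key] = digest
        if prev_hashes.get(key) != digest:
            changed.append((key, payload))
    return changed, new_hashes

# First pass backs up everything; second pass only the modified row.
rows_v1 = [(1, b"alice"), (2, b"bob")]
changed, hashes = incremental_backup(rows_v1, {})
rows_v2 = [(1, b"alice"), (2, b"bobby")]
changed2, _ = incremental_backup(rows_v2, hashes)
print([k for k, _ in changed2])  # → [2]
```

Because the comparison works on extracted row content, it is indifferent to how the database compresses or lays out its files on disk.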

RecoverX backups reside in S3 objects on the public cloud.

New in RecoverX Version 2

Many customers liked their approach so much they wanted RecoverX to do this for regular SQL databases as well. Major customers are not just developing new applications for the cloud they also want to do enterprise application development, test and QA in the cloud as well, and these applications almost always use SQL databases.

So, Datos IO RecoverX Version 2 now supports migration and cloud backups for standardized SQL databases. They are starting with MySQL, with plans to support other SQL databases used by the enterprise. Migration occurs by backing up the datacenter MySQL databases to the cloud and then recovering them in the cloud.

They have also added backup and recovery support for Apache Hadoop HDFS from Cloudera and Hortonworks. Another change is that Datos IO originally offered only a 3-node solution, but Version 2 now supports up to a 5-node cluster.

They have also added more backup management and policy support. Admins can now change backup policies to add or subtract tables/databases on the fly, even while backups are taking place.

The podcast runs ~30 minutes. Tarun has been in the storage industry for a number of years from microcoding storage control logic to managing major industry development organizations. He has an in depth knowledge of storage and backup systems that’s hard to come by and was a pleasure to talk with.  Listen to the podcast to learn more.

Tarun Thakur, CEO, Datos IO

Tarun Thakur is co-founder and CEO of Datos IO, where he leads overall business strategy, day-to-day execution, and product efforts. Prior to founding Datos IO, he held senior product and technical roles at several leading technology companies.

Most recently, he worked at Data Domain (EMC), where he led and delivered multiple product releases and new products. Prior to EMC, Thakur was at Veritas (Symantec), where he was responsible for inbound and outbound product management for their clustered NAS appliance products.

Prior to that, he worked at the IBM Almaden Research Center where he focused on distributed systems technology. Thakur started his career at Seagate, where he developed advanced storage architecture products.

Thakur has more than 10 patents granted from USPTO and holds an MBA from Duke University.

34: GreyBeards talk Copy Data Management with Ash Ashutosh, CEO Actifio

In this episode, we talk with Ash Ashutosh (@ashashutosh), CEO of Actifio, a copy data virtualization company. Howard met up with Ash at TechFieldDay11 (TFD11) a couple of weeks back and wanted another chance to talk with him. Ash seems to have been around forever; the first time we met, I was at a former employer and he was with AppIQ (later purchased by HP). Actifio is populated by a number of industry veterans, and since being founded in 2009 it has done really well, with over 1000 customers.

So what's copy data virtualization (management) anyway? At my former employer, we did an industry study that determined that IT shops (back in the 90's) were making 9-13 copies of their data. These days, IT is making even more copies of the exact same data.

Data copies proliferate like weeds

Engineers use snapshots for development, QA and validation. Analysts use data copies to better understand what's going on in their customer-partner interactions, manufacturing activities, industry trends, etc. Finance, marketing, legal, etc. all have similar needs, which just makes the number of data copies grow out of sight. And we haven't even started to discuss backup.

Ash says things reached a tipping point when server virtualization became the dominant approach to running applications, which led to an ever increasing need for data copies as apps started being developed and run all over the place. Then data deduplication came along and displaced tape in IT's backup process, so that backup data (copies) could now reside on disk. Finally, with the advent of disk deduplication, backups no longer had to be in TAR (backup) formats but could be left in app-native formats, where any app/developer/analyst could access the backup data copy.

Actifio Copy Data Virtualization

So what is Actifio? It’s essentially a massively distributed object storage with a global name space, file system on top of it. Application hosts/servers run agents in their environments (VMware, SQL Server, Oracle, etc.) to provide change block tracking and other metadata as to what’s going on with the primary data to be backed up. So when a backup is requested, only changed blocks have to be transferred to Actifio and deduped. From that deduplicated change block backup, a full copy can be synthesized, in native format, for any and all purposes.

With change block tracking, backups become very efficient, and deduplication only has to work on changed data, so that also becomes more effective. Data copying can also be done more efficiently, since they're only tracking deduplicated data. If necessary, changed blocks can also be applied to data copies to bring them up to date.
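
The changed-block-plus-synthesis flow can be sketched as below. This is a naive illustration assuming fixed-size blocks keyed by offset; a real agent records blocks as they are written rather than diffing whole images, and Actifio's actual formats are their own.

```python
def changed_offsets(prev_image: bytes, curr_image: bytes, block: int = 4096):
    """Naive change-block tracking: diff two disk images block by block and
    return the offsets that differ. Real CBT hooks the write path instead."""
    return [
        off for off in range(0, len(curr_image), block)
        if curr_image[off:off + block] != prev_image[off:off + block]
    ]

def synthesize_full(base: dict, changed: dict) -> dict:
    """Build a synthetic full backup by overlaying the latest changed blocks
    (offset -> bytes) onto the prior full's block map."""
    full = dict(base)
    full.update(changed)
    return full

prev = b"a" * 8192
curr = b"a" * 4096 + b"b" * 4096
print(changed_offsets(prev, curr))  # → [4096]
```

The key point is that only the changed blocks ever cross the wire or get deduped, yet a complete, native-format copy can be materialized on demand from the block map.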

With Actifio, one can apply SLA’s to copy data. These SLA’s can take the form of data governance, such that some copies can’t be viewed outside the country, or by certain users. And they can also provide analytics on data copies. Both of these capabilities take copy data to whole new level.

We didn't get into all of Actifio's offerings on the podcast, but Actifio CDS is a high-availability appliance that runs their object/file system and contains the data storage. Actifio also comes as a virtual appliance, Actifio SKY, which runs as a VM under VMware using anyone's storage. Actifio supports NFS, SMB/CIFS, FC, and iSCSI access to data copies, depending on the solution chosen. There's a lot more information on their website.

It sounds a little bit like PrimaryData but focused on data copies rather than data migration and mostly tier 2 data access.

The podcast runs ~46 minutes and  covers a lot of ground. I spent most of the time asking Ash to explain Actifio (for Howard, TFD11 filled this in). Howard had some technical difficulties during the call which caused him to go offline but then came back on the call. Ash and I never missed him :), listen to the podcast to learn more.

Ash Ashutosh, CEO Actifio

Ash Ashutosh brings more than 25 years of storage industry and entrepreneurship experience to his role of CEO at Actifio. Ashutosh is a recognized leader and architect in the storage industry where he has spearheaded several major industry initiatives, including iSCSI and storage virtualization, and led the authoring of numerous storage industry standards. Ashutosh was most recently a Partner with Greylock Partners where he focused on making investments in enterprise IT companies. Prior to Greylock, he was Vice President and Chief Technologist for HP Storage.

Ashutosh founded and led AppIQ, a market leader of Storage Resource Management (SRM) solutions, which was acquired by HP in 2005. He was also the founder of Serano Systems, a Fibre Channel controller solutions provider, acquired by Vitesse Semiconductor in 1999. Prior to Serano, Ashutosh was Senior Vice President at StorageNetworks, the industry’s first Storage Service Provider. He previously worked as an architect and engineer at LSI and Intergraph.

GreyBeards talk VMware agentless backup with Chris Wahl, Tech Evangelist, Rubrik

In this edition we discuss Rubrik's converged data backup with Chris Wahl (@ChrisWahl), Tech Evangelist for Rubrik. You may recall Chris as a blogger at a number of Tech, Virtualization and Storage Field Days (VFD2, TFD extra at VMworld2014, SFD4, etc.), which is where I met him. Chris is one of the bloggers who complains about me pounding on my laptop keyboard so loudly at SFDs ;/

Chris had only been with Rubrik about 3 weeks when we  talked with him but both Howard and I thought it was time to find out what Rubrik was up to.

Rubrik provides an agentless, scale-out backup appliance for VMware vSphere clusters. It uses VADP to tap into VM data stores and obtain changed blocks for backup data. Rubrik deduplicates and compresses VM backup data, and customers define an SLA policy at the VM, folder or vSphere cluster level to determine when to back up VMs.

Rubrik supports cloud storage (any S3 or SWIFT provider) for long term archive storage of VM backups. With Rubrik, customers can search the backup catalog (for standard VM, NFS file, and backup metadata) that spans the Rubrik cluster data as well as S3/SWIFT storage backups.  Moreover, Rubrik can generate compliance reports to indicate how well your Rubrik-vSphere backup environment has met requested backup SLAs, over time.

Aside from the standard recovery facilities, Rubrik offers some interesting recovery options, such as “instant restore”, which pauses a VM and reconfigures its storage to come up on the Rubrik cluster (as a set of NFS VMDKs). Another option is “instant mount”, which runs a completely separate copy of a VM using Rubrik storage as its primary storage. In this case the VM's NIC is disconnected, so the VM raises an error when it fires up that has to be resolved before it will run.

Rubrik hardware comes in a 2U package with 4 nodes. Each node has one flash SSD and three 4TB or 8TB SATA disks for customer data. The SSD is used for ingest caching and metadata. Data is triple mirrored across SATA disks in different nodes.

The latest release of Rubrik supports (compressed/deduped) data replication to other Rubrik clusters located up to asynchronous distances away.

This month's edition runs just under 42 minutes and gets somewhat technical in places. We had fun with Chris on our call and hope you enjoy the podcast.

Chris Wahl, Tech Evangelist, Rubrik


Chris Wahl, author of the award winning Wahl Network blog and Technical Evangelist at Rubrik, focuses on creating content that revolves around virtualization, automation, infrastructure, and evangelizing products and services that benefit the technology community.

In addition to co-authoring “Networking for VMware Administrators” for VMware Press, he has published hundreds of articles and was voted the “Favorite Independent Blogger” by vSphere-Land three years in a row (2013 – 2015).

Chris also travels globally to speak at industry events, provide subject matter expertise, and offer perspectives to startups and investors as a technical adviser.

GreyBeards talk backup with Rick Vanover, Product Strategy Specialist for Veeam

Welcome to our 9th monthly episode, where we discuss data backup with Rick Vanover, Product Strategy Specialist for Veeam Software. The GreyBeards talked with Rick and Veeam at last month's Storage Field Day 5 (SFD5) in Silicon Valley, and once again we suggest that everyone who wants to know more about Veeam backup (and, even better, the new Veeam V8 restore capabilities) view the video of their session.

Rick has been an icon in the backup industry for many years now, and as Veeam's strategy specialist he spends a lot of time on social media (twitter link below), writes the Rickatron Blog, travels the world attending various conferences and produces the award winning Veeam Community Podcast. Moreover, Rick said this podcast will be co-distributed by Veeam on their community podcast page, so you can listen to it there as well.

This month’s episode comes in at around 36 and a half minutes.

In this podcast we discuss current IT data protection activities, such as the values and risks of elemental restores, Howard’s recent survey results on data protection for InformationWeek Analytics, and a bit about the best storage for backup.  All in all a good overview of the backup problems in the virtualized server market today and how many of them can be solved right now.

In addition, Howard identified the single “best thing to hit backup over the past 20 years”, (and it wasn’t tape libraries, Ray). Listen to the podcast to find out more …

Rick Vanover, Product Strategy Specialist

Rick Vanover (vExpert, MCITP, VCP, Cisco Champion) is a product strategy specialist for Veeam Software based in Columbus, Ohio. Rick is a popular blogger, podcaster and active member of the virtualization community. Rick’s IT experience includes system administration and IT management; with virtualization being the central theme of his career recently. Follow Rick on Twitter @RickVanover .