Blockchains go mainstream…

 

I read an article a while back on Finland's use of blockchain technology to provide bank accounts and identity services to immigrants (see the MIT Technology Review article about Finland).

Blockchains were originally invented as a way of supporting financial transactions outside the current, government-monitored financial marketplace. With Finland's experiment, the government is starting to use blockchains to support the unbanked and to monitor their financial activity – go figure.

Debit cards on blockchain

Finland is using MONI, a Helsinki-based startup, to assign a MONI card, essentially a prepaid MasterCard, to each immigrant. An immigrant can use their MONI card to pay for anything online or in real life, use it as a direct deposit account, or receive and track the use of government assistance.

Underlying the MONI card is public blockchain technology. That is, MONI is not using normal credit card services to support its bank accounts; MONI money transfers are done over public blockchains.

MONI accounts are essentially cryptocurrency wallets used as debit cards. The user merely enters a series of numbers into web forms or uses their MONI card at credit card terminals throughout Europe. Transferring money between MONI users anywhere in the world is also free and instantaneous.

Finland also gets an immutable record of all immigrant financial transactions, which can be monitored to track immigrants' (financial) integration into the country.

MONI intends to make this service more broadly available. A MONI card account costs €2/month and MONI takes a small cut of each monetary transaction.

IDs on blockchain

I read another article the other day, “Microsoft to implement blockchain-based ID system” in CoinTelegraph, about using blockchains as a universal digital ID.

India has, over the last decade, implemented a digital government ID using biometrics (see the Aadhaar Wikipedia article). Other countries have been moving to e-government, where the use of government services is implemented over the Internet (see the EU article on eGovernment in Lithuania). Such eGovernment services depend on a digitized population registry.

Although it's unclear whether Aadhaar and Lithuania make use of blockchain technology for their ID services, Microsoft is definitely looking to blockchains to provide unique accounts/digital IDs to its population of users.

User sign-on has been a prevalent problem on the web for years. Each and every web and mobile App requires a person to sign on to personalize the App. Nowadays, many Apps support using a Google ID or Facebook ID for single sign-on, and there are other technologies being offered that provide similar services. Using a blockchain ID could easily support a single sign-on service.

The blockchain ID (wallet) key pair could easily be used to sign an authentication transaction identifying the App and the user. This authentication transaction would be processed by the blockchain digital ID service, which would verify the signature using the ID's public key and check a backend ID repository to confirm that the user logging in is the person who opened the account, acting as a sort of “proof of who you are”.
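To make that concrete, here's a minimal Python sketch of such a challenge/response sign-on, assuming the user's wallet holds an Ed25519 key pair and that the ID service can look up the wallet's public key. The challenge format, function names and key lookup are my own illustration (using the pyca/cryptography package), not any vendor's actual API.

```python
# Minimal sketch of a challenge/response sign-on using a wallet key pair.
# Assumes the ID service can look up the wallet's public key (e.g., from a
# blockchain-backed registry); all names here are illustrative only.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# User side: the wallet holds the private key.
wallet_private_key = ed25519.Ed25519PrivateKey.generate()
wallet_public_key = wallet_private_key.public_key()      # registered on-chain

def service_issue_challenge(app_id: str) -> bytes:
    """ID service sends a one-time challenge naming the App plus a nonce."""
    return f"login:{app_id}:".encode() + os.urandom(16)

def wallet_sign(challenge: bytes) -> bytes:
    """User's wallet signs the challenge with its private key."""
    return wallet_private_key.sign(challenge)

def service_verify(challenge: bytes, signature: bytes) -> bool:
    """Service verifies the signature with the public key recorded for this ID."""
    try:
        wallet_public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = service_issue_challenge("example-app")
assert service_verify(challenge, wallet_sign(challenge))   # "proof of who you are"
```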

Storage on blockchain

Filecoin and Storj are storage providers that use blockchain services to let others use your local (or networked) storage, providing storage to the world.

A while back I wrote about (free) peer-to-peer storage and compute services (see my Free P2P cloud storage … post). But the problem was how people would benefit from hosting the P2P storage or compute. Filecoin and Storj solve this by paying in cryptocurrency for storage hosted on your hardware.

Filecoin offers a storage auction and hosting service that anyone worldwide can log into and use. The data stored is encrypted end-to-end so that no one can see what's being stored, and the data is also erasure coded so that it remains protected and accessible even when one or more hosting sites are offline.
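Erasure coding is what makes that possible. As a toy illustration of the idea (simple XOR parity that survives the loss of any one shard; Filecoin and other real systems use much stronger codes that tolerate multiple losses), here's a sketch:

```python
# Toy erasure code: split data into k shards plus one XOR parity shard.
# Losing any single shard still lets you rebuild the original data.
# Real systems use stronger codes that tolerate multiple lost shards.
from functools import reduce
from typing import List, Optional

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4) -> List[bytes]:
    """Split data into k equal shards and append one XOR parity shard."""
    shard_len = -(-len(data) // k)                  # ceiling division
    padded = data.ljust(shard_len * k, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    shards.append(reduce(xor, shards))              # parity shard
    return shards

def decode(shards: List[Optional[bytes]], orig_len: int) -> bytes:
    """Rebuild the data even if any one shard (data or parity) is missing."""
    missing = [i for i, s in enumerate(shards) if s is None]
    assert len(missing) <= 1, "toy code only survives a single lost shard"
    if missing:
        shards[missing[0]] = reduce(xor, (s for s in shards if s is not None))
    return b"".join(shards[:-1])[:orig_len]

blob = b"hello, decentralized storage world"
shards = encode(blob)
shards[2] = None                                    # one hosting site goes offline
assert decode(shards, len(blob)) == blob
```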

Filecoin uses “proofs of storage”, “proofs of space”, “proofs of data possession”, and “proofs of retrievability” as ways to guarantee their storage service works properly. They also use chained “proofs of replication” as “proofs of spacetime” for service validation checks. Proofs of replication are a way of ensuring that storage providers are not deduplicating data copies and charging for non-deduped storage. (See Filecoin's Proof of Replication paper for more info.)
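For a flavor of how a basic proof of data possession can work, here's a minimal spot-check sketch: the verifier keeps small per-block digests and periodically audits random blocks. Filecoin's actual proofs of replication and spacetime are far more elaborate, using cryptographic commitments and on-chain verification, so treat this only as the underlying intuition.

```python
# Minimal proof-of-data-possession sketch: the verifier keeps only per-block
# digests, then periodically spot-checks random blocks. Real schemes avoid
# shipping whole blocks back by using homomorphic tags and chained commitments;
# this just shows the basic idea.
import hashlib, os, random

BLOCK = 4096

def digest_blocks(data: bytes) -> list:
    """Run once before handing the data to a storage provider."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def provider_fetch(stored: bytes, index: int) -> bytes:
    """Storage provider answers an audit by returning the requested block."""
    return stored[index * BLOCK:(index + 1) * BLOCK]

def audit(digests: list, fetch, rounds: int = 8) -> bool:
    """Spot-check a few random blocks; a provider that dropped data will fail."""
    for _ in range(rounds):
        i = random.randrange(len(digests))
        if hashlib.sha256(fetch(i)).digest() != digests[i]:
            return False
    return True

data = os.urandom(256 * BLOCK)                 # data stored with the provider
digests = digest_blocks(data)                  # verifier keeps only ~8KB of digests
assert audit(digests, lambda i: provider_fetch(data, i))
```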

Storj looks somewhat similar to Filecoin, but without as much sophistication behind it.

Compute on blockchain

Ethereum was invented to support smart contracts that run on blockchain technology. IBM's Hyperledger OpenLedger project (see our GreyBeardsOnStorage podcast and RayOnStorage post on Hyperledger) is another example.

Smart contracts are essentially applications that run in a blockchain's virtualized environment. Blockchain services are used to run the application and validate that it's run only once. In some cases smart contracts query external oracles as a way to verify that something or some action has occurred outside the blockchain. Other oracles can be entirely digital entities that check on a particular commodity price, weather pattern, account value, etc. The oracle becomes a critical step in determining the go/no-go status of a smart contract.
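As a purely conceptual illustration (written in Python rather than an actual on-chain language like Solidity), here's a toy “contract” whose settlement is gated on an external oracle and which is allowed to run only once. The contract, oracle and threshold are all made up for illustration.

```python
# Toy illustration of an oracle-gated smart contract that can settle only once.
# Real smart contracts run as bytecode on chain (e.g., Ethereum's EVM); this
# just shows the control flow: check the oracle, then settle exactly one time.
from typing import Callable

class CropInsuranceContract:
    """Pays the farmer if the rainfall oracle reports a drought."""

    def __init__(self, rainfall_oracle: Callable[[], float], payout: int):
        self.oracle = rainfall_oracle          # external data source
        self.payout = payout
        self.settled = False                   # the chain enforces run-only-once

    def settle(self) -> int:
        if self.settled:
            raise RuntimeError("contract already executed")
        self.settled = True
        rainfall_mm = self.oracle()            # the critical go/no-go step
        return self.payout if rainfall_mm < 100.0 else 0

# A digital oracle could be a weather-service query; here it's stubbed out.
contract = CropInsuranceContract(rainfall_oracle=lambda: 42.0, payout=1000)
print(contract.settle())                       # 1000 (drought year)
```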

Advertisements vs. crypto mining

Salon, a news website, offers readers the option to see advertisements or to allow Salon to use their computer (browser) to mine crypto coins. (See the Salon offers… article in CoinDesk.)

I believe this offer is made when the website detects that a viewer is using an ad blocker.

~~~~

The trend is clear: people, organizations and even governments are looking at blockchain technology to provide basic and advanced services around the world.

If anyone is interested in providing a pre-paid Visa card via blockchains, please contact me. I'd like to help.

Now if I could just find some GPUs at a decent price somewhere…

Speaking of advertising… RayOnStorage doesn’t use advertising. But blogging like this takes time and money. If anyone’s interested in helping fund this blog, please consider sending some BTC our way, even 0.0001 BTC would help.

Our BTC wallet address is:

1MqBbAvMo6QbCVD6ZwtbLaPxmcUZGj9Ghw

Photo Credit(s): Blockchain and the public sector on OpenGovAsia.com

Unleash your design teams with single signon on Unifilabs.com

Understanding the difference between P2P and Client-server networks on LinkedIN

Blockgeek’s guide to smart contracts

#VMworld2015 day 1 announcements

 

It seemed like today was all about the cloud and cloud-native apps. Among the many announcements, VMware announced two key new capabilities: vSphere Integrated Containers and the Photon Platform.

Containers running on VMware

  • VMware vSphere Integrated Containers is an implementation of containers that runs natively under vSphere. The advantage of this solution is that when developers fire up a multi-container app, each container now exists as a separate VM under vSphere and can be managed, monitored and secured just like any other VM in the environment. Previously a multi-container app would be one VM per container engine, with potentially many containers running under that single VM. But with vSphere Integrated Containers, the container engine and the lightweight Linux kernel (Photon OS) are integrated into the ESX hypervisor so each container runs as a native VM. Integrated Containers is a follow-on to a combination of Project Bonneville, Project Photon (OS) and Instant Clones. Recall that with Instant Clones one can spin up a clone of a VM in less than a second, with a 0MB initial memory footprint.
  • Photon Platform takes container execution to a whole new level, with a new deployment of a hypervisor tailor-made to run containers (not VMs). With the Photon Platform one natively runs container frameworks underneath the platform. Photon Platform consists of Photon Machine, which is Photon OS (a lightweight Linux kernel distro) plus the new Microvisor (a new lightweight hypervisor for container hardware calls), and Photon Controller, which is a distributed control plane and management API. With Photon Platform one can manage 100K to millions of containers, running under 1000s of container frameworks.

Over time the Photon Platform is intended to be open sourced. VMware also announced a bundling of Pivotal Cloud Foundry with the Photon Platform so as to better run cloud-native apps implemented in Cloud Foundry. But the ultimate intent is to provide support for Google Kubernetes, Apache Mesos and any other container framework that comes out.

So now you can run your Docker container apps or any other container app solution in two different ways. One depends on the standard vSphere management platform and runs container apps as standard VMs. The other takes a completely greenfield approach and runs container frameworks natively on a ground-up new hypervisor, with an altogether new management solution that scales.

The advantage of Photon Platform is that it scales to extreme, cloud-level application environments. Photon is intended to run cloud-native apps.

vCloud Air extensions

One of the other major things that VMware demoed today was moving a VM from on premises to vCloud Air and back again – a real crowd pleaser. One VMware exec said that MIT had convinced them they needed to be able to move apps from on premises to the cloud for dev-test. MIT then turned around and decided they wanted to move dev-test activity back to their on-premises environment and instead move their production to vCloud Air.

They demoed both capabilities, using vMotion to move a VM to vCloud Air and using it again to move it back. The nice thing about all this is that all the security and other attributes of the VM can move to the cloud and back again along with the VM. All the while the VM continued to operate, with no disruption to execution. They mentioned that it could potentially take hours to move the data for the VM.

There were a number of other capabilities announced today, including EVO SDDC (EVO: RACK reborn), which includes a new datacenter management solution. Customers can now roll in a rack of servers and have EVO SDDC manage them and deploy a software-defined data center on them in a matter of hours. Within EVO SDDC you can have application domains which span racks of servers but provide isolation and management multi-tenancy.

NSX 6.2 was also discussed and is essentially the key to extending your networking from on premises to vCloud Air. With NSX 6.2, local routing, micro-segmentation security and app firewalls can be configured locally and then be “extended” to the vCloud Air environment.

Lots of moving parts here and I probably missed some key components to these solutions and didn’t cover any of them well enough other than to give a feel for what they are.

But one thing is clear: VMware's long-term strategy is to take your native, on-premises VMs to vCloud Air and back again, and if your DevOps group or any other BU wants to use containers to implement cloud apps, VMware has you covered coming and going.

Comments?

DDN unchains Wolfcreek, unleashes IME and updates WOS

It's not every day that we get a vendor claiming 2.5X the top SPC-1 IOPS result (currently held by the Hitachi VSP G1000 all-flash array at ~2M IOPS), but that's what DataDirect Networks (DDN) has claimed for an all-flash version of their new Wolfcreek hyper converged appliance. DDN says their new 4U appliance is capable of 60GB/sec of throughput and over 5M IOPS. (See their press release for more information.) It's unclear if these are SPC-1 IOPS or not, but I haven't seen any SPC-1 report on it yet.

In addition to the new Wolfcreek appliance, DDN announced their new Infinite Memory Engine™ (IME) flash caching software and WOS® 360 V2.0, an enhanced version of their object storage.

DDN, if you haven't heard of them, has done well in Web 2.0 environments and is a leading supplier to high performance computing (HPC) sites. They have an object storage system (WOS), all-flash block storage (SFA12KXi), hybrid (disk-SSD) block storage (SFA7700X™ & SFA12KX™), a Lustre file appliance (EXAScaler), an IBM GPFS™ NAS appliance (GRIDScaler), a media server appliance (MEDIAScaler™) and software defined storage (Storage Fusion Accelerator [SFX™] flash caching software).

Wolfcreek hyper converged appliance

The converged solution comes in a 4U appliance using dual Intel Haswell microprocessors (with up to 18 cores each) and includes a PCIe fabric which supports 48 NVMe flash cards or 72 SFF SSDs. With the NVMe cards or SSDs, Wolfcreek will be using their new IME software to accelerate IO activity.

Wolfcreek IME software supports either a burst mode IO caching cluster or a storage cluster of nodes. I assume burst mode is a storage caching layer for backend file system storage. As a storage cluster, I assume this would include some of their scale-out file system software on the nodes. The Wolfcreek cluster interconnect is 40Gb InfiniBand or 10/40Gb Ethernet and will also support Intel's Omni-Path. The Wolfcreek appliance is compatible with HPC Lustre and IBM GPFS scale-out file systems.

The Wolfcreek appliance could be a great platform for OpenStack and Hadoop environments. But it also supports virtual machine hypervisors from VMware, Citrix and Microsoft. DDN says that the Wolfcreek appliance can scale up to support 100K VMs. I've been told that IME will not be targeted to work with hypervisors in the first release.

Recall that with a hyper converged appliance, some portion of the system resources (memory and CPU cores) must be devoted to server and VM application activities and the remainder to storage activity. How this is divided up and whether this split is dynamic (changes over time) or static (fixed over time) in the Wolfcreek appliance is not indicated.

The hyper converged field is getting crowded of late what with VMware EVO:RAIL, Nutanix, ScaleComputing, Simplivity and others coming out with solutions. But there aren’t many that support all-flash storage and it seems unusual that hyper converged customers would have need for that much IO performance. But I could be wrong, especially for HPC customers.

There’s much more to hyper convergence than just having storage and compute in the same node. The software that links it all together, manages, monitors and deploys these combined hypervisor, storage and server systems is almost as important as any of the  hardware. There wasn’t much talk about the software that DDN is putting together for Wolfcreek but it’s still early yet. With their roots in HPC, it’s likely that any DDN hyper converged solution will target this market first and broaden out from there.

Infinite Memory Engine (IME)

IME is an outgrowth of DDN's SFX software and seems to act as a caching layer for parallel file system IO. It makes use of NVMe or SSDs for its IO caching. And according to DDN, it can offer up to 1000X IO acceleration to storage or 100X file system acceleration.

It does this primarily by providing an application aware IO caching layer and supplying more effective IO to the file system layer using PCIe NVMe or SSD flash storage for hardware IO acceleration. According to the information provided by DDN, IME can provide 50 GB/sec bandwidth to a host compute cluster while only doing 4GB/sec of throughput to a backend file system, presumably by better caching of file IO.

WOS 360 V2.0

The new WOS 360 V2.0 object storage system's features include:

  • Higher density storage packaging with 98 8TB SATA drives (or 768TB raw capacity in 4U), supporting 8B objects each and over 100B objects in a cluster.
  • Native Swift API support for OpenStack environments, which includes gateway or embedded deployments, up to 5000 concurrent users and 5B objects/namespace.
  • Global ObjectAssure data encoding with lower storage overhead (1.5x, a 20% reduction from their previous encoding option) for highly durable and available object storage, using a two-level hierarchical erasure code.
  • Enhanced network security with SSL, which provides end-to-end SSL network data transport between clients and WOS and between WOS storage nodes.
  • Simplified cluster installation, deployment and maintenance: you can now deploy a WOS cluster in minutes, with a simple point-and-click GUI for installation and cluster deployment and automated, non-disruptive software upgrades.
  • Performance improvements for better video streaming, content distribution and large file transfers, with improved QoS for latency-sensitive applications.

~~~~

There's probably more going on with DDN than covered here, but this hits the highlights. I wish there were more on their Wolfcreek appliance and its various configurations and performance benchmarks, but there isn't.

Comments?

 Photo Credits: wolf-63503+1920 by _Liquid

 

VMware VVOLs potential performance problems

We discussed vSphere 6 VVOLs (Virtual Volumes) on this month’s GreyBeardsonStorage (GBoS) podcast with Howard Marks (@DeepStorageNet) and Satyam Vaghani (@SatyamVaghani, “Father of VVOLs”, CoFounder & CTO of PernixData).

VVOLs queue depth problem?

One performance problem from my perspective is that all VVOL FC IO is now funneled through a single Protocol Endpoint (PE) LUN for a single storage system. There may be some potential queue depth issues, but Satyam and Howard both said that queue depths have been greatly increased over the last decade or so and this shouldn't be a problem, as long as you're configured properly.
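A quick back-of-the-envelope check shows why queue depth matters when everything funnels through one PE LUN. By Little's Law, outstanding IOs ≈ IOPS × average latency; the numbers below are made-up examples, not VMware or array specifications.

```python
# Back-of-the-envelope check of PE LUN queue depth using Little's Law:
# outstanding IOs = IOPS x average latency. Example numbers are illustrative.
def outstanding_ios(iops: float, latency_ms: float) -> float:
    return iops * (latency_ms / 1000.0)

pe_queue_depth = 256          # an example LUN queue depth setting
for iops in (50_000, 200_000, 500_000):
    need = outstanding_ios(iops, latency_ms=1.0)    # 1 ms average latency
    ok = "fits" if need <= pe_queue_depth else "EXCEEDS"
    print(f"{iops:>7} IOPS @ 1 ms -> {need:6.0f} outstanding IOs ({ok} QD {pe_queue_depth})")
# 500K IOPS at 1 ms needs ~500 outstanding IOs -- more than this example PE LUN
# queue depth of 256, which is why proper configuration (or multiple PEs) matters.
```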

What about VVOL PEs on ALUA storage?

In an ALUA (Asymmetric Logical Unit Access) Active/Passive, dual controller storage system, a set of LUNs is assigned to one controller, the “active” side of an Active/Passive ALUA storage system. Many ALUA vendors now support “Active/Active” configurations such that half the LUNs are assigned to one side and the other half to the other side, for an Active/Passive & Passive/Active pair or Active/Active configuration.

So, ALUA storage systems have a LUN “allegiance” to a controller. If this continues to be the case under VVOLs, then a PE would only be processed by one side of an ALUA dual controller system, effectively reducing the horsepower available to process VVOL IO to half of the ALUA storage system.

Now just because there is a LUN allegiance in ALUA storage doesn't necessarily mean that all internal IO processing for a LUN is done on only one controller. But historically that has been the case. For instance, during an ALUA system non-disruptive code update, an “active” ALUA side must “fail over” its LUNs to the other side to provide continuous IO activity, while the formerly active ALUA side is taken down and updated with new code.

Potential solutions to ALUA PE performance?

One way to get around the VVOL ALUA performance problem is to have multiple PEs in a single storage system for the same vSphere cluster's VVOLs. I don't know of anything that would inhibit a storage system from supporting multiple PEs today; they already need to support multiple PEs for multiple vSphere clusters. Also, a VMware vSphere cluster must support multiple PEs for multiple storage systems.

I am also not aware of any VASA 2.0 requirement that restricts the number of PEs for a storage system’s support of a single vSphere cluster. But I could be mistaken here. So there should be nothing to inhibit multiple PEs from the same ALUA storage system to the same vSphere cluster.

Of course, this means an ALUA storage system's VVOLs would need to be divided across its ALUA PEs.

Another solution is to eliminate any LUN allegiance for ALUA controllers. This requires shared memory between controllers to hold IO state and this is what non-ALUA storage does already.

~~~~

It's just like Howard said on the GBoS podcast: “there's going to be good and bad implementations of VVOLs”, and someone will need to tell the difference between the two.

Comments?

 

Photo Credit(s): Passport Please by Oren Levine

VMworld 2014 projects Marvin, Mystic, and more

[This post was updated after being published to delete NDA material – sorry, RL] Attended VMworld 2014 in San Francisco this past week. Lots of news, mostly about vSphere 6 beta functionality and how the new AirWatch acquisition will be rolled into VMware's End-User Computing framework.

vSphere 6.0 beta

Virtual Volumes (VVOLs) is in beta and extends VMware’s software-defined storage model to external NAS and SAN storage.  VVOLs transforms SAN/NAS  storage into VM-centric devices by making the virtual disk a native representation of the VM at the array level, and enables app-centric, policy-based automation of SAN and NAS based storage services, somewhat similar to the capabilities used in a more limited fashion by Virtual SAN today.

Storage system features have proliferated and differentiated over time and to be able to specify and register any and all of these functional nuances to VMware storage policy based management (SPBM) service is a significant undertaking in and of itself. I guess we will have to wait until it comes out of beta to see more. NetApp had a functioning VVOL storage implementation on the show floor.

Virtual SAN 1.0/5.5 currently has 300+ customers, with 30+ ready storage nodes from all major vendors. There are reference architecture documents and system bundles available.

Current enhancements outside of vSphere 6 beta

vRealize Suite extends automation and monitoring support for a broad mix of VMware and non VMware infrastructure and services including OpenStack, Amazon Web Services, Azure, Hyper-V, KVM, NSX, VSAN and vCloud Air (formerly vCloud Hybrid Services), as well as vSphere.

New VMware functionality being released:

  • vCenter Site Recovery Manager (SRM) 5.8 – provides self-service DR through vCloud Automation Center (vRealize Automation) integration, with up to 5000 protected VMs per vCenter and up to 2000 concurrent VM recoveries. The SRM UI will move to be supported under vSphere's Web Client.
  • vSphere Data Protection Advanced 5.8 – provides configurable parallel backups (up to 64 streams) to reduce backup duration/shorten backup windows, access and restore backups from anywhere, and provides support for Microsoft Exchange DAGs, and SQL Clusters, as well as Linux LVMs and EXT4 file systems.

VMware NSX 6.1 (in beta) provides micro-segmentation security, which essentially supports fine-grained firewall definitions almost at the VM level. There are over 150 NSX customers today.

vCloud Hybrid Cloud Services is being rebranded as vCloud Air, and is currently available globally through data centers in the US, UK, and Japan. vCloud Air is part of the vCloud Air Network, an ecosystem of over 3,800 service providers with presence in 100+ countries that are based on common VMware technology.  VMware also announced a number of new partnerships to support development of mobile applications on vCloud Air.  Some additional functionality for vCloud Air that was announced at VMworld includes:

  • vCloud Air Virtual Private Cloud On Demand beta program supports instant, on demand consumption model for vCloud services based on a pay as you go model.
  • VMware vCloud Air Object Storage based on EMC ViPR is in beta and will be coming out shortly.
  • DevOps/continuous integration as a service, vRealize Air automation as a service, and DB as a service (MySQL/SQL server) will also be coming out soon

End-User Computing: VMware is integrating AirWatch‘s (another acquisition) enterprise mobility management solutions for mobile device management/mobile security/content collaboration (Secure Content Locker) with their current Horizon suite for virtual desktop/laptop support. VMware End User Computing now supports desktop/laptop virtualization, mobile device management and security, and content security and file collaboration. Also VMware’s recent CloudVolumes acquisition supports a light weight desktop/laptop app deployment solution for Horizon environments. AirWatch already has a similar solution for mobile.

OpenStack, Containers and other collaborations

VMware is starting to expand their footprint into other arenas, with new support, collaboration and joint ventures.

A new VMware OpenStack Distribution is in beta now, to be available shortly, which supports VMware as underlying infrastructure for OpenStack applications that use OpenStack APIs. VMware has become a contributor to OpenStack open source. There are other OpenStack distributions that support VMware infrastructure available from HP, Canonical, Mirantis and one other company I neglected to write down.

VMware has started a joint initiative with Docker and Pivotal to broaden support for Linux containers. Containers are lightweight packaging for applications that strips out the OS, hypervisor, frameworks, etc., allowing an application to be run on mobile, desktops, servers and anything else that runs a Linux OS (for Docker, Linux kernel 3.8 or better). Rumor has it that Google launches over 15M Docker containers a day.

VMware container support expands from Pivotal Warden containers, to now also include Docker containers. VMware is also working with Google and others on the Kubernetes project which supports container POD management (logical groups of containers). In addition Project Fargo is in development which is VMware’s own lightweight packaging solution for VMs. Now customers can run VMs, Docker containers, or Pivotal (Warden) containers on the same VMware infrastructure.

AT&T and VMware have a joint initiative to bring enterprise-grade network security, speed and reliability to vCloud Air customers, which essentially allows customers to use AT&T VPNs with vCloud Air. There's more to this but that's all I noted.

VMware EVO, the next evolution in hyper-convergence has emerged.

  • EVO RAIL (formerly known as Project Marvin) is an appliance package from VMware hardware partners that runs the vSphere Suite, Virtual SAN and vCenter Log Insight. The hardware supports 4 compute/storage nodes in a 2U-tall rack mounted appliance. 4 of these appliances can be connected together into a cluster. Each compute/storage node supports ~100 VMs or ~150 virtual desktops. VMware states that the goal is to have an EVO RAIL implementation take at most 15 minutes from power on to running VMs. Current hardware partners include Dell, EMC (formerly named Project Mystic), Inspur (China), Net One (Japan), and SuperMicro.
  • EVO RACK is a data center level hardware appliance with vCloud Suite installed and includes Virtual SAN and NSX. The goal is for EVO RACK hardware to support a 2hr window from power on to a private cloud environment/datacenter deployed and running VMs. VMware expects a range of hardware partners to support EVO RACK but none were named. They did specifically mention that EVO RACK is intended to support hardware from the Open Compute Project (OCP). VMware is providing contributions to OCP to facilitate EVO RACK deployment.

~~~~

Sorry about the stream of consciousness approach to this. We got a deep dive on what’s in vSphere 6 but it was all under NDA. So this just represents what was discussed openly in keynotes and other public sessions.

Comments?

 

Veeam’s upcoming V8 virtues

[Not] Vamoosing VMworld

We were at Storage Field Day 5 (SFD5, see the videos here) last month and had a briefing on Veeam’s upcoming V8 release.

They also told us (news to me) that they are leaving VMworld [I sit corrected, I have been informed after this went to press that Veeam is not leaving VMworld 2014, and never said anything about it at the session – my mistake and I take full responsibility, sorry for any confusion] (sigh, now who's going to have THE after-conference KILLER PARTY at VMworld) and moving to [but they did say they are definitely starting up] their own VeeamON conference at The Cosmopolitan in Las Vegas on October 6, 7 & 8 this year. If their VMworld parties are any indication, the conference at the Cosmo should be a fun and rewarding time for all. Pre-registration is open and they have a call out for papers.

Doug Hazelman (@VMDoug), Rick Vanover (@RickVanover) and Luca Dell’Oca (@dellock6) all presented although Luca’s session was under strict NDA to be revealed later. I think sometime later this summer.

Doug mentioned that after 6 years, Veeam now has over 100,000 customers worldwide. One of their more popular early innovations was the ability to run a VM directly off of a backup, and sometime over the past couple of years they have moved from a VMware-only backup & replication solution to also supporting Microsoft Hyper-V (more news to me).

V8’s virtues

Veeam V8 will add some interesting capabilities to the Veeam product solutions:

  • (VMware only) Built-in backups from storage snapshots – (Enterprise Plus edition only) Backup from VMware snapshots can sometimes impact app performance, especially when it comes time to commit changes. But with V7, Veeam now offers backup utilizing VMware's Change Block Tracking (CBT) and taking backups directly from storage snapshots for HP 3PAR StoreServ, HP (Lefthand) StoreVirtual/StoreVirtual VSA and, in the soon to be available V8, NetApp FAS (Data ONTAP 8.1 or above, 7- or cluster-mode, clones too) storage systems. First Veeam does its application-level processing (under Windows Server, VSS operations); after that completes, it tells VMware to take (a VMware) snapshot; when that completes, it tells the storage to take a (storage) snapshot; and when that completes, it releases the VMware snapshot (see the rough orchestration sketch after this list). What all this does is allow them to utilize VMware CBT as well as storage snapshots, which makes it up to 20 times faster than normal VMware snapshot backups. This way they can back up directly from the storage snapshot using the Veeam proxy. Also, because the VMware snapshot is so short lived there is little overhead for committing any changes. And there is no need to use a proxy ESX server to do this, i.e., promote the VMware snapshot to a LUN, add it to an ESX, resignature, add the VM, and do all the backups, which, of course, destroys CBT. This works for FC, iSCSI and NFS data stores. With NetApp storage you can also take the (VSS) application-consistent snapshot and copy it to SnapVault.
  • Veeam Explorer (recovery) for storage snapshots – (Free backup edition) Recovery from (HP in V7 & NetApp in V8) storage snapshots is yet another feature and provides item (e.g., emails, contacts, email folders for Exchange), granular (VM level or file level) or full (volume) recovery from storage based snapshots, regardless of how those storage snapshots were created.
  • Veeam Explorer for SQL Server (V8 only) – (unsure what license is required) Similar to the Explorer for snapshots discussed above, this would allow a Veeam admin to do item level recovery for an SQL database. This also includes recovery from Veeam Backup repositories as well as storage snapshots. But this means that you could restore a ROW of an SQL table, an SQL TABLE as well as a whole SQL database. Now DBAs always had these sorts of abilities which required using Log services. But allowing a Veeam admin to do these sorts of activities seems like putting a gun in the hands of a child (or maybe a bazooka in the hands of an untrained civilian).
  • Veeam Explorer for Active Directory (V8 only) – (unsure what license is required) You've seen what's available above; just consider these same capabilities applied to Active Directory. This means you can restore a password hash, user, group or organizational unit (OU). I don't know about you, but this seems more akin to a howitzer in the hands of a civilian.
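For the storage-snapshot backup sequence described in the first bullet above, here's a rough orchestration sketch. Every function in it is a hypothetical, stubbed-out stand-in for the real Veeam/VMware/array calls involved, not an actual API.

```python
# Rough orchestration sketch of the storage-snapshot backup flow in the first
# bullet above. Every function here is a hypothetical stand-in (stubbed to
# print) for the real Veeam/VMware/array calls -- not actual APIs.
def quiesce_guest_with_vss(vm):     print(f"VSS quiesce {vm}")
def create_vmware_snapshot(vm):     print("VMware snapshot"); return "vm-snap"
def delete_vmware_snapshot(snap):   print(f"release {snap}")          # short-lived
def create_storage_snapshot(ds):    print(f"array snapshot of {ds}"); return "hw-snap"
def delete_storage_snapshot(snap):  print(f"delete {snap}")
def changed_blocks(vm):             return [17, 42, 99]               # from CBT
def proxy_copy(snap, blocks, repo): print(f"copy {len(blocks)} blocks from {snap} to {repo}")

def backup_vm_from_storage_snapshot(vm, datastore, repo):
    quiesce_guest_with_vss(vm)                      # 1. app-level (VSS) processing
    vm_snap = create_vmware_snapshot(vm)            # 2. VMware snapshot (CBT preserved)
    hw_snap = create_storage_snapshot(datastore)    # 3. storage snapshot
    delete_vmware_snapshot(vm_snap)                 # 4. release quickly -> tiny commit
    proxy_copy(hw_snap, changed_blocks(vm), repo)   # 5. back up straight off the array
    delete_storage_snapshot(hw_snap)

backup_vm_from_storage_snapshot("sql-vm01", "datastore1", "veeam-repo")
```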

They showed an example of a competitive situation, running V8 (in beta?) with NetApp snapshot-based backups versus some unnamed competition. They were able to complete a full backup in 1/4 the time of their competition (2 hrs. vs. 8 hrs.) and completed incremental backups in 35 min. vs. 2 hrs. for the competition.

“Thar be dragons there …”

Ok, maybe I am a little more paranoid than the average IT guy/gal. But in my (old world, greybeards) view, SQL databases belong in the realm of DBAs and Active Directory databases belong to domain controller admins. Messing around with production versions of SQL DBs or AD DBs seems hazardous to a data center's health. We're not just talking files anymore here, guys.

In Veeam's defense, these new Explorer recovery tools are probably only going to be used to do something that needs to be done right away, to get things back operating again, and would not be used unless there's a real need/emergency to do so. Otherwise let the DBAs and security admins do it with their log recovery tools. And another thing, they have had similar capabilities for Exchange emails, folders, contacts, etc. and no one's shot their foot off yet, so why the concern?

Nonetheless, I feel strongly that these tools ought to be placed under lock and key and the key put in a safe with the combination under a glass case labeled IN CASE OF EMERGENCY, BREAK GLASS.

Comments?

Proximal Data, server SSD caching software

I attended Storage Field Day 4 (SFD4) about a month ago now and had a chance to visit with Rory Bolt, CEO/Founder of Proximal Data, a new server side caching software solution. Last month the GreyBeards (Howard Marks and I) talked with Satyam Vaghani, Co-founder and CTO of PernixData, another server side caching solution. You can find that podcast here. But this post is about Proximal Data. These guys could use some better marketing, but when you spend 90% of your funding on engineers this is what you get.

Proximal Data doesn't believe in agent software, because it takes a long time to deploy and could potentially disrupt IT operations when being installed. In contrast, Proximal Data installs their AutoCache solution software into the hypervisor as a VIB (vSphere Installation Bundle). There was some discussion at SFD4 on whether installing the VIB would be disruptive or not to customer operations. Not being a VMware expert I won't comment on the results of the discussion, but if you want to find out more I suggest viewing the SFD4 video of Proximal Data's presentation.

Of course, being at the hypervisor layer gives them IO activity information at the VM level, and they can use this to control their caching software at VM granularity. In addition, by executing at the hypervisor layer AutoCache doesn't require any guest OS specific functionality or hooks. Another nice thing about executing at the hypervisor level is that they can cache RDM devices.

To use AutoCache you will need one or more PCIe SSD(s) or DAS SSD(s) in your ESXi server.  Once the PCIe SSD or DAS SSD is installed and after you have installed/activated the AutoCache software you will need to partition or dedicate the device to Proximal Data’s AutoCache.

AutoCache is managed as a virtual appliance with a web server GUI. With the networking set up and the AutoCache VIB installed, you can access their operator panels via a tab in vCenter. Once the software is installed you don't have to use their GUI ever again.

AutoCache read caching algorithms

Not every read IO for a VM being cached is brought into AutoCache's SSD cache. They are trying to ensure that cached data will be referenced again. As such, they typically wait for two reads before the data is placed into cache.

They support two different read caching algorithms, referred to during the presentation as Algorithm A and Algorithm B. (They really need some marketing – Turbo Boost and Extreme Boost sound better to me.) I'm not sure they ever described the differences between the two, but the fact that they have multiple caching algorithms speaks to some sophistication. They also maintain a “Ghost data list”. Ghost data is data whose metadata is still in cache, but whose actual data is no longer in cache.

When a miss occurs, they determine whether the data would have been a hit in Ghost data, in Algorithm A or in Algorithm B had they been active on the VM. If it would have been a hit in Ghost data then, in general, you probably need more SSD caching space on this ESXi server for the VMs being cached. If it would have been a hit in Algorithm A or B, you should probably be using that algorithm for this VM's IO.
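To make the ghost-list idea concrete, here's a small sketch of an LRU read cache that keeps a “ghost” list of recently evicted block addresses. It's a generic illustration of the technique, not Proximal Data's actual algorithm: a miss that lands in the ghost list is exactly the “you'd have hit with a bigger cache” signal described above.

```python
# Sketch of an LRU read cache with a "ghost" list of recently evicted block
# addresses. A miss that lands in the ghost list means a larger cache would
# have been a hit. Generic illustration only, not Proximal Data's algorithm.
from collections import OrderedDict

class GhostLRUCache:
    def __init__(self, capacity: int, ghost_capacity: int):
        self.data = OrderedDict()      # block addr -> cached data
        self.ghost = OrderedDict()     # block addr -> None (metadata only)
        self.capacity, self.ghost_capacity = capacity, ghost_capacity
        self.hits = self.misses = self.ghost_hits = 0

    def read(self, addr, fetch_from_storage):
        if addr in self.data:                      # cache hit
            self.data.move_to_end(addr)
            self.hits += 1
            return self.data[addr]
        self.misses += 1
        if addr in self.ghost:                     # would-have-been hit
            self.ghost_hits += 1                   # -> "you need more SSD cache"
            del self.ghost[addr]
        block = fetch_from_storage(addr)
        self.data[addr] = block
        if len(self.data) > self.capacity:         # evict LRU, remember its address
            evicted_addr, _ = self.data.popitem(last=False)
            self.ghost[evicted_addr] = None
            if len(self.ghost) > self.ghost_capacity:
                self.ghost.popitem(last=False)
        return block

cache = GhostLRUCache(capacity=100, ghost_capacity=400)
for addr in list(range(300)) * 3:                  # working set > cache size
    cache.read(addr, lambda a: b"data-%d" % a)
print(cache.hits, cache.misses, cache.ghost_hits)  # lots of ghost hits = too small
```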

Another approach AutoCache supports is called “Glimmer IO”. I liken this to sequential read-ahead, where AutoCache keeps track, on a VM basis, of all the IO being performed and tries to determine whether it is sequential or random. If the VM is doing sequential IO, AutoCache can start reading ahead of where the VM is currently reading. By doing so, they can stage the data in cache before the VM needs it/reads it. According to Rory there are policies which can be set on a VM basis to limit how much read-ahead is performed. I assume there are policies associated with the use of Algorithm A and B on a VM basis as well, but they didn't go into this.
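Here's roughly what per-VM sequential detection with a read-ahead limit looks like, as a generic sketch (not Glimmer IO's actual code): track the last block address seen per VM and, after a few contiguous reads, prefetch ahead up to a policy limit.

```python
# Generic sketch of per-VM sequential detection with a read-ahead policy limit,
# in the spirit of what's described above (not Proximal Data's Glimmer IO code).
class SequentialDetector:
    def __init__(self, trigger: int = 3, max_readahead_blocks: int = 64):
        self.trigger = trigger                    # contiguous reads before prefetch
        self.max_readahead = max_readahead_blocks # per-VM policy limit
        self.state = {}                           # vm -> (last_block, run_length)

    def on_read(self, vm: str, block: int) -> range:
        """Returns the range of blocks to prefetch (possibly empty)."""
        last, run = self.state.get(vm, (None, 0))
        run = run + 1 if last is not None and block == last + 1 else 1
        self.state[vm] = (block, run)
        if run >= self.trigger:                   # looks sequential: stage ahead
            return range(block + 1, block + 1 + min(run * 2, self.max_readahead))
        return range(0)                           # looks random: no read-ahead

detector = SequentialDetector()
for blk in (10, 11, 12, 13, 500):                 # sequential run, then a jump
    prefetch = detector.on_read("vm-42", blk)
    print(blk, "->", list(prefetch)[:5], "..." if len(prefetch) > 5 else "")
```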

AutoCache cache warmup for vMotion

The other nice thing that AutoCache does is provide a cache warmup for the target ESXi server when moving VMs via vMotion. This is done by registering for the vMotion API and trapping vMotion requests. Once they detect that a VM is being moved, they send the VM's AutoCache metadata over to the target host, at which time the target system's AutoCache can start to fill its cache from the shared storage. Not a bad approach from my perspective. The amount of data that needs to be moved is minimal and you get the AutoCache code running in the target machine to start preloading blocks that were in cache on the source host. They also mentioned that once they have copied the metadata over to the target host, they can free up (invalidate) all the space in the source host's cache that was being held by the VM being moved.
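The warmup itself only needs the cache metadata, something like the following hypothetical sketch (the structures and calls are illustrative only, not VMware's or Proximal Data's real APIs):

```python
# Hypothetical sketch of cache warmup on vMotion (illustrative only, not real
# vMotion or AutoCache APIs): ship just the cached block addresses to the
# target host, which pre-fetches them from shared storage, then invalidate
# the VM's blocks in the source host's cache.
def vmotion_cache_warmup(vm_id, source_cache: dict, target_cache: dict, read_block):
    hot_addrs = [addr for (vm, addr) in source_cache if vm == vm_id]   # metadata only
    for addr in hot_addrs:
        target_cache[(vm_id, addr)] = read_block(vm_id, addr)          # warm target
    for addr in hot_addrs:
        del source_cache[(vm_id, addr)]                                # free source space

source = {("vm-7", a): b"blk" for a in range(5)}        # source host's cache contents
target = {}
vmotion_cache_warmup("vm-7", source, target, lambda vm, a: b"blk-from-shared-storage")
print(len(source), len(target))                          # 0 5
```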

Proximal Data for Hyper-V

At SFD4, Rory mentioned that a Hyper-V version of AutoCache was coming out shortly. And although they specifically indicated that write back caching was not a great idea (in contrast to Satyam and PernixData), there was a potential for them to look at implementing this as well over time.

The product is sold through resellers, distributors and OEMs.  They claim support for any flash device although they have an approved HCL.

Current pricing is $1000 for the AutoCache software to support an SSD cache of 500GB or less. From what we see in enterprise storage systems, having a cache of 2-5% of your total backend storage is about right. (But see my VM working set inflection points and SSD caching post for another side of this.) So a 500GB SSD cache should be able to support 10-25TB of backend data if all goes well.

~~~~

After the podcast on PernixData’s clustering, write-back caching software, Proximal Data didn’t seem as complex or useful. But there is a place for read-only caching. The fact that they can help warm the target Host’s cache for a vMotion is a great feature if you plan on doing a lot of movement of VMs in your shop. The fact that they have distinct support for multiple cache algorithms, understand sequential detect and have some way of telling you that you could use more SSD caching is also good in my mind.

Comments?

Photo: 20-nanometer NAND flash chip, IntelFreePress’ photostream

 

 

VM working set inflection points & SSD caching – chart-of-the-month

Attended SNW USA a couple of weeks ago and talked with Irfan Ahmad, Founder/CTO of CloudPhysics, a new Management-as-a-Service offering for VMware. He took out a chart which I found very interesting and which I reproduce below as my Chart of the Month for October.

© 2013 CloudPhysics, Inc., All Rights Reserved

Above is a plot of a typical OLTP like application’s IO activity fed into CloudPhysics’ SSD caching model. (I believe this is a read-only SSD cache although they have write-back and write-through SSD caching models as well.)

On the horizontal axis is SSD cache size in MB, ranging from 0MB to 3,500MB. On the left vertical axis is the % of application IO activity which is cache hits. On the right vertical axis is the amount of data that comes out of cache in MB, which ranges from 0MB to 18,000MB.

The IO trace was for a 24-hour period and shows how much of the application’s IO workload that could be captured and converted to (SSD) cache hits given a certain sized cache.

The other thing that would have been interesting to know is the size of the OLTP database being used by the application; it could easily be 18GB or TBs in size, but we don't see that here.

Analyzing the chart

First, in the mainframe era (we're still there, aren't we?), the rule of thumb was that doubling cache size should increase the cache hit rate by 10%.

Second, I don’t understand why at 0MB of cache the cache hit rate is ~25%. From my perspective, at 0MB of cache the hit rate should be 0%.  Seems like a bug in the model but that aside the rest of the curve is very interesting.

Somewhere around 500MB of cache there is a step function where the cache hit rate goes from ~30% to ~50%. This is probably some sort of DB index that has been moved into cache and whose accesses have now become cache hits.

As for the rule of thumb, going from 500MB to 1000MB doesn't seem to do much, maybe increasing the cache hit rate by a few percent. And doubling it again (to 2000MB) only seems to get you another percent or two of cache hits.

But moving to the 2300MB size cache gets you over 80% cache hit rate. I would have to say the rule of thumb doesn’t work well for this workload.

Not sure what the step up really represents from the OLTP workload perspective, but at 80% cache hits, most of the more frequently accessed database tables must now reside in cache. Prior to this cache size (<2300MB), all of those tables apparently just didn't fit in cache; thus, as one was being accessed and moved into cache, another was being pushed out of cache, causing a read miss the next time it was accessed. After this cache size (>=2300MB), all these frequently accessed tables can remain in cache, resulting in the ~80% cache hit rate seen on the chart.
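You can reproduce this kind of step function with a few lines of simulation. Below is a toy LRU cache model over a synthetic workload with a repeatedly scanned “hot” set of blocks; it is not CloudPhysics' model or the traced OLTP workload, just an illustration of why the hit rate stays low until the cache finally covers the hot set, then jumps.

```python
# Toy illustration of why hit-rate-vs-cache-size curves show step functions:
# an LRU cache over a synthetic workload whose hot set is re-read repeatedly.
# Generic model only, not CloudPhysics' simulation or the traced OLTP workload.
import random
from collections import OrderedDict

def lru_hit_rate(trace, cache_size):
    cache, hits = OrderedDict(), 0
    for blk in trace:
        if blk in cache:
            hits += 1
            cache.move_to_end(blk)
        else:
            cache[blk] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)          # evict least recently used
    return hits / len(trace)

# Synthetic OLTP-ish trace: 80% of reads walk a 2000-block "hot" index over and
# over, 20% hit random blocks across a much larger table space.
random.seed(1)
hot_set, next_hot, trace = 2000, 0, []
for _ in range(200_000):
    if random.random() < 0.8:
        trace.append(next_hot); next_hot = (next_hot + 1) % hot_set
    else:
        trace.append(random.randrange(10_000, 1_000_000))

for size in (500, 1000, 2000, 3000, 4000):
    print(f"cache {size:>5} blocks -> {lru_hit_rate(trace, size):5.1%} hit rate")
# Hit rate stays near zero until the cache can hold the whole re-read hot set
# (plus the cold-read churn), then it jumps to ~80% -- the same kind of step
# function seen in the chart.
```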

Irfan said that they do not display the chart in the CloudPhysics solution but rather display the inflection points. That is, their solution would say something like: at 500MB of SSD the traced application should see a ~50% cache hit rate, and at 2300MB of SSD the application should generate ~80% cache hits. This nets it out for the customer but hides the curve above and the underlying complexity.

Caching models & application working sets …

With the CloudPhysics SSD trace simulation Card (caching model) and the ongoing lightweight IO trace collection (IO tracing) available with their service, any VM's working set can be understood at this fine level of granularity. The advantage of CloudPhysics is that with these tools, one can determine the optimum-sized cache required to generate a given level of cache hits.

I would add some cautions to the above:

  • The results shown here are based on a CloudPhysics SSD caching model. Not all SSDs cache in the same way, and there can be quite a lot of sophistication in caching algorithms (having worked on a few in my time). So although this may show the hit rate for a simplistic SSD cache, it could easily under- or over-estimate real cache hit rates, perhaps by a significant amount. The only way to validate CloudPhysics' SSD simulation model is to put a physical cache in at the appropriate size and measure the VM's cache hit rate.
  • Real caching algorithms have a number of internal parameters which can impact cache hit rates. Not the least of which is the size of the IO block being cached. This can be (commonly) fixed  or (rarely) variable in length. But there are plenty of others which can adversely impact cache hit rates as well for differing workloads.
  • Real caches have a warm up period. During this time the cache is filling up with tracks which may never be referenced again. Some warm up periods take minutes while some I have seen take weeks or longer. The simulation is for 24 hours only, unclear how the hit rate would be impacted if the trace/simulation was for longer or shorter periods.
  • Caching IO activity can introduce a positive (or negative) feedback into any application's IO stream. If, without a cache, an index IO took, let's say, 10 msec to complete and now, with an appropriately sized cache, it takes 10 μseconds to complete, the application users are going to complete more transactions, faster. As this takes place, database IO activity will change from what it looked like without any caching (a rough average-latency calculation after this list shows how large that change can be). Also, even the non-cache hits should see some speedup, because the amount of IO issued to the backend storage is reduced substantially. At some point this all reaches some sort of stasis and we have an ongoing cache hit rate. But the key is that it's unlikely to be an exact match to what a trace and model would predict. The point is that adding cache to any application environment has effects which are chaotic in nature and inherently difficult to model.
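As flagged in the last bullet, a rough average-latency calculation shows how large that speedup, and hence the change in the IO stream, can be. The latency numbers are made-up but typical.

```python
# Quick illustration of the feedback effect in the last bullet: average IO
# latency as a function of cache hit rate, with made-up but typical numbers.
def avg_latency_us(hit_rate, cache_us=10.0, backend_us=10_000.0):
    # average latency = hit_rate * cache latency + (1 - hit_rate) * backend latency
    return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

for h in (0.0, 0.5, 0.8, 0.95):
    print(f"{h:.0%} hit rate -> {avg_latency_us(h):8.0f} us average IO latency")
# 0% -> 10,000 us, 80% -> ~2,008 us, 95% -> ~510 us. Users complete transactions
# faster, which changes the IO stream itself -- the feedback the bullet warns about.
```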

Nonetheless, I like what I see here. I believe it would be useful to understand a bit more about the CloudPhysics caching model's algorithm, the size of the application database being traced here, and how well their predictions actually matched up to physical caches at the sizes recommended.

… the bottom line

Given what I know about caching in the real world, my suggestion is to take the cache sizes recommended here as a bottom end estimate and the cache hit predictions as a top end estimate of what could be obtained with real SSD caches.  I would increase the cache size recommendations somewhat and expect something less than the cache hits they predicted.

In any case, having application (even VM) IO traces like this that can be accessed and used to drive caching simulation models should be a great boon to storage developers everywhere. I can only hope that server side SSD and caching storage vendors supply their own proprietary cache model cards that can be used with CloudPhysics Cards, so that potential customers could use their application traces with the vendor cards to predict what their hardware can do for an application.

If you want to learn more about block storage performance from SMB to enterprise-class SAN storage systems, please check out our SAN Buying Guide, available for purchase on our website. Also, we report each month on storage performance results from SPC, SPECsfs, and ESRP in our free newsletter. If you would like to subscribe, please use the signup form above right.

~~~~

Comments?

Image:  Chart courtesy of and use approved by CloudPhysics