Hyperloop One in Colorado?

I read a couple of articles last week (TechCrunch, Ars Technica & Denver Post) about Colorado becoming a winner in the Hyperloop One Global Challenge. The Colorado Department of Transportation (CDOT) has joined with Hyperloop One to commission a study of Hyperloop transportation across the Front Range, from Cheyenne, WY to Pueblo, CO.

There’s been talk forever about adding a passenger train in Colorado from Fort Collins to Pueblo, but every time it’s studied the economics don’t work. How’s this different?

Transportation and the Queen City of the Plains

Transportation has always been important to Denver. It was the Denver Pacific Railroad, running from Denver to Cheyenne, that first linked Denver to the rest of the nation. But even before that there was a stagecoach line (the Leavenworth & Pike’s Peak Express) that ran through Denver to reduce travel time. Denver is currently the largest city within 500 miles and, after Phoenix, the second most populous city in the Mountain West.

Denver International Airport is a major hub and the sixth busiest airport in the US. Denver is a crossroads for major north-south and east-west highways through the Mountain West. Both the BNSF and Union Pacific railroads serve Denver, and Denver is one of the major stops on the Amtrak passenger train from San Francisco to Chicago.

Why Hyperloop?

Hyperloop can provide much faster travel, faster even than airplanes. Hyperloop pods can go up to 760 mph (1200 km/h) and should average about 600 mph (970 km/h) from point to point.

Further, it could potentially require less security. Hyperloop can go above or below ground, but in either case a terrorist act shouldn’t be as harmful as one on a plane that’s traveling at 20,000 to 30,000 feet in the air.

And because it can go above or below ground, it could potentially make use of current transportation right-of-way corridors for building its tubes. Although to go west, it’s going to need a new tunnel or two through the mountains.

Stops along the way

The proposed Hyperloop route runs through Greeley and as far west as Vail, for a total of 360 miles. Between Cheyenne and Pueblo, and west of that line, there are about 11 urban centers (Cheyenne, Fort Collins, Greeley, Longmont-Boulder, Denver, the Denver Tech Center [DTC], the West [Denver] metro area, Silverthorne/Dillon, Vail, Colorado Springs and Pueblo).

Cheyenne and Pueblo are 213 miles apart, a ~3.5 hr drive, with Denver at about the halfway point. With Hyperloop, Denver to either endpoint should take ~10 minutes without stops, and the total trip from Cheyenne to Pueblo should be ~21 minutes.
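
As a rough back-of-the-envelope check on those times, here’s a minimal sketch assuming the 600 mph average quoted above and ignoring stops, acceleration and deceleration; the ~100/113 mile split around Denver is my approximation, not a published figure.

    # Back-of-the-envelope Hyperloop trip times.
    # Assumes a 600 mph average; ignores stops and accel/decel.
    AVG_SPEED_MPH = 600

    def trip_minutes(distance_miles, avg_speed_mph=AVG_SPEED_MPH):
        return distance_miles / avg_speed_mph * 60

    print(f"Denver to Cheyenne (~100 mi): {trip_minutes(100):.0f} min")
    print(f"Denver to Pueblo   (~113 mi): {trip_minutes(113):.0f} min")
    print(f"Cheyenne to Pueblo  (213 mi): {trip_minutes(213):.0f} min")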

Yes, but is there any demand?

I would think the way to get a handle on any potential market is to examine airline traffic between these cities. Airplanes can travel at close to these speeds and the costs are public.

But today there’s not much airline traffic between Cheyenne, Denver and Pueblo. Flights to Vail are mostly seasonal. I could only find one flight from Denver to Cheyenne over a week, one flight between Cheyenne and Pueblo, and 16 flights between Denver and Pueblo. The airplanes used on these routes hold only 9 passengers, so that amounts to a maximum of about 162 air travelers a week (18 flights × 9 seats).

The other approach to estimating potential passengers is to use highway traffic between these destinations. Yes, the interstate (I-25) from Cheyenne through Denver to Pueblo is constantly busy and needs another lane or two in each direction to handle peak travel. And travel to Vail is very busy on weekends. But how many of these people would be willing to forego a car and travel by Hyperloop?

I travel on toll roads to get to the Denver airport and it’s a lot faster than traveling non-toll highways. But the cost for me is a business expense and the trip isn’t that frequent. These days there’s not much traffic on my toll road corridor, and even at rush hour there are very few times where one has to slow down. Yet there are plenty of people coming to the airport each day from the northwest and southeast Denver suburbs who could use these toll roads but don’t.

And what can you do in Pueblo, Cheyenne, or Denver for that matter, without a car? It depends on where you end up. The current stops in Denver include Denver International Airport, DTC, and the West Metro area (Golden?). Denver, Golden, Boulder, Vail, Greeley and Fort Collins all have compact downtowns with decent transportation. But for the rest of the stops along the way, you will probably want access to a car to get anywhere. There’s always Uber and Lyft, and worst case, renting a car.

So maybe Hyperloop would compete for all the air travel and some portion of the car travel along the Cheyenne-Denver-Pueblo corridor. That market just may not be large enough.

Other alternative routes

Why stop at Cheyenne, what about Jackson, WY or Billings, MT? And why stop at Pueblo, what about Santa Fe and Albuquerque, NM? You could conceivably go down to Brownsville, TX and extend up to Calgary and Edmonton in Alberta, Canada, if it made sense. I suppose it’s a question of how many people for what distance.

I would think that going east-west would be more profitable. Say Kansas City to Salt Lake City with Denver in between. With this corridor: 1) the distances are longer (Kansas City to Salt Lake City is 910 mi [~1465 km]); 2) the metropolitan areas are much larger; and 3) air travel between them is more popular.

There are currently 10 winners of Hyperloop One’s Global Challenge contest. The other US routes include Texas (Dallas, Houston & San Antonio), Florida (Miami to Orlando), and the Midwest (Chicago, IL to Columbus, OH to Pittsburgh, PA). There are also winners in Canada and Mexico in North America and more in Europe and India.

Hyperloop One will “commit meaningful business and engineering resources and work closely with each of the winning teams/routes to determine their commercial viability.” All this means that each of the winners will be examined professionally to see if it makes economic sense.

Of the 10 winners, Colorado’s route has the least population, by almost a factor of two. Not sure why we are even in contention, but maybe it’s the ease of building the tubes that makes us a good candidate.

In any case, the public-private partnership has begun to work on the feasibility study.

Comments?

Photo Credit(s): 7 hyperloop facts Elon Musk would love us to know, Detechter

Take a ride on Hyperloop…, Daily Mail

@hyperloop

Mesosphere, Kubernetes and the coming container orchestration consensus

I read a story this past week in TechCrunch, Mesosphere adds Kubernetes support, about how Mesosphere, which has its own container orchestration software (called Marathon), will now support Google Kubernetes clusters and container orchestration services.

Mesosphere uses their own DC/OS (data center/operating system) to provide service discovery, resource management and networking for container cluster deployments across multiple machines.

DC/OS sounds similar to Kubo, discussed in last week’s post, VMworld2017 forecast, cloudy with high chance of containers, although Kubo is an open source development led by Pivotal to run Kubernetes clusters.

Kubernetes (and Docker) wins

This is indicative of the impact Kubernetes cluster operations is having on the container space. For now, the only holdout in container orchestration without Kubernetes is Docker, with their Docker Swarm engine.

Why add Kubernetes when Mesosphere already had a great container cluster orchestration service? It seems that as the container market matures, more and more applications are being developed for Kubernetes clusters rather than for other container orchestration software.
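
To give a sense of what “developing for Kubernetes clusters” looks like, here’s a minimal sketch using the official Kubernetes Python client to declare a small Deployment. The app name, image tag, namespace and replica count are illustrative assumptions, not anything from the TechCrunch story or Mesosphere’s docs.

    # Minimal Kubernetes Deployment via the official Python client.
    # App name, image, namespace and replica count are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()          # use local kubeconfig credentials
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="nginx-demo"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "nginx-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "nginx-demo"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="nginx",
                        image="nginx:1.15",
                        ports=[client.V1ContainerPort(container_port=80)],
                    ),
                ]),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)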

Although Mesosphere is the current leader in container orchestration both in containers run and revenue (according to their CEO), the move to Kubernetes clusters is likely to accelerate their market adoption/revenues and ultimately help keep them in the lead.

Marathon still lives on

It turns out that Marathon also orchestrates non-container application deployments.

Marathon can also support stateful apps, like databases with persistent storage (unlike stateless Docker containers). These are closer to typical enterprise applications, which is probably why Mesosphere has done so well up to now.

Marathon also supports both Docker and Mesos containers. Mesos containers depend on Apache Mesos, a specially developed distributed systems kernel, based on Linux, for containers.
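
For comparison, a Marathon app is described by a JSON document posted to Marathon’s /v2/apps REST endpoint. The sketch below is a rough illustration of launching a Docker container that way; the host name, port, app id and image are my assumptions, not taken from Mesosphere documentation.

    # Rough sketch: launching a Docker container through Marathon's REST API.
    # Host, port, app id and image are illustrative assumptions.
    import requests

    app = {
        "id": "/demo/nginx",
        "cpus": 0.5,
        "mem": 256,
        "instances": 2,
        "container": {
            "type": "DOCKER",       # Marathon also supports a MESOS container type
            "docker": {
                "image": "nginx:1.15",
                "network": "BRIDGE",
                "portMappings": [{"containerPort": 80, "hostPort": 0}],
            },
        },
    }

    resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
    resp.raise_for_status()
    print(resp.json())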

So Mesosphere will continue to fund development and support for Marathon, even while it rolls out Kubernetes. This will allow them to continue to support their customer base and move them forward into the Kubernetes age.

~~~~

I see an eventual need for both stateless and stateful apps in the enterprise data center. And that might just be Mesosphere’s key value proposition – the ability to support the apps of the future (stateless containers) and the apps of today (stateful) within the same DC/OS.

Picture credit(s): Enormous container ship by Ruth Hartnup

TPU and hardware vs. software innovation (round 3)

At the Google I/O conference this week, Google revealed (see Google supercharges machine learning tasks …) that it has been designing and operating its own processor chips in order to optimize machine learning.

They call the new chip a Tensor Processing Unit (TPU). According to Google, the TPU provides an order of magnitude better power efficiency for machine learning than what’s achievable with off-the-shelf GPUs/CPUs. TensorFlow is Google’s open-source machine learning software.
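
For context, TensorFlow models are built from dense tensor operations like the matrix multiply below, which is exactly the kind of arithmetic a TPU is designed to accelerate. This is a minimal sketch using the TensorFlow 1.x API of that era; the matrix sizes are arbitrary assumptions.

    # Minimal TensorFlow 1.x example: a dense matrix multiply, the kind of
    # operation TPUs (and GPUs) accelerate. Matrix sizes are arbitrary.
    import tensorflow as tf

    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    c = tf.matmul(a, b)                 # the compute-heavy op

    with tf.Session() as sess:
        result = sess.run(c)
        print(result.shape)             # (1024, 1024)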

This is very interesting, as Google and the rest of the hyperscale hive seem to have latched onto open-source software and commodity hardware for all their innovation. This has led the industry to believe that hardware customization/innovation is dead and the only thing anyone needs is software developers. I believe this is incorrect and that hardware innovation combined with software innovation is a better way (see the Commodity hardware always loses and Better storage through hardware posts).

Learning to live with lattices or say goodbye to security

safe ‘n green by Robert S. Donovan (cc) (from flickr)

I read an article the other day in Quanta Magazine, A tricky path to quantum encryption, about the problems that will occur in current public key cryptography (PKC) schemes when quantum computing emerges over the next five to 30 years. With advances in quantum computing, our current PKC schemes, which depend on the difficulty of factoring large numbers, will be readily crackable. At that point, all current encrypted traffic, used by banks, the NSA, the internet, etc., will no longer be secure.

NSA, NIST, & ETSI looking at the problem

So there’s a search on for quantum-resistant cryptography (see this release from ETSI [European Telecommunications Standards Institute], this presentation from NIST [the {USA} National Institute of Standards & Technology], and this report from Schneier on Security on the NSA’s [{USA} National Security Agency] plans for a post-quantum world). There are a number of alternatives being examined by all these groups, but the most promising at the moment depends on multi-dimensional (100s of dimensions) mathematical lattices.

Lattices?

According to Wikipedia, a lattice is a regular grid of equidistant points generated by integer combinations of a set of basis vectors – picture a 3-dimensional grid of points, generalized to any number of dimensions. For security reasons, the lattices used in cryptography have to have far more than 3 dimensions.

A secret is somehow inscribed in a route (vector) through this 500-dimensional lattice between two points: an original point (the public key) in the lattice and another arbitrary point somewhere nearby. The problem, in a cryptographic sense, is that finding that route in a 500-dimensional lattice is a very hard task when you only have one of the points.
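
To make the intuition concrete, here’s a toy sketch of my own (not any real lattice PKC scheme) that brute-forces the closest lattice point to a target in just 2 dimensions. The search space grows exponentially with the number of dimensions, which is why the same brute force is hopeless against a lattice with hundreds of dimensions.

    # Toy illustration: brute-force the closest lattice point in 2 dimensions.
    # Real lattice cryptography uses hundreds of dimensions, where this
    # exhaustive search (21**d candidates) is completely infeasible.
    import itertools
    import numpy as np

    B = np.array([[3.0, 1.0],      # rows are the lattice basis vectors
                  [1.0, 2.0]])
    target = np.array([7.3, 5.1])  # an arbitrary nearby (non-lattice) point

    best = None
    for coeffs in itertools.product(range(-10, 11), repeat=2):
        point = np.array(coeffs) @ B           # integer combination of basis vectors
        dist = np.linalg.norm(point - target)
        if best is None or dist < best[0]:
            best = (dist, coeffs, point)

    print("closest lattice point:", best[2], "from coefficients:", best[1])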

But is it efficient enough for today’s digital computers to use?

So the various security groups have been working on devising efficient algorithms for multi-dimensional lattice public key encryption over the past decade or so. But they have run into a problem.

Originally, the (public) keys for a 500-dimensional lattice PKC were on the order of MBs, so researchers restricted the lattice computations to use smaller keys, in effect reducing the complexity of the underlying lattice. But in the process they reduced the security of the lattice PKC scheme. So they are having to go back to longer keys and more complex lattices, while trying to ascertain which approach leaves communications secure yet is efficient enough to implement on today’s digital computers and communications links.

Quantum computing

The problem is that quantum computers provide a much faster way to perform certain calculations, like factoring a large number. For factoring, a quantum computer (running Shor’s algorithm) is exponentially faster than today’s digital computers, which is what breaks current PKC; for more generic search problems, quantum computing (Grover’s algorithm) offers roughly a square-root speedup.

It’s possible that quantum computing could also speed up the search for lattice routes between points by that square-root factor. So even when we all move to lattice-based PKC, it may still be possible for quantum computers to crack the code; hopefully it will just take much longer.
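
As a rough illustration of what a square-root (quadratic) speedup means in practice, consider a brute-force search over a 128-bit key space (the 128-bit figure is just an assumed example):

    # What a quadratic (square-root) speedup buys an attacker, roughly.
    # The 128-bit search space is an assumed example, not from the article.
    classical_steps = 2 ** 128                    # brute-force search
    quantum_steps = int(classical_steps ** 0.5)   # ~2**64 with a Grover-style search

    print(f"classical: ~{classical_steps:.2e} steps")
    print(f"quantum:   ~{quantum_steps:.2e} steps")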

So the mathematics behind PKC will need to change over the next 5 years or so as quantum computing becomes more of a reality. The hope is that this change will keep our communications secure, at least until the next revolution in computing comes along, or quantum computing becomes even faster than envisioned today.

Comments?

EMCWorld2015 day 1 news

We are at EMCWorld2015 in Vegas this week. Day 1 was great, with announcements of the new XtremIO 4.0 (“The Beast”), new enhanced Data Protection, and a new VCE VxRACK converged infrastructure solution. Somewhere in all the hoopla I saw an all-flash VNXe appliance and a VMAX3 with a cloud storage tier, but these seemed to be just teasers.

XtremIO 4.0

The new hardware provides 40TB per X-Brick, and with compression/dedupe, the new 8-X-Brick cluster provides 320TB raw or 1.9PB effective capacity. As XtremIO supports 150K mixed IOPS per X-Brick, an 8-X-Brick cluster can do 1.2M mixed IOPS, or, at 250K read IOPS per X-Brick, 2.0M read IOPS.
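
A quick sketch of how those cluster numbers fall out of the per-X-Brick figures; note the ~6:1 data reduction ratio is simply implied by 1.9PB effective over 320TB raw, not a number EMC quoted.

    # XtremIO 4.0 cluster math from the per-X-Brick figures above.
    # The ~6:1 data reduction ratio is implied (1.9PB / 320TB), not EMC-quoted.
    bricks = 8
    raw_tb_per_brick = 40
    mixed_iops_per_brick = 150_000
    read_iops_per_brick = 250_000
    data_reduction = 5.9

    raw_tb = bricks * raw_tb_per_brick
    print(f"raw: {raw_tb} TB, effective: ~{raw_tb * data_reduction / 1000:.1f} PB")
    print(f"mixed IOPS: {bricks * mixed_iops_per_brick:,}, "
          f"read IOPS: {bricks * read_iops_per_brick:,}")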

XtremIO 4.0 now also includes RecoverPoint integration. (I assume this means they have integrated the write splitter directly into XtremIO, so you don’t need the host-based or switch-based version of the write splitter.)

The other thing XtremIO 4.0 introduces is non-disruptive upgrades. This means that they can expand or contract the cluster without taking down IO activity.

There was also some mention of better application consistent snapshots, which I suspect means Microsoft VSS integration.

XtremIO 4.0 is a free software upgrade, so the ability to scale up to 8 X-Bricks, non-disruptive cluster changes, and RecoverPoint integration can all be added to current XtremIO systems.

Data Protection

EMC introduced a new top-end DataDomain hardware appliance, the DataDomain 9500, which has 1.5X the performance (58.7TB/hr) and 4X the capacity (1.7PB) of their nearest competitor’s solution.

They also added a new software feature (from the Maginatics acquisition) called CloudBoost™. CloudBoost allows NetWorker and Avamar to back up to cloud storage. EMC also added Microsoft Office 365 cloud backup to Spanning’s previous Google Apps and Salesforce cloud backups.

VMAX3 ProtectPoint was also enhanced to provide native backup for Oracle, Microsoft SQL Server, and IBM DB2 application environments. ProtectPoint offers a direct path between VMAX3 and DataDomain appliances and can speed up backup performance by 20X.

EMC also announced Project Falcon, which is a virtual appliance version of the DataDomain software.

VCE VxRACK

This is a rack-sized stack of VSPEX Blue appliances (a VMware EVO:RAIL solution) with new software that brings VCE usability and data center scale services to a hyper-converged solution. Each appliance is a 2U rack-mounted compute-intensive or storage-intensive unit. The Blue appliances are configured in a rack for VxRACK, and with version 1 you can choose your own hypervisor, VMware or KVM. Version 2 will come out later this year and will be based on a complete VMware stack known as EVO: RACK.

Storage services are supplied by EMC ScaleIO. You can purchase a 1/4 rack, 1/2 rack or full rack, which includes top-of-rack networking. You can also scale out by adding more full racks to the system. EMC said that it can technically support 1000s of racks of VSPEX Blue appliances, for up to ~38PB of storage.

The significant thing is that the VCE VxRACK supplies the VCE customer experience in a hyper-converged solution. However, the focus for VxRACK is tier 2 applications that don’t need the extremely high availability, low response times and high performance of the tier 1 applications that run on VBLOCK solutions (with VNX, VMAX or XtremIO storage).

VMAX3

They had a 5th grader provision a VMAX3 gold storage LUN and convert it to a diamond storage LUN in 20.48 seconds. It seemed pretty simple to me, but the kid blazed through the screens a bit fast for me to see what was going on. It wasn’t nearly as complex as it used to be.

VMAX3 also introduces CloudArray™, which uses FAST.X storage tiering to cloud storage (using onboard TwinStrata software). This could be used as tier 3 or 4 storage. EMC also mentioned that you can put an XtremIO (maybe a single X-Brick) behind a VMAX3 storage system. VMAX3’s software rewrite has separated data services from backend storage, and one can see EMC rolling out different backend storage (like cloud storage or XtremIO) in future offerings.

Other Notes

There was a lot of discussion about the “Information Generation,” a new customer for IT services. This is tied to the 3rd platform transformation that’s happening in the industry today. To address this new world, IT needs five attributes:

  1. Predictively spot new opportunities for services/products
  2. Deliver a personalized experience
  3. Innovate in an agile way
  4. Develop trusted programs/apps and demonstrate transparency & trust
  5. Operate in real time

David Goulden talked a lot about what this all means and I encourage you to take a look at the video stream to learn more.

Speaking of video, last year was the first year there were more online viewers of EMCWorld than actual participants. So this year EMC upped their game with more entertainment value. The opening dance sequence was pretty impressive.

A lot of talk today was about the 3rd platform and the transition from the 2nd platform. EMC says their new products are Platform 2.5, enablers for the 3rd platform. I asked what the 3rd platform storage environment looks like, and they said a scale-out (read: ScaleIO), converged storage environment with flash for metadata/indexing.

As the 3rd platform transforms IT there will be some customers that will want to own the infrastructure, some that will want to use service providers and some that will use public cloud services. EMC’s hope is to capture those customers that want to own it or use service providers.

Tomorrow the focus will be on the Federation with Pivotal and VMware being up for keynotes and other sessions. Stay tuned.

Interesting sessions at SNIA DSI Conference 2015

I attended the SNIA Data Storage Innovation (DSI) Conference in Santa Clara, CA last week and ran into a number of old friends and met a few new ones. While attending the conference, there were a few sessions that seemed to bring the conference to life for me.

Microsoft Software Defined Storage Journey

Jose Barreto, Principal Program Manager at Microsoft, spent a little time on what’s currently shipping with Scale-Out File Server, Storage Spaces and other storage components of Windows software defined storage solutions. Essentially, what Microsoft is learning from Azure cloud deployments is slowly but surely being implemented in Windows Server software and other solutions.

Microsoft’s vision is that customers can have their own private cloud storage with partner storage systems (SAN & NAS), with Microsoft SDS (Scale-Out File Server with Storage Spaces), with hybrid cloud storage (StorSimple with Azure storage) and with public cloud storage (Azure storage).

Jose also mentioned other recent innovations like the Cloud Platform System, using Microsoft software, Dell compute, Force10 networking and JBOD (PowerVault MD3060e) storage in a rack.

Some recent Microsoft SDS innovations include:

  • HDD and SSD storage tiering;
  • Shared volume storage;
  • System Center volume and unified storage management;
  • PowerShell integration;
  • Multi-layer redundancy across nodes, disk enclosures, and disk devices; and
  • Independent scale-out of compute or storage.

Probably a few more I’m missing here but these will suffice.

Then, Jose broke some news on what’s coming next in Windows Server storage offerings:

  • Quality of service (QoS) – Windows Server provides QoS capabilities that allow one to limit IO activity and can be used to specify min and max IOPS or latency at a VM or VHD level. The scale-out storage service will balance IO activity across the cluster to meet this QoS specification. Apparently the balancing algorithm came from Microsoft Research, but Jose didn’t go into great detail on what it does differently other than being “fairer” in applying QoS constraints.
  • Rolling upgrades – Windows Server now supports a cluster running different versions of software. Now one can take a cluster node down and update its software and re-activate it into the same cluster. Previously, code upgrades had to take a whole cluster down at a time.
  • Synchronous replication – Windows Server now supports synchronous Storage Replica at the volume level. Previously, Storage Replica was limited to asynchronous replication.
  • Higher VM storage resiliency – Windows will now pause a VM rather than terminate it during transient storage interruptions. This allows VMs to sustain operations across transient outages. VMs are in PausedCritical state until the storage comes back and then they are restarted automatically.
  • Shared-nothing Storage Spaces – Windows Storage Spaces can be configured across cluster nodes without shared storage. Previously, Storage Spaces required shared JBOD storage between cluster nodes. This feature removes that configuration constraint and allows JBOD storage to be accessible from only a single node.

Jose did not say what this “Vnext” release will be called and didn’t provide a specific time frame, other than that it’s coming out shortly.

Archival Disc Technology

Yasumori Hino from Panasonic and Jun Nakano from Sony presented information on a brand new removable media technology for cold storage. Previous to their session there was another one from HDS Federal Corporation on their Blu-ray jukebox, but Yasumori’s and Jun’s session was more noteworthy. The new Archival Disc is the next iteration in optical storage beyond Blu-ray and is targeted at long-term archive or “cold” storage.

As a prelude to the Archival Disc discussion, Yasumori played a CD pressed in 1982 (Billy Joel’s 52nd Street album) on his current-generation laptop to show the inherent backward compatibility of optical disc technology.

In 1980, IBM 3380 disk drives were refrigerator-sized, multi-$10K devices that held 2.3GB. As far as I know there aren’t any of these still in operation. And IBM/STK tape was reel-to-reel and took up a whole rack. There may be a few of these devices still operating these days, but not many. I still have a CD collection (but then I am a GreyBeard 🙂) that I still listen to occasionally.

The new Archival Disc includes:

  • Media more resilient to high humidity, high temperature, salt water, EMP and other magnetic disturbances. As proof, a Blu-ray disc was submerged in sea water for 5 weeks and was still readable. Data on Blu-ray and the new Archival Disc is recorded without using electromagnetics, in a very stable oxide recording material layer. They project that the new Archival Disc has a media life of 50 years at 50C and 1000 years at 25C under high humidity conditions.
  • Dual-sided, triple-layered media that uses land-and-groove recording to provide 300GB of data storage. Blu-ray also uses a land-and-groove disc format but only records on the land portion of the disc. Track pitch for Blu-ray is 320nm, whereas for the Archival Disc it’s only 225nm.
  • Data transfer speeds of 90MB/sec with two optical heads, one per side. Each head can read/write data at 45MB/sec. They project doubling or quadrupling this data transfer rate by using more pairs of optical heads.

They also presented a roadmap for a 2nd generation 500GB and a 3rd generation 1TB Archival Disc using higher linear density and better signal processing technology.

Cold storage is starting to get more interest these days, what with all the power consumption going into today’s data centers and the never-ending data tsunami. Archival Disc and Blu-ray optical storage consume no power at rest and only consume power when mounting/dismounting and reading/writing/spinning. Also, given optical discs’ imperviousness to high temperature and humidity, optical storage could be stored outside of air-conditioned data centers.

The Storage Revolution

The final presentation of interest to me was by Andrea Nelson from Intel. Intel has lately been focusing on helping partners and vendors provide more effective storage offerings. These aren’t storage solutions but rather storage hardware, components and software developed in collaboration with storage vendors and partners that make it easier for them to offer storage solutions using Intel hardware. One example of this collaboration is IBM’s hardware-assisted Real-time Compression, available on the new V7000 and FlashSystem V9000 storage hardware.

As the world turns to software defined storage, Intel wants those solutions to make use of their hardware. (Although, at the show I heard from another new SDS vendor that was planning to use x86 as well as ARM servers.)

Intel has:

  • QuickAssist Acceleration technology – such as hardware assist data compression,
  • Storage Acceleration software libraries – open source erasure coding and other low-level compute intensive functions, and
  • Cache Acceleration software – uses Intel SSDs as a data cache for storage applications.

There wasn’t as much of a technical description of these capabilities as in other DSI sessions, but with the industry moving more and more to SDS, Intel’s got a vested interest in seeing it implemented on their hardware.

~~~~

That’s about it. I sat in on quite a number of other sessions, but nothing else stuck out as significant or interesting to me as these three sessions.

Comments?

Top Ten RayOnStorage Posts for 2012

Here are the top 10 blog posts for 2012 from RayOnStorage.com

1. Snow Leopard to Mountain Lion

We discuss our Mac OSX transition from Snow Leopard to Mountain Lion with the good, bad and ugly of Mountain Lion from a novice user’s perspective.

2. vSphere 5.1 storage enhancements and future vision

We detail some of the storage enhancements and directions in the latest revision of VMware vSphere, 5.1.

3.  Object Storage Summit wrap up

We discuss last month’s ExecEvent Object Storage Summit and some of the use cases driving customers to adopt object storage for their data centers.

4. EMCWorld2012 part 1 – VNX/VNXe

We analyze the first day of EMCWorld2012 focused on EMC’s VNX/VNXe product enhancements.

5. Dell Storage Forum 2012 – day 2

We discuss the new Compellent and FluidFS systems coming out of Dell Storage Forum and their latest RNA Networks acquisition with a coherent Flash Cache network.

6. EMC buys ExtremeIO

Right before EMCWorld2012, EMC announced their purchase of ExtremeIO, which had been rumored for some time and signaled a new path to flash-only SAN storage systems.

7. HDS Influencer Summit wrap up

HDS held their Influencer Summit last month and rolled out their executive team to talk about their storage and service directions and successes.

8. Oracle finally releases StorageTek VSM6

Well, after much delay, we finally get to see the latest generation Virtual Storage Manager 6 (VSM6) for the mainframe System z marketplace.

9. Coraid, first thoughts

We got to meet with Coraid as part of a Storage TechField Day event and we came away impressed but still wanting to learn more.

10. Latest SPC-1 results IOPS vs. drive counts – chart of the month

Every month (or so) we do a more detailed analysis of a chart that appears in our free monthly newsletter. This one was done earlier in the year and documented the correlation between IOPS and drive counts in SPC-1 results.

Happy New Year.

Oracle (finally) releases StorageTek VSM6

[Full disclosure: I helped develop the underlying hardware for VSM 1-3 and also way back, worked on HSC for StorageTek libraries.]

Virtual Storage Manager System 6 (VSM6) is here. I’m not exactly sure when VSM5 or VSM5E were released, but it seems like an awfully long time ago in Internet years. The new VSM6 migrates the platform to Solaris software and hardware while expanding capacity and improving performance.

What’s VSM?

Oracle StorageTek VSM is a virtual tape system for mainframe, System z environments. It provides a multi-tiered storage system that includes both physical disk and (optional) tape storage for the long-term big data requirements of z/OS applications.

VSM6 emulates up to 256 virtual IBM tape transports but actually moves data to and from VSM Virtual Tape Storage Subsystem (VTSS) disk storage and backend real tape transports housed in automated tape libraries.  As VSM data ages, it can be migrated out to physical tape such as a StorageTek SL8500 Modular [Tape] Library system that is attached behind the VSM6 VTSS or system controller.

VSM6 offers a number of replication solutions for DR to keep data in multiple sites in synch and to copy data to offsite locations.  In addition, real tape channel extension can be used to extend the VSM storage to span onsite and offsite repositories.

One can cluster together up to 256 VSM VTSSs into a tapeplex, which is then managed under one pane of glass as a single large data repository using HSC software.

What’s new with VSM6?

The new VSM6 hardware increases volatile cache to 128GB, from 32GB in VSM5. Non-volatile cache goes up as well, now supporting up to ~440MB, up from 256MB in the previous version. Power, cooling and weight all seem to have gone up as well (the wrong direction??) vis-à-vis VSM5.

The new VSM6 removes the ESCON option of previous generations and moves to 8 FICON and 8 GbE Virtual Library Extension (VLE) links. FICON channels are used for both host access (frontend) and real tape drive access (backend).  VLE was introduced in VSM5 and offers a ZFS based commodity disk tier behind the VSM VTSS for storing data that requires longer residency on disk.  Also, VSM supports a tapeless or disk-only solution for high performance requirements.

System capacity moves from 90TB (gosh, that was a while ago) to up to 1.2PB of data. I believe much of this comes from supporting the new T10000C tape cartridge and drive (5TB uncompressed). With the ability to cluster more VSM systems into the tapeplex, system capacity can now reach over 300PB.

Somewhere along the way, VSM started supporting triple redundancy for the VTSS disk storage, which provides better availability than RAID 6. Not sure why they thought this was important, but it does deal with increasing disk failure rates.

Oracle stated that VSM6 supports up to 1.5GB/sec of throughput. Presumably this is landing data on disk or transferring data to backend tape, but not both. There doesn’t appear to be any standard benchmark for these sorts of systems, so we’ll take their word for it.

Why would anyone want one?

Well, it turns out plenty of mainframe systems use tape for a number of things, such as data backup, HSM, and big data batch applications. Once you get past the sunk costs for tape transports, automation, cartridges and VSMs, VSM storage can be a pretty competitive data storage solution for the mainframe environment.

The fact that most mainframe environments grew up with tape and have long ago invested in transports, automation and new cartridges probably makes VSM6 an even better buy.  But tape is also making a comeback in open systems with LTO-5 and now LTO-6 coming out and with Oracle’s 5TB T10000C cartridge and IBM’s 4TB 3592 JC cartridge.

Not to mention Linear Tape File System (LTFS) as a new tape format that provides a file system for tape data which has brought renewed interest in all sorts of tape storage applications.

Competition not standing still

EMC introduced their Disk Library for Mainframe 6000 (DLm6000) product, which supports two different backends to deal with the diversity of tape use in the mainframe environment. Moreover, IBM has continuously enhanced their TS7700 Virtual Tape Server, but I would have to say it doesn’t come close to these capacities.

Lately, when I talk with long-time StorageTek mainframe tape customers, they have all said the same thing: when is VSM6 coming out, and when will Oracle get its act in gear and start supporting us again? Hopefully this release signals a new emphasis on this market. Although who is losing and who is winning in the mainframe tape market is the subject of much debate, there is no doubt that the lack of any update to VSM has hurt Oracle’s StorageTek tape business.

Something tells me that Oracle may have fixed this problem.  We hope that we start to see some more timely VSM enhancements in the future, for their sake and especially for their customers.

~~~~

Comments?

~~~~

Image credit: Interior of StorageTek tape library at NERSC (2) by Derrick Coetzee