Transporter, a private Dropbox in a tower

Move over Dropbox and all you synch&share wannabes, there’s a new synch and share in town.

At SFD7 last month, we visited with Connected Data, where CEO Geoff Barrall told us all about what was wrong with today’s cloud storage solutions. In front of all the participants was this strange, blue-glowing device. As it turns out, Connected Data’s main product is the File Transporter, which is a private file synch and share solution.

All the participants were given a new, 1TB Transporter system to take home. It was an interesting sight to see a dozen of these Transporter towers sitting in front of all the bloggers.

I quickly established a new account, installed the software, and activated the client service. I must admit, I took it upon myself to “claim” just about all of the Transporter towers while the other bloggers were still paying attention to the presentation. Sigh, they later made me give back (unclaim) all but mine, but for a minute there I had about 10TB of synch and share space at my disposal.

Transporters rule

So what is it? The Transporter is both a device and an Internet service, where you own the storage and networking hardware.

The home-office version comes with a 1 or 2TB 2.5” hard drive, in a tower configuration that plugs into a base module. The base module runs a secured version of Linux and their synch and share control software.

As the tower powers on, it connects to the Internet and invokes the Transporter control service. This service identifies the node and who owns it, and provides access to the storage on the Transporter to all desktops, laptops, and mobile applications that have access to it.

When the client service is started on a desktop/laptop, it creates (by default) a new Transporter directory (folder). Files that are placed in this directory are automatically synched to the Transporter tower and then synchronized to any and all online client devices that have claimed the tower.

Apparently you can have multiple towers claimed to the same account. I personally tested up to 10, and it didn’t appear as if there was any substantive limit beyond that, but I’m sure there’s some maximum count somewhere.

A couple of nice things about the tower: it’s yours, so you can move it to any location you want. That means you could take it with you to your hotel or other remote offices and have a local synch point.

Also, initial synchronization can take place over your local network, so it can occur as fast as your LAN can handle it. I remember the first time I up-synched 40GB to Dropbox; it seemed to take weeks to complete, and the down-synch to my laptop took less time but still days. With the tower on my local network, I can synch my data much faster and then take the tower with me to my other office location and have a local synch datastore. (I may have to start taking mine to conferences. Howard (@deepstorage.net, co-host on our GreyBeards on Storage podcast) had his operating in all the subsequent SFD7 sessions.)

The Transporter also allows sharing of data. Steve immediately started sharing all the presentations on his Transporter service so the bloggers could access the data in real time.

They call the Transporter a private cloud but in my view, it’s more a private synch and share service.

Transporter heritage

The Transporter people were all familiar to the SFD crowd as they were formerly with Drobo, which was at a previous SFD session (see SFD1). And like Drobo, you can install any 2.5″ disk drive in your Transporter and it will work.

There are workgroup and business class versions of the Transporter storage system. The workgroup versions are desktop configurations (looking very much like a Drobo box) that hold up to 8TB or 12TB and support 15 or 30 users, respectively. They also have two business class, rack-mounted appliances that hold up to 12TB or 24TB each and support 75 or 150 users, respectively. The business class solution has onboard SSDs for meta-data acceleration. Similar to the Transporter tower, the workgroup and business class appliances are bring-your-own-disk-drive systems.

Connected Data’s presentation

Geoff’s discussion (see SFD7 video) was a tour of the cloud storage business model. His view was that most of these companies are losing money. In fact, even Amazon S3/Glacier appears to be bleeding money, although this may not stop Amazon. Of course, Dropbox and other synch and share services all depend on cloud storage for their datastores. So, the lack of a viable, profitable business model threatens all of these services in the long run.

But the business model is different when a customer owns the storage. Here the customer bears the actual storage cost. The only thing that Connected Data provides is the client software and the Internet service that runs it. Pricing for the 1TB and 2TB Transporters with disk drives is $150 and $240, respectively.

Having a Transporter

One thing I don’t like is the lack of data-at-rest encryption. They use TLS for data transfers across your LAN and the Internet. The nice thing about having possession of the actual storage is that you can move it around, but the downside is that you may move it to less secure environments (like conference hotel rooms). And as with any disk storage, someone can come up to the device and steal the disk. Whether the data would be easily recognizable is another question, but having it encrypted would put that question to rest. There’s some indication on the Transporter support site that encryption may be coming for the business class solution, but nothing was said about the Transporter tower.

On the Mac, the Transporter folder has the shared folders as direct links (real sub-folders), but the local data is under a Transporter Library soft link. It turns out to be a hidden file (“.Transporter Library”) under the Transporter folder. When you Control-click on this file you are given the option to view deleted files. You can do this with shared files as well.

One problem with synch and share services is that once someone in your collaboration group deletes some shared files, they are gone (over time) from all other group users, even if some of them wanted to keep them. Transporter makes it a bit easier to view these files and save them elsewhere. But I assume at some point they have to be purged to free up space.

When I first installed the Transporter, it showed up as a network node under my Finder’s shared servers. But the latest desktop version (3.1.17) has removed this.

Also, some of the bloggers complained about seeing files “in flux” or duplicates of the shared files with unusual suffixes appended to them, such as “filename124224_f367b3b1-63fa-4d29-8d7b-a534e0323389.jpg”. Enrico (@ESignoretti) opened a support ticket on this and it’s supposedly been fixed in the latest desktop version; the odd name was a temporary filename used only during upload that should have been renamed/deleted once the upload completed. I just uploaded 22MB in about 40 files and didn’t see any of this.

I really want encryption, as I wanted one Transporter in a remote office and another in the home office with everything synched locally, and then I would hand-carry the remote one to the other location. But without encryption this isn’t going to work for me. So I guess I will limit myself to just one and move it around to wherever I want my data to go.

Here are some of the other blog posts by SFD7 participants on Transporter:

Storage field day 7 – day 2 – Connected Data by Dan Firth (@PenguinPunk)

File Transporter, private Synch&Share made easy by Enrico Signoretti (@ESignoretti)

Transporter – Storage Field Day 7 preview by Keith Townsend (@VirtualizedGeek)

Comments?

DS3, the BlackPearl and the way forward for … tape

Spectra Logic Summit 2013, Nathan Thompson, CEO, talking about Spectra Logic's history

Just got back from an analyst summit with Spectra Logic. They announced a new interface to tape called Deep Simple Storage Service (DS3) and an appliance that implements this interface named the BlackPearl. The intent is to broaden the use of tape to include today’s web services application environments.

The main problems addressed by the new interface are how to map an essentially sequential, high-throughput but long-latency-to-first-byte, removable media device onto an essentially small-file, get-and-put environment, and whether there is a market for such a service. I think Spectra Logic has answered the first set of questions and is about to embark on a journey to answer the second.

The new interface – it’s all about simplifying tape

The DS3 interface answers the first set of questions. With DS3, Spectra Logic has extended Amazon’s S3 interface to expose some of the sequentiality and removability of tape to the object storage world.

As you should recall, Amazon S3 is a RESTful web interface that uses HTTP-style GET and PUT commands to move data to and from the S3 storage service. The data you are moving is considered an object, and the object name or identifier is unique across the storage service. When you “PUT” an object you get to add key-value pairs of information called meta-data to the object. When you “GET” an object you retrieve the data from the storage service. The other thing one needs to be aware of is that you GET and PUT objects into “BUCKETs”.
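To make that concrete, here’s a minimal sketch of a PUT with meta-data and a GET against S3, using the boto3 Python SDK. The bucket, key, and file names are made up for illustration; this is just the standard S3 usage pattern, not anything DS3-specific.

```python
# Minimal sketch (not from the original post) of S3 PUT/GET semantics,
# using the boto3 Python SDK. Bucket, key, and file names are made up.
import boto3

s3 = boto3.client("s3")

# "PUT" an object into a bucket, attaching key-value meta-data pairs
with open("clip001.mov", "rb") as f:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="video/clip001.mov",
        Body=f,
        Metadata={"project": "sfd7", "camera": "A"},
    )

# "GET" the object (and its meta-data) back from the storage service
resp = s3.get_object(Bucket="my-archive-bucket", Key="video/clip001.mov")
data = resp["Body"].read()
print(resp["Metadata"])   # -> {'project': 'sfd7', 'camera': 'A'}
```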

With DS3, Spectra Logic has added essentially four new commands to the S3 protocol, which are:

  • Bulk Put – this provides a list of objects that one wants to “PUT” into a DS3 storage service, and the response from the DS3 storage service is an ordered list of which objects to PUT in sequence and which DS3 storage server node (essentially an IP address) to send the data to.
  • Bulk Get – this supplies a list of objects that one wants to GET from a DS3 storage service, and the response is an ordered list of the sequence in which to get those objects and the node address to use for those object GETs.
  • Export Bucket – this identifies a BUCKET that you wish to remove from a DS3 storage service. Presumably the response would be where the bucket can be found, the number of pieces of media to expect, and some identification of the media serial numbers that constitute the bucket on the DS3 storage service.
  • Import Bucket – this identifies a new bucket which will be imported into a DS3 storage service and supplies some necessary information, such as how many pieces of media to expect and the serial numbers of the media. Presumably the response will be a location which can be used to import the media.

With these four simple commands and an appropriate DS3 client, DS3 server, and DS3 storage backend, one has everything needed to support a removable-media object store. I could see real value for export/import like this on the “rare occasion” when a cloud service provider goes out of business.
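To illustrate the bulk-put flow described above, here’s a hypothetical client-side sketch. The endpoint paths, JSON fields, and host names are my own invention for illustration only; the actual DS3 wire protocol is what’s documented on Spectra Logic’s developer portal.

```python
# Hypothetical sketch of a DS3-style bulk PUT flow. The endpoint paths,
# JSON fields, and host names below are invented for illustration and are
# NOT the real DS3 wire protocol (see Spectra Logic's developer portal).
import requests

DS3_SERVER = "https://blackpearl.example.com"
objects = ["backup/db1.dump", "backup/db2.dump", "backup/logs.tar"]

# 1. Tell the DS3 service which objects we intend to PUT...
plan = requests.put(
    f"{DS3_SERVER}/bulk_put",
    json={"bucket": "deep-archive", "objects": objects},
).json()

# 2. ...it answers with an ordered list of objects and the storage
#    node (essentially an IP address) each one should be sent to.
for item in plan["chunks"]:
    node, key = item["node"], item["object"]
    with open(key, "rb") as f:
        # 3. PUT each object, in the order the service requested.
        requests.put(f"https://{node}/deep-archive/{key}", data=f)
```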

The DS3 interface will be publicly available, and the intent is to supply both Spectra Logic-developed clients as well as ISV/partner-developed DS3 clients, so as to provide removable media object stores for all sorts of other applications.

Spectra Logic is providing developer tools and documentation so that anyone can write a DS3 client. To that end, the DS3 developer portal is up (couldn’t find a link this AM but will update this post when I find it) and available free of charge to anyone today (I believe you need to register to gain access to the documentation). They have a DS3 server simulator that DS3 client developers can use to test out and validate their client software. They also have a try & buy service for client developers.

Essentially, the combination of DS3 clients, DS3 servers, and DS3 backend storage creates a really deep archive for object data. It’s not intended for primary or secondary storage access, but it’s big, cheap, power/space-efficient storage that can be very effective if used for archive data.

BlackPearl, the first DS3 Server

Their second announcement is the first implementation of a DS3 server, which Spectra Logic calls BlackPearl™. The BlackPearl connects to one or more Spectra Logic tape libraries as a backend store, which together essentially provide a DS3 object storage archive. The DS3 server talks to DS3 clients on the front end. BlackPearl uses SAS or FC connected tape transports, which can be any transport currently supported by Spectra Logic tape libraries, including IBM TS1140 and LTO-4, -5 and -6.

In addition to BlackPearl, Spectra Logic is releasing the first DS3 client, for Hadoop. In this case, the DS3 client implements a new version of the Hadoop DistCp (distributed copy) command which can be used to copy an HDFS directory tree onto a DS3 storage service.
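As a rough idea of how such a copy might be invoked (the exact syntax wasn’t shown; the “ds3://” URI scheme and paths below are my assumptions, patterned on standard DistCp usage):

```python
# Hypothetical invocation of the DS3-enabled DistCp (sketch only). The
# "ds3://" URI scheme and the bucket/path names are assumptions; the real
# client's syntax may differ. Standard DistCp usage is "hadoop distcp <src> <dst>".
import subprocess

subprocess.run(
    [
        "hadoop", "distcp",
        "hdfs://namenode:8020/data/clickstream",  # HDFS directory tree to archive
        "ds3://deep-archive/clickstream",         # DS3 target bucket (hypothetical)
    ],
    check=True,
)
```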

Current BlackPearl hardware is a standard 2U server with four 400GB SSDs inside, which act as a sort of speed-matching buffer between the object interface and the SAS/FC tape interface.

We only saw a configuration with one BlackPearl in operation (GA of BlackPearl is expected this December). But the plan is to support multiple BlackPearl appliances talking to the same DS3 backend storage. In that case, there will be a shared database and (tape) resource scheduler across all the appliances in the cluster.

Yes, but what about the market?

It’s a gutsy move for someone like Spectra Logic to define a new open interface to deep storage. The fact that the appliance exists outside the tape library itself and could potentially support any removable media offers interesting architectural capabilities. The current (beta) implementation lacked some sophistication but the expectation is that much of this will be resolved by GA or over time through incremental enhancements.

Pricing is appealing. When you add BlackPearl appliance(s) to a Spectra Logic T950 tape library using LTO drives, which supports an uncompressed data store of ~2.4PB of archive data, the purchase price is ~$0.10/GB. This compares especially well with current Amazon Glacier pricing of $0.01/GB/month, so for the price of 10 months of Glacier storage you could own your own DS3 storage service.
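A quick back-of-the-envelope sketch of that break-even math, using the prices quoted above (and ignoring power, space, maintenance, media refresh, and the like):

```python
# Back-of-the-envelope check of the ~10-month break-even claim, using the
# prices quoted above. Ignores power, space, maintenance, media refresh, etc.
capacity_gb = 2.4e6           # ~2.4PB uncompressed, in GB
ds3_price_per_gb = 0.10       # BlackPearl + T950/LTO purchase price, $/GB
glacier_per_gb_month = 0.01   # Amazon Glacier, $/GB/month

purchase_cost = capacity_gb * ds3_price_per_gb        # ~$240,000 one-time
glacier_monthly = capacity_gb * glacier_per_gb_month  # ~$24,000 per month

print(purchase_cost / glacier_monthly)   # ~10 months to break even
```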

Larger capacities are even cheaper: a BlackPearl with a T950 using TS1140 tape drives supporting 6.4PB comes in at $0.09/GB. Other configurations are available, and in general bigger configurations are cheaper on a $/GB basis and smaller ones more expensive. The configurations are spec’ed by Spectra Logic to include all the media, tape drives, and BlackPearl systems needed to support an archive object store.

As for markets, Spectra Logic already has beta interest from a large well known web services customer and a number of media & entertainment customers.

In the long run, Spectra Logic believes that if they can simplify access to tape for the applications it’s well qualified to support (deep archive), this will enable new applications to take advantage of tape that weren’t even dreamed of before. By opening up an object store interface to tape, anyone currently using S3 is a potential customer.

Amazon announced earlier this year that they have over 2 trillion objects in their S3. And as far as I can tell (see my post Who’s the next winner in storage?), they are growing with no end in sight.

~~~~

Comments?

 

VMworld day 2

At today’s VMworld keynote the subject was end user computing. The start was all the work being done with VMware View to enable virtual desktops to execute anywhere they need to be. VMware has some special graphical functionality to enable View to interact even better with today’s touch screen UIs, allowing cut and paste between a View desktop application on an Android tablet and a native tablet application – pretty impressive.

Next, VMware discussed the Wanova Mirage application, which provides centralized management of live desktops. The demo onstage had a laptop running Windows XP upgraded in real time (with restart) to Windows 7 (just in time to move up to Windows 2012). Then the demonstrator tripped and destroyed his ancient laptop. Mirage had synched an image of the laptop and was able to bridge the image to a virtual desktop, which enabled the use of View on his Galaxy tablet to show the presentation he was updating on his laptop. Next, the end user showed up with a MacBook Air and Mirage was able to extract the desktop image and have it run under VMware Fusion Pro natively on the MacBook Air. Apparently Mirage maintains a synchronized version of the desktop as it changes and enables it to be run/delivered anywhere it needs to be used.

VMware has been talking about the new multi-device world for a while now and this vision is being delivered in their Horizon set of applications. They showed an alpha version of their Horizon Suite, which joins Horizon App Manager, Horizon Data (Project Octopus, Dropbox for enterprise data) collaboration/data sharing, and Horizon Desktop. It seems to me an attempt to move vCloud Director-like management services to desktop users. It’s unclear to me how this interacts with View and Mirage, but it seems to be the next evolution.

With the alpha version of the Horizon Suite, they showed how easy it would be to create a new Horizon user and also how easy it was to add applications to the Horizon App Manager, which every user would then be able to download or which could optionally be pushed to all desktop environments. Apparently, desktop apps become vApps in this environment and can be pushed or pulled into any Horizon-managed desktop environment.

They had previously shown how a Horizon virtual machine running on Android phones would enable enterprise apps to run on mobile phones, but today they also showed how a Horizon Encapsulated Application would run on an iPhone. It showed an enterprise email client running on the iPhone; the user had to log in to access their email. It also showed an attempt to cut and paste from the enterprise application to a native iPhone app, which generated a stock statement that pasting from enterprise (Horizon encapsulated) iPhone apps was prohibited. The new app that was added to desktop support could be downloaded onto the iPhone and was immediately available as an iPhone app as well as a desktop app.

The end of the 2nd day keynote was a sort of Diamond Sponsor Hunger Games where each vendor got 4 minutes to present anything they wanted to show. Cisco showed a package called LISP which, with tunnel routers, would enable vMotion across continents; I’m not exactly sure what Dell showed; EMC demoed the new VMware Virtual Data Protection capability (Avamar light embedded into vSphere); HP demoed their 3PAR storage capability to configure VMs; and NetApp showed cluster-mode capabilities: how a customer was able to create a vSAN in seconds and how that data could live on long after its underlying storage was gone.

NetApp won the demo wars. VMware made charitable contributions to each of the vendors’ favorite charities.

That’s about it for day 2’s keynote, stay tuned for more…

VMworld first thoughts kickoff session

[Edited for readability. RLL] The drummer band was great at the start, but we couldn’t tell if it was real or lip-synched. It turned out that each of the big VMWORLD letters had a digital drum pad on it, which meant it was live, in real time.

Paul got a standing ovation as he left the stage, introducing Pat, the new CEO. With Paul on the stage, there was much discussion of how far VMware has come in the last four years. But IDC stats probably say it better than most: in 2008 about 25% of Intel x86 apps were virtualized and in 2012 it’s about 60%, and Gartner says that VMware has about 80% of that activity.

Pat got up on stage and it was like nothing had changed. VMware is still going down the path they believe is best for the world: a virtual data center that spans private, on-premises equipment and external cloud service providers’ equipment.

There was much ink on the software-defined data center, which takes the vSphere world view and incorporates networking, more storage, and more infrastructure into the already present virtualized management paradigm.

It’s a bit murky as to what’s changed, what’s acquired functionality and what’s new development but suffice it to say that VMware has been busy once again this year.

A single “monster VM” (it has its own Facebook page) now supports up to 64 vCPUs, 1TB of RAM, and can sustain more than a million IOPS. It seems that this should be enough for most mission critical apps out there today. No statement on the latency of those IOPS, but with a million IOs a second and 64 vCPUs we are probably talking flash somewhere in the storage hierarchy.

Pat mentioned that the vRAM concept is now officially dead. The pricing model is now based on physical CPUs and sockets; it no longer has a VM or vRAM component to it. Seemed like this got lots of applause.

There are now so many components to the vCloud Suite that it’s almost hard to keep track of them all: vCloud Director, vCloud Orchestrator, vFabric Application Director, vCenter Operations Manager, and of course vSphere, and that’s not counting the relatively recent acquisitions DynamicOps (a cloud dashboard) and Nicira (SDN services), and I am probably missing some of them.

In addition to all that, VMware has been working on Serengeti, which is a layer added to vSphere to virtualize Hadoop clusters. In the demo they spun up and down a Hadoop cluster with MapReduce operating to process log files. (I want one of these for my home office environment.)

They showed another demo of the vCloud Suite in action, spinning up a cloud data center and deploying applications to it in real time. Literally, it took ~5 minutes from start-up until they were deploying applications to it. It was a bit hard to follow as it went deep into the WAN-like networking environment configuration of load balancing, firewalls and other edge security, and workload characteristics, but it all seemed pretty straightforward and configured an actual cloud in minutes.

I missed the last part about Socialcast, but apparently it builds a social network around VMs? [Need to listen better next time.]

More to follow…

 

Electro-human interface

Flaming Lotus Girls Neuron by SanFranAnnie (cc) (from Flickr)

There has been a lot of talk recently about neuromorphic computing (see IBM introduces SyNAPSE) and using Phase Change Memory for artificial neurons, but there hasn’t been much discussion of ways to interface humans, or for that matter any life whatsoever, to electronics.

That is until today.

Recently a report came out, highlighted in IEEE Spectrum, on a Transistor Made to Run on Protons. It seems that the transistor technology used everywhere today runs on electrons, or negative charges, but bio-neurological systems all run on positive ions and/or protons. This makes a proton transistor especially appealing for a biocompatible electronic interface.

Proton transistors are born

Apparently the chip is made out of “nanofibers of chitosan” originally derived from a squid. Also, the device works only in the presence of high humidity and when “loaded with proton-donating acid groups”.

All sounds a bit biological to me but that’s probably the point.

It seems in the past, when they tried to provide a bio-electronic interface like this, they used micro-fluidics with a flow of positive ions through a pipe. But this new approach does not use flowing liquid at all, but rather just a flow of protons across an acid. The proton flow is controlled by an electrostatic potential applied to the transistor’s gate electrode.

Today the proton transistor has a channel width of 3.5 μm. At that size, it’s ~1000X bigger than current transistor technology (maybe even more), which means it will be some time before they embed a proton-based, 12-core Xeon processor in a brain.

Apparently the protons have a flow rate of ~5×10⁻³ cm² V⁻¹ s⁻¹ through the transistor, or by my calculations roughly ~1/2 bit per 100 seconds. Seems like we are going to need a lot more channels, but it’s only a start. For more information on the new transistor, read the original article in Nature, A polysaccharide bioprotonic field-effect transistor.

But what can it do for me?

A proton transistor has the potential to interface directly with human neurons and, as such, can form a biocompatible electronic interface. Such a bionanoprotonic device could conceivably create an in-brain electronics interface that brings digital information directly into a person’s consciousness. Such a capability would be a godsend to the blind, deaf, and handicapped.

Of course, if information can go in, it can also come out.

One can imagine that such an interface can provide a portal to the web, an interface to a desktop or mobile computing device without the burden of displays, keyboards, or speakers. Such a device, when universally available, may make today’s computing paradigm look like using a manual typewriter.

This all sounds like science fiction, but it feels like it just got a step closer to reality.

Can The Singularity be that far behind?

—–

Comments?