Mobile devices as a cache for cloud data

Howard Marks (DeepStorage.net) and I were on a GreyBeards podcast last month (PB are the new TB), talking with Brian Carmody (@initzero), CTO of hybrid storage vendor Infinidat, who happened to mention in passing that "our mobile devices pretty much act as caches for cloud data". That's interesting.

Mobile apps caching data

There's a part of me that couldn't agree more. Most of my mobile mail uses IMAP, which acts as a browser for email residing elsewhere. Radio apps stream music from the cloud. Photo apps can store pictures in the cloud. Social media apps (Facebook, LinkedIn, Twitter, etc.) use the cloud to store posts/pokes/photos and only cache minimal data locally. There are many more apps that act similarly.

But not all data is cached

On the other hand, I have downloaded all of my music library to my mobile devices. There was a time when I was more selective but later generation devices have more than enough storage to hold it all.

Movies are another story. Most purchased movies are downloaded to my desktop. With only 64GB of storage on my iPad/iPhone, I have to be a bit more judicious about which movies I store on the devices. Most of the time when I am watching movies on mobile devices, I don't have Internet access, so caching/streaming won't work. Yet for some services (Amazon Prime Video & Apple TV) I do stream at home, and then the TV or Apple TV caches the cloud media.

My photo library is similar; there are just too many photos to fit them all on the device. So for now they reside on my desktop, and only a select subset are copied to the device.

Contacts, passwords, calendars and countless other data that reside in the cloud or on my desktop computer are also replicated (not cached) on mobile devices. Could they be cached? Probably, but since I need these items even when Internet service is not available, caching them makes no sense.

Storage caching vs. mobile device caching

Storage caches are pretty sophisticated and Infinidat's is as sophisticated as any of them. Historically, storage caching is resilient in the face of power outages, storage device failures, software bugs, etc. Essentially, when storage data hits the cache, the storage system "guarantees" to write it to backend storage some time in the future. Read data caching requires less resilience/fault tolerance because the data already resides somewhere else on backend storage.
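
To make that write-back guarantee concrete, here's a minimal, hypothetical sketch (my own illustration, not Infinidat's or any vendor's implementation): writes are acknowledged once staged in the (assumed power-protected) cache, a background thread destages them to backend storage later, and reads can always fall back to the backend copy.

    import queue
    import threading

    class WriteBackCache:
        """Toy write-back cache: a write is acknowledged once it is staged in
        the (assumed power-protected) cache and destaged to backend storage
        later; reads fall back to the backend, since that copy always exists."""

        def __init__(self, backend_read, backend_write):
            self.backend_read, self.backend_write = backend_read, backend_write
            self.cache = {}                      # block_id -> data
            self.dirty = queue.Queue()           # blocks still owed to backend storage
            threading.Thread(target=self._destager, daemon=True).start()

        def write(self, block_id, data):
            self.cache[block_id] = data          # stage the write in cache
            self.dirty.put(block_id)             # promise to write it to backend later
            return "ack"                         # host sees the write as complete here

        def read(self, block_id):
            if block_id not in self.cache:       # read miss: backend copy always exists
                self.cache[block_id] = self.backend_read(block_id)
            return self.cache[block_id]

        def _destager(self):
            while True:                          # background destage loop
                block_id = self.dirty.get()
                self.backend_write(block_id, self.cache[block_id])

    # Example: a plain dict stands in for the backend (disk) storage.
    backend = {}
    cache = WriteBackCache(backend.get, backend.__setitem__)
    cache.write("blk0", b"hello")                # acked immediately, destaged later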

It's unclear whether mobile caching has similar strengths. As each app caches data in its own way, there would likely be less resilience in mobile caching than in storage subsystem caches. But I am no app developer, so I don't have a clue as to what caching services are available within the mobile app ecosystem.

Device internet speed too slow

One thing that keeps me storing data on my mobile devices is the speed of Wi-Fi and cellular Internet. It's often much slower than I would like. I suppose when these speed up there will be less need to save data on my mobile devices. But by that time, mobile storage will be much cheaper as well and I will have even more data to cache/store. So who knows.

Then again, in the foreseeable future there will be times without cellular or Wi-Fi Internet. So, storing data on the mobile device will always be the way to go, at least for some data.

Maybe Brian’s right

From my perspective, Brian is partially right about mobile devices caching cloud data. But maybe it's just because I am old school that I decide to store a lot of data on my mobile devices.

From Brian’s perspective, all that data is stored elsewhere (desktop or cloud). So it all could be cached and probably should be.

As the world rolls out IoT, with even less storage at the edge, caching cloud data will become even more of a necessity. Hopefully by then Internet access will become even more universal than it is already.

Comments?

Photo Credits: Blake Patterson, iPhone apps sphere

At Scale conference keynote, Facebook video experience re-engineered

The At Scale conference happened this past week in LA. Jay Parikh, Global Head of Engineering and Infrastructure at Facebook, kicked off the conference by talking about how Facebook is attempting to conquer some of its intrinsic problems as it scales up from over 1B users today. I was unable to attend the conference but watched a video of the keynote (on Facebook, of course).

The At Scale community is a group of large, hyper-scale web companies such as Google, Microsoft, Twitter, and of course Facebook, among a gaggle of others, that all have problems trying to scale up their infrastructure to handle more and more user activity. They had 1800 people registered for the At Scale 2015 conference on Monday, double last year's count. The At Scale community is trying to push the innovation level of the industry faster, through a community of companies that need to work at hyper-scale.

Facebook’s video problem

At Facebook, the current hot problem impacting customer satisfaction seems to be video uploads and playback (downloads). The issues with Facebook's video experience are multifaceted and range from the time it takes to successfully upload a video, to the bandwidth it takes to play back a video, to the system requirements to support live streaming video to 100,000s of users.

Facebook started as a text-only service, migrated to a photo-oriented service, but is now quickly moving to a video-oriented user experience. And it doesn't stop there: they can see on the horizon that augmented and virtual reality will become significant drivers of activity for Facebook users.

Daily video views have gone from 1B last year to 4B/day now. They also launched a new service lately, LiveMentions, a live streaming service for celebrities (real-time video streams). Several celebrities have live streamed to 150K of their subscribers. So video has become, and will continue to be, the main consumer of bandwidth at Facebook.

While struggling to enhance the Facebook user's video experience over the past year, they have come up with three key engineering principles that have helped them: Planning, Iteration and Performance.

Planning

Facebook is already operating a terabit-scale network, so doing something wrong to its network is going to cause major problems around the world. As a result, Facebook engineering focused early on incorporating lots of instrumentation into its network and infrastructure services. This has allowed them to constantly monitor the activity of their users across their infrastructure to identify problems and solutions.

One metric Parikh talked about was "playback success rate", the percentage of plays where the video starts to play in under 1 second for a Facebook user. One chart he showed was playback success rate colored over a world map, aggregated (averaged) at the country level. But with their instrumentation, Facebook was able to drill down to regions within a country and even cities within a region. This allows engineering to identify problems at almost any level of granularity they need.
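
As a rough illustration of the metric (my own sketch, not Facebook's actual pipeline), here's how you might compute playback success rate from per-play event records and aggregate it at country, region or city granularity:

    from collections import defaultdict

    def playback_success_rate(events, level="country"):
        """events: iterable of dicts like
        {"country": "US", "region": "CO", "city": "Denver", "start_ms": 850}
        Success = video starts playing in under 1 second (1000 ms).
        Returns {location: success_rate} at the requested granularity."""
        plays, successes = defaultdict(int), defaultdict(int)
        for e in events:
            key = e[level]                       # aggregate by country, region or city
            plays[key] += 1
            successes[key] += (e["start_ms"] < 1000)
        return {loc: successes[loc] / plays[loc] for loc in plays}

    # Example: country-level view, then drill down to regions within a country.
    events = [
        {"country": "US", "region": "CO", "city": "Denver", "start_ms": 850},
        {"country": "US", "region": "CO", "city": "Denver", "start_ms": 1400},
        {"country": "IN", "region": "MH", "city": "Mumbai", "start_ms": 2300},
    ]
    print(playback_success_rate(events, "country"))   # {'US': 0.5, 'IN': 0.0}
    print(playback_success_rate(events, "region"))    # {'CO': 0.5, 'MH': 0.0}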

One key takeaway on Planning: if you have the instrumentation in place, have people to monitor and mine the data, and are willing to address the problems that crop up, then you can create a more flexible, efficient and effective environment and build a better product for your users.

Iteration

Iteration is not just about feature deployment; it's also about the Facebook user experience. Their instrumentation had told them that they were doing OK on video uploads, but when they looked at the details, they saw that some customers were not having a satisfactory video upload experience. For instance, one Facebook engineer had to wait 82 hours to upload a video.

The Facebook world is populated with 10s of thousands of unique devices with different memory, compute and storage. They had to devise approaches that could optimize the encoding for all the different devices, some of which is done on the mobile phones themselves.

They also had to try to optimize the network stack for different devices and mobile networking technologies. Parikh had another map showing network connectivity. Surprise: most of the world is not on LTE, and a vast majority of the world is on 2G and 3G cellular networks. So via iteration Facebook went about improving video upload by 1% here and 1% there, but with Facebook's user base, these improvements impact millions of users. They used cross-functional teams to address the problems they uncovered.

However, video upload problems were not confined to the device and connectivity realms. It turns out they had a big cancel-upload button on the screen after the start of a video upload. This was sometimes clicked by mistake, and they found that almost 10% of users hit cancel upload. So they went through and re-examined the whole user experience to try to eliminate other hindrances to successful video uploads.

Performance

The key takeaway from this segment of the talk was that performance has to be considered from the get-go of a new service or service upgrade. It is impossible to improve performance after the fact, especially for At Scale environments.

In my CS classes, the view was make it work and then make it work fast. What Facebook has found is that you never have the time after a product has shipped to make it fast. As soon as it works, they have to move on to the next problem.

As a result, if performance is not built in from the start, as a critical requirement/feature of the system architecture and design, it never gets addressed. Also, if all you focus on is making it work, then the design and all the code are built around feature functionality. Changing working functionality later to improve performance is an impossible task and typically represents a re-architecture/re-design/re-implementation of the functionality.

For instance, Facebook used to do video encoding serially on a single server. It often took a long time (10 to 30 minutes). Engineering re-implemented their video encoding to partition the video and distribute the encoding across multiple servers. Doing this sped up encoding time considerably.
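
A minimal sketch of the idea (my own toy version with stand-in splitter and encoder functions, not Facebook's encoder): partition the video into segments, encode the segments in parallel across workers, then stitch the results back together.

    from concurrent.futures import ProcessPoolExecutor

    def split_video(video, n):
        """Stand-in splitter: chop a byte string into n roughly equal chunks."""
        step = max(1, len(video) // n)
        return [video[i:i + step] for i in range(0, len(video), step)]

    def encode_segment(segment):
        """Stand-in for the real per-segment encoder (e.g. an ffmpeg job)."""
        return segment[::-1]                     # pretend "encoding" transform

    def parallel_encode(video, n_segments=8, n_workers=8):
        segments = split_video(video, n_segments)           # partition the video
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            encoded = pool.map(encode_segment, segments)    # encode chunks in parallel
        return b"".join(encoded)                            # stitch results back together

    if __name__ == "__main__":                   # guard needed for ProcessPoolExecutor
        print(parallel_encode(b"raw-video-bytes" * 1000)[:16])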

But they didn't stop there. With such a diverse user networking environment, they felt they could save bandwidth and better optimize user playback if they could reduce playback video size. They were able to take the machine learning/AI investments Facebook has made and apply them to distributed video encoding. They analyzed the video scene by scene and opportunistically reduced bandwidth load and storage size while still maintaining video playback quality. By implementing the new video encoding process they have achieved double-digit reductions in bandwidth requirements for playback.

Another example of the importance of performance was the LiveMentions feature discussed above. Celebrities often record streams in places with poor networking infrastructure. So, in order to ensure a good streaming experience, Facebook had to implement variable bit rate video upload to adjust upload bandwidth requirements to the networking environment. Moreover, once a celebrity starts a live stream, all their fans in the world get notified. Then there's a thundering herd (boot storms, anyone?) trying to start watching the video stream. In order to support this mass streaming, Facebook implemented stream blocking, which holds off the start of live stream viewing until they have cached enough of the video stream at their edge servers worldwide. This guaranteed that all the fans had a good viewing experience, once it started.
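
Here's a toy sketch of the stream-blocking idea (my own hedged illustration, not Facebook's implementation): viewers are held back until the edge cache has buffered enough of the live stream to absorb the thundering herd.

    import time

    class EdgeStreamGate:
        """Toy 'stream blocking' gate: viewers notified of a live stream are held
        until the local edge cache has buffered enough seconds of video, which
        smooths out the thundering herd when all the fans click play at once."""

        def __init__(self, min_buffered_secs=10.0):
            self.min_buffered_secs = min_buffered_secs
            self.buffered_secs = 0.0

        def ingest(self, segment_secs):
            """Called as the origin pushes stream segments to this edge server."""
            self.buffered_secs += segment_secs

        def try_start_viewing(self):
            """Viewers poll this; playback only starts once the cache is primed."""
            return self.buffered_secs >= self.min_buffered_secs

    gate = EdgeStreamGate(min_buffered_secs=10.0)
    while not gate.try_start_viewing():
        gate.ingest(segment_secs=2.0)      # origin keeps filling the edge cache
        time.sleep(0.01)                   # viewers stay "blocked" in the meantime
    print("edge cache primed, start playback for everyone")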

There were a couple more videos of the show sessions, but I didn't have time to review them. Facebook sounds like a fun place to work, especially for infrastructure performance experts.

~~~~

Comments?

#VMworld2015 day 1 announcements

 

It seemed like today was all about the cloud and cloud-native apps. Among the many announcements, VMware announced two key new capabilities: vSphere Integrated Containers and the Photon Platform.

Containers running on VMware

  • VMware vSphere Integrated Containers is an implementation of containers that runs natively under vSphere. The advantage of this solution is that when developers fire up a multi-container app, each container now exists as a separate VM under vSphere and can be managed, monitored and secured just like any other VM in the environment. Previously, a multi-container app would run one VM per container engine, with potentially many containers running under that single VM. But with vSphere Integrated Containers, the container engine and the lightweight Linux kernel (Photon OS) are integrated into the ESX hypervisor so each container runs as a native VM. Integrated Containers is a follow-on to a combination of Project Bonneville, Project Photon (OS) and Instant Clones. Recall that with Instant Clones one can spin up a clone of a VM in less than a second with an initial memory footprint of 0MB.
  • Photon Platform takes container execution to a whole new level, with a new deployment of a hypervisor tailor-made to run containers (not general-purpose VMs). With the Photon Platform one natively runs container frameworks underneath the platform. The Photon Platform consists of Photon Machine, which is Photon OS (a lightweight Linux kernel distro) plus the new Microvisor (a new lightweight hypervisor for container hardware calls), and Photon Controller, which is a distributed control plane and management API. With the Photon Platform one can manage 100K to millions of containers, running under 1000s of container frameworks.

Over time the Photon Platform is intended to be open sourced. VMware also announced a bundling of Pivotal Cloud Foundry with the Photon Platform so as to better run cloud-native apps implemented in Cloud Foundry. But the ultimate intent is to provide support for Google Kubernetes, Apache Mesos and any other container framework that comes out.

So now you can run your Docker container apps or any other container app solution in two different ways. One depends on the standard vSphere management platform and runs container apps as standard VMs. The other takes a completely greenfield approach and runs container frameworks natively on a ground-up new hypervisor with an altogether new management solution that scales.

The advantage of the Photon Platform is that it scales to extreme, cloud-level application environments. Photon is intended to run cloud-native apps.

vCloud Air extensions

One of the other major things VMware demoed today was moving a VM from on premises to vCloud Air and back again – a real crowd pleaser. One VMware exec said MIT had convinced them they needed to be able to move dev-test apps from on premises to the cloud. MIT then turned around and decided they wanted to move dev-test activity back to their on-premises environment and instead move their production apps to vCloud Air.

They demoed both capabilities, using vMotion to move a VM to vCloud Air and using it again to move it back. The nice thing about all this is that all the security and other attributes of the VM move to the cloud and back again along with the VM. All the while the VM continued to operate, with no disruption to execution. They did mention that it could potentially take hours to move the data for the VM.

There were a number of other capabilities announced today, including EVO SDDC (EVO: RACK reborn), which includes a new data center management solution. Customers can now roll in a rack of servers and have EVO SDDC manage them and deploy a software defined data center on them in a matter of hours. Within EVO SDDC you can have application domains which span racks of servers but provide isolation and management multi-tenancy.

NSX 6.2 was also discussed; essentially it is key to extending your networking from on premises to vCloud Air. With NSX 6.2, local routing, micro-segmentation security and app firewalls can be configured locally and then be "extended" to the vCloud Air environment.

Lots of moving parts here; I probably missed some key components to these solutions and didn't cover any of them well enough, other than to give a feel for what they are.

But one thing is clear: VMware's long-term strategy is to take your native, on-premises VMs to vCloud Air and back again, and if your DevOps group or any other BU wants to use containers to implement cloud apps, VMware has you covered coming and going.

Comments?

Flash’s only at 5% of data storage

We have been hearing for years that NAND flash is at price parity with disk. But at this week's Flash Memory Summit, Darren Thomas, VP of Micron's Storage BU, said in his keynote that NAND only stores 5% of the bits in a data center.

Darren's session was all about how to get flash to become more than 5% of data storage; he called this "crossing the chasm". I assume the 5% is against yearly data storage shipped.

Flash’s adoption rate

Darren said that last year flash climbed from 4% to 5% of data center storage, but he made no mention of whether flash's adoption was accelerating. According to another of Darren's charts, flash is expected to ship ~77B Gb of storage in 2015 and should grow to about 240B Gb by 2019.

If the ratio of flash bits shipped to data centers (vs. all flash bits shipped) holds constant, then flash should be ~15% of data storage by 2019. But this assumes data storage doesn't grow. If we assume a 10% Y/Y CAGR for data storage, then flash would represent about 9% of overall data storage.
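
To spell out the arithmetic, here's a back-of-the-envelope version (my own sketch of the reasoning; the 10%-growth case lands between roughly 9% and 11% depending on how many years of storage growth you compound over):

    def flash_share(share_now=0.05, flash_now=77e9, flash_later=240e9,
                    storage_cagr=0.0, years=4):
        """Back-of-the-envelope flash share of data center storage.
        Assumes the data-center fraction of flash bits shipped stays constant,
        so flash's share scales with flash bit growth divided by total storage growth."""
        flash_growth = flash_later / flash_now          # ~3.1X from 2015 to 2019
        storage_growth = (1 + storage_cagr) ** years    # total data storage growth
        return share_now * flash_growth / storage_growth

    print(f"flat storage:  {flash_share():.1%}")                  # ~15.6%
    print(f"10% Y/Y CAGR:  {flash_share(storage_cagr=0.10):.1%}")  # ~10.6% over 4 yrs, ~9.7% over 5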

Data growth at 10% could be conservative. A 2012 EE Times article said the 2010-2015 data growth CAGR would be 32%, and IDC's 2012 digital universe report said that between 2012 and 2020, data will double every two years, a ~41% CAGR. But both numbers could be talking about the world's data growth, not just data centers.

How to cross this chasm?

Geoffrey Moore, author of Crossing the Chasm, came up on stage as Darren discussed what he thought it would take to go beyond early adopters (visionaries) to the early majority (pragmatists) and reach wider flash adoption in data center storage. (See the Wikipedia article for a summary of Crossing the Chasm.)

As one example of crossing the chasm, Darren talked about the electric light bulb. At introduction it competed against candles, oil lamps, gas lamps, etc. But it was the most expensive lighting system at the time.

But when people realized that electric lights allowed you to do stuff at night, and not just go to sleep, adoption took off. At the time, competitors to the electric bulb did provide lighting, it just wasn't that good; in fact, most people went to bed at night because the light then available was so poor.

However, the electric bulb's higher performing lighting opened up the night to other activities.

What needs to change in NAND flash marketing?

From Darren's perspective, the problem with flash today is that marketing and sales of flash storage are all about speeds, feeds and relative pricing against disk storage. But what's needed is to discuss the disruptive benefits of flash/NAND storage that are impossible to achieve with disk today.

What are the disruptive benefits of NAND/flash storage, unrealizable with disk today?

  1. Real time analytics and other RT applications;
  2. More responsive mobile and data center applications;
  3. Greener, quieter, and potentially denser data center;
  4. Storage for mobile, IoT and other ruggedized application environments.

Only the first three above apply  to data centers. And none seem as significant  as opening up the night, but maybe I am missing a few.

Also, the Wikipedia article cited above states that a Crossing the Chasm approach works best for disruptive or discontinuous innovations, and that more continuous innovations (those that don't cause significant behavioral change) do better with Everett Rogers' standard diffusion of innovations approach (see the Wikipedia article for more).

So is NAND flash a disruptive or continuous innovation?  Darren seems firmly in the disruptive camp today.

Comments?

Photo Credit(s): 20-nanometer NAND flash chip, IntelFreePress’ photostream

Nantero emerges from stealth with CNT-based NRAM

Nantero just came out of stealth this week and bagged $31.5M in a Series E funding round. Apparently, Nantero has been developing a new form of non-volatile RAM (NRAM), based on carbon nanotubes (CNT), which seems to work like an old T-bar switch, only at the nanometer scale and using CNTs for the wiring.

They were founded in 2001, and are finally ready to emerge from stealth. Nantero already has 175+ issued patents, with another 200 patents pending. The NRAM is already in production at 7 CMOS fabs and they are sampling 4Mb NRAM chips to a number of customers.

NRAM vs. NAND

Performance of the NRAM is on a par with DRAM (~100 times faster than NAND), it can be configured in 3D, and it supports MLC (multi-bit per cell) configurations. NRAM also supports orders of magnitude more accesses (I assume they mean writes) and retains data much longer than NAND.

The only question is capacity: with shipping NAND on the order of 200Gb per chip, the 4Mb NRAM is roughly 50,000X behind NAND. Nantero claims that their CNT-NRAM CMOS process can be scaled down to <5nm, which is one or two generations below the current NAND scale factor, and assuming they can pack as many bits in the same area, they should be able to compete well with NAND. They claim that their NRAM technology is capable of terabit capacities (assumed to be at the 5nm node).
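
A quick back-of-the-envelope check of that capacity gap, using the figures quoted above (my arithmetic, treating Gb and Mb as decimal billions and millions of bits):

    import math

    nand_bits = 200e9     # ~200 Gb shipping NAND chip
    nram_bits = 4e6       # 4 Mb NRAM sample chip
    gap = nand_bits / nram_bits
    print(f"NRAM is ~{gap:,.0f}X (~2^{math.log2(gap):.1f}) behind NAND")
    # prints: NRAM is ~50,000X (~2^15.6) behind NAND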

The other nice thing is that Nantero says the new NRAM uses less power than DRAM, which means that in addition to attaining higher capacities and DRAM-like access times, it will also reduce power consumption.

It seems a natural for mobile applications. The press release claims it has already been tested in space and there are customers looking at the technology for automobiles. The company claims the total addressable market is ~$170B USD, which probably includes DRAM and NAND together.

CNT in CMOS chips?

Key to Nantero's technology was incorporating the use of CNTs in CMOS processes, so that chips can be manufactured on current fab lines. It's probably just the start of the use of CNTs in electronic chips, but it's one that could potentially pay for the technology development many times over. CNTs have a number of characteristics which would be beneficial to other electronic circuitry beyond NRAM.

How quickly they can ramp the capacity up from 4Mb seems to be a significant factor, which is no doubt why they went out for Series E funding.

So we have another new non-volatile memory technology. On the other hand, these guys seem to be a long way past the lab stage, with something that works today and the potential to go all the way down to 5nm.

It should be interesting, as the other NV technologies start to emerge, to see which one generates sufficient market traction to succeed in the long run. Especially as NAND doesn't seem to be slowing down much.

Comments?

Picture Credits: Wikimedia.com

EMCWorld2015 day 1 news

We are at EMCWorld2015 in Vegas this week. Day 1 was great, with new XtremIO 4.0 ("The Beast"), enhanced Data Protection, and a new VCE VxRACK converged infrastructure solution announced. Somewhere in all the hoopla I saw an all-flash VNXe appliance and a VMAX3 with a cloud storage tier, but these seemed to be just teasers.

XtremIO 4.0

The new hardware provides 40TB per X-Brick; an 8-X-Brick cluster provides 320TB raw, or ~1.9PB effective capacity with compression/dedupe. As XtremIO supports 150K mixed IOPS per X-Brick, an 8-X-Brick cluster could do 1.2M mixed IOPS, or with 250K read IOPS per X-Brick, 2.0M read IOPS.
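
Just to spell out the cluster math (my own arithmetic from the quoted per-X-Brick figures; the ~6:1 data reduction ratio is simply what the raw vs. effective numbers imply, not an EMC claim):

    bricks = 8
    raw_tb = 40 * bricks                      # 320TB raw
    effective_tb = 1.9 * 1000                 # ~1.9PB effective with compression/dedupe
    mixed_iops = 150_000 * bricks             # 1.2M mixed IOPS
    read_iops = 250_000 * bricks              # 2.0M read IOPS
    print(f"raw: {raw_tb}TB, implied data reduction ~{effective_tb / raw_tb:.1f}:1")
    print(f"mixed IOPS: {mixed_iops:,}, read IOPS: {read_iops:,}")
    # prints: raw: 320TB, implied data reduction ~5.9:1
    #         mixed IOPS: 1,200,000, read IOPS: 2,000,000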

XtremIO 4.0 also includes RecoverPoint integration. (I assume this means they have integrated the write splitter directly into XtremIO, so you don't need the host or switch version of the write splitter.)

The other thing XtremIO 4.0 introduces is non-disruptive upgrades. This means that they can expand or contract the cluster without taking down IO activity.

There was also some mention of better application consistent snapshots, which I suspect means Microsoft VSS integration.

XtremIO 4.0 is a free software upgrade, so the ability to scale up to 8 X-Bricks, non-disruptive cluster changes, and RecoverPoint integration can all be added to current XtremIO systems.

Data Protection

EMC introduced a new top-end Data Domain hardware appliance, the Data Domain 9500, which has 1.5X the performance (58.7TB/hr) and 4X the capacity (1.7PB) of their nearest competitor's solution.

They also added a new software feature (from Maginatics) called CloudBoost™. CloudBoost allows NetWorker and Avamar to back up to cloud storage. EMC also added Microsoft Office 365 cloud backup to Spanning's previous Google Apps and Salesforce cloud backups.

VMAX3 ProtectPoint was also enhanced to provide native backup for Oracle, Microsoft SQL Server, and IBM DB2 application environments. ProtectPoint offers a direct path between VMAX3 and Data Domain appliances and can speed up backup performance by 20X.

EMC also announced Project Falcon, which is a virtual appliance version of the Data Domain software.

VCE VxRACK

This is a rack-sized stack of VSPEX Blue appliances (a VMware EVO:RAIL solution) with new software to bring VCE usability and data-center-scale services to a hyper-converged solution. Each appliance is a 2U rack-mounted compute-intensive or storage-intensive unit. The Blue appliances are configured in a rack for VxRACK, and with version 1 you can choose VMware or KVM as your hypervisor. Version 2 will come out later this year and will be based on a complete VMware stack known as EVO: RACK.

Storage services are supplied by EMC ScaleIO. You can purchase a 1/4 rack, 1/2 rack or full rack, which includes top-of-rack networking. You can also scale out by adding more full racks to the system. EMC said that it can technically support 1000s of racks of VSPEX Blue appliances, for up to ~38PB of storage.

The significant thing is that the VCE VxRACK supplies the VCE customer experience in a hyper-converged solution. However, the focus for VxRACK is tier 2 applications that don't need the extremely high availability, low response times and high performance of the tier 1 applications that run on their VBLOCK solutions (with VNX, VMAX or XtremIO storage).

VMAX3

They had a 5th grader provision a VMAX3 gold storage LUN and convert it to a diamond storage LUN in 20.48 seconds. It seemed pretty simple to me, but the kid blazed through the screens a bit fast for me to see what was going on. It wasn't nearly as complex as it used to be.

VMAX3 also introduces CloudArray™, which uses FAST.X storage tiering to cloud storage (using onboard TwinStrata software). This could be used as tier 3 or 4 storage. EMC also mentioned that you can put an XtremIO (maybe an X-Brick) behind a VMAX3 storage system. VMAX3's software rewrite has separated data services from backend storage, and one can see EMC rolling out different backend storage (like cloud storage or XtremIO) in future offerings.

Other Notes

There was a lot of discussion about the "Information Generation", a new customer for IT services. This is tied to the 3rd platform transformation that's happening in the industry today. To address this new world, IT needs to have 5 attributes:

  1. Predictively spot new opportunities for services/products
  2. Deliver a personalized experience
  3. Innovate in an agile way
  4. Develop trusted programs/apps, demonstrating transparency & trust
  5. Operate in real time

David Goulden talked a lot about what this all means and I encourage you to take a look at the video stream to learn more.

Speaking of video, last year was the first year there were more online viewers of EMCWorld than actual participants. So this year EMC upped their game with more entertainment value. The opening dance sequence was pretty impressive.

A lot of talk today was on the 3rd platform and the transition from the 2nd platform. EMC says their new products are Platform 2.5, enablers for the 3rd platform. I asked what the 3rd platform storage environment looks like, and they said a scale-out (read: ScaleIO), converged storage environment with flash for metadata/indexing.

As the 3rd platform transforms IT there will be some customers that will want to own the infrastructure, some that will want to use service providers and some that will use public cloud services. EMC’s hope is to capture those customers that want to own it or use service providers.

Tomorrow the focus will be on the Federation with Pivotal and VMware being up for keynotes and other sessions. Stay tuned.

 

 

EMC acquisitions & other announcements at #EMCSummit last week

EMC recently announced the acquisition of Maginatics and Spanning, which are mainly about pushing data protection out to cloud storage and protecting data that lives in the cloud. The other major item to come out of the EMC Global Analyst Summit last week was the announcement of EMC's Hybrid Cloud solution.

Maginatics for cloud onramp

Maginatics is intended to be yet another tier in the deep archive for Avamar, Data Domain and NetWorker, but this one is intended to be in the cloud. MagFS, Maginatics' file system, manages the file-to-object transition, provides its own deduplication and can replicate data to one or more cloud providers, supporting different cloud services such as AWS, Google Compute, Azure, Cleversafe, etc. I think ultimately this may be broadened beyond just data protection to become another tier for Unified storage to move to the cloud as well, but that's a subject for another post.

How does Maginatics differ from another recent EMC acquisition, TwinStrata? TwinStrata is mainly targeted at primary storage, moving a LUN to the cloud while maintaining data availability and something like reasonable responsiveness to the data that's moved to the cloud. So where Maginatics is for data protection storage, TwinStrata is for primary storage. It's unclear where this leaves other file storage…

Spanning for protection of cloud data

Spanning is intended to be a data protection solution for data that's born in the cloud. In this case, you can use Spanning to back up your cloud applications to different cloud service providers, or even to the same cloud service provider. Even if you don't want to use Spanning to back up your cloud apps' data, with a cloud version of Data Protection Advisor (based somewhat on Spanning), you should ultimately be able to monitor your current cloud provider's replication/protection activities to ensure they are copying and backing up your data properly across data center domains, etc. In this way you can better monitor your cloud provider's internal data replication/protection services.

EMC seems to have a vision of where it intends to go: the cloud represents a significant new potential data stream, and they want to be there to help protect it and to use it to help protect other data.

EMC’s Hybrid Cloud announcement

The main announcement at the summit was EMC's Hybrid Cloud offering, which was pre-announced at EMC World last spring. With their Hybrid Cloud offering making it easier for data centers to take advantage of the cloud to burst applications back and forth, EMC is trying to cover any way to use the cloud that makes sense.

EMC announced that their Hybrid Cloud solution will support a "federation" hybrid cloud solution based on VCE/VBLOCK or VSPEX, and a software defined version based on the ViPR storage controller. They also made a statement of direction to have their Hybrid Cloud solution support Microsoft Azure as well as OpenStack at some point in the future…

Well I think that about covers it for EMC cloud announcements from the EMC Global Summit last week.

Comments?

Apple SIM and more flexible data plans

(Photo Credit: (c) 2014, Apple, from their website)

The new US and UK iPad Air 2 and iPad Mini 3 now come with a new, programmable SIM card (see the Wikipedia SIM article for more info) for their cellular data services. This is a first in the industry and signals a move to more flexible cellular data plans.

Currently, the iPad Air 2's Apple SIM card supports AT&T, Sprint and T-Mobile in the US (what, no Verizon?) and EE in the UK. With this new flexibility one can switch iPad data carriers anytime, seemingly right on the device, without having to get up from your chair at all. You no longer need to go into a cellular vendor's store, get a new SIM card and insert the new SIM card into your iPad Air 2.

It seems not many cellular carriers have signed up for the new programmable SIM cards yet. But with the new Apple SIM's ability to switch data carriers in an instant, can the other data carriers hold out for long?

What's a little unclear to me is how the new Apple SIM doesn't show support for Verizon, yet the iPad Air 2 literature does show support for Verizon data services. After talking with Apple iPad sales, it turns out there is an actual SIM card slot in the new iPads that holds the new Apple SIM card; if you want to use Verizon, you need to get a SIM card from them, swap out the Apple SIM card, and insert the Verizon SIM card into the iPad Air 2.

Having never bought a cellular option for my iPads, this is all a little new to me. But it seems that when you purchase a new iPad Air 2 Wi-Fi + Cellular, the list pricing doesn't include any data plan. So you are free to go to whatever compatible carrier you want, right out of the box. With the new Apple SIM the compatible US carriers are AT&T, T-Mobile and Sprint. If you want a Verizon data plan you have to buy a Verizon iPad.

For AT&T, it appears that you can use their DataConnect cellular data service for tablets on a month-by-month basis. I assume the same is true for T-Mobile, who make a point of not having any service contracts, even for phones. Not so sure about Sprint, but if AT&T offers it, can Sprint be far behind?

I have had a few chats with the cellular service providers and I would say they are not all up to speed on the new Apple SIM capabilities but hopefully they will get there over time.

Now if Apple could somehow do the same for cable data plans or cable TV providers, they really could change the world – Apple TV anyone?

Comments?