Who’s the next winner in data storage?

Strange Clouds by michaelroper (cc) (from Flickr)

“The future is already here – just not evenly distributed”, W. Gibson

It starts as it always does, outside the enterprise data center: in the lines of business, in the development teams, in the small business organizations that don’t know any better but still have an unquenchable need for data storage.

It’s essentially an Innovator’s Dilemma situation. The upstarts are coming into the market at the lower-end, lower-margin side of the business that the major vendors don’t seem to care about, don’t service very well and are ignoring at their peril.

Yes, it doesn’t offer all the data services that the big guns (EMC, Dell, HDS, IBM, and NetApp) have. It doesn’t offer the data availability and reliability that enterprise data centers have come to require from their storage. And it doesn’t have the performance of major enterprise data storage systems.

But what it does offer is lower CapEx, unlimited scalability, and much easier to manage and adopt data storage, albeit using a new protocol. It does have some inherent, hard to get around problems, not the least of which are speed of data ingest/egress, highly variable latency and eventual consistency. There are other problems which are more easily solvable, with work, but these three are intrinsic to the solution and need to be dealt with systematically.

And the winner is …

It has to be cloud storage providers and the big elephant in the room has to be Amazon. I know there’s a lot of hype surrounding AWS S3 and EC2 but you must admit that they are growing, doubling year over year. Yes, they are starting from a much lower capacity point and yes, they are essentially providing “rentable” data storage space with limited or even non-existent storage services. But they are opening up whole new ways to consume storage that never existed before. And therein lies their advantage and threat to the major storage players today, unless the majors act to counter this upstart.

On AWS’s EC2 website there must be four dozen different applications that can be fired up in a matter of a click or two. When I checked out S3, you only need to sign up and identify a bucket name to start depositing data (files, objects). After that, you are charged per month for the storage used, data transfer out (data in is free), and the number of HTTP GETs, PUTs, and other requests that are made. The first 5GB is free and comes with a judicious amount of GETs, PUTs, and outbound data transfer bandwidth.
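
To give a feel for just how little is involved, here’s a minimal sketch of that workflow using the current boto3 Python SDK. The bucket and object names are hypothetical, and it assumes your AWS credentials are already configured; the point is simply that a bucket plus a few PUTs and GETs is all it takes to start consuming storage.

```python
# Minimal sketch of the S3 workflow described above (hypothetical names).
# Charges accrue for storage used, data transferred out, and per-request
# GET/PUT counts, just as noted in the text.
import boto3

s3 = boto3.client("s3")              # picks up credentials from your AWS config/env
bucket = "my-example-bucket"         # hypothetical; bucket names are globally unique

s3.create_bucket(Bucket=bucket)      # one-time bucket creation
                                     # (outside us-east-1 you'd also pass a CreateBucketConfiguration)
s3.put_object(Bucket=bucket,         # a PUT request (charged per request)
              Key="reports/jan.csv",
              Body=b"col1,col2\n1,2\n")
obj = s3.get_object(Bucket=bucket,   # a GET request (charged per request + data out)
                    Key="reports/jan.csv")
print(obj["Body"].read())
```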

… but how can they attack the enterprise?

Aside from the three systemic weaknesses identified above, for enterprise customers they seem to lack enterprise security, advanced data services and high availability storage. Yes, NetApp’s Amazon Direct addresses some of the issues by placing enterprise-owned, secured and highly available storage where it can be accessed by EC2 applications. But to really take over and make a dent in enterprise storage sales, Amazon needs something with enterprise class data services, availability and security, plus an on-premises storage gateway that uses and consumes cloud storage, i.e., a cloud storage gateway. That way they can meet or exceed enterprise latency and services requirements at something that approximates S3 storage costs.

We have talked about cloud storage gateways before but none offer this level of storage service. An enterprise class S3 gateway would need to support all storage protocols, especially block (FC, FCoE, & iSCSI) and file (NFS & CIFS/SMB). It would need enterprise data services, such as read-writeable snapshots, thin provisioning, data deduplication/compression, and data mirroring/replication (synch and asynch). It would need to support standard management configuration capabilities, like VMware vCenter, Microsoft System Center, and SMI-S. It would need to mask the inherent variable latency of cloud storage through memory, SSD and hard disk data caching/tiering. It would need to conceal the eventual consistency nature of cloud storage (see link above). And it would need to provide iron-clad data security for cloud storage.
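
To make the latency-masking and consistency-hiding requirements a bit more concrete, below is a minimal sketch of the core idea: a write-back cache that acknowledges writes locally and flushes them to the cloud in the background. This is my own toy illustration under assumed interfaces, not any vendor’s design; the in-memory dictionary stands in for the real S3 calls.

```python
# Toy write-back caching gateway: reads are served from a local cache tier
# (think RAM/SSD/HDD) when possible; writes complete at local latency and are
# flushed to an S3-like object store asynchronously. The local cache also hides
# the store's eventual consistency, since reads always see the latest local write.
import queue
import threading

class CloudStorageGateway:
    def __init__(self, cloud):
        self.cloud = cloud              # stand-in for an S3 client (e.g. boto3)
        self.cache = {}                 # stand-in for the RAM/SSD/HDD cache tier
        self.dirty = queue.Queue()      # keys waiting to be flushed to the cloud
        threading.Thread(target=self._flusher, daemon=True).start()

    def write(self, key, data):
        self.cache[key] = data          # acknowledged at enterprise (local) latency
        self.dirty.put(key)             # the cloud PUT happens in the background

    def read(self, key):
        if key in self.cache:           # cache hit: no slow, variable-latency round trip
            return self.cache[key]
        data = self.cloud[key]          # cache miss: fetch from the cloud (slow path)
        self.cache[key] = data
        return data

    def _flusher(self):
        while True:
            key = self.dirty.get()
            self.cloud[key] = self.cache[key]   # a real gateway would do an S3 PUT here

cloud = {}                              # pretend S3 bucket
gw = CloudStorageGateway(cloud)
gw.write("lun0/block42", b"enterprise data")
print(gw.read("lun0/block42"))          # served from cache; the cloud flush happens later
```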

It would also need to be enterprise hardened, highly available and highly reliable. That means dual-redundant, highly serviceable hardware FRUs, concurrent code load, and multiple controllers with multiple, independent, high speed links to the internet. Today’s highly available data storage requires multi-path storage networks, multiple independent power sources and resilient cooling, so adding multiple independent, high-speed internet links to use Amazon S3 in the enterprise is not out of the question. In addition to the highly available and serviceable storage gateway capabilities described above, it would need to supply high data integrity and reliability.

Who could build such a gateway?

I would say any of the major and some of the minor data storage players could easily do an S3 gateway if they desired. There are a couple of gateway startups (see link above) that have made a stab at it but none has it quite down pat or to the extent needed by the enterprise.

However, the problem with standalone gateways from other, non-Amazon vendors is that they could easily support other cloud storage platforms and most do. This is great for gateway suppliers but bad for Amazon’s market share.

So, I believe Amazon has to invest in its own storage gateway if they want to go after the enterprise. Of course, when they create an enterprise cloud storage gateway they will piss off all the other gateway providers and will signal their intention to target the enterprise storage market.

So who is the next winner in data storage – I have to believe it’s going to be, and already is, Amazon. Even if they don’t go after the enterprise, which I feel is the major prize, they have already carved out an unbreachable market share in a new way to implement and use storage. But when (not if) they go after the enterprise, they will threaten every major storage player.

Yes but what about others?

Arguably, Microsoft Azure is in a better position than Amazon to go after the enterprise. Since their acquisition of StorSimple last year, they already have a gateway that, with help, could be just what they need to provide enterprise class storage services using Azure. And they already have access to the enterprise, and already have the services, distribution and go-to-market capabilities that address enterprise needs and requirements. Maybe they have it all, but they are not yet at the scale of Amazon. Could they go after this – certainly, but will they?

Google is the other major unknown. They certainly have the capability to go after enterprise cloud storage if they want. They already have Google Cloud Storage, which is priced under Amazon’s S3 and provides similar services as far as I can tell. But they have even farther to go to get to the scale of Amazon. And they have less of the marketing, selling and service capabilities that are required to be an enterprise player. So I think they are the least likely of the big three cloud providers to be successful here.

There are many other players in cloud services that could make a play for enterprise cloud storage and emerge out of the pack, namely Rackspace, Savvis, Terremark and others. I suppose DropBox, Box and the other file sharing/collaboration providers might also be able to take a shot at it, if they wanted. But I am not sure any of them have enterprise storage on their radar just yet.

And I wouldn’t leave out the current major storage, networking and server players as they all could potentially go after enterprise cloud storage if they wanted to. And some are partly there already.

Comments?

 


Cheap phones + big data = better world

Facebook friend carrousel by antjeverena (cc) (from flickr)

Read an article today on the MIT Technology Review website (Big data from cheap phones) that shows how cheap phones, call detail records (CDRs) and other phone logs can be used to help fight disease and understand disaster impacts.

Cheap phones generate big data

In one example, researchers took cell phone data from Kenya and used it to plot people’s movements throughout the country. What they were looking for were people who frequented malaria hot spots, so that they could try to intervene in the transmission of the disease. Researchers discovered one region (cell tower) that had many people frequenting a particularly bad location for malaria.  It turned out the region they identified had a large plantation with many migrant workers, and these workers moved around a lot.  In order to reduce transmission of the disease, public health authorities could target this region with more bed nets or try to reduce infestation at the source of the disease.  In either case, people’s mobility was easier to see with cell phone data than by actually putting people on the ground and counting where people go or come from.
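
As a rough illustration of the kind of analysis involved, here’s a toy sketch that estimates which cell-tower regions send people to a known disease hot spot. The record layout, tower names and hot-spot choice are all hypothetical, and real CDR work is far more involved than this.

```python
# Toy movement analysis over call detail records (CDRs).
# Each record: (subscriber_id, cell_tower_id, timestamp) -- a hypothetical layout.
from collections import Counter, defaultdict

cdrs = [
    ("sub1", "tower_A", "2012-06-01T08:00"),
    ("sub1", "tower_B", "2012-06-01T18:00"),   # sub1 visits the hot spot
    ("sub2", "tower_A", "2012-06-01T09:00"),
    ("sub2", "tower_A", "2012-06-02T09:00"),
]
HOTSPOT = "tower_B"                            # assumed malaria hot-spot region

seen = defaultdict(Counter)                    # how often each subscriber hits each tower
visited_hotspot = set()
for sub, tower, _ts in cdrs:
    seen[sub][tower] += 1
    if tower == HOTSPOT:
        visited_hotspot.add(sub)

# "home" tower = the tower a subscriber is seen at most often;
# count how many hot-spot visitors come from each home region
flows = Counter(seen[sub].most_common(1)[0][0] for sub in visited_hotspot)
print(flows)   # e.g. Counter({'tower_A': 1}) -> region A feeds the hot spot
```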

In another example, researchers took cell phone data from Haiti before and after the earthquake and were able to calculate how many people were in the region hardest hit by the earthquake.  They were also able to identify how many people left the region and where they went.  As a follow on to this, researchers were able to show, in near real time, how many people had fled the cholera epidemic.

Gaining access to cheap phone data

Most of this call detail record data is limited to specific researchers for very specialized activities requested by the host countries. But recently Orange released 2.5 billion cell phone call and text records for five million of their customers in Ivory Coast, covering a five-month period.  They released the data to the public under some specific restrictions in order to see what data scientists could do with it. The papers detailing these activities will be published at an MIT Data for Development conference.

~~~~

Big data’s contribution to a better world is just beginning but from what we see here there’s real value in data that already exists, if only the data were made more widely available.

Comments?

Upverter, electronic design-as-a-service

Read a recent article on TechCrunch about Upverter, a cloud-based service supporting electronic hardware design and development.  The ultimate intent is to provide an electronic design as a service (EDaaS) offering that’s almost equivalent to the electronic design automation (EDA) tools available on the market today.

EDA tools available

I am no EDA expert but currently, they have some basic electronic design, simulation and build tools available.  These allow a person or an organization to design, simulate and build real electronic circuits, boards, etc. They even provide tools for motherboard routing and layout, as well as services to have a circuit manufactured.  And this all comes with a cloud-oriented electronic design versioning system, which seemed pretty slick.

The TechCrunch article had a video tour of the service (also available on their website).  I was especially impressed with the rollback/undo options on the electronic circuit design palette.  Seeing an electronic circuit being designed in an almost line-by-line build was interesting, to say the least.

Not sure if we are talking ASIC or FPGA design yet, but they certainly have the platform to support these tools if and when they develop them.  However, simulation time and cost might go off the charts for circuits including custom designed ASICs and FPGAs.

Everything seems to execute in the cloud and any EDA specifications reside in the cloud under their control as well. However, they do offer some tools to import EDA information from other tools and provide a JSON file format export of the EDA information you provide.

EDA service pricing

Pricing seemed pretty reasonable: $7/month for an individual part-timer, $99/month for a full-time user, and both of these include 10 CPU hours of simulation time and can be used on public and private projects.  Other pricing options are available for bigger teams and/or more part and full timers on a project.  I didn’t see any information on buying more simulation time, but I am sure that would be available.

And if you are just interested in working on public projects the price is FREE.

Open source electronic design

Now, I am no hardware design expert, but having such a cloud-based service that’s essentially free for public projects opens up a whole new dimension in hardware design. Open source electronic hardware wouldn’t be as easy to support/perform as open source software, but the advantages seem similar: open sourced PCIe card instrumentation, an open sourced X86 CPU, perhaps even an open sourced server.

For instance, in data storage alone I could foresee open sourced circuitry to perform NAND wear leveling, data compression and/or protocol handling, to name just a few.  Any of these might make it easier for companies and even individuals to create their own hardware accelerated storage systems.

Unclear what IP licensing requirements would be for open sourced hardware. I am certainly no lawyer but something akin to GPL might be required to help create the ecosystem of open sourced electronic design.

A new renaissance of hardware innovation?

Innovation in hardware design has always been harder, mostly because of the cost and time involved.  Now Upverter doesn’t seem to do much about the time involved, but it can have a bearing on the costs associated with electronic design if they can scale up their service to provide more sophisticated EDA tools.

Nonetheless, the advantages of hardware innovation are many and include speeding up processing by orders of magnitude over what can often be done in software alone. (For more please see our posts on Better storage through hardware, Commodity hardware always loses and Commodity hardware debates heat up again). So anything which can make hardware innovation easier to accomplish is a good thing in my book.

Also, having these sorts of tools available in the cloud opens up a whole array of educational opportunities never before available.  EDA tools were never cheap, and if schools had access to some of these they were often limited to only a few select students.  So with a cloud-based service that’s essentially free for open sourced circuit design, this should no longer be a problem.

Finally, I firmly believe having more hardware designers is a good thing, having the ability to contribute and collaborate on hardware design for free is a great thing and anything that makes it easier to innovate in electronic hardware design is an important step and deserves our support.

It appears that electronic design is undergoing a radical shift from an enterprise/organizational based endeavor back to something a single person can do from anywhere connected to the internet.  Some would say this is back to the roots of electronic design when this could all be done in a garage, with a soldering iron and some electronic componentry.

Comments?

Photo Credit: 439 – Circuit Board Texture by Patrick Hoesly

Big open data leads to citizen science

Read an article the other day in ScienceLine about the Astronomical Data Explosion.  It appears that as international observatories start to open up their archives and their astronomical data to anyone and anybody, people are starting to do useful science with it.

Hunting for planets

The story talked about a pair of amateur astronomers who were looking through Kepler telescope data which had recently been put online (see PlanetHunters.org) to find anomalies that signal the possibility of a planet.  They saw a dimming of a particular star’s brightness and then saw it again 132 days later. At that point they brought it to the attention of professional scientists, who later discovered that what they had found was a four-star solar system, which they labeled Tatooine.
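
To show what that kind of signal looks like, here’s a toy sketch of a light curve with a periodic dimming every 132 days and a crude way to flag the dips. The data below is synthetic and the parameters are assumed; real Kepler photometry is far noisier and is analyzed with much more careful statistics than this.

```python
# Synthetic light curve: a star's brightness dips slightly each time a planet
# transits, repeating at the orbital period (132 days assumed, as in the story).
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(0.0, 400.0, 0.5)                       # observation times (days)
flux = 1.0 + rng.normal(0, 0.0005, days.size)           # normalized brightness + noise
period, depth, width = 132.0, 0.01, 0.3                 # assumed transit parameters
flux[(days % period) < width] -= depth                  # inject the periodic dips

# crude detection: flag samples well below the typical brightness
dips = days[flux < np.median(flux) - 5 * 0.0005]
print("candidate transit times (days):", dips)          # ~0, 132, 264, 396
print("spacing between candidates:", np.diff(dips))     # clusters ~132 days apart
```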

It seems the latest astronomical observations coming in from the Kepler, Sloan Digital Sky Survey and Hubble observatories are generating a deluge of data. And although all this data is being subjected to intense scrutiny by professional astronomers, they can’t do everything they want to do with it.

Consequently, astronomy today has come to a new world of abundant data but not enough resources to do all the science that could be done.  This is where the citizen or amateur scientist enters the picture. Using standard, web-accessible tools they are able to subject the data to many more eyes, each looking for whatever interest spurs them on, and as such can often contribute real science from their efforts.

Citizen science platforms

It turns out PlanetHunters.org is one of a number of similar websites put up by Zooniverse to support citizen science in astronomy, biology, nature, climate and the humanities. Their latest project is to classify animals found in snapshots taken on the Serengeti (see SnapshotSerengeti.org).

Of course, crowdsourced scientific activity like this has been going on for a long time now, with BOINC projects like the SETI@Home screen saver that sifted through radio signals searching for extraterrestrial signals. But that made use of the extra desktop compute cycles people were wasting on screen savers.

In contrast, Zooniverse started with the GalaxyZoo project (original retired site here). They put Hubble telescope images online and asked for amateur astronomers to classify the type of galaxies found in the images.

GalaxyZoo had modest aspirations at first, but when they put the Hubble images online their servers were overwhelmed with the response and had to be beefed up considerably to deal with the traffic.  Over time, they were able to get literally millions of galaxy classifications. Now they want more, and the recent incarnation of GalaxyZoo has put the brightest 250K galaxies online, asking for even finer, more detailed classifications of them.

Today’s Zooniverse projects are taking advantage of recent large and expanding data repositories plus newer data visualization tools to help employ human analysis to their data.  Automated tools are not yet sophisticated enough to classify images as well as a human can.

One criterion for Zooniverse projects is to have a massive amount of data which needs to be classified.  In this way, science is once again returning to its amateur roots, but this time guided by professionals.  Together we can do more than either could do apart.

~~~~

I suppose it was only a matter of time before science got inundated with more data than it could process effectively.  Having the ability to put all this data online, parcel it out to concerned citizens and ask them to help understand/classify it has brought a new dawn to citizen science.

Comments?

Photo credits:
Twin Suns on Mos Espa by Stéfan
BONIC running SETI@Home by Keng Susumpow
Galaxy Group Stephan’s Quintet by HubbleColor {Zolt}

Coolest solar PV cells around

[Published the post early by mistake, this is a revised version] Read an article the other day on Inhabitat.com about a new solar array design from V3Solar. They are revolutionizing the mechanical configuration of solar PV cells. Their solution takes a systems-level view of the problem and attacks the solar PV issues from multiple angles, literally.

There are at least two problems with today’s flat, static solar PV arrays:

  1. Most static solar arrays work great around noon, but their efficiency trails off from there, supplying very little power at dawn or sundown and less than peak power 2-3 hours before and after local noon (see the sketch after this list). Solar arrays which track the sun can do better, but they require additional circuitry and motors to do so.
  2. Solar cells generate more power when sunlight is concentrated, but they can’t handle much heat. As such, most PV arrays are just flat panels behind flat glass or plastic sheets, with no magnification.
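
The sketch below is a rough, idealized illustration of the first problem: to first order a static panel’s output falls off with the cosine of the sun’s angle away from the panel normal, while a surface that always faces the sun stays near peak. Real panels have plenty of other loss factors, so treat the numbers as illustrative only.

```python
# Idealized output of a flat, static panel vs. hours from local noon,
# assuming a 12-hour day and output proportional to cos(sun angle off normal).
import math

def static_panel_output(hours_from_noon, day_length_hrs=12.0):
    angle = math.radians(180.0 * hours_from_noon / day_length_hrs)  # 0 degrees at noon
    return max(0.0, math.cos(angle))

for h in range(0, 7):
    print(f"{h} hrs from noon: static panel {static_panel_output(h):.2f} of peak, "
          f"sun-facing surface ~1.00")
```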

V3Solar has solved these and other problems with an ingenious new mechanical design that provides more power per PV cell surface area. Using a cone geometry, there is always a portion of the solar PV cells facing the sun.

The other interesting item about V3Solar’s new technology is that it spins. The main advantage this brings is that it automatically cools itself. In the graphic above there is a hard, transparent shell that surrounds the cone of solar PV cells. The transparent shell remains stationary while the inner cone with PV cells on it rotates, automatically cooling the PV cells. This is all better displayed in the YouTube video on their website.

Also, hard to see in the graphic above but depicted well in their video, there are a couple of linear lenses located around the transparent shell that concentrate the sunlight. This generates even more power from the cells while they are temporarily under a lens, but also heats them up. With the automatic cooling, this isn’t a problem anymore.

At the base of the cone a plate with an array of magnets is stationary but acts as a dynamo as the cone above it rotates with its own array of electromagnets. This automatically generates AC power and provides magnetic levitation for the rotating cone.

V3Solar has also patented a power pole which mounts multiple spin cells in a tree like configuration, to generate even more power. The pole’s spin cells would be mathematically located so as not to cast a shadow on the other spin cells on the pole.

V3Solar claims that their spin cell which has a footprint of a square meter generates 20X the power of an equivalent amount of flat solar panels (see their website for details). Besides the animation on their website they have a video of a prototype spin cell in operation.

As of September 24th, V3Solar has contracted with a Southern California company to complete the design and engineering of the spin cell. Which means you can’t buy it just yet. But I want one when they come out.

Some suggestions for improvements

  • Smaller cones and larger cones would make sense. Of course standardizing on one size and cone geometry makes it easier to manufacture. But having different sizes, say 1/2 meter square and 2 meters square, would provide some interesting possibilities and more diverse configurations/applications.
  • One would think that the cone geometry should vary for each different parallel or latitude, e.g., being flatter at the equator and narrower at the poles to gather even more sunlight.
  • V3Solar shows their spin cell cone in a vertical orientation. It would seem to me that there are just as many opportunities for other positions. Perhaps having the cone point directly south (in the northern hemisphere) or north (in the southern hemisphere), or even in a horizontal orientation. I was thinking of having a spin cell located on the back of a wind turbine, in a streamlined orientation (with the top of the cone facing the propeller), or even a double cone solution. This way you could combine the two forms of renewable energy in one combined unit.
  • Have the spin cells be able to float on the water in a self contained, ocean hardened configuration. Such devices could power a more sophisticated, advanced-function buoy. One could also construct a solar power generation facility that floats on the ocean.
  • Other geometries than just a cone come to mind. I suppose the nice part about the cone is that it’s planar. But other geometric solutions exist that satisfy this constraint. For example, a cylinder would work, but this time the angle of the cylinder spin would be based on the location. Solar efficiency could be easily boosted by just adding more PV cell surface area to the cylinder or connecting multiple cylinders together to form any length necessary. Such cylinders could be used as an outer casing for any pole. Another possibility is a spinning disk that could replace static flat solar panels to boost energy production.

Just brainstorming here but spinning solar cells open up a number of interesting possibilities.

Needless to say, I want one or more for my backyard. And where can I invest?

Comments?

~~~~

Image: (c) 2012 V3Solar from their website

Robots on the road

Just heard that California is about to start working on formal regulations for robot cars to travel its roads, making it the second state to regulate these autonomous machines; the first was Nevada.  At the moment the legislation signed into law requires CA to draft regulations for these vehicles by January 1, 2015.

I suppose being in the IT industry this shouldn’t be a surprise to me or anyone else. Google has been running autonomously driven vehicles for over 300K miles now.

But it always seems a bit jarring when something like this goes from testing to production, seems almost Jetsons like.  I remember seeing a video of something like this from Bell Labs/GM Labs or somebody like that when they were talking about the future, way back in the 60s of the last century.  Gosh, only 50 years later and it’s almost here.

DARPA Grand Challenges spurred it on

Of course it all probably started in the late 70s when AI was just firing up.  But robot cars seemed to really take off when DARPA, back in 2004, wanted to push the technology to develop an autonomous vehicle for the DOD. They funded and created the DARPA Grand Challenge.

In 2004 the requirements were to drive over 150 miles (240 km) in and around the Mojave desert in southwestern USA. In that first year, none of the vehicles managed to finish the distance.  Over the next few years, the course got more difficult, the prize money increased, and the vehicles got a lot smarter.

The 2005 DARPA Grand Challenge was once again in a rural setting, and 5 vehicles finished the course: 1 from Stanford, 2 from Carnegie Mellon (CMU), 1 from Oshkosh Trucking, and the other from Gray’s Insurance Company.  At first I thought, an insurance company? Then it hit me, maybe there’s a connection to auto insurance.

DARPA’s next challenge, for 2007, was for an urban driving environment, but this time DARPA provided research funding to a select group as well as a larger prize to any winners.  Six teams were able to finish the Urban Challenge: 1 each from CMU, Stanford, Virginia Tech, MIT, University of Pennsylvania & Lehigh University, and Cornell University.  That was the last DARPA challenge for autonomous vehicles; it seems they had what they wanted.

Google’s streetview helped

Sometime around 2010, Google started working with self-driving cars to provide some of the streetview shots they needed.  Shortly thereafter they had logged ~140K miles with them.  Fast forward a couple of years and Google’s Sergey Brin was claiming that people will be driving in robotic cars in 5 years. To get their self-driving cars up and running they hired the leaders of both the CMU and Stanford teams, as well as somebody who worked on the first autonomous motorcycle which ran in the Urban Challenge.

For all of the 300K miles they currently have logged, the cars were manned by a safety driver and a software engineer, just for safety reasons.  Also, local police were notified that the car would be in their area.  Before the autonomous car took off, another car, this one driven by a human, was sent out to map the route in detail, including all traffic signs, signals, lane markers, etc.  This was then (presumably) uploaded to the self-driving car, which followed the same exact route.

I couldn’t find a detailed hardware list, but Google’s blog post on the start of the project indicated computers (maybe 2 for HA), multiple cameras, infrared sensors, laser rangefinders, radar, and probably multiple servos (gear shift, steering, accelerator and brake pedals), all fitted to Toyota Prius cars.  Although the servos may no longer be as necessary, as many new cars use drive-by-wire for some of these functions.

Monetization?

I could imagine quite a few ways to monetize self-driving, robotic cars:

  • License the service to the major auto and truck manufacturers around the world, with the additional hardware either supplied as a car/truck option (probably at first) or provided on all cars/trucks (probably a ways down the line).
  • Cars/trucks would need computer screens for the driving console as well as probably for entertainment.  Possibly advertisements on these screens could be used to offset some of the licensing/hardware costs.
  • Insurance companies may wish to subsidize the cost of the system, especially if the cars could reduce accidents. It would then have a positive ROI just for accident reduction alone, let alone saving lives.
  • In-car internet would need to be more available (see below). This would no doubt be based on 4G or whatever cellular technology comes along next. Maybe the mobile phone companies would want to help subsidize this service, like they do for phones, if you had to sign a contract for a couple of years. I am thinking the detailed maps required for self-driving might require more bandwidth than Google Maps does today, which could help chew up those bandwidth limits.
  • With all these sensors, it’s quite possible that self-driving cars, when being driven by humans, could be used to map new routes.  If you elected to provide these sorts of services then maybe one could also get something of a kickback.

I assume the robotic cars need Internet access, but nothing I read says so for sure. Maybe they could get by without Internet access if they just used manual driving mode for those sections of travel which lacked it.  Perhaps the cars could download the route before they went into self-driving mode, and that way, if you kept to the plan, you would be ok.

Other uses of robotic cars

Of course with all these Internet enabled cars, tollways and city centers could readily establish new congestion based pricing.  Police could potentially override a car and cause it to pull over, automatically without the driver being able to stop it.  Traffic data would be much more available, more detailed, and more real time than it is already.  All these additional services could help to offset the cost of the HW and licensing of the self-driving service.

The original reason for the DARPA grand challenge was to provide a way to get troops and/or equipment from one place to another without soldiers having to drive the whole way there.  Today, this is still a dream but if self-driving cars become a reality in 5 years or so, I would think the DOD could have something deployed before then.

~~~~

If the self-driving car maps require more detailed information than today’s GPS maps, there’s probably a storage angle here, both in the car and at some centralized data center(s) located around a country.  If the cars could also be used to map new routes, perhaps even a skosh more storage would be required in the car.
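
Just to put a very rough number on that storage angle, here’s a back-of-envelope sketch; every figure in it (MB of map detail per road mile, total road miles, daily driving range) is purely an assumption of mine, not anything Google has published.

```python
# Purely illustrative back-of-envelope estimate -- all inputs are assumptions.
mb_per_mile = 10            # assumed: detailed maps + signs/signals/lane markers per road mile
us_road_miles = 4_000_000   # rough order of magnitude for US public road mileage
central_store_tb = mb_per_mile * us_road_miles / 1_000_000
print(f"centralized map store: ~{central_store_tb:.0f} TB")   # ~40 TB at these assumptions

daily_drive_miles = 200     # assumed in-car working set for a long day of driving
in_car_gb = mb_per_mile * daily_drive_miles / 1_000
print(f"in-car working set: ~{in_car_gb:.0f} GB")             # ~2 GB at these assumptions
```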

Just imagine driving cross country and being able to sleep most of the way, all by yourself with your self-driving car.  Now if they could only make a port-a-potty that would fit inside a sedan I would be all set to go…, literally 🙂

Comments?

Image: Google streetview self-driving car by DoNotLick

 

VMworld first thoughts kickoff session

[Edited for readability. RLL] The drummer band was great at the start, but we couldn’t tell if it was real or lip-synced. It turned out that each of the big VMWORLD letters had a digital drum pad on it, which meant it was live, in real time.

Paul got a standing ovation as he left the stage, introducing Pat, the new CEO.  With Paul on the stage, there was much discussion of how far VMware has come in the last four years.  IDC stats probably say it better than most: in 2008 about 25% of Intel X86 apps were virtualized, in 2012 it’s about 60%, and Gartner says that VMware has about 80% of that activity.

Pat got up on stage and it was like nothing’s changed. VMware is still going down the path they believe is best for the world: a virtual data center that spans private, on-premises equipment and external cloud service providers’ equipment.

There was much ink on the software-defined data center, which takes the vSphere world view and incorporates networking, more storage and more infrastructure into the already present virtualized management paradigm.

It’s a bit murky as to what’s changed, what’s acquired functionality and what’s new development but suffice it to say that VMware has been busy once again this year.

A single “monster VM” (it has its own Facebook page) now supports up to 64 vCPUs, 1TB of RAM, and can sustain more than a million IOPS. It seems that this should be enough for most mission critical apps out there today. No statement on the latency of those IOPS, but with a million IOs a second and 64 vCPUs we are probably talking flash somewhere in the storage hierarchy.

Pat mentioned that the vRAM concept is now officially dead. And the pricing model is now based on physical CPUs and sockets. It no longer has a VM or vRAM component to it. Seemed like this got lots of applause.

There are now so many components to the vCloud Suite that it’s almost hard to keep track of them all: vCloud Director, vCloud Orchestrator, vFabric Application Director, vCenter Operations Manager, and of course vSphere, and that’s not counting relatively recent acquisitions DynamicOps, a cloud dashboard, and Nicira SDN services; and I am probably missing some of them.

In addition to all that, VMware has been working on Serengeti, which is a layer added to vSphere to virtualize Hadoop clusters. In the demo they spun up and down a Hadoop cluster with MapReduce operating to process log files.  (I want one of these for my home office environment).

They showed another demo of the vCloud Suite in action, spinning up a cloud data center and deploying applications to it in real time. Literally, it took ~5 minutes from starting it up until they were deploying applications to it.  It was a bit hard to follow as it went a lot into the WAN-like networking environment configuration of load balancing, firewalls and other edge security, and workload characteristics, but it all seemed pretty straightforward and configured an actual cloud in minutes.

I missed the last part about Socialcast, but apparently it builds a social network around VMs?  [Need to listen better next time]

More to follow…

 

Roads to R&D success – part 2

This is the second part of a multi-part post.  In part one (found here) we spent some time going over some prime examples of corporations that generated outsize success from their R&D activities, highlighting AT&T with Bell Labs, IBM with IBM Research, and Apple.

I see two viable models for outsized organic R&D success:

  • One is based on a visionary organizational structure which creates an independent R&D lab.  IBM has IBM Research, AT&T had Bell Labs, other major companies have their research entities.  These typically have independent funding not tied to business projects, broadly defined research objectives, and little to no direct business accountability.  Such organizations can pursue basic research and/or advanced technology wherever it may lead.
  • The other is based on visionary leadership, where a corporation identifies a future need, turns completely to focus on the new market, devotes whatever resources it needs and does a complete forced march towards getting a product out the door.  While these projects sometimes have stage gates, more often than not, they just tell the project what needs to be done next, and where resources are coming from.

The funny thing is that both approaches have changed the world.  Visionary leadership typically generates more profit in a short time period. But visionary organizations often outlast any one person and in the long run may generate significant corporate profits.

The challenges of Visionary Leadership

Visionary leadership balances broad technological insight with a design aesthetic that includes a deep understanding of what’s possible within a corporate environment. Combine all that with an understanding of what’s needed in some market and you have a combination that reconstructs industries.

Visionary leadership is hard to find.  Leaders like Bill Hewlett, Akio Morita and Bill Gates seem to come out of nowhere, dramatically transform multiple industries and then fade away.  Their corporations don’t ever do as well after such leaders are gone.

Often visionary leaders come up out of the technical ranks.  This gives them the broad technical knowledge needed to identify product opportunities when they occur.   But, this technological ability also helps them to push development teams beyond what they thought feasible.  Also, the broad technical underpinnings gives them an understanding of how different pieces of technology can come together into a system needed by new markets.

Design aesthetic is harder to nail down.  In my view, it’s intrinsic to understanding what a market needs and where a market is going.  Perhaps this would be better understood as marketing foresight.  Maybe it’s just the ability to foresee how a potential product fits into a market.  At some deep level, this is the essence of design excellence in my mind.

The other aspect of visionary leaders is that they can do it all, from development to marketing to sales to finance.  But what sets them apart is that they integrate all these disciplines into a single or perhaps a pair of individuals.  Equally important, they can recognize excellence in others.  As such, when failures occur, visionary leaders can decipher the difference between bad luck and poor performance and act accordingly.

Finally, most visionary leaders are deeply immersed in the markets they serve or are about to transform.  They understand what’s happening, what’s needed and where it could potentially go if they just apply the right technologies to it.

When you combine all these characteristics in one or a pair of individuals, with corporate resources behind them, they move markets.

The challenges of Visionary Organizations

On the other hand, visionary organizations that create independent research labs can live forever, as long as they continue to produce viable IP.  Corporate research labs must balance an ongoing commitment to advance basic research against a need to move a corporation’s technology forward.

That’s not to say that the technology they work on doesn’t have business applications.  In some cases, they create entire new lines of business, such as Watson from IBM Research.  However, most research may never reach corporate products. Nonetheless, research labs always generate copious IP which can often be licensed and may represent a significant revenue stream in its own right.

The trick for any independent research organization is to balance the pursuit of basic science within broad corporate interests, recognizing research with potential product applications, and guiding that research into technology development.  IBM seems to have turned their research arm around by rotating some of their young scientists out into the field to see what business is trying to accomplish.  When they return to their labs, often their research takes on some of the problems they noticed during their field experience.

How much to fund such endeavors is another critical factor.  There seems to be a size effect: I have noticed small research arms, of less than 20 people, that seem to flounder, going after the trend of the moment and failing to generate any useful IP.

In comparison, IBM Research is well funded (~6% of 2010 corporate revenue) with over 3000 researchers (out of a total employee population of 400K) in 8 labs.  The one lab highlighted in the article above (Zurich) had 350 researchers covering 5 focus areas, or ~70 researchers per area.

Most research labs augment their activities by performing joint research projects with university researchers and other collaborators. This can have the effect of multiplying research endeavors but often it will take some funding to accomplish and get off the ground.

Research labs often lose their way and seem to spend significant funds on less rewarding activities.  But by balancing basic science with corporate interests, they can become very valuable to corporations.

~~~~

In part 3 of this series we discuss the advantages and disadvantages of startup acquisitions and how they can help and hinder a company’s R&D effectiveness.

Image: IBM System/360 by Marcin Wichary