Coolest solar PV cells around

[Published the post early by mistake, this is a revised version] Read an article the other day on Inhabitat.com about a new solar array design from V3Solar that rethinks the mechanical configuration of solar PV cells. Their solution takes a systems-level view of the problem and attacks solar PV issues from multiple angles, literally.

There are at least two problems with today’s flat, static solar PV arrays:

  1. Most static solar arrays work great around noon, but their output trails off from there, supplying very little power at dawn or sundown and less than peak power 2-3 hours before and after local noon (see the back-of-envelope sketch after this list). Solar arrays that track the sun can do better, but they require additional circuitry and motors to do so.
  2. Solar cells generate more power when sunlight is concentrated, but they can’t handle much heat. As such, most PV arrays are just flat panels behind flat glass or plastic sheets, with no magnification.
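
To make the first problem concrete, here is a back-of-envelope sketch in Python (purely illustrative; the horizontal-panel and 12-hour-day assumptions are mine, not V3Solar’s) of how a fixed flat panel’s output falls off away from noon, since output scales roughly with the cosine of the angle between the sun and the panel’s normal.

```python
import math

# Toy model: a fixed, horizontal flat panel over a 12-hour day (6am-6pm).
# Output is proportional to cos(angle between sun and panel normal),
# which for a horizontal panel is sin(solar elevation).
def relative_output(hour):
    # Sun elevation climbs from 0 deg at 6am to 90 deg at noon, back to 0 at 6pm
    elevation = math.radians(90 * (1 - abs(hour - 12) / 6))
    return max(0.0, math.sin(elevation))

for hour in [6, 8, 9, 10, 12, 14, 15, 16, 18]:
    print(f"{hour:02d}:00  ~{relative_output(hour) * 100:4.0f}% of peak")

# Roughly: ~50% of peak at 8am/4pm, ~71% at 9am/3pm, 100% only near noon --
# which is why a surface that always presents some area square to the sun is attractive.
```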

V3Solar has solved these and other problems with an ingenious new mechanical design that provides more power per PV cell surface area. With a cone geometry, some portion of the solar PV cells always faces the sun.

The other interesting item about V3Solar’s new technology is that it spins, and the main advantage this brings is that it automatically cools itself. In the graphic above there is a hard transparent shell that surrounds the cone of solar PV cells. The transparent shell remains stationary while the inner cone with the PV cells on it rotates, automatically cooling the cells. This is all better displayed in the YouTube video on their website.

Also, hard to see in the graphic above but depicted well in their video, a couple of linear lenses are located around the transparent shell to concentrate the sunlight. This generates even more power from the cells while they are temporarily under a lens, but it also heats them up. With the automatic cooling, this is no longer a problem.

At the base of the cone, a stationary plate with an array of magnets acts as a dynamo as the cone above it rotates with its own array of electromagnets. This automatically generates AC power and provides magnetic levitation for the rotating cone.
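
If the base really does work as an alternator, the AC frequency it produces follows directly from the rotation speed and the number of magnetic pole pairs. A tiny sketch (the rpm and pole counts below are made-up numbers, not V3Solar specs):

```python
def ac_frequency_hz(rpm, pole_pairs):
    """Electrical frequency of a rotating-magnet generator:
    revolutions per second times the number of magnetic pole pairs."""
    return (rpm / 60.0) * pole_pairs

# Hypothetical examples only -- V3Solar hasn't published rpm or pole counts.
for rpm, pole_pairs in [(600, 6), (1200, 3), (1800, 2)]:
    print(f"{rpm} rpm with {pole_pairs} pole pairs -> {ac_frequency_hz(rpm, pole_pairs):.0f} Hz")
```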

V3Solar has also patented a power pole which mounts multiple spin cells in a tree-like configuration to generate even more power. The pole’s spin cells would be mathematically positioned so as not to cast a shadow on the other spin cells on the pole.

V3Solar claims that their spin cell, which has a footprint of about a square meter, generates 20X the power of an equivalent area of flat solar panels (see their website for details). Besides the animation on their website, they have a video of a prototype spin cell in operation.

As of September 24th, V3Solar has contracted with a Southern California company to complete the design and engineering of the spin cell, which means you can’t buy one just yet. But I want one when they come out.

Some suggestions for improvements

  • Smaller and larger cones would make sense. Of course, standardizing on one size and cone geometry makes manufacturing easier, but having different sizes, say 1/2 square meter and 2 square meters, would provide some interesting possibilities and more diverse configurations/applications.
  • One would think that the cone geometry should vary by latitude, e.g., being flatter at the equator and narrower toward the poles, to gather even more sunlight (a rough geometry sketch follows this list).
  • V3Solar shows their spin cell cone in a vertical orientation, but it would seem to me there are just as many opportunities for other positions. Perhaps the cone could point directly south (in the northern hemisphere), north (in the southern hemisphere), or even sit horizontally. I was thinking of a spin cell located on the back of a wind turbine in a streamlined orientation (with the top of the cone facing the propeller), or even a double-cone solution. This way you could combine two forms of renewable energy in one unit.
  • Have the spin cells float on water in a self-contained, ocean-hardened configuration. Such devices could power a more sophisticated, advanced buoy. One could also construct a solar power generation facility that floats on the ocean.
  • Other geometries than just a cone come to mind. I suppose the nice part about the cone is that it can be covered with flat (planar) PV cells, but other geometric solutions satisfy that constraint. For example, a cylinder would work, with the angle of the cylinder’s spin axis based on the location. Solar output could easily be boosted by just adding more PV cell surface area to the cylinder or connecting multiple cylinders together to form any length necessary. Such cylinders could be used as an outer casing for any pole. Another possibility is a spinning disk that could replace static flat solar panels to boost energy production.
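
On the latitude point above, here is a rough sketch of the geometry (my own back-of-envelope, not V3Solar’s design math): at solar noon the sun’s elevation is roughly 90° minus the site latitude (adjusted by the seasonal declination), so the cone slope that meets the noon sun head-on changes with latitude, flatter near the equator and narrower toward the poles.

```python
def noon_sun_elevation(latitude_deg, declination_deg=0.0):
    """Approximate solar elevation at local noon, in degrees.
    Declination is ~ +23.4 at the June solstice, -23.4 in December, 0 at the equinoxes."""
    return 90.0 - abs(latitude_deg - declination_deg)

def cone_half_angle_for_noon_sun(latitude_deg):
    """Toy heuristic: choose the cone's half-angle (surface slope measured from the
    vertical axis) so the sun-facing side is square to the noon sun at the equinox.
    A larger half-angle means a flatter cone."""
    return noon_sun_elevation(latitude_deg)

for lat in [0, 20, 40, 60]:
    print(f"latitude {lat:2d} deg: noon sun ~{noon_sun_elevation(lat):.0f} deg up, "
          f"suggested half-angle ~{cone_half_angle_for_noon_sun(lat):.0f} deg (flatter near the equator)")
```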

Just brainstorming here, but spinning solar cells open up a number of interesting possibilities.

Needless to say, I want one or more for my backyard. And where can I invest?

Comments?

~~~~

Image: (c) 2012 V3Solar from their website

Roads to R&D success – part 2

This is the second part of a multi-part post.  In part one (found here) we spent some time going over some prime examples of corporations that generated outsize success from their R&D activities, highlighting AT&T with Bell Labs, IBM with IBM Research, and Apple.

I see two viable models for outsized organic R&D success:

  • One is based on a visionary organizational structure which creates an independent R&D lab.  IBM has IBM Research, AT&T had Bell Labs, other major companies have their research entities.  These typically have independent funding not tied to business projects, broadly defined research objectives, and little to no direct business accountability.  Such organizations can pursue basic research and/or advanced technology wherever it may lead.
  • The other is based on visionary leadership, where a corporation identifies a future need, turns completely to focus on the new market, devotes whatever resources it needs and does a complete forced march towards getting a product out the door.  While these projects sometimes have stage gates, more often than not the leaders just tell the project team what needs to be done next and where resources are coming from.

The funny thing is that both approaches have changed the world.  Visionary leadership typically generates more profit in a short time period. But visionary organizations often outlast any one person and in the long run may generate significant corporate profits.

The challenges of Visionary Leadership

Visionary leadership balances broad technological insight with a design aesthetic that includes a deep understanding of what’s possible within a corporate environment. Combine all that with an understanding of what’s needed in some market and you have a combination that reconstructs industries.

Visionary leadership is hard to find.  Leaders like Bill Hewlett, Akio Morita and Bill Gates seem to come out of nowhere, dramatically transform multiple industries and then fade away.  Their corporations don’t ever do as well after such leaders are gone.

Often visionary leaders come up out of the technical ranks.  This gives them the broad technical knowledge needed to identify product opportunities when they occur.  This technological ability also helps them push development teams beyond what they thought feasible.  And their broad technical underpinnings give them an understanding of how different pieces of technology can come together into a system needed by new markets.

Design aesthetic is harder to nail down.  In my view, it’s intrinsic to understanding what a market needs and where a market is going.  Perhaps this should be better understood as marketing foresight.  Maybe it’s just the ability to foresee how a potential product fits into a market.  At some deep level, this is the essence of design excellence in my mind.

The other aspect of visionary leaders is that they can do it all, from development to marketing to sales to finance.  What sets them apart is that they integrate all these disciplines into a single individual or perhaps a pair of individuals.  Equally important, they can recognize excellence in others.  As such, when failures occur, visionary leaders can decipher the difference between bad luck and poor performance and act accordingly.

Finally, most visionary leaders are deeply immersed in the markets they serve or are about to transform.  They understand what’s happening, what’s needed and where the market could potentially go if the right technologies are applied to it.

When you combine all these characteristics in one or a pair of individuals, with corporate resources behind them, they move markets.

The challenges of Visionary Organizations

On the other hand, visionary organizations that create independent research labs can live forever, as long as they continue to produce viable IP.  Corporate research labs must balance an ongoing commitment to advance basic research against a need to move a corporation’s technology forward.

That’s not to say that the technology they work on doesn’t have business applications.  In some cases, they create entirely new lines of business, such as Watson from IBM Research.  However, most research may never reach corporate products.  Nonetheless, research labs always generate copious IP, which can often be licensed and may represent a significant revenue stream in its own right.

The trick for any independent research organization is to balance the pursuit of basic science within broad corporate interests, recognize research with potential product applications, and guide that research into technology development.  IBM seems to have turned their research arm around by rotating some of their young scientists out into the field to see what business is trying to accomplish.  When they return to their labs, their research often takes on some of the problems they noticed during their field experience.

How much to fund such endeavors is another critical factor.  There seems to be a size effect: I have noticed small research arms of fewer than 20 people that flounder, chasing the trend of the moment and failing to generate any useful IP.

In comparison, IBM Research is well funded (~6% of 2010 corporate revenue), with over 3,000 researchers (out of a total employee population of ~400K) across 8 labs.  The one lab highlighted in the article above (Zurich) had 350 researchers covering 5 focus areas, or ~70 researchers per area.

Most research labs augment their activities by performing joint research projects with university researchers and other collaborators. This can multiply research endeavors, but it often takes some funding to get off the ground.

Research labs often lose their way and seem to spend significant funds on less rewarding activities.  But by balancing basic science with corporate interests, they can become very valuable to corporations.

~~~~

In part 3 of this series we discuss the advantages and disadvantages of startup acquisitions and how they can help and hinder a company’s R&D effectiveness.

Image: IBM System/360 by Marcin Wichary

Roads to R&D success – part 1

Large corporations have a serious problem.  We have talked about this before (see Is M&A the only way to grow, R&D Effectiveness, and Technology innovation).

It’s been brewing for years, some say decades. Successful companies generate lots of cash, but investing in current lines of business seldom propels corporations into new markets.

So what can they do?

  • Buy startups – yes, doing so can move corporations into new markets and obtain new technology and perhaps even a functioning product.  However, they often end up buying unproven technology, asymmetrical organizations and mistaken ROIs.
  • Invest internally – yes, they can certainly start new projects, give them resources and let them run their course.  However, they burden most internal project teams with higher overhead, functioning perfection, and loftier justification.

Another approach trumpeted by Cisco and others in recent years is spin-out/spin-in, which is probably a little of both.  Here a company provides funding, developers, and even IP to an entity that is spun out of the company.  The spin-out is dedicated to producing some product in a designated new market and then, if goals are met, can be spun back into the company at a high but fair price.

The most recent example is Cisco’s spin-in Insieme, which is going after SDN and OpenFlow networking, but their prior success with Andiamo and its FC SAN technology is another one.  GE, Intel and others have also tried this approach with somewhat less success.

Corporate R&D today

Most companies have engineering departments with a tried and true project management/development team approach that has stage gates, generates requirements, architects systems, designs components and finally develops products.  It’s a staid, steady project cycle which nevertheless is fraught with traps, risks and detours.  These sorts of projects seem only able to enhance current product lines and move products forward to compete in their current markets.

But these projects never seem transformative.  They don’t take a company from 25% to 75% market share or triple corporate revenues in a decade.  They typically fight a rear-guard action against a flotilla of competitors all going after the same market, at worst trying not to lose market share and at best gaining modest market share, where possible.

How corporations succeed at internal R&D

But there are a few different models that have generated outsized internal R&D success in the past.  These generally fall into a few typical patterns.  We discuss two below.

One depends on visionary leadership and the other on visionary organizations.  For example, let’s look at IBM, AT&T’s Bell Labs and Apple.

IBM R&D in the past and today

First, examine IBM, whose CEO, Thomas J. Watson Jr., bet the company on System/360 from 1959 to 1964.  That endeavor cost them ~$5B at the time but eventually catapulted them from one of many computer companies to almost a mainframe monopoly for two decades.  They created an innovative microcoded CISC architecture that spanned a family of system models and standardized I/O with common peripherals.  From that point on, IBM was able to dominate corporate data processing until the mid-1980s.  IBM has arguably lost and found their way a couple of times since then.

However, as another approach to innovation, IBM Research was founded in 1945.  Today IBM Research is a well-funded, independent research lab that generates significant IP in supercomputing, artificial intelligence and semiconductor technology.

Nonetheless, during the decades since 1945, IBM Research struggled for corporate relevance, occasionally coming out with significant IT technology like relational databases, thin-film recording heads, and RISC architectures. But arguably such advances were put to better use outside IBM.  Recently, this seems to have changed, and we now see significant technology from IBM Research moving IBM into new markets.

AT&T and Bell Labs

Bell Labs is probably the most prolific research organization the world has seen.  They invented statistical process control, the transistor, information theory and probably another dozen or so Nobel-prize-winning ideas. Early on, most of their technology made it into the Bell System, but later on they lost their way.

Their parent company, AT&T, had a monopoly on long-distance phone service, switching equipment and other key technologies in the USA’s phone system for much of the twentieth century.  During most of that time Bell Labs was well funded and charged with advancing Bell System technology.

Nonetheless, despite Bell Labs’ obvious technological success, in the end they mostly served to preserve and enhance the phone system rather than disrupt it.  Some of this was due to Justice Department decrees limiting AT&T endeavors. But in any case, like IBM Research, much of Bell Labs’ technology was taken up by others and transformed many markets.

Apple yesterday and today

Then there’s Apple. They have almost single-handedly created three separate markets – the personal computer, the personal music player and the tablet computer – while radically transforming the smartphone market as well.  In every case there were significant precursors to the technology, but Apple was the one to catalyze, popularize and capitalize on each one.

The Apple II was arguably the first personal computer, but the Macintosh redefined the paradigm.  The Mac wasn’t the great success it could have been, mostly due to management changes that moved Jobs out of Apple.  But its potential forced major competitors to change their products substantially.

When Jobs returned, he re-invigorated the Mac.  After that, he went about re-inventing the music player, the smart phone and tablet computing.

Could Apple have done all this without Jobs? I doubt it.  Could a startup have taken any of these on? Perhaps, but I think it unlikely.

The iPod depended on music industry contracts, back office and desktop software and deep technological acumen.  None of these were exclusive to Apple or to big corporations.  Nevertheless, Jobs saw the way forward first, put the effort into making it happen, and Apple reaped the substantial rewards that ensued.

~~~~

In part 2 of Roads to R&D success we propose some options for how to turn corporate R&D into the serious profit generator it can become.  Stay tuned.

To be continued …

Image: Replica of first transistor from Wikipedia

 

Software defined radio hits the market

[Sorry, published this post early, final version below]

Phi card (c) 2012 Per Vices (off their website)

A couple of years ago I was at an IEEE technology conference and heard a presentation on software defined radios (SDR).  At the time, the focus was on military applications, where a number of different radio frequencies were used by different organizations. The military and other services wanted a single piece of hardware with SDR that could talk over any frequency band that was currently being used.

Over time I heard nothing more about this technology until today, when I read an Ars Technica article on an SDR startup company, Per Vices, and their Phi SDR.  We have recently posted on OpenFlow and its software defined networking; SDR takes that flexibility and applies it to radio.

Looking at the hardware, it’s still primarily for hobbyists and engineers, with an RF daughter card, a computer card and the main box.  It’s available as a PCIe card or comes in a kit. But it’s a start.

It’s not too much today, but if it can be shrunk and become more widely available, any smartphone could be a multi-network phone right out of the box.  Signing up for AT&T, Verizon, Sprint and others would be as easy as toggling a setting and letting the SDR do the rest.

More to come

Not only that, but with SDR, that same smartphone could act as an AM/FM/shortwave radio, a multi-band walkie-talkie, even its own radio station for any and all frequency bands.  Not to mention including a WiFi hot spot, Bluetooth, RFID, and near-field transceiver just as easily as the other bands, all in the same mobile phone without any specialized hardware other than the shrunken RF gear.

Currently the iPhone and other smartphones require separate hardware for each of these radio technologies and at best only do some of this.  But with SDR and appropriate RF gear, all this could be done over the same hardware, all within the smartphone itself, and it could be just as easy as changing a setting.  I could see a radio station app in my future when SDR is here.

Another possibility

I have often wondered why smartphones don’t form a mesh network, with the phones closest to a cell tower offering telecom access and the phones farther away using closer ones as a sort of on/off-ramp.  One reason for no mesh support is that it would take more phone processing and energy to do it.  Without any compensation, who would volunteer their phone to do it?  But with SDR, standardized protocols could be developed together with mobile micropayment options, which would allow phone users to be compensated for providing mesh services, and everyone gains.
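
To make the idea slightly more concrete, here is a toy sketch of the bookkeeping such a scheme would need (entirely hypothetical – the distances, credit amounts and relay-selection rule are mine, not any real carrier protocol): phones out of tower range relay through the nearest in-range phone, and each relay hop earns a micropayment credit.

```python
# Toy mesh-relay model: phones within TOWER_RANGE reach the tower directly;
# others relay through the nearest in-range phone and owe it a small credit.
# Purely illustrative -- real mesh routing and billing would be far more involved.

TOWER_RANGE = 5.0        # arbitrary distance units
CREDIT_PER_RELAY = 0.01  # hypothetical micropayment per relayed call

phones = {"A": 2.0, "B": 4.5, "C": 7.0, "D": 9.5}  # distance from the tower
credits = {name: 0.0 for name in phones}

for name, dist in phones.items():
    if dist <= TOWER_RANGE:
        print(f"{name}: talks to the tower directly")
    else:
        # pick the nearest in-range phone to act as the on/off-ramp
        relay = min((n for n, d in phones.items() if d <= TOWER_RANGE),
                    key=lambda n: abs(phones[n] - dist))
        credits[relay] += CREDIT_PER_RELAY
        print(f"{name}: relays via {relay} and owes it {CREDIT_PER_RELAY} credit")

print("relay credits earned:", credits)
```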

Open source radio

Of course, the other thing is that with SDR the radio logic is now software, which can be open sourced and tweaked to do just about anything an engineer wants.  This would really open up the radio spectrum to all sorts of new possibilities.
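
To illustrate what radio logic in software actually looks like, here is a minimal sketch (assuming numpy is available; this is my own illustration, not Per Vices code) that builds an AM signal and demodulates it entirely in software. Changing the modulation scheme means changing this code, which is the whole point of SDR.

```python
import numpy as np

fs = 48_000                       # sample rate (Hz)
t = np.arange(fs) / fs            # one second of samples
carrier_hz, tone_hz = 10_000, 440

# Transmit side: amplitude-modulate a 440 Hz tone onto a 10 kHz carrier
message = np.sin(2 * np.pi * tone_hz * t)
am_signal = (1 + 0.5 * message) * np.cos(2 * np.pi * carrier_hz * t)

# Receive side: envelope detection -- rectify, then low-pass with a 1 ms moving average
rectified = np.abs(am_signal)
kernel = np.ones(48) / 48
envelope = np.convolve(rectified, kernel, mode="same")
recovered = envelope - envelope.mean()    # strip the DC offset

# Crude sanity check: the recovered audio should track the original tone
similarity = np.corrcoef(recovered, message)[0, 1]
print(f"correlation between recovered audio and original tone: {similarity:.2f}")
```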

The FCC and other regulatory agencies might have some concerns about this.  But if some spectrum could be set aside for this sort of experimentation, I am sure the world would be better off for it.

~~~~

Per Vices compares themselves to Apple and its Apple I, which was just a computer card with no software for hobbyists to play with.  Given where they are today, it certainly is an apt description.

They just need to take it to the next step and make the Apple II version of SDR: a complete package, with software and hardware, where anyone could construct their own radio. Then the next step is to create the Macintosh of radios, where everyone could use it for radio services, and they could conquer radio.

Comments?

EMC buys XtremIO

Wow, $430M for a $25M startup that’s been around since 2009 and hasn’t generated any revenue yet.  It probably compares well against Facebook’s recent $1B acquisition of Instagram, but it still seems a bit much.

It certainly signals a significant ongoing interest in flash storage in whatever form that takes. Currently EMC offers PCIe flash storage (VFCache), SSD options in VMAX and VNX, and has plans for a shared flash cache array (project: Thunder).  An all-flash storage array makes a lot of sense if you believe this represents an architecture that can grab market share in storage.

I have talked with XtremIO in the past, but they were pretty stealthy then (and still are, as far as I can tell). There were not many details about their product architecture, performance specs, interfaces or anything substantive. The only thing they told me then was that they were in the flash array storage business.

In a presentation to SNIA’s BOD last summer I said that the storage industry is in revolution.  When a system of 20 or so devices can generate ~250K or more IO/second with a single controller, simple interfaces, and solid state drives, we are not in Kansas anymore.

Can a million-IOPS storage system be far behind?
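
A quick back-of-envelope from the numbers above (my arithmetic, not anyone’s spec sheet): ~250K IO/s across ~20 devices is roughly 12.5K IO/s per SSD, so a million IOPS looks mostly like a question of scaling devices and controllers.

```python
# Back-of-envelope using the figures quoted above -- rough illustration only.
system_iops = 250_000
devices = 20

per_device_iops = system_iops / devices
print(f"~{per_device_iops:,.0f} IO/s per SSD")           # ~12,500

target_iops = 1_000_000
devices_needed = target_iops / per_device_iops
systems_needed = target_iops / system_iops
print(f"~{devices_needed:.0f} such SSDs, or ~{systems_needed:.0f} such systems, "
      f"for a million IO/s (ignoring controller and fabric limits)")
```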

It seems to me that delivering enterprise storage performance has gotten much easier over the last few years.  That doesn’t mean enterprise storage reliability, availability or features have gotten easier, but reaching that level of performance used to take thousands of disk drives and racks of equipment.  Today, you can almost do it in a 2U enclosure without breaking a sweat.

Well, that seems to be the problem: with a gaggle of startups all vying for SSD storage in one form or another, the market is starting to take notice.  Maybe EMC felt that it was a good time to enter the market with their own branded product; they seem to already have all the other bases covered.

Their website mentions that XtremIO is a load-balanced, deduplicated, clustered storage system with enterprise-class services (which could mean anything). Nonetheless, a deduplicating, clustered SSD storage system built out of commodity servers could describe at least 3 other SSD startups I have recently talked with and a bunch I haven’t talked with in a while.

Why EMC decided that XtremIO was the one to buy is somewhat of a mystery.  There was some mention of an advanced data protection scheme for the flash storage, but no real details.

Nonetheless, enterprise SSD storage startups with relatively low valuations and the potential to disrupt enterprise storage might be something to invest in.  Certainly EMC felt so.

~~~~

Comments? Anyone know anything more about XtremIO?

Mobile health (mHealth) takes off in Kenya

Hanging out with Kenya Techies by whiteafrican (cc) (from Flickr)

Read an article today about startups and others in Kenya  providing electronic medical care via mHealth and improving the country’s health care system (see Kenya’s Startup Boom).

It seems that four interns were able to create a smartphone and web app in a little over 6 months to help track Kenya’s infectious disease activity.  They didn’t call it healthcare-as-a-service, nor was there any mention of the cloud in the story, but they were doing it all just the same.

Old story, new ending

The Kenyan government was in the process of contracting out the design and deployment of a new service that would track cases of infectious disease throughout the country to enable better strategies to counteract them.  They were just about ready to sign a $1.9M contract with one mobile phone company when they decided it was inappropriate to lock in a single service provider.

So they decided to try a different approach: they contacted the head of the Clinton Health Access Initiative (CHAI), who contacted an instructor at Strathmore University, who identified four recent graduates and set them to work as interns for $150/month. The interns spent the spring and summer gathering requirements and pounding out the app(s).  At the end of the summer it was up and running on smartphones and the web throughout the country.

They are now working on an SMS version of the system so that those who do not own smartphones can also record infectious disease activity. They are also taking on a completely new task: tracking government drug shipments to hospitals and clinics to eliminate shortages and waste.
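
Just to illustrate how lightweight the SMS side could be, here is a toy sketch of parsing a structured text report (the message format, field names and disease codes are entirely made up by me; the Kenyan system’s actual format isn’t described in the article):

```python
import re
from datetime import date

# Hypothetical message format: "REPORT <disease-code> <case-count> <district>"
# e.g. "REPORT MAL 12 Kisumu"
PATTERN = re.compile(r"^REPORT\s+(?P<disease>[A-Z]{3})\s+(?P<count>\d+)\s+(?P<district>.+)$")

def parse_sms(text):
    match = PATTERN.match(text.strip().upper())
    if not match:
        return None  # a real system would reply asking the sender to resend
    return {
        "disease": match.group("disease"),
        "count": int(match.group("count")),
        "district": match.group("district").title(),
        "received": date.today().isoformat(),
    }

print(parse_sms("report MAL 12 kisumu"))   # parsed record
print(parse_sms("hello?"))                 # None -> would trigger a help reply
```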

mHealth, the future of healthcare

The story cited above says that there are at least 45 mHealth programs actively being developed or already completed in Kenya, many of them created through a startup incubator called iHub.  We have written about Kenya’s use of mobile phones to support novel services before (see Is cloud a leapfrog technology).

Some of these mHealth projects include:

  • AMPATH, which uses OpenMRS (an open source medical records platform) and SMS messaging to remind HIV patients to take their medicines and provides a call-in line for questions about medications or treatments,
  • Daktari, a mobile service provider’s call-a-doc service that provides a phone-in hotline for medical questions; in a country with only one doctor for every 6,000 citizens, such phone-in health care can more effectively leverage the meagre healthcare resources available,
  • MedAfrica, an app which provides doctors’ and dentists’ phone numbers and menus for finding basic healthcare and diagnostic information in Kenya.

There are many other mHealth projects on the drawing board, including a national electronic medical records (EMR) service and health payment cards loaded up using mobile payments.

Electronic medical care through mHealth

It seems that Kenya is becoming a leading-edge provider of cloud-based mHealth solutions, mainly because the approach is inexpensive, fits well with technology that pervades the country, and can be scaled up rapidly to cover its citizens.

If Kenya can move to deploy healthcare-as-a-service using mobile phones, so can the rest of the third world.

Speaking of mHealth, I got a new free app on my iPhone the other day called iTriage – check it out.

Comments?

 

Why EMC is doing Project Lightning and Thunder

rayo 3 by El Garza (cc) (from Flickr)

Although technically Projects Lightning and Thunder represent some interesting offshoots of EMC software, hardware and system prowess, I wonder why they would decide to go after this particular market space.

There are plenty of alternative offerings in the PCIe NAND memory card space.  Moreover, the PCIe card caching functionality, while interesting, is not that hard to replicate, and such software capability is not a serious barrier to entry for HP, IBM, NetApp and many, many others.  And the margins cannot be that great.

So why get into this low margin business?

I can see a couple of reasons why EMC might want to do this.

  • Believing in the commoditization of storage performance.  I have had this debate with a number of analysts over the years, but there remain many out there who firmly believe that storage performance will become a commodity sooner rather than later.  By entering the PCIe NAND card IO buffer space, EMC can create a beachhead in this movement that helps them build market awareness, higher manufacturing volumes, and support expertise.  As such, when the inevitable happens and high margins for enterprise storage start to deteriorate, EMC will be able to capitalize on this hard-won operational effectiveness.
  • Moving up the IO stack.  From an application’s IO request to the disk device that actually services it is a long journey with multiple places to make money.  Currently, EMC has a significant share of everything that happens after the fabric switch, whether it is FC, iSCSI, NFS or CIFS.  What they don’t have is a significant share in the switch infrastructure or anywhere on the other (host) side of that interface stack.  Yes, they have Avamar, Networker, Documentum, and other software that help manage, secure and protect IO activity, together with other significant investments in RSA and VMware.  But these represent adjacent market spaces rather than primary IO stack endeavors.  Lightning represents a hybrid software/hardware solution that moves EMC up the IO stack to inside the server.  As such, it represents yet another opportunity to profit from all the IO going on in the data center.
  • Making big data more effective.  The fact that Hadoop doesn’t really need or use high-end storage has not been lost on most storage vendors.  With Lightning, EMC has a storage enhancement offering that can readily improve Hadoop cluster processing.  Something like Lightning’s caching software could easily be tailored to enhance HDFS file access and thus speed up cluster processing (a toy caching sketch follows this list).  If Hadoop and big data are to be the next big consumer of storage, then speeding up cluster processing will certainly help, and profiting by doing so only makes sense.
  • Believing that SSDs will transform storage. To many of us the age of disks is waning.  SSDs, in some form or another, will be the underlying technology for the next age of storage.  The densities, performance and energy efficiency of current NAND based SSD technology are commendable but they will only get better over time.  The capabilities brought about by such technology will certainly transform the storage industry as we know it, if they haven’t already.  But where SSD technology actually emerges is still being played out in the market place.  Many believe that when industry transitions like this happen it’s best to be engaged everywhere change is likely to happen, hoping that at least some of them will succeed. Perhaps PCIe SSD cards may not take over all server IO activity but if it does, not being there or being late will certainly hurt a company’s chances to profit from it.
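
On the Hadoop point above, the kind of caching involved is conceptually simple. Here is a minimal sketch of a server-side read cache (a plain LRU keyed by file block; my own illustration of the general idea, not EMC’s actual VFCache/Lightning design) of the sort that could sit in front of HDFS block reads:

```python
from collections import OrderedDict

class BlockReadCache:
    """Minimal LRU read cache keyed by (file path, block offset).
    Illustrative only -- a real server-side flash cache also handles
    invalidation, write policy, and persistence on the PCIe card."""

    def __init__(self, capacity_blocks, backing_read):
        self.capacity = capacity_blocks
        self.backing_read = backing_read    # function: (path, offset) -> bytes
        self.cache = OrderedDict()

    def read_block(self, path, offset):
        key = (path, offset)
        if key in self.cache:
            self.cache.move_to_end(key)     # mark most recently used
            return self.cache[key]          # served from "flash", no disk/network IO
        data = self.backing_read(path, offset)
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

# Toy usage: the backing store here just fabricates data for the example.
cache = BlockReadCache(capacity_blocks=2,
                       backing_read=lambda p, o: f"<data {p}@{o}>".encode())
for path, off in [("/hdfs/a", 0), ("/hdfs/a", 0), ("/hdfs/b", 0), ("/hdfs/c", 0), ("/hdfs/a", 0)]:
    cache.read_block(path, off)
print("cached blocks:", list(cache.cache.keys()))  # "/hdfs/a"@0 was evicted, then re-read
```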

There may be more reasons I missed here, but these seem to be the main ones.  Of the above, I think the last one – SSDs rule the next transition – is most important to EMC.

They have been successful in the past during other industry transitions.  If anything, they have shown similar instincts with their acquisitions, buying into transitions they don’t own – witness Data Domain, RSA, and VMware.  So I suspect the view in EMC is that doubling down on SSDs will enable them to ride out the next storm and be in a profitable place for the next change, whatever that might be.

And following Lightning, Project Thunder

Similarly, Project Thunder seems to represent EMC doubling their bet yet again on SSDs.  Just about every month I talk to another storage startup coming to market with another new take on storage using every form of SSD imaginable.

However, Project Thunder as envisioned today is not storage, but rather some form of external shared memory.  I have heard this before, in the IBM mainframe space about 15-20 years ago.  At that time shared external memory was going to handle all mainframe IO processing and the only storage left was going to be bulk archive or migration storage – a big threat to the non-IBM mainframe storage vendors at the time.

One problem then was that the shared DRAM memory of the time was way more expensive than sophisticated disk storage, and the price wasn’t coming down fast enough to counteract increased demand.  The other problem was that making shared memory work with all the existing mainframe applications was not easy.  IBM at least had control over the OS, the hardware and most of the larger applications at the time.  Yet they still struggled to make it usable and effective; there is probably a lesson here for EMC.

Fast forward 20 years and NAND-based SSDs are the right hardware technology to make inexpensive shared memory happen.  In addition, the roadmap for NAND and other SSD technologies looks poised to continue the capacity increases and price reductions necessary to compete effectively with disk in the long run.

However, the challenges then and now seem to have as much to do with the software that makes shared external memory universally effective as with the hardware technology to implement it.  Providing a new storage tier in Linux, Windows and/or VMware is easier said than done. Most recent successes have usually been offshoots of SCSI (iSCSI, FCoE, etc.).  Nevertheless, if it was good for mainframes then, it’s certainly good for Linux, Windows and VMware today.

And that seems to be where Thunder is heading, I think.

Comments?

 


How has IBM research changed?

IBM Neuromorphic Chip (from Wired story)

What do Watson, neuromorphic chips and racetrack memory have in common? They have all emerged out of IBM research labs.

I have been wondering for some time now how it is that a company known for its cutting-edge research but lack of product breakthroughs has transformed itself into an innovation machine.

There has been a sea change in the research at IBM that is behind the recent productization of technology.

Talking the past couple of days with various IBMers at STG’s Smarter Computing Forum, I have formulated a preliminary hypothesis.

At first I heard that there was a change in the way research is reviewed for product potential. Nowadays, it almost takes a business case for research projects to be approved and funded, and the business case needs to contain a plan for how the project will eventually reach profitability.

In the past it was often said that IBM invented a lot of technology but productized only a little of it. Much of their technology would emerge in other people’s products, and IBM would not receive anything for their efforts (other than some belated recognition for their research contribution).

Nowadays, it’s more likely that research not productized by IBM is at least licensed from them after they have patented the crucial technologies that underpin the advance. And it’s just as likely that, if it has something to do with IT, the project will end up as a product.

One executive at STG sees three phases to IBM Research spanning the last 50 years or so.

Phase I: The ivory tower

IBM Research during the ivory tower era looked a lot like a research university, but without the tenure of true professorships. Much of the research of this era was in materials and pure mathematics.

I suppose one example of this period was Mandelbrot and fractals. The work probably had a lot of applications, but few of them ended up in IBM products; mostly it advanced the theory and practice of pure mathematics/systems science.

Such research had little to do with the problems of IT or IBM’s customers. The fact that it created pretty pictures and a way of seeing nature in a different light was an advance for mankind, but it didn’t have much, if any, impact on IBM’s bottom line.

Phase II: Joint project teams

In IBM Research’s phase II, the decision process on which research to move forward included people not just from IBM Research but also from the product divisions. At least now there could be a discussion across IBM’s various divisions on how the technology could enhance customer outcomes. I am certain profitability wasn’t often discussed, but at least it was no longer purposefully ignored.

I suppose over time these discussions became more grounded in fact and business cases rather than just a belief in the value of research for research’s sake. Technology roadmaps and projects were now looked at in terms of how well they could impact customer outcomes and how such technology enabled new products and solutions to come to market.

Phase III: Researchers and product people intermingle

The final step in IBM’s transformation of research involved the human element. People started moving around.

Researchers were assigned to the field and to product groups, and product people were brought into the research organization. By doing this, ideas could cross-fertilize, applications could be envisioned, and the finishing touches needed by new technology could be identified, funded and implemented. This probably led to the most productive transition of researchers into product developers.

On the flip side, when researchers returned from their multi-year product/field assignments, they brought a newfound appreciation of problems encountered in the real world. That, combined with their in-depth understanding of where technology could go, helped show the path that could take research projects into new, more fruitful (at least to IBM customers) arenas. This movement of people provided the final piece in grounding research in areas that could solve customer problems.

In the end, many research projects at IBM may fail, but if they succeed they have the potential to change IT as we know it.

I heard that there are 700 to 800 projects in IBM Research today. If any of them have the potential we see in the products shown today, like Watson in healthcare and neuromorphic chips, exciting times are ahead.