12 atoms per bit vs 35 bits per electron

Image: six atom pairs in a row, blue for interstitial space and yellow for the atoms' external facets (from the Technology Review article)

Read a story today in Technology Review, Magnetic Memory Miniaturized to Just 12 Atoms, about a team at IBM Research that created a (spin) magnetic “storage device” using 12 iron atoms to record a single bit (near absolute zero and only for a few hours).  The article said it was about 100X denser than the previous magnetic storage record.

Holographic storage beats that

Wikipedia’s (soon to go dark for 24hrs) article on Memory Storage Density mentioned research at Stanford that in 2009 created an electronic quantum holographic device that stored 35 bits/electron using a sheet of copper atoms to record the letters S and U.

The Wikipedia article went on to equate 35 bits/electron to ~3 Exabytes [10**18 bytes]/in**2.  (Although how Wikipedia converted from bits/electron to EB/in**2 I don’t know, I’ll accept it as a given.)

Now an iron atom has 26 electrons and copper has 29.  If 35 bits/electron is 3 EB/in**2 (or ~24Eb/in**2), then 1 bit per 12 iron atoms (or 12*26=312 electrons) works out to ~0.0032 bits/electron, or ~275TB/in**2 (~2.2Pb/in**2).   Not quite the scale of the holographic device but interesting nonetheless.
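
A quick back-of-the-envelope check of that arithmetic, as a sketch that simply takes the Wikipedia 35 bits/electron ≈ 3 EB/in**2 equivalence as a given (the constant names below are just for illustration):

```python
# Check of the arithmetic above, taking Wikipedia's 35 bits/electron
# ~= 3 EB/in**2 equivalence as a given.

ELECTRONS_PER_IRON_ATOM = 26
ATOMS_PER_BIT = 12

bits_per_electron = 1 / (ATOMS_PER_BIT * ELECTRONS_PER_IRON_ATOM)  # ~0.0032

# Scale the 3 EB/in**2 holographic figure by the ratio of bits/electron
HOLOGRAPHIC_EB_PER_SQIN = 3.0
eb_per_sqin = HOLOGRAPHIC_EB_PER_SQIN * bits_per_electron / 35

print(f"{bits_per_electron:.4f} bits/electron")      # -> 0.0032
print(f"~{eb_per_sqin * 1e6:.0f} TB/in**2")          # -> ~275 TB/in**2 (~2.2 Pb/in**2)
```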

What can that do for my desktop?

Given that today’s recording head/media has demonstrated ~3.3Tb/in**2 (see our Disk drive density multiplying by 6X post), the 12 atoms per bit  is a significant advance for (spin) magnetic storage.

With today’s disk industry shipping 1TB/disk platters at ~0.6Tb/in**2 (see our Disk capacity growing out of sight post), these technologies, if implemented in a disk form factor, could store from ~3.7PB to ~40EB in a 3.5″ form factor storage device.
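
The scaling behind those figures is simple, as the sketch below shows; it just assumes capacity scales linearly with areal density over the same platter surface:

```python
# Rough scaling of a 1 TB platter (shipped at ~0.6 Tb/in**2) up to the two
# projected densities -- a back-of-the-envelope sketch.

SHIPPING_DENSITY = 0.6e12       # bits per in**2 (~0.6 Tb/in**2)
PLATTER_CAPACITY_TB = 1.0       # today's 1 TB platter

def scaled_capacity_tb(new_density_bits_per_sqin: float) -> float:
    """Capacity (in TB) of the same platter surface at a higher areal density."""
    return PLATTER_CAPACITY_TB * new_density_bits_per_sqin / SHIPPING_DENSITY

magnetic_12_atom = scaled_capacity_tb(2.2e15)   # ~2.2 Pb/in**2 (12 atoms/bit)
holographic      = scaled_capacity_tb(24e18)    # ~24 Eb/in**2 (35 bits/electron)

print(f"12-atom magnetic: ~{magnetic_12_atom / 1e3:.1f} PB")  # -> ~3.7 PB
print(f"holographic:      ~{holographic / 1e6:.0f} EB")       # -> ~40 EB
```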

So there is a limit to (spin) magnetic storage, and the holographic limit is about 11000X larger.   Once again holographic storage shows it can store significantly more data than magnetic storage, if only it could be commercialized. (Probably a subject to cover in a future post.)

~~~~

I don’t know about you, but a ~3.7PB drive is probably more than enough storage for my lifetime and then some.  But then again, those new 4K high definition videos may take up a lot more space than my (low definition) DVD collection.

Comments?

 


Making hardware-software systems design easier

Exposed by AMagill (cc) (from Flickr)

Recent research from MIT on streamlining chip design was in the news today.  The report described work done by Nirav Dave, PhD, and Myron King to create a new programming language, BlueSpec, that can convert specifications into hardware chip designs (Verilog) or compile them into software (C++).

BlueSpec designers can tag (annotate) system modules as hardware or software.  The intent of the project is to make it easier to decide what is done in hardware versus software.  Specifying this decision with a language attribute should make architectural hardware-software tradeoffs much easier to do and, as a result, let that decision be delayed until much later in the development cycle.

Hardware-software tradeoffs

Making good hardware-software tradeoffs is especially important in mobile handsets, where power efficiency and system performance requirements often clash.  It’s not unusual in these systems for functionality to move from a hardware to a software implementation or vice versa.

The problem is that the two implementations (hardware and software) use different design languages, so a change would typically require a complete re-coding effort, delaying system deployment significantly.  That makes such decisions all the more important to get right early on in the system architecture.

In contrast, with BlueSpec, all it would take is a different tag to have the language translate the module into Verilog (a chip design language) or C++ (software code).
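
To make the idea concrete, here is a conceptual sketch in Python (not actual BlueSpec syntax; the Module class, generate function and module names are purely illustrative) of how a single target tag can steer one specification toward a hardware or software back end:

```python
# Conceptual sketch only -- this is Python, not BlueSpec, and the names here
# (Module, generate, "aes_engine", etc.) are illustrative, not from any real
# toolchain.  The point is that one specification carries a tag that decides
# whether it becomes chip design (Verilog) or compiled software (C++).

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    spec: str                  # target-neutral behavioral specification
    target: str = "software"   # the tag/annotation: "hardware" or "software"

def generate(module: Module) -> str:
    """Route a module to a back end based solely on its tag."""
    if module.target == "hardware":
        return f"// emit Verilog for {module.name}"
    return f"// emit C++ for {module.name}"

crypto = Module("aes_engine", spec="AES round pipeline", target="hardware")
ui     = Module("settings_ui", spec="menu handling")  # defaults to software

print(generate(crypto))   # chip design path
print(generate(ui))       # software path
```

Moving a module from software to hardware then becomes a one-line change to the tag rather than a re-coding effort.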

Better systems through easier hardware design

There is a long running debate about commodity hardware versus special purpose hardware in storage systems (see Commodity Hardware Always Loses and Commodity Hardware Debate Heats-up Again).  We believe there will continue to be a place for special purpose built hardware in storage.  I would go on to say this is likely the case in networking, server systems, and telecommunications handsets/back-office equipment as well.

The team at MIT specifically created their language to help create more efficient mobile phone handsets. But from my perspective it has an equally valid part to play in storage and other systems.

Hardware and software design, more similar than different

Nowadays, hardware and software designers are all just coders using different languages.

Yes, hardware engineers have more design constraints and have to deal with the real, physical world of electronics. But what they deal with most is a hardware design language and design verification tools tailored for their electronic design environment.

Doing hardware design is not that much different from software developers coding in a specific language like C++ or Java.  Software coders must also understand the framework/virtual machine/OS environment their code operates in to produce something that works.  Perhaps design verification tools don’t work, or even exist, in software as much as they should, but that is more a subject for research than a distinction between the two types of designers.

—-

Whether BlueSpec is the final answer or not isn’t as interesting as the fact that it takes a first step toward unifying system design.  Being able to decide much later in the process whether to make a module hardware or software will benefit all system designers and should get products out with less delay.  And getting hardware designers and software coders talking more, using the same language to express their designs, can’t help but result in better, more tightly integrated designs, which ends up benefiting everyone.

Comments?

How has IBM research changed?

IBM Neuromorphic Chip (from Wired story)

What do Watson, neuromorphic chips and racetrack memory have in common? They have all emerged out of IBM research labs.

I have been wondering for some time now how a company known for its cutting edge research but lack of product breakthroughs has transformed itself into an innovation machine.

There has been a sea change in research at IBM that is behind the recent productization of its technology.

Talking over the past couple of days with various IBMers at STG's Smarter Computing Forum, I have formulated a preliminary hypothesis.

At first I heard that there was a change in the way research is reviewed for product potential. Nowadays, it almost takes a business case for a research project to be approved and funded, and the business case needs to contain a plan for how the project will eventually reach profitability.

In the past it was often said that IBM invented a lot of technology but productized only a little of it. Much of their technology would emerge in other people's products, and IBM would not receive anything for their efforts (other than some belated recognition for their research contribution).

Nowadays, it's more likely that research not productized by IBM is at least licensed from them after they have patented the crucial technologies that underpin the advance. And it's just as likely, if it has something to do with IT, that the project will end up as a product.

One executive at STG sees three phases to IBM research spanning the last 50 years or so.

Phase I: The ivory tower

IBM research during the ivory tower era looked a lot like a research university, but without the tenure of true professorships. Much of the research in this era was in materials and pure mathematics.

I suppose one example of this period was Mandelbrot and fractals. The work probably had a lot of applications, but few of them ended up in IBM products; mostly it advanced the theory and practice of pure mathematics/systems science.

Such research had little to do with the problems of IT or IBM's customers. The fact that it created pretty pictures and a way of seeing nature in a different light was an advance for mankind, but it didn't have much, if any, impact on IBM's bottom line.

Phase II: Joint project teams

In IBM research's phase II, the decision process on which research to move forward included people not just from IBM research but also from the product divisions. At least now there could be a discussion across IBM's various divisions on how the technology could enhance customer outcomes. I am certain profitability wasn't often discussed, but at least it was no longer purposefully ignored.

I suppose over time these discussions became more grounded in fact and business cases rather than just belief in the value of research for research's sake. Technology roadmaps and projects were now looked at in terms of how well they could impact customer outcomes and how the technology enabled new products and solutions to come to market.

Phase III: Researchers and product people intermingle

The final step in IBM's transformation of research involved the human element. People started moving around.

Researchers were assigned to the field and to product groups, and product people were brought into the research organization. By doing this, ideas could cross-fertilize, applications could be envisioned, and the finishing touches needed by new technology could be funded and implemented. This probably led to the most productive transition of researchers into product developers.

On the flip side, when researchers returned from their multi-year product/field assignments they brought back a new found appreciation of the problems encountered in the real world. That, combined with their in-depth understanding of where technology could go, helped show the path that could take research projects into new, more fruitful (at least to IBM customers) arenas. This movement of people provided the final piece in grounding research in areas that could solve customer problems.

In the end, many research projects at IBM may fail, but if they succeed they have the potential to change IT as we know it.

I heard that there are 700 to 800 projects in IBM research today. If any of them have the potential we see in the products shown today, like Watson in healthcare and neuromorphic chips, exciting times are ahead.

Commodity hardware debate heats up again

Gold Nanowire Array by lacomj (cc) (from Flickr)

A post by Chris M. Evans, in his The Storage Architect blog (Intel inside storage arrays) re-invigorated the discussion we had last year on commodity hardware always loses.

But buried in the comments was one from Michael Hay (HDS), which pointed to another post by Andrew Huang on his bunnie's blog (Why the best days of open hardware are ahead), where he has an almost brilliant discussion of how Moore's law will eventually peter out (~5nm) and, as such, transistor density will take much longer to double.  At that point, hardware customization (by small companies/startups) will once again come to the forefront of new technology development.

Custom hardware, here now and the foreseeable future

Although it would be hard to argue against Andrew's point, I firmly believe there is still plenty of opportunity today to customize hardware that brings true value to the market.   The fact is that Moore's law doesn't mean hardware customization cannot still be worthwhile.

Hitachi's VSP (see Hitachi's VSP vs. VMAX) is a fine example of the use of custom ASICs, FPGAs (I believe) and standard off the shelf hardware.   HP's 3PAR is another example; they couldn't have their speedy mesh architecture without custom hardware.

But will anyone be around that can do custom chip design?

Nigel Poulton commented on Chris's post that, with custom hardware seemingly going away, the infrastructure, training and people will no longer be around to support any re-invigorated custom hardware movement.

I disagree.  Intel, IBM, Samsung, and many other large companies still maintain active electronics engineering teams and chip design capabilities, any of which are capable of creating state of the art ASICs.  These capabilities are what make Moore's law a reality and will not go away over the long run (the next 20-30 years).

The fact that these competencies are locked up in very large organizations doesn't mean they cannot be used by small companies/startups as well.  It probably does mean that such wherewithal may cost more. But the marketplace will deal with that in the long run, that is, if the need continues to exist.

But do we still need custom hardware?

Custom hardware creates capabilities that magnify Moore's law processing power to do things that standard, off the shelf hardware cannot.  The main problem with Moore's law from a custom hardware perspective is that it takes functionality that required custom hardware yesterday (or 18 months ago) and makes it available today on off the shelf components with custom software.

This dynamic just means that custom hardware needs to keep moving, providing ever more user benefits and functionality to remain viable.  When custom hardware cannot provide any real benefit over standard off the shelf components – that’s when it will die.

Andrew talks about the time it takes to develop custom ASICs and the fact that by the time you have one ready, a new standard chip has come out that doubles processor capabilities. Yes, custom ASICs take time to develop, but FPGAs can be created and deployed in much less time. FPGAs, like custom ASICs, also take advantage of Moore's law, with increased transistor density every 18 months. Yes, FPGAs may run slower than custom ASICs, but what they lack in processing power they make up for in time to market.

Custom hardware has a bright future as far as I can see.

—–

Comments?

HDS buys BlueArc

wall o' storage (fisheye) by ChrisDag (cc) (From Flickr)

Yesterday, HDS announced that they had closed on the purchase of BlueArc, their NAS supplier for the past 5 years or so.  Many commentators mentioned that this was a logical evolution of their ongoing OEM agreement, noted how the timing was right, and speculated on what the purchase price might have been.   If you are interested in these aspects of the acquisition, I would refer you to the excellent post by David Vellante of Wikibon on the HDS BlueArc deal.

Hardware as a key differentiator

In contrast, I would like to concentrate here on another view of the purchase, specifically on how HDS and Hitachi, Ltd. have both been working to increase their product differentiation through advanced and specialized hardware (see my post on Hitachi’s VSP vs VMAX and for more on hardware vs. software check out Commodity hardware always loses).

Similarly, BlueArc shared this philosophy and was one of the few NAS vendors to develop special purpose hardware for their Titan and Mercury systems, specifically to speed up NFS and CIFS processing.  Most other NAS systems use more general purpose hardware and, as a result, a majority of their R&D investment focuses on software functionality.

But not BlueArc; their performance advantage was highly dependent on specially designed FPGAs and other hardware.  As such, they have a significant hardware R&D budget to maintain and leverage their unique hardware advantage.

From my perspective, this follows what HDS and Hitachi, Ltd. have been doing all along with the USP, USP-V, and now their latest entrant, the VSP.  If you look under the covers of these products you find a plethora of special purpose ASICs, FPGAs and other hardware that help accelerate IO performance.

BlueArc and HDS/Hitachi, Ltd. seem to be some of the last vendors standing that still believe that hardware specialization can bring value to data storage. From that standpoint, it makes an awful lot of sense to me to have HDS purchase them.

But others aren’t standing still

In the meantime, scale out NAS products continue to move forward on a number of fronts.  As readers of my newsletter know, the current SPECsfs2008 overall performance winner is a scale out NAS solution using 144 nodes from EMC Isilon (newsletter signup is above right or can also be found here).

The fact that now HDS/Hitachi, Ltd. can bring their considerable hardware development skills and resources to bear on helping BlueArc develop and deploy their next generation of hardware is a good sign.

Another interesting tidbit is HDS's previous purchase of ParaScale, which seems to have some scale out NAS capabilities of its own.  How this all gets pulled together within HDS's product line remains to be seen.

In any event, all this means that the battle for NAS isn’t over and is just moving to a higher level.

—-

Comments?

Technology innovation

Newton & iPad by mac_ivan (cc) (from Flickr)

A recent post by Mark Lewis on innovation in large companies (see Episode 105: Innovation – a process problem?) brought to mind some ideas that have been intriguing me for quite a while now.  While Mark's post is only the start of his discussion on the management of innovation, I think the problem goes far beyond what he has outlined there.

Outside of Apple and a few select others, there don't appear to be many large corporate organizations that continually succeed at technology innovation.  On the other hand, there are a number of large organizations that spend $millions, if not $billions, on R&D with, at best, mediocre returns on such investments.

Why do startups innovate so well while corporations do so poorly?

  • Most startup cost is sweat equity, not money, at least until business success is more assured.  Well run companies have a gate review process that provides more resources as new ideas mature over time, but the cost of “fully burdened” resources applied to any project is much higher, and more of it is monetary, right from the start.  As such, corporate innovation costs, for the exact same product/project, are higher at every stage in the process, hurting ROI.
  • Most successful startups engage with customers very early in the development of a product. Alpha testing is the life blood of technical startups. Find a customer that has (hopefully, a hard) problem you want to solve and take small, incremental steps to solve it, giving the customer everything you have, the moment you have it, so they can determine if it helped and where to go next.  If their problem is shared by enough other customers you have a business.  Large companies cannot readily perform alpha tests or in some cases even beta tests in real customer environments.  Falling down and taking the many missteps that alpha testing would require might have significant brand repercussions.  So large companies end up funding test labs to do this activity.  Naturally, such testing increases the real and virtual costs of corporate innovation projects versus a startup with alpha testing.  Also, any “simulated testing” may be far removed from real customer experience, often leading corporate projects down unproductive development paths, increasing development time and costs.
  • Many startups fail, hopefully before monetary investment has been significant. Large corporate innovation activities also fail often but typically much later in the development process and only after encountering higher real and virtual monetary costs.  Thus, the motivation for continuing innovation in major corporations typically diminishes after every failure, as does the ROI on R&D in general.  On the other hand, startup failures, as they generally cost little actual money, typically induce participants to re-examine customer concerns to better target future innovations.  Such failures often lead to an even higher motivation in startup personnel to successfully innovate.

There are probably many other problems with innovation in large corporate organizations, but these seem most significant to me.  Solutions to such issues within large corporations are not difficult to imagine, but the cultural changes that may need to go along with such solutions may be the truly hard problem to solve.

Comments?

 

R&D effectiveness

A recent Gizmodo blog post compared a decade of R&D at Sony, Microsoft and Apple.  There were some interesting charts, but mostly it showed that R&D as a percent of revenue fluctuates from year to year and that R&D spend has been rising for all three companies (although at different rates).

R&D Effectiveness, (C) 2010 Silverton Consulting, All Rights Reserved

Overall, on a percentage-of-revenue basis, Microsoft wins, spending ~15% of revenue on R&D over the past decade; Apple loses, spending only ~4%; and Sony is right in the middle, spending ~7%.  Yet viewed by its impact on corporate revenue, R&D spending had significantly different results at each company than pure % R&D spending would indicate.

How can one measure R&D effectiveness?

  • Number of patents – this is often used as an indicator, but it's unclear how well it correlates with business success.  Patents can be licensed, but only if they prove important to other companies. However, patent counts can be gauged early on, during the R&D activities, rather than much later when a product reaches the market.
  • Number of projects – by projects we mean an idea from research taken into development.  Such projects may or may not make it out to market.  At one level this can be a leading indicator of “research” effectiveness, as it means an idea was deemed at least of commercial interest.  A problem with this is that not all projects get released to the market or become commercially viable.
  • Number of products – by products, we mean something sold to customers.  At least such a measure reflects that the total R&D effort was deemed worthy enough to take to market.  How successful such a product is remains to be determined.
  • Revenue of products – product revenue seems easy enough but can often be hard to allocate properly.  Looking at the iPhone, do we count just handset revenues or include application and cell service revenues?  But assuming one can properly allocate revenue sources to R&D efforts, one can come up with a revenue-from-R&D-spending measure (a minimal sketch of this follows the list).  The main problem with revenue-from-R&D ratios is that all the other non-R&D factors confound them, e.g., marketing, manufacturing, competition, etc.
  • Profitability of products – product profitability is even messier than revenue when it comes to confounding factors.  But ultimately the profitability of R&D efforts may be the best factor to use, as any product that's truly effective should generate the most profits.
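
Here is a minimal sketch of the revenue measure, with purely hypothetical figures; the allocation fraction is where all the real difficulty lives:

```python
# Minimal sketch of the "revenue of products" measure, with purely
# hypothetical figures.  The allocation fraction captures the hard judgment
# call above (e.g., handset sales only vs. handset + app + service revenue).

def revenue_per_rd_dollar(product_revenue: float,
                          rd_spend: float,
                          allocation: float = 1.0) -> float:
    """Revenue credited to R&D per dollar of R&D spend."""
    return (product_revenue * allocation) / rd_spend

# Hypothetical product line: $10B revenue, $1.5B R&D spend, 60% of revenue
# judged attributable to the R&D effort itself.
print(revenue_per_rd_dollar(10e9, 1.5e9, allocation=0.6))   # -> 4.0
```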

There are probably other R&D effectiveness factors that could be considered but these will suffice for now.

How did they do?

Returning to the Gizmodo discussion, their post didn't include any patent counts, project counts (only visible internally), product counts, or profitability measures, but they did show revenue for each company.  From a pure revenue-impact standpoint one would have to say that Apple's R&D was a clear winner, with Microsoft a clear second.  Although Apple started from considerably smaller revenue than Sony or Microsoft, its ~$14B of revenue in 2005 was only small in comparison to the other giants.  We all know the success of the iPhone and iPod, but they also stumbled with the Apple TV.

Why did they do so well?

What then makes Apple do so well?  We have talked before about an elusive quality we called visionary leadership.  Certainly Bill Gates is as technically astute as Steve Jobs, and there can be no denying that their respective marketing machines are evenly matched.  But both Microsoft and Apple were certainly led by more technical individuals than Sony over the last decade.   Both Microsoft and Apple have had significant revenue increases over the past ten years that parallel one another, while Sony, in comparison, has remained relatively flat.

I would say both Microsoft's and Apple's results show that “visionary leadership” has a certain technical component that can't be denied.  Moreover, I think that if one looked at Sony under Akio Morita, HP under Bill Hewlett and Dave Packard, or many other large companies today, one could conclude that technical excellence is a significant component of visionary leadership.  All of these companies' highest revenue growth came under leadership with significant technical knowledge.  There's more to visionary leadership than technicality alone, but it seems at least foundational.

I still owe a post on just what constitutes visionary leadership, but I seem to be surrounding it rather than attacking it directly.