Oracle (finally) releases StorageTek VSM6

[Full disclosure: I helped develop the underlying hardware for VSM 1-3 and also way back, worked on HSC for StorageTek libraries.]

Virtual Storage Manager System 6 (VSM6) is here. I'm not exactly sure when VSM5 or VSM5E was released, but it seems like an awfully long time ago in Internet years. The new VSM6 migrates the platform to Solaris-based software and hardware while expanding capacity and improving performance.

What’s VSM?

Oracle StorageTek VSM is a virtual tape system for mainframe (IBM System z) environments. It provides a multi-tiered storage system that includes both physical disk and (optional) tape storage for the long-term big data requirements of z/OS applications.

VSM6 emulates up to 256 virtual IBM tape transports but actually moves data to and from VSM Virtual Tape Storage Subsystem (VTSS) disk storage and backend real tape transports housed in automated tape libraries. As VSM data ages, it can be migrated out to physical tape, such as a StorageTek SL8500 Modular [Tape] Library system attached behind the VSM6 VTSS or system controller.

VSM6 offers a number of replication solutions for DR, keeping data in multiple sites in sync and copying data to offsite locations. In addition, real tape channel extension can be used to extend VSM storage to span onsite and offsite repositories.

One can cluster up to 256 VSM VTSSs into a tapeplex, which is then managed under one pane of glass as a single large data repository using HSC software.

What’s new with VSM6?

The new VSM6 hardware increases volatile cache to 128GB, from 32GB in VSM5. Non-volatile cache goes up as well, now supporting up to ~440MB, up from 256MB in the previous version. Power, cooling and weight all seem to have gone up too (the wrong direction??) vis-à-vis VSM5.

The new VSM6 removes the ESCON option of previous generations and moves to 8 FICON and 8 GbE Virtual Library Extension (VLE) links. FICON channels are used both for host access (frontend) and for real tape drive access (backend). VLE was introduced with VSM5 and offers a ZFS-based commodity disk tier behind the VSM VTSS for storing data that requires longer residency on disk. VSM also supports a tapeless, disk-only configuration for high performance requirements.

System capacity moves from 90TB (gosh, that was a while ago) to up to 1.2PB of data. I believe much of this comes from supporting the new T10000C tape cartridge and drive (5TB uncompressed). With the ability to cluster up to 256 VSM systems into a tapeplex, aggregate capacity can now reach over 300PB, as the back-of-the-envelope math below shows.
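
As a sanity check on that headline number, here's the presumed arithmetic as a quick Python sketch (assuming the 256-VTSS tapeplex limit mentioned above and the full 1.2PB per system; Oracle doesn't spell this calculation out):

```python
# Presumed tapeplex math: 256 clustered VTSSs at 1.2PB apiece
print(f"{256 * 1.2:.0f} PB")  # ~307 PB, i.e. "over 300PB"
```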

Somewhere along the way VSM started supporting triple redundancy for the VTSS disk storage, which provides better availability than RAID6. I'm not sure why they thought this was important, but it does address the increasing risk of multiple disk failures.

Oracle stated that VSM6 supports up to 1.5GB/sec of throughput. Presumably this is either landing data on disk or transferring data to backend tape, but not both. There doesn't appear to be any standard benchmarking for these sorts of systems, so I'll take their word for it.

Why would anyone want one?

Well, it turns out plenty of mainframe systems use tape for a number of things, such as data backup, HSM, and big data batch applications. Once you get past the sunk costs for tape transports, automation, cartridges and VSMs, VSM storage can be a pretty competitive data storage solution for the mainframe environment.

The fact that most mainframe environments grew up with tape and long ago invested in transports, automation and cartridges probably makes VSM6 an even better buy. But tape is also making a comeback in open systems, with LTO-5 shipping and LTO-6 now coming out, alongside Oracle's 5TB T10000C cartridge and IBM's 4TB 3592 JC cartridge.

Not to mention the Linear Tape File System (LTFS), a new tape format that provides a file system view of tape data and has brought renewed interest in all sorts of tape storage applications.

Competition not standing still

EMC introduced their Disk Library for Mainframe 6000 (DLm6000) product, which supports two different backends to deal with the diversity of tape use in the mainframe environment. Moreover, IBM has continuously enhanced their virtual tape server, the TS7700, but I would have to say it doesn't come close to these capacities.

Lately, when I've talked with long-time StorageTek mainframe tape customers, they have all said the same thing: when is VSM6 coming out, and when will Oracle get their act in gear and start supporting us again? Hopefully this release signals a new emphasis on this market. Although who is losing and who is winning in the mainframe tape market is the subject of much debate, there is no doubt that the lack of any update to VSM has hurt Oracle's StorageTek tape business.

Something tells me that Oracle may have fixed this problem. We hope to see more timely VSM enhancements in the future, for Oracle's sake and especially for their customers'.

~~~~

Comments?

~~~~

Image credit: Interior of StorageTek tape library at NERSC (2) by Derrick Coetzee

 

VMworld first thoughts: kickoff session

[Edited for readability. RLL] The drummer band was great at the start, but we couldn't tell if it was real or lip-synced. It turned out that each of the big VMWORLD letters had a digital drum pad on it, which meant it was live, in real time.

Paul got a standing ovation as he left the stage after introducing Pat, the new CEO. With Paul on stage, there was much discussion of how far VMware has come in the last four years. But IDC stats probably say it better than most: in 2008 about 25% of Intel x86 apps were virtualized, and in 2012 it's about 60%; Gartner says that VMware has about 80% of that activity.

Pat got up on stage and it was like nothing's changed. VMware is still going down the path they believe is best for the world: a virtual data center that spans private, on-premises equipment and external cloud service providers' equipment.

There was much ink on the software-defined data center, which takes the vSphere world view and incorporates networking, more storage and more infrastructure into the already present virtualized management paradigm.

It’s a bit murky as to what’s changed, what’s acquired functionality and what’s new development but suffice it to say that VMware has been busy once again this year.

A single “monster VM” (it has its own Facebook page) now supports up to 64 vCPUs and 1TB of RAM, and can sustain more than a million IOPS. It seems that this should be enough for most mission-critical apps out there today. There was no statement on the latency of those IOPS, but with a million IOs a second and 64 vCPUs, we are probably talking flash somewhere in the storage hierarchy.

Pat mentioned that the vRAM concept is now officially dead. The pricing model is now based on physical CPUs and sockets, with no VM or vRAM component to it. This seemed to get lots of applause.

There are now so many components to the vCloud Suite that it's almost hard to keep track of them all: vCloud Director, vCloud Orchestrator, vFabric Application Director, vCenter Operations Manager and, of course, vSphere. And that's not counting relatively recent acquisitions such as DynamicOps (a cloud dashboard) and Nicira (SDN services); I am probably missing some of them.

In addition to all that, VMware has been working on Serengeti, a layer added to vSphere to virtualize Hadoop clusters. In the demo they spun up and down a Hadoop cluster with MapReduce processing log files. (I want one of these for my home office environment.)

They showed another demo of the vCloud Suite in action, spinning up a cloud data center and deploying applications to it in real time. It literally took ~5 minutes from startup until they were deploying applications to it. The demo was a bit hard to follow, as it went deep into WAN-like networking configuration (load balancing, firewalls and other edge security) and workload characteristics, but it all seemed pretty straightforward: they configured an actual cloud in minutes.

I missed the last part about Socialcast, but apparently it builds a social network around VMs? [Need to listen better next time]

More to follow…

 

Roads to R&D success – part 2

This is the second part of a multi-part post. In part one (found here) we spent some time going over prime examples of corporations that generated outsized success from their R&D activities, highlighting AT&T with Bell Labs, IBM with IBM Research, and Apple.

I see two viable models for outsized organic R&D success:

  • One is based on a visionary organizational structure which creates an independent R&D lab.  IBM has IBM Research, AT&T had Bell Labs, other major companies have their research entities.  These typically have independent funding not tied to business projects, broadly defined research objectives, and little to no direct business accountability.  Such organizations can pursue basic research and/or advanced technology wherever it may lead.
  • The other is based on visionary leadership, where a corporation identifies a future need, turns completely to focus on the new market, devotes whatever resources it needs and does a complete forced march towards getting a product out the door.  While these projects sometimes have stage gates, more often than not, they just tell the project what needs to be done next, and where resources are coming from.

The funny thing is that both approaches have changed the world.  Visionary leadership typically generates more profit in a short time period. But visionary organizations often outlast any one person and in the long run may generate significant corporate profits.

The challenges of Visionary Leadership

Visionary leadership balances broad technological insight with a design aesthetic that includes a deep understanding of what's possible within a corporate environment. Combine all that with an understanding of what's needed in some market, and you have a combination that reconstructs industries.

Visionary leadership is hard to find. Leaders like Bill Hewlett, Akio Morita and Bill Gates seem to come out of nowhere, dramatically transform multiple industries and then fade away. Their corporations seldom do as well after such leaders are gone.

Often visionary leaders come up out of the technical ranks. This gives them the broad technical knowledge needed to identify product opportunities when they occur. That technological ability also helps them push development teams beyond what they thought feasible. And broad technical underpinnings give them an understanding of how different pieces of technology can come together into a system needed by new markets.

Design aesthetic is harder to nail down. In my view, it's intrinsic to understanding what a market needs and where a market is going. Perhaps this is better understood as marketing foresight, or the ability to foresee how a potential product fits into a market. At some deep level, this is the essence of design excellence in my mind.

The other aspect of visionary leaders is that they can do it all, from development to marketing to sales to finance. What sets them apart is that they integrate all these disciplines in a single individual, or perhaps a pair. Equally important, they can recognize excellence in others. As such, when failures occur, visionary leaders can distinguish bad luck from poor performance and act accordingly.

Finally, most visionary leaders are deeply immersed in the markets they serve or are about to transform. They understand what's happening, what's needed and where the market could potentially go if one just applied the right technologies to it.

When you combine all these characteristics in one or a pair of individuals, with corporate resources behind them, they move markets.

The challenges of Visionary Organizations

On the other hand, visionary organizations that create independent research labs can live forever, as long as they continue to produce viable IP. Corporate research labs must balance an ongoing commitment to basic research against the need to move a corporation's technology forward.

That's not to say that the technology they work on doesn't have business applications. In some cases, they create entire new lines of business, such as Watson from IBM Research. However, most research may never reach corporate products. Nonetheless, research labs always generate copious IP, which can often be licensed and may represent a significant revenue stream in its own right.

The trick for any independent research organization is to balance the pursuit of basic science against broad corporate interests, recognize research with potential product applications, and guide that research into technology development. IBM seems to have turned their research arm around by rotating some of their young scientists out into the field to see what the business is trying to accomplish. When they return to their labs, their research often takes on some of the problems they noticed during their field experience.

How much to fund such endeavors is another critical factor. There seems to be a size effect: I have noticed small research arms, of less than 20 people, that seem to flounder, going after the trend of the moment and failing to generate any useful IP.

In comparison, IBM Research is well funded (~6% of 2010 corporate revenue) with over 3000 researchers (out of a total employee population of 400K) in 8 labs. The one lab highlighted in the article above (Zurich) had 350 researchers covering 5 focus areas, or ~70 researchers per area.

Most research labs augment their activities by performing joint research projects with university researchers and other collaborators. This can multiply research endeavors, but it often takes some funding to get off the ground.

Research labs often lose their way and seem to spend significant funds on less rewarding activities.  But by balancing basic science with corporate interests, they can become very valuable to corporations.

~~~~

In part 3 of this series we discuss the advantages and disadvantages of startup acquisitions and how they can help and hinder a company’s R&D effectiveness.

Image: IBM System/360 by Marcin Wichary

Roads to R&D success – part 1

Large corporations have a serious problem.  We have talked about this before (see Is M&A the only way to grow, R&D Effectiveness, and Technology innovation).

It's been brewing for years, some say decades. Successful companies generate lots of cash, but investing in current lines of business seldom propels corporations into new markets.

So what can they do?

  • Buy startups – yes, doing so can move corporations into new markets and obtain new technology and perhaps even a functioning product. However, they often invest in unproven technology, asymmetrical organizations and mistaken ROIs.
  • Invest internally – yes, they can certainly start new projects, give them resources and let them run their course. However, they burden most internal project teams with higher overhead, functioning perfection, and loftier justification.

Another approach, trumpeted by Cisco and others in recent years, is spin-out/spin-in, which is probably a little of both. Here a company provides funding, developers and even IP to an entity that is spun out of the company. The spin-out is dedicated to producing some product in a designated new market and then, if goals are met, can be spun back into the company at a high but fair price.

The most recent example is Cisco's spin-in Insieme, which is going after SDN and OpenFlow networking, but their prior success with Andiamo and its FC SAN technology is another. GE, Intel and others have also tried this approach, with somewhat less success.

Corporate R&D today

Most companies have engineering departments with a tried and true project management/development team approach that has stage gates, generates requirements, architects systems, designs components and finally develops products. A staid, steady project cycle, which nevertheless is fraught with traps, risks and detours. These sorts of projects seem only able to enhance current product lines and move products forward to compete in their current markets.

But these projects never seem transformative. They don't take a company from 25% to 75% market share or triple corporate revenues in a decade. They typically fight a rear-guard action against a flotilla of competitors all going after the same market, at worst trying not to lose market share and at best gaining modest share where possible.

How corporations succeed at internal R&D

But there are a few different models that have generated outsized internal R&D success in the past.  These generally fall into a few typical patterns.  We discuss two below.

One depends on visionary leadership and the other on visionary organizations.  For example, let’s look at IBM, AT&T’s Bell Labs and Apple.

IBM R&D in the past and today

First, examine IBM, whose CEO, Thomas J. Watson Jr., bet the company on System/360 from 1959 to 1964. That endeavor cost them ~$5B at the time but eventually catapulted them from one of many computer companies to an almost-monopoly in mainframes for two decades. They created an innovative microcoded CISC architecture that spanned a family of system models and standardized I/O with common peripherals. From that point on, IBM was able to dominate corporate data processing until the mid-1980s. IBM has arguably lost and found their way a couple of times since then.

However, as another approach to innovation, IBM Research was founded in 1945. Today IBM Research is a well-funded, independent research lab that generates significant IP in supercomputing, artificial intelligence and semiconductor technology.

Nonetheless, during the decades since 1945, IBM Research struggled for corporate relevance, occasionally coming out with significant IT technology like relational databases, thin-film recording heads and RISC architectures. But arguably such advances were put to better use outside IBM. Recently this seems to have changed, and we now see significant technology from IBM Research moving IBM into new markets.

AT&T and Bell Labs

Bell Labs is probably the most prolific research organization the world has seen. They invented statistical process control, the transistor and information theory, and probably another dozen or so Nobel-prize-winning ideas. Early on, most of their technology made it into the Bell System, but later on they lost their way.

Their parent company, AT&T, had a monopoly on long distance phone service, switching equipment and other key technologies in the USA's phone system for much of the twentieth century. During most of that time, Bell Labs was well funded and charged with advancing Bell System technology.

Nonetheless, despite Bell Labs' obvious technological success, in the end they mostly served to preserve and enhance the phone system rather than disrupt it. Some of this was due to Justice Department decrees limiting AT&T's endeavors. But in any case, like IBM Research, much of Bell Labs' technology was taken up by others and transformed many markets.

Apple yesterday and today

Then there's Apple. They have almost single-handedly created three separate markets (the personal computer, the personal music player and the tablet computer) while radically transforming the smart phone market as well. In every case there were significant precursors to the technology, but Apple was the one to catalyze, popularize and capitalize on each one.

The Apple II was arguably the first personal computer, but the Macintosh redefined the paradigm. The Mac wasn't the great success it could have been, mostly due to management changes that moved Jobs out of Apple. But its potential forced major competitors to change their products substantially.

When Jobs returned, he re-invigorated the Mac.  After that, he went about re-inventing the music player, the smart phone and tablet computing.

Could Apple have done all this without Jobs? I doubt it. Could a startup have taken any of these on? Perhaps, but I think it unlikely.

The iPod depended on music industry contracts, back office and desktop software, and deep technological acumen. None of these were exclusive to Apple or to big corporations. Nevertheless, Jobs saw the way forward first, put in the effort to make it happen, and Apple reaped the substantial rewards that ensued.

~~~~

In part 2 of Roads to R&D success we propose some options for turning corporate R&D into the serious profit generator it can become. Stay tuned.

To be continued …

Image: Replica of first transistor from Wikipedia

 

12 atoms per bit vs 35 bits per electron

Image: six atom pairs in a row, colored blue for interstitial space and yellow for the external facets of the atoms (from the Technology Review article)

I read a story today in Technology Review, Magnetic Memory Miniaturized to Just 12 Atoms, about a team at IBM Research that created a (spin) magnetic “storage device” that used 12 iron atoms to record a single bit (near absolute zero and just for a few hours). The article said it was about 100X denser than the previous magnetic storage record.

Holographic storage beats that

Wikipedia's (soon to go dark for 24hrs) article on Memory Storage Density mentioned research at Stanford that in 2009 created an electronic quantum holographic device storing 35 bits/electron, using a sheet of copper atoms to record the letters S and U.

The Wikipedia article went on to equate 35 bits/electron to ~3 Exabytes [10**18 bytes]/in**2. (How Wikipedia converted from bits/electron to EB/in**2 I don't know, but I'll accept it as a given.)

Now an iron atom has 26 electrons and copper has 29 electrons. If 35 bits/electron is 3 EB/in**2 (or ~24 Eb/in**2), then 1 bit per 12 iron atoms (or 12*26=312 electrons) should be ~0.0032 bits/electron, or ~275TB/in**2 (~2.2Pb/in**2). Not quite to the scale of the holographic device, but interesting nonetheless.
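
Since the unit conversions here are easy to fumble, here's that arithmetic as a quick Python sketch (taking the Wikipedia 35 bits/electron to 3 EB/in**2 equivalence as a given, using 8 bits/byte, and assuming areal density scales linearly with bits/electron):

```python
# Scale the Stanford holographic figure (35 bits/electron ~ 3 EB/in**2,
# per Wikipedia, taken as a given) down to IBM's 12-iron-atoms-per-bit result.

EB = 10**18  # bytes

holographic_bits_per_electron = 35
holographic_density = 3 * EB * 8                    # bits/in**2 (~24 Eb/in**2)

electrons_per_iron_atom = 26
electrons_per_bit = 12 * electrons_per_iron_atom    # 312 electrons per bit
magnetic_bits_per_electron = 1 / electrons_per_bit  # ~0.0032 bits/electron

# Assume areal density scales linearly with bits/electron
magnetic_density = holographic_density * (
    magnetic_bits_per_electron / holographic_bits_per_electron
)

print(f"{magnetic_bits_per_electron:.4f} bits/electron")  # 0.0032
print(f"{magnetic_density / 8 / 10**12:.0f} TB/in**2")    # ~275
print(f"{magnetic_density / 10**15:.1f} Pb/in**2")        # ~2.2
```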

What can that do for my desktop?

Given that today's recording heads/media have demonstrated ~3.3Tb/in**2 (see our Disk drive density multiplying by 6X post), 12 atoms per bit is a significant advance for (spin) magnetic storage.

With today's disk industry shipping 1TB disk platters using ~0.6Tb/in**2 (see our Disk capacity growing out of sight post), these technologies, if implemented in a disk form factor, could store from ~3.7PB to ~40EB in a 3.5″ form factor storage device.
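
Extending the same back-of-the-envelope math, and assuming capacity scales linearly with areal density from today's ~0.6Tb/in**2, 1TB platters:

```python
# Scale a 1TB platter (at ~0.6 Tb/in**2 today) linearly with areal density.
Tb, Pb, Eb = 10**12, 10**15, 10**18        # all in bits

today_density = 0.6 * Tb                   # bits/in**2 for a 1TB platter
magnetic_limit = 2.2 * Pb                  # 12-atoms-per-bit density, from above
holographic_limit = 24 * Eb                # 3 EB/in**2 expressed in bits

magnetic_TB = 1.0 * magnetic_limit / today_density       # TB per platter
holographic_TB = 1.0 * holographic_limit / today_density

print(f"magnetic limit:    {magnetic_TB / 1000:.1f} PB")           # ~3.7 PB
print(f"holographic limit: {holographic_TB / 10**6:.0f} EB")       # ~40 EB
print(f"ratio:             {holographic_TB / magnetic_TB:,.0f}X")  # ~11,000X
```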

So there is a limit to (spin) magnetic storage, and the holographic limit is about 11,000X larger. Once again holographic storage proves it can store significantly more data than magnetic storage, if only it could be commercialized. (Probably a subject to cover in a future post.)

~~~~

I don't know about you, but a 3.7PB drive is probably more than enough storage for my lifetime and then some. But then again, those new 4K high definition videos may take up a lot more space than my (low definition) DVD collection.

Comments?

 


Making hardware-software systems design easier

Exposed by AMagill (cc) (from Flickr)

Recent research from MIT on streamlining chip design was in the news today. The report described work done by Nirav Dave, PhD, and Myron King to create a new programming language, BlueSpec, that can convert specifications into hardware chip designs (Verilog) or compile them into software (C++).

BlueSpec designers can tag (annotate) system modules as hardware or software. The intent of the project is to make it easier to decide what is done in hardware versus software. By specifying this decision as a language attribute, architectural hardware-software tradeoffs become much easier to make and, as a result, the decision can be delayed until much later in the development cycle.
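
The article doesn't show BlueSpec's actual annotation syntax, but the concept is easy to sketch. Below is a purely hypothetical Python analogy (the Module class, the target tags and the generate() routine are all invented for illustration, not BlueSpec): one system description where each module carries a hardware/software tag, and the toolchain routes each module to the matching back end.

```python
# Hypothetical sketch of the BlueSpec idea (invented for illustration,
# not actual BlueSpec syntax): one system description, per-module tags.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    target: str  # "hardware" or "software" -- the only thing a designer flips
    spec: str    # behavioral description shared by both back ends

def generate(module: Module) -> str:
    """Route a module description to the appropriate back end."""
    if module.target == "hardware":
        return f"// Verilog RTL for {module.name}: {module.spec}"
    if module.target == "software":
        return f"// C++ for {module.name}: {module.spec}"
    raise ValueError(f"unknown target: {module.target}")

# Re-partitioning the design is a one-word change, not a re-coding effort:
codec = Module("video_decoder", target="hardware", spec="decode(frame)->pixels")
print(generate(codec))
codec.target = "software"  # move the same module into software
print(generate(codec))
```

The point of the analogy is simply that the partitioning decision lives in a tag rather than in the module's implementation language.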

Hardware-software tradeoffs

Making good hardware-software tradeoffs is especially important in mobile handsets, where power efficiency and system performance requirements often clash. It's not unusual in these systems for functionality to be moved from a hardware to a software implementation or vice versa.

The problem is that the two implementations (hardware and software) use different design languages, and changing one into the other would typically require a complete re-coding effort, delaying system deployment significantly. This makes such decisions all the more important to get right early in system architecture.

In contrast, with BlueSpec, all it takes is a different tag to have the language translate a module into Verilog (chip design language) or C++ (software code).

Better systems through easier hardware design

There is a long-running debate over commodity hardware versus purpose-built hardware in storage systems (see Commodity Hardware Always Loses and Commodity Hardware Debate Heats-up Again). We believe there will be a continuing place for purpose-built hardware in storage. I would go on to say this is likely the case in networking, server systems and telecommunications handsets/back-office equipment as well.

The team at MIT specifically created their language to help create more efficient mobile phone handsets. But from my perspective it has an equally valid part to play in storage and other systems.

Hardware and software design, more similar than different

Nowadays, hardware and software designers are all just coders using different languages.

Yes, hardware engineers have more design constraints and have to deal with the real, physical world of electronics. But what they deal with most is a hardware design language and design verification tools tailored for their electronic design environment.

Doing hardware design is not that much different from coding in a specific software language like C++ or Java. Software coders must also understand the framework/virtual machine/OS environment their code operates in to produce something that works. Perhaps design verification tools don't exist in software to the extent they should, but that is more a subject for research than a distinction between the two types of designers.

~~~~

Whether BlueSpec is the final answer isn't as interesting as the fact that it takes a first step toward unifying system design. Being able to decide much later in the process whether to implement a module in hardware or software will benefit all system designers and should get products out with less delay. And getting hardware designers and software coders talking more, using the same language to express their designs, can't help but result in better, more tightly integrated designs, which ends up benefiting the world.

Comments?

How has IBM Research changed?

IBM Neuromorphic Chip (from Wired story)

What do Watson, neuromorphic chips and racetrack memory have in common? They all emerged out of IBM Research labs.

I have been wondering for some time how a company known for its cutting-edge research but lack of product breakthroughs has transformed itself into an innovation machine.

There has been a sea change in research at IBM that is behind this recent productization of technology.

Talking over the past couple of days with various IBMers at STG's Smarter Computing Forum, I have formulated a preliminary hypothesis.

At first I heard that there was a change in the way research is reviewed for product potential. Nowadays it almost takes a business case for research projects to be approved and funded, and the business case needs to contain a plan for how the project will eventually reach profitability.

In the past it was often said that IBM invented a lot of technology but productized only a little of it. Much of their technology would emerge in other people's products, and IBM would not receive anything for their efforts (other than some belated recognition for their research contribution).

Nowadays, it's more likely that research not productized by IBM is at least licensed from them, after they have patented the crucial technologies that underpin the advance. And if a project has something to do with IT, it's just as likely to end up as a product.

One executive at STG sees three phases to IBM Research spanning the last 50 years or so.

Phase I: The ivory tower

IBM Research during the ivory tower era looked a lot like a research university, but without the tenure of true professorships. Much of the research of this era was in materials and pure mathematics.

I suppose one example of this period was Mandelbrot and fractals. The work probably had a lot of applications, but few of them ended up in IBM products; mostly it advanced the theory and practice of pure mathematics/systems science.

Such research had little to do with the problems of IT or IBM's customers. The fact that it created pretty pictures and a way of seeing nature in a different light was an advance for mankind, but it didn't have much, if any, impact on IBM's bottom line.

Phase II: Joint project teams

In IBM Research's phase II, the decision process on which research to move forward included people not just from IBM Research but also from the product divisions. At least now there could be a discussion across IBM's various divisions on how a technology could enhance customer outcomes. I am certain profitability wasn't often discussed, but at least it was no longer purposefully ignored.

I suppose over time these discussions became more grounded in fact and business cases, rather than just belief in the value of research for research's sake. Technology roadmaps and projects were now evaluated on how well they could impact customer outcomes and how such technology enabled new products and solutions to come to market.

Phase III: Researchers and product people intermingle

The final step in IBM's transformation of research involved the human element: people started moving around.

Researchers were assigned to the field and to product groups, and product people were brought into the research organization. By doing this, ideas could cross-fertilize, applications could be envisioned, and the last finishing touches needed by new technology could be funded and implemented. This probably led to the most productive transition of researchers into product developers.

On the flip side, when researchers returned from their multi-year product/field assignments, they brought back a new-found appreciation of problems encountered in the real world. That, combined with their in-depth understanding of where technology could go, helped show the path that could take research projects into new, more fruitful (at least to IBM customers) arenas. This movement of people provided the final piece in grounding research in areas that could solve customer problems.

In the end, many research projects at IBM may fail, but those that succeed have the potential to change IT as we know it.

I heard today that there are 700 to 800 projects in IBM Research right now. If any of them have the potential we see in the products shown today, like Watson in healthcare and neuromorphic chips, exciting times are ahead.

Commodity hardware debate heats up again

Gold Nanowire Array by lacomj (cc) (from Flickr)

A post by Chris M. Evans on his The Storage Architect blog (Intel inside storage arrays) re-invigorated the discussion we had last year on whether commodity hardware always loses.

But buried in the comments was one from Michael Hay (HDS), which pointed to another post by Andrew Huang on his bunnie's blog (Why the best days of open hardware are ahead), where he has an almost brilliant discussion of how Moore's law will eventually peter out (~5nm), at which point transistor density will take much longer to double. At that time, hardware customization (by small companies/startups) will once again come to the forefront of new technology development.

Custom hardware, here now and the foreseeable future

Although it would be hard to argue against Andrew's point, I firmly believe there is still plenty of opportunity today to customize hardware in ways that bring true value to the market. The fact is that Moore's law doesn't mean hardware customization cannot still be worthwhile.

Hitachi's VSP (see Hitachi's VSP vs. VMAX) is a fine example of the use of custom ASICs, FPGAs (I believe) and standard off-the-shelf hardware. HP's 3PAR is another example; they couldn't have their speedy mesh architecture without custom hardware.

But will anyone be around who can do custom chip design?

Nigel Poulton commented on Chris's post that, with custom hardware seemingly going away, the infrastructure, training and people needed to support any re-invigorated custom hardware movement will no longer be around.

I disagree. Intel, IBM, Samsung and many other large companies still maintain active electronics engineering teams and chip design capabilities, any of which are capable of creating state-of-the-art ASICs. These capabilities are what make Moore's law a reality, and they will not go away over the long run (the next 20-30 years).

The fact that these competencies are locked up in very large organizations doesn't mean they cannot be used by small companies/startups as well. It probably does mean that this wherewithal may cost more. But the marketplace will deal with that in the long run, that is, if the need continues to exist.

But do we still need custom hardware?

Custom hardware creates capabilities that magnify Moore's law processing power to do things that standard, off-the-shelf hardware cannot. The main problem with Moore's law, from a custom hardware perspective, is that it takes functionality that required custom hardware yesterday (or 18 months ago) and makes it available in off-the-shelf components with custom software today.

This dynamic just means that custom hardware needs to keep moving, providing ever more user benefit and functionality to remain viable. When custom hardware cannot provide any real benefit over standard off-the-shelf components, that's when it will die.

Andrew talks about the time it takes to develop custom ASICs, and the fact that by the time you have one ready, a new standard chip has come out that doubles processor capabilities. Yes, custom ASICs take time to develop, but FPGAs can be created and deployed in much less time. FPGAs, like custom ASICs, also take advantage of Moore's law, with increased transistor density every 18 months. Yes, FPGAs may run slower than custom ASICs, but what they lack in processing power they make up in time to market.

Custom hardware has a bright future as far as I can see.

~~~~

Comments?