AI processing at the edge

I read a couple of articles over the past few weeks (TechCrunch: Google is making a fast, specialized TPU chip for edge devices … and IEEE Spectrum: Two startups use processing in flash for AI at the edge) about chips for AI at the IoT edge.

The two startups, Syntiant and Mythic, are moving to analog-only or mixed analog-digital solutions to provide the AI processing needed at the edge, while Google is taking its TPU technology to the edge. We have written about Google’s TPU before (see our TPU and hardware vs. software innovation (round 3) post).

The major challenge in AI processing at the edge is power consumption. Both startups attack the power problem by using flash and other analog circuitry to provide power-efficient compute.

Google attacked the power problem with its original TPU by reducing computational precision, down to 8-bit arithmetic from the 32- or 64-bit arithmetic standard on CPUs/GPUs. Fewer bits per operation means fewer transistors switching per operation, which lowers power requirements proportionally.

AI today is based on neural networks (NNs), which connect simulated neurons via simulated synapses, with a weight attached to each connection indicating whether to boost or attenuate the signal being transmitted. AI learning is done by establishing the connections between simulated neurons and synapses and setting those weights. So learning is setting weights and establishing connections. Actual inference (using the AI to do something) is a process of exciting the input simulated neurons/synapses and letting the signal flow through the NN, with each weight scaling its signal, to determine the output(s).
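To make the arithmetic concrete, here is a minimal sketch of one NN layer’s inference work (plain Python, purely illustrative; the function and values are hypothetical, not from either article):

```python
# Minimal sketch of one NN layer's inference work (illustrative only).
# Each output neuron computes a weighted sum (multiply-accumulate) of
# its inputs, then applies a nonlinearity.

def relu(x):
    return x if x > 0.0 else 0.0

def layer_forward(inputs, weights, biases):
    """inputs: N floats; weights: M rows of N floats; biases: M floats."""
    outputs = []
    for w_row, bias in zip(weights, biases):
        acc = bias
        for x, w in zip(inputs, w_row):
            acc += x * w            # one multiply-accumulate per synapse
        outputs.append(relu(acc))
    return outputs

# A 3-input, 2-neuron layer needs 3 x 2 = 6 MACs; real networks need
# millions to billions per inference, and that's where the power goes.
print(layer_forward([0.5, -1.0, 2.0],
                    [[0.1, 0.2, 0.3], [-0.4, 0.5, 0.6]],
                    [0.0, 0.1]))
```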

AI with standard compute

The problem with doing AI learning or inferencing on normal CPUs, or even GPUs, is that the NN performs thousands if not millions of multiply-accumulate operations at each simulated synapse-neuron connection. All those multiply-accumulates take power. CPUs and GPUs can do these sorts of operations on 32- or 64-bit integers or even floating point, but it still takes power.

AI processing power

AI processing efficiency is measured in trillions of (multiply-accumulate) operations per second per watt (TOPS/W). Mythic believes it can perform 4 TOPS/W and Syntiant says it can do 20 TOPS/W. In comparison, the NVIDIA Volta V100 can do about 0.4 TOPS/W (according to the article), although comparing Syntiant-Mythic TOPS to NVIDIA TOPS is a little like comparing apples to oranges.

A current Intel Xeon Platinum 8180M (2.5GHz, 28 cores, 205W) can probably do (assuming one multiply-accumulate per core per clock) about 2.5 billion ops/sec × 28 cores = 70 billion ops/sec ÷ 205W, or roughly 0.3 GOPS/W (source: Platinum 8180M data sheet).

As for Google’s TPU TOPS/W: TPU2 is rated at 45 TFLOPS/chip, and the best guess for power consumption is between 160W and 200W, say 180W. At that power level, TPU2 should hit about 0.25 TFLOPS/W. TPU3 is coming out with 8X the compute, but it uses water cooling (read: LOTS MORE POWER).
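Pulling those numbers together, here’s a quick back-of-the-envelope comparison (a sketch using the figures above; all inputs are article claims and my estimates, not measured benchmarks):

```python
# Back-of-the-envelope efficiency comparison using the figures cited above.
# Inputs are vendor claims and estimates, not measured benchmarks.

def tops_per_watt(ops_per_sec, watts):
    return ops_per_sec / 1e12 / watts

xeon_ops = 2.5e9 * 28   # 1 MAC/core/clock at 2.5GHz across 28 cores
tpu2_ops = 45e12        # 45 TFLOPS/chip (rated)

results = {
    "Syntiant (claimed)": 20.0,
    "Mythic (claimed)": 4.0,
    "NVIDIA V100": 0.4,                                # per the article
    "Google TPU2 (est.)": tops_per_watt(tpu2_ops, 180.0),
    "Intel Xeon 8180M": tops_per_watt(xeon_ops, 205.0),
}

for name, tpw in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {tpw:10.5f} TOPS/W")
```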

Nonetheless, it appears that Mythic and Syntiant are one to two orders of magnitude better than the best that NVIDIA and TPU2 can do today, and several orders of magnitude better than Intel x86.

Improving TOPS/W

Using NAND as an analog memory to read, write and hold NN weights is an easy way to reduce power consumption. Combine that with analog circuitry that can do multiplication and addition directly on those flash values and you have an AI NN processor. This collapses compute and memory into the same componentry, eliminating the need to hold weights in memory and do compute in registers.
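The physics does the math: a weight stored as a flash cell’s conductance, an input applied as a voltage, Ohm’s law performing the multiply and Kirchhoff’s current law performing the accumulate as cell currents sum on a shared bit line. Here’s a toy model of the general processing-in-memory idea (my sketch, not either vendor’s actual design):

```python
# Toy model of an in-flash analog multiply-accumulate (a sketch of the
# general processing-in-memory idea, not either vendor's actual design).
# Weights live in flash cells as conductances; inputs arrive as voltages.

def bitline_current(voltages, conductances):
    """Ohm's law multiplies (I = G * V) and Kirchhoff's current law
    accumulates (per-cell currents sum on the shared bit line), so one
    analog read yields a whole dot product."""
    return sum(v * g for v, g in zip(voltages, conductances))

inputs = [0.3, 0.8, 0.1]             # activations, as cell voltages (V)
weights = [1.0e-6, 2.0e-6, 0.5e-6]   # NN weights, as conductances (S)

# A digital chip would burn a multiplier plus an adder cycle per term;
# here the array produces the sum as a single current measurement.
print(bitline_current(inputs, weights))  # amps, proportional to the MAC
```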

The major difference between Syntiant and Mythic seems to be the amount of analog circuitry they use. Mythic seems to relegate the analog circuitry to an accelerator, while Syntiant uses analog circuitry more extensively throughout its chip, which is probably why it claims 5X the TOPS/W of Mythic’s IPU.

IBM and others have been working on neuromorphic chips, some of which are analog based and others all digital. We’ve written extensively on IBM’s and some on MIT’s approaches (for the latest on IBM, see: More power efficient deep learning through IBM and PCM; for MIT, see: MIT builds an analog synapse chip); follow the links there to learn more.

~~~~

Special purpose AI hardware is emerging from the labs and finally reaching reality. IBM R&D has been playing with it for a long time. Google is working on TPU3, so there’s no stopping them. And startups see an opening and are taking everyone on. Stay tuned, we’re in for a good long ride before someone rises above the crowd and becomes the next chip giant.

Comments?


Photo Credit(s): TechCrunch: Google is making a fast, specialized TPU chip for edge devices … article

Introduction to Digital Design Verification at Mythic, Medium.com Article

Images from Google Cloud Platform Blog on the TPU

Two startups use processing in flash for AI at the edge, IEEE Spectrum article courtesy of Mythic

Surprises in flash storage IO distributions from 1 month of Nimble Storage customer base

We were at Nimble Storage (videos of their sessions) for Storage Field Day 10 (SFD10) last week, and they presented some interesting IO statistics from data analysis across their 7,500-customer install base using InfoSight.

As I understand it, the data come from all customers that have maintenance and are currently connected to InfoSight, Nimble Storage’s SaaS support and analytics service. The data represent all IO over the course of a single month across the customer base. Nimble wrote a white paper summarizing their high-level analysis, called Busting the myth of storage block size.

(#Storage-QoW 2015-002): Will we see 3D TLC NAND GA in major vendor storage products in the next year?


I was almost going to just say something about TLC NAND but there’s planar TLC and 3D TLC. From my perspective, planar NAND is on the way out, so we go with 3D TLC NAND.

QoW 2015-002 definitions

By “3D TLC NAND” we mean 3-dimensional (rather than planar or 2-dimensional) triple level cell (storing 3 bits per cell rather than two [MLC] or one [SLC]) NAND technology. It could show up in SSDs, PCIe cards and perhaps other implementations. At least one flash vendor is claiming to be shipping 3D TLC NAND, so it’s available to be used. We did a post earlier this year on 3D NAND and how high it can go. Rumors are out that startup vendors will adopt the technology, but we have heard nothing about any major vendor’s plans for it.

By “major vendor storage products” I mean EMC VMAX, VNX or XtremIO; HDS VSP G1000, HUS VM (or replacement), VSP-F/VSP G800-G600; HPE 3PAR; IBM DS8K, FlashSystem, or V7000 StorWize; & NetApp AFF/FAS 8080, 8060, or 8040. I tried to limit the list to the major storage vendors’ block storage product lines that support 700 drives or more.

By “in the next year” I mean between today (15Dec2015) and one year from today (15Dec2016).

By “GA” I mean a generally available product offering that can be ordered, sold and installed within the time frame identified above.

Forecasts for QoW 2015-002 need to be submitted via email (or via twitter with email addresses known to me) to me before end of day (PT) next Tuesday 22Dec2015.

Thanks to Howard Marks (DeepStorage.net, @DeepStorageNet) for the genesis of this week’s QoW.

We are always looking for future QoW’s, so if you have any ideas please drop me a line.

Forecast contest – status update for prior QoW(s):

(#Storage-QoW 2015-001) – Will 3D XPoint be GA’d in enterprise storage systems within 12 months? 2 active forecasters, current forecasts are:

A) YES with 0.85 probability; and

B) NO with 0.62 probability.

These can be updated over time, so we will track current forecasts for both forecasters with every new QoW.


An analyst forecasting contest ala SuperForecasting & 1st #Storage-QoW

I recently read the book SuperForecasting: the art and science of prediction by P. E. Tetlock & D. Gardner. Their Good Judgement Project has been running for years now, and the book presents the results of their experiments. I thought it was a great book.

But it also got me to thinking, how can industry analysts do a better job at forecasting storage trends and events?

Impossible to judge most analyst forecasts

One thing the book mentioned was that analyst/pundit forecasts are typically too infrequent, too vague and too time-independent to be judged for accuracy. I have committed this fault as much as anyone in this blog and on our GreyBeards on Storage podcast (e.g., see our Yearend podcast videos…).

What do we need to do differently?

The experiments documented in the book show us the way. One suggestion is to start putting time durations/limits on all forecasts so that we can better assess analyst accuracy. Another is to estimate a probability for each forecast and update that estimate periodically as new information becomes available. Another is to document your rationale for making each forecast. Also, do post mortems on both correct and incorrect forecasts to learn how to forecast better.

Finally, make more frequent forecasts so that accuracy can be assessed statistically. The book discusses Brier scores as a way of scoring the accuracy of forecasters.
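For reference, a Brier score is just the mean squared error between forecast probabilities and actual outcomes: 0.0 is perfect, and always forecasting 50% scores 0.25. Here’s a quick sketch of how such forecasts could be scored (my illustration, not from the book):

```python
# Brier score: mean squared error between forecast probabilities and
# outcomes (1 = event happened, 0 = it didn't). Lower is better; 0.0 is
# perfect and a constant 50% forecast scores 0.25. Illustrative sketch.

def brier_score(forecast_probs, outcomes):
    """forecast_probs: probabilities assigned to the event occurring;
    outcomes: 1 if it occurred, 0 if not."""
    return sum((p - o) ** 2
               for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

# Hypothetical example: one forecaster says YES at 0.85, another says
# NO at 0.62 (i.e., 0.38 YES). If the event happens (outcome = 1):
print(brier_score([0.85], [1]))  # 0.0225 -- a good forecast
print(brier_score([0.38], [1]))  # 0.3844 -- worse than the 0.25 coin flip
```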

How to be better forecasters?

In the back of the book the authors publish a list of helpful hints or guidelines for better forecasting, which I will summarize here (read the book for more information):

  1. Triage – focus on questions where your work will pay off.  For example, try not to forecast anything that’s beyond say 5 years out, because there’s just too much randomness that can impact results.
  2. Split intractable problems into tractable ones – the authors call this Fermi-izing, after the physicist Enrico Fermi, who loved to ballpark answers to hard questions by breaking them down into easier questions to answer. So decompose problems into simpler (answerable) problems.
  3. Balance inside and outside views – search for comparisons (outside) that can be made to help estimate unique events and balance this against your own knowledge/opinions (inside) on the question.
  4. Balance over- and under-reacting to new evidence – as forecasts are updated periodically, new evidence should impact your forecasts. But a balance has to be struck as to how much new evidence should change forecasts.
  5. Search for clashing forces at work – in storage there are many ways to store data and perform faster IO. Search out all the alternatives, especially ones that can critically impact your forecast.
  6. Distinguish all degrees of uncertainty – there are many degrees of knowability, try to be as nuanced as you can and properly aggregate your uncertainty(ies) across aspects of the question to create a better overall forecast.
  7. Balance under/over confidence, prudence/decisiveness – rushing to judgement can be as bad as dawdling too long. You must get better at both calibration (how accurate your probability estimates prove over many forecasts) and resolution (decisiveness in forecasts). For calibration, think weather forecasts: if you say rain tomorrow is 80% probable, then over time it should rain on about 80% of such days (see the quick calibration sketch after this list). Resolution is no guts, no glory: if all your estimates are between 0.4 and 0.6 probable, you’re probably being too conservative to really be effective.
  8. During post mortems, beware of hindsight bias – e.g., of course we were going to have flash in storage because the price was coming down, controllers were becoming more sophisticated, reliability became good enough, etc., represents hindsight bias. What was known before SSDs came to enterprise storage was much less than this.
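As promised in hint #7, here’s a quick calibration sketch (again my illustration, with made-up forecasts): bucket forecasts by stated probability and compare each bucket’s stated probability to the observed frequency of the events.

```python
# Calibration check (illustration of hint #7, with made-up forecasts):
# group forecasts by stated probability and compare each group's stated
# probability to how often the forecast events actually happened.
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(o)
    for stated in sorted(buckets):
        hits = buckets[stated]
        print(f"stated ~{stated:.1f} -> happened {sum(hits)/len(hits):.2f} "
              f"({len(hits)} forecasts)")

# A well calibrated forecaster's 0.8 calls should verify ~80% of the time.
calibration_table([0.8, 0.8, 0.8, 0.8, 0.8, 0.3, 0.3, 0.3],
                  [1,   1,   1,   1,   0,   0,   0,   1])
```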

There are a few more hints than the above.  In the Good Judgement Project, forecasters were put in teams and there’s one guideline that deals with how to be better forecasters on teams. Then, there’s another that says don’t treat these guidelines as gospel. And a third, on trying to balance between over and under compensating for recent errors (which sounds like #4 above).

Again, I would suggest reading the book if you want to learn more.

Storage analysts forecast contest

I think we all want to be better forecasters. At least I think so. So I propose a multi-year contest, where someone provides a storage question of the week and analysts, such as myself, provide forecasts. Over time we can score the forecasts by creating a Brier score for each analyst’s set of forecasts.

I suggest we run the contest for 1 year to see if there’s any improvements in forecasting and decide again next year to see if we want to continue.

Question(s) of the week

But the first step in better forecasting is to have more frequent and better questions to forecast against.

I suggest that the analysts community come up with a question of the week. Then, everyone would get one week from publication to record their forecast. Over time as the forecasts come out we can then score analysts in their forecasting ability.

I would propose we use some sort of hash tag to track new questions, “#storage-QoW” might suffice and would stand for Question of the week for storage.

Not sure if one question a week is sufficient but that seems reasonable.

(#Storage-QoW 2015-001): Will 3D XPoint be GA’d in enterprise storage systems within 12 months?

3D XPoint NVM was announced last July by Intel-Micron (we wrote a post about it here). By enterprise storage I mean enterprise and mid-range class, shared storage systems that are accessed as block storage via Ethernet or Fibre Channel using SCSI device protocols, or as file storage using SMB or NFS file access protocols. By 12 months I mean by EoD 12/8/2016. By GA’d, I mean announced as generally available and sellable in any of the major IT regions of the world (USA, Europe, Asia, or the Middle East).

I hope to have my prediction in by next Monday with the next QoW as well.

Anyone interested in participating please email me at Ray [at] SilvertonConsulting <dot> com and put QoW somewhere in the title. I will keep actual names anonymous unless told otherwise. Brier scores will be calculated starting after the 12th forecast.

Please email me your forecasts. Initial forecasts need to be in by one week after the QoW goes live.  You can update your forecasts at any time.

Forecasts should be of the form “[YES|NO] Probability [0.00 to 0.99]”.

Better forecasting demands some documentation of the rationale behind your forecasts. You don’t have to send me your rationale, but I suggest you document it someplace you can refer back to during post mortems.

Let me know if you have any questions and I will try to answer them here.

I could use more storage questions…

Comments?

Photo Credits: Renato Guerreiro, Crystalballer

HDS Influencer Summit wrap up

[Sorry for the length, it was a long day] There was an awful lot of information supplied today. The morning sessions were all open, but most of the afternoon was under NDA.

Jack Domme, HDS CEO, started the morning off talking about the growth in HDS market share: another 20% y/y growth in revenue for HDS. They seem to be hitting the right markets with the right products. They have found a lot of success in emerging markets in Latin America, Africa and Asia. As part of this thrust into emerging markets, HDS is opening a manufacturing facility in Brazil and a sales/solution center in Colombia.

Jack spent time outlining the infrastructure cloud to content cloud to information cloud transition that they believe is coming in the IT environment of the future.   In addition, there has been even greater alignment within Hitachi Ltd and consolidation of engineering teams to tackle new converged infrastructure needs.

Randy DeMont, EVP and GM Global Sales, Services and Support got up next and talked about their success with the channel. About 50% of their revenue now comes from indirect sources. They are focusing some of their efforts to try to attract global system integrators that are key purveyors to Global 500 companies and their business transformation efforts.

Randy talked at length about some of their recent service offerings, including managed storage services. As customers begin to trust HDS with their storage, they start considering moving their whole data center to HDS. Randy said this is a $1B opportunity for HDS and the only thing holding them back is finding the right people with the skills necessary to provide this service.

Randy also mentioned that over the last 3-4 years HDS has gained 200-300 new clients a quarter, which is introducing a lot of new customers to HDS technology.

Brian Householder, EVP, WW Marketing, Business Development and Partners got up next and talked about how HDS has been delivering on their strategic vision for the last decade or so. With HUS VM, HDS has moved storage virtualization down market, into a rack-mounted 5U storage subsystem.

Brian mentioned that 70% of their customers are now storage virtualized (meaning that they have external storage managed by VSP, HUS VM or prior versions).  This is phenomenal seeing as how only a couple of years back this number was closer to 25%.  Later at lunch I probed as to what HDS thought was the reason for this rapid adoption, but the only explanation was the standard S-curve adoption rate for new technologies.

Brian talked about some big data applications where HDS and Hitachi Ltd business units collaborate to provide business solutions. He mentioned the London Summer Olympics sensor analytics, medical imaging analytics, and heavy construction equipment analytics. Another example he mentioned was financial analysis firms using satellite images of retail parking lots to predict retail revenue growth or loss. HDS’s big data strategy seems to be vertically focused, building on the strength of Hitachi Ltd’s portfolio of technologies. This was the subject of a post-lunch discussion between John Webster of Evaluator Group, myself and Brian.

Brian talked about their storage economics professional services engagements. HDS has done over 1200 storage economics engagements, has written books on the topic, and has iPad apps to support it. In addition, Brian mentioned that in a recent The Info Pro survey, HDS was rated number 1 in value for storage products.

Brian talked some about HDS strategic planning frameworks, one of which is an approach to identifying investments that maximize share of IT spend across various market segments. Since 2003, when HDS was an 80% hardware revenue company, to today, when over 50% of revenue comes from software and services, they have broadened their portfolio extensively.

John Mansfield, EVP Global Solutions Strategy and Development and Sean Moser, VP Software Platforms Product Management spoke next and talked about HCP and HNAS integration over time. It was just 13 months ago that HDS acquired BlueArc and today they have integrated BlueArc technology into HUS VM and HUS storage systems (it was already the guts of HNAS).

They also talked about the success HDS is having with HCP their content platform. One bank they are working with plans to have 80% of their data in an HCP object store.

In addition there was a lot of discussion on UCP Pro and UCP Select, HDS’s converged server, storage and networking systems for VMware environments. With UCP Pro the whole package is ordered as a single SKU. In contrast, with UCP Select partners can order different components and put it together themselves.  HDS had a demo of their UCP Pro orchestration software under VMware vSphere 5.1 vCenter that allowed VMware admins to completely provision, manage and monitor servers, storage and networking for their converged infrastructure.

They also talked about their new Hitachi Accelerated Flash storage which is an implementation of a Flash JBOD using MLC NAND but with extensive Hitachi/HDS intellectual property. Together with VSP microcode changes, the new flash JBOD provides great performance (1 Million IOPS) in a standard rack.  The technology was developed specifically by Hitachi for HDS storage systems.

Mike Walkey, SVP Global Partners and Alliances, got up next and talked about their vertically oriented channel strategy. HDS is looking for channel partners that can expand their reach to new markets, provide services along with the equipment, and make a difference in these markets. They have been spending more time and money on vertical shows such as VMworld and SAPPHIRE, rather than horizontal storage shows (such as SNW). Mike mentioned key high-level partnerships with Microsoft, VMware, Oracle, and SAP as helping to drive solutions into these markets.

Hicham Abhessamad, SVP, Global Services got up next and talked about the level of excellence available from HDS services.  He indicated that professional services grew by 34% y/y while managed services grew 114% y/y.  He related a McKinsey study that showed that IT budget priorities will change over the next couple of years away from pure infrastructure to more analytics and collaboration.  Hicham talked about a couple of large installations of HDS storage and what they are doing with it.

There were a few sessions of one on ones with HDS executives and couple of other speakers later in the day mainly on NDA topics.  That’s about all I took notes on.  I was losing steam toward the end of the day.

Comments?

Roads to R&D success – part 2

This is the second part of a multi-part post.  In part one (found here) we spent some time going over some prime examples of corporations that generated outsize success from their R&D activities, highlighting AT&T with Bell Labs, IBM with IBM Research, and Apple.

I see two viable models for outsized organic R&D success:

  • One is based on a visionary organizational structure which creates an independent R&D lab.  IBM has IBM Research, AT&T had Bell Labs, other major companies have their research entities.  These typically have independent funding not tied to business projects, broadly defined research objectives, and little to no direct business accountability.  Such organizations can pursue basic research and/or advanced technology wherever it may lead.
  • The other is based on visionary leadership, where a corporation identifies a future need, turns completely to focus on the new market, devotes whatever resources it needs and does a complete forced march towards getting a product out the door.  While these projects sometimes have stage gates, more often than not, they just tell the project what needs to be done next, and where resources are coming from.

The funny thing is that both approaches have changed the world.  Visionary leadership typically generates more profit in a short time period. But visionary organizations often outlast any one person and in the long run may generate significant corporate profits.

The challenges of Visionary Leadership

Visionary leadership balances broad technological insight with a design aesthetic that includes a deep understanding of what’s possible within a corporate environment. Combine all that with an understanding of what some market needs and you have a combination that reconstructs industries.

Visionary leadership is hard to find.  Leaders like Bill Hewlett, Akio Morita and Bill Gates seem to come out of nowhere, dramatically transform multiple industries and then fade away.  Their corporations don’t ever do as well after such leaders are gone.

Often visionary leaders come up out of the technical ranks.  This gives them the broad technical knowledge needed to identify product opportunities when they occur.   But, this technological ability also helps them to push development teams beyond what they thought feasible.  Also, the broad technical underpinnings gives them an understanding of how different pieces of technology can come together into a system needed by new markets.

Design aesthetic is harder to nail down. In my view, it’s intrinsic to understanding what a market needs and where that market is going. Perhaps this is better understood as marketing foresight. Maybe it’s just the ability to foresee how a potential product fits into a market. At some deep level, this is the essence of design excellence in my mind.

The other aspect of visionary leaders is that they can do it all, from development to marketing to sales to finance. What sets them apart is that they integrate all these disciplines into a single individual or perhaps a pair of individuals. Equally important, they can recognize excellence in others. As such, when failures occur, visionary leaders can decipher the difference between bad luck and poor performance and act accordingly.

Finally, most visionary leaders are deeply immersed in the markets they serve or are about to transform. They understand what’s happening, what’s needed and where the market could potentially go if the right technologies were applied to it.

When you combine all these characteristics in one or a pair of individuals, with corporate resources behind them, they move markets.

The challenges of Visionary Organizations

On the other hand, visionary organizations that create independent research labs can live forever, as long as they continue to produce viable IP. Corporate research labs must balance an ongoing commitment to basic research against the need to move the corporation’s technology forward.

That’s not to say that the technology they work on doesn’t have business applications. In some cases, they create entire new lines of business, such as Watson from IBM Research. However, most research may never reach corporate products. Nonetheless, research labs always generate copious IP, which can often be licensed and may represent a significant revenue stream in its own right.

The trick for any independent research organization is to balance the pursuit of basic science within broad corporate interests, recognizing research with potential product applications, and guiding that research into technology development.  IBM seems to have turned their research arm around by rotating some of their young scientists out into the field to see what business is trying to accomplish.  When they return to their labs, often their research takes on some of the problems they noticed during their field experience.

How much to fund such endeavors is another critical factor. There seems to be a size effect: I have noticed that small research arms of fewer than 20 people tend to flounder, chasing the trend of the moment and failing to generate any useful IP.

In comparison, IBM Research is well funded (~6% of 2010 corporate revenue) with over 3000 researchers (out of a total employee population of 400K) in 8 labs. The one lab highlighted in the article above (Zurich) had 350 researchers covering 5 focus areas, or ~70 researchers per area.

Most research labs augment their activities by performing joint research projects with university researchers and other collaborators. This can have the effect of multiplying research endeavors but often it will take some funding to accomplish and get off the ground.

Research labs often lose their way and seem to spend significant funds on less rewarding activities.  But by balancing basic science with corporate interests, they can become very valuable to corporations.

~~~~

In part 3 of this series we discuss the advantages and disadvantages of startup acquisitions and how they can help and hinder a company’s R&D effectiveness.

Image: IBM System/360 by Marcin Wichary

Making hardware-software systems design easier

Exposed by AMagill (cc) (from Flickr)

Recent research from MIT on streamlining chip design was in the news today. The report described work done by Nirav Dave, PhD, and Myron King to create a new programming language, BlueSpec, that can convert specifications into a hardware chip design (Verilog) or compile them into software (C++).

BlueSpec designers can tag (annotate) system modules as hardware or software. The intent of the project is to make it easier to decide what is done in hardware versus software. Specifying this decision with a language attribute makes architectural hardware-software tradeoffs much easier to explore and, as a result, lets designers delay the decision until much later in the development cycle.

Hardware-software tradeoffs

Making good hardware-software tradeoffs is especially important in mobile handsets, where power efficiency and system performance requirements often clash. It’s not unusual in these systems for functionality to move from a hardware to a software implementation or vice versa.

The problem is that the two implementations (hardware and software) use different design languages, so changing one into the other typically requires a complete re-coding effort, delaying system deployment significantly. That makes such decisions all the more important to get right early in system architecture.

In contrast, with BlueSpec, all it would take is a different tag to have the language translate the module into Verilog (chip design language) or C++ (software code).
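To illustrate the idea only (this is a toy Python sketch of the tag-and-retarget concept, not actual BlueSpec syntax): describe a module once, tag it, and let the compiler pick the backend.

```python
# Toy sketch of the tag-a-module idea (NOT actual BlueSpec syntax): the
# same module description is routed to a hardware or software backend
# based on an annotation, so the HW/SW split can change without recoding.

HARDWARE, SOFTWARE = "hardware", "software"

def module(target):
    """Decorator that tags a module description with its target."""
    def wrap(fn):
        fn.target = target
        return fn
    return wrap

@module(HARDWARE)       # flip this one tag to SOFTWARE to retarget
def fir_filter():
    """Module behavior described once, in one language."""

def compile_module(mod):
    if mod.target == HARDWARE:
        return f"// emit Verilog for {mod.__name__}"  # hardware backend
    return f"// emit C++ for {mod.__name__}"          # software backend

print(compile_module(fir_filter))
```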

Better systems through easier hardware design

There is a long-running debate over commodity hardware versus special-purpose hardware in storage systems (see Commodity Hardware Always Loses and Commodity Hardware Debate Heats-up Again). We believe there will be a continuing place for special-purpose hardware in storage, and I would go on to say the same likely holds for networking, server systems, and telecommunications handsets/back-office equipment.

The team at MIT specifically created their language to help build more efficient mobile phone handsets. But from my perspective, it has an equally valid part to play in storage and other systems.

Hardware and software design, more similar than different

Nowadays, hardware and software designers are all just coders using different languages.

Yes, hardware engineers have more design constraints and have to deal with the real, physical world of electronics. But what they deal with most is a hardware design language and design verification tools tailored to their electronic design environment.

Doing hardware design is not that much different from software developers coding in a specific language like C++ or Java. Software coders must also understand the framework/virtual machine/OS environment their code operates in to produce something that works. Perhaps design verification tools don’t exist or work in software as well as they should, but that is more a subject for research than a distinction between the two types of designers.

~~~~

Whether BlueSpec is the final answer or not isn’t as interesting as the fact that it takes a first step toward unifying system design. Being able to decide much later in the process whether a module becomes hardware or software will benefit all system designers and should get products out with less delay. And getting hardware designers and software coders talking more, using the same language to express their designs, can’t help but produce better, more tightly integrated designs, which ends up benefiting the world.

Comments?

Hadoop – part 2

Hadoop Graphic (c) 2011 Silverton Consulting

(Sorry about the length).

In part 1 we discussed some of Hadoop’s core characteristics with respect to the Hadoop distributed file system (HDFS) and the MapReduce analytics engine. Now in part 2 we promised to discuss some of the other projects that have emerged to make Hadoop and specifically MapReduce even easier to use to analyze unstructured data.
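As a quick refresher on the MapReduce model these tools build on, here’s a minimal word count in the MapReduce style (a Python sketch for illustration only; in real Hadoop, the map phase is sharded over HDFS blocks with a shuffle/sort between the phases):

```python
# Minimal word count in the MapReduce style (illustrative sketch only).
from itertools import groupby

def mapper(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce: sum the counts for each word (pairs arrive sorted by key)."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Hadoop shards the map over HDFS blocks and shuffles/sorts between the
# phases; sorted() stands in for the shuffle here.
text = ["the quick brown fox", "the lazy dog"]
for word, count in reducer(sorted(mapper(text))):
    print(word, count)
```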

Specifically, we have a set of tools which use Hadoop to construct database-like structures out of unstructured data. Namely,

  • Cassandra – which maps HDFS data into a database, but into a columnar-based sparse table structure rather than the more traditional relational row form. Cassandra was written by Facebook for its Inbox search. Columnar databases support sparse data much more efficiently. Data access is via a Thrift-based API supporting many languages. Cassandra’s data model is based on columns, column families and column super-families. The datum for any column item is a three-value structure consisting of a name, the item’s value and a timestamp. One nice thing about Cassandra is that one can tune it for whatever consistency model one requires, from no consistency to always consistent and points in between. Cassandra is also optimized for writes, and it can be used as the Map portion of a MapReduce run.
  • HBase – which also maps HDFS data into a database-like structure and provides a Java API to access this DB. HBase is useful for million-row tables with arbitrary column counts. HBase is an outgrowth of Google’s Bigtable, which did much the same thing, only against the Google File System (GFS). In contrast to Hive below, HBase doesn’t run on top of MapReduce; rather it replaces MapReduce, though it can be used as a source or target of MapReduce operations. Also, HBase is somewhat tuned for random-access read operations and, as such, can be used to support some transaction-oriented applications. Moreover, HBase can run on HDFS or Amazon S3 infrastructure.
  • Hive – which maps a “simple SQL” (called QL) on top of a data warehouse built on Hadoop. Some of these queries may take a long time to execute, and as the HDFS data is unstructured, the map function must extract the data using a database-like schema into something approximating a relational database. Hive operates on top of Hadoop’s MapReduce function.
  • Hypertable – an open source project which is a C++ implementation of Google’s Bigtable, only using HDFS rather than GFS. Actually, Hypertable can use any distributed file system. It is another columnar database (like Cassandra above) but only supports columns and column families. Hypertable supports both a client (C++) and a Thrift API. Being written in C++, it is considered the most optimized of the Hadoop-oriented databases (although there is some debate here).
  • Pig – a dataflow processing (scripting) language built on top of Hadoop which supports a sort of database interpreter for HDFS in combination with interpretive analysis. Essentially, Pig scripts are compiled into a dataflow graph which is then used by MapReduce to analyze the data in HDFS. Pig supports both batch and interactive execution and can also be used through a Java API.

Hadoop also supports special-purpose tools for more specialized analysis, such as

  • Mahout – an Apache open source project which applies machine learning algorithms to HDFS data, providing classification, characterization, and other feature extraction. However, Mahout works on non-Hadoop clusters as well. Mahout supports 4 techniques: recommendation mining, clustering, classification, and frequent itemset mining. While Mahout uses the MapReduce framework of Hadoop, it does not appear to use Hadoop MapReduce directly but is rather a replacement for MapReduce focused on machine learning activities.
  • Hama – an Apache open source project which is used to perform parallel matrix and graph computations against Hadoop cluster data. The focus here is on scientific computation. Hama also supports non-Hadoop frameworks including BSP and Dryad (DryadLINQ?). Hama operates on top of MapReduce and can take advantage of HBase data structures.

There are other tools that have sprung up around Hadoop to make it easier to configure, test and use, namely

  • Chukwa – which is used for monitoring large distributed clusters of servers.
  • ZooKeeper – which is a cluster configuration tool and distributed synchronization manager useful for building large clusters of Hadoop nodes.
  • MRunit – which is used to unit test MapReduce programs without having to run them on the whole cluster.
  • Whirr – which extends HDFS to use cloud storage services, unclear how well this would work with PBs of data to be processed but maybe it can colocate the data and the compute activities into the same cloud data center.

As for who uses these tools: Facebook uses Hive and Cassandra, Yahoo uses Pig, Hypertable follows Google’s Bigtable design, and there are myriad users of the other projects as well. In most cases the company identified in the previous list developed the program source code originally and then contributed it to Apache for use in the Hadoop open source project. In addition, those companies continue to fix, support and enhance these packages as well.